repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|
pytest-dev/pytest-mock | pytest | 400 | Call args/kwargs stored in a Spy can be overwritten without intention | Suppose I have the following class definition and test code (in `test.py`):
```python
class AClass:
    def foo(self, x):
        return None

    def bar(self):
        x = {"data": [1]}
        self.foo(x)
        x["data"].append(2)
        self.foo(x)


def test_bar(mocker):
    a = AClass()
    spy = mocker.spy(a, "foo")
    a.bar()
    print(spy.call_args_list)
```
I want to check if `foo` has been called with the correct arguments. However, when I run `pytest -s test.py` I see the following output:
```
...
test.py::test_bar [call({'data': [1, 2]}), call({'data': [1, 2]})]
...
```
where I would've expected:
```
...
test.py::test_bar [call({'data': [1]}), call({'data': [1, 2]})]
...
```
I suspect the spy stores a reference to the calling args and kwargs, which allows for this behavior to happen. Creating a `deepcopy` would solve the issue, but I realize it can be quite costly to do so. Alternatively, having a flag to enable deep-copying if required would be useful. | closed | 2023-12-11T14:59:19Z | 2023-12-13T00:05:58Z | https://github.com/pytest-dev/pytest-mock/issues/400 | [] | sybrenjansen | 1 |
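A minimal sketch of the deep-copy idea described in the report above, offered as an editorial illustration rather than anything pytest-mock ships. It assumes the `mocker` fixture and the `AClass` defined earlier; `deepcopy_spy` is a hypothetical helper name.

```python
import copy
from unittest import mock


def deepcopy_spy(mocker, obj, name):
    """Record a deep copy of each call's args so later mutation can't rewrite history."""
    recorded = []
    original = getattr(obj, name)  # capture the bound method before patching

    def wrapper(*args, **kwargs):
        recorded.append(mock.call(*copy.deepcopy(args), **copy.deepcopy(kwargs)))
        return original(*args, **kwargs)

    mocker.patch.object(obj, name, side_effect=wrapper)
    return recorded


def test_bar(mocker):
    a = AClass()
    calls = deepcopy_spy(mocker, a, "foo")
    a.bar()
    assert calls[0] == mock.call({"data": [1]})  # first call preserved as it was made
```

The trade-off is exactly the one noted in the report: deep-copying every call can be costly, which is why an opt-in flag is suggested there.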
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,245 | can i run this in google colab without any problem | I tried running it in Colab and got this:
Running a test of your configuration...
Found 1 GPUs available. Using GPU 0 (Tesla T4) of compute capability 7.5 with 15.8Gb total memory.
Preparing the encoder, the synthesizer and the vocoder...
Loaded encoder "encoder.pt" trained to step 1564501
Synthesizer using device: cuda
Building Wave-RNN
Trainable Parameters: 4.481M
Loading model weights at saved_models/default/vocoder.pt
Testing your configuration with small inputs.
Testing the encoder...
Traceback (most recent call last):
File "/content/Real-Time-Voice-Cloning/demo_cli.py", line 80, in <module>
encoder.embed_utterance(np.zeros(encoder.sampling_rate))
File "/content/Real-Time-Voice-Cloning/encoder/inference.py", line 144, in embed_utterance
frames = audio.wav_to_mel_spectrogram(wav)
File "/content/Real-Time-Voice-Cloning/encoder/audio.py", line 58, in wav_to_mel_spectrogram
frames = librosa.feature.melspectrogram(
TypeError: melspectrogram() takes 0 positional arguments but 2 positional arguments (and 2 keyword-only arguments) were given
| open | 2023-08-27T03:38:26Z | 2023-09-30T17:18:15Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1245 | [] | Gonharaka | 1 |
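The `melspectrogram() takes 0 positional arguments` traceback above is characteristic of newer librosa releases, where the function only accepts keyword arguments. Below is a hedged sketch of the kind of change that addresses it in `encoder/audio.py`; the parameter values and surrounding function shape are assumptions, not verified against the repository.

```python
import librosa
import numpy as np


def wav_to_mel_spectrogram(wav, sampling_rate=16000, mel_window_length=25,
                           mel_window_step=10, mel_n_channels=40):
    # Pass y= and sr= by keyword; newer librosa rejects them as positionals.
    frames = librosa.feature.melspectrogram(
        y=wav,
        sr=sampling_rate,
        n_fft=int(sampling_rate * mel_window_length / 1000),
        hop_length=int(sampling_rate * mel_window_step / 1000),
        n_mels=mel_n_channels,
    )
    return frames.astype(np.float32).T
```

Pinning librosa to the version the repository was written against is the other common way around this.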
Miserlou/Zappa | flask | 1,911 | Failed to generate or install certificate! :( | I'm facing the issue below when trying to run `zappa certify`:
```
(myenv) root@ip-10-0-0-20:/home/ubuntu/zappa-s3-signature# zappa certify prod
Calling certify for stage prod..
Are you sure you want to certify? [y/n] y
Certifying domain cnd.doxbot.io..
Setting DNS challenge..
Waiting for DNS to propagate..
Domain challenge did not pass: {u'status': u'invalid', u'token': u'86P82OiXd8YvlSl5UK-7-NRLhIWYtHx_wyUfiLoSehs', u'type': u'dns-01', u'uri': u'https://acme-v01.api.letsencrypt.org/acme/challenge/Vv84wVPPZN6QaW4-oDO-koxR_RXNtoYRtIWdIc3THaE/18970297341', u'error': {u'status': 403, u'type': u'urn:acme:error:unauthorized', u'detail': u'No TXT record found at _acme-challenge.cnd.doxbot.io'}}
Failed to generate or install certificate! :(
==============
```
Route 53 entries

Note: The TXT record was created by zappa, it isn't done manually
| open | 2019-07-31T17:22:49Z | 2019-08-02T05:29:34Z | https://github.com/Miserlou/Zappa/issues/1911 | [] | parikhudit | 1 |
viewflow/viewflow | django | 223 | Django 2.1 support | When will viewflow supporto Django 2.1? I'm mostly concerned about the view permission | closed | 2018-07-25T08:15:07Z | 2018-08-21T08:56:43Z | https://github.com/viewflow/viewflow/issues/223 | [
"request/question",
"dev/flow"
] | lorenzomorandini | 1 |
yt-dlp/yt-dlp | python | 12,040 | Add an argument to ignore --cookies-from-browser errors | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
I would like to use YT-DLP with --cookies-from-browser argument.
However, in case it fails to extract cookies, I would like it to just proceed with no cookies.
I suggest to add an additional command line argument for this. Something like --cookies-from-browser-ignore-errors.
Thanks! :)
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```
[debug] Command-line config: ['-vU', '--cookies-from-browser', 'chrome', 'https://www.youtube.com/watch?v=p6ebfMzTgyY']
[debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.23 from yt-dlp/yt-dlp [65cf46cdd] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: 0b6b7742c
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 4.3.1
[debug] Optional libraries: sqlite3-3.40.1
[debug] Proxy map: {}
Extracting cookies from chrome
[debug] Extracting cookies from: "C:\Users\user\AppData\Local\Google\Chrome\User Data\Default\Network\Cookies"
[debug] Found local state file at "C:\Users\user\AppData\Local\Google\Chrome\User Data\Local State"
[Cookies] Loading cookie 0/ 30ERROR: Failed to decrypt with DPAPI. See https://github.com/yt-dlp/yt-dlp/issues/10927 for more info
File "D:\Work\Source\yt-dlp\yt_dlp\__main__.py", line 17, in <module>
yt_dlp.main()
File "D:\Work\Source\yt-dlp\yt_dlp\__init__.py", line 1093, in main
_exit(*variadic(_real_main(argv)))
File "D:\Work\Source\yt-dlp\yt_dlp\__init__.py", line 991, in _real_main
with YoutubeDL(ydl_opts) as ydl:
File "D:\Work\Source\yt-dlp\yt_dlp\YoutubeDL.py", line 720, in __init__
self.print_debug_header()
File "D:\Work\Source\yt-dlp\yt_dlp\YoutubeDL.py", line 4078, in print_debug_header
write_debug(f'Request Handlers: {", ".join(rh.RH_NAME for rh in self._request_director.handlers.values())}')
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\functools.py", line 981, in __get__
val = self.func(instance)
File "D:\Work\Source\yt-dlp\yt_dlp\YoutubeDL.py", line 4252, in _request_director
return self.build_request_director(_REQUEST_HANDLERS.values(), _RH_PREFERENCES)
File "D:\Work\Source\yt-dlp\yt_dlp\YoutubeDL.py", line 4227, in build_request_director
cookiejar=self.cookiejar,
File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\functools.py", line 981, in __get__
val = self.func(instance)
File "D:\Work\Source\yt-dlp\yt_dlp\YoutubeDL.py", line 4118, in cookiejar
return load_cookies(
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 99, in load_cookies
extract_cookies_from_browser(browser_name, profile, YDLLogger(ydl), keyring=keyring, container=container))
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 122, in extract_cookies_from_browser
return _extract_chrome_cookies(browser_name, profile, keyring, logger)
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 331, in _extract_chrome_cookies
is_encrypted, cookie = _process_chrome_cookie(decryptor, *line)
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 366, in _process_chrome_cookie
value = decryptor.decrypt(encrypted_value)
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 551, in decrypt
return _decrypt_windows_dpapi(encrypted_value, self._logger).decode()
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 1087, in _decrypt_windows_dpapi
logger.error(message)
File "D:\Work\Source\yt-dlp\yt_dlp\utils\_utils.py", line 5650, in error
self._ydl.report_error(message, is_error=is_error)
File "D:\Work\Source\yt-dlp\yt_dlp\YoutubeDL.py", line 1092, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
File "D:\Work\Source\yt-dlp\yt_dlp\YoutubeDL.py", line 1020, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
ERROR: Failed to decrypt with DPAPI. See https://github.com/yt-dlp/yt-dlp/issues/10927 for more info
Traceback (most recent call last):
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 99, in load_cookies
extract_cookies_from_browser(browser_name, profile, YDLLogger(ydl), keyring=keyring, container=container))
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 122, in extract_cookies_from_browser
return _extract_chrome_cookies(browser_name, profile, keyring, logger)
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 331, in _extract_chrome_cookies
is_encrypted, cookie = _process_chrome_cookie(decryptor, *line)
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 366, in _process_chrome_cookie
value = decryptor.decrypt(encrypted_value)
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 551, in decrypt
return _decrypt_windows_dpapi(encrypted_value, self._logger).decode()
File "D:\Work\Source\yt-dlp\yt_dlp\cookies.py", line 1088, in _decrypt_windows_dpapi
raise DownloadError(message) # force exit
yt_dlp.utils.DownloadError: Failed to decrypt with DPAPI. See https://github.com/yt-dlp/yt-dlp/issues/10927 for more info
``` | closed | 2025-01-09T16:25:46Z | 2025-01-12T09:55:26Z | https://github.com/yt-dlp/yt-dlp/issues/12040 | [
"enhancement",
"wontfix",
"core:cookies"
] | meowcateatrat | 7 |
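A rough sketch of how the requested fallback could be approximated today from the Python API. This is an editorial illustration, not an existing yt-dlp option; the option key and the broad exception handling are assumptions to check against the yt-dlp documentation.

```python
import yt_dlp

url = "https://www.youtube.com/watch?v=p6ebfMzTgyY"

try:
    # First attempt: extract cookies from the browser.
    with yt_dlp.YoutubeDL({"cookiesfrombrowser": ("chrome",)}) as ydl:
        ydl.download([url])
except yt_dlp.utils.DownloadError:
    # Cookie extraction (or the download) failed; retry with no cookies at all.
    with yt_dlp.YoutubeDL({}) as ydl:
        ydl.download([url])
```

A built-in `--cookies-from-browser-ignore-errors` flag would of course be cleaner, since this retries the whole download rather than just skipping the cookie step.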
microsoft/unilm | nlp | 1,452 | [WavLM] Finetuning for speaker diarization | I intend to use WavLM and fine-tune it for speaker diarization. My aim is to reproduce the DERs reported in the README for WavLM. I tried to find resources on how to even begin this and found some material on speaker recognition and verification, but that isn't speaker diarization, so I was wondering if anyone had any pointers (at the very least a starting place).
| open | 2024-02-03T00:18:23Z | 2024-12-17T19:45:02Z | https://github.com/microsoft/unilm/issues/1452 | [] | aynig | 1 |
sigmavirus24/github3.py | rest-api | 210 | Pages API | https://developer.github.com/changes/2014-02-13-exposing-the-page-api/
| closed | 2014-02-18T03:50:58Z | 2014-05-27T11:29:41Z | https://github.com/sigmavirus24/github3.py/issues/210 | [] | sigmavirus24 | 2 |
paperless-ngx/paperless-ngx | django | 7,554 | [BUG] Merging PDF + PNG omits PNG | ### Description
When attempting to merge a PDF-based document with another document that resulted from a PNG upload the merge process finishes without reporting any obvious error, but the resulting "merged" PDF is missing the page from the PNG-based document. No error is visible from the web frontend.
Working around the issue by downloading and re-uploading the PDF generated for the PNG (and using that for the merge) unfortunately does not work as it is rejected as a duplicate.
### Steps to reproduce
1. Upload PDF
2. Upload PNG
3. Select both documents for merging (generate new metadata, do not delete original files).
### Webserver logs
```bash
[2024-08-27 10:44:43,663] [INFO] [paperless.bulk_edit] Attempting to merge 2 documents into a single document.
[2024-08-27 10:44:43,820] [ERROR] [paperless.bulk_edit] Error merging document 592, it will not be included in the merge: /usr/src/paperless/media/documents/originals/0000592.png: unable to find trailer dictionary while recovering damaged file
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/bulk_edit.py", line 262, in merge
with pikepdf.open(str(doc.source_path)) as pdf:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pikepdf/_methods.py", line 398, in open
pdf = Pdf._open(
^^^^^^^^^^
pikepdf._core.PdfError: /usr/src/paperless/media/documents/originals/0000592.png: unable to find trailer dictionary while recovering damaged file
[2024-08-27 10:44:43,822] [INFO] [paperless.bulk_edit] Adding merged document to the task queue.
[2024-08-27 10:44:43,857] [INFO] [celery.worker.strategy] Task documents.tasks.consume_file[fa0ac878-d0ad-472f-9f8b-b1a6f934d58a] received
[2024-08-27 10:44:43,905] [INFO] [paperless.tasks] WorkflowTriggerPlugin completed with:
[2024-08-27 10:44:43,913] [INFO] [paperless.consumer] Consuming 618_592_merged.pdf
[2024-08-27 10:44:43,931] [INFO] [paperless.parsing.tesseract] pdftotext exited 0
[2024-08-27 10:44:45,693] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 5.52 - no change
[2024-08-27 10:44:55,983] [INFO] [ocrmypdf._pipelines.ocr] Postprocessing...
[2024-08-27 10:44:56,807] [INFO] [ocrmypdf._pipeline] Image optimization ratio: 1.22 savings: 18.3%
[2024-08-27 10:44:56,808] [INFO] [ocrmypdf._pipeline] Total file size ratio: 1.24 savings: 19.6%
[2024-08-27 10:44:56,810] [INFO] [ocrmypdf._pipelines._common] Output file is a PDF/A-2B (as expected)
[2024-08-27 10:44:58,394] [INFO] [paperless.parsing] convert exited 0
[2024-08-27 10:45:01,713] [INFO] [paperless.handlers] Assigning correspondent Thilo-Alexander Ginkel to 2024-08-26 618_592_merged
[2024-08-27 10:45:01,722] [INFO] [paperless.handlers] Assigning document type Abrechnung to 2024-08-26 Thilo-Alexander Ginkel 618_592_merged
[2024-08-27 10:45:01,732] [INFO] [paperless.handlers] Tagging "2024-08-26 Thilo-Alexander Ginkel 618_592_merged" with "Veranstaltung, Abrechnung"
[2024-08-27 10:45:01,799] [INFO] [paperless.consumer] Document 2024-08-26 Thilo-Alexander Ginkel 618_592_merged consumption finished
[2024-08-27 10:45:01,803] [INFO] [paperless.tasks] ConsumeTaskPlugin completed with: Success. New document id 619 created
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.6
### Host OS
Ubuntu 22.04
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.11.6",
"server_os": "Linux-5.15.0-117-generic-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 473533612032,
"available": 359678787584
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "documents.1052_document_transaction_id",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-08-27T10:47:27.021629+02:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2024-08-27T08:05:00.617726Z",
"classifier_error": null
}
}
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-08-27T08:57:28Z | 2024-09-27T03:08:17Z | https://github.com/paperless-ngx/paperless-ngx/issues/7554 | [
"not a bug"
] | ginkel | 3 |
huggingface/datasets | tensorflow | 7,164 | fsspec.exceptions.FSTimeoutError when downloading dataset | ### Describe the bug
I am trying to download the `librispeech_asr` `clean` dataset, which results in a `FSTimeoutError` exception after downloading around 61% of the data.
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("librispeech_asr", "clean")
```
The output is as follows:
> Downloading data: 61%|██████████████▋ | 3.92G/6.39G [05:00<03:06, 13.2MB/s]Traceback (most recent call last):
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 56, in _runner
> result[0] = await coro
> ^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/implementations/http.py", line 262, in _get_file
> chunk = await r.content.read(chunk_size)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 393, in read
> await self._wait("read")
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 311, in _wait
> with self._timer:
> ^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/helpers.py", line 713, in __exit__
> raise asyncio.TimeoutError from None
> TimeoutError
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/load_dataset.py", line 3, in <module>
> datasets.load_dataset("librispeech_asr", "clean")
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/load.py", line 2096, in load_dataset
> builder_instance.download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 924, in download_and_prepare
> self._download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1647, in _download_and_prepare
> super()._download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 977, in _download_and_prepare
> split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/2712a8f82f0d20807a56faadcd08734f9bdd24c850bb118ba21ff33ebff0432f/librispeech_asr.py", line 115, in _split_generators
> archive_path = dl_manager.download(_DL_URLS[self.config.name])
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 159, in download
> downloaded_path_or_paths = map_nested(
> ^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 512, in map_nested
> _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 380, in _single_map_nested
> return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
> ^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 216, in _download_batched
> self._download_single(url_or_filename, download_config=download_config)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 225, in _download_single
> out = cached_path(url_or_filename, download_config=download_config)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 205, in cached_path
> output_path = get_from_cache(
> ^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 415, in get_from_cache
> fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 334, in fsspec_get
> fs.get_file(path, temp_file.name, callback=callback)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 118, in wrapper
> return sync(self.loop, func, *args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 101, in sync
> raise FSTimeoutError from return_result
> fsspec.exceptions.FSTimeoutError
> Downloading data: 61%|██████████████▋ | 3.92G/6.39G [05:00<03:09, 13.0MB/s]
### Expected behavior
Complete the download
### Environment info
Python version 3.12.6
Dependencies:
> dependencies = [
> "accelerate>=0.34.2",
> "datasets[audio]>=3.0.0",
> "ipython>=8.18.1",
> "librosa>=0.10.2.post1",
> "torch>=2.4.1",
> "torchaudio>=2.4.1",
> "transformers>=4.44.2",
> ]
MacOS 14.6.1 (23G93) | open | 2024-09-24T08:45:05Z | 2025-01-14T09:48:23Z | https://github.com/huggingface/datasets/issues/7164 | [] | timonmerk | 6 |
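A hedged workaround sketch from the editor, not part of the report: since the traceback shows the file going through fsspec's aiohttp-based HTTP filesystem, passing a longer client timeout via `storage_options` may avoid the five-minute cut-off. Whether `datasets` forwards these options for this particular download path is an assumption worth verifying.

```python
import aiohttp
import datasets

ds = datasets.load_dataset(
    "librispeech_asr",
    "clean",
    storage_options={
        # Forwarded (in principle) to fsspec's HTTPFileSystem / aiohttp client.
        "client_kwargs": {"timeout": aiohttp.ClientTimeout(total=3600)},
    },
)
```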
deepinsight/insightface | pytorch | 2,076 | problem about the preprocess of face image before feed it into face recognition onnx model | I downloaded the ResNet-18 pretrained model (R18 | Glint360K | 72.07) for face encoding / face embedding, and it is in ONNX format. I do not know how to preprocess the aligned face image before feeding it into this ONNX model. I use another face detection model and dlib's alignment; for the face embedding I'd like to use the insightface pretrained model. I'd appreciate it if anyone can help me. | open | 2022-08-16T08:51:40Z | 2022-08-18T02:31:26Z | https://github.com/deepinsight/insightface/issues/2076 | [] | smilealvin92 | 3 |
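As a hedged editorial sketch rather than a confirmed answer for this exact checkpoint: the insightface recognition models generally expect an aligned 112x112 crop in RGB order, scaled to roughly [-1, 1], laid out as NCHW float32. The file names, the input-name lookup, and the final L2-normalisation step are assumptions.

```python
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("r18_glint360k.onnx")   # hypothetical file name
input_name = sess.get_inputs()[0].name

img = cv2.imread("aligned_face.jpg")                # face already aligned by your detector
img = cv2.resize(img, (112, 112))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)          # OpenCV loads BGR; the model expects RGB
blob = (img.astype(np.float32) - 127.5) / 127.5     # scale to roughly [-1, 1]
blob = np.transpose(blob, (2, 0, 1))[None]          # HWC -> NCHW with a batch dimension

embedding = sess.run(None, {input_name: blob})[0]
embedding = embedding / np.linalg.norm(embedding)   # L2-normalise before cosine comparison
```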
wagtail/wagtail | django | 12,230 | ES Autocomplete search queries should properly use boosted fields | ### Is your proposal related to a problem?
The fixes to Elasticsearch boosting from #10653 never properly made it into autocomplete queries. This means that if you're using the autocomplete method to search pages, it will often give a higher rank to pages with titles that don't match the search results. I've been able to make these changes in the site that I discovered this issue in, but the existing code is written in such a way that it's a challenge to make small changes to search backends without having to duplicate a lot of base Wagtail code, which I'd really like to not do.
### Describe the solution you'd like
Autocomplete queries should also have boosted fields that are copied to using the same method as the base query compiler and then queried with a boost at runtime.
### Describe alternatives you've considered
If the Wagtail team doesn't want to fully support this, it would still be appreciated to be able to break out some of the existing methods in ways that make it easier to extend.
### Additional context
I know that the Wagtail team is generally critical about using partial matching/autocompletion for normal searches, but this came up because I've developed a relatively large Wagtail instance that uses autocomplete queries for their base site search. The client sells a large amount of products, and these products are often referred to by shortened versions of the page title. For example, 299 is a very common search term on our site, and users (and our client) expect to be able to find all of the products sold on the site in that product line, all of which will have titles like 299E4STD3G. In our cases, it makes sense then to use edgengrams for our main search, as that's one of the primary ways that users are browsing the site. I wouldn't be surprised if other Wagtail instances have similar requirements, so I think this is a reasonable usecase to support.
### Working on this
I've already developed a solution for my site, so I know what places in the code need to be changed. I would likely need to do some additional work (tests, etc) to get it ready for the main repo, but I would be happy to work on it.
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| open | 2024-08-14T18:16:11Z | 2024-10-24T20:18:17Z | https://github.com/wagtail/wagtail/issues/12230 | [
"type:Enhancement"
] | ethanaward | 0 |
JaidedAI/EasyOCR | deep-learning | 508 | Heatmaps | Hello, Is it possible to get heatmaps from EasyOCR?
Thank you in advance. | closed | 2021-08-08T04:46:10Z | 2022-03-02T09:25:04Z | https://github.com/JaidedAI/EasyOCR/issues/508 | [] | alikaz3mi | 5 |
biolab/orange3 | pandas | 6,503 | Number of features remains disabled when Suggest features is closed during search | 
How to reproduce:
1. open _Suggest features_ and run
2. close while running
3. choose another mode (circular, LDA or PCA)
4. open _Suggest features_: the _Number of variables_ field is disabled despite no search running
OS: Windows 10 x64
Orange: 3.35.0 | closed | 2023-07-10T10:15:02Z | 2023-09-01T13:40:06Z | https://github.com/biolab/orange3/issues/6503 | [
"bug",
"snack"
] | processo | 0 |
pytorch/pytorch | python | 148,902 | Remove Direct Arm Compute Libray (ACL) Integration for Quantized Matmuls: `qlinear`/`qlinear_dynamic` | PR https://github.com/pytorch/pytorch/pull/148585 (temporarily) introduced a direct ACL implementation for `qlinear` and `qlinear_dynamic` for AArch64 when `USE_MKLDNN_ACL` is set.
This direct ACL implementation is a lot faster than the existing implementations that utilized ACL through oneDNN (MKLDNN) due to the (current) API friction between the stateful ACL API and the stateless oneDNN API (see benchmarks and numbers on https://github.com/pytorch/pytorch/pull/148585).
I'm creating this issue to make sure that we end up removing this direct ACL path for `qlinear` and `qlinear_dynamic` once we're done enabling a fast implementation for quantized matmuls through oneDNN+ACL.
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @malfet @snadampal @milpuz01 | open | 2025-03-10T18:46:14Z | 2025-03-10T21:08:31Z | https://github.com/pytorch/pytorch/issues/148902 | [
"oncall: quantization",
"module: arm"
] | fadara01 | 1 |
ckan/ckan | api | 8,347 | db clean and search-index rebuild should remove orphans from index | ## CKAN version
all
## Describe the bug
Old copies of metadata in the solr search index can cause performance and search result issues
### Steps to reproduce
- create a dataset on a clean db
- run `ckan db clean`
- create a dataset with the same name
- search now returns both datasets, and package_show with `id=name` is very slow
### Expected behavior
Only the latest dataset is visible in search and retrieving datasets will act normally
### Additional details
`ckan search-index clear`, `ckan search-index rebuild -c` or a later call to `ckan search-index clear-orphans` will fix the issue, but it would be nicer for `db clean` to clear the search index, and `search-index rebuild` should `clear-orphans` by default.
reference: #7044
| open | 2024-07-17T21:24:48Z | 2024-10-29T16:20:32Z | https://github.com/ckan/ckan/issues/8347 | [
"Good for Contribution",
"Beginner Friendly"
] | wardi | 1 |
benbusby/whoogle-search | flask | 661 | [BUG] Settings keep resetting | **Describe the bug**
Seemingly by random, my settings keep getting reset
**To Reproduce**
Steps to reproduce the behavior:
1. Search a couple times
2. Settings get reset
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [ ] Docker
- [ ] `run` executable
- [x] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [ ] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [x] Version [v0.7.1]
- [ ] Not sure
**Instance:**
https://gowogle.voring.me
**Desktop (please complete the following information):**
- OS: Linux
- Browser: Mozilla Firefox 97.0
| closed | 2022-02-17T23:56:38Z | 2022-08-01T16:32:44Z | https://github.com/benbusby/whoogle-search/issues/661 | [
"bug"
] | ThatOneCalculator | 6 |
tensorflow/tensor2tensor | machine-learning | 1,885 | train meachine translation OOM | ### Description

Can you tell me why an OOM error still occurs even when I set batch_size to 4?
I suspect the OOM may be caused by model saving and evaluation, but I don't know the specific cause.
### Environment information
python /root/anaconda3/lib/python3.6/site-packages/tensor2tensor/bin/t2t_trainer.py --data_dir=./data_dir \
--problem=translate_enzh_bpe50k \
--model=transformer \
--hparams="batch_size=4" \
--hparams_set=transformer_base_single_gpu \
--output_dir=./en_zh_model \
--schedule=continuous_train_and_eval \
--train_steps=900000 \
--t2t_usr_dir=user_dir
process the english data with bpe.
python 3.7
tensor2tensor == 1.9.0
tensorflow-gpu == 1.12.0

```
OS: <your answer here>
$ pip freeze | grep tensor
# your output here
$ python -V
# your output here
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
...
```
```
# Error logs:
...
```
| closed | 2021-04-22T02:32:26Z | 2021-04-23T09:52:56Z | https://github.com/tensorflow/tensor2tensor/issues/1885 | [] | charlesfufu | 0 |
rougier/numpy-100 | numpy | 135 | Alternative solution for 21 | You can also use:
```python
np.tile(np.identity(2),(4,4))
``` | closed | 2020-12-28T14:04:00Z | 2021-08-30T09:10:14Z | https://github.com/rougier/numpy-100/issues/135 | [] | yunisdev | 2 |
jina-ai/serve | deep-learning | 5,231 | Bug: For external Executors, passing entire address to `host` does not work | The following syntax, which is supported for the gateway, does not work for external executors:
```python
f.add(host='grpc://localhost:1234', external=True)
```
Instead, host and port have to be precised separately:
```python
f.add(host='localhost', port=1234, external=True)
```
Note that this bug also applies to replicated external Executors:
```python
f.add(host='grpc://localhost:1234,grpc://localhost:1235', external=True)
```
| closed | 2022-09-30T08:29:10Z | 2022-10-06T10:49:27Z | https://github.com/jina-ai/serve/issues/5231 | [] | JohannesMessner | 0 |
mjhea0/flaskr-tdd | flask | 82 | test_messages, maybe a mistake, maybe my error | Current block is:
```
def test_messages(client):
"""Ensure that user can post messages"""
login(client, app.config["USERNAME"], app.config["PASSWORD"])
rv = client.post(
"/add",
data=dict(title="<Hello>", text="<strong>HTML</strong> allowed here"),
follow_redirects=True,
)
assert b"No entries here so far" not in rv.data
assert b"<Hello>" in rv.data
assert b"<strong>HTML</strong> allowed here" in rv.data
```
But this never passes the final test.
I replaced the first of the 3 asserts with
`assert b"New entry was successfully posted" in rv.data`
Which then passes | open | 2023-11-10T16:43:14Z | 2023-11-10T16:43:14Z | https://github.com/mjhea0/flaskr-tdd/issues/82 | [] | barendburger | 0 |
nltk/nltk | nlp | 2,735 | Importing words throws numpy deprecation warning | Warning:
DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
max_n_alphas=1000, n_jobs=None, eps=np.finfo(np.float).eps | closed | 2021-06-22T21:04:34Z | 2021-07-28T19:38:38Z | https://github.com/nltk/nltk/issues/2735 | [] | ihawn | 2 |
microsoft/nni | deep-learning | 5,187 | Error reported when NNI was used to prune Albert-tiny | **Describe the issue**:
I'm just pruning the linear layer. Everything seemed fine, but the following problem occurred when I was about to use the ModelSpeedup module to speed up the model.
```
device = torch.device("cpu")
inputs =(torch.LongTensor([tokenizer_out['input_ids']]).to(device),torch.LongTensor([tokenizer_out['attention_mask']]).to(device))
ModelSpeedup(S_model, inputs, masks).speedup_model()
```
```
config_list = [{
'sparsity_per_layer': 0.5,
'op_types': ['Linear']
}, {
'exclude': True,
'op_names': ['dense2']
}]
```
```
bert_model.transformer.layer.0.attention.q_lin sparsity : 0.5
bert_model.transformer.layer.0.attention.k_lin sparsity : 0.5
bert_model.transformer.layer.0.attention.v_lin sparsity : 0.5
bert_model.transformer.layer.0.attention.out_lin sparsity : 0.5
bert_model.transformer.layer.0.ffn.lin1 sparsity : 0.5
bert_model.transformer.layer.0.ffn.lin2 sparsity : 0.5
bert_model.transformer.layer.1.attention.q_lin sparsity : 0.5
bert_model.transformer.layer.1.attention.k_lin sparsity : 0.5
bert_model.transformer.layer.1.attention.v_lin sparsity : 0.5
bert_model.transformer.layer.1.attention.out_lin sparsity : 0.5
bert_model.transformer.layer.1.ffn.lin1 sparsity : 0.5
bert_model.transformer.layer.1.ffn.lin2 sparsity : 0.5
bert_model.transformer.layer.2.attention.q_lin sparsity : 0.5
bert_model.transformer.layer.2.attention.k_lin sparsity : 0.5
bert_model.transformer.layer.2.attention.v_lin sparsity : 0.5
bert_model.transformer.layer.2.attention.out_lin sparsity : 0.5
bert_model.transformer.layer.2.ffn.lin1 sparsity : 0.5
bert_model.transformer.layer.2.ffn.lin2 sparsity : 0.5
bert_model.transformer.layer.3.attention.q_lin sparsity : 0.5
bert_model.transformer.layer.3.attention.k_lin sparsity : 0.5
bert_model.transformer.layer.3.attention.v_lin sparsity : 0.5
bert_model.transformer.layer.3.attention.out_lin sparsity : 0.5
bert_model.transformer.layer.3.ffn.lin1 sparsity : 0.5
bert_model.transformer.layer.3.ffn.lin2 sparsity : 0.5
cpu
[2022-10-26 08:50:04] start to speedup the model
no multi-dimension masks found.
[2022-10-26 08:50:05] infer module masks...
[2022-10-26 08:50:05] Update mask for bert_model.embeddings.word_embeddings
[2022-10-26 08:50:06] Update mask for bert_model.embeddings.aten::size.46
[2022-10-26 08:50:06] Update mask for bert_model.embeddings.aten::slice.47
[2022-10-26 08:50:06] Slice dim:0, Slice obj:slice(0, 9223372036854775807, 1)
[2022-10-26 08:50:06] Get attribute: bert_model
[2022-10-26 08:50:06] Get attribute: embeddings
[2022-10-26 08:50:06] Get attribute: position_ids
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.0.attention.aten::eq.61
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.1.attention.aten::eq.84
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.2.attention.aten::eq.107
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.3.attention.aten::eq.130
[2022-10-26 08:50:06] Update mask for bert_model.embeddings.aten::slice.48
[2022-10-26 08:50:06] Slice dim:1, Slice obj:slice(0, tensor([300]), 1)
[2022-10-26 08:50:06] Model has Slice operation, and the operand size=torch.Size([1, 512]), Slice object:(slice(None, None, None), slice(0, tensor([300]), 1))
[2022-10-26 08:50:06] Model has Slice operation, and the operand size=torch.Size([1, 512]), Slice object:(slice(None, None, None), slice(0, tensor([300]), 1))
[2022-10-26 08:50:06] Update mask for bert_model.embeddings.position_embeddings
[2022-10-26 08:50:06] Update mask for bert_model.embeddings.aten::add.49
[2022-10-26 08:50:06] Update mask for bert_model.embeddings.LayerNorm
[2022-10-26 08:50:06] Update mask for bert_model.embeddings.dropout
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.0.attention.q_lin
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.0.attention.k_lin
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.0.attention.v_lin
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.0.attention.aten::size.50
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.0.attention.aten::size.51
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.0.attention.aten::view.52
[2022-10-26 08:50:06] WARNING: throw some args away when calling the function "view"
[2022-10-26 08:50:06] WARNING: throw some args away when calling the function "view"
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.0.attention.aten::view.54
[2022-10-26 08:50:06] WARNING: throw some args away when calling the function "view"
[2022-10-26 08:50:06] WARNING: throw some args away when calling the function "view"
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.0.attention.aten::view.56
[2022-10-26 08:50:06] WARNING: throw some args away when calling the function "view"
[2022-10-26 08:50:06] WARNING: throw some args away when calling the function "view"
[2022-10-26 08:50:06] Update mask for bert_model.transformer.layer.0.attention.aten::view.62
[2022-10-26 08:50:06] WARNING: throw some args away when calling the function "view"
Traceback (most recent call last):
File "/workspace/bert2distill_albert/prune_s2_main.py", line 67, in <module>
prune_student_model(configs, dataManager, logger)
File "/workspace/bert2distill_albert/engines/prune_distill_model.py", line 112, in prune_student_model
ModelSpeedup(S_model, inputs, masks).speedup_model()
File "/root/miniconda3/envs/tf_nlp/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 543, in speedup_model
self.infer_modules_masks()
File "/root/miniconda3/envs/tf_nlp/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 380, in infer_modules_masks
self.update_direct_sparsity(curnode)
File "/root/miniconda3/envs/tf_nlp/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 247, in update_direct_sparsity
_auto_infer.update_direct_sparsity()
File "/root/miniconda3/envs/tf_nlp/lib/python3.8/site-packages/nni/compression/pytorch/speedup/infer_mask.py", line 328, in update_direct_sparsity
self.random_init()
File "/root/miniconda3/envs/tf_nlp/lib/python3.8/site-packages/nni/compression/pytorch/speedup/infer_mask.py", line 138, in random_init
randomize_tensor(tensor, start, end)
File "/root/miniconda3/envs/tf_nlp/lib/python3.8/site-packages/nni/compression/pytorch/utils/utils.py", line 72, in randomize_tensor
torch.randint(int(start), int(end), tensor.size(),
RuntimeError: to - 1 is out of bounds for bool
```
**Environment**:
- NNI version:2.9
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:3.8
- PyTorch/TensorFlow version:1.12.1
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | closed | 2022-10-26T09:03:02Z | 2022-12-07T03:05:15Z | https://github.com/microsoft/nni/issues/5187 | [] | Smile-L-up | 11 |
modin-project/modin | pandas | 6,576 | Don't use deprecated `is_int64_dtype` and `is_period_dtype` functions | closed | 2023-09-18T13:46:19Z | 2023-09-18T16:31:49Z | https://github.com/modin-project/modin/issues/6576 | [
"Code Quality 💯",
"P2"
] | anmyachev | 0 |
|
graphql-python/graphene-django | graphql | 890 | Extending connections using relay on same level as edges. | I have been pulling my hair out trying to solve this one and would appreciate some help. The Graphene Relay spec limits me to having my API calls return data in the following format:
```
QUERY:
{
  allSpecies {
    edges {
      node {
        id
        name
      }
    }
  }
}

RECEIVED DATA
{
  "data": {
    "allSpecies": {
      "edges": [
        {
          "node": {
            "id": "U3BlY2llOjE=",
            "name": "Human"
          }
        },
        {
          "node": {
            "id": "U3BlY2llOjI=",
            "name": "Alien"
          }
        }
      ]
    }
  }
}
```
I want to be able to create a same-level property that accesses the API without going through `edges` and `node` first, while still keeping Relay integrated for pagination in case I need the added functionality.
```
NEW QUERY:
{
  allSpecies {
    newProperty {
      id
      name
    }
  }
}
```
In my current setup I am trying to point my newly created property `new_property` at the edge nodes. How can I easily reach the same edges connection from the connection class and receive the same list of data? Is there a better way to do this?
```python
class Custom(graphene.Connection):
    class Meta:
        abstract = True

    new_property = graphene.List(graphene.String)

    def resolve_new_property(self, info):
        return self.edges


class Specie(DjangoObjectType):
    eye_colors = graphene.List(graphene.String)
    hair_colors = graphene.List(graphene.String)
    skin_colors = graphene.List(graphene.String)

    def resolve_eye_colors(self, info):
        return [c.strip() for c in self.eye_colors.split(',')]

    def resolve_hair_colors(self, info):
        return [c.strip() for c in self.hair_colors.split(',')]

    def resolve_skin_colors(self, info):
        return [c.strip() for c in self.skin_colors.split(',')]

    class Meta:
        model = models.Species
        interfaces = (Node, )
        exclude_fields = ('created', 'edited', 'eye_colors', 'hair_colors',
                          'skin_colors')
        filter_fields = {'name': {'startswith', 'contains'}}
        connection_class = Custom


class Query(graphene.ObjectType):
    all_species = DjangoFilterConnectionField(Specie)


schema = graphene.Schema(
    query=Query,
    mutation=Mutation
)
```
After running the query, I get the following error
```
{
  "data": {
    "allSpecies": {
      "newProperty": "[<graphene.relay.connection.SpecieEdge object at 0x04BB2750>, <graphene.relay.connection.SpecieEdge object at 0x04BB23F0>]"
    }
  }
}
``` | open | 2020-03-03T20:27:24Z | 2020-07-02T09:07:05Z | https://github.com/graphql-python/graphene-django/issues/890 | [] | mahelmahmoud | 0 |
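One possible direction, sketched by the editor rather than taken from this thread: the resolver above returns the `Edge` wrapper objects themselves (hence the stringified `SpecieEdge` instances in the response), so unwrapping them to their `node`s and typing the field with the node type should yield the flat list the new query expects. The lazy `lambda: Specie` reference is an assumption about how the types are arranged in the module.

```python
import graphene


class Custom(graphene.Connection):
    class Meta:
        abstract = True

    # Type the field with the node type; the lambda defers resolution until
    # Specie (defined later in the module) actually exists.
    new_property = graphene.List(lambda: Specie)

    def resolve_new_property(self, info):
        # self.edges holds Edge wrappers; return the underlying nodes instead.
        return [edge.node for edge in self.edges]
```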
blacklanternsecurity/bbot | automation | 1,550 | Preset config should take priority over include | We need to write a test that makes sure a preset's `config` section overrides the config from any other preset specified in the `include` section. | closed | 2024-07-09T16:51:24Z | 2025-01-24T21:19:51Z | https://github.com/blacklanternsecurity/bbot/issues/1550 | [
"bug"
] | TheTechromancer | 2 |
liangliangyy/DjangoBlog | django | 559 | 1146, "Table 'djangoblog.django_site' doesn't exist" | Why do I get this prompt when adding an article: 1146, "Table 'djangoblog.django_site' doesn't exist"
As far as I can see, there is no site model in the Models either. | closed | 2022-03-13T08:12:08Z | 2022-10-11T07:05:58Z | https://github.com/liangliangyy/DjangoBlog/issues/559 | [] | 15210859049 | 6 |
fastapi-users/fastapi-users | fastapi | 1,170 | GET users/me returns different ObjectId on each call | also on the `/register` route. See:
https://github.com/fastapi-users/fastapi-users/discussions/1142 | closed | 2023-03-10T13:54:50Z | 2024-07-14T13:24:43Z | https://github.com/fastapi-users/fastapi-users/issues/1170 | [
"bug"
] | gegnew | 1 |
huggingface/datasets | pandas | 7,084 | More easily support streaming local files | ### Feature request
Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files.
### Motivation
I have downloaded FineWeb-edu locally and currently trying to stream the dataset from the local files. I have both the raw parquet files using `hugginface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu` and the processed arrow files using `load_dataset("HuggingFaceFW/fineweb-edu")`.
Streaming the files locally does not work well for both file types for two different reasons.
**Arrow files**
When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})` resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738) , all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue.
**Parquet files**
When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})` the paths do not get resolved because the parquet files are symlinked from the blobs (which contain all files in case there are different versions). This occurs because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks if the path is a file and does not check for symlinks. Symlinks (at least on my machine) are of type "other".
### Your contribution
I have created a PR for fixing arrow file streaming and symlinks. However, I have not checked locally if the tests work or new tests need to be added.
IMO, the easiest option would be to add a `streaming=download_first` option, but I'm afraid that exceeds my current knowledge of how the datasets library works. https://github.com/huggingface/datasets/pull/7083 | open | 2024-07-31T09:03:15Z | 2024-07-31T09:05:58Z | https://github.com/huggingface/datasets/issues/7084 | [
"enhancement"
] | fschlatt | 0 |
databricks/koalas | pandas | 2,235 | No module named 'databricks' after installing koalas | I have installed koalas using conda install.
However, when I try the following import, I get a ModuleNotFoundError.
Can you help me solve this issue?
>>> import databricks.koalas as ks
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'databricks' | open | 2024-07-05T05:30:16Z | 2024-07-05T08:50:07Z | https://github.com/databricks/koalas/issues/2235 | [] | loungehub | 1 |
deeppavlov/DeepPavlov | nlp | 1,388 | Look into GA & Medium analytics to see what's most popular right now (e.g., blog posts, etc.) | Moved to internal Trello | closed | 2021-01-27T10:04:12Z | 2021-11-30T10:19:41Z | https://github.com/deeppavlov/DeepPavlov/issues/1388 | [] | danielkornev | 0 |
google-research/bert | nlp | 546 | Fine-tuning bert results in a strange output, what happened? | I am applying BERT to my own model and using the functions in modeling.py. After some training, I found that the output of the BERT model (model.get_pooled_output()) contains only 1 and -1, and different input sentences produce the same output. When I used tf.stop_gradient, everything was correct. What happened in this BERT fine-tuning? | open | 2019-04-04T02:41:43Z | 2019-04-04T02:41:43Z | https://github.com/google-research/bert/issues/546 | [] | RefluxNing | 0 |
ClimbsRocks/auto_ml | scikit-learn | 106 | df support | Here are the things we need to make sure df support includes:
- [x] splitting out the output column
- [x] convert a list of dictionaries to a DataFrame (ideally with error logging to the user)
- [x] convert numbers stored as strings to proper numbers
- [x] convert numbers stored as strings with commas in them to proper numbers.
- [x] dropping all rows where the output column is an ineligible value (nan, None, etc.). this particularly comes into play for our subpredictors.
- [x] ignoring (dropping) any columns that should be ignored
- [x] date feature engineering
- [x] robust feature scaling
- [x] optional (gscv) feature truncating
- [x] removing all nan, None, and other values
- [ ] FUTURE: imputing missing values
- [x] getting dummies from categorical columns
- [x] convert to a scipy sparse matrix
- [x] support NLP feature transformations. For now we'll just do it as-is, but in the future, we'll probably split this out into it's own thing that we'l do separately, and just hstack the sparse results from tfidfvectorizer onto the sparse results from vectorizing our dataframe.
| closed | 2016-10-08T17:49:49Z | 2016-10-11T03:00:18Z | https://github.com/ClimbsRocks/auto_ml/issues/106 | [] | ClimbsRocks | 2 |
tiangolo/uvicorn-gunicorn-fastapi-docker | pydantic | 92 | Virtual Env is not respected | Hey,
I have a dockerfile like this:
...
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8 AS service
COPY --from=test_runner /venv /venv
ENV PATH=/venv/bin:$PATH
When I run the container, it cannot find my installed packages. | closed | 2021-06-08T09:10:15Z | 2024-08-25T03:44:10Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/92 | [] | jomach | 0 |
ludwig-ai/ludwig | computer-vision | 3,570 | Upload to HF fails for non-LLM trained | **Describe the bug**
When a model is trained for categories/classification, the model weights are saved a `file` called `model/model_weights`. If the model is trained with type llm instead, the weights are saved to the **directory** `model/model_weights` with contents `README.md`, `adapter_config.json`, `adapter_model.bin`.
**To Reproduce**
Steps to reproduce the behavior:
1. Train a model with name `MODEL_NAME=bug-reprod-model` and config
```
{
"input_features": [
{
"name": "text",
"type": "text",
"encoder": {
"trainable": true,
"pretrained_model_name_or_path": "meta-llama/Llama-2-7b-hf",
"adapter": {
"type": "lora"
}
}
}
],
"output_features": [
{
"name": "label",
"type": "category"
}
]
}
```
2. Attempt to uploaded the trained model for hugging face account `HF_ID=bug-reprod-hf-id`
```
ludwig upload hf_hub -r $HF_ID/$MODEL_NAME -m $MODEL_NAME/api_experiment_$MODEL_NAME
```
You should see an error like this

3. Manually move the weights file to a directory
```
pushd $MODEL_NAME/api_experiment_$MODEL_NAME/model && \
mv model_weights adapter_model.bin && \
mkdir model_weights && \
mv adapter_model.bin model_weights && \
cp ~/save/$MODEL_NAME/{adapter_config.json,README.md} model_weights && \
popd
```
4. The upload to HF should now be successful
```
ludwig upload hf_hub -r $HF_ID/$MODEL_NAME -m $MODEL_NAME/api_experiment_$MODEL_NAME
```

**Expected behavior**
The model should upload to HF without having to manually create the directory
**Environment (please complete the following information):**
- OS:
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
- Version: CUDA 11.8
- Pytorch: 2.0.0+cu118
- Python: 3.8.10
- Ludwig: 0.8.1.post1
**Additional context**
Add any other context about the problem here.
@arnavgarg1 | closed | 2023-08-31T19:42:18Z | 2024-10-18T16:58:41Z | https://github.com/ludwig-ai/ludwig/issues/3570 | [] | thelinuxkid | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 56 | KeyError: 'resnext50_32x4d' | what's wrong with resnext50_32x4d?
`model = smp.Unet("resnext50_32x4d", encoder_weights="imagenet", classes=4, activation='sigmoid')`
error:
KeyError: 'resnext50_32x4d'
However, it's defined here:
https://github.com/qubvel/segmentation_models.pytorch/blob/master/segmentation_models_pytorch/encoders/resnet.py#L85 | closed | 2019-09-12T02:08:19Z | 2019-09-13T15:31:00Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/56 | [] | hktxt | 1 |
davidsandberg/facenet | tensorflow | 557 | A mistake in the guideline of Validate on LFW | Hi,
I am trying to follow the Validate on LFW tutorial, but I found an error in one of the commands. I think it may confuse beginners, so I want to point out the mistake.
```
cd ~/datasets
mkdir -p lfw/raw
tar xvf ~/Downloads/lfw.tgz -C /lfw/raw --strip-components=1
```
While we are in the `~/datasets` folder, we need to decompress the files into `lfw/raw` rather than `/lfw/raw`.
Thanks! | closed | 2017-11-28T09:52:02Z | 2018-04-11T16:23:12Z | https://github.com/davidsandberg/facenet/issues/557 | [] | jennyHsiao | 1 |
xorbitsai/xorbits | numpy | 153 | DOC: `np` is alias both for `numpy` and `xorbits.numpy` which may cause ambiguity | 
After `import numpy as np`, `np` refers to numpy instead of xorbits.numpy, so I recommend using `import numpy` directly. | open | 2023-01-09T04:13:12Z | 2023-05-17T04:27:19Z | https://github.com/xorbitsai/xorbits/issues/153 | [
"documentation",
"good first issue"
] | qianduoduo0904 | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 1,569 | error while processing | Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: "MPS backend out of memory (MPS allocated: 1.76 GB, other allocations: 1.57 GB, max allowed: 3.40 GB). Tried to allocate 160.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure)."
Traceback Error: "
File "UVR.py", line 6584, in process_start
File "separate.py", line 1025, in seperate
File "separate.py", line 1153, in inference_vr
File "separate.py", line 1120, in _execute
File "lib_v5/vr_network/nets.py", line 161, in predict_mask
File "lib_v5/vr_network/nets.py", line 137, in forward
File "lib_v5/vr_network/nets.py", line 45, in __call__
File "lib_v5/vr_network/layers.py", line 77, in __call__
File "lib_v5/vr_network/layers.py", line 24, in __call__
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/container.py", line 215, in forward
input = module(input)
^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"
Error Time Stamp [2024-09-25 13:15:42]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: True
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
cuda_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems | open | 2024-09-25T10:30:15Z | 2024-09-25T10:30:15Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1569 | [] | Samkata03 | 0 |
kizniche/Mycodo | automation | 665 | high CPU rPi 3B | ## Mycodo Issue Report:
Mycodo Version: 7.5.8
Python Version: 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516]
Database Version: 6333b0832b3d
Daemon Status: Running
Daemon Process ID: 1108
Daemon RAM Usage: 110.196 MB
Daemon Virtualenv: Yes
Frontend RAM Usage: 54.984 MB
Frontend Virtualenv: Yes
#### Problem Description
High CPU usage and slow response to commands
### Errors
2019-06-13 21:22:07,639 - ERROR - mycodo.daemon - Could not query output state: 'NoneType' object has no attribute 'output_state'
2019-06-13 21:22:11,112 - ERROR - mycodo.daemon - Could not find Output Controller
2019-06-13 21:33:47,277 - ERROR - mycodo.daemon - Could not query output state: You must setup() the GPIO channel first
2019-06-13 21:33:47,714 - ERROR - mycodo.daemon - Could not query output state: You must setup() the GPIO channel first
2019-06-13 21:33:50,264 - ERROR - mycodo.daemon - Could not query output state: You must setup() the GPIO channel first
2019-06-13 21:57:35,184 - ERROR - mycodo.daemon - Could not query output state: 'NoneType' object has no attribute 'output_state'
2019-06-13 21:57:44,772 - ERROR - mycodo.daemon - Could not query output state: 'NoneType' object has no attribute 'output_state'
2019-06-13 21:57:45,880 - ERROR - mycodo.daemon - Could not query output state: 'NoneType' object has no attribute 'output_state'
2019-06-13 21:57:46,894 - ERROR - mycodo.daemon - Could not query output state: 'NoneType' object has no attribute 'output_state'
2019-06-13 21:57:53,029 - ERROR - mycodo.daemon - Could not query output state: 'NoneType' object has no attribute 'output_state'
2019-06-13 21:58:41,684 - ERROR - mycodo.daemon - Could not query output state: 'NoneType' object has no attribute 'output_state
htop screenshot:

The controls using high CPU are GPIO hardwired controls that seem to work fine physically, but they are generating daemon errors.
| closed | 2019-06-15T05:11:45Z | 2019-06-16T14:32:28Z | https://github.com/kizniche/Mycodo/issues/665 | [] | SAM26K | 18 |
LAION-AI/Open-Assistant | python | 2,853 | Trollboard: Active users filter not applied when switching between time ranges | When opening the Trollboard, the "Daily" view correctly shows only enabled users. But when switching over to "Weekly" while the "Show active users" option is still active, all users (including disabled ones) are shown. Only after clicking on "Show banned users" and then again on "Show active users" is the filter applied correctly.
Make sure the list is always filtered according to current filter settings (test switching between the time-ranges). | open | 2023-04-23T10:54:14Z | 2023-04-23T10:54:14Z | https://github.com/LAION-AI/Open-Assistant/issues/2853 | [
"bug",
"website"
] | andreaskoepf | 0 |
biosustain/potion | sqlalchemy | 64 | Add option to keep key order in fields.Object() and FieldSet.parse_request() | open | 2016-01-15T12:27:45Z | 2016-01-15T12:27:45Z | https://github.com/biosustain/potion/issues/64 | [] | lyschoening | 0 |
|
iterative/dvc | machine-learning | 10,206 | dvc push: Unexpected error when pushing to Google Cloud storage or S3 | # Bug Report
dvc push: "Unexpected error" when pushing to Google Cloud storage or S3
### Reproduce
```
dvc init
dvc remote add -d s3 s3://bucket # or gcs gs://bucket
dvc import-url https://data.dvc.org/get-started/data.xml
dvc push -v
```
output (s3):
```
2023-12-27 19:56:42,605 DEBUG: v3.36.1 (pip), CPython 3.9.18 on Linux-5.15.139-93.147.amzn2.x86_64-x86_64-with-glibc2.26
2023-12-27 19:56:42,605 DEBUG: command: /path/bin/dvc push -v
Collecting |0.00 [00:00, ?entry/s]
Pushing |0.00 [00:00, ?file/s]
Collecting my.bucket/key on s3 |3.00 [00:00, 4.84entry/s]
2023-12-27 19:56:43,676 ERROR: unexpected error
Traceback (most recent call last):
File "/path/lib/python3.9/site-packages/dvc/cli/__init__.py", line 211, in main
ret = cmd.do_run()
File "/path/lib/python3.9/site-packages/dvc/cli/command.py", line 27, in do_run
return self.run()
File "/path/lib/python3.9/site-packages/dvc/commands/data_sync.py", line 64, in run
processed_files_count = self.repo.push(
File "/path/lib/python3.9/site-packages/dvc/repo/__init__.py", line 65, in wrapper
return f(repo, *args, **kwargs)
File "/path/lib/python3.9/site-packages/dvc/repo/push.py", line 144, in push
push_transferred, push_failed = ipush(
File "/path/lib/python3.9/site-packages/dvc_data/index/push.py", line 101, in push
old = build(data.path, data.fs)
File "/path/lib/python3.9/site-packages/dvc_data/index/build.py", line 90, in build
for entry in build_entries(path, fs, ignore=ignore):
File "/path/lib/python3.9/site-packages/dvc_data/index/build.py", line 55, in build_entries
walk_iter = fs.walk(path, detail=detail)
File "/path/lib/python3.9/site-packages/dvc_http/__init__.py", line 162, in walk
raise NotImplementedError
NotImplementedError
2023-12-27 19:56:43,752 DEBUG: link type reflink is not available ([Errno 95] no more link types left to try out)
2023-12-27 19:56:43,755 DEBUG: Removing '/path/.MHVNkr3eAijD7Q5aau3NRK.tmp'
2023-12-27 19:56:43,755 DEBUG: Removing '/path/.MHVNkr3eAijD7Q5aau3NRK.tmp'
2023-12-27 19:56:43,757 DEBUG: Removing '/path/.MHVNkr3eAijD7Q5aau3NRK.tmp'
2023-12-27 19:56:43,757 DEBUG: Removing '/path/bkw-9036/.dvc/cache/files/md5/.mnnSioPUuXvRUCqUV2ug87.tmp'
2023-12-27 19:56:43,777 DEBUG: Version info for developers:
DVC version: 3.36.1 (pip)
-------------------------
Platform: Python 3.9.18 on Linux-5.15.139-93.147.amzn2.x86_64-x86_64-with-glibc2.26
Subprojects:
dvc_data = 3.3.0
dvc_objects = 3.0.0
dvc_render = 1.0.0
dvc_task = 0.3.0
scmrepo = 2.0.2
Supports:
gs (gcsfs = 2023.12.2.post1),
http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.12.2, boto3 = 1.33.13)
Config:
Global: /home/jdt/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/nvme1n1p1
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/nvme1n1p1
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/9d9135fb99d9d827364c4dc5a42cdc60
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
2023-12-27 19:56:43,781 DEBUG: Analytics is enabled.
2023-12-27 19:56:43,860 DEBUG: Trying to spawn ['daemon', 'analytics', '/tmp/tmpccxiwrmd', '-v']
2023-12-27 19:56:43,871 DEBUG: Spawned ['daemon', 'analytics', '/tmp/tmpccxiwrmd', '-v'] with pid 22406
```
output (gcs):
```
2023-12-27 19:47:22,768 DEBUG: v3.36.1 (pip), CPython 3.9.18 on Linux-5.15.139-93.147.amzn2.x86_64-x86_64-with-glibc2.26
2023-12-27 19:47:22,769 DEBUG: command: /path/bin/dvc push -v
Collecting |0.00 [00:00, ?entry/s]
Pushing |0.00 [00:00, ?file/s]
Collecting bucket/path on gs |3.00 [00:01, 2.84entry/s]
2023-12-27 19:47:24,328 ERROR: unexpected error
Traceback (most recent call last):
File "/path/lib/python3.9/site-packages/dvc/cli/__init__.py", line 211, in main
ret = cmd.do_run()
File "/path/lib/python3.9/site-packages/dvc/cli/command.py", line 27, in do_run
return self.run()
File "/path/lib/python3.9/site-packages/dvc/commands/data_sync.py", line 64, in run
processed_files_count = self.repo.push(
File "/path/lib/python3.9/site-packages/dvc/repo/__init__.py", line 65, in wrapper
return f(repo, *args, **kwargs)
File "/path/lib/python3.9/site-packages/dvc/repo/push.py", line 144, in push
push_transferred, push_failed = ipush(
File "/path/lib/python3.9/site-packages/dvc_data/index/push.py", line 101, in push
old = build(data.path, data.fs)
File "/path/lib/python3.9/site-packages/dvc_data/index/build.py", line 90, in build
for entry in build_entries(path, fs, ignore=ignore):
File "/path/lib/python3.9/site-packages/dvc_data/index/build.py", line 55, in build_entries
walk_iter = fs.walk(path, detail=detail)
File "/path/lib/python3.9/site-packages/dvc_http/__init__.py", line 162, in walk
raise NotImplementedError
NotImplementedError
2023-12-27 19:47:24,370 DEBUG: link type reflink is not available ([Errno 95] no more link types left to try out)
2023-12-27 19:47:24,371 DEBUG: Removing '/path/.fJ4uXqQznknWmbrzzUTXLQ.tmp'
2023-12-27 19:47:24,371 DEBUG: Removing '/path/.fJ4uXqQznknWmbrzzUTXLQ.tmp'
2023-12-27 19:47:24,371 DEBUG: Removing '/path/.fJ4uXqQznknWmbrzzUTXLQ.tmp'
2023-12-27 19:47:24,371 DEBUG: Removing '/path/bkw-9036/.dvc/cache/files/md5/.M6iwnJkjQgKzg54kN6chVi.tmp'
2023-12-27 19:47:24,377 DEBUG: Version info for developers:
DVC version: 3.36.1 (pip)
-------------------------
Platform: Python 3.9.18 on Linux-5.15.139-93.147.amzn2.x86_64-x86_64-with-glibc2.26
Subprojects:
dvc_data = 3.3.0
dvc_objects = 3.0.0
dvc_render = 1.0.0
dvc_task = 0.3.0
scmrepo = 2.0.2
Supports:
gs (gcsfs = 2023.12.2.post1),
http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3)
Config:
Global: /home/jdt/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/nvme1n1p1
Caches: local
Remotes: gs
Workspace directory: ext4 on /dev/nvme1n1p1
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/9d9135fb99d9d827364c4dc5a42cdc60
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
2023-12-27 19:47:24,379 DEBUG: Analytics is enabled.
2023-12-27 19:47:24,445 DEBUG: Trying to spawn ['daemon', 'analytics', '/tmp/tmpk_30nnlt', '-v']
2023-12-27 19:47:24,455 DEBUG: Spawned ['daemon', 'analytics', '/tmp/tmpk_30nnlt', '-v'] with pid 15755
```
### Expected
Successful push
### Environment information
```
DVC version: 3.36.1 (pip)
-------------------------
Platform: Python 3.9.18 on Linux-5.15.139-93.147.amzn2.x86_64-x86_64-with-glibc2.26
Subprojects:
dvc_data = 3.3.0
dvc_objects = 3.0.0
dvc_render = 1.0.0
dvc_task = 0.3.0
scmrepo = 2.0.2
Supports:
gs (gcsfs = 2023.12.2.post1),
http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.12.2, boto3 = 1.33.13)
Config:
Global: /home/jdt/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/nvme1n1p1
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/nvme1n1p1
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/c9c73dbc105eb09a15137f49a60e6a5b
```
**Additional Information (if any):**
| closed | 2023-12-28T03:32:06Z | 2024-01-03T01:17:32Z | https://github.com/iterative/dvc/issues/10206 | [
"bug"
] | turkanis | 12 |
aleju/imgaug | deep-learning | 406 | how to use imgaug with pytorch | I want to use imgaug with pytorch.
`def __getitem__(self, index)` in `torch.utils.data.Dataset` processes one picture at a time, but `seq(images=images, keypoints=keypoints)` expects a batch with 4 dims (B, H, W, Channel). I want to know how to use imgaug without expanding the dimensions.
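For reference, here is a minimal sketch of the kind of usage I am after; it assumes imgaug's single-sample keyword arguments (`image=`, `keypoints=`) can be called with one image at a time, so please correct me if the API differs:

```python
import imgaug.augmenters as iaa
from imgaug.augmentables.kps import Keypoint, KeypointsOnImage
from torch.utils.data import Dataset

seq = iaa.Sequential([iaa.Fliplr(0.5), iaa.Affine(rotate=(-10, 10))])

class AugmentedDataset(Dataset):
    def __init__(self, images, keypoints):
        self.images = images          # list of HxWxC uint8 arrays
        self.keypoints = keypoints    # list of lists of (x, y) tuples

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        image = self.images[index]
        kps = KeypointsOnImage(
            [Keypoint(x=x, y=y) for x, y in self.keypoints[index]],
            shape=image.shape,
        )
        # one sample at a time, no (B, H, W, C) batch dimension
        image_aug, kps_aug = seq(image=image, keypoints=kps)
        return image_aug, kps_aug
```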
Thank you! | open | 2019-08-30T07:17:43Z | 2021-05-16T09:30:08Z | https://github.com/aleju/imgaug/issues/406 | [] | flowtcw | 11 |
TencentARC/GFPGAN | deep-learning | 23 | 'BASICSR_JIT' is not recognized as an internal or external command | Is this an issue with setting environment variables in Windows? Is there a method to resolve this? | closed | 2021-07-21T00:25:14Z | 2021-07-21T15:33:15Z | https://github.com/TencentARC/GFPGAN/issues/23 | [] | Kubishime | 1 |
ageitgey/face_recognition | machine-learning | 1,203 | Why is recognition accuracy poor for children? | Is it because face alignment does not work well on children's faces during the alignment step? | open | 2020-08-20T10:02:36Z | 2020-08-20T10:02:36Z | https://github.com/ageitgey/face_recognition/issues/1203 | [] | yfq512 | 0 |
ultralytics/ultralytics | machine-learning | 19,270 | expected str, bytes or os.PathLike object, not NoneType | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
I imported the dataset directly from Roboflow, so it should not have a problem.
The following is the code I ran:
```python
from ultralytics import YOLO, checks, hub
checks()
hub.login('hidden')
model = YOLO('https://hub.ultralytics.com/models/DDnZzdKNetoATXL0SY0Q')
results = model.train()
```
Traceback (most recent call last):
File "C:\Program Files\Python310\lib\site-packages\ultralytics\engine\trainer.py", line 558, in get_dataset
elif self.args.data.split(".")[-1] in {"yaml", "yml"} or self.args.task in {
AttributeError: 'NoneType' object has no attribute 'split'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\a\Desktop\Weight1\Train.py", line 7, in <module>
results = model.train()
File "C:\Program Files\Python310\lib\site-packages\ultralytics\engine\model.py", line 803, in train
self.trainer = (trainer or self._smart_load("trainer"))(overrides=args, _callbacks=self.callbacks)
File "C:\Program Files\Python310\lib\site-packages\ultralytics\engine\trainer.py", line 134, in __init__
self.trainset, self.testset = self.get_dataset()
File "C:\Program Files\Python310\lib\site-packages\ultralytics\engine\trainer.py", line 568, in get_dataset
raise RuntimeError(emojis(f"Dataset '{clean_url(self.args.data)}' error ❌ {e}")) from e
File "C:\Program Files\Python310\lib\site-packages\ultralytics\utils\__init__.py", line 1301, in clean_url
url = Path(url).as_posix().replace(":/", "://") # Pathlib turns :// -> :/, as_posix() for Windows
File "C:\Program Files\Python310\lib\pathlib.py", line 960, in __new__
self = cls._from_parts(args)
File "C:\Program Files\Python310\lib\pathlib.py", line 594, in _from_parts
drv, root, parts = self._parse_args(args)
File "C:\Program Files\Python310\lib\pathlib.py", line 578, in _parse_args
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
### Environment
Ultralytics 8.3.75 🚀 Python-3.10.11 torch-2.5.1+cu118 CUDA:0 (NVIDIA GeForce RTX 2080 Ti, 11264MiB)
Setup complete ✅ (16 CPUs, 15.9 GB RAM, 306.5/446.5 GB disk)
OS Windows-10-10.0.19045-SP0
Environment Windows
Python 3.10.11
Install pip
RAM 15.93 GB
Disk 306.5/446.5 GB
CPU AMD Ryzen 7 5700X 8-Core Processor
CPU count 16
GPU NVIDIA GeForce RTX 2080 Ti, 11264MiB
GPU count 1
CUDA 11.8
numpy ✅ 1.26.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.5.1+cu118>=1.8.0
torch ✅ 2.5.1+cu118!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1+cu118>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
```python
from ultralytics import YOLO, checks, hub
checks()
hub.login('hidden')
model = YOLO('https://hub.ultralytics.com/models/DDnZzdKNetoATXL0SY0Q')
results = model.train()
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-16T22:55:49Z | 2025-02-16T23:15:55Z | https://github.com/ultralytics/ultralytics/issues/19270 | [
"question"
] | felixho789 | 2 |
opengeos/leafmap | streamlit | 122 | Add pydeck as a new plotting backend | References:
- https://deckgl.readthedocs.io
- https://github.com/agressin/pydeck_myTileLayer | closed | 2021-10-16T18:23:01Z | 2021-10-17T17:35:17Z | https://github.com/opengeos/leafmap/issues/122 | [
"Feature Request"
] | giswqs | 1 |
piccolo-orm/piccolo | fastapi | 883 | auto migrations fails when table in schema | Hello,
I'm running the latest Postgres image, python 3.11.3 (venv) and piccolo 0.119.0.
When I create a new ASGI application with a fresh venv, I can add, change and delete rows perfectly well using the migrations new .. --auto / forward commands. Then I can move the table into a schema, e.g. `class Task(Table, schema="blog"):`, and running forward will successfully put the table into that schema. But as soon as it is in a schema, I am no longer able to perform any auto migrations. It won't find that table:
```
- 2023-09-08T14:20:15:789180 [forwards]... The command failed.
relation "task" does not exist
Traceback (most recent call last):
File "C:\venvs\piccolodb\Lib\site-packages\targ\__init__.py", line 448, in run
command.call_with(arg_class)
File "C:\venvs\piccolodb\Lib\site-packages\targ\__init__.py", line 229, in call_with
asyncio.run(self.command(**cleaned_kwargs))
File "C:\Program Files\Python311\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\venvs\piccolodb\Lib\site-packages\piccolo\apps\migrations\commands\forwards.py", line 159, in forwards
response = await run_forwards(
^^^^^^^^^^^^^^^^^^^
File "C:\venvs\piccolodb\Lib\site-packages\piccolo\apps\migrations\commands\forwards.py", line 120, in run_forwards
response = await manager.run()
^^^^^^^^^^^^^^^^^^^
File "C:\venvs\piccolodb\Lib\site-packages\piccolo\apps\migrations\commands\forwards.py", line 97, in run
return await self.run_migrations(app_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\venvs\piccolodb\Lib\site-packages\piccolo\apps\migrations\commands\forwards.py", line 82, in run_migrations
await response.run()
File "C:\venvs\piccolodb\Lib\site-packages\piccolo\apps\migrations\auto\migration_manager.py", line 863, in run
await self._run_drop_columns(backwards=backwards)
File "C:\venvs\piccolodb\Lib\site-packages\piccolo\apps\migrations\auto\migration_manager.py", line 642, in _run_drop_columns
await self._run_query(
File "C:\venvs\piccolodb\Lib\site-packages\piccolo\apps\migrations\auto\migration_manager.py", line 393, in _run_query
await query.run()
File "C:\venvs\piccolodb\Lib\site-packages\piccolo\query\base.py", line 445, in run
return await engine.run_ddl(self.ddl[0], in_pool=in_pool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\venvs\piccolodb\Lib\site-packages\piccolo\engine\postgres.py", line 553, in run_ddl
response = await current_transaction.connection.fetch(ddl)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\venvs\piccolodb\Lib\site-packages\asyncpg\connection.py", line 620, in fetch
return await self._execute(
^^^^^^^^^^^^^^^^^^^^
File "C:\venvs\piccolodb\Lib\site-packages\asyncpg\connection.py", line 1659, in _execute
result, _ = await self.__execute(
^^^^^^^^^^^^^^^^^^^^^
File "C:\venvs\piccolodb\Lib\site-packages\asyncpg\connection.py", line 1684, in __execute
return await self._do_execute(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\venvs\piccolodb\Lib\site-packages\asyncpg\connection.py", line 1731, in _do_execute
result = await executor(stmt, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "asyncpg\protocol\protocol.pyx", line 201, in bind_execute
asyncpg.exceptions.UndefinedTableError: relation "task" does not exist
```
I can change the migration_file from
````
from piccolo.apps.migrations.auto.migration_manager import MigrationManager
ID = "2023-09-08T14:20:15:789180"
VERSION = "0.119.0"
DESCRIPTION = ""
async def forwards():
manager = MigrationManager(
migration_id=ID, app_name="home", description=DESCRIPTION
)
manager.drop_column(
table_class_name="Task",
tablename="task",
column_name="completed_at",
db_column_name="completed_at",
)
return manager
````
to
````
....
manager.drop_column(
table_class_name="Task",
tablename="blog.task",
column_name="completed_at",
db_column_name="completed_at",
)
return manager
````
it will perform the migration. Any clues what's going on here? | closed | 2023-09-08T12:39:29Z | 2023-09-09T00:37:30Z | https://github.com/piccolo-orm/piccolo/issues/883 | [] | lherrman | 3 |
polarsource/polar | fastapi | 4,598 | Orders API: Use `get_unprefixed_state` for state output to strip country prefix | Stripe Tax uses `ISO 3166-2` with subdivisions to perform a lookup on tax rates, e.g `country-state`, which makes sense. On checkout we therefore store the state in this format too for tax calculations.
However, customers expect a separation. Getting country from `orders.billing_address.country` and state from `orders.billing_address.state` (clean from country codes).
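A rough illustration of the stripping we want at the schema boundary (a sketch only; the helper below is written out for clarity and may not match the exact signature or location of the real `get_unprefixed_state` in the codebase):

```python
def get_unprefixed_state(state: str | None) -> str | None:
    """Return "NY" for "US-NY"; pass through values that have no country prefix."""
    if state and "-" in state:
        return state.split("-", 1)[1]
    return state

# Desired output shape for orders (illustrative values):
#   orders.billing_address.country -> "US"
#   orders.billing_address.state   -> "NY"   (not "US-NY")
```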
So our output of the schema should use `get_unprefixed_state` to strip the country code from the state output in our API. | open | 2024-12-04T10:08:20Z | 2024-12-04T10:08:20Z | https://github.com/polarsource/polar/issues/4598 | [
"enhancement",
"dx"
] | birkjernstrom | 0 |
marcomusy/vedo | numpy | 894 | Convert colored mesh to volume | Hi, given a mesh with colors assigned to each vertex, how can I convert it into a volume (voxel format) with the colors of the voxels derived from these points? Is there a simple way to achieve this? My purpose requires taking either the nearest point's color, or the most frequent color within the space of the voxel.
Thanks.
Note that I currently use `mesh.binarize(spacing, fg_val=1, bg_val=0)` but this (obviously) doesn't convert the colors into voxel metadata. | closed | 2023-07-07T06:35:56Z | 2023-07-14T11:58:41Z | https://github.com/marcomusy/vedo/issues/894 | [] | JeffreyWardman | 6 |
teamhide/fastapi-boilerplate | sqlalchemy | 21 | What is the config for postgres ? | Hi,
Can you guide me how can I change config for postgres ? I am using postgres+pyscopg2://postgres:password@localhost:5432/fastapi. But I have no luck ? | closed | 2023-04-24T07:22:08Z | 2024-01-28T09:09:54Z | https://github.com/teamhide/fastapi-boilerplate/issues/21 | [] | phtran-dev | 1 |
browser-use/browser-use | python | 362 | UPDATE the Langchain Chat models support | ### Type of Documentation Issue
Incorrect documentation
### Documentation Page
https://docs.browser-use.com/customize/langchain-models
### Issue Description
Currently I have used Gemini to run this
```python
from langchain_google_genai import ChatGoogleGenerativeAI
from browser_use import Agent
import asyncio
from dotenv import load_dotenv
load_dotenv()

async def main():
    agent = Agent(
        task="Go to Reddit, search for 'browser-use' in the search bar, click on the first post and return the first comment.",
        llm=ChatGoogleGenerativeAI(model="gemini-1.5-flash"),
    )
    result = await agent.run()
    print(result)

asyncio.run(main())
```
and it worked fine, so please update the docs.
### Suggested Changes
`from langchain_google_genai import ChatGoogleGenerativeAI
from browser_use import Agent
import asyncio
from dotenv import load_dotenv
load_dotenv()
async def main():
agent = Agent(
task="Go to Reddit, search for 'browser-use' in the search bar, click on the first post and return the first comment.",
llm=ChatGoogleGenerativeAI(model="gemini-1.5-flash"),
)
result = await agent.run()
print(result)
asyncio.run(main())` | closed | 2025-01-23T19:22:48Z | 2025-01-24T11:58:32Z | https://github.com/browser-use/browser-use/issues/362 | [
"documentation"
] | snpixel | 1 |
HIT-SCIR/ltp | nlp | 573 | How to segment 50,000+ lines of text at high speed | To process a word-segmentation dataset of more than 50,000 lines, I adapted the example script:
```python
import sys, os, time
sys.path.append(os.path.abspath(os.path.dirname(__file__) + '/' + '..'))
from ltp import LTP

root_path = os.path.abspath(os.path.dirname(__file__) + '/' + '..')
ltp = LTP(path="base")

url = "tests/zrbzdz.txt"
t1 = time.time()
output = ''
with open(url, "r", encoding='utf-8-sig') as f:
    lines = f.readlines()
    for line in lines:
        segment, _ = ltp.seg([line])
        output += "/ ".join(segment[0]) + '\n'
tt = time.time() - t1

# path of the output file holding the segmented text
LTP_f = open("tests/output/1_LTP.txt", "wb")
LTP_f.write(output.encode('utf-8'))
LTP_f.close()
print('time ' + str(tt))
```
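One idea I am considering (a sketch only; it assumes `ltp.seg` accepts a list of many sentences per call, which the code above already relies on, and that the whole file fits in memory) is to feed the lines in batches instead of calling the model once per line:

```python
import time
from ltp import LTP

ltp = LTP(path="base")
batch_size = 64  # illustrative value, not tuned

with open("tests/zrbzdz.txt", "r", encoding="utf-8-sig") as f:
    lines = [line.strip() for line in f if line.strip()]

t1 = time.time()
pieces = []
for i in range(0, len(lines), batch_size):
    batch = lines[i:i + batch_size]
    segments, _ = ltp.seg(batch)              # one model call per batch of lines
    pieces.extend("/ ".join(seg) for seg in segments)

with open("tests/output/1_LTP.txt", "wb") as out:
    out.write(("\n".join(pieces) + "\n").encode("utf-8"))

print('time ' + str(time.time() - t1))
```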
Running this script on the 50,000+ lines of data takes 43 minutes. How can I reduce this long processing time? | closed | 2022-08-11T01:23:13Z | 2023-01-20T17:59:25Z | https://github.com/HIT-SCIR/ltp/issues/573 | [] | liyanfu520 | 7 |
erdewit/ib_insync | asyncio | 128 | Getting OrderHistory | I'm wondering if there is a way to get history of all orders. My bot needs to check some information on past orders to make buy or sell decision. | closed | 2019-01-18T04:46:00Z | 2020-07-04T13:59:37Z | https://github.com/erdewit/ib_insync/issues/128 | [] | quadricanna | 14 |
dpgaspar/Flask-AppBuilder | rest-api | 2,261 | get a list of requirements |
### Environment windows 10
Flask-Appbuilder version:4.0.0
apispec==3.3.2
attrs==23.2.0
Authlib==1.0.0
Babel==2.14.0
cffi==1.15.1
click==8.1.7
colorama==0.4.6
cryptography==42.0.8
dnspython==2.3.0
email-validator==1.3.1
Flask==2.0.3
Flask-AppBuilder==4.0.0
Flask-Babel==2.0.0
Flask-JSGlue==0.3.1
Flask-JWT-Extended==4.6.0
Flask-Login==0.5.0
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.15.1
greenlet==3.0.3
idna==3.7
importlib-metadata==6.7.0
importlib-resources==5.12.0
itsdangerous==2.1.2
jinja2==3.1.4
jsonschema==4.17.3
MarkupSafe==2.1.5
marshmallow==3.19.0
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.26.1
packaging==24.0
pkgutil-resolve-name==1.3.10
prison==0.2.1
pycparser==2.21
PyJWT==2.8.0
pyrsistent==0.19.3
python-dateutil==2.9.0.post0
pytz==2024.1
PyYAML==6.0.1
six==1.16.0
SQLAlchemy==1.4.52
SQLAlchemy-Utils==0.41.2
typing-extensions==4.7.1
Werkzeug==2.0.3
WTForms==2.3.3
zipp==3.15.0
### Describe the expected results
I want to run OAuth with Flask-AppBuilder,
so I need the correct package versions to run it.
### Describe the actual results
In PyCharm:
```
from ssl import SSLContext
File "c:\users\usuario\anaconda3\lib\ssl.py", line 98, in <module>
import _ssl # if we can't import it, let the error propagate
ImportError: DLL load failed: No se puede encontrar el módulo especificado.
```
In the console:
```
from jinja2 import Markup
ImportError: cannot import name 'Markup' from 'jinja2'
```
| open | 2024-07-17T19:54:34Z | 2024-07-18T20:17:50Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2261 | [] | EnriqueGautoSand | 1 |
coqui-ai/TTS | deep-learning | 2,957 | [Bug] Text input via cat and multiple lines | ### Describe the bug
When feeding multiple lines of text using cat into the tts command line tool, tts creates long pauses after a line break.
### To Reproduce
1. Write some text with line breaks into demo.txt
2. Execute tts --text "$(cat demo.txt)" --out_path demo.wav --model_name whatever/model
### Expected behavior
It shouldn't make a pause when just a simple line break is found.
### Logs
_No response_
### Environment
```shell
Windows 10 64 bit using WSL 2 and Ubuntu 23.04. coqui tts 0.17.2 and Python 3.11.
```
### Additional context
_No response_ | closed | 2023-09-17T08:38:06Z | 2023-10-29T18:11:12Z | https://github.com/coqui-ai/TTS/issues/2957 | [
"bug",
"wontfix"
] | domasofan | 3 |
deeppavlov/DeepPavlov | tensorflow | 892 | Downgrade tensoflow version AttributeError | There is no fixed tensorflow version in requirments.txt.
So, if I downgrade tensorflow to version 1.10, I'll catch error: "AttributeError: module 'tensorflow' has no attribute 'init_scope'. | closed | 2019-06-20T14:43:11Z | 2019-07-15T13:14:32Z | https://github.com/deeppavlov/DeepPavlov/issues/892 | [] | artyerokhin | 2 |
marcomusy/vedo | numpy | 229 | Could not find example to adjust threshold | First thanks for sharing this wonderful project.
I can use command line
vedo https://vedo.embl.es/examples/data/head.vti
to see head isosurface and adjust threshold. But could not find any scrip in example to do this.
Could you please help? | closed | 2020-10-16T15:04:06Z | 2020-10-17T00:54:46Z | https://github.com/marcomusy/vedo/issues/229 | [] | mit10000 | 4 |
ansible/awx | django | 15,795 | awx.conf.settings The current value "'GroupOfUniqueNamesType'" for setting "AUTH_LDAP_GROUP_TYPE" is invalid | ### Please confirm the following
- [x] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [x] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [x] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [x] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
I am using [awx operator](https://github.com/ansible/awx-operator), I am using the latest version, `2.19.1`, https://github.com/ansible/awx-operator/releases/tag/2.19.1.
I am using the extra settings to configure my ldap, https://github.com/ansible/awx-operator/blob/devel/docs/user-guide/advanced-configuration/extra-settings.md#extra-settings
I followed these settings: https://github.com/ansible/awx-operator/blob/devel/docs/user-guide/advanced-configuration/enabling-ldap-integration-at-awx-bootstrap.md
The LDAP feature is working as expected; however, this error log is generated continuously in my AWX web pod, and it floods the disk space.
Can someone look at this issue?
### AWX version
awx operator v2.19.1
### Select the relevant components
- [x] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
2.19.1
### Operating system
kubernetes
### Web browser
Chrome
### Steps to reproduce
I followed these settings in my AWX configuration manifest: https://github.com/ansible/awx-operator/blob/devel/docs/user-guide/advanced-configuration/enabling-ldap-integration-at-awx-bootstrap.md
It gave me this error log in the web pod; however, LDAP works as expected.
### Expected results
no error message should be generated in awx pod.
### Actual results
```
2025-01-29 20:41:49,696 WARNING [dbb9620b7e334a86815755d844c08c83] awx.conf.settings The current value "{'name_attr': 'cn'}" for setting "AUTH_LDAP_GROUP_TYPE_PARAMS" is invalid.
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/conf/settings.py", line 402, in _get_local
internal_value = field.to_internal_value(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/sso/fields.py", line 480, in to_internal_value
group_type_cls = find_class_in_modules(group_type_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/sso/fields.py", line 52, in find_class_in_modules
cls = getattr(m, class_name, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: attribute name must be string, not 'type'
2025-01-29 20:41:49,697 WARNING [dbb9620b7e334a86815755d844c08c83] awx.conf.settings The current value "'GroupOfUniqueNamesType'" for setting "AUTH_LDAP_GROUP_TYPE" is invalid.
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/conf/settings.py", line 402, in _get_local
internal_value = field.to_internal_value(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/sso/fields.py", line 458, in to_internal_value
self.fail('invalid_parameters', parameters_type=type(params))
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/rest_framework/fields.py", line 603, in fail
raise ValidationError(message_string, code=key)
rest_framework.exceptions.ValidationError: [ErrorDetail(string="Invalid group_type parameters. Expected instance of dict but got <class 'type'> instead.", code='invalid_parameters')]
2025-01-29 20:41:49,698 WARNING [dbb9620b7e334a86815755d844c08c83] awx.conf.settings The current value "{'name_attr': 'cn'}" for setting "AUTH_LDAP_GROUP_TYPE_PARAMS" is invalid.
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/conf/settings.py", line 402, in _get_local
internal_value = field.to_internal_value(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/sso/fields.py", line 480, in to_internal_value
group_type_cls = find_class_in_modules(group_type_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/sso/fields.py", line 52, in find_class_in_modules
cls = getattr(m, class_name, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: attribute name must be string, not 'type'
```
### Additional information
_No response_ | open | 2025-01-29T21:08:03Z | 2025-02-05T18:24:42Z | https://github.com/ansible/awx/issues/15795 | [
"type:bug",
"needs_triage",
"community"
] | kevrrnet | 1 |
python-restx/flask-restx | flask | 623 | Can the automated documentation use the latest version 3.0 or above? | Can the automated documentation use the latest version 3.0 or above? | open | 2024-10-15T02:32:13Z | 2024-10-15T02:32:13Z | https://github.com/python-restx/flask-restx/issues/623 | [
"enhancement"
] | haike-1213 | 0 |
mckinsey/vizro | plotly | 740 | [Docs] Simplify our docs code examples | I believe someone from the development team should help streamline our code examples in the documentation. Upon rereading them, I noticed many instances where simplification is possible. Here are some specific recommendations:
- [ ] https://github.com/mckinsey/vizro/issues/713 - if this won't be done in the GHC, we should do it
- [ ] Eliminate any unnecessary controls.
- [ ] Remove any unnecessary filter interactions or actions.
- [ ] Exclude any secondary or tertiary components that aren't essential.
In general, we should remove everything that isn't required to demonstrate the feature in question. This approach will keep the tutorials focused and prevent distractions from the main purpose.
**Example:**
```
from vizro import Vizro
import vizro.models as vm
import vizro.plotly.express as px
df = px.data.gapminder()
gapminder_data = (
df.groupby(by=["continent", "year"]).
agg({"lifeExp": "mean", "pop": "sum", "gdpPercap": "mean"}).reset_index()
)
first_page = vm.Page(
title="First Page",
layout=vm.Layout(grid=[[0, 0], [1, 2], [1, 2], [1, 2]]),
components=[
vm.Card(
text="""
# First dashboard page
This pages shows the inclusion of markdown text in a page and how components
can be structured using Layout.
""",
),
vm.Graph(
id="box_cont",
figure=px.box(gapminder_data, x="continent", y="lifeExp", color="continent",
labels={"lifeExp": "Life Expectancy", "continent": "Continent"}),
),
vm.Graph(
id="line_gdp",
figure=px.line(gapminder_data, x="year", y="gdpPercap", color="continent",
labels={"year": "Year", "continent": "Continent",
"gdpPercap":"GDP Per Cap"}),
),
],
controls=[
vm.Filter(column="continent", targets=["box_cont", "line_gdp"]),
],
)
dashboard = vm.Dashboard(pages=[first_page])
Vizro().build(dashboard).run()
```
**What could be removed:**
- `id` provision inside the charts
- Removal of `targets` inside `vm.Filter` as it will target all charts if not specified
- Removal of `labels` argument inside charts
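For illustration, applying the removals above to the example would leave something like this sketch:
```
from vizro import Vizro
import vizro.models as vm
import vizro.plotly.express as px

df = px.data.gapminder()
gapminder_data = (
    df.groupby(by=["continent", "year"])
    .agg({"lifeExp": "mean", "pop": "sum", "gdpPercap": "mean"})
    .reset_index()
)

first_page = vm.Page(
    title="First Page",
    layout=vm.Layout(grid=[[0, 0], [1, 2], [1, 2], [1, 2]]),
    components=[
        vm.Card(
            text="""
            # First dashboard page
            This page shows the inclusion of markdown text in a page and how components
            can be structured using Layout.
            """,
        ),
        vm.Graph(figure=px.box(gapminder_data, x="continent", y="lifeExp", color="continent")),
        vm.Graph(figure=px.line(gapminder_data, x="year", y="gdpPercap", color="continent")),
    ],
    controls=[
        vm.Filter(column="continent"),
    ],
)

dashboard = vm.Dashboard(pages=[first_page])
Vizro().build(dashboard).run()
```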
| open | 2024-09-24T08:53:38Z | 2024-09-24T08:57:32Z | https://github.com/mckinsey/vizro/issues/740 | [
"Nice to have :cherries:"
] | huong-li-nguyen | 0 |
keras-team/keras | deep-learning | 20,890 | Improve Model.layers setter error message | Current error message for Model.layers setter is ambiguous and doesn't explain how to properly handle layers.
Current:
“`Model.layers` attribute is reserved and should not be used. Please use another name.”
Proposed:
“`Model.layers` is a read-only property. Use Model.add() to add new layers.” | closed | 2025-02-10T22:46:01Z | 2025-02-20T18:11:56Z | https://github.com/keras-team/keras/issues/20890 | [
"type:docs"
] | nikolasavic3 | 2 |
2noise/ChatTTS | python | 64 | add model to hugging face | closed | 2024-05-29T16:30:38Z | 2024-07-15T04:01:47Z | https://github.com/2noise/ChatTTS/issues/64 | [
"stale"
] | clmnt | 2 |
|
plotly/dash | plotly | 2,827 | cannot pickle 'SSLContext' with background callback | **Describe your context**
Windows 11, Python 3.9
```
dash 2.16.1
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
Occurs with Edge & Chrome.
**Describe the bug**
I use a background callback in a multi-page app that throws an error when called:
* ``TypeError: cannot pickle 'SSLContext' object`` in the browser (without more details, see screenshot below)
* in the standard output:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\201021305\.conda\envs\track\lib\site-packages\multiprocess\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\201021305\.conda\envs\track\lib\site-packages\multiprocess\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
File "C:\Users\201021305\.conda\envs\track\lib\site-packages\dill\_dill.py", line 289, in load
return Unpickler(file, ignore=ignore, **kwds).load()
File "C:\Users\201021305\.conda\envs\track\lib\site-packages\dill\_dill.py", line 444, in load
obj = StockUnpickler.load(self)
EOFError: Ran out of input
```
Here's how the callback is defined:
```python
BACKGROUND_CALLBACK_MANAGER = dash.DiskcacheManager(diskcache.Cache("./cache"))
@dash.callback(
[...]
prevent_initial_call=True,
background=True,
manager=BACKGROUND_CALLBACK_MANAGER,
)
```
The callback involves an object that has a SQLAlchemy engine as an attribute. The connection is made through SSL, so I guess this is the object that fails to be pickled. However, I can serialize this object successfully with ``dill.dumps``, so I'm not sure...
Maybe related to https://github.com/uqfoundation/dill/issues/308, but until the issue is fixed, there might be a workaround?
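For completeness, the kind of workaround I am experimenting with (a sketch only, with illustrative component ids and connection URL, and not a confirmed fix for this report) is to build the SSL-backed engine lazily inside the callback so it is never part of what gets pickled:

```python
# Hypothetical sketch: nothing SSL-related is captured at callback-registration time.
import dash
import diskcache
import sqlalchemy
from dash import Input, Output, callback

background_callback_manager = dash.DiskcacheManager(diskcache.Cache("./cache"))

def get_engine():
    # Created on demand in the worker process; the connection URL is illustrative.
    return sqlalchemy.create_engine("postgresql+psycopg2://user:pwd@host:5432/db")

@callback(
    Output("result", "children"),   # hypothetical component ids
    Input("run", "n_clicks"),
    prevent_initial_call=True,
    background=True,
    manager=background_callback_manager,
)
def run_query(n_clicks):
    engine = get_engine()
    with engine.connect() as conn:
        value = conn.execute(sqlalchemy.text("SELECT 1")).scalar()
    return f"query returned {value}"
```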
**Expected behavior**
I expect the callback to run without error.
**Screenshots**

| open | 2024-04-03T11:13:34Z | 2024-08-13T19:48:24Z | https://github.com/plotly/dash/issues/2827 | [
"bug",
"P3"
] | fxstempfelals | 7 |
plotly/dash | dash | 2,802 | allow deletion of data from Heat Figs with Patch() | I am trying to delete some data from a heat fig, and it doesn't seem to be possible with patch. With patch, you would return something like
```
fig = Patch()
del fig['data'][0]['x'][0]
```
to delete the first item in the heatfig. The issue is that the heatfig has the following structure:
```
{ "data": [
{
"x": ["2021-12-21T19:58:00.542000", "2021-12-21T19:58:01.542000", "2021-12-21T19:58:02.542000" ],
"y": [13500.0, 13503.33591, 13506.67183 ],
"z": [[599.8054, 581.1404, 570.4771 ],
[678.9323, 644.2858, 610.9979 ],
[576.6772, 568.9164, 565.6251 ],
}]}
```
and so you would need to loop through each list in the z field to remove the first item as well (as the heatfig is a 3D array). We can't know ahead of time how many items are in the z field making it impossible to delete data.
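To make that concrete, the required deletion would have to look something like this sketch, where the number of rows in `z` is exactly what the callback cannot know ahead of time:

```python
from dash import Patch

# hypothetical callback body
fig = Patch()
del fig["data"][0]["x"][0]
# ...and additionally, for every row i of z (row count unknown server-side):
# del fig["data"][0]["z"][i][0]
```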
Basically, if you want to remove the first timestamp in the fig, you would need the data to look like:
```
{ "data": [
{
"x": ["2021-12-21T19:58:01.542000", "2021-12-21T19:58:02.542000" ],
"y": [13500.0, 13503.33591, 13506.67183 ],
"z": [[581.1404, 570.4771 ],
[644.2858, 610.9979 ],
[568.9164, 565.6251 ],
}]}
```
I don't want to load in the whole fig as state because it is quite large (and defeats the purpose of using a Patch()).
I haven't found a work around for this yet.
| open | 2024-03-18T19:34:51Z | 2024-08-13T19:47:46Z | https://github.com/plotly/dash/issues/2802 | [
"feature",
"P3"
] | cleaaum | 4 |
django-import-export/django-import-export | django | 1,100 | please update your documentation about IMPORT_EXPORT_USE_TRANSACTIONS | IMPORT_EXPORT_USE_TRANSACTIONS default is True and not False as documented in latest
https://django-import-export.readthedocs.io/en/stable/installation.html
IMPORT_EXPORT_USE_TRANSACTIONS = False in settings is **mandatory** if one wants to actually import something into a MySQL db (at least).
without it :
result = somemodel_resource.import_data(datanew, dry_run=True)
will fail with "ImproperlyConfigured" and no explanation.
spend 3 hours to tweak my csv and test just about this before to look at import_export\resources.py, and its a pity because the project is really cool. | closed | 2020-03-18T13:10:26Z | 2023-04-12T13:52:12Z | https://github.com/django-import-export/django-import-export/issues/1100 | [
"docs",
"good first issue"
] | spamandeggs | 1 |
yt-dlp/yt-dlp | python | 11,835 | [SoundCloud] Some cover arts are downloaded in 100x100 resolution instead of original size | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Poland
### Provide a description that is worded well enough to be understood
Hi,
I am not entirely sure if it is a bug, but I’ve noticed that some SoundCloud cover arts ([sample song](https://soundcloud.com/shinpuru/miracle)) are downloaded in 100x100 dimensions, even though the original artwork is available in higher resolution.
To download the file, I use this command and obtain the following log:
```bash
❯ yt-dlp --add-metadata --parse-metadata "%(artists)l:%(meta_artist)s" --embed-thumbnail -o "%(artists)l - %(title)s.%(ext)s" https://soundcloud.com/shinpuru/miracle
[soundcloud] Extracting URL: https://soundcloud.com/shinpuru/miracle
[soundcloud] shinpuru/miracle: Downloading info JSON
[soundcloud] 1987169691: Downloading hls_aac format info JSON
[soundcloud] 1987169691: Downloading hls_mp3 format info JSON
[soundcloud] 1987169691: Downloading http_mp3 format info JSON
[soundcloud] 1987169691: Downloading hls_opus format info JSON
[MetadataParser] Parsed meta_artist from '%(artists)l': 'shinpuru'
[info] 1987169691: Downloading 1 format(s): hls_aac_160k
[info] Downloading video thumbnail 0 ...
[info] Writing video thumbnail 0 to: shinpuru - Miracle.png
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 11
[download] Destination: shinpuru - Miracle.m4a
[download] 100% of 2.16MiB in 00:00:00 at 5.86MiB/s
[FixupM4a] Correcting container of "shinpuru - Miracle.m4a"
[Metadata] Adding metadata to "shinpuru - Miracle.m4a"
[EmbedThumbnail] mutagen: Adding thumbnail to "shinpuru - Miracle.m4a"
```
The downloaded `shinpuru - Miracle.m4a` file has 100x100 cover art dimensions, while the [original art](https://i1.sndcdn.com/artworks-hh0yahMrXxlmwJKO-72s1hA-original.png) seems to be 500x500.
This is not an issue for some other songs, such as [this one](https://soundcloud.com/capturelight/one-second-per-second) being downloaded in the `.opus` format with 1999x1999 cover art.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.13 from yt-dlp/yt-dlp [542166962] (pip)
[debug] Python 3.12.7 (CPython AMD64 64bit) - Windows-11-10.0.26100-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.1-essentials_build-www.gyan.dev (setts), ffprobe 7.1-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.17, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.47.0, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.13 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.13 from yt-dlp/yt-dlp)
```
| closed | 2024-12-17T00:21:57Z | 2025-02-23T06:20:55Z | https://github.com/yt-dlp/yt-dlp/issues/11835 | [
"site-bug",
"patch-available"
] | pyxelr | 5 |
vitalik/django-ninja | rest-api | 355 | [BUG] Reverse url names are not auto generated | **Describe the bug**
Reverse resolution of urls described on https://django-ninja.rest-framework.com/tutorial/urls/ does not work for me. By inspecting the generated resolver, I discovered that views that do not explicitly specify `url_name` do not get a name generated at all. The view function name is not used.
**Versions (please complete the following information):**
- Python version: 3.8
- Django version: 3.2
- Django-Ninja version: 0.17.0
| closed | 2022-02-09T14:01:11Z | 2022-06-26T16:29:26Z | https://github.com/vitalik/django-ninja/issues/355 | [] | stinovlas | 2 |
Tanuki/tanuki.py | pydantic | 19 | Resolve Jupyter notebook system error with typed returns | ---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[67], line 1
----> 1 x = create_todolist_item("I need to go and visit Jeff at 3pm tomorrow")
2 print(x)
File ~/Paperplane/repos/monkey-patch.py/examples/wikipedia/../../src/monkey.py:202, in Monkey.patch.<locals>.wrapper(*args, **kwargs)
200 @wraps(test_func)
201 def wrapper(*args, **kwargs):
--> 202 function_description = Register.load_function_description(test_func)
203 f = str(function_description.__dict__.__repr__() + "\n")
204 output = Monkey.language_modeler.generate(args, kwargs, Monkey.function_modeler, function_description)
File ~/Paperplane/repos/monkey-patch.py/examples/wikipedia/../../src/register.py:86, in Register.load_function_description(func_object)
80 # Extract class definitions for input and output types
81 input_class_definitions = {
82 param_name: get_class_definition(param_type)
83 for param_name, param_type in input_type_hints.items()
84 }
---> 86 output_class_definition = get_class_definition(output_type_hint)
88 return FunctionDescription(
89 name=func_object.__name__,
90 docstring=docstring,
(...)
94 output_class_definition=output_class_definition
95 )
File ~/Paperplane/repos/monkey-patch.py/examples/wikipedia/../../src/register.py:77, in Register.load_function_description.<locals>.get_class_definition(class_type)
75 return [get_class_definition(arg) for arg in class_type.__args__ if arg is not None]
76 elif inspect.isclass(class_type) and class_type.__module__ != "builtins":
---> 77 return inspect.getsource(class_type)
78 return class_type.__name__
File /usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py:1262, in getsource(object)
1256 def getsource(object):
1257 """Return the text of the source code for an object.
1258
1259 The argument may be a module, class, method, function, traceback, frame,
1260 or code object. The source code is returned as a single string. An
1261 OSError is raised if the source code cannot be retrieved."""
-> 1262 lines, lnum = getsourcelines(object)
1263 return ''.join(lines)
File /usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py:1244, in getsourcelines(object)
1236 """Return a list of source lines and starting line number for an object.
1237
1238 The argument may be a module, class, method, function, traceback, frame,
(...)
1241 original source file the first line of code was found. An OSError is
1242 raised if the source code cannot be retrieved."""
1243 object = unwrap(object)
-> 1244 lines, lnum = findsource(object)
1246 if istraceback(object):
1247 object = object.tb_frame
File /usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py:1063, in findsource(object)
1055 def findsource(object):
1056 """Return the entire source file and starting line number for an object.
1057
1058 The argument may be a module, class, method, function, traceback, frame,
1059 or code object. The source code is returned as a list of all the lines
1060 in the file and the line number indexes a line in that list. An OSError
1061 is raised if the source code cannot be retrieved."""
-> 1063 file = getsourcefile(object)
1064 if file:
1065 # Invalidate cache if needed.
1066 linecache.checkcache(file)
File /usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py:940, in getsourcefile(object)
936 def getsourcefile(object):
937 """Return the filename that can be used to locate an object's source.
938 Return None if no way can be identified to get the source.
939 """
--> 940 filename = getfile(object)
941 all_bytecode_suffixes = importlib.machinery.DEBUG_BYTECODE_SUFFIXES[:]
942 all_bytecode_suffixes += importlib.machinery.OPTIMIZED_BYTECODE_SUFFIXES[:]
File /usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py:908, in getfile(object)
906 return module.__file__
907 if object.__module__ == '__main__':
--> 908 raise OSError('source code not available')
909 raise TypeError('{!r} is a built-in class'.format(object))
910 if ismethod(object):
OSError: source code not available | open | 2023-10-31T20:59:28Z | 2023-10-31T20:59:28Z | https://github.com/Tanuki/tanuki.py/issues/19 | [] | bmagz | 0 |
dropbox/sqlalchemy-stubs | sqlalchemy | 139 | Execution contexts missing some functions | SQLAlchemy allows defining column defaults with a callable getting the insert/update context as an argument ([doc](https://docs.sqlalchemy.org/en/13/core/defaults.html?highlight=column%20default%20callable#context-sensitive-default-functions)), e.g:
```python
def get_default(context):
return context.get_current_parameters()['name'] + 'whatever'
class MyModel(Base):
__tablename__ = 'my_model'
id = Column(Integer, primary_key=True)
name = Column(Unicode, nullable=False)
something = Column(Unicode, default=get_default, nullable=False)
```
I'm trying to add type annotations to the `get_default` function in my code. For the example above, the return type would be a `str` (the column is defined as `Unicode`).
The `context` argument is (in my case, using PostgreSQL with the pyscopg2 backend) an instance of the `sqlalchemy.dialects.postgresql.psycopg2.PGExecutionContext_psycopg2` class. Its MRO is:
```
>>> sqlalchemy.dialects.postgresql.psycopg2.PGExecutionContext_psycopg2.__mro__
(<class 'sqlalchemy.dialects.postgresql.psycopg2.PGExecutionContext_psycopg2'>,
<class 'sqlalchemy.dialects.postgresql.base.PGExecutionContext'>,
<class 'sqlalchemy.engine.default.DefaultExecutionContext'>,
<class 'sqlalchemy.engine.interfaces.ExecutionContext'>,
<class 'object'>)
```
The problem is none of these classes have the `get_current_parameters()` method defined in sqlalchemy-stubs.
In SQLAlchemy, it's defined in [`sqlalchemy.engine.default.DefaultExecutionContext`](https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/engine/default.py#L1324), all the child classes just inherit from it.
I'd be happy to send a pull request adding the stubs for this function, but I'm unsure about the return value, since it returns a dictionary:
> which includes entries for each column/value pair that is part
> of the INSERT or UPDATE statement. The keys of the dictionary will be
> the key value of each :class:`.Column`, which is usually synonymous
> with the name.
So it seems to me the return value would be a `Dict[str, Any]`, since the key (name) of the `Column` will be a string, and its type can be anything, depending on how the column is defined.
```
def get_current_parameters(self, isolate_multiinsert_groups: bool = True) -> Dict[str, Any]: ...
```
Is that correct? | open | 2020-01-22T11:23:49Z | 2020-01-22T16:48:05Z | https://github.com/dropbox/sqlalchemy-stubs/issues/139 | [
"priority-normal",
"topic-stubs"
] | bochecha | 2 |
mwaskom/seaborn | data-visualization | 3,591 | Heatmap does not display all entries | Hi all,
I have an issue with my heatmap.
I generated a dataframe with 30k columns and set some of the values (2k of them) to a non-NaN value (some might be double hits, but that is beside the point). The values I fill the dataframe with are between 0 and 1, to tell the function how to color each sample.
When I plot this, only a small number of the hits are displayed, and I wondered why this is.
In my real use case, what is shown is even less than in this example (more rows, fewer non-NaNs).
Am I doing something wrong here?
python=3.12; seaborn=0.12; matplotlib=3.8.2; pandas=2.1.4 (Ubuntu=22.04)
```
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import os
import pandas as pd
new_dict = {}
for k in ["a", "b", "c", "d"]:
v = {str(idx): None for idx in range(30000)}
rand_ints = [np.random.randint(low=0, high=30000) for i in range(2000)]
for v_hits in rand_ints:
v[str(v_hits)] = v_hits/30000
new_dict[k] = v
df_heatmap_hits = pd.DataFrame(new_dict).transpose()
sns.heatmap(df_heatmap_hits)
plt.show()
``` | closed | 2023-12-11T11:45:03Z | 2023-12-11T16:34:24Z | https://github.com/mwaskom/seaborn/issues/3591 | [] | dansteiert | 5 |
vi3k6i5/flashtext | nlp | 96 | how to find a keyword in a string like regex does? | For example, I have a string: "todayIgotEmailreport"
how do i get email keyword from this string ??
if I use `str.contains('report', False, regex=True)`
this will return this string.
how can we do it with flashtext?
| closed | 2019-11-08T03:25:36Z | 2020-02-13T18:32:46Z | https://github.com/vi3k6i5/flashtext/issues/96 | [] | ilovefood2 | 3 |
dynaconf/dynaconf | django | 734 | [RFC] Add option to validate only current env in validators.validate | **Problem**
I currently have a problem in a project when trying to validate a configuration. I have some variables in production that I do not have in development environment, and that are contained in a .secrets file not present in my project. I would like to validate the presence of the production variables when I run the project in production environment. However when running the validators, they go through all environments, and thus fail when trying to validate my production environment in development because some variables are missing as the .secrets.yml file is not present.
**Proposed Solution**
I just wanted to know if you had considered this issue. I did a quick fix on my side by adding a `check_only_current_env: bool = False` var to all `validate` functions in [validators.py](https://github.com/rochacbruno/dynaconf/blob/master/dynaconf/validator.py) and simply add a check in `Validator.validate` before the [validate_items lines](https://github.com/rochacbruno/dynaconf/blob/master/dynaconf/validator.py#L193-L205):
```
if check_only_current_env:
    if settings.current_env in self.envs:
        self._validate_items(...)
    return
```
I am open to any other solutions and can also make a pull request with this small change.
Thank you very much for the project it's really nice !
| closed | 2022-04-08T09:08:18Z | 2022-04-11T18:45:38Z | https://github.com/dynaconf/dynaconf/issues/734 | [
"Not a Bug",
"RFC"
] | UgoBena | 2 |
pandas-dev/pandas | data-science | 60,747 | DOC: Reinstate the JupyterLite-based live shell for the pandas website | ## Description
Hello, I've recently been looking into the JupyterLite shell for [the `pandas` website's Getting Started page](https://pandas.org/getting_started.html) that briefly used to serve as an interactive endpoint for users browsing the website. It was discussed in https://github.com/pandas-dev/pandas/issues/46682 and added in #47428, subsequently [reported to be a bit slow back then](https://github.com/pandas-dev/pandas/issues/47530), and was removed as a result in https://github.com/pandas-dev/pandas/pull/49807.
I'd like to propose reinstating this shell for the website (either on the same page, or elsewhere on the [docs website's landing page](https://pandas.pydata.org/docs/index.html) via [the `jupyterlite-sphinx` project](https://github.com/jupyterlite/jupyterlite-sphinx), similar to https://github.com/matplotlib/matplotlib/pull/22634), and wish to seek thoughts from the `pandas` maintainers via this issue on whether it would be a good idea to do so for usage by newcomers.
## Rationale and additional context
- In early 2025, a lot of time has passed since then, and while the world of Python running in WebAssembly is still experimental, we have since made a bunch of improvements across the Pyodide and JupyterLite ecosystems over many past releases – both for improving the stability of the shell, if not its speed, and for being able to run `pandas` code within it.
- As the one who helped add the WASM CI job for Pandas last year via #57896, this is a related area in terms of `pandas`'s usage within Pyodide, and I would be happy to maintain the shell if it's added and to establish some relevant automations towards its upkeep.
- We have recently been working on similar improvements to contemporary shells, such as those that exist and have been retained on the websites for [NumPy](https://numpy.org/) and [SymPy](https://live.sympy.org/).
xref: https://github.com/Quansight-Labs/czi-scientific-python-mgmt/issues/134
Thank you for your time! :)
<hr>
P.S. Here's a short [example](https://jupyterlite.github.io/demo/repl/index.html?&code=import%20pandas%20as%20pd%0Adf%20%3D%20pd.DataFrame(%0A%20%20%20%20%5B%5B1,%202%5D,%20%5B4,%205%5D,%20%5B7,%208%5D%5D,%0A%20%20%20%20index%3D%5B%27cobra%27,%20%27viper%27,%20%27sidewinder%27%5D,%0A%20%20%20%20columns%3D%5B%27max_speed%27,%20%27shield%27%5D%0A)%0Adf.loc%5B%5B%27viper%27,%20%27sidewinder%27%5D%5D%0A&kernel=python&execute=1), which takes ~7.5 seconds for me to load on a decently stable connection – but even for those with throttled connections, it should be easy to add a small admonition before it that just says "This is an experimental playground", or just prefix the word "Experimental" before the heading.
P.P.S. I noticed that a similar approach has been taken by the Ibis project; they have an admonition on this page: https://ibis-project.org/tutorials/browser/repl that states that it is experimental at the moment.
cc: @jtpio for visibility, as he was among those who collaborated on (and led) this effort previously through the issues and PRs linked.
<hr>
The description and rationale have been copied over with minor changes from my recent message on 18/01/2025 in the `pandas` Slack workspace: https://pandas-dev-community.slack.com/archives/C03PH1SU1M1/p1737168137448029 as suggested by @rhshadrach, which should help this proposal receive greater visibility. | closed | 2025-01-21T13:29:59Z | 2025-03-04T01:25:36Z | https://github.com/pandas-dev/pandas/issues/60747 | [
"Enhancement",
"Web"
] | agriyakhetarpal | 2 |
dgtlmoon/changedetection.io | web-scraping | 2,217 | [feature] Use a chrome plugin that can both add the site and the cookies that were used at the time (keep the login etc) | **_TLDR - A chrome extension that can add a URL to your cdio install as well as store the cookies used._**
So from working on https://github.com/dgtlmoon/changedetection.io/issues/2197, it turns out that setting cookies is more complicated than just setting the `cookie: xyz=1` custom header field: it is also 100% necessary to enter data like "expires, httpOnly, domain" etc., which is kind of 'meta-data' for the cookie that is not actually set in the "cookie" header field.
So I found this https://github.com/kairi003/Get-cookies.txt-LOCALLY
I was thinking that this could be forked to push the current URL and cookies to your changedetection installation at the click of a button using the existing API.
With some little hack it could be made to recognise the "api settings" page and automatically set up the API token; you just have to navigate to the config page.
This would also solve one piece that is missing, which is some chrome button to add a site to your watch - however setting the actual "Visual selector" should still be done (for now) in changedetection, but that could be an easy addition in the future to the chrome extension
Another idea is that it could also be told to set the mode to simple 'change detecting' or 'restock detection' etc
| closed | 2024-02-26T17:19:49Z | 2024-03-17T17:28:18Z | https://github.com/dgtlmoon/changedetection.io/issues/2217 | [
"enhancement"
] | dgtlmoon | 8 |
aio-libs/aiopg | sqlalchemy | 546 | Bulk update values with SQLAlchemy | Since aiopg does not support bulk insert (https://github.com/aio-libs/aiopg/issues/112), so I use this to insert everything in a single query:
```
await conn.execute(
sa_foo_table
.insert()
.values([
dict(name='name1', x=1),
dict(name='name2', x=2),
dict(name='name3', x=3),
])
)
```
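For reference, one workaround is to fall back to a single raw-SQL `UPDATE ... FROM (VALUES ...)` statement. A hedged sketch, assuming PostgreSQL, a plain aiopg connection/cursor (not the SQLAlchemy wrapper), and the same hypothetical `foo(name, x)` table as above:
```python
# Sketch: one UPDATE driven by a VALUES list, instead of N separate UPDATEs.
rows = [("name1", 10), ("name2", 20), ("name3", 30)]
placeholders = ", ".join(["(%s, %s)"] * len(rows))
params = [value for row in rows for value in row]
sql = (
    "UPDATE foo AS f SET x = v.x "
    f"FROM (VALUES {placeholders}) AS v(name, x) "
    "WHERE f.name = v.name"
)
async with conn.cursor() as cur:  # assumes a plain aiopg connection here
    await cur.execute(sql, params)
```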
Is there any such thing for bulk updating? Because if I update one by one, it might take quite some time (there are thousands of rows). | open | 2019-03-14T08:45:32Z | 2020-01-15T18:07:27Z | https://github.com/aio-libs/aiopg/issues/546 | [] | Yureien | 4 |
open-mmlab/mmdetection | pytorch | 11,816 | How to ... on Windows | closed | 2024-06-28T02:40:02Z | 2024-06-28T02:40:16Z | https://github.com/open-mmlab/mmdetection/issues/11816 | [] | wdzwdxy | 0
|
PokemonGoF/PokemonGo-Bot | automation | 6,321 | bot does not run File "pokecli.py", line 35, in <module> |
File "pokecli.py", line 35, in <module>
import six
ImportError: No module named six
Something went wrong and the bot needed to be restarted. Please investigate the cause.
Waiting for 46 seconds, press a key to continue ... | open | 2023-12-20T02:56:32Z | 2023-12-21T12:35:32Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/6321 | [] | omarmuhammad552 | 1 |
hankcs/HanLP | nlp | 1,289 | Can I train a dependency parsing model with my own training set? | closed | 2019-10-03T07:44:39Z | 2020-05-13T10:52:38Z | https://github.com/hankcs/HanLP/issues/1289 | [
"ignored"
] | parkourcx | 2 |
|
hyperspy/hyperspy | data-visualization | 3,129 | Complex signal type warning | Hi everyone, I get the following warning while loading a hologram:
`sig1 = hs.load(s1, signal_type='hologram')`
`WARNING:hyperspy.io: signal_type='complex_signal2d' not understood. See hs.print_known_signal_types() for a list of installed signal types or https://github.com/hyperspy/hyperspy-extensions-list for the list of all hyperspy extensions providing signals.`
It is pretty annoying, since I load multiple files and the warning extends over multiple pages. In this list (hs.print_known_signal_types()), the signal_type appears.
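(A stopgap that should silence just these messages, based on the logger name shown in the warning above, is to raise that logger's level before loading; a sketch:)
```python
import logging

# "hyperspy.io" is the logger name visible in the warning text above;
# bumping it to ERROR hides the repeated signal_type warnings only.
logging.getLogger("hyperspy.io").setLevel(logging.ERROR)
```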
I need to load the signal as indicated above, otherwise I cannot call a few functions like:
```python
sb_position = sig1.estimate_sideband_position(ap_cb_radius=None, sb='lower')
sb_size = sig1.estimate_sideband_size(sb_position) * 2/3
statistics = sig1.statistics(sb_position=sb_position)
out_size = int(2*sb_size.data)
wave_image = sig1.reconstruct_phase(ref1, sb_position=sb_position, sb_size=sb_size, output_shape=(out_size, out_size))
wave = wave_image.unwrapped_phase()
```
If I don't call them with the signal_type option, these functions are not available.
What am I doing wrong? | open | 2023-04-15T08:44:02Z | 2023-04-15T14:10:57Z | https://github.com/hyperspy/hyperspy/issues/3129 | [] | opens21 | 5 |
dpgaspar/Flask-AppBuilder | flask | 2,022 | Nested Group LDAP Auth not working in Airflow FAB | Hi All,
I am currently using flask_appbuilder.security.manager in order to provide LDAP authentication for my Airflow Users.
While doing the AUTH_ROLES_MAPPING I have noticed that it only works for direct members of the AD groups, not for nested groups.
Has anyone been able to get this to work for nested groups?
Example:
in my current set up
> AUTH_ROLES_MAPPING = {"CN=DIRECTGROUP,OU=!!sample,OU=!!sample,OU=!!sample,DC=sample,DC=sample" : ["Viewer", "executer"],
> User A: member of DIRECTGROUP
> User B: member of NESTEDGROUP, which is a member of DIRECTGROUP
Only User A would be able to log in to my airflow instance as Viewer/Executer.
User B gets assigned to the default "Public" role since Flask is not able to drill down into the subtree.
I have tried using the Microsoft special string via AUTH_LDAP_SEARCH_FILTER but had no luck. I browsed through the different options in FAB but couldn't find one which supports this.
Current config options used (webserver_config.py) in Airflow 2.3.2:
import os
from airflow.www.fab_security.manager import AUTH_LDAP
basedir = os.path.abspath(os.path.dirname(__file__))
Flask-WTF flag for CSRF
WTF_CSRF_ENABLED = True
AUTH_LDAP_SEARCH = "dc=sample,dc=samp,dc=com" # the LDAP search base
AUTH_LDAP_UID_FIELD = "sAMAccountName"
AUTH_LDAP_BIND_USER & AUTH_LDAP_BIND_PASSWORD
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Public"
AUTH_LDAP_GROUP_FIELD = "memberOf"
AUTH_LDAP_SEARCH_FILTER='(&(objectClass=user)(memberOf:1.2.840.113556.1.4.1941:=CN=DIRECTGROUP,OU=!!sample,OU=!!sample,OU=!!sample,DC=sample,DC=sample))'
AUTH_ROLES_SYNC_AT_LOGIN=True
PERMANENT_SESSION_LIFETIME=1800
### Environment
Flask-Appbuilder version: 3.4.5
pip freeze output:
aenum==3.1.11
aiohttp==3.8.4
aioretry==5.0.2
aiosignal==1.3.1
alembic==1.8.0
ansible==6.3.0
ansible-core==2.13.3
anyio==3.6.1
apache-airflow==2.3.2
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-mssql==2.1.1
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-snowflake==2.5.1
apache-airflow-providers-sqlite==2.1.3
apispec==3.3.2
appdirs==1.4.4
APScheduler==3.10.1
argcomplete==2.0.0
asn1crypto==1.5.1
async-timeout==4.0.2
attrs==20.3.0
auth-lib==1.2.0
azure-core==1.26.3
azure-storage-blob==12.15.0
Babel==2.10.1
backoff==1.11.1
bcpy==0.1.8
bcrypt==4.0.1
beautifulsoup4==4.11.1
blinker==1.4
bs4==0.0.1
cachelib==0.7.0
case-conversion==2.1.0
cattrs==1.10.0
cdsapi==0.5.1
certifi==2022.12.7
cetools==1.10.2
cffi==1.15.0
cftime==1.6.0
chardet==5.1.0
charset-normalizer==2.0.12
click==8.1.3
clickclick==20.10.2
clipboard==0.0.4
colorama==0.4.4
colorlog==4.8.0
colour==0.1.5
commonmark==0.9.1
configparser==5.3.0
connexion==2.13.1
coverage==7.2.2
crayons==0.4.0
cron-descriptor==1.2.31
croniter==1.3.5
cronitor==4.6.0
cryptography==3.4.8
cssselect==1.2.0
cursor==1.3.5
decorator==4.4.2
defusedxml==0.7.1
Deprecated==1.2.13
dill==0.3.1.1
distro==1.8.0
dnspython==2.2.1
docutils==0.18.1
email-validator==1.2.1
et-xmlfile==1.1.0
exceptiongroup==1.1.1
fake-useragent==1.1.3
Flask==1.1.2
Flask-AppBuilder==3.4.5
Flask-Babel==2.0.0
Flask-Caching==1.11.1
Flask-JWT-Extended==3.25.1
Flask-Login==0.4.1
Flask-OpenID==1.3.0
Flask-Session==0.4.0
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.14.3
freezegun==1.2.2
frozenlist==1.3.3
ftfy==6.1.1
fuzzywuzzy==0.18.0
gender-guesser==0.4.0
graphviz==0.20
greenlet==1.1.2
gunicorn==20.1.0
h11==0.12.0
html-text==0.5.2
html5lib==1.1
http-constants==0.5.0
httpcore==0.15.0
httpx==0.23.0
humanize==4.6.0
idna==3.3
imageio==2.26.1
imageio-ffmpeg==0.4.8
importlib-metadata==4.11.4
importlib-resources==5.12.0
infi.systray==0.1.12
inflect==6.0.2
inflection==0.5.1
iniconfig==2.0.0
isodate==0.6.1
itsdangerous==1.1.0
Jinja2==3.0.3
jsonschema==4.5.1
kayrros-client==1.2.17
lazy-object-proxy==1.7.1
lockfile==0.12.2
lxml==4.8.0
Mako==1.2.0
Markdown==3.3.7
MarkupSafe==2.0.1
marshmallow==3.16.0
marshmallow-enum==1.5.1
marshmallow-oneofschema==3.0.1
marshmallow-sqlalchemy==0.26.1
maybe-else==0.2.1
mbstrdecoder==1.1.2
mock==5.0.1
moviepy==1.0.3
msal==1.21.0
msoffcrypto-tool==5.0.0
multidict==6.0.4
mypy-extensions==1.0.0
nest-asyncio==1.5.6
ntlm-auth==1.5.0
numpy==1.23.5
O365==2.0.21
oauthlib==3.2.2
office365==0.3.15
Office365-REST-Python-Client==2.3.16
olefile==0.46
openpyxl==3.0.7
oscrypto==1.3.0
packaging==21.3
pandas==1.1.5
pandera==0.6.5
paramiko==2.12.0
parse==1.19.0
parsedatetime==2.6
pathmagic==0.3.14
pathspec==0.9.0
pendulum==2.1.2
Pillow==9.4.0
pluggy==1.0.0
ply==3.11
polars==0.15.16
prettierfier==1.0.3
prison==0.2.1
proglog==0.1.10
psutil==5.8.0
psycopg2-binary==2.9.5
py==1.11.0
pyADW==2.9.997
pyarrow==11.0.0
pyasn1==0.4.8
pycparser==2.21
pycryptodomex==3.17
pycurl==7.45.2
pydantic==1.9.2
pydub==0.25.1
pyee==8.2.2
Pygments==2.12.0
pyinstrument==4.4.0
pyiotools==0.3.18
PyJWT==1.7.1
pykerberos==1.2.4
pymiscutils==0.3.14
pymssql==2.2.7
PyNaCl==1.5.0
pyodbc==4.0.32
pyOpenSSL==20.0.1
pyparsing==2.4.7
PyPDF2==3.0.1
pyperclip==1.8.2
pyppeteer==1.0.2
PyQt5==5.15.9
PyQt5-Qt5==5.15.2
PyQt5-sip==12.11.1
pyquery==2.0.0
pyrsistent==0.18.1
pysmb==1.2.7
PySocks==1.7.1
pysubtypes==0.3.18
pytest==7.2.2
python-daemon==2.3.0
python-dateutil==2.8.2
python-docx==0.8.11
python-nvd3==0.15.0
python-slugify==6.1.2
python3-openid==3.2.0
pytz==2022.1
pytz-deprecation-shim==0.1.0.post0
pytzdata==2020.1
pyxlsb==1.0.9
PyYAML==5.4.1
readchar==4.0.5
regex==2023.3.23
requests==2.26.0
requests-html==0.10.0
requests-kerberos==0.11.0
requests-ntlm==1.1.0
requests-oauthlib==1.3.1
responses==0.23.1
retry==0.9.2
rfc3986==1.5.0
rich==12.4.4
scipy==1.5.4
scraping-framework==2.23.506
selenium==3.141.0
selenium-stealth==1.0.6
Send2Trash==1.8.0
setproctitle==1.2.3
simplejson==3.18.4
six==1.16.0
sniffio==1.2.0
snowflake-connector-python==2.7.1
snowflake-sqlalchemy==1.4.7
soupsieve==2.4
SQLAlchemy==1.4.9
SQLAlchemy-JSONField==1.0.0
SQLAlchemy-Utils==0.38.2
statsd==4.0.1
stopwatch.py==2.0.1
stringcase==1.2.0
swagger-ui-bundle==0.0.9
sxl==0.0.1a10
synmax-api-python-client==0.0.29
tabula-py==2.3.0
tabulate==0.8.9
tenacity==8.0.1
termcolor==1.1.0
text-unidecode==1.3
tomli==2.0.1
tqdm==4.65.0
typepy==1.3.0
types-PyYAML==6.0.12.8
typing-inspect==0.8.0
typing_extensions==4.2.0
tzdata==2022.7
tzlocal==4.3
unicodecsv==0.14.1
urllib3==1.26.9
w3lib==2.1.1
wcwidth==0.2.6
webdriver-manager==3.5.3
webencodings==0.5.1
websockets==10.4
Werkzeug==1.0.1
wrapt==1.14.1
WTForms==2.3.3
xarray==0.16.2
xlrd==1.2.0
XlsxWriter==3.0.9
yarl==1.8.2
zipp==3.8.0
### Describe the expected results
Users from nested groups should be assigned the roles as per the primary group role mapping configured in "AUTH_ROLES_MAPPING"
### Describe the actual results
User gets assigned to the default airflow role
```pytb
Paste the full traceback if there was an exception.
```
### Steps to reproduce
| open | 2023-04-18T15:33:35Z | 2023-05-03T09:22:42Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2022 | [
"enhancement"
] | ranga2crazyy | 1 |
Yorko/mlcourse.ai | matplotlib | 13 | Link to the 3rd article | Please add a link to the third article on Habr to the readme. | closed | 2017-03-17T07:43:27Z | 2017-03-17T08:05:13Z | https://github.com/Yorko/mlcourse.ai/issues/13 | [
"minor_fix"
] | loopdigga96 | 2 |
seleniumbase/SeleniumBase | pytest | 2,957 | Upgrade `seleniumbase` to use the newer `selenium` (`4.23.1`) | ## Upgrade `seleniumbase` to use the newer `selenium` (`4.23.1`)
`selenium` `4.23.1` has been released.
`selenium` `4.23.0` had a bug, which is why `seleniumbase` `4.28.7` is still using `selenium` `4.22.0` (the previous version).
Now it's safe to upgrade that dependency if all tests pass. | closed | 2024-07-24T18:03:27Z | 2024-07-25T15:35:11Z | https://github.com/seleniumbase/SeleniumBase/issues/2957 | [
"dependencies"
] | mdmintz | 1 |
seleniumbase/SeleniumBase | pytest | 2,821 | https://proxy-tools.com/proxy (reCAPTCHA) | On the site https://proxy-tools.com/proxy/, when you click on "Show port" in the browser, the captcha passes without additional verification. However, if you use the script, an additional check appears.
My script:
```python
from seleniumbase import SB

with SB(uc=True, test=True, locale_code="en") as sb:
    url = "https://proxy-tools.com/proxy/https"
    sb.driver.uc_open_with_reconnect(url, 1)
    sb.click('button:contains("Got")')
    sb.driver.uc_click('a[data-target="#showPorts"]')
    sb.switch_to_frame("iframe")
    sb.driver.uc_click('span[id="recaptcha-anchor"]', reconnect_time=10)
```
| closed | 2024-06-03T05:40:43Z | 2024-06-03T12:57:16Z | https://github.com/seleniumbase/SeleniumBase/issues/2821 | [
"external",
"UC Mode / CDP Mode"
] | MaxKarpyza | 1 |
explosion/spaCy | machine-learning | 13,238 | TypeError: can not serialize 'DocTransformerOutput' object | This seems to be exactly #6672, but since that's locked, I cannot comment on it.
## How to reproduce the behaviour
The example from #6672:
```python
import spacy
sentence = "I love you."
nlp = spacy.load('en_core_web_trf')
doc = nlp(sentence)
doc.to_bytes()
```
raises `TypeError: can not serialize 'DocTransformerOutput' object`.
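A possible workaround (a sketch, not an official fix): serialize through `DocBin`, which by default does not store user data, and the non-serializable transformer output lives in the doc's user data, so it is simply left out:
```python
from spacy.tokens import DocBin

doc_bin = DocBin(store_user_data=False)  # False is already the default
doc_bin.add(doc)
data = doc_bin.to_bytes()

# Restoring later requires the pipeline's vocab:
docs = list(DocBin().from_bytes(data).get_docs(nlp.vocab))
```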
## Info about spaCy
- **spaCy version:** 3.7.2
- **Platform:** Linux-6.2.0-1018-lowlatency-x86_64-with-glibc2.37
- **Python version:** 3.11.4
- **Pipelines:** en_core_web_trf (3.7.3), de_core_news_lg (3.7.0), de_core_news_md (3.7.0), de_core_news_sm (3.7.0), de_dep_news_trf (3.7.2) | closed | 2024-01-16T12:27:25Z | 2024-02-26T00:02:33Z | https://github.com/explosion/spaCy/issues/13238 | [
"bug",
"feat / serialize",
"feat / transformer"
] | sliedes | 4 |
pytest-dev/pytest-xdist | pytest | 203 | testsuite completely falls apart locally | ```
Replacing crashed slave gw494
[gw495] node down: Traceback (most recent call last):
File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/execnet/gateway_base.py", line 1072, in executetask
do_exec(co, loc) # noqa
File "<string>", line 1, in do_exec
File "<remote exec>", line 174, in <module>
OSError: [Errno 2] No such file or directory
Replacing crashed slave gw495
[gw496] node down: Traceback (most recent call last):
File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/execnet/gateway_base.py", line 1072, in executetask
do_exec(co, loc) # noqa
File "<string>", line 1, in do_exec
File "<remote exec>", line 174, in <module>
OSError: [Errno 2] No such file or directory
Replacing crashed slave gw496
[gw497] node down: Traceback (most recent call last):
File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/execnet/gateway_base.py", line 1072, in executetask
do_exec(co, loc) # noqa
File "<string>", line 1, in do_exec
File "<remote exec>", line 174, in <module>
OSError: [Errno 2] No such file or directory
Replacing crashed slave gw497
[gw498] node down: Traceback (most recent call last):
File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/execnet/gateway_base.py", line 1072, in executetask
do_exec(co, loc) # noqa
File "<string>", line 1, in do_exec
File "<remote exec>", line 174, in <module>
OSError: [Errno 2] No such file or directory
Replacing crashed slave gw498
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/_pytest/main.py", line 110, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/_pytest/main.py", line 146, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 302, in __call__
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 280, in get_result
INTERNALERROR> _reraise(*ex) # noqa
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 265, in __init__
INTERNALERROR> self.result = func()
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 300, in <lambda>
INTERNALERROR> outcome = _CallOutcome(lambda: self.oldcall(hook, hook_impls, kwargs))
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
INTERNALERROR> _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/xdist/dsession.py", line 114, in pytest_runtestloop
INTERNALERROR> self.loop_once()
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/xdist/dsession.py", line 133, in loop_once
INTERNALERROR> call(**kwargs)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/xdist/dsession.py", line 197, in slave_errordown
INTERNALERROR> self._clone_node(node)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/xdist/dsession.py", line 261, in _clone_node
INTERNALERROR> node = self.nodemanager.setup_node(spec, self.queue.put)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/xdist/slavemanage.py", line 68, in setup_node
INTERNALERROR> gw = self.group.makegateway(spec)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/execnet/multi.py", line 127, in makegateway
INTERNALERROR> io = gateway_io.create_io(spec, execmodel=self.execmodel)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/execnet/gateway_io.py", line 113, in create_io
INTERNALERROR> return Popen2IOMaster(args, execmodel)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/execnet/gateway_io.py", line 17, in __init__
INTERNALERROR> self.popen = p = execmodel.PopenPiped(args)
INTERNALERROR> File "/home/rpfannsc/Projects/pytest-dev/pytest-xdist/.tox/py27/lib/python2.7/site-packages/execnet/gateway_base.py", line 178, in PopenPiped
INTERNALERROR> return self.subprocess.Popen(args, stdout=PIPE, stdin=PIPE)
INTERNALERROR> File "/usr/lib64/python2.7/subprocess.py", line 390, in __init__
INTERNALERROR> errread, errwrite)
INTERNALERROR> File "/usr/lib64/python2.7/subprocess.py", line 908, in _execute_child
INTERNALERROR> errpipe_read, errpipe_write = self.pipe_cloexec()
INTERNALERROR> File "/usr/lib64/python2.7/subprocess.py", line 860, in pipe_cloexec
INTERNALERROR> r, w = os.pipe()
INTERNALERROR> OSError: [Errno 24] Too many open files
``` | closed | 2017-08-05T07:45:49Z | 2024-01-08T10:30:02Z | https://github.com/pytest-dev/pytest-xdist/issues/203 | [
"needs information"
] | RonnyPfannschmidt | 3 |
yaroslaff/nudecrawler | web-scraping | 3 | IndexError: tuple index out of range | It works, but after a while it drops out with an error
```
Traceback (most recent call last):
File "/root/nudecrawler/bin/nudecrawler", line 467, in <module>
main()
File "/root/nudecrawler/bin/nudecrawler", line 456, in main
check_word(w, day, args.fails, print_urls = args.urls, resumecount=resumecount)
File "/root/nudecrawler/bin/nudecrawler", line 263, in check_word
p = analyse(url)
File "/root/nudecrawler/bin/nudecrawler", line 165, in analyse
p.check_all()
File "/usr/local/lib/python3.9/dist-packages/nudecrawler/page.py", line 150, in check_all
self.check_images()
File "/usr/local/lib/python3.9/dist-packages/nudecrawler/page.py", line 318, in check_images
self.is_nude(url)
File "/usr/local/lib/python3.9/dist-packages/nudecrawler/page.py", line 290, in is_nude
self.do_detect_image(url)
File "/usr/local/lib/python3.9/dist-packages/nudecrawler/page.py", line 246, in do_detect_image
verdict = ri.detect_image(self.detect_image)
File "/usr/local/lib/python3.9/dist-packages/nudecrawler/remoteimage.py", line 60, in detect_image
return n.parse().result
File "/usr/local/lib/python3.9/dist-packages/nude.py", line 90, in parse
b = pixels[x, y][2] # blue
IndexError: tuple index out of range
```
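The failing line indexes `pixels[x, y][2]` (the blue channel), which suggests the image being analysed has fewer than three channels (e.g. a grayscale or palette image). A sketch of the kind of guard that would avoid this (hypothetical, not current nudecrawler code):
```python
from PIL import Image

def load_rgb(fp):
    # Ensure three channels before nude.py starts indexing pixel[2].
    img = Image.open(fp)
    if img.mode != "RGB":  # e.g. "L" (grayscale), "LA", "P" (palette), "RGBA"
        img = img.convert("RGB")
    return img
```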
Python 3.9.2
Debian 11.6
Use with wordlist.txt and urls.txt from here | closed | 2023-04-07T18:58:54Z | 2023-04-12T18:42:00Z | https://github.com/yaroslaff/nudecrawler/issues/3 | [] | jeffscrum | 9 |
modin-project/modin | data-science | 7,334 | BUG: Series.compare with differently named series raises ValueError, but should not | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-main-branch).)
### Reproducible Example
```python
import modin.pandas as pd
pd.Series(1, name='a').compare(pd.Series(2, name='b'))
```
### Issue Description
`DataFrame.compare` requires the two frames to have the same columns, but `Series.compare` should not.
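In the meantime, giving both series the same name sidesteps the error (a sketch; it simply makes the underlying column labels identical, which pandas effectively ignores anyway):
```python
import modin.pandas as pd

s1 = pd.Series(1, name="a")
s2 = pd.Series(2, name="b")
# Workaround sketch: align the names before comparing.
result = s1.compare(s2.rename(s1.name))
```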
### Expected Behavior
should match pandas and ignore the series `name`.
### Error Logs
<details>
```python-traceback
RayTaskError(ValueError) Traceback (most recent call last)
File ~/miniconda3/envs/modin-latest/lib/python3.9/site-packages/IPython/core/formatters.py:708, in PlainTextFormatter.__call__(self, obj)
701 stream = StringIO()
702 printer = pretty.RepresentationPrinter(stream, self.verbose,
703 self.max_width, self.newline,
704 max_seq_length=self.max_seq_length,
705 singleton_pprinters=self.singleton_printers,
706 type_pprinters=self.type_printers,
707 deferred_pprinters=self.deferred_printers)
--> 708 printer.pretty(obj)
709 printer.flush()
710 return stream.getvalue()
File ~/miniconda3/envs/modin-latest/lib/python3.9/site-packages/IPython/lib/pretty.py:410, in RepresentationPrinter.pretty(self, obj)
407 return meth(obj, self, cycle)
408 if cls is not object \
409 and callable(cls.__dict__.get('__repr__')):
--> 410 return _repr_pprint(obj, self, cycle)
412 return _default_pprint(obj, self, cycle)
413 finally:
File ~/miniconda3/envs/modin-latest/lib/python3.9/site-packages/IPython/lib/pretty.py:778, in _repr_pprint(obj, p, cycle)
776 """A pprint that just redirects to the normal repr function."""
777 # Find newlines and replace them with p.break_()
--> 778 output = repr(obj)
779 lines = output.splitlines()
780 with p.group():
File ~/sources/modin/modin/logging/logger_decorator.py:144, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
129 """
130 Compute function with logging if Modin logging is enabled.
131
(...)
141 Any
142 """
143 if LogMode.get() == "disable":
--> 144 return obj(*args, **kwargs)
146 logger = get_logger()
147 logger.log(log_level, start_line)
File ~/sources/modin/modin/pandas/dataframe.py:273, in DataFrame.__repr__(self)
271 num_rows = pandas.get_option("display.max_rows") or len(self.index)
272 num_cols = pandas.get_option("display.max_columns") or len(self.columns)
--> 273 result = repr(self._build_repr_df(num_rows, num_cols))
274 if len(self.index) > num_rows or len(self.columns) > num_cols:
275 # The split here is so that we don't repr pandas row lengths.
276 return result.rsplit("\n\n", 1)[0] + "\n\n[{0} rows x {1} columns]".format(
277 len(self.index), len(self.columns)
278 )
File ~/sources/modin/modin/logging/logger_decorator.py:144, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
129 """
130 Compute function with logging if Modin logging is enabled.
131
(...)
141 Any
142 """
143 if LogMode.get() == "disable":
--> 144 return obj(*args, **kwargs)
146 logger = get_logger()
147 logger.log(log_level, start_line)
File ~/sources/modin/modin/pandas/base.py:276, in BasePandasDataset._build_repr_df(self, num_rows, num_cols)
254 """
255 Build pandas DataFrame for string representation.
256
(...)
273 A pandas dataset with `num_rows` or fewer rows and `num_cols` or fewer columns.
274 """
275 # Fast track for empty dataframe.
--> 276 if len(self.index) == 0 or (self._is_dataframe and len(self.columns) == 0):
277 return pandas.DataFrame(
278 index=self.index,
279 columns=self.columns if self._is_dataframe else None,
280 )
281 row_indexer = _get_repr_axis_label_indexer(self.index, num_rows)
File ~/sources/modin/modin/pandas/base.py:4294, in BasePandasDataset.__getattribute__(self, item)
4280 @disable_logging
4281 def __getattribute__(self, item) -> Any:
4282 """
4283 Return item from the `BasePandasDataset`.
4284
(...)
4292 Any
4293 """
-> 4294 attr = super().__getattribute__(item)
4295 if item not in _DEFAULT_BEHAVIOUR and not self._query_compiler.lazy_execution:
4296 # We default to pandas on empty DataFrames. This avoids a large amount of
4297 # pain in underlying implementation and returns a result immediately rather
4298 # than dealing with the edge cases that empty DataFrames have.
4299 if callable(attr) and self.empty and hasattr(self._pandas_class, item):
File ~/sources/modin/modin/pandas/base.py:643, in BasePandasDataset._get_index(self)
634 def _get_index(self) -> pandas.Index:
635 """
636 Get the index for this DataFrame.
637
(...)
641 The union of all indexes across the partitions.
642 """
--> 643 return self._query_compiler.index
File ~/sources/modin/modin/core/storage_formats/pandas/query_compiler.py:102, in _get_axis.<locals>.<lambda>(self)
89 """
90 Build index labels getter of the specified axis.
91
(...)
99 callable(PandasQueryCompiler) -> pandas.Index
100 """
101 if axis == 0:
--> 102 return lambda self: self._modin_frame.index
103 else:
104 return lambda self: self._modin_frame.columns
File ~/sources/modin/modin/core/dataframe/pandas/dataframe/dataframe.py:709, in PandasDataframe._get_index(self)
700 """
701 Get the index from the cache object.
702
(...)
706 An index object containing the row labels.
707 """
708 if self.has_index_cache:
--> 709 index, row_lengths = self._index_cache.get(return_lengths=True)
710 else:
711 index, row_lengths = self._compute_axis_labels_and_lengths(0)
File ~/sources/modin/modin/core/dataframe/pandas/metadata/index.py:194, in ModinIndex.get(self, return_lengths)
192 if not self.is_materialized:
193 if callable(self._value):
--> 194 index, self._lengths_cache = self._value()
195 self._value = ensure_index(index)
196 elif self._value is None:
File ~/sources/modin/modin/core/dataframe/pandas/metadata/index.py:106, in ModinIndex._get_default_callable.<locals>.<lambda>()
91 @staticmethod
92 def _get_default_callable(dataframe_obj, axis):
93 """
94 Build a callable extracting index labels and partitions lengths for the specified axis.
95
(...)
104 callable() -> tuple(pandas.Index, list[ints])
105 """
--> 106 return lambda: dataframe_obj._compute_axis_labels_and_lengths(axis)
File ~/sources/modin/modin/logging/logger_decorator.py:144, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
129 """
130 Compute function with logging if Modin logging is enabled.
131
(...)
141 Any
142 """
143 if LogMode.get() == "disable":
--> 144 return obj(*args, **kwargs)
146 logger = get_logger()
147 logger.log(log_level, start_line)
File ~/sources/modin/modin/core/dataframe/pandas/dataframe/dataframe.py:835, in PandasDataframe._compute_axis_labels_and_lengths(self, axis, partitions)
833 if partitions is None:
834 partitions = self._partitions
--> 835 new_index, internal_idx = self._partition_mgr_cls.get_indices(axis, partitions)
836 return new_index, list(map(len, internal_idx))
File ~/sources/modin/modin/logging/logger_decorator.py:144, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
129 """
130 Compute function with logging if Modin logging is enabled.
131
(...)
141 Any
142 """
143 if LogMode.get() == "disable":
--> 144 return obj(*args, **kwargs)
146 logger = get_logger()
147 logger.log(log_level, start_line)
File ~/sources/modin/modin/core/dataframe/pandas/partitioning/partition_manager.py:1193, in PandasDataframePartitionManager.get_indices(cls, axis, partitions, index_func)
1191 if len(target):
1192 new_idx = [idx.apply(func) for idx in target[0]]
-> 1193 new_idx = cls.get_objects_from_partitions(new_idx)
1194 else:
1195 new_idx = [pandas.Index([])]
File ~/sources/modin/modin/logging/logger_decorator.py:144, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
129 """
130 Compute function with logging if Modin logging is enabled.
131
(...)
141 Any
142 """
143 if LogMode.get() == "disable":
--> 144 return obj(*args, **kwargs)
146 logger = get_logger()
147 logger.log(log_level, start_line)
File ~/sources/modin/modin/core/dataframe/pandas/partitioning/partition_manager.py:1134, in PandasDataframePartitionManager.get_objects_from_partitions(cls, partitions)
1130 partitions[idx] = part.force_materialization()
1131 assert all(
1132 [len(partition.list_of_blocks) == 1 for partition in partitions]
1133 ), "Implementation assumes that each partition contains a single block."
-> 1134 return cls._execution_wrapper.materialize(
1135 [partition.list_of_blocks[0] for partition in partitions]
1136 )
1137 return [partition.get() for partition in partitions]
File ~/sources/modin/modin/core/execution/ray/common/engine_wrapper.py:139, in RayWrapper.materialize(cls, obj_id)
136 return ray.get(obj_id) if isinstance(obj_id, ray.ObjectRef) else obj_id
138 if all(isinstance(obj, ray.ObjectRef) for obj in obj_id):
--> 139 return ray.get(obj_id)
141 ids = {}
142 result = []
File ~/miniconda3/envs/modin-latest/lib/python3.9/site-packages/ray/_private/auto_init_hook.py:24, in wrap_auto_init.<locals>.auto_init_wrapper(*args, **kwargs)
21 @wraps(fn)
22 def auto_init_wrapper(*args, **kwargs):
23 auto_init_ray()
---> 24 return fn(*args, **kwargs)
File ~/miniconda3/envs/modin-latest/lib/python3.9/site-packages/ray/_private/client_mode_hook.py:103, in client_mode_hook.<locals>.wrapper(*args, **kwargs)
101 if func.__name__ != "init" or is_client_mode_enabled_by_default:
102 return getattr(ray, func.__name__)(*args, **kwargs)
--> 103 return func(*args, **kwargs)
File ~/miniconda3/envs/modin-latest/lib/python3.9/site-packages/ray/_private/worker.py:2563, in get(object_refs, timeout)
2561 worker.core_worker.dump_object_store_memory_usage()
2562 if isinstance(value, RayTaskError):
-> 2563 raise value.as_instanceof_cause()
2564 else:
2565 raise value
RayTaskError(ValueError): ray::remote_exec_func() (pid=58063, ip=127.0.0.1)
At least one of the input arguments for this task could not be computed:
ray.exceptions.RayTaskError: ray::_deploy_ray_func() (pid=58063, ip=127.0.0.1)
File "/Users/mvashishtha/sources/modin/modin/core/execution/ray/implementations/pandas_on_ray/partitioning/virtual_partition.py", line 335, in _deploy_ray_func
result = deployer(axis, f_to_deploy, f_args, f_kwargs, *deploy_args, **kwargs)
File "/Users/mvashishtha/sources/modin/modin/logging/logger_decorator.py", line 144, in run_and_log
return obj(*args, **kwargs)
File "/Users/mvashishtha/sources/modin/modin/core/dataframe/pandas/partitioning/axis_partition.py", line 575, in deploy_func_between_two_axis_partitions
result = func(lt_frame, rt_frame, *f_args, **f_kwargs)
File "/Users/mvashishtha/sources/modin/modin/core/dataframe/pandas/dataframe/dataframe.py", line 2078, in _tree_reduce_func
series_result = func(df, *args, **kwargs)
File "/Users/mvashishtha/sources/modin/modin/core/storage_formats/pandas/query_compiler.py", line 4663, in <lambda>
lambda left, right: pandas.DataFrame.compare(
File "/Users/mvashishtha/miniconda3/envs/modin-latest/lib/python3.9/site-packages/pandas/core/frame.py", line 8580, in compare
return super().compare(
File "/Users/mvashishtha/miniconda3/envs/modin-latest/lib/python3.9/site-packages/pandas/core/generic.py", line 10118, in compare
mask = ~((self == other) | (self.isna() & other.isna()))
File "/Users/mvashishtha/miniconda3/envs/modin-latest/lib/python3.9/site-packages/pandas/core/ops/common.py", line 76, in new_method
return method(self, other)
File "/Users/mvashishtha/miniconda3/envs/modin-latest/lib/python3.9/site-packages/pandas/core/arraylike.py", line 40, in __eq__
return self._cmp_method(other, operator.eq)
File "/Users/mvashishtha/miniconda3/envs/modin-latest/lib/python3.9/site-packages/pandas/core/frame.py", line 7884, in _cmp_method
self, other = self._align_for_op(other, axis, flex=False, level=None)
File "/Users/mvashishtha/miniconda3/envs/modin-latest/lib/python3.9/site-packages/pandas/core/frame.py", line 8183, in _align_for_op
raise ValueError(
ValueError: Can only compare identically-labeled (both index and columns) DataFrame objects
```
</details>
### Installed Versions
<details>
```
INSTALLED VERSIONS
------------------
commit : 759d548814a6ac224e83e7531cf98e20b13d85cb
python : 3.9.18.final.0
python-bits : 64
OS : Darwin
OS-release : 23.5.0
Version : Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.31.0+2.g759d5488
ray : 2.8.0
dask : 2024.3.1
distributed : 2024.3.1
pandas dependencies
-------------------
pandas : 2.2.1
numpy : 1.26.1
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.0.0
pip : 23.3
Cython : None
pytest : 8.1.1
hypothesis : None
sphinx : 7.2.6
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 5.1.0
html5lib : None
pymysql : None
psycopg2 : 2.9.9
jinja2 : 3.1.2
IPython : 8.17.2
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : 2024.2.0
fsspec : 2024.3.1
gcsfs : None
matplotlib : 3.8.1
numba : None
numexpr : 2.8.4
odfpy : None
openpyxl : 3.1.2
pandas_gbq : 0.22.0
pyarrow : 14.0.1
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : 2024.3.1
scipy : 1.11.3
sqlalchemy : 2.0.29
tables : 3.9.2
tabulate : 0.9.0
xarray : 2024.2.0
xlrd : 2.0.1
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
```
</details>
| open | 2024-06-29T00:26:36Z | 2024-06-29T00:27:05Z | https://github.com/modin-project/modin/issues/7334 | [
"bug 🦗",
"P2",
"Interfaces and abstractions"
] | sfc-gh-mvashishtha | 1 |
mouredev/Hello-Python | fastapi | 478 | What to do if you have been cheated by online gambling, is there any way to recover? | Fund-recovery consulting + WeChat: zdn200, WeChat: xiaolu460570
Telegram: @lc15688
If any of the following happens to you, it means you have already been cheated: ↓ ↓
[Cheated by online gambling] [Won but the platform won't pay out] [System update] [Withdrawal failed] [Abnormal bet record] [Network fluctuation] [Submission failed]
[Single bet not settled] [Single bet not updated] [Payout channel under maintenance] [Wager double the turnover] [Deposit the same amount again]
The latest way to deal with online gambling platforms that use all kinds of excuses not to pay out winnings
Remember: once you have won money, any excuse not to let you withdraw basically means you have been cheated

| closed | 2025-03-06T08:37:06Z | 2025-03-10T13:47:08Z | https://github.com/mouredev/Hello-Python/issues/478 | [] | khyl55 | 0 |
django-import-export/django-import-export | django | 1,730 | V4: _check_import_id_fields doesn't produce intended error message. | **Describe the bug**
With V4 I have started getting following errors on `django-hordak` tests (for some reason it updates to pre versions even if it is not requested):
```
File "/home/runner/work/django-hordak/django-hordak/hordak/views/statement_csv_import.py", line 84, in post
self.result = resource.import_data(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.1/x64/lib/python3.12/site-packages/django_import_export-4.0.0b2-py3.12.egg/import_export/resources.py", line 799, in import_data
result = self.import_data_inner(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.1/x64/lib/python3.12/site-packages/django_import_export-4.0.0b2-py3.12.egg/import_export/resources.py", line 835, in import_data_inner
self._check_import_id_fields(dataset.headers)
File "/opt/hostedtoolcache/Python/3.12.1/x64/lib/python3.12/site-packages/django_import_export-4.0.0b2-py3.12.egg/import_export/resources.py", line 1068, in _check_import_id_fields
import_id_fields = [self.fields[f] for f in self.get_import_id_fields()]
~~~~~~~~~~~^^^
KeyError: 'id'
```
(https://github.com/adamcharnock/django-hordak/actions/runs/7421188620/job/20194029428)
I expect the `_check_import_id_fields` function to provide better error output, such as `The following import_id_fields are not present in the dataset`, in such cases.
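A rough sketch of the kind of check being suggested (hypothetical; not the actual django-import-export implementation, and the exception type is just illustrative):
```python
def _check_import_id_fields(self, headers):
    # Fail with a descriptive message instead of a bare KeyError when a
    # declared import_id_field has no matching resource field.
    missing = [
        field_name
        for field_name in self.get_import_id_fields()
        if field_name not in self.fields
    ]
    if missing:
        raise ValueError(
            "The following import_id_fields are not present in the "
            "resource fields: %s" % ", ".join(missing)
        )
```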
**Versions (please complete the following information):**
- Django Import Export: 4.0.0b2
- Python 3.8-3.12
- Django 4.2-5.0 | closed | 2024-01-05T11:51:28Z | 2024-05-03T08:08:59Z | https://github.com/django-import-export/django-import-export/issues/1730 | [
"bug"
] | PetrDlouhy | 3 |
dynaconf/dynaconf | fastapi | 1,259 | [3.3.0] Adjust Python versions supported |
### Discussed in https://github.com/dynaconf/dynaconf/discussions/1258
<div type='discussions-op-text'>
<sup>Originally posted by **judy-devData** February 19, 2025</sup>
The latest version of dynaconf released on february 17 includes support for Python 3.11. Are there any plans in the near future to support python 3.12?</div>
--
We must support the latest 5 versions of Python https://endoflife.date/python
3.9, 3.10, 3.11, 3.12 and 3.13
So this issue is about dropping 3.8 and ensuring the test matrix covers the versions above,
and also ensuring the package metadata is updated. | open | 2025-02-19T15:32:06Z | 2025-02-19T15:32:06Z | https://github.com/dynaconf/dynaconf/issues/1259 | [] | rochacbruno | 0 |
miguelgrinberg/python-socketio | asyncio | 593 | Client doesn't work on simple example. Error: simplejson.errors.JSONDecodeError | I'm trying to run the simple example:
```
import socketio
sio = socketio.Client()
@sio.event
def connect():
print('connected to server')
@sio.event
def disconnect():
print('disconnected from server')
sio.connect('wss://echo.websocket.org')
sio.wait()
```
but I'm getting an error:
```
Traceback (most recent call last):
  File "cli.py", line 15, in <module>
    sio.connect('wss://echo.websocket.org')
  File "/home/ihor/.local/lib/python3.6/site-packages/socketio/client.py", line 275, in connect
    engineio_path=socketio_path)
  File "/home/ihor/.local/lib/python3.6/site-packages/engineio/client.py", line 187, in connect
    url, headers or {}, engineio_path)
  File "/home/ihor/.local/lib/python3.6/site-packages/engineio/client.py", line 292, in _connect_polling
    arg = r.json()
  File "/home/ihor/.local/lib/python3.6/site-packages/requests/models.py", line 900, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3/dist-packages/simplejson/__init__.py", line 518, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 370, in decode
    obj, end = self.raw_decode(s)
  File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 400, in raw_decode
    return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
I also tried importing the json module and passing it as sio.Client(json=json), but got the same error. Please advise on the solution. | closed | 2020-12-21T20:49:44Z | 2020-12-21T20:58:43Z | https://github.com/miguelgrinberg/python-socketio/issues/593 | [] | ihormihal | 1 |
plotly/dash-table | dash | 233 | Clean up build output & committed build/intermediary files | Following #212, some issues in the build and general lifecycle of the repo have become apparent. Here are some points to fix, explore, or justify.
1. Make the build output file name match those of other Dash repos (e.g. lib/dash-table.[min|dev].js)
2. Do not commit build and intermediary results in the repo (e.g. /dash_table, /lib, /dist)
3. Expose source map | open | 2018-11-08T12:58:18Z | 2019-07-06T12:24:45Z | https://github.com/plotly/dash-table/issues/233 | [
"dash-type-maintenance"
] | Marc-Andre-Rivet | 0 |
scikit-learn-contrib/metric-learn | scikit-learn | 215 | set/update the preprocessor outside the init | Just a thought, but it might be useful to have a function to update/set the preprocessor outside of the init. Right now it is just initialized when calling fit (so using sth like `mcc.set_params(preprocessor=X)` won't help). But one might realize later after fitting that they need a/another preprocessor, for instance if they want to `score_pairs` on a huge dataset, or maybe if they want to `score_pairs` on a different dataset than the one they used at fit time. | open | 2019-06-12T09:49:31Z | 2019-06-12T14:07:32Z | https://github.com/scikit-learn-contrib/metric-learn/issues/215 | [] | wdevazelhes | 1 |
inventree/InvenTree | django | 9,260 | Reverse proxy: Contradictory scheme headers | ### Deployment Method
- [ ] Installer
- [ ] Docker Development
- [ ] Docker Production
- [ ] Bare metal Development
- [ ] Bare metal Production
- [ ] Digital Ocean image
- [x] Other (please provide a link `Steps to Reproduce`
### Describe the problem*
First I would like to apologize for being a completely lost n00b. I'm clearly out of my depth here. I have spun up multiple docker containers and LXCs in the past, and reverse-proxied them successfully with Linuxserver's SWAG container with extremely minimal config, but there is something going WAY over my head here. My config resulted in "Bad Request; Contradictory scheme headers", and everything I did just seemed to make the situation worse until I restored from backups.
### Steps to Reproduce
1. Install [Inventree LXC via Proxmox Helper Scripts](https://community-scripts.github.io/ProxmoxVE/scripts?id=inventree) (Not sure if they're using the installer script or their own script for a bare metal install. I've looked at their script source and it's all black magic to me; I guess the script somehow is importing more steps from somewhere else because as far as I comprehend the linked script source seems to do basically nothing.)
2. Inventree is working fine at http://192.168.43.157 (aside from seemingly defaulting to CUI instead of PUI and I can't seem to figure out how to change that default yet, but that's tangential to this post).
3. Use [LinuxServer's SWAG container](https://docs.linuxserver.io/general/swag/) along with the [Cloudflared mod](https://github.com/linuxserver/docker-mods/tree/universal-cloudflared) to perform reverse-proxying including automatically getting LetsEncrypt cert, setting DNS records, and configuring Cloudflare tunnel. There is an [example of how they do this on their blog](https://www.linuxserver.io/blog/zero-trust-hosting-and-reverse-proxy-via-cloudflare-swag-and-authelia) but I don't use the additional authentication layer (eg. authelia) yet.
4. The container provides prebuilt proxy confs for many selfhosted services, but unfortunately not Inventree. I've created my own confs based on their [sample subdomain conf](https://github.com/linuxserver/reverse-proxy-confs/blob/master/_template.subdomain.conf.sample) without any issues before. Here's my inventree.subdomain.conf:
```
## Version 2024/07/16
# Basically just the example conf
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name inventree.*;
include /config/nginx/ssl.conf;
client_max_body_size 0;
location / {
include /config/nginx/proxy.conf;
include /config/nginx/resolver.conf;
set $upstream_app 192.168.43.157;
set $upstream_port 80;
set $upstream_proto http;
proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
}
```
5. Reload nginx and fail utterly to load the site externally
6. Read documentation, Reddit, Github discourse, make various changes to nginx proxy conf and/or Inventree config.yaml, see site get progressively more broken with each change
7. Restore LXC from last night's working backup so it's at least accessible by IP again
8. Sheepishly post on Github and hope someone is willing to help
### Relevant log output
```bash
Bad Request
Contradictory scheme headers
``` | closed | 2025-03-07T20:54:32Z | 2025-03-11T23:11:05Z | https://github.com/inventree/InvenTree/issues/9260 | [
"question",
"setup"
] | realcanadrian | 2 |
python-restx/flask-restx | flask | 545 | Swagger schema creation can crash if multiple requests arrive quickly on startup [theory] | Hello flask-restx team!
This is a bit of a nasty one sorry! We have recently twice observed a crash (call stack below) inside the Swagger() constructor on application startup, when it receives its first request. The exception being thrown ("dictionary changed size during iteration") is indicative of a threading issue where there are multiple threads concurrently trying to construct a Swagger() object, which is assigned to a cached property on the Api class when the first request that requires validation arrives (or when the swagger-ui url is loaded). As there are no locks and no threads in flask-restx, it appears that the Swagger() constructor is not thread-safe, and if multiple requests arrive very quickly at application startup (and flask is running with threaded=True), it is possible that data corruption and crashes can happen during schema rendering. Please note this is just my theory on root cause, and I'm submitting this issue to hear from anyone else in case I've assumed wrong. The crash randomly happens (we've seen it twice in the last week), and despite trying, I have so far not found a way to reproduce it unfortunately.
As for a fix, it would seem that a lock should be used to guarantee thread-safety of the Swagger() constructor. I would be happy to work on a PR for that if advised by flask-restx maintainers.
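A rough sketch of the kind of synchronization I have in mind (hypothetical; it only mirrors the schema-caching part of `Api`, reusing the `self._schema` attribute and `Swagger` class that appear in the traceback below):
```python
import threading

from flask_restx.swagger import Swagger


class Api:  # sketch: only the schema-caching part of flask_restx.Api
    _schema_lock = threading.Lock()

    def __init__(self):
        self._schema = None

    @property
    def __schema__(self):
        # Serialize construction so concurrent first requests cannot
        # interleave inside Swagger(self).as_dict().
        with self._schema_lock:
            if self._schema is None:
                self._schema = Swagger(self).as_dict()
        return self._schema
```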
### **Code**
Happy to provide, in particular the model definitions we use, if it helps, but as this is largish application and the call stack indicates a non-reproducible threading condition, my thought is that the root cause is not directly related to our model definitions. So I initially wanted to seek advice on course of action based on the call stack and my interpretation. We do have Nested fields, but only a single level of nesting.
### **Repro Steps** (if applicable)
Sorry, not known.
### **Expected Behavior**
If multiple requests reach the server quickly on startup, schema creation should be synchronized to ensure it is created before any request is processed.
### **Actual Behavior**
If schema creation fails, the application continues to run, but requests that expect validation can crash when the schema is referenced, which is indicative of a corrupt/incomplete schema. For example, we see this:
Traceback (most recent call last):
File "/home/app/.local/lib/python3.8/site-packages/jsonschema/validators.py", line 966, in resolve_fragment
document = document[part]
KeyError: 'definitions'
### **Error Messages/Stack Trace**
2023-05-29 11:52:47,766 ERROR T140221658154752 [api.__schema__] Unable to render schema
Traceback (most recent call last):
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/api.py", line 573, in __schema__
self._schema = Swagger(self).as_dict()
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/swagger.py", line 275, in as_dict
serialized = self.serialize_resource(
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/swagger.py", line 482, in serialize_resource
path[method] = self.serialize_operation(doc, method)
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/swagger.py", line 488, in serialize_operation
"responses": self.responses_for(doc, method) or None,
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/swagger.py", line 622, in responses_for
responses[code]["schema"] = self.serialize_schema(d["model"])
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/swagger.py", line 672, in serialize_schema
self.register_model(model)
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/swagger.py", line 703, in register_model
self.register_field(field)
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/swagger.py", line 713, in register_field
self.register_field(field.container)
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/swagger.py", line 711, in register_field
self.register_model(field.nested)
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/fields.py", line 261, in nested
return getattr(self.model, "resolved", self.model)
File "/home/app/.local/lib/python3.8/site-packages/werkzeug/utils.py", line 109, in __get__
value = self.fget(obj) # type: ignore
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/model.py", line 176, in resolved
resolved = copy.deepcopy(self)
File "/usr/local/lib/python3.8/copy.py", line 153, in deepcopy
y = copier(memo)
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/model.py", line 236, in __deepcopy__
[(key, copy.deepcopy(value, memo)) for key, value in self.items()],
File "/home/app/.local/lib/python3.8/site-packages/flask_restx/model.py", line 236, in <listcomp>
[(key, copy.deepcopy(value, memo)) for key, value in self.items()],
File "/usr/local/lib/python3.8/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/lib/python3.8/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/usr/local/lib/python3.8/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/usr/local/lib/python3.8/copy.py", line 229, in _deepcopy_dict
for key, value in x.items():
**RuntimeError: dictionary changed size during iteration**
2023-05-29 11:52:47,866 DEBUG T140221658154752 PUT to /api/v1/devices/100 processed in 169ms code 200
2023-05-29 11:52:47,880 DEBUG T140221972719360 PUT to /api/v1/devices/101 processed in 178ms code 200
2023-05-29 11:52:47,886 DEBUG T140221658154752 POST to /api/v1/devices/query processed in 17ms code 200
2023-05-29 11:52:47,888 DEBUG T140221689624320 PUT to /api/v1/devices/102 processed in 188ms code 200
2023-05-29 11:52:47,909 DEBUG T140221972719360 POST to /api/v1/devices/query processed in 4ms code 200
^^^ Note the multiple requests arriving on different threads within the same second as the crash, logged after the call stack ^^^
### **Environment**
- Python version 3.8.10
- Flask version 2.0.3
- Flask-RESTX version 1.0.6
- Other installed Flask extensions (none)
Thanks for your time. | open | 2023-06-06T06:06:01Z | 2024-02-22T13:45:55Z | https://github.com/python-restx/flask-restx/issues/545 | [
"bug"
] | peterhorsley | 3 |
darrenburns/posting | automation | 217 | Bug: ignores user-agent header and uses its own user-agent | When I set a user-agent, posting ignores my user-agent and uses its own default user agent. I had posting v2.3.0 and updated to v2.5.2, but it is not fixed! | closed | 2025-03-09T11:28:03Z | 2025-03-13T20:05:50Z | https://github.com/darrenburns/posting/issues/217 | [
"bug"
] | ImMohammad20000 | 4 |
graphql-python/flask-graphql | graphql | 85 | which graphiql version is it using? | I have `Flask-GraphQL==2.0.1` installed and inside Chrome it is requiring dependencies like
> http://cdn.jsdelivr.net/npm/graphiql@0.11.11/graphiql.min.js
note: currently the latest version in jsdelivr is 1.0.6
however the github readme says
> graphiql_version: The graphiql version to load. __Defaults to "1.0.3".__
really? If I set `graphiql_version=1.0.3` explicitly, then Chrome throws an error:
> Uncaught Error: GraphiQL 0.18.0 and after is not compatible with React 15 or below
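(For reference, this is how I am setting it; a sketch of the view registration, where `app` and `schema` are my own Flask app and graphene schema:)
```python
from flask_graphql import GraphQLView

app.add_url_rule(
    "/graphql",
    view_func=GraphQLView.as_view(
        "graphql",
        schema=schema,
        graphiql=True,
        graphiql_version="1.0.3",  # the value being tested above
    ),
)
```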
I did not find anywhere that `render_graphiql.py` sets the variable to "1.0.3".
In my local drive is `GRAPHIQL_VERSION = '0.11.11'`;
and gitlab `GRAPHIQL_VERSION = '0.7.1'` | open | 2020-11-16T09:42:44Z | 2021-07-27T03:31:55Z | https://github.com/graphql-python/flask-graphql/issues/85 | [] | qinst64 | 2 |
ageitgey/face_recognition | machine-learning | 1,185 | giving cpu as parameter ? | * face_recognition version:1.3.0
* Python version:3.7.6
* Operating System:3.7.6
### Description
I want to know how to give CPU as a parameter in the API calls
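For context, a sketch of what I mean (the image path is a placeholder); my understanding is that the detection model argument is the relevant knob here:
```python
import face_recognition

image = face_recognition.load_image_file("my_picture.jpg")
# My understanding: model="hog" runs on the CPU, while model="cnn" can use
# dlib's CUDA build when available. Is this the intended way to force CPU?
boxes = face_recognition.face_locations(image, model="hog")
encodings = face_recognition.face_encodings(image, known_face_locations=boxes)
```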
| open | 2020-07-11T10:56:33Z | 2020-07-27T13:40:35Z | https://github.com/ageitgey/face_recognition/issues/1185 | [] | silexxx | 1 |
mwaskom/seaborn | matplotlib | 3,239 | [Feature Request] sns.histplot() automatically supports discrete axis labeling. | # Background / related function desc
As the doc says:
When both x and y are assigned, a bivariate histogram is computed and shown as a heatmap:
```python
sns.histplot(penguins, x="bill_depth_mm", y="body_mass_g")
```

---
# The requested feature is that
**If the provided `x=` is a pandas Categorical dtype, then, because it is a categorical dtype, I expect `seaborn` to treat it as a discrete variable instead of a continuous variable.**
Recall that the pandas Categorical dtype can be a `numeric cat` like `Categories (5, int64): [1 < 10 < 100 < 1000 < 10000]`, or a `str cat` like `Categories (3, object): ['0', '10ms', '20ms', '30ms']`.
- If it's `str cat`, the `histplot` gives the expected result.

- However, if it's a `num cat`, it just follows the numeric axis labeling.

Indeed, maybe we can work around it by using `log_scale=True`, but what if the `num cat` is not a log-scale cat? Like `[1 < 200 < 1000 < 1200]`.
# Currently the best workaround I found
For those `num cat` dtype columns in a dataframe, just convert them to `str cat` first, then apply histplot().
```python
df[col] = df[col].astype(str).astype('category')
sns.histplot(df, x=x, y=col)
``` | closed | 2023-01-29T07:53:32Z | 2023-02-06T17:29:56Z | https://github.com/mwaskom/seaborn/issues/3239 | [] | kaimo455 | 4 |