repo_name (stringlengths 9-75) | topic (stringclasses, 30 values) | issue_number (int64, 1-203k) | title (stringlengths 1-976) | body (stringlengths 0-254k) | state (stringclasses, 2 values) | created_at (stringlengths 20-20) | updated_at (stringlengths 20-20) | url (stringlengths 38-105) | labels (sequencelengths 0-9) | user_login (stringlengths 1-39) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,676 | [Bug]: Lora, Textual Inversion, Hypernetworks Tabs Not Displaying Anything | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I did a clean install of a1111, and when I try to display any Loras the tabs are completely blank, as are the ones for Textual Inversion and Hypernetworks. I've tried setting the Lora directory from the webui file as well, but it didn't change anything.
### Steps to reproduce the problem
Install automatic1111, click on Lora tab.
### What should have happened?
Populated a list of Loras and what have you.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-05-01-07-22.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15174791/sysinfo-2024-05-01-07-22.json)
### Console logs
```Shell
git config --global --add safe.directory D:/AI/stable-diffusion-webui
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.9.3
Commit hash: <none>
Launching Web UI with arguments: --xformers --upcast-sampling --medvram --autolaunch --no-half-vae --skip-version-check --lora-dir D:\AI\stable-diffusion-webui\models\lora
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
Loading weights [67ab2fd8ec] from D:\AI\stable-diffusion-webui\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Creating model from config: D:\AI\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
D:\AI\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:399: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
with gr.Row().style(equal_height=False):
D:\AI\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:521: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
cover_image = gr.Image(
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 30.6s (prepare environment: 14.3s, import torch: 6.9s, import gradio: 2.1s, setup paths: 2.0s, initialize shared: 0.5s, other imports: 1.4s, load scripts: 2.4s, create ui: 0.6s, gradio launch: 0.2s).
Applying attention optimization: xformers... done.
Model loaded in 5.5s (load weights from disk: 0.5s, create model: 0.6s, apply weights to model: 3.0s, calculate empty prompt: 1.1s).
Restarting UI...
Closing server running on port: 7860
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
D:\AI\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:399: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
with gr.Row().style(equal_height=False):
D:\AI\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:521: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
cover_image = gr.Image(
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 0.9s (load scripts: 0.4s, create ui: 0.3s).
```
### Additional information
_No response_ | open | 2024-05-01T07:24:16Z | 2024-05-02T00:37:15Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15676 | [
"bug-report"
] | makaras221 | 1 |
serengil/deepface | deep-learning | 635 | module 'deepface.commons.functions' has no attribute 'find_input_shape' | I have used your code for face recognition and applied it to faces. Now it is showing the module 'deepface.commons.functions' has no attribute 'find_input_shape' error. | closed | 2023-01-26T16:48:51Z | 2023-02-10T09:20:09Z | https://github.com/serengil/deepface/issues/635 | [
"dependencies"
] | chandniagarwal | 13 |
python-security/pyt | flask | 40 | sqli.py test fails on non-3.6 versions of python | So I tried to add a tox.ini with all the versions of py3 + pypy3; thanks to the results file, I realized that with 3.6 you get:
```python
2 vulnerabilities found:
Vulnerability 1:
File: example/vulnerable_code/sql/sqli.py
> User input at line 26, trigger word "get(":
param = request.args.get('param', 'not set')
File: example/vulnerable_code/sql/sqli.py
> reaches line 27, trigger word "execute(":
result = db.engine.execute(param)
Vulnerability 2:
File: example/vulnerable_code/sql/sqli.py
> User input at line 33, trigger word "get(":
param = request.args.get('param', 'not set')
File: example/vulnerable_code/sql/sqli.py
> reaches line 36, trigger word "filter(":
result = session.query(User).filter('username={}'.format(param))
```
whereas below 3.6 you get
```python
2 vulnerabilities found:
Vulnerability 1:
File: example/vulnerable_code/sql/sqli.py
> User input at line 33, trigger word "get(":
param = request.args.get('param', 'not set')
File: example/vulnerable_code/sql/sqli.py
> reaches line 36, trigger word "filter(":
result = session.query(User).filter('username={}'.format(param))
Vulnerability 2:
File: example/vulnerable_code/sql/sqli.py
> User input at line 26, trigger word "get(":
param = request.args.get('param', 'not set')
File: example/vulnerable_code/sql/sqli.py
> reaches line 27, trigger word "execute(":
result = db.engine.execute(param)
```
Notice the difference? It's just an ordering problem.
This would explain why I had to change the `results` file to get Travis CI to pass in https://github.com/python-security/pyt/pull/23 | closed | 2017-05-04T00:02:56Z | 2018-04-15T01:53:03Z | https://github.com/python-security/pyt/issues/40 | [] | KevinHock | 5 |
BeanieODM/beanie | pydantic | 1,055 | [BUG] Memory consumption during iterative migration is very high | **Describe the bug**
We recently tried to run an iterative migration, but it was always killed by the OS because it kept using too much memory. It was a collection with hundreds of thousands of rather large documents, and after a few minutes the migration script used up over 10 GiB of memory.
**To Reproduce**
Create a big collection (best multiple GiBs on disk) and run an iterative migration on them.
**Expected behavior**
Memory consumption does not grow during the migration
**Additional context**
Using a free fall migration works fine. From the implementation of the iterative migration it is clear where the "problem" is, because all operations are collected and only executed at the end, so every document has to be held in memory. I'm not sure about the batching logic which is implemented there, but wouldn't it be a solution to directly execute each batch instead of collecting them?
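A minimal sketch of what I mean, written against Motor's collection API rather than Beanie's internals (the batch size and the `transform` hook are assumptions for illustration):
```python
# Flush each batch of operations as soon as it is full instead of collecting
# all of them in memory until the end of the migration.
from pymongo import UpdateOne

BATCH_SIZE = 1000  # assumed batch size

async def migrate_eagerly(collection, transform):
    ops = []
    async for doc in collection.find({}):
        # `transform` maps a raw document to the fields we want to $set
        ops.append(UpdateOne({"_id": doc["_id"]}, {"$set": transform(doc)}))
        if len(ops) >= BATCH_SIZE:
            await collection.bulk_write(ops)
            ops.clear()  # memory stays bounded by one batch
    if ops:
        await collection.bulk_write(ops)
```
This way memory use is bounded by the batch size instead of by the collection size.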
| closed | 2024-10-15T15:55:37Z | 2025-02-24T02:41:53Z | https://github.com/BeanieODM/beanie/issues/1055 | [
"improvement",
"Stale"
] | pschoen-itsc | 6 |
seleniumbase/SeleniumBase | pytest | 3,457 | rosetta error: failed to open elf at /lib64/ld-linux-x86-64.so.2 | I created a docker container for my selenium crawler and I am using an x86 linux machine. There is no official chromedriver for x86, so I downloaded the one from electron.
After running the basic selenium script, I get this error, but the script still continues and works properly. I wanted to ask: what is this error?
```
rosetta error: failed to open elf at /lib64/ld-linux-x86-64.so.2
Trace/breakpoint trap
``` | closed | 2025-01-27T20:31:17Z | 2025-01-27T20:43:40Z | https://github.com/seleniumbase/SeleniumBase/issues/3457 | [
"question",
"external"
] | MaazBinMusa | 1 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 109 | Why doesn't VAE generate mean and variance | Why doesn't the VAE generate a mean and variance? Is this an Auto-Encoder or a VAE? | closed | 2021-01-20T09:27:55Z | 2021-01-20T12:18:08Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/109 | [] | ryanqiutu | 1 |
allenai/allennlp | nlp | 5,036 | spacy 3.0 warnings about lemmatization and POS | I'm training a model (https://github.com/allenai/allennlp-models/blob/main/training_config/pair_classification/mnli_roberta.jsonnet) with allennlp 2.1.0, using SpaCy 3.
There are a bunch of warnings that show up (I'm not sure if these were here before, but I noticed them now because my log files are now massive):
```
[W108] The rule-based lemmatizer did not find POS annotation for the token 'Tommy'. Check that your pipeline includes components that assign token.pos, typically 'tagger'+'attribute_ruler' or 'morphologizer'.
[W108] The rule-based lemmatizer did not find POS annotation for the token 'hesitated'. Check that your pipeline includes components that assign token.pos, typically 'tagger'+'attribute_ruler' or 'morphologizer'.
[W108] The rule-based lemmatizer did not find POS annotation for the token '.'. Check that your pipeline includes components that assign token.pos, typically 'tagger'+'attribute_ruler' or 'morphologizer'.
[W108] The rule-based lemmatizer did not find POS annotation for the token 'Tommy'. Check that your pipeline includes components that assign token.pos, typically 'tagger'+'attribute_ruler' or 'morphologizer'.
[W108] The rule-based lemmatizer did not find POS annotation for the token 'hesitated'. Check that your pipeline includes components that assign token.pos, typically 'tagger'+'attribute_ruler' or 'morphologizer'.
[W108] The rule-based lemmatizer did not find POS annotation for the token 'for'. Check that your pipeline includes components that assign token.pos, typically 'tagger'+'attribute_ruler' or 'morphologizer'.
[W108] The rule-based lemmatizer did not find POS annotation for the token 'a'. Check that your pipeline includes components that assign token.pos, typically 'tagger'+'attribute_ruler' or 'morphologizer'.
[W108] The rule-based lemmatizer did not find POS annotation for the token 'short'. Check that your pipeline includes components that assign token.pos, typically 'tagger'+'attribute_ruler' or 'morphologizer'.
...
```
I think this is because `allennlp.common.util.get_spacy_model` (https://github.com/allenai/allennlp/blob/96415b2bab6d8c70a0fa80ca4a8b9d1dc988720e/allennlp/common/util.py#L258) has POS off by default, but doesn't disable lemmatization by default.
Not sure what y'all think is the best way to solve this... we could add a lemmatization argument for get_spacy_model that defaults to false? This is a change in the defaults from previous versions, though. | closed | 2021-03-04T21:39:14Z | 2021-03-24T22:28:31Z | https://github.com/allenai/allennlp/issues/5036 | [
"bug"
] | nelson-liu | 6 |
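An aside on the allennlp/spaCy warnings above: since spaCy 3, `spacy.load` accepts an `exclude` list, so a hedged sketch of the proposed default (a `lemmatize` flag that is off by default) could look like this:
```python
import spacy

# Illustration only, not allennlp's actual get_spacy_model signature:
# excluding the lemmatizer removes the W108 warnings, since the rule-based
# lemmatizer is the component that complains about missing POS annotations.
def get_spacy_model(name: str = "en_core_web_sm", lemmatize: bool = False):
    exclude = [] if lemmatize else ["lemmatizer"]
    return spacy.load(name, exclude=exclude)

nlp = get_spacy_model()
doc = nlp("Tommy hesitated.")  # no W108 warnings expected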
numba/numba | numpy | 9,922 | serialize the compilation library file | When using Numba to develop a computational geometry library, the large number of precompiled classes and functions results in extremely slow startup times. I would like to know what the LLVM compilation results are and whether they can be serialized. For example, could they be saved as dynamic library files to enable hot-start functionality? | open | 2025-02-11T16:07:52Z | 2025-03-22T06:31:25Z | https://github.com/numba/numba/issues/9922 | [
"no action required"
] | yxdragon | 11 |
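A hedged note on the Numba question above: the closest built-in mechanism is per-function on-disk caching, which serializes the compiled machine code between process runs (it is not a standalone dynamic library, although `numba.pycc` ahead-of-time compilation can produce one). A minimal sketch:
```python
from numba import njit

# cache=True writes the compiled result next to the source (a __pycache__
# .nbi/.nbc pair), so a later "hot start" skips LLVM compilation entirely.
@njit(cache=True)
def clip(x, lo, hi):
    return min(max(x, lo), hi)

print(clip(5.0, 0.0, 1.0))  # first run compiles and caches; later runs load
```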
pyeve/eve | flask | 1,074 | OPLOG=True is not working currently | eve/methods/common.py
In v0.8:
if not config.OPLOG \
    or op not in config.OPLOG_METHODS \
    or resource in config.URLS[resource]:
    return
In v0.7 and previous:
if not config.OPLOG or op not in config.OPLOG_METHODS:
    return
It works in v0.7 but not in v0.8, presumably because `config.URLS[resource]` is the resource's URL string, which usually contains the resource name itself, so the new check is almost always true and the oplog update returns early.
opengeos/streamlit-geospatial | streamlit | 98 | Error running app. | This error shows up in the console for the file useAppState.ts:43
INITIAL -> (10, 0, undefined) -> ERROR
| closed | 2022-11-26T15:26:03Z | 2022-11-29T14:51:28Z | https://github.com/opengeos/streamlit-geospatial/issues/98 | [] | pdinkins | 0 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 405 | [BUG] Briefly and clearly describe the problem | How do I swap in my own cookie in the docker version? | closed | 2024-05-22T03:08:56Z | 2024-05-26T05:38:25Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/405 | [
"BUG"
] | xiaoyaozz | 1 |
tortoise/tortoise-orm | asyncio | 1,568 | Corrupt sql generated when bulk_create method is configured with on_conflict parameter | **Describe the bug**
In the sqlite engine, configuring `update_fields` will configure on_conflict for `self.insert_query` and `self.insert_query_all`, resulting in duplicate on conflict fields.
`'INSERT INTO "fileevent" ("id","tn","date","url","status","create_at") VALUES (?,?,?,?,?,?) ON CONFLICT ("id", "id") DO UPDATE SET "url"=EXCLUDED."url","url"=EXCLUDED."url"'`
**To Reproduce**
sqlite3 and use bulk api
**Expected behavior**
`'INSERT INTO "fileevent" ("id","tn","date","url","status","create_at") VALUES (?,?,?,?,?,?) ON CONFLICT ("id") DO UPDATE SET "url"=EXCLUDED."url","url"=EXCLUDED."url"'`
| open | 2024-03-12T10:19:22Z | 2024-09-05T22:44:08Z | https://github.com/tortoise/tortoise-orm/issues/1568 | [] | zzl221000 | 1 |
robinhood/faust | asyncio | 417 | Workers don't get assigned to available partitions | ## Steps to reproduce
- create an agent that groups stream by some id
- the number of groups (different ids) I have is 5
- start 6 workers from the terminal each with a different web port
## Expected behavior
each worker gets assigned to a partition and processes it in parallel
## Actual behavior
- only 3 workers do the processing (2 of them process 2 groups each and 1 processes the remaining group) while the other 3 workers remain idle
# Versions
* Python version = 3.6
* Faust version = 1.7.4
* Operating system = Ubuntu 18.04.2
| open | 2019-09-04T16:30:14Z | 2020-09-09T22:11:34Z | https://github.com/robinhood/faust/issues/417 | [] | MarcoRizk | 5 |
chaos-genius/chaos_genius | data-visualization | 995 | [BUG] Edit KPI does not trigger KPI Validation | ## Describe the bug
Edit KPI does not trigger KPI Validation
## Current behavior
Internal Server Error on an invalid KPI setup
## Expected behavior
KPI should fail to edit with a relevant popup.
| closed | 2022-06-16T08:38:41Z | 2022-06-16T10:32:15Z | https://github.com/chaos-genius/chaos_genius/issues/995 | [] | varunp2k | 0 |
graphdeco-inria/gaussian-splatting | computer-vision | 208 | The problems in cov3D in forward.cu. | I found that the _cov3D_ in _forward.cu_ is calculated based on [here:](https://github.com/graphdeco-inria/diff-gaussian-rasterization/blob/59f5f77e3ddbac3ed9db93ec2cfe99ed6c5d121d/cuda_rasterizer/forward.cu#L143)`glm::mat3 Sigma = glm::transpose(M) * M `
where M is defined as `glm::mat3 M = S * R`.
This is not consistent with equation 6 in the paper (M = R * S). Will this have any impact on the final outcomes? | closed | 2023-09-18T13:37:45Z | 2024-02-22T08:03:51Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/208 | [] | cjfcsjt | 2 |
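A worked expansion of the two conventions in the covariance question above (an illustration, not the maintainers' answer):

$$
\Sigma_{\text{code}} = M^\top M = (SR)^\top (SR) = R^\top S^2 R,
\qquad
\Sigma_{\text{paper}} = M M^\top = (RS)(RS)^\top = R\,S^2 R^\top,
$$

using that $S$ is diagonal, so $S^\top S = S S^\top = S^2$. Both results are symmetric positive semi-definite with the same eigenvalues, and they coincide exactly when the $R$ built in the CUDA code is the transpose of the paper's rotation, which is plausible given glm's column-major storage; checking how `computeCov3D` constructs $R$ would settle whether the final outcomes are affected.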
SCIR-HI/Huatuo-Llama-Med-Chinese | nlp | 14 | Was any real clinical-scenario data used? | Hello!
I'd like to ask whether any other datasets were used (besides the instruct-generated data)?
psf/black | python | 3,709 | support configuration to prefer single/double quote string style? | closed | 2023-05-29T03:54:10Z | 2023-05-29T11:55:55Z | https://github.com/psf/black/issues/3709 | [
"T: enhancement"
] | skygragon | 5 |
|
Urinx/WeixinBot | api | 147 | Is there an API for fetching animated stickers? | open | 2017-01-25T10:26:57Z | 2017-01-25T10:26:57Z | https://github.com/Urinx/WeixinBot/issues/147 | [] | lauhg | 0 |
|
hbldh/bleak | asyncio | 476 | Error with Bleak and Kivy [TypeError: __import__() takes at least 1 argument (0 given)] | * bleak version: 0.11.0a3
* Python version: 3.8.6
* Operating System: Windows 10
Henrik:
I came across this error today while trying to combine kivy and bleak.
Example Code
https://raw.githubusercontent.com/kivy/kivy/master/examples/async/asyncio_basic.py
The below code works fine
```
import asyncio
from kivy.app import async_runTouchApp
from kivy.lang.builder import Builder
kv = '''
BoxLayout:
orientation: 'vertical'
Button:
id: btn
text: 'Press me'
BoxLayout:
Label:
id: label
text: 'Button is "{}"'.format(btn.state)
'''
async def run_app_happily(root, other_task):
'''This method, which runs Kivy, is run by the asyncio loop as one of the
coroutines.
'''
# we don't actually need to set asyncio as the lib because it is the
# default, but it doesn't hurt to be explicit
await async_runTouchApp(root, async_lib='asyncio') # run Kivy
print('App done')
# now cancel all the other tasks that may be running
other_task.cancel()
async def waste_time_freely():
'''This method is also run by the asyncio loop and periodically prints
something.
'''
try:
while True:
print('Sitting on the beach')
await asyncio.sleep(2)
except asyncio.CancelledError as e:
print('Wasting time was canceled', e)
finally:
# when canceled, print that it finished
print('Done wasting time')
if __name__ == '__main__':
def root_func():
'''This will run both methods asynchronously and then block until they
are finished
'''
root = Builder.load_string(kv) # root widget
other_task = asyncio.ensure_future(waste_time_freely())
return asyncio.gather(run_app_happily(root, other_task), other_task)
loop = asyncio.get_event_loop()
loop.run_until_complete(root_func())
loop.close()
```
**TypeError: `__import__()` takes at least 1 argument (0 given) is thrown when bleak is imported**
```
import asyncio
import kivy
from kivy.app import async_runTouchApp
from kivy.lang.builder import Builder
from bleak import BleakClient, BleakError
kivy.require("1.10.1")
kv = '''
BoxLayout:
orientation: 'vertical'
Button:
id: btn
text: 'Press me'
BoxLayout:
Label:
id: label
text: 'Button is "{}"'.format(btn.state)
'''
async def run_app_happily(root, other_task):
'''This method, which runs Kivy, is run by the asyncio loop as one of the
coroutines.
'''
# we don't actually need to set asyncio as the lib because it is the
# default, but it doesn't hurt to be explicit
await async_runTouchApp(root, async_lib='asyncio') # run Kivy
print('App done')
# now cancel all the other tasks that may be running
other_task.cancel()
async def waste_time_freely():
'''This method is also run by the asyncio loop and periodically prints
something.
'''
try:
while True:
print('Sitting on the beach')
await asyncio.sleep(2)
except asyncio.CancelledError as e:
print('Wasting time was canceled', e)
finally:
# when canceled, print that it finished
print('Done wasting time')
if __name__ == '__main__':
def root_func():
'''This will run both methods asynchronously and then block until they
are finished
'''
root = Builder.load_string(kv) # root widget
other_task = asyncio.ensure_future(waste_time_freely())
return asyncio.gather(run_app_happily(root, other_task), other_task)
loop = asyncio.get_event_loop()
loop.run_until_complete(root_func())
loop.close()
```
```
C:\repos\BioController\venv\Scripts\python.exe C:/repos/BioController/ex_kivy_async.py
[INFO ] [Logger ] Record log in C:\Users\Brandon\.kivy\logs\kivy_21-03-06_27.txt
[INFO ] [deps ] Successfully imported "kivy_deps.angle" 0.3.0
[INFO ] [deps ] Successfully imported "kivy_deps.glew" 0.3.0
[INFO ] [deps ] Successfully imported "kivy_deps.sdl2" 0.3.1
[INFO ] [Kivy ] v2.0.0
[INFO ] [Kivy ] Installed at "C:\repos\BioController\venv\lib\site-packages\kivy\__init__.py"
[INFO ] [Python ] v3.8.6 (tags/v3.8.6:db45529, Sep 23 2020, 15:52:53) [MSC v.1927 64 bit (AMD64)]
[INFO ] [Python ] Interpreter at "C:\repos\BioController\venv\Scripts\python.exe"
[INFO ] [Factory ] 186 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_pil (img_ffpyplayer ignored)
Traceback (most recent call last):
File "C:/repos/BioController/ex_kivy_async.py", line 67, in <module>
loop.run_until_complete(root_func())
File "C:/repos/BioController/ex_kivy_async.py", line 61, in root_func
root = Builder.load_string(kv) # root widget
File "C:\repos\BioController\venv\lib\site-packages\kivy\lang\builder.py", line 404, in load_string
widget = Factory.get(parser.root.name)(__no_builder=True)
File "C:\repos\BioController\venv\lib\site-packages\kivy\factory.py", line 153, in __getattr__
module = __import__(
TypeError: __import__() takes at least 1 argument (0 given)
Process finished with exit code 1
``` | closed | 2021-03-07T03:53:34Z | 2021-03-19T00:09:42Z | https://github.com/hbldh/bleak/issues/476 | [] | BioMycoBit | 4 |
coqui-ai/TTS | pytorch | 3,218 | [Bug] --list_speaker_idx doesn't work in tts-cpu on docker for model xtts_v2 | ### Describe the bug
--list_speaker_idx doesn't work in tts-cpu on docker for model xtts_v2
### To Reproduce
1) On ubuntu-server with docker installed:
2) Run: sudo docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu:latest
3) Run: python3 TTS/server/server.py --model_name "tts_models/multilingual/multi-dataset/xtts_v2" --list_speaker_idxs
### Expected behavior
The list of speaker_idxs should be displayed.
### Logs
```shell
root@62b2743e2caf:~# python3 TTS/server/server.py --model_name "tts_models/multilingual/multi-dataset/xtts_v2" --list_speaker_idxs
usage: server.py [-h] [--list_models [LIST_MODELS]] [--model_name MODEL_NAME] [--vocoder_name VOCODER_NAME] [--config_path CONFIG_PATH] [--model_path MODEL_PATH] [--vocoder_path VOCODER_PATH] [--vocoder_config_path VOCODER_CONFIG_PATH] [--speakers_file_path SPEAKERS_FILE_PATH] [--port PORT] [--use_cuda USE_CUDA]
[--debug DEBUG] [--show_details SHOW_DETAILS]
server.py: error: unrecognized arguments: --list_speaker_idxs
root@62b2743e2caf:~# python3 TTS/server/server.py --model_name tts_models/multilingual/multi-dataset/xtts_v2 --list_speaker_idxs
usage: server.py [-h] [--list_models [LIST_MODELS]] [--model_name MODEL_NAME] [--vocoder_name VOCODER_NAME] [--config_path CONFIG_PATH] [--model_path MODEL_PATH] [--vocoder_path VOCODER_PATH] [--vocoder_config_path VOCODER_CONFIG_PATH] [--speakers_file_path SPEAKERS_FILE_PATH] [--port PORT] [--use_cuda USE_CUDA]
[--debug DEBUG] [--show_details SHOW_DETAILS]
server.py: error: unrecognized arguments: --list_speaker_idxs
```
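For what it's worth, `--list_speaker_idxs` appears to be a flag of the `tts` CLI (TTS/bin/synthesize.py) rather than of `server.py`, which would explain the "unrecognized arguments" error. A hedged sketch of listing the XTTS speakers through the Python API instead (attribute names assume a recent TTS release and are worth double-checking):
```python
from TTS.api import TTS

# Loads the same multilingual XTTS v2 model the server command used.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
print(tts.speakers)  # expected: the model's built-in speaker ids
```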
### Environment
```shell
coqui/tts-cpu for docker LTS
on ryzen 7 5700u 32gb ram, on a nvme ssd
```
### Additional context
_No response_ | closed | 2023-11-14T18:52:30Z | 2023-11-15T09:20:34Z | https://github.com/coqui-ai/TTS/issues/3218 | [
"bug"
] | maxime-fleury | 2 |
horovod/horovod | deep-learning | 3,687 | Support custom data loaders in TorchEstimator | Follow-up to #3602, to add support for customizable data modules/loaders in the TorchEstimator. | closed | 2022-09-08T15:28:18Z | 2023-02-01T06:17:20Z | https://github.com/horovod/horovod/issues/3687 | [
"enhancement"
] | leewyang | 0 |
ryfeus/lambda-packs | numpy | 30 | Getting error on uploading Pack.zip for Tesseract | START RequestId: dd896580-dd19-11e8-b337-198d982a5114 Version: $LATEST
LD_LIBRARY_PATH=/var/task/lib TESSDATA_PREFIX=/var/task ./tesseract /tmp/imgres.png /tmp/result
Start
END RequestId: dd896580-dd19-11e8-b337-198d982a5114
REPORT RequestId: dd896580-dd19-11e8-b337-198d982a5114 Duration: 3003.30 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 38 MB
2018-10-31T14:33:02.588Z dd896580-dd19-11e8-b337-198d982a5114 Task timed out after 3.00 seconds | closed | 2018-10-31T14:36:44Z | 2018-11-01T08:34:05Z | https://github.com/ryfeus/lambda-packs/issues/30 | [] | kranthijulakantiwork | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,398 | Concurrent socket connections handling. | **Increasing the performance of sockets and the connections**
Good day! I'm developing an API using the Python Flask framework and I've had to move part of the code over to socket.io. My issue is the number of connections a single worker can handle, because I need >1000 users to be connected. Is it possible to increase the number of possible connections by configuring async sockets, or will it require a server with better performance (RAM, CPU)?
Developers recommend using **NodeJS** with the same **SocketIO** library, saying that Node can run thousands of async sockets on a single worker, and eventually creating a NodeJS socket server so the Flask app would work as a client. I feel that this is a bad idea, but I need to know whether Python's implementation of the SocketIO server will give the same good performance.
If yes, what has to be done for it?
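For reference, a minimal sketch of what I mean by configuring async sockets (assuming eventlet is installed; Flask-SocketIO then serves each connection on a green thread, so one worker can hold far more concurrent sockets than with plain threads):
```python
import eventlet
eventlet.monkey_patch()  # must run before other imports that touch sockets

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, async_mode="eventlet")

@socketio.on("connect")
def on_connect():
    print("client connected")

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```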
Thanks in advance.
| closed | 2020-10-28T13:16:41Z | 2021-04-06T13:19:12Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1398 | [
"question"
] | mikebionic | 6 |
vitalik/django-ninja | rest-api | 631 | Django Forms and validators | django-ninja is awesome, but I found something that is not supported.
Django has validators that are used in models and forms. It would be very helpful to support that feature in django-ninja.
Of course, after data validation we can put the data into a form and validate it there. But it would be better to have it in a Schema class or similar.
It could validate data the regular way django-ninja does and then put the object into a Django form and validate it there. The main point is having a single format for validation errors and a single interface to get validated data, as well as reusing already-written code in forms and validators, especially when we use 3rd-party libraries.
Any thoughts about it?
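For illustration, a rough sketch of the idea (not django-ninja's API; the validator syntax assumes pydantic v2):
```python
from django import forms
from ninja import Schema
from pydantic import model_validator

class ArticleForm(forms.Form):  # a hypothetical existing Django form
    title = forms.CharField(max_length=100)

class ArticleIn(Schema):
    title: str

    @model_validator(mode="after")
    def run_django_form(self):
        # Reuse the Django form's validators after schema validation,
        # so both layers report through one error interface.
        form = ArticleForm(data=self.model_dump())
        if not form.is_valid():
            raise ValueError(form.errors.as_json())
        return self
```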
| open | 2022-12-14T22:33:43Z | 2023-08-23T10:24:12Z | https://github.com/vitalik/django-ninja/issues/631 | [] | liminspace | 4 |
iterative/dvc | data-science | 10,347 | dvc fetch: Files downloaded from remote storage (AWS S3) to the DVC cache should have mtime restored | We want to use DVC to store media files of a static page that is built with Jupyter Book (Sphinx doc). However, `dvc fetch` / `dvc pull` sets the mtime of files downloaded from the AWS S3 remote into the local DVC cache to the current time instead of the last modified time of the remote file object. This then triggers a complete rebuild of the entire documentation, consisting of >1000 pages. The files are then checked out using `dvc checkout` (or `dvc pull`, but after fetch it won't re-download anything) to the local repository using link type `symlink`. That latter step works to preserve the mtime of the object in the local DVC cache. But the download from remote storage to the local cache is the issue.
It would be great if DVC would set the mtime of the files in the cache to the last modified time of the remote storage object to help avoid the rebuild issue. Otherwise we would need to use AWS CLI or a custom script to download the remote folder to the local cache directory instead of `dvc fetch`. | open | 2024-03-08T11:49:41Z | 2024-04-02T09:37:20Z | https://github.com/iterative/dvc/issues/10347 | [] | aschuh-hf | 7 |
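An aside on the DVC request above: a hedged sketch of the kind of custom script mentioned, using boto3 to restore cache mtimes after `dvc fetch` (the bucket, prefix, and cache layout are assumptions and vary by DVC version):
```python
import os
import boto3

BUCKET = "my-dvc-remote"          # hypothetical bucket name
PREFIX = "files/md5/"             # hypothetical remote prefix
CACHE = ".dvc/cache/files/md5"    # local cache layout, version-dependent

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        local = os.path.join(CACHE, obj["Key"][len(PREFIX):])
        if os.path.exists(local):
            ts = obj["LastModified"].timestamp()
            os.utime(local, (ts, ts))  # set atime/mtime to S3 LastModified
```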
waditu/tushare | pandas | 1,427 | pro.fut_settle missing data | ```python
import tushare as ts
pro = ts.pro_api()
df = pro.fut_settle(trade_date='20200903')
```
Data for DCE (大商所), CZCE (郑商所), and CFFEX (中金所) are missing from `df`. | open | 2020-09-09T07:11:30Z | 2020-09-09T07:11:30Z | https://github.com/waditu/tushare/issues/1427 | [] | flios | 0 |
ahmedfgad/GeneticAlgorithmPython | numpy | 290 | pygad.kerasga | Hi
Why is pygad.kerasga much slower than pygad.gann?
I am using the same model specification (1 input layer, 2 hidden layers, and 1 output layer). Why is the Keras model runtime so high?
Another question: why are we not using crossover and mutation in the Keras example? Are we actually using a genetic algorithm here if both mutation and crossover are disabled?
Please help to answer. | open | 2024-04-27T04:23:12Z | 2025-01-07T22:09:21Z | https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/290 | [
"question"
] | hiteshKjindal | 1 |
graphistry/pygraphistry | pandas | 15 | better error when NaN in src/dst/node | closed | 2015-07-10T05:40:10Z | 2015-08-06T13:53:03Z | https://github.com/graphistry/pygraphistry/issues/15 | [
"bug"
] | lmeyerov | 2 |
|
coqui-ai/TTS | pytorch | 3,718 | Can't start training due to recursion depth error | ### Describe the bug
I have manually split my .wav audio file into 75 chunks of 10 seconds each and written transcriptions for them. However, when I tried to train the model I got an error: `RecursionError: maximum recursion depth exceeded while calling a Python object`. Nevertheless, I have used a similar approach on another dataset with 100 audio chunks of the same duration, and it worked.
Can you please help me fix it? Is it possible to use that amount of data? I also tried duplicating all samples to make 150. However, it didn't help!
A similar issue has been seen and not resolved: https://github.com/coqui-ai/TTS/issues/3410
### To Reproduce
Try to train on the songs
### Expected behavior
Should work
### Logs
_No response_
### Environment
```shell
SAME ON THE INSTRUCTION
```
### Additional context
_No response_ | closed | 2024-05-03T15:45:48Z | 2024-06-26T16:49:12Z | https://github.com/coqui-ai/TTS/issues/3718 | [
"bug",
"wontfix"
] | yukiarimo | 1 |
Morizeyao/GPT2-Chinese | nlp | 249 | How to join the WeChat or QQ group | Hello author, may I ask how I can join the WeChat or QQ group? | open | 2022-08-02T08:11:50Z | 2022-08-02T08:11:50Z | https://github.com/Morizeyao/GPT2-Chinese/issues/249 | [] | ipeaking | 0 |
davidsandberg/facenet | tensorflow | 521 | Align the LFW dataset | I am getting an error while aligning it.
I am trying to follow the steps for
" N in {1..4}; do python src/align/align_dataset_mtcnn.py ~/datasets/lfw/raw ~/datasets/lfw/lfw_mtcnnpy_160 --image_size 160 --margin 32 --random_order --gpu_memory_fraction 0.25 & done"
Am i supposed to run this command in command prompt using administrative mode?
I have saved the LFW datasets in the folder kt rather than lfw/raw.
Kindly guide me with the necessary steps.
I am in really need.
Hope to hear from you soon.
Thank you so much in advance. | open | 2017-11-09T07:47:13Z | 2019-11-26T18:36:28Z | https://github.com/davidsandberg/facenet/issues/521 | [] | KrishnaJoshii | 13 |
chaoss/augur | data-visualization | 2,822 | What should be possible after docker compose up |
I've been able to get the augur docker containers up and running using the Docker quick start instructions on windows as described on this page https://oss-augur.readthedocs.io/en/main/docker/quick-start.html. However, it isn't clear what to do after running `docker compose up`.
I had to use the `docker-compose-externalDB.yml` file instead of the default `docker-compose.yml` file, because I already have postgresql running locally for another project and didn't want to disable it. Docker apparently won't let me start Postgresql in Docker if there's another one running as the port is reserved.
I've gotten to a point where Docker desktop shows that rabbitmq-1, redis-1, and augur-1 containers are all up and running without error after running the command ` docker compose -f docker-compose-externalDB.yml up`. What's next? The docs don't clarify what to do next in the docker quick start scenario. I assumed either the augur CLI would be installed in the augur-1 container (it wasn't), or that I could access a front-end at http://localhost:5002/ or http://localhost:8080, but that isn't the case either.
If someone could point me in the direction of what next steps should be possible after `docker compose up`, that would help me debug as I'm not getting any errors. Thank you. | closed | 2024-06-17T03:39:54Z | 2024-10-01T22:51:03Z | https://github.com/chaoss/augur/issues/2822 | [
"installation"
] | JustinGOSSES | 2 |
pyeve/eve | flask | 1,018 | Nothing being returned on valid query. | Hey there,
I'm having a strange problem where a raw mongodb query is working fine, but the same results aren't being returned in an eve-run project.
```
db.getCollection('metabolites').find({"adducts.positive" : {"$elemMatch" : {"accurate_mass" : {"$gte" : 300, "$lte" : 301},"type" : {"$in" : ["[M+K]1+"]}}}}, {"name" : 1, "adducts.positive.$" : 1})
```
Returns 35 results.
```
?where={"adducts.positive" : {"$elemMatch" : {"type" : {"$in" : ["[M+K]1+"},"accurate_mass" : {"$lte" : 301, "$gte" : 300}}}}&projection={"name" : 1, "chemical_formula" : 1, "adducts.positive.$":1
```
Returns none! | closed | 2017-05-09T16:20:21Z | 2018-05-18T17:19:51Z | https://github.com/pyeve/eve/issues/1018 | [
"stale"
] | KeironO | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,031 | How to Perform Depth Map Rendering | Hello! How do I perform depth map rendering? | open | 2024-10-27T04:01:50Z | 2024-10-27T04:01:50Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1031 | [] | zhuchi1121 | 0 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 58 | Overseas TikTok videos seemingly can't be parsed? | I tried using the link from the documentation; I tried a few others too, and none of them can be parsed.
Scraper.tiktok() | Expecting value: line 1 column 1 (char 0) | https://www.tiktok.com/@tvamii/video/7045537727743380782
| closed | 2022-07-27T05:58:27Z | 2022-08-01T05:34:02Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/58 | [
"API Down",
"Fixed"
] | formating | 5 |
sinaptik-ai/pandas-ai | data-science | 1,125 | Unable to run locally to contribute | ### System Info
master branch (2.0.34)
Using Python 3.11
Package installation (I have tried both approaches):
* ```poetry install --all-extras --with dev ```
* ```pip install . ```
### 🐛 Describe the bug
I can get pandasai working via ```pip install pandasai``` using the PANDASAI_API_KEY. However, when I use the same API key with the poetry local installation:
```poetry install --all-extras --with dev ```
I get the following error when I run sdf.chat("message"):
```
Traceback (most recent call last):
File "/Users/jwalkinshaw/GenAI/Testing_others/Random testing/pandas/pandas-ai/pandasai/pipelines/chat/generate_chat_pipeline.py", line 307, in run
output = (self.code_generation_pipeline | self.code_execution_pipeline).run(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jwalkinshaw/GenAI/Testing_others/Random testing/pandas/pandas-ai/pandasai/pipelines/pipeline.py", line 137, in run
raise e
File "/Users/jwalkinshaw/GenAI/Testing_others/Random testing/pandas/pandas-ai/pandasai/pipelines/pipeline.py", line 101, in run
step_output = logic.execute(
^^^^^^^^^^^^^^
File "/Users/jwalkinshaw/GenAI/Testing_others/Random testing/pandas/pandas-ai/pandasai/pipelines/chat/code_generator.py", line 33, in execute
code = pipeline_context.config.llm.generate_code(input, pipeline_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jwalkinshaw/GenAI/Testing_others/Random testing/pandas/pandas-ai/pandasai/llm/base.py", line 200, in generate_code
response = self.call(instruction, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jwalkinshaw/GenAI/Testing_others/Random testing/pandas/pandas-ai/pandasai/llm/bamboo_llm.py", line 18, in call
response = self._session.post("/llm/chat", json=data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jwalkinshaw/GenAI/Testing_others/Random testing/pandas/pandas-ai/pandasai/helpers/request.py", line 37, in post
return self.make_request("POST", path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jwalkinshaw/GenAI/Testing_others/Random testing/pandas/pandas-ai/pandasai/helpers/request.py", line 71, in make_request
raise PandasAIApiCallError(data["message"])
pandasai.exceptions.PandasAIApiCallError: Unable to generate LLM response.
```
**For reference, the error does not occur during installation; it occurs at the end of the code below (the `sdf.chat` call):**
```python
import pandas as pd
from pandasai import SmartDataframe
import os
os.environ['PANDASAI_API_KEY'] ="XXXX"
df = pd.DataFrame({
"country": [
"United States",
"United Kingdom",
"France",
"Germany",
"Italy",
"Spain",
"Canada",
"Australia",
"Japan",
"China",
],
"gdp": [
19294482071552,
2891615567872,
2411255037952,
3435817336832,
1745433788416,
1181205135360,
1607402389504,
1490967855104,
4380756541440,
14631844184064,
],
"happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12],
})
sdf = SmartDataframe(df,config={"verbose": True})
sdf.chat("Return the top 5 countries by GDP")
```
Has anyone come across this issue? Am I doing something incorrectly? | closed | 2024-04-19T05:41:05Z | 2024-08-28T16:08:53Z | https://github.com/sinaptik-ai/pandas-ai/issues/1125 | [] | jackwalkin | 8 |
pydantic/pydantic | pydantic | 11,051 | The default value in the annotated type does not apply | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
Hi, I have a model that uses this construction in fastapi
```python
type Maybe[T] = Annotated[T | None, Field(None)]
```
Today I upgraded pydantic from 2.9.2 to 2.10.3 and now I get the `Field required` error. I tried rolling back the pydantic version, and the error was not there; as expected, the field can be left blank.
I have created a small example that reproduces this behavior. When using version 2.9 `field` can be left blank, whereas in version 2.10 I get the error.
### Example Code
```Python
from typing import Annotated
import uvicorn
from fastapi import FastAPI
from pydantic import Field, BaseModel
type Maybe[T] = Annotated[T | None, Field(None)]
app = FastAPI()
class Item(BaseModel):
field: Maybe[int]
@app.post("/items/")
async def create_item(item: Item):
return item
uvicorn.run(app)
```
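For what it's worth, a possible workaround sketch, assuming the regression is specific to the PEP 695 `type` alias (the inline form below is the long-standing way to attach a default through `Annotated`):
```python
from typing import Annotated
from pydantic import BaseModel, Field

class Item(BaseModel):
    # Inline annotation instead of the `type Maybe[T]` alias.
    field: Annotated[int | None, Field(None)]
```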
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.3
pydantic-core version: 2.27.1
pydantic-core build: profile=release pgo=false
install path: /Users/denis/Projects/project/backend/.venv/lib/python3.13/site-packages/pydantic
python version: 3.13.1 (main, Dec 5 2024, 14:24:26) [Clang 16.0.0 (clang-1600.0.26.3)]
platform: macOS-15.0-arm64-arm-64bit-Mach-O
related packages: fastapi-0.115.6 pydantic-settings-2.6.1 mypy-1.13.0 pyright-1.1.390 typing_extensions-4.12.2
commit: unknown
```
| closed | 2024-12-05T14:27:59Z | 2024-12-09T10:25:24Z | https://github.com/pydantic/pydantic/issues/11051 | [
"bug V2"
] | 9en9i | 7 |
paperless-ngx/paperless-ngx | machine-learning | 8,277 | [BUG] Header bar unexpectedly shifts down, covering content in Paperless-ngx file tasks view | ### Description
Currently, Paperless-ngx has a layout issue where the green header bar is unexpectedly shifted downward in the file tasks view. This behavior causes the header bar to overlap with the list of documents, obscuring parts of the content and limiting usability.
This issue seems to affect the display of file tasks, specifically when viewing the completed or pending tasks. The header bar should remain fixed at the top of the page but instead appears further down, creating a distracting overlay over the content area.
**Expected Behavior**: The header bar should remain at the top, outside the document list, without obscuring any content in the file tasks view.
**Actual Behavior**: The header bar is shifted down and overlays part of the content area.

### Steps to reproduce
1. Open Paperless-ngx and navigate to the "File Tasks" section.
2. View the list of tasks under "Completed" or "Pending."
3. Scroll down (browser on mobile / desktop)
4. Observe that the header bar is shifted downward, covering part of the list view.
### Webserver logs
```bash
_No specific web server errors were noted in the log for this layout issue._
```
### Browser logs
```bash
_No specific browser errors were noted in the console for this layout issue._
```
### Paperless-ngx version
2.13.5
### Host OS
Synology DS923+
### Installation method
Docker - official image
### System status
_No response_
### Browser
Tested on Chrome, Firefox (Ubuntu 24LTS)
### Configuration changes
_No configuration changes were made to the default setup._
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-11-14T07:35:07Z | 2024-12-15T03:19:11Z | https://github.com/paperless-ngx/paperless-ngx/issues/8277 | [
"not a bug"
] | Friedjof | 2 |
yt-dlp/yt-dlp | python | 12,018 | Unable to use :ytfav to get Liked videos. | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
I'm unable to get my liked videos with ytfav, it used to work but now it doesn't. I forgot since when it stopped working.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: [':ytfav', '--skip-download', '--flat-playlist', '--ignore-no-formats', '--cookies', '/storage/emulated/0/Tasker/Files/youtube-dlp/youtube.txt', '--extractor-args', 'youtubetab:approximate-date', '-vU']
[debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.23 from yt-dlp/yt-dlp [65cf46cdd] (pip)
[debug] Python 3.12.8 (CPython aarch64 64bit) - Linux-5.10.177-android12-9-00001-g219d8dfbba07-ab10551810-aarch64-with-libc (OpenSSL 3.3.2 3 Sep 2024, libc)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, phantomjs .
[debug] Optional libraries: brotli-1.1.0, certifi-2024.12.14, pycrypto-3.21.0, requests-2.32.3, sqlite3-3.47.2, urllib3-2.3.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.23 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.23 from yt-dlp/yt-dlp)
[debug] [youtube:favorites] Extracted SAPISID cookie
[youtube:favorites] Extracting URL: :ytfav
[youtube:tab] Extracting URL: https://www.youtube.com/playlist?list=LL
[youtube:tab] LL: Downloading webpage
WARNING: [youtube:tab] YouTube said: The playlist does not exist.
ERROR: [youtube:tab] LL: YouTube said: The playlist does not exist.
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/yt_dlp/extractor/youtube.py", line 5001, in wrapper
info_dict = func(self, url, smuggled_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/yt_dlp/extractor/youtube.py", line 7042, in _real_extract
self._extract_and_report_alerts(data, only_once=True)
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/yt_dlp/extractor/youtube.py", line 864, in _extract_and_report_alerts
return self._report_alerts(self._extract_alerts(data), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/yt_dlp/extractor/youtube.py", line 861, in _report_alerts
raise ExtractorError(f'YouTube said: {errors[-1][1]}', expected=expected)
```
| closed | 2025-01-07T06:10:54Z | 2025-01-11T16:05:04Z | https://github.com/yt-dlp/yt-dlp/issues/12018 | [
"question",
"site:youtube"
] | mqwec43as | 5 |
pytorch/vision | computer-vision | 8,677 | Deterministic order of classes and images for the Omniglot dataset | ### 🚀 The feature
By sorting the output of the `list_files()` and `list_dir()` helper functions, the order of samples in the Omniglot dataset can be made deterministic across OSes.
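A minimal sketch of the proposed change (`list_dir` and `list_files` do live in `torchvision.datasets.utils`; the wrapping shown here is only an illustration of where `sorted` would go):
```python
from torchvision.datasets.utils import list_dir, list_files

def list_dir_sorted(root: str, prefix: bool = False) -> list[str]:
    # Deterministic class order regardless of the OS directory listing.
    return sorted(list_dir(root, prefix=prefix))

def list_files_sorted(root: str, suffix: str, prefix: bool = False) -> list[str]:
    # Deterministic image order within each character class.
    return sorted(list_files(root, suffix, prefix=prefix))
```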
### Motivation, pitch
Right now, it is quite difficult to obtain reproducible accuracies across machines with the same few-shot learning setup when the Omniglot dataset is used.
### Alternatives
_No response_
### Additional context
_No response_ | open | 2024-10-09T12:05:12Z | 2024-10-09T12:10:29Z | https://github.com/pytorch/vision/issues/8677 | [] | V0XNIHILI | 0 |
babysor/MockingBird | pytorch | 217 | Demo crashes on launch | Feel free to add your own. You can still use the toolbox by recording samples yourself.
Traceback (most recent call last):
File "C:\Users\sha7dow\OneDrive - longessay\文档\GitHub\MockingBird\demo_toolbox.py", line 43, in <module>
Toolbox(**vars(args))
File "C:\Users\sha7dow\OneDrive - longessay\文档\GitHub\MockingBird\toolbox\__init__.py", line 76, in __init__
self.setup_events()
File "C:\Users\sha7dow\OneDrive - longessay\文档\GitHub\MockingBird\toolbox\__init__.py", line 113, in setup_events
self.ui.setup_audio_devices(Synthesizer.sample_rate)
File "C:\Users\sha7dow\OneDrive - longessay\文档\GitHub\MockingBird\toolbox\ui.py", line 149, in setup_audio_devices
for device in sd.query_devices():
File "C:\ProgramData\Anaconda3\envs\a\lib\site-packages\sounddevice.py", line 559, in query_devices
return DeviceList(query_devices(i)
File "C:\ProgramData\Anaconda3\envs\a\lib\site-packages\sounddevice.py", line 559, in <genexpr>
return DeviceList(query_devices(i)
File "C:\ProgramData\Anaconda3\envs\a\lib\site-packages\sounddevice.py", line 573, in query_devices
name = name_bytes.decode('utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc2 in position 6: invalid continuation byte | closed | 2021-11-14T10:20:02Z | 2022-04-03T04:00:08Z | https://github.com/babysor/MockingBird/issues/217 | [] | sha7dowXX | 7 |
stitchfix/hamilton | numpy | 244 | Cannot use @does with @extract_columns | Cannot use `@does` with `@extract_columns` when used together on a function.
# Current behavior
The code fails on DAG construction with a function defined like so:
```python
def _cross_join(**series: pd.Series) -> pd.DataFrame:
# logic...
@extract_columns('a', 'b')
@does(_cross_join)
def field1_field2_cross_join(sample_field1: pd.Series, sample_field2: pd.Series) -> pd.DataFrame:
pass
```
## Stack Traces
```python
Traceback (most recent call last):
File "/Users/stefankrawczyk/dw/user_help/does_bug/run.py", line 9, in <module>
dr = driver.Driver({}, functions)
File "/Users/stefankrawczyk/dw/hamilton/hamilton/driver.py", line 62, in __init__
raise e
File "/Users/stefankrawczyk/dw/hamilton/hamilton/driver.py", line 59, in __init__
self.graph = graph.FunctionGraph(*modules, config=config, adapter=adapter)
File "/Users/stefankrawczyk/dw/hamilton/hamilton/graph.py", line 184, in __init__
self.nodes = create_function_graph(*modules, config=self._config, adapter=adapter)
File "/Users/stefankrawczyk/dw/hamilton/hamilton/graph.py", line 92, in create_function_graph
for n in function_modifiers_base.resolve_nodes(f, config):
File "/Users/stefankrawczyk/dw/hamilton/hamilton/function_modifiers_base.py", line 356, in resolve_nodes
nodes = node_expander.transform_dag(nodes, config, fn)
File "/Users/stefankrawczyk/dw/hamilton/hamilton/function_modifiers_base.py", line 162, in transform_dag
return self.expand_node(node_, config, fn)
File "/Users/stefankrawczyk/dw/hamilton/hamilton/function_modifiers.py", line 444, in expand_node
node.Node(
File "/Users/stefankrawczyk/dw/hamilton/hamilton/node.py", line 87, in __init__
raise ValueError(
ValueError: Missing type hint for _does__fn in function field1_field2_cross_join. Please add one to fix.
```
## Steps to replicate behavior
```python
# functions.py
import pandas as pd
from hamilton.function_modifiers import does, extract_columns
def _cross_join(**series: pd.Series) -> pd.DataFrame:
return pd.DataFrame({'a': [1, 2, 3], 'b': [3, 4, 5]})
def sample_field1() -> pd.Series:
return pd.Series([38115, 71525, 84920, 25997], name='field1')
def sample_field2() -> pd.Series:
return pd.Series(['a', 'b'], name='field12')
@extract_columns('a', 'b')
@does(_cross_join)
def field1_field2_cross_join(sample_field1: pd.Series, sample_field2: pd.Series) -> pd.DataFrame:
pass
```
```python
# driver.py
from hamilton import driver
import functions
inputs = {}
dr = driver.Driver({}, functions)
# Run everything!
print(dr.execute(['a', 'b'], inputs=inputs))
```
`python driver.py`
## Library & System Information
Latest version of Hamilton 1.11.0
# Expected behavior
This should work -- the extract_columns should operate on the output of the `@does` function.
# Additional context
First identified by Baldo Faieta [in slack](https://hamilton-opensource.slack.com/archives/C03M33QB4M8/p1670972688158709).
| closed | 2022-12-14T04:38:45Z | 2022-12-18T05:43:27Z | https://github.com/stitchfix/hamilton/issues/244 | [
"bug"
] | skrawcz | 0 |
xinntao/Real-ESRGAN | pytorch | 173 | Version file lost but import in code | https://github.com/xinntao/Real-ESRGAN/blob/42110857efae8a3ae8bcf5a1de245e4542de1384/realesrgan/__init__.py#L6 | open | 2021-12-05T09:27:08Z | 2022-02-09T10:12:03Z | https://github.com/xinntao/Real-ESRGAN/issues/173 | [] | qhduan | 1 |
pytest-dev/pytest-selenium | pytest | 87 | Question: FF/Chrome unable to bypass self-signed SSL certificate | Hi,
First of all I would like to apologise, because I am not sure whether this issue is caused by selenium 3 itself or by another component; please forgive me for taking up your time reading this.
I have configured selenium 3 in a grid setup, then tried to run my existing test script. But my browsers (FF/Chrome) were unable to bypass the self-signed SSL certificate error.
However, if I configure the grid setup with selenium 2.53.1, my test script runs just fine.
I have no idea what the problem is or how to fix it. Please share your thoughts if possible, thank you.
Here is my environment setup:
- FF 47.0.1
- pytest 3.0.3
- pytest-selenium 1.5.0
- selenium 3.0.1
| closed | 2016-10-19T06:22:13Z | 2016-10-19T10:24:26Z | https://github.com/pytest-dev/pytest-selenium/issues/87 | [] | rarajabs | 1 |
jupyter-incubator/sparkmagic | jupyter | 113 | Local mode for wrapper kernels | Add a magic (`%%python`? `%%local`?) which tells the kernel to run the code locally inside the IPython kernel. This lets users access the `Out` collection to collect dataframes and perform analyses/visualizations on them locally.
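A rough sketch of such a magic using IPython's standard magic API (illustrative only, not sparkmagic's implementation):
```python
from IPython.core.magic import Magics, cell_magic, magics_class

@magics_class
class LocalMagics(Magics):
    @cell_magic
    def local(self, line, cell):
        # Run the cell in the local IPython kernel instead of shipping it
        # to the remote Spark session, so results land in the local Out.
        return self.shell.run_cell(cell)

def load_ipython_extension(ipython):
    ipython.register_magics(LocalMagics)
```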
| closed | 2016-01-13T01:47:29Z | 2016-01-13T01:50:56Z | https://github.com/jupyter-incubator/sparkmagic/issues/113 | [
"duplicate",
"kind:enhancement"
] | msftristew | 1 |
horovod/horovod | tensorflow | 3,948 | Dimensions mismatch for add & div when using NCCL | **Environment:**
1. Framework: TensorFlow
2. Framework version: 2.11
3. Horovod version: 0.26.1.8
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version: 3.10
**Bug report:**
Using the below optimizer
```python
hvd.DistributedOptimizer(tf.keras.optimizers.legacy.Adam(), average_aggregated_gradients=True,
backward_passes_per_step=1, sparse_as_dense=False, # for local gradient accumulation
num_groups=1, # for NCCL
compression=hvd.Compression.none) # data is compressed before all reduce
```
produces the below error when trained with `model.fit`
```
File "<SOME_PATH>/site-packages/keras/engine/training.py", line 1249, in train_function *
return step_function(self, iterator)
File "<SOME_PATH>/site-packages/horovod/tensorflow/__init__.py", line 536, in allreduce_grads *
reduce_ops_group = _grouped_allreduce_cond(grad_group,
File "<SOME_PATH>/site-packages/horovod/tensorflow/__init__.py", line 355, in allreduce_fn *
return grouped_allreduce(tensors, *args, process_set=process_set, **kwargs)
File "<SOME_PATH>/site-packages/horovod/tensorflow/__init__.py", line 284, in grouped_allreduce *
new_values += (values / horovod_size) if op == Average else values
File "<SOME_PATH>/site-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
return fn(*args, **kwargs)
File "<SOME_PATH>/site-packages/tensorflow/python/ops/math_ops.py", line 1442, in r_binary_op_wrapper
return func(x, y, name=name)
File "<SOME_PATH>/site-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
return fn(*args, **kwargs)
File "<SOME_PATH>/site-packages/tensorflow/python/util/dispatch.py", line 1176, in op_dispatch_handler
return dispatch_target(*args, **kwargs)
File "<SOME_PATH>/site-packages/tensorflow/python/ops/math_ops.py", line 1757, in _add_dispatch
return gen_math_ops.add_v2(x, y, name=name)
File "<SOME_PATH>/site-packages/tensorflow/python/ops/gen_math_ops.py", line 475, in add_v2
_, _, _op, _outputs = _op_def_library._apply_op_helper(
File "<SOME_PATH>/site-packages/tensorflow/python/framework/op_def_library.py", line 795, in _apply_op_helper
op = g._create_op_internal(op_type_name, inputs, dtypes=None,
File "<SOME_PATH>/site-packages/tensorflow/python/framework/func_graph.py", line 749, in _create_op_internal
return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access
File "<SOME_PATH>/site-packages/tensorflow/python/framework/ops.py", line 3798, in _create_op_internal
ret = Operation(
File "<SOME_PATH>/site-packages/tensorflow/python/framework/ops.py", line 2106, in __init__
c_op = _create_c_op(g, node_def, inputs, control_input_ops, op_def=op_def)
File "<SOME_PATH>/site-packages/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
return fn(*args, **kwargs)
File "<SOME_PATH>/site-packages/tensorflow/python/framework/ops.py", line 1967, in _create_c_op
raise ValueError(e.message)
ValueError: Dimensions must be equal, but are 0 and 10 for '{{node DistributedAdam_Allreduce/cond/add}} = AddV2[T=DT_FLOAT](DistributedAdam_Allreduce/cond/add/x, DistributedAdam_Allreduce/cond/truediv)' with input shapes: [0], [?,10].
``` | open | 2023-06-23T05:47:18Z | 2023-06-23T05:47:18Z | https://github.com/horovod/horovod/issues/3948 | [
"bug"
] | Nithanaroy | 0 |
KaiyangZhou/deep-person-reid | computer-vision | 451 | How to train on PRW (Person Re-identification in the Wild)? | open | 2021-08-06T08:28:47Z | 2021-08-06T08:28:47Z | https://github.com/KaiyangZhou/deep-person-reid/issues/451 | [] | BarryKCL | 0 |
|
dinoperovic/django-salesman | rest-api | 45 | Add PATCH endpoint for basket item "extra" data | It would be helpful if there was an endpoint one could use to patch extra key values like:
```
<form
hx-patch="{% endverbatim %}{% url 'salesman-basket-list' %}{% verbatim %}{{ ref }}/"
hx-swap="none"
>
<input type="number" name="unit_price_override" value="{{ unit_price }}" min="0" step="0.01">
<button type="submit" class="btn btn-primary" data-bs-dismiss="modal">Submit</button>
</form>
```
The result of submitting the form should be that the `unit_price_override` key in the basket item extra dict is set to the submitted value without modifying the other keys of the extra data. | open | 2024-04-23T20:32:49Z | 2024-04-23T20:32:49Z | https://github.com/dinoperovic/django-salesman/issues/45 | [] | thenewguy | 0 |
ultralytics/ultralytics | python | 18,819 | Moving comet-ml to opensource? | ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
There is a new project out recently that is open source, and it's cool for everyone and for open-source AI:
https://github.com/aimhubio/aim
Feel free to close the issue if you are not interested @glenn-jocher
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-01-22T09:41:02Z | 2025-01-22T20:03:47Z | https://github.com/ultralytics/ultralytics/issues/18819 | [
"enhancement"
] | johnnynunez | 2 |
influxdata/influxdb-client-python | jupyter | 473 | Warn on measurement starting with hashtags |
__Proposal:__
Warn when a measurement name starts with a #.
__Current behavior:__
When a measurement starts with #, it is silently ignored. When such a name is written in the line protocol, this line is ignored since the # at the start of a line marks a comment.
__Desired behavior:__
Either find a way to escape the # or warn that such a measurement cannot be used.
__Use case:__
We have measurement names generated from data that can start with a #. Having them silently go missing is confusing, and we didn't notice it at first.
| closed | 2022-07-25T07:13:12Z | 2022-08-23T11:04:29Z | https://github.com/influxdata/influxdb-client-python/issues/473 | [
"enhancement"
] | NicholasTNG | 5 |
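A minimal sketch of the kind of check the proposal asks for; the function name is illustrative and not part of the client API:
```python
import warnings

def check_measurement(name: str) -> str:
    # In line protocol, a '#' at the start of a line marks a comment, so a
    # measurement beginning with '#' makes the whole line get silently dropped.
    if name.startswith("#"):
        warnings.warn(
            f"measurement {name!r} starts with '#'; the resulting line-protocol "
            "line will be treated as a comment and ignored"
        )
    return name
```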
mlflow/mlflow | machine-learning | 14,175 | [FR] mlflow.get_experiment_by_name should take an artifact_location argument just like create_experiment | ### Willingness to contribute
Yes. I would be willing to contribute this feature with guidance from the MLflow community.
### Proposal Summary
`mlflow.get_experiment_by_name` takes a single argument, the name, and as a side effect it creates an empty and needless `./mlruns` folder. `mlflow.create_experiment` on the other hand takes an artifact_location argument.
Conceptually I can see why get_experiment_by_name takes no artifact_location - single responsibility, it doesn't need it. On the other hand, it _does_ create a pointless empty directory as a side effect, so pragmatically telling mlflow where the artifact repository is located makes sense.
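For illustration, the asymmetry in code: a sketch where the `artifact_location` argument to `create_experiment` is the existing API and the extra keyword on `get_experiment_by_name` is the proposal, not something mlflow supports today:
```python
import mlflow

# existing: the artifact location can be set when creating an experiment
exp_id = mlflow.create_experiment("demo", artifact_location="s3://bucket/mlruns")

# today: no way to pass the location here, and an empty ./mlruns may appear as a side effect
exp = mlflow.get_experiment_by_name("demo")

# proposed (hypothetical keyword):
# exp = mlflow.get_experiment_by_name("demo", artifact_location="s3://bucket/mlruns")
```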
### Motivation
> #### What is the use case for this feature?
Code that accesses an mlflow server
> #### Why is this use case valuable to support for MLflow users in general?
Nobody needs random empty directories in their projects
> #### Why is this use case valuable to support for your project(s) or organization?
We don't need random empty directories either
> #### Why is it currently difficult to achieve this use case?
Because of the way MLflow is currently coded
### Details
_No response_
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [X] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations | open | 2024-12-28T23:19:44Z | 2025-01-05T09:47:28Z | https://github.com/mlflow/mlflow/issues/14175 | [
"enhancement",
"area/tracking"
] | fritz-trawa | 5 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,036 | Some questions regarding CycleGAN testing | 1. I am not clear about the difference between the options --ntest and --num_test in test_options.py. Can you please elaborate on these two options? Also, I have observed that test.py always tests 45 or fewer images, even though neither option mentioned earlier has a default of 45. So how can I get test.py to run on all my test images?
2. Does test.py currently work only on a single GPU? I have observed that explicitly setting --gpu_ids doesn't make it work on multiple GPUs.
3. I have a model trained with --batch_size 8 and --norm instance. I test with the options --model test --no_dropout --preprocess none --norm instance. Do I also need to set --batch_size? I am a little confused about how batches work in test mode. Can you please elaborate on that issue? | open | 2020-05-20T20:43:55Z | 2020-05-24T00:24:32Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1036 | [] | debnay | 1 |
openapi-generators/openapi-python-client | fastapi | 958 | Invalid code generated for nullable discriminated union | **Describe the bug**
Given the schema below, using the generated code like:
```python
from test_client.models.demo import Demo
from test_client.models.a import A
Demo(example_union=A()).to_dict()
```
fails with:
```
Traceback (most recent call last):
File "/Users/eric/Desktop/test/test.py", line 4, in <module>
Demo(example_union=A()).to_dict()
File "/Users/eric/Desktop/test/test_client/models/demo.py", line 32, in to_dict
elif isinstance(self.example_union, Union["A", "B"]):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eric/.pyenv/versions/3.12.1/lib/python3.12/typing.py", line 1564, in __instancecheck__
return self.__subclasscheck__(type(obj))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eric/.pyenv/versions/3.12.1/lib/python3.12/typing.py", line 1568, in __subclasscheck__
if issubclass(cls, arg):
^^^^^^^^^^^^^^^^^^^^
TypeError: issubclass() arg 2 must be a class, a tuple of classes, or a union
```
**OpenAPI Spec File**
```yaml
openapi: 3.0.3
info:
title: Test
version: 0.0.0
description: test
paths: {}
components:
schemas:
Demo:
type: object
properties:
example_union:
allOf:
- $ref: '#/components/schemas/ExampleUnion'
nullable: true # <-- bug does not happen if this line is removed
A:
type: object
properties:
type:
type: string
B:
type: object
properties:
type:
type: string
ExampleUnion:
oneOf:
- $ref: '#/components/schemas/A'
- $ref: '#/components/schemas/B'
discriminator:
propertyName: type
mapping:
a: '#/components/schemas/A'
b: '#/components/schemas/B'
```
**Desktop**
- OS: macOS 14.3
- Python Version: 3.12.1
- openapi-python-client version: 0.17.2
**Additional context**
The failing generated code is:
```python
isinstance(self.example_union, Union["A", "B"])
```
Using `isinstance` on a `Union` with quoted types is not allowed:
```
>>> class A:
... pass
...
>>> class B:
... pass
...
>>> from typing import Union
>>> isinstance(None, Union[A, B])
False
>>> isinstance(None, Union["A", "B"])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/eric/.pyenv/versions/3.12.1/lib/python3.12/typing.py", line 1564, in __instancecheck__
return self.__subclasscheck__(type(obj))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eric/.pyenv/versions/3.12.1/lib/python3.12/typing.py", line 1568, in __subclasscheck__
if issubclass(cls, arg):
^^^^^^^^^^^^^^^^^^^^
TypeError: issubclass() arg 2 must be a class, a tuple of classes, or a union
``` | closed | 2024-02-09T18:53:43Z | 2024-02-20T01:11:40Z | https://github.com/openapi-generators/openapi-python-client/issues/958 | [] | codebutler | 0 |
ultralytics/ultralytics | deep-learning | 19,100 | TOTALLY WRONG `AssertionError: ERROR` | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Export
### Bug
So, I was having a ridiculously hard time trying to deploy a YOLO model using OpenVINO without Torch.
The main problem I was facing was that the export kept throwing this error:
```
Traceback (most recent call last):
File "C:\Users\lucas.sangoi\Documents\ocr-extract-data\beta\convertToOpenVino.py", line 7, in <module>
model.export(format="onnx", simplify=True, nms=True)
File "C:\Users\lucas.sangoi\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\model.py", line 738, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "C:\Users\lucas.sangoi\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\exporter.py", line 252, in __call__
validate_args(fmt, self.args, fmt_keys)
File "C:\Users\lucas.sangoi\AppData\Local\Programs\Python\Python310\lib\site-packages\ultralytics\engine\exporter.py", line 154, in validate_args
assert arg in valid_args, f"ERROR ❌️ argument '{arg}' is not supported for format='{format}'"
AssertionError: ERROR ❌️ argument 'nms' is not supported for format='onnx'
```
After a long time struggling, I decided to convert it to ONNX, and it worked fine with nms=True. But I did this on my home computer. Today, when I got to work, I ran into the same error again.
What was causing the error? I didn't realize my venv wasn't activated. Once I activated it, the export worked normally for onnx and openvino.
So, PLEASE, be VERY careful with your error handling. It should've said that a library was missing instead of saying the argument isn't supported. That kind of misleading error can waste a lot of time for someone trying to figure this out.
### Environment
The system doesn't matter in this case.
### Minimal Reproducible Example
Neither does the code.
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-02-06T12:05:28Z | 2025-02-07T03:05:25Z | https://github.com/ultralytics/ultralytics/issues/19100 | [
"exports"
] | sangoi-exe | 4 |
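For illustration only, a sketch of the friendlier assertion the reporter is asking for; this is hypothetical code, not Ultralytics' actual implementation:
```python
def validate_args(fmt, args, valid_args, pkg_version="8.x.x"):
    # Including the installed version in the message makes a stale or
    # deactivated environment much easier to spot.
    for arg in args:
        assert arg in valid_args, (
            f"argument '{arg}' is not supported for format='{fmt}' by this "
            f"install (version {pkg_version}); if the argument is documented, "
            "check that the intended environment/venv is active."
        )

validate_args("onnx", ["nms"], ["dynamic", "simplify"])  # raises with the extra hint
```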
pytest-dev/pytest-selenium | pytest | 206 | MarkInfo removed removed in pytest 4.1, breaks pytest-selenium | Version 4.1 of pytest (released yesterday) [removes the `MarkInfo` class](https://github.com/pytest-dev/pytest/pull/4564) which `pytest-selenium` imports [here](https://github.com/pytest-dev/pytest-selenium/blob/master/pytest_selenium/drivers/saucelabs.py#L8). This causes pytest to fail:
```
Traceback (most recent call last):
File "/builds/repo/repo/venv/bin/pytest", line 11, in <module>
sys.exit(main())
File "/builds/repo/repo/venv/lib/python3.6/site-packages/_pytest/config/__init__.py", line 61, in main
config = _prepareconfig(args, plugins)
File "/builds/repo/repo/venv/lib/python3.6/site-packages/_pytest/config/__init__.py", line 196, in _prepareconfig
pluginmanager=pluginmanager, args=args
File "/builds/repo/repo/venv/lib/python3.6/site-packages/pluggy/hooks.py", line 284, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/builds/repo/repo/venv/lib/python3.6/site-packages/pluggy/manager.py", line 67, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/builds/repo/repo/venv/lib/python3.6/site-packages/pluggy/manager.py", line 61, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/builds/repo/repo/venv/lib/python3.6/site-packages/pluggy/callers.py", line 203, in _multicall
gen.send(outcome)
File "/builds/repo/repo/venv/lib/python3.6/site-packages/_pytest/helpconfig.py", line 93, in pytest_cmdline_parse
config = outcome.get_result()
File "/builds/repo/repo/venv/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/builds/repo/repo/venv/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/builds/repo/repo/venv/lib/python3.6/site-packages/_pytest/config/__init__.py", line 652, in pytest_cmdline_parse
self.parse(args)
File "/builds/repo/repo/venv/lib/python3.6/site-packages/_pytest/config/__init__.py", line 838, in parse
self._preparse(args, addopts=addopts)
File "/builds/repo/repo/venv/lib/python3.6/site-packages/_pytest/config/__init__.py", line 784, in _preparse
self.pluginmanager.load_setuptools_entrypoints("pytest11")
File "/builds/repo/repo/venv/lib/python3.6/site-packages/pluggy/manager.py", line 267, in load_setuptools_entrypoints
plugin = ep.load()
File "/builds/repo/repo/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2346, in load
return self.resolve()
File "/builds/repo/repo/venv/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2352, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/builds/repo/repo/venv/lib/python3.6/site-packages/_pytest/assertion/rewrite.py", line 308, in load_module
six.exec_(co, mod.__dict__)
File "/builds/repo/repo/venv/lib/python3.6/site-packages/pytest_selenium/drivers/saucelabs.py", line 8, in <module>
from _pytest.mark import MarkInfo
ImportError: cannot import name 'MarkInfo'
```
Downgrading to pytest < 4.1 (e.g. 4.0.2) works around the problem.
Failing version info:
- python 3.6
- pytest 4.1.0
- pytest-selenium 1.15.0 | closed | 2019-01-06T23:23:00Z | 2019-07-22T17:17:00Z | https://github.com/pytest-dev/pytest-selenium/issues/206 | [
"bug"
] | yourcelf | 13 |
mwaskom/seaborn | data-science | 2,857 | kdeplot cmap legend no color | ```
sns.kdeplot(x=casual_workday, y=registered_workday, cmap="Reds")
sns.kdeplot(x=casual_non_workday, y=registered_non_workday, cmap="Blues")
plt.legend(["Workday", "Non-Workday"])
```
or
```
sns.kdeplot(x=casual_workday, y=registered_workday, cmap="Reds", label="Workday")
sns.kdeplot(x=casual_non_workday, y=registered_non_workday, cmap="Blues", label="Non-Workday")
plt.legend()
```

It turns out that the line color of the legend entries is `none`. However, the expected behavior is:

where the line color of the legend corresponds to the lightest line color of the plot. | closed | 2022-06-14T10:55:46Z | 2022-06-14T12:45:25Z | https://github.com/mwaskom/seaborn/issues/2857 | [] | yihuajack | 1 |
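A common workaround for the legend issue above is to build the legend from proxy artists, which carry an explicit color (a sketch; the colors are hand-picked to roughly match the colormaps):
```python
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

# kdeplot's contour sets have no single line color, so supply proxies instead.
handles = [Line2D([0], [0], color="darkred"), Line2D([0], [0], color="darkblue")]
plt.legend(handles, ["Workday", "Non-Workday"])
```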
iperov/DeepFaceLab | deep-learning | 546 | train H64 uses CPU instead of GPU | Hi everybody!
I'm new here, and I'm new to using DeepFaceLab.
I have a problem that I don't know how to solve, and I hope somebody here can give me some useful info!
When I'm using train H64, I see that my PC is working with the CPU instead of the GPU.
I've installed CUDA 9.2.
I was wondering if working with the GPU would be faster than the CPU...
I'd also appreciate suggestions about the best batch size to use... I wrote 20 and it worked; now I'm trying 12. I don't personally think that is the problem, but I'd really appreciate some suggestions about it.
Here are the specs of my PC (I'll try to write everything; I don't know what is useful or not :D):
windows 10
intel core i7
mainboard asus g750jz
ram 16gb
nvidia geforce gtx 880m 4gb
## Expected behavior
The process should run on the GPU, but it runs only on the CPU.
## Actual behavior
The process runs on the CPU.

## Steps to reproduce
I've tried to check device.py, but I don't really know which parts I can change and, if so, how...
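For reference, a quick way to check whether TensorFlow (which DeepFaceLab builds on) can see the GPU at all; a sketch using the TF 1.x-era API that matches this setup:
```python
# If this prints False or lists only a CPU device, TensorFlow itself cannot
# see the GPU, and the problem is the CUDA/driver setup rather than DeepFaceLab.
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())
print(device_lib.list_local_devices())
```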
## Other relevant information
I've installed CUDA 10, but CUDA 9.2 is faster: 14-15 s/iteration, while with CUDA 10 it was around 30 s...
NVIDIA drivers are updated to the latest version. | closed | 2020-01-05T02:12:43Z | 2020-03-28T05:42:17Z | https://github.com/iperov/DeepFaceLab/issues/546 | [] | Aletv90 | 4 |
pyg-team/pytorch_geometric | pytorch | 9,717 | FlopCounterMode RuntimeError with APPNP | ### 🐛 Describe the bug
```python
from torch_geometric.nn import APPNP
import torch
in_dim = 16
sample = 100
x = torch.randn(sample, in_dim)
edge_index = torch.randint(0, sample, (2, 1000))
model = APPNP(K=10, alpha=0.1)
from torch.utils.flop_counter import FlopCounterMode
with FlopCounterMode():
    model.forward(x, edge_index)
```
And the error goes
```cmd
Module FLOP % Total
-------- ------ ---------
Global 0 0%
Traceback (most recent call last):
File "/mnt/n/python-code/IgnisGraph/ttttt.py", line 12, in <module>
model.forward(x, edge_index)
File "/root/miniconda3/envs/wsl-py311-IgnisGraph/lib/python3.11/site-packages/torch_geometric/nn/conv/appnp.py", line 92, in forward
edge_index, edge_weight = gcn_norm( # yapf: disable
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/wsl-py311-IgnisGraph/lib/python3.11/site-packages/torch_geometric/nn/conv/gcn_conv.py", line 99, in gcn_norm
edge_index, edge_weight = add_remaining_self_loops(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/wsl-py311-IgnisGraph/lib/python3.11/site-packages/torch_geometric/utils/loop.py", line 629, in add_remaining_self_loops
loop_index = EdgeIndex(
^^^^^^^^^^
File "/root/miniconda3/envs/wsl-py311-IgnisGraph/lib/python3.11/site-packages/torch_geometric/edge_index.py", line 337, in __new__
out = super().__new__(cls, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Creating a new Tensor subclass EdgeIndex but the raw Tensor object is already associated to a python object of type Tensor which is not a subclass of the requested type
ERROR conda.cli.main_run:execute(125)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 97
Model name: AMD Ryzen 9 7900 12-Core Processor
Stepping: 2
CPU MHz: 3699.916
BogoMIPS: 7399.83
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 12 MiB
L3 cache: 32 MiB
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Versions of relevant libraries:
[pip3] gpytorch==1.12
[pip3] numpy==1.25.0
[pip3] pytorch-lightning==2.3.3
[pip3] torch==2.3.1
[pip3] torch_cluster==1.6.3+pt23cu121
[pip3] torch_geometric==2.5.3
[pip3] torch_scatter==2.1.2+pt23cu121
[pip3] torch_sparse==0.6.18+pt23cu121
[pip3] torch_spline_conv==1.2.2+pt23cu121
[pip3] torchaudio==2.3.1
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.18.1
[pip3] triton==2.3.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] gpytorch 1.12 pypi_0 pypi
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.25.0 pypi_0 pypi
[conda] pytorch 2.3.1 py3.11_cuda12.1_cudnn8.9.2_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-lightning 2.3.3 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-cluster 1.6.3+pt23cu121 pypi_0 pypi
[conda] torch-geometric 2.5.3 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt23cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt23cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt23cu121 pypi_0 pypi
[conda] torchaudio 2.3.1 py311_cu121 pytorch
[conda] torchmetrics 1.4.0.post0 pypi_0 pypi
[conda] torchtriton 2.3.1 py311 pytorch
[conda] torchvision 0.18.1 py311_cu121 pytorch
``` | closed | 2024-10-19T17:44:47Z | 2024-10-21T15:48:16Z | https://github.com/pyg-team/pytorch_geometric/issues/9717 | [
"bug"
] | MH-limarco | 4 |
biolab/orange3 | numpy | 5,974 | Runtime error on Ubuntu 22.04 - Could not load the Qt platform plugin "xcb" | When I execute the command `python3 -m Orange.canvas`, I get the following error:
```
Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
Aborted (core dumped)
```
The problem occurs on a fresh `Ubuntu 22.04` installation. I installed Orange via the command `pip install orange3` (pip version 22.1, Python version 3.10). | closed | 2022-05-16T15:51:32Z | 2024-11-22T02:14:44Z | https://github.com/biolab/orange3/issues/5974 | [
"bug report"
] | abertagnon | 15 |
jstrieb/github-stats | asyncio | 92 | fatal: unable to access 'https://github.com/***/': The requested URL returned error: 403 | Hi,
I am trying to replicate this work for my profile. I followed all the steps but ended up here. None of my workflows have succeeded.
<img width="1051" alt="Screenshot 2023-03-15 at 4 58 43 PM" src="https://user-images.githubusercontent.com/46529961/225296083-bed64536-5df5-4fc9-ae3b-faf3b581f2e3.png">
where the issue is expanded here:
```
0s
Run git config --global user.name "github-stats[bot]"
[main 9c0dbbc] temp commit
2 files changed, 363 insertions(+)
create mode 100644 generated/languages.svg
create mode 100644 generated/overview.svg
Switched to a new branch 'output'
rm '.gitattributes'
rm '.github/workflows/main.yml'
rm '.gitignore'
rm 'LICENSE'
rm 'README.md'
rm 'generate_images.py'
rm 'generated/languages.svg'
rm 'generated/overview.svg'
rm 'github_stats.py'
rm 'readme_images/Actions.png'
rm 'readme_images/Exclude.png'
rm 'readme_images/Forks.png'
rm 'readme_images/Token.png'
rm 'readme_images/dark.png'
rm 'readme_images/light.png'
rm 'requirements.txt'
rm 'templates/languages.svg'
rm 'templates/overview.svg'
[output (root-commit) 5ba69e1] Update generated files
2 files changed, 363 insertions(+)
create mode 100644 generated/languages.svg
create mode 100644 generated/overview.svg
remote: Permission to ***.git denied to github-actions[bot].
fatal: unable to access 'https://github.com/***/': The requested URL returned error: 403
Error: Process completed with exit code 128.
``` | open | 2023-03-15T11:29:58Z | 2023-03-15T11:29:58Z | https://github.com/jstrieb/github-stats/issues/92 | [] | pra-dan | 0 |
recommenders-team/recommenders | deep-learning | 1,232 | DKN how to save/restore model ? [ASK] | ### Description
How can I save and then restore a trained DKN model? The 00_quick example for DKN doesn't show it nor does the 02_model_content_based_filtering/dkn_deep_dive.ipynb.
### Other Comments
I prefer not to have to train it every time I want to mess with evaluating the model.
Thanks. | open | 2020-11-05T22:43:03Z | 2020-11-15T02:16:35Z | https://github.com/recommenders-team/recommenders/issues/1232 | [
"help wanted"
] | wingz1 | 1 |
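Since DKN is TensorFlow-based, a generic TF 1.x-style checkpoint round trip is one option; a minimal self-contained sketch (whether the recommenders wrapper exposes its session is an assumption to verify):
```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
w = tf.compat.v1.get_variable("w", shape=[2], initializer=tf.zeros_initializer())
saver = tf.compat.v1.train.Saver()

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    saver.save(sess, "./ckpt/demo")        # after training

with tf.compat.v1.Session() as sess:
    saver.restore(sess, "./ckpt/demo")     # weights come back without retraining
```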
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 765 | Cannot show and save the intermediate results | I am running the experiments on images with 512*256 resolution, when I run the following command
`python train.py --dataroot ./datasets/Cityscapes --name Cityscapes_exp --model pix2pix --which_model_netG resnet_9blocks --which_direction AtoB --norm instance --gpu_ids 0 --batchSize 2 --no_flip --niter 100 --niter_decay 100 --display_id 1 --display_ncols -1
`
The training process goes well, and visdom can show the loss, but it cannot show the intermediate results.
Also, the intermediate checkpoints can be saved to my disk, but the intermediate results cannot.
The strangest thing is that there is no error message.
Do you have any suggestions to fix this? Thanks. | open | 2019-09-11T15:52:31Z | 2019-09-12T18:40:11Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/765 | [] | Ha0Tang | 1 |
lukas-blecher/LaTeX-OCR | pytorch | 352 | Running latexocr in WSL2 Ubuntu | When I executed the **latexocr** command, I got the following errors:
`fused_weight_gradient_mlp_cuda` module not found. gradient accumulation fusion with weight gradient computation disabled.
xkbcommon: ERROR: failed to add default include path /home/user/mambaforge/envs/myenvs/share/X11/xkb
qt.qpa.wayland: failed to create xkb context
Sandboxing disabled by user.
Fontconfig error: Cannot load default config file: No such file: (null)
doh set to "" -- SystemOnly
Segmentation fault
The **pix2tex** command works as intended, but with the same warning:
`fused_weight_gradient_mlp_cuda` module not found. gradient accumulation fusion with weight gradient computation disabled. | open | 2023-12-27T09:23:54Z | 2023-12-27T09:23:54Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/352 | [] | lyheangUNG | 0 |
Yorko/mlcourse.ai | seaborn | 693 | some notes on MAPE and infinite values | Lecture 9 describes MAPE and other metrics. As noted by @amber4eg it's good to mention that these metrics can explode around zero. | closed | 2021-12-22T22:21:02Z | 2022-01-07T13:26:02Z | https://github.com/Yorko/mlcourse.ai/issues/693 | [
"articles"
] | Yorko | 0 |
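For context, MAPE divides by the true value, so it blows up as the target approaches zero; a tiny illustration:
```python
import numpy as np

def mape(y_true, y_pred):
    return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

print(mape(np.array([100.0]), np.array([110.0])))  # 10.0, reasonable
print(mape(np.array([0.001]), np.array([0.1])))    # 9900.0, explodes near zero
```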
Farama-Foundation/Gymnasium | api | 410 | [Proposal] Future APIs (and revisions) should include a version system (similar to environments) | ### Proposal
New APIs and new revisions of existing APIs should include a versioning system (for `Gymnasium` and all other `Farama` projects)
e.g. if a future revision of `gymnasium.vector.VectorEnv` is made, it should be called `gymnasium.vector.VectorEnv_v1`
e.g. if a new API is created for `pettingzoo` hybrid AEC/parallel, it should be called `pettingzoo.utils.env.HybridEnv_v0`
Note:
I am not proposing here:
- a new API revision
- the creation of a new API
### Motivation
It should create less confusion when a new revision of an API is created
e.g. look at the confusion between the old Gymnasium API and the new Gymnasium API,
and it should make the Shimmy docs easier to read
### Pitch
This should make the creation of new API revisions less painful.
For example, when making a new version of the PettingZoo APIs (to be more in line with the "new" Gymnasium API), the new API versioning system could be used to reduce confusion.
I do not see a downside
### Alternatives
Currently, the API versioning system is to list the version of the project at the API revision
e.g. `Gymnasium` API v21 (old), `Gymnasium` API v26 (new)
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-03-28T13:30:54Z | 2023-03-29T13:38:53Z | https://github.com/Farama-Foundation/Gymnasium/issues/410 | [
"enhancement"
] | Kallinteris-Andreas | 6 |
hack4impact/flask-base | flask | 15 | Find easier way to create first admin | See discussion at https://github.com/hack4impact/women-veterans-rock/pull/1
| closed | 2015-10-21T03:14:59Z | 2016-07-07T17:32:52Z | https://github.com/hack4impact/flask-base/issues/15 | [
"enhancement"
] | sandlerben | 3 |
ray-project/ray | machine-learning | 50,980 | [Core] calling remote function in `Future` callback breaks ray | ### What happened + What you expected to happen
When a callback installed on a future created from `ref.future()` calls a remote function, Ray seems to break.
### Versions / Dependencies
latest ray 2.43, python 3.10, ubuntu
### Reproduction script
```python
import time
import ray

ray.init()

@ray.remote
def f():
    time.sleep(1)
    return "f"

@ray.remote
def g():
    pass

def cb(fut):
    """call a ray function in callback"""
    print("calling cb")
    ray.get(g.remote())
    print("finish cb")
```
```python
ref = f.remote()
fut = ref.future()
fut.add_done_callback(cb)
```
Note "finish cb" is not printed
```python
ray.get(ref)
# calling cb
# 'f'
```
the original ref can be fetched
```python
ray.get(ref)
# 'f'
```
Now new jobs cannot be invoked; `ray.get` hangs, but Ctrl+C still works.
```python
ray.get(f.remote())
```
```python
ray.get(g.remote())
```

### Issue Severity
Medium: It is a significant difficulty but I can work around it. | open | 2025-02-28T06:45:43Z | 2025-03-22T00:58:04Z | https://github.com/ray-project/ray/issues/50980 | [
"bug",
"P1",
"core"
] | auderson | 0 |
Johnserf-Seed/TikTokDownload | api | 587 | [Feature] Could an API endpoint for parsing a user's homepage be provided? | That is, to get the real video URLs of all the videos on a user's homepage.
This would be more flexible: users could then decide for themselves, based on the homepage video links, whether to view or download each video, instead of downloading everything at once. | closed | 2023-10-27T06:11:49Z | 2024-02-24T10:08:26Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/587 | [
"已确认(confirmed)"
] | ghost | 3 |
ansible/awx | automation | 15,436 | Unable to request new `access_token` using `refresh_token` grant per documentation | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
I am writing an automation where I want to refresh an existing `access_token`. According to the documentation, the request below should return an access_token:
```bash
curl --request POST \
--url https://tower-machine.domain.local/api/o/token/ \
--header 'Authorization: Basic WklsVHR0dXRSRnFjMVJPMTlIRWxFVFJYU1pKcG5oem81YlU0N2JSNjpoRm0zN0lyQmRuNzJpRGU3WU1ZMUEzNFYybmUxaG5yRlVVRmNWWm10QmNGNEJQTW1ueGdMYTdMa3ByaVhmaHBnVVhwY1ZJc3BubEV2S0FrZkJ4RVhoNDNCNHNDWTlLZk00bUpuaGhCWWxJWENHbFNyMTNxcWIySVBPV0pqd3UxSg==' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data grant_type=refresh_token \
--data 'refresh_token= 2LQjZCRjtQTxC0IOdin8QRc2iJDCwg'
```
Instead, I am getting the following message:
```json
{
"error": "invalid_grant"
}
```
I am running Ansible Automation Platform Controller 4.2.0.
### AWX version
Ansible Automation Platform Controller 4.2.0
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [X] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
N/A
### Modifications
no
### Ansible version
N/A
### Operating system
RHEL 8
### Web browser
_No response_
### Steps to reproduce
1. Open AAP
2. Go to Access > Users > { Select user } > Tokens.
3. Click "Add", select the relevant Application, then generate and copy the Token and Refresh Token.
4. Try to generate a new `access_token` using the `refresh_token`. Request fails with "invalid_grant".
### Expected results
New `access_token` and `refresh_token`.
### Actual results
The following response is returned from the API:
```json
{
"error": "invalid_grant"
}
```
### Additional information
_No response_ | closed | 2024-08-12T03:14:30Z | 2024-08-12T03:49:39Z | https://github.com/ansible/awx/issues/15436 | [
"type:bug",
"component:api",
"needs_triage",
"community"
] | timothydilbert | 1 |
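For comparison, the same refresh-token grant expressed with Python `requests` (a sketch; the URL and credential values are placeholders taken from the report):
```python
import requests

resp = requests.post(
    "https://tower-machine.domain.local/api/o/token/",
    auth=("<client_id>", "<client_secret>"),  # the pair the Basic header encodes
    data={"grant_type": "refresh_token", "refresh_token": "<refresh_token>"},
)
print(resp.status_code, resp.json())
```
One detail worth checking: the curl example above sends `refresh_token= 2LQjZ...` with a leading space in the token value, and since the grant is compared as an exact string, that space alone would produce `invalid_grant`.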
deepfakes/faceswap | deep-learning | 993 | Run problems again | When I satisfy all the requirements and I run:
python faceswap.py extract -i ./data/trump -o ./face/trump
it raises:
Setting Faceswap backend to NVIDIA
11/10/2019 18:12:52 INFO Log level set to: INFO
but nothing happened!
Of course, I have a dataset.
What's happening? Thanks for your answer.
This issue doesn't help
https://github.com/deepfakes/faceswap/issues/930 | closed | 2020-03-18T17:22:16Z | 2020-03-29T17:27:55Z | https://github.com/deepfakes/faceswap/issues/993 | [] | Valeronich | 6 |
slackapi/bolt-python | fastapi | 988 | app.action listener should accept block_id-only constraints for bolt-js feature parity |
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The `slack_bolt` version
```
$ .venv/bin/pip freeze | grep slack
slack-bolt==1.18.0
slack-sdk==3.24.0
```
#### Python runtime version
```
$ .venv/bin/python --version
Python 3.9.16
```
#### OS info
(`sw_vers` is not valid in RHEL-related OSes)
```
$ cat /etc/redhat-release && uname -v
AlmaLinux release 9.2 (Turquoise Kodkod)
#1 SMP PREEMPT_DYNAMIC Tue Sep 12 09:28:32 EDT 2023
```
#### Steps to reproduce:
```python
@app.action( { 'type': 'block_action', 'block_id': 'response' } )
def handle_response_action(ack, client, body):
    pass
```
### Expected result:
Per the documentation...
> You can use a constraints object to listen to `block_id`s and `action_id`s (or any combination of them).
Therefore, I expected to have an action handler that responded to any action from a block with id `response`.
### Actual result:
```
Failed to run a middleware (error: 'action_id')
Traceback (most recent call last):
File "/home/darkfox/git/codeberg/darkfoxprime/FreeportTrawler/.venv/lib64/python3.9/site-packages/slack_bolt/app/app.py", line 534, in dispatch
if listener.matches(req=req, resp=resp):
File "/home/darkfox/git/codeberg/darkfoxprime/FreeportTrawler/.venv/lib64/python3.9/site-packages/slack_bolt/listener/listener.py", line 25, in matches
is_matched = matcher.matches(req, resp)
File "/home/darkfox/git/codeberg/darkfoxprime/FreeportTrawler/.venv/lib64/python3.9/site-packages/slack_bolt/listener_matcher/builtins.py", line 54, in matches
return self.func(
File "/home/darkfox/git/codeberg/darkfoxprime/FreeportTrawler/.venv/lib64/python3.9/site-packages/slack_bolt/listener_matcher/builtins.py", line 327, in func
return _block_action(constraints, body)
File "/home/darkfox/git/codeberg/darkfoxprime/FreeportTrawler/.venv/lib64/python3.9/site-packages/slack_bolt/listener_matcher/builtins.py", line 317, in _block_action
action_id_matched = _matches(constraints["action_id"], action["action_id"])
KeyError: 'action_id'
```
| closed | 2023-11-20T03:05:10Z | 2023-11-21T07:03:40Z | https://github.com/slackapi/bolt-python/issues/988 | [
"bug"
] | darkfoxprime | 5 |
NVlabs/neuralangelo | computer-vision | 38 | Problem with Docker Container | Hi! Your project is so amazing that even a person who knows nothing about coding (yes, that's me) decided to try it :) Unsurprisingly, I had some problems getting everything to work. I use WSL2 on Windows 11, and when I run the script:
```
docker run -it chenhsuanlin/colmap:3.8 /bin/bash
```
I get the following warning about the NVIDIA driver:
```
==========
== CUDA ==
==========
CUDA Version 11.8.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
Use the NVIDIA Container Toolkit to start this container with GPU support; see
https://docs.nvidia.com/datacenter/cloud-native/ .
```
But when I run the same with --gpus, like:
```
docker run --gpus all -it chenhsuanlin/colmap:3.8 /bin/bash
```
I get this:
```
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/1140653493db5ca0f2b71b42c2194624b1e9e50bd0f9f72121bf836058a77900/merged/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1: file exists: unknown.
ERRO[0000] error waiting for container:
```
What am I doing wrong? | closed | 2023-08-18T05:03:15Z | 2023-08-28T18:23:20Z | https://github.com/NVlabs/neuralangelo/issues/38 | [] | iam-machine | 20 |
python-restx/flask-restx | flask | 353 | typing support instead of marshal_with | If there is interest, I can suggest a PR. Please let me know if this is something you would be interested in adding to flask-restx.
**Describe the solution you'd like**
Just like FastAPI does, allow type hints to be used for documentation and validation automatically. This would provide a more integrated, modern, Pythonic way of using type annotations, up to the standard set by FastAPI.
That is, instead of:
```
@ns.marshal_with(todo)
def get(...):
    pass
```
We would have:
```
def get(...) -> todo:
    pass
```
Backwards compatibility would not be broken, and type hints would be used as a synonym of the current decorators if enabled.
**Describe alternatives you've considered**
marshal_with decorator
**Additional context**
| open | 2021-07-14T17:44:45Z | 2021-07-14T17:44:45Z | https://github.com/python-restx/flask-restx/issues/353 | [
"enhancement"
] | miquelvir | 0 |
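One possible shape for the proposal, sketched as a thin wrapper over the existing decorator; `marshal_by_annotation` is hypothetical and not flask-restx API:
```python
def marshal_by_annotation(ns):
    # Reuse the return annotation as the marshalling model, falling back to
    # the undecorated function when no annotation is present.
    def wrap(func):
        model = func.__annotations__.get("return")
        return ns.marshal_with(model)(func) if model is not None else func
    return wrap
```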
kizniche/Mycodo | automation | 1,340 | GNU License Window Unresponsive | ### Describe the problem/bug
Upon execution of "curl -L https://kizniche.github.io/Mycodo/install | bash" the installation downloads and launches, but hangs up on the portion where you can accept the GNU license. You cannot select "Yes" or "No" through any input key command: ex. "Y", "ENTER", "YES+ENTER", "CONFIRM", or even adding "echo y |" in front of the command did not work. The installation is now on the device, but the installation process is unexecuted.
### Versions:
- Mycodo Version: Latest version: 8.15.9
- Raspberry Pi Version: Using Raspberry Pi 3B
- Raspbian OS Version: Raspberry Pi OS with desktop and recommended software
Release date: October 10th 2023
System: 64-bit
Kernel version: 6.1
Debian version: 12 (bookworm)
Size: 2,725MB
### Reproducibility
1. Flash Debian version onto SD card.
2. Boot up new OS
3. Connect to Network
4. Launch Command window
5. Input "curl -L https://kizniche.github.io/Mycodo/install | bash" and Enter
### Expected behavior
It downloads the repository and installs Mycodo
### Additional context
I was able to circumvent the issue by going directly to the install file, "setup.sh" (now present in the file directory), opening it with the right-click text-editor option, and copying "sudo /bin/bash ~/Mycodo/install/setup.sh" from it.
I opened a terminal window and executed "sudo /bin/bash ~/Mycodo/install/setup.sh". The GNU license screen pops up, and a simple "ENTER" selects the highlighted response. The installation continues as expected. (A likely explanation: with "curl ... | bash", the script's stdin is the piped script itself, so the license dialog never sees the keyboard; running setup.sh directly leaves stdin attached to the terminal.)
| closed | 2023-10-14T13:17:44Z | 2023-10-16T01:42:32Z | https://github.com/kizniche/Mycodo/issues/1340 | [] | neutralgenius | 6 |
pyppeteer/pyppeteer | automation | 259 | page.screenshot gets stuck | Happens consistently when I open and close three pages in the Chrome browser
Probably related to https://github.com/puppeteer/puppeteer/issues/4273
I have tried to switch to PDF, but I am getting "unsupported" in my settings. I guess something is amiss in the dependencies I installed. | closed | 2021-05-14T18:06:59Z | 2021-08-15T06:55:55Z | https://github.com/pyppeteer/pyppeteer/issues/259 | [] | larytet | 1 |
HIT-SCIR/ltp | nlp | 437 | metaclass conflict | A minimal example:
```
from ltp import LTP
ltp = LTP()
sents = ltp.sent_split(["他叫汤姆去拿外衣。", "汤姆生病了。他去了医院。"])
```
Running it raises the following error:
> Traceback (most recent call last):
> File "/Users/dylan/PycharmProjects/Demo/Test.py", line 1, in <module>
> from ltp import LTP
> File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ltp/__init__.py", line 7, in <module>
> from .data import Dataset
> File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ltp/data/__init__.py", line 7, in <module>
> from .fields import Field
> File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ltp/data/fields/__init__.py", line 83, in <module>
> from .label import LabelField
> File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ltp/data/fields/label.py", line 12, in <module>
> from ltp.data.dataset import Dataset
> File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ltp/data/dataset/__init__.py", line 5, in <module>
> from .dataset import rationed_split, RandomShuffler, Dataset
> File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/ltp/data/dataset/dataset.py", line 14, in <module>
> class Dataset(torch.utils.data.dataset.Dataset, metaclass=Registrable):
> TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
Environment:
Python 3.6.3, LTP 4.0.9, macOS Catalina 10.15.7 | closed | 2020-11-13T09:00:23Z | 2020-11-16T01:58:22Z | https://github.com/HIT-SCIR/ltp/issues/437 | [] | JiaCheng-Huang | 1 |
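The error class itself is easy to reproduce in isolation; a minimal illustration (not LTP's actual classes):
```python
class MetaA(type): pass
class MetaB(type): pass

class A(metaclass=MetaA): pass
class B(metaclass=MetaB): pass

# Bases whose metaclasses are unrelated raise the same TypeError:
class C(A, B): pass  # TypeError: metaclass conflict
```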
Asabeneh/30-Days-Of-Python | flask | 406 | png on introduction is broken | png on introduction is broken | closed | 2023-06-19T10:41:42Z | 2023-06-20T10:28:35Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/406 | [] | CR00N0S | 0 |
HumanSignal/labelImg | deep-learning | 566 | extra unnecessary classes in classes.txt by default | how to remove these by default added classes by labellmg
Classes from 1-15 were added by default.
classes from 16-21 are classes used by me

| open | 2020-03-21T08:09:02Z | 2020-04-06T19:10:16Z | https://github.com/HumanSignal/labelImg/issues/566 | [] | prateekmahajan | 1 |
plotly/dash | jupyter | 3,145 | [BUG] Unable to run werkzeug profiler on python 3.12 | **Describe your context**
I'm following the instructions posted in this [Community forum blog post](https://community.plotly.com/t/performance-profiling-dash-apps-with-werkzeug/65199) to run `werkzeug` profiler in my Dash application to identify performance bottlenecks.
It doesn't work with Python 3.12: I receive the error `ValueError: Another profiling tool is already active`. I'm not running any other profiler in the application, and I've tried disabling the Linux perf profiler (`python app.py -X perf 0`).
It works fine with python <=3.11
- `pip list | grep dash`:
```
dash 2.18.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-table 5.0.0
```
**Describe the bug**
I receiving the error `ValueError: Another profiling tool is already active` when running the application with python version 3.12.
**Expected behavior**
It should work as it does with python<=3.11
| open | 2025-01-31T11:29:35Z | 2025-01-31T13:46:03Z | https://github.com/plotly/dash/issues/3145 | [
"bug",
"sev-3"
] | Farkites | 1 |
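For reference, the profiler wiring from the linked forum post looks roughly like this (a sketch; the `profile_dir` value is an arbitrary choice):
```python
import dash
from werkzeug.middleware.profiler import ProfilerMiddleware

app = dash.Dash(__name__)
# Dash exposes the underlying Flask app as `app.server`; wrap its WSGI entry point.
app.server.wsgi_app = ProfilerMiddleware(app.server.wsgi_app, profile_dir="./profiles")
```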
proplot-dev/proplot | matplotlib | 148 | Error about xticklabel in geoaxes |
### Description
I use labels=True in ax.format; although I set lonlim=(120,260), it returns all the ticks along the x axis.
### Steps to reproduce
```python
f,axs=plot.subplots(axwidth=10,proj='cyl',proj_kw={'lon_0': 180})
ax=axs[0]
for i in range(lon2.shape[1]):
    longitude=lon2[:,i]
    latitude=lat2[:,i]
    h=p2[1:,i]
    points = np.array([longitude, latitude]).T.reshape(-1, 1, 2)
    segments = np.concatenate([points[:-1], points[1:]], axis=1)
    norm = plt.Normalize(100, 800)
    lc = LineCollection(segments, cmap=cmaps.MPL_jet, norm=norm,transform=ccrs.PlateCarree())
    lc.set_array(h)
    line = ax.add_collection(lc)
tick=[-0.5,0.5]
m=ax.contour(flag,levels=tick,color='red',zorder=20)
ax.scatter(LHx2,LHy2,marker='x')
ax.format(title='1979-01-04:00',fontsize=15, land=True, landcolor='gray',latlim=(15,80),lonlim=(120,260),labels=True)
ax.colorbar(line,loc='bottom',label='unit(hPa)')
```

**Expected behavior**:
Only show the xtick labels within the lonlim range.
**Actual behavior**:
Tick labels are shown outside of the lonlim range.
### Proplot version
0.50
| closed | 2020-05-08T06:42:50Z | 2020-05-11T10:18:11Z | https://github.com/proplot-dev/proplot/issues/148 | [
"bug"
] | heylsitan | 1 |
pyjanitor-devs/pyjanitor | pandas | 945 | Read in Data From the command line into a Pandas DataFrame | I would like to propose reading data from the command line into a Pandas DataFrame. This can come in handy when processing large files, where the pre-filtering step is done on the command line (`grep/sed/...`) before pulling into Pandas; this is efficient and avoids loading all the data into Pandas before processing. Inspiration for this comes from R's data.table [fread](https://github.com/Rdatatable/data.table/wiki/Convenience-features-of-fread).
# Example API
```python
# general form: read the output of an arbitrary command line into a DataFrame
df.read_cmd("command line")
# filter for specific rows before reading into Pandas
df.read_cmd("grep -v TRIAL sum_data.txt")
# unzip file with command line before reading into Pandas
df.read_cmd('unzip -cq mtcars.csv.zip')
# can even combine several command lines
# even concatenate multiple files within the command line
# before reading into Pandas (possibly avoid concat's memory overhead)
# Unix is battle tested and performant
df.read_cmd('cat *dat.gz | gunzip | grep -v "^Day"')
```
Links:
- [fread cmd pydatatable](https://op8867555.github.io/posts/2017-10-13-use-your-unix-toolbox-with-pandas.html)
- [pandas bash fu](https://op8867555.github.io/posts/2017-10-13-use-your-unix-toolbox-with-pandas.html) | closed | 2021-10-11T02:52:18Z | 2022-01-12T10:12:00Z | https://github.com/pyjanitor-devs/pyjanitor/issues/945 | [] | samukweku | 5 |
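One possible implementation sketch (the function name comes from the proposal; everything else is an assumption):
```python
import io
import subprocess

import pandas as pd

def read_cmd(cmd: str, **read_csv_kwargs) -> pd.DataFrame:
    # Run the pipeline through the shell and parse its stdout as CSV, so
    # filtering/decompression happens before pandas ever sees the data.
    out = subprocess.run(cmd, shell=True, check=True, capture_output=True).stdout
    return pd.read_csv(io.BytesIO(out), **read_csv_kwargs)

df = read_cmd("grep -v TRIAL sum_data.txt")
```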
vitalik/django-ninja | rest-api | 1,005 | [BUG] filters.filter does not apply filters that consist of a list of values | **Describe the bug**
`filters.filter`, which applies the defined filters, does not work when a filter holds a list of values. If I change the filter to accept only a single value, it does work.
**Versions (please complete the following information):**
- Python version: Python 3.11.7
- Django version: 4.1.5
- Django-Ninja version: 1.1.0
- Pydantic version: 2.5.2
**Extra info**
_endpoint:_
```
@api.get("/business-units", response=List[BusinessUnitSchema])
def get_companies(request, filters: Query[IdFilterSchema]):
"""Returns all possible businessunits for this user"""
user = get_user_from_request(request)
return filters.filter(user.business_units.all())
```
_schema:_
```
class IdFilterSchema(FilterSchema):
    id: List[uuid.UUID] = None
```
(changing `uuid.UUID` to `str` also gives an error)
_error message:_
```
`["‘[UUID('28237095-4257-4dcc-817f-4fb322a2fa67'), UUID('cd55beb4-709b-49bf-80a9-384166f4ee55')]’ is geen geldige UUID."]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py", line 2649, in to_python
return uuid.UUID(**{input_form: value})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/uuid.py", line 175, in __init__
hex = hex.replace('urn:', '').replace('uuid:', '')
^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'replace'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/ninja/operation.py", line 107, in run
result = self.view_func(request, **values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/expert/users/api_user/api_user.py", line 564, in get_companies
return filters.filter(user.business_units.all())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/ninja/filter_schema.py", line 53, in filter
return queryset.filter(self.get_filter_expression())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 1421, in filter
return self._filter_or_exclude(False, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 1439, in _filter_or_exclude
clone._filter_or_exclude_inplace(negate, args, kwargs)
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 1446, in _filter_or_exclude_inplace
self._query.add_q(Q(*args, **kwargs))
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/query.py", line 1532, in add_q
clause, _ = self._add_q(q_object, self.used_aliases)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/query.py", line 1562, in _add_q
child_clause, needed_inner = self.build_filter(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/query.py", line 1388, in build_filter
return self._add_q(
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/query.py", line 1562, in _add_q
child_clause, needed_inner = self.build_filter(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/query.py", line 1478, in build_filter
condition = self.build_lookup(lookups, col, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/query.py", line 1303, in build_lookup
lookup = lookup_class(lhs, rhs)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/lookups.py", line 27, in __init__
self.rhs = self.get_prep_lookup()
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/lookups.py", line 341, in get_prep_lookup
return super().get_prep_lookup()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/lookups.py", line 85, in get_prep_lookup
return self.lhs.output_field.get_prep_value(self.rhs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py", line 2633, in get_prep_value
return self.to_python(value)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py", line 2651, in to_python
raise exceptions.ValidationError(
django.core.exceptions.ValidationError: ["‘[UUID('28237095-4257-4dcc-817f-4fb322a2fa67'), UUID('cd55beb4-709b-49bf-80a9-384166f4ee55')]’ is geen geldige UUID."`
``` | closed | 2023-12-14T09:34:24Z | 2024-01-19T13:51:45Z | https://github.com/vitalik/django-ninja/issues/1005 | [] | stvdrsch | 2 |
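If it helps anyone hitting this: django-ninja's filtering guide describes attaching an explicit lookup to a field, which is the usual way to make a list map to an `__in` query; a sketch for the schema above:
```python
import uuid
from typing import List, Optional

from ninja import FilterSchema
from pydantic import Field

class IdFilterSchema(FilterSchema):
    # The explicit lookup turns a list of ids into `id__in=[...]`.
    id: Optional[List[uuid.UUID]] = Field(None, q="id__in")
```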
aiortc/aiortc | asyncio | 413 | DTLS handshake timeout | I'm trying to establish a WebRTC stream to my server that is behind a firewall, via my TURN server over TCP. I think the TURN server is working fine. However, I encounter timeouts concerning DTLS.
This is the output of the server:
```
DEBUG:asyncio:Using selector: EpollSelector
======== Running on http://0.0.0.0:8080 ========
(Press CTRL+C to quit)
INFO:pc:PeerConnection(9ae6682a-962e-4765-be9f-360ffdfe6728) Created for 134.130.139.102
DEBUG:ice:Connection(0) protocol(0) connection_made(<_SelectorDatagramTransport fd=12 read=idle write=<idle, bufsize=0>>)
DEBUG:turn:turn/tcp connection_made(<_SelectorSocketTransport fd=13 read=idle write=<idle, bufsize=0>>)
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'\xcc\xf4EB\x8c\x00a\xb4[\x99\x16\xe3')
DEBUG:turn:turn/tcp < ('111.111.111.111', 5349) Message(message_method=Method.ALLOCATE, message_class=Class.ERROR, transaction_id=b'\xcc\xf4EB\x8c\x00a\xb4[\x99\x16\xe3')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'\xd5Z\x86\xff\xf5,\x19\xc3\xaf\x15bR')
DEBUG:turn:turn/tcp < ('111.111.111.111', 5349) Message(message_method=Method.ALLOCATE, message_class=Class.RESPONSE, transaction_id=b'\xd5Z\x86\xff\xf5,\x19\xc3\xaf\x15bR')
INFO:turn:TURN allocation created ('111.111.111.111', 65302)
DEBUG:ice:Connection(0) protocol(1) connection_made(<aioice.turn.TurnTransport object at 0x7f0ee7c06160>)
DEBUG:ice:controlled - new -> checking
INFO:ice:Connection(0) Check CandidatePair(('134.130.139.119', 36270) -> ('111.111.111.111', 51010)) State.FROZEN -> State.WAITING
INFO:ice:Connection(0) Check CandidatePair(('111.111.111.111', 65302) -> ('111.111.111.111', 51010)) State.FROZEN -> State.WAITING
INFO:pc:PeerConnection(9ae6682a-962e-4765-be9f-360ffdfe6728) ICE connection state is checking
INFO:ice:Connection(0) Check CandidatePair(('134.130.139.119', 36270) -> ('111.111.111.111', 51010)) State.WAITING -> State.IN_PROGRESS
DEBUG:ice:Connection(0) protocol(0) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'3\xf6\xc1\xabv\xcd\x88\xca\xcf\x10\xdf\x8d')
INFO:ice:Connection(0) Check CandidatePair(('111.111.111.111', 65302) -> ('111.111.111.111', 51010)) State.WAITING -> State.IN_PROGRESS
DEBUG:ice:Connection(0) protocol(1) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b']\x8e\xfe\xd8\xd9\x8e\xb3\xa5\x9a\x9f\x95\xd1')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.CHANNEL_BIND, message_class=Class.REQUEST, transaction_id=b'j\xfc\xb2\xdb\x14\x83C\xf9\x17\xa9-#')
DEBUG:turn:turn/tcp < ('111.111.111.111', 5349) Message(message_method=Method.CHANNEL_BIND, message_class=Class.RESPONSE, transaction_id=b'j\xfc\xb2\xdb\x14\x83C\xf9\x17\xa9-#')
INFO:turn:TURN channel bound 16384 ('111.111.111.111', 51010)
DEBUG:ice:Connection(0) protocol(1) < ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\xb1\xa3!\xc5QT\xcd@y\x07\xe8\x0b')
DEBUG:ice:Connection(0) protocol(1) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.RESPONSE, transaction_id=b'\xb1\xa3!\xc5QT\xcd@y\x07\xe8\x0b')
DEBUG:ice:Connection(0) protocol(0) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'3\xf6\xc1\xabv\xcd\x88\xca\xcf\x10\xdf\x8d')
DEBUG:ice:Connection(0) protocol(1) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b']\x8e\xfe\xd8\xd9\x8e\xb3\xa5\x9a\x9f\x95\xd1')
DEBUG:ice:Connection(0) protocol(1) < ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.RESPONSE, transaction_id=b']\x8e\xfe\xd8\xd9\x8e\xb3\xa5\x9a\x9f\x95\xd1')
INFO:ice:Connection(0) Check CandidatePair(('111.111.111.111', 65302) -> ('111.111.111.111', 51010)) State.IN_PROGRESS -> State.SUCCEEDED
INFO:ice:Connection(0) ICE completed
DEBUG:ice:controlled - checking -> completed
DEBUG:dtls:client - State.NEW -> State.CONNECTING
INFO:pc:PeerConnection(9ae6682a-962e-4765-be9f-360ffdfe6728) ICE connection state is completed
DEBUG:dtls:client x DTLS handling timeout
DEBUG:dtls:client x DTLS handling timeout
INFO:pc:PeerConnection(d2802394-df12-4785-b80b-9e891a65a630) Created for 134.130.139.102
DEBUG:ice:Connection(1) protocol(2) connection_made(<_SelectorDatagramTransport fd=15 read=idle write=<idle, bufsize=0>>)
DEBUG:turn:turn/tcp connection_made(<_SelectorSocketTransport fd=16 read=idle write=<idle, bufsize=0>>)
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'bI\x9d\xc7\x80y`3Z\xc0*%')
DEBUG:turn:turn/tcp < ('111.111.111.111', 5349) Message(message_method=Method.ALLOCATE, message_class=Class.ERROR, transaction_id=b'bI\x9d\xc7\x80y`3Z\xc0*%')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.ALLOCATE, message_class=Class.REQUEST, transaction_id=b'\xb0K\xe3\x1a\xa4kr\x15~\xac\x9b\x18')
DEBUG:turn:turn/tcp < ('111.111.111.111', 5349) Message(message_method=Method.ALLOCATE, message_class=Class.RESPONSE, transaction_id=b'\xb0K\xe3\x1a\xa4kr\x15~\xac\x9b\x18')
INFO:turn:TURN allocation created ('111.111.111.111', 52190)
DEBUG:ice:Connection(1) protocol(3) connection_made(<aioice.turn.TurnTransport object at 0x7f0ee6f8b6a0>)
DEBUG:ice:controlled - new -> checking
INFO:ice:Connection(1) Check CandidatePair(('134.130.139.119', 53243) -> ('111.111.111.111', 49518)) State.FROZEN -> State.WAITING
INFO:ice:Connection(1) Check CandidatePair(('111.111.111.111', 52190) -> ('111.111.111.111', 49518)) State.FROZEN -> State.WAITING
INFO:pc:PeerConnection(d2802394-df12-4785-b80b-9e891a65a630) ICE connection state is checking
INFO:ice:Connection(1) Check CandidatePair(('134.130.139.119', 53243) -> ('111.111.111.111', 49518)) State.WAITING -> State.IN_PROGRESS
DEBUG:ice:Connection(1) protocol(2) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\x8eB\xb8\xef\xa7\xbd\xce{\t\xcb\xa2\x83')
INFO:ice:Connection(1) Check CandidatePair(('111.111.111.111', 52190) -> ('111.111.111.111', 49518)) State.WAITING -> State.IN_PROGRESS
DEBUG:ice:Connection(1) protocol(3) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'C\x03ffJ\xef\xb4\xfeF\xf1\x01\x93')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.CHANNEL_BIND, message_class=Class.REQUEST, transaction_id=b't\xa7y/\x9av\xf1\xd3\x1b\x80\xb3\xa3')
DEBUG:turn:turn/tcp < ('111.111.111.111', 5349) Message(message_method=Method.CHANNEL_BIND, message_class=Class.RESPONSE, transaction_id=b't\xa7y/\x9av\xf1\xd3\x1b\x80\xb3\xa3')
INFO:turn:TURN channel bound 16384 ('111.111.111.111', 49518)
DEBUG:ice:Connection(1) protocol(3) < ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'C\xf5\xe4\xbd\xce\xa6\xd7\xa7\x01\x16/\xfc')
DEBUG:ice:Connection(1) protocol(3) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.RESPONSE, transaction_id=b'C\xf5\xe4\xbd\xce\xa6\xd7\xa7\x01\x16/\xfc')
DEBUG:ice:Connection(0) protocol(1) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'7\x9d\x0c~\xd9R\xa1\xfe\xb0\xd8t\xca')
DEBUG:ice:Connection(1) protocol(2) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\x8eB\xb8\xef\xa7\xbd\xce{\t\xcb\xa2\x83')
DEBUG:ice:Connection(1) protocol(3) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'C\x03ffJ\xef\xb4\xfeF\xf1\x01\x93')
DEBUG:ice:Connection(1) protocol(3) < ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.RESPONSE, transaction_id=b'C\x03ffJ\xef\xb4\xfeF\xf1\x01\x93')
INFO:ice:Connection(1) Check CandidatePair(('111.111.111.111', 52190) -> ('111.111.111.111', 49518)) State.IN_PROGRESS -> State.SUCCEEDED
INFO:ice:Connection(1) ICE completed
DEBUG:ice:controlled - checking -> completed
DEBUG:dtls:client - State.NEW -> State.CONNECTING
INFO:pc:PeerConnection(d2802394-df12-4785-b80b-9e891a65a630) ICE connection state is completed
DEBUG:dtls:client x DTLS handling timeout
DEBUG:dtls:client x DTLS handling timeout
DEBUG:dtls:client x DTLS handling timeout
DEBUG:dtls:client x DTLS handling timeout
DEBUG:ice:Connection(1) protocol(3) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'n\x19\xa1t\xd7\xb1-]\xe5\xd0v\xc2')
DEBUG:ice:Connection(0) protocol(1) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'[\x10eH7\xce\xa4\xe9\xcf\x91\xc5\xbf')
DEBUG:dtls:client x DTLS handling timeout
DEBUG:dtls:client x DTLS handling timeout
DEBUG:ice:Connection(0) protocol(1) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\x9a\xd4\x90\x8c17\xf07@\xf5\xd2\x9a')
DEBUG:ice:Connection(1) protocol(3) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'f\xf8 \xd5\xad\xc0EyW\x19Ib')
DEBUG:ice:Connection(0) protocol(1) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'l\xd5\t\x8d,<\x10Q\xa5\x9b\xe0\xfb')
DEBUG:dtls:client x DTLS handling timeout
DEBUG:ice:Connection(1) protocol(3) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\x16ZM-\x8b@\xebB\xf3\xf6\xf3?')
DEBUG:ice:Connection(1) protocol(3) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'_\xb7\x93\xb2\x13\xe6\xef\xa2;\x82>\xdc')
DEBUG:ice:Connection(0) protocol(1) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\x91\xee\xcbv\xbc\xb3\x80\x07GT\xb1\xd4')
DEBUG:dtls:client x DTLS handling timeout
DEBUG:ice:Connection(1) protocol(3) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\x0f\x82\xb5GU\x12\xe1\xd3{\xed!^')
DEBUG:ice:Connection(0) protocol(1) > ('111.111.111.111', 51010) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'\x81\xf82N\x903\xd9\xcbW\xde\xcd5')
INFO:ice:Connection(0) Consent to send expired
DEBUG:ice:Connection(0) protocol(0) connection_lost(None)
DEBUG:ice:controlled - completed -> failed
INFO:pc:PeerConnection(9ae6682a-962e-4765-be9f-360ffdfe6728) ICE connection state is failed
DEBUG:dtls:client - DTLS shutdown complete
DEBUG:ice:controlled - failed -> closed
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xef\xa3\x1dFE\x11\xa0\x1a\x03\x16s\xc2')
INFO:pc:PeerConnection(9ae6682a-962e-4765-be9f-360ffdfe6728) ICE connection state is closed
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xf8\x1f\xe7\xfe\xd2\x80\x1c\x88\xd9~\xe4\xf0')
DEBUG:dtls:client x DTLS handshake failed (connection error)
DEBUG:dtls:client - State.CONNECTING -> State.FAILED
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xef\xa3\x1dFE\x11\xa0\x1a\x03\x16s\xc2')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xf8\x1f\xe7\xfe\xd2\x80\x1c\x88\xd9~\xe4\xf0')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xef\xa3\x1dFE\x11\xa0\x1a\x03\x16s\xc2')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xf8\x1f\xe7\xfe\xd2\x80\x1c\x88\xd9~\xe4\xf0')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xef\xa3\x1dFE\x11\xa0\x1a\x03\x16s\xc2')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xf8\x1f\xe7\xfe\xd2\x80\x1c\x88\xd9~\xe4\xf0')
DEBUG:ice:Connection(1) protocol(3) > ('111.111.111.111', 49518) Message(message_method=Method.BINDING, message_class=Class.REQUEST, transaction_id=b'@\xa1s\xe8\xef\x96\x17\xf2iK\xce}')
DEBUG:dtls:client x DTLS handling timeout
INFO:ice:Connection(1) Consent to send expired
DEBUG:ice:Connection(1) protocol(2) connection_lost(None)
DEBUG:ice:controlled - completed -> failed
INFO:pc:PeerConnection(d2802394-df12-4785-b80b-9e891a65a630) ICE connection state is failed
DEBUG:dtls:client - DTLS shutdown complete
DEBUG:ice:controlled - failed -> closed
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'Kd\xa1uw\xb1p\xc9d!"\x85')
INFO:pc:PeerConnection(d2802394-df12-4785-b80b-9e891a65a630) ICE connection state is closed
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\x8f\x17/\xdc\xfc\xcd\n\xb8h\xf0\x8dK')
DEBUG:dtls:client x DTLS handshake failed (connection error)
DEBUG:dtls:client - State.CONNECTING -> State.FAILED
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'Kd\xa1uw\xb1p\xc9d!"\x85')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\x8f\x17/\xdc\xfc\xcd\n\xb8h\xf0\x8dK')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'Kd\xa1uw\xb1p\xc9d!"\x85')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\x8f\x17/\xdc\xfc\xcd\n\xb8h\xf0\x8dK')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xef\xa3\x1dFE\x11\xa0\x1a\x03\x16s\xc2')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xf8\x1f\xe7\xfe\xd2\x80\x1c\x88\xd9~\xe4\xf0')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'Kd\xa1uw\xb1p\xc9d!"\x85')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\x8f\x17/\xdc\xfc\xcd\n\xb8h\xf0\x8dK')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'Kd\xa1uw\xb1p\xc9d!"\x85')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\x8f\x17/\xdc\xfc\xcd\n\xb8h\xf0\x8dK')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xef\xa3\x1dFE\x11\xa0\x1a\x03\x16s\xc2')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xf8\x1f\xe7\xfe\xd2\x80\x1c\x88\xd9~\xe4\xf0')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'Kd\xa1uw\xb1p\xc9d!"\x85')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\x8f\x17/\xdc\xfc\xcd\n\xb8h\xf0\x8dK')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xef\xa3\x1dFE\x11\xa0\x1a\x03\x16s\xc2')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\xf8\x1f\xe7\xfe\xd2\x80\x1c\x88\xd9~\xe4\xf0')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'Kd\xa1uw\xb1p\xc9d!"\x85')
DEBUG:turn:turn/tcp > ('turn.server.com', 5349) Message(message_method=Method.REFRESH, message_class=Class.REQUEST, transaction_id=b'\x8f\x17/\xdc\xfc\xcd\n\xb8h\xf0\x8dK')
```
I think the problem is related to DTLS, because of the `DEBUG:dtls:client x DTLS handling timeout` above.
The connection then breaks due to `INFO:ice:Connection(1) Consent to send expired`.
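For reference, this is roughly how the peer connection is configured on my side (a minimal sketch; the TURN URL and credentials are placeholders, not my real values):
```python
from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

# Minimal sketch; URL and credentials are placeholders.
config = RTCConfiguration(iceServers=[
    RTCIceServer(
        urls="turns:turn.server.com:5349?transport=tcp",
        username="user",
        credential="secret",
    ),
])
pc = RTCPeerConnection(configuration=config)
```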
This is the output of the webrtc example:
```
SDP
Offer
v=0
o=mozilla...THIS_IS_SDPARTA-76.0 8662261460150179593 0 IN IP4 0.0.0.0
s=-
t=0 0
a=sendrecv
a=fingerprint:sha-256 F9:85:3C:2A:51:6C:06:35:48:20:C6:3C:BC:65:0A:9C:DF:36:23:D1:60:C0:07:89:B2:EA:DB:E5:DE:4D:AF:33
a=group:BUNDLE 0
a=ice-options:trickle
a=msid-semantic:WMS *
m=application 49518 UDP/DTLS/SCTP webrtc-datachannel
c=IN IP4 138.68.86.134
a=candidate:0 1 UDP 8331263 138.68.86.134 49518 typ relay raddr 138.68.86.134 rport 49518
a=sendrecv
a=end-of-candidates
a=ice-pwd:969e786a521db47c01be66f28bbe75f9
a=ice-ufrag:f8e94fb7
a=mid:0
a=setup:actpass
a=sctp-port:5000
a=max-message-size:1073741823
Answer
v=0
o=- 3807545692 3807545692 IN IP4 0.0.0.0
s=-
t=0 0
a=group:BUNDLE 0
a=msid-semantic:WMS *
m=application 53243 UDP/DTLS/SCTP webrtc-datachannel
c=IN IP4 134.130.139.119
a=mid:0
a=sctp-port:5000
a=max-message-size:65536
a=candidate:f97bc3953707c662fa2b1437dc13b1e3 1 udp 2130706431 134.130.139.119 53243 typ host
a=candidate:335491dbe22f1d9fb619f4e50a7f1c33 1 udp 16777215 138.68.86.134 52190 typ relay raddr 134.130.139.119 rport 49844
a=end-of-candidates
a=ice-ufrag:4Qnz
a=ice-pwd:xE8fMvNs2VWMxZOpEo8DGe
a=fingerprint:sha-256 82:EC:00:AF:CE:D3:EF:6B:F1:04:D5:3A:08:01:79:88:61:56:17:28:58:C3:31:94:73:A2:35:BF:1C:3A:EF:FE
a=setup:active
```
Is it possible to use aiortc with TURN via TCP? What may be the cause of the DTLS timeout? | closed | 2020-08-27T20:01:01Z | 2021-03-01T21:35:56Z | https://github.com/aiortc/aiortc/issues/413 | [
"question"
] | Schmiddo | 2 |
marcomusy/vedo | numpy | 1,012 | distance_to produces error if the point cloud does not contain faces | ```
p = Point((0,0,1))
C = Cube()
p.distance_to(C)
```
works fine. But
```
pc = Points(C.vertices)
p.distance_to(pc)
```
produces error:
pcloud.point_locator.SetDataSet(pcloud)
TypeError: SetDataSet argument 1: method requires a VTK object
Making it a mesh by adding an edge (or a face) makes it work again:
```
pm = Mesh([C.vertices, [[0,1]]])
p.distance_to(pm)
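# Presumably adding at least one cell (even this single degenerate edge) gives the
# object a concrete vtkPolyData that the point locator behind distance_to() accepts.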
``` | closed | 2024-01-10T00:15:59Z | 2024-01-10T14:40:35Z | https://github.com/marcomusy/vedo/issues/1012 | [] | baba-yaga | 4 |
comfyanonymous/ComfyUI | pytorch | 6,509 | Unable to Free Up Memory in a ComfyUI workflow | ### Feature Idea
I am currently working on a workflow in ComfyUI that involves loading large models, such as FluxDev. The workflow performs generation tasks where the output of one model serves as the input for another. However, I encounter an issue when attempting to load a third model; my system's memory (not VRAM) becomes fully utilized.
I would like to know if there is a way to clear memory and unload models that I no longer need within the same workflow. I have 64 GB of RAM, yet loading just the FluxDev model consumes 16 GB of it.
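For reference, outside of ComfyUI the generic PyTorch pattern for releasing a model between stages looks like the sketch below (the `nn.Linear` is just a stand-in for a real checkpoint; none of these names are ComfyUI APIs). What I am asking for is a way to trigger something equivalent from inside a workflow:
```python
import gc
import torch

# Stand-in for a large checkpoint such as FluxDev.
model = torch.nn.Linear(4096, 4096)

del model                     # drop the last Python reference to the weights
gc.collect()                  # let Python reclaim host RAM
if torch.cuda.is_available():
    torch.cuda.empty_cache()  # hand cached VRAM back to the driver
```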
### Existing Solutions
_No response_
### Other
_No response_ | open | 2025-01-18T09:49:58Z | 2025-01-18T09:49:58Z | https://github.com/comfyanonymous/ComfyUI/issues/6509 | [
"Feature"
] | oomiDimoo | 0 |
tortoise/tortoise-orm | asyncio | 1,034 | Help!! Occasional bugs with tortoise + aiomysql running on Sanic | **Describe the bug**
Version: tortoise-orm==0.17.7
Under Sanic with the aiomysql backend, ORM queries intermittently fail with KeyError (on seemingly unrelated column names such as 'images', 'username', or even 'COUNT(*)'), IndexError inside count(), and spurious MultipleObjectsReturned, as shown in the tracebacks below.
**To Reproduce**
Unable to reproduce reliably; the errors only occur occasionally.
**Expected behavior**
Queries should return model instances (or counts) without raising KeyError or IndexError.
**Additional context**
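For reference, Tortoise is wired into Sanic through the standard `register_tortoise` helper; a hedged sketch with placeholder DSN and module names:
```python
from sanic import Sanic
from tortoise.contrib.sanic import register_tortoise

app = Sanic("frontend")

# Placeholder connection string and model module; the real values differ.
register_tortoise(
    app,
    db_url="mysql://user:pass@127.0.0.1:3306/app",
    modules={"models": ["models"]},
    generate_schemas=False,
)
```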
The following tracebacks appear intermittently in the logs:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 717, in _init_from_db
setattr(self, model_field, kwargs[key])
KeyError: 'images'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/frontend/api.py", line 3296, in viewRecord
scheme_obj = await Scheme.get_or_none(uni_id=scheme_id)
File "/usr/local/lib/python3.8/dist-packages/tortoise/queryset.py", line 908, in _execute
instance_list = await self._db.executor_class(
File "/usr/local/lib/python3.8/dist-packages/tortoise/backends/base/executor.py", line 134, in execute_select
instance: "Model" = self.model._init_from_db(
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 734, in _init_from_db
setattr(self, key, meta.fields_map[key].to_python_value(value))
KeyError: 'watch_user_id'
Traceback (most recent call last):
File "/data/frontend/api.py", line 1446, in schemeList
s['sale_num'] = await Order.filter(scheme_id=s['uni_id'], status=1).count()
File "/usr/local/lib/python3.8/dist-packages/tortoise/queryset.py", line 1174, in _execute
count = list(dict(result[0]).values())[0] - self.offset
IndexError: list index out of range
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 717, in _init_from_db
setattr(self, model_field, kwargs[key])
KeyError: 'nickname'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/frontend/api.py", line 474, in getSubscribe
wx_user_obj = await WXUserInfo.get_or_none(user_id=request.ctx.uid, app_id=ACTIVE_WX_APP_ID)
File "/usr/local/lib/python3.8/dist-packages/tortoise/queryset.py", line 908, in _execute
instance_list = await self._db.executor_class(
File "/usr/local/lib/python3.8/dist-packages/tortoise/backends/base/executor.py", line 134, in execute_select
instance: "Model" = self.model._init_from_db(
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 734, in _init_from_db
setattr(self, key, meta.fields_map[key].to_python_value(value))
KeyError: 'COUNT(*)'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 717, in _init_from_db
setattr(self, model_field, kwargs[key])
KeyError: 'user_id'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/frontend/api.py", line 474, in getSubscribe
wx_user_obj = await WXUserInfo.get_or_none(user_id=request.ctx.uid, app_id=ACTIVE_WX_APP_ID)
File "/usr/local/lib/python3.8/dist-packages/tortoise/queryset.py", line 908, in _execute
instance_list = await self._db.executor_class(
File "/usr/local/lib/python3.8/dist-packages/tortoise/backends/base/executor.py", line 134, in execute_select
instance: "Model" = self.model._init_from_db(
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 734, in _init_from_db
setattr(self, key, meta.fields_map[key].to_python_value(value))
KeyError: 'username'
Traceback (most recent call last):
File "/data/frontend/api.py", line 1441, in schemeList
s['create_time'] = s['create_time'].strftime("%Y-%m-%d %H:%M:%S") if s['create_time'] is not None else ''
KeyError: 'create_time'
Traceback (most recent call last):
File "/data/frontend/api.py", line 3296, in viewRecord
scheme_obj = await Scheme.get_or_none(uni_id=scheme_id)
File "/usr/local/lib/python3.8/dist-packages/tortoise/queryset.py", line 922, in _execute
raise MultipleObjectsReturned("Multiple objects returned, expected exactly one")
tortoise.exceptions.MultipleObjectsReturned: Multiple objects returned, expected exactly one
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 717, in _init_from_db
setattr(self, model_field, kwargs[key])
KeyError: 'user_id'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/frontend/api.py", line 738, in get
order_objs = await Order.filter(scheme_id=scheme_obj.uni_id, pay_user_id=view_user_obj.id)
File "/usr/local/lib/python3.8/dist-packages/tortoise/queryset.py", line 908, in _execute
instance_list = await self._db.executor_class(
File "/usr/local/lib/python3.8/dist-packages/tortoise/backends/base/executor.py", line 134, in execute_select
instance: "Model" = self.model._init_from_db(
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 734, in _init_from_db
setattr(self, key, meta.fields_map[key].to_python_value(value))
KeyError: 'has_limit_time'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 717, in _init_from_db
setattr(self, model_field, kwargs[key])
KeyError: 'images'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/frontend/api.py", line 683, in get
scheme_objs = await Scheme.filter(uni_id=id)
File "/usr/local/lib/python3.8/dist-packages/tortoise/queryset.py", line 908, in _execute
instance_list = await self._db.executor_class(
File "/usr/local/lib/python3.8/dist-packages/tortoise/backends/base/executor.py", line 134, in execute_select
instance: "Model" = self.model._init_from_db(
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 734, in _init_from_db
setattr(self, key, meta.fields_map[key].to_python_value(value))
KeyError: 'nickname'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 717, in _init_from_db
setattr(self, model_field, kwargs[key])
KeyError: 'nickname'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/frontend/api.py", line 679, in get
view_user_objs = await UserInfo.filter(username=username)
File "/usr/local/lib/python3.8/dist-packages/tortoise/queryset.py", line 908, in _execute
instance_list = await self._db.executor_class(
File "/usr/local/lib/python3.8/dist-packages/tortoise/backends/base/executor.py", line 134, in execute_select
instance: "Model" = self.model._init_from_db(
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 734, in _init_from_db
setattr(self, key, meta.fields_map[key].to_python_value(value))
KeyError: 'has_limit_time'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 717, in _init_from_db
setattr(self, model_field, kwargs[key])
KeyError: 'user_id'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/frontend/api.py", line 474, in getSubscribe
wx_user_obj = await WXUserInfo.get_or_none(user_id=request.ctx.uid, app_id=ACTIVE_WX_APP_ID)
File "/usr/local/lib/python3.8/dist-packages/tortoise/queryset.py", line 908, in _execute
instance_list = await self._db.executor_class(
File "/usr/local/lib/python3.8/dist-packages/tortoise/backends/base/executor.py", line 134, in execute_select
instance: "Model" = self.model._init_from_db(
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 734, in _init_from_db
setattr(self, key, meta.fields_map[key].to_python_value(value))
KeyError: 'username'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/tortoise/models.py", line 717, in _init_from_db
setattr(self, model_field, kwargs[key])
KeyError: 'username'
| open | 2022-01-06T09:57:30Z | 2023-06-27T01:25:20Z | https://github.com/tortoise/tortoise-orm/issues/1034 | [] | ChangeMoreNate | 6 |
microsoft/nni | pytorch | 5,457 | Failed to create NNI experiment, error: AttributeError: 'dict' object has no attribute 'name' | **Describe the issue**:
But when I created the NNI experiment using the official example of Hyperparameter Optimization, I received the following error information:
```
Traceback (most recent call last):
File "/home/sunze/anaconda3/bin/nnictl", line 8, in <module>
sys.exit(parse_args())
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/tools/nnictl/nnictl.py", line 497, in parse_args
args.func(args)
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/tools/nnictl/launcher.py", line 91, in create_experiment
exp.start(port, debug, RunMode.Detach)
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/experiment/experiment.py", line 135, in start
self._start_impl(port, debug, run_mode, None, [])
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/experiment/experiment.py", line 94, in _start_impl
config = self.config.canonical_copy()
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/experiment/config/base.py", line 166, in canonical_copy
canon._canonicalize([])
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/experiment/config/experiment_config.py", line 124, in _canonicalize
if algo is not None and algo.name == '_none_': # type: ignore
AttributeError: 'dict' object has no attribute 'name'
```
**The NNI source code at the wrong place is as follows**:
```
def _canonicalize(self, _parents):
if self.log_level is None:
self.log_level = 'debug' if self.debug else 'info'
self.tuner_gpu_indices = utils.canonical_gpu_indices(self.tuner_gpu_indices)
for algo_type in ['tuner', 'assessor', 'advisor']:
algo = getattr(self, algo_type)
# TODO: need a more universal solution for similar problems
if isinstance(algo, dict):
# the base class should have converted it to `_AlgorithmConfig` if feasible
# it is a `dict` here means an exception was raised during the convertion attempt
# we do the convertion again to show user the error message
_AlgorithmConfig(**algo) # pylint: disable=not-a-mapping
if algo is not None and algo.name == '_none_': # type: ignore
setattr(self, algo_type, None)
```
**I found ‘_algo_’ in code should be type ‘__AlgorithmConfig_’, but when I was running, I got the ‘_dict_’ type. I think this is the root of the problem, but I don't know how to solve it. My other Ubuntu server can run normally with the same environment, but this machine reports the error on the above error. So I want to ask what to do in this situation, I can't think of other solutions anymore. Thank you very much for your answer!**
**Environment**:
- NNI version: 2.9
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 20.04
- Python version: 3.9.13
- PyTorch version: 1.12.0
- Is conda/virtualenv/venv used?: conda used
- Is running in Docker?: No
**Configuration**:
```
experimentName: demo
searchSpace:
features:
_type: choice
_value: [ 128, 256, 512, 1024 ]
lr:
_type: loguniform
_value: [ 0.0001, 0.1 ]
momentum:
_type: uniform
_value: [ 0, 1 ]
trialCommand: python model.py
trialCodeDirectory: .
trialGpuNumber: 1
trialConcurrency: 4
maxTrialNumber: 80
tuner:
name: TPE
classArgs:
optimize_mode: maximize
trainingService:
platform: local
useActiveGpu: True
maxTrialNumberPerGpu: 1
```
**The very simple code of model.py**:
```
import nni
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
params = {
'features': 512,
'lr': 0.001,
'momentum': 0,
}
optimized_params = nni.get_next_parameter()
params.update(optimized_params)
training_data = datasets.FashionMNIST(root="data", train=True, download=True, transform=ToTensor())
test_data = datasets.FashionMNIST(root="data", train=False, download=True, transform=ToTensor())
batch_size = 64
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, params['features']),
nn.ReLU(),
nn.Linear(params['features'], params['features']),
nn.ReLU(),
nn.Linear(params['features'], 10)
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
model = NeuralNetwork().to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=params['lr'], momentum=params['momentum'])
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
pred = model(X)
loss = loss_fn(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
def test(dataloader, model, loss_fn):
size = len(dataloader.dataset)
num_batches = len(dataloader)
model.eval()
test_loss, correct = 0, 0
with torch.no_grad():
for X, y in dataloader:
X, y = X.to(device), y.to(device)
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
return correct
epochs = 5
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
train(train_dataloader, model, loss_fn, optimizer)
accuracy = test(test_dataloader, model, loss_fn)
nni.report_intermediate_result(accuracy)
nni.report_final_result(accuracy)
``` | closed | 2023-03-18T12:38:13Z | 2023-03-21T04:48:49Z | https://github.com/microsoft/nni/issues/5457 | [] | sunze992 | 4 |
sammchardy/python-binance | api | 991 | Problem with binance library | After writing the test code:
`from binance.client import Client`
I get:
Traceback (most recent call last):
  File "C:\Users\Mistrz Karp\PycharmProjects\pythonProject2\cos.py", line 2, in <module>
    from binance.client import Client
ModuleNotFoundError: No module named 'binance'
I am using Python 3.9 with PyCharm on Windows 10.
When I go (in PyCharm) to External Libraries --> Python39 --> Lib --> site-packages, I can find a folder called binance which contains client.py.
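For what it's worth, the usual diagnosis here is a mismatch between the interpreter PyCharm runs and the one pip installed into. A quick sketch:
```python
import sys

# Print the interpreter PyCharm is actually running; if pip installed the
# package into a different Python, the import fails exactly like this.
print(sys.executable)

# Then, in a terminal, install into exactly that interpreter, e.g.:
#   <path printed above> -m pip install python-binance
```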
Looking for help. Thanks in advance for helping a novice.
| closed | 2021-08-14T09:53:39Z | 2021-08-16T12:40:03Z | https://github.com/sammchardy/python-binance/issues/991 | [] | karpus420 | 1 |
sgl-project/sglang | pytorch | 4,158 | [Bug] Accuracy issue with SGLang using DeepSeek-R1-AWQ | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
The inference result of a simple question such as "9.11 and 9.8 which is greater?" can often time (~4 out of 5 times) result in progressively meaningless texts as more tokens are being generated.
The model checkpoint is: https://huggingface.co/cognitivecomputations/DeepSeek-R1-AWQ
SGlang installation from pip install "sglang[all]>=0.4.3.post4" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python
### Reproduction
python3 -m sglang.launch_server --model /model_ckpt/DeepSeek-R1-AWQ --trust-remote --tp 8 --dtype float16
### Environment
sglang[all]>=0.4.3.post4
flashinfer.ai/whl/cu124/torch2.5/flashinfer-python | open | 2025-03-07T04:04:05Z | 2025-03-24T06:55:01Z | https://github.com/sgl-project/sglang/issues/4158 | [
"quant"
] | TheTinyTeddy | 5 |
chatopera/Synonyms | nlp | 10 | language detect | When processing input, detect whether the language is Chinese. If it is not, return None or raise an exception.
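A minimal sketch of the check, using the langid package proposed in the solution below:
```python
import langid

def ensure_chinese(text: str):
    lang, _score = langid.classify(text)  # e.g. ('zh', -187.7)
    if lang != "zh":
        return None  # or: raise ValueError("input is not Chinese")
    return text
```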
# solution
```
pip install langid
``` | closed | 2017-11-02T13:00:40Z | 2020-10-01T11:38:50Z | https://github.com/chatopera/Synonyms/issues/10 | [] | hailiang-wang | 1 |
huggingface/peft | pytorch | 2,283 | TypeError when inference with different LoRA adapters in the same batch | ### System Info
transformers 4.41.0
peft 0.13.2
### Who can help?
@BenjaminBossan
I tried to adapt [inference with different LoRA adapters in the same batch] to an encoder-decoder T5 model.
Specifically, I load the base model plus a first and a second LoRA adapter, and perform inference with these three configurations in the same batch. However, the errors below occurred.
BTW, does [inference with different LoRA adapters in the same batch] support beam search when using generate()?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
Code:
```python
from transformers import MT5ForConditionalGeneration
from peft import PeftModel

base_model = MT5ForConditionalGeneration.from_pretrained(base_model_path, cache_dir='cache')
peft_model = PeftModel.from_pretrained(base_model, <lora_path1>, adapter_name="l1")
peft_model.load_adapter(<lora_path2>, adapter_name="l2")

adapter_names = ["__base__", "l1", "l2"]
output = peft_model.generate(
input_ids=inputs['input_ids'],
adapter_names=adapter_names,
max_length=20,
prefix_allowed_tokens_fn=self.restrict_decode_vocab,
early_stopping=True
)
```
The error message:
```
Traceback (most recent call last):
File "/home/user/user1/GR/trainer.py", line 1025, in prediction_step
doc_ids = model.generate(
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/peft/peft_model.py", line 1972, in generate
with self._enable_peft_forward_hooks(**kwargs):
File "/home/user/anaconda3/envs/test/lib/python3.8/contextlib.py", line 113, in __enter__
python-BaseException
return next(self.gen)
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/peft/peft_model.py", line 798, in _enable_peft_forward_hooks
with self.base_model._enable_peft_forward_hooks(*args, **kwargs):
File "/home/user/anaconda3/envs/test/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/peft/tuners/lora/model.py", line 441, in _enable_peft_forward_hooks
handle = module.register_forward_pre_hook(pre_forward, with_kwargs=True)
TypeError: register_forward_pre_hook() got an unexpected keyword argument 'with_kwargs'
```
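For what it's worth, the traceback points at the installed PyTorch rather than at PEFT itself: `register_forward_pre_hook` only gained the `with_kwargs` argument in PyTorch 2.0, and the mixed-adapter hook path relies on it. A quick guard (my own sketch, not PEFT code):
```python
import torch

# register_forward_pre_hook(..., with_kwargs=True) was added in PyTorch 2.0;
# PEFT's mixed-adapter batching depends on it.
major = int(torch.__version__.split(".")[0])
if major < 2:
    raise RuntimeError(f"PyTorch {torch.__version__} predates with_kwargs support")
```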
### Expected behavior
I expect the existing function for [inference with different LoRA adapters in the same batch] to support T5 with LoRAs and work in my beam search experiments during generation. | closed | 2024-12-15T13:07:34Z | 2025-01-22T15:03:50Z | https://github.com/huggingface/peft/issues/2283 | [] | yuxiang-guo | 9 |
guohongze/adminset | django | 62 | Host information collection issue | Hi there,
First of all, I'm very glad to have come across such a great project as adminset!
However, during testing I found a small issue with disk information collection: what gets collected is not the number of physical disks in the machine. Have you considered optimizing this?
Another thing I'm quite interested in: the project uses MongoDB/Redis/MySQL. Could you provide documentation on the software architecture?
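For what it's worth, a hedged sketch of counting physical disks on Linux by filtering virtual devices out of /sys/block (the name prefixes are heuristics, not an exhaustive list):
```python
import os

def physical_disk_count() -> int:
    """Count physical disks, skipping loop/ram/zram/device-mapper/md entries."""
    virtual_prefixes = ("loop", "ram", "zram", "dm-", "md")
    return sum(
        1 for dev in os.listdir("/sys/block")
        if not dev.startswith(virtual_prefixes)
    )
```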
Finally, thank you very much for providing such a great project! | open | 2018-06-01T07:55:52Z | 2018-06-01T07:55:52Z | https://github.com/guohongze/adminset/issues/62 | [] | ixiaoyi93 | 0 |
graphql-python/flask-graphql | graphql | 24 | Subscriptions in flask-graphql | Is there any plan to implement this / are you accepting pull requests? | open | 2017-02-17T03:26:37Z | 2024-12-26T22:00:17Z | https://github.com/graphql-python/flask-graphql/issues/24 | [] | willdeuschle | 8 |
davidteather/TikTok-Api | api | 273 | [BUG] - userLikedbyUsername returns a blank list when count is greater than the amount of liked videos | **Describe the bug**
When retrieving a list of IDs with `userLikedbyUsername`, it returns `[]` when `count` is greater than the user's number of liked videos. For example, the user `tiktok` has 33 liked videos: `api.userLikedbyUsername(name, count=33)` returns an array containing all 33 of tiktok's liked videos, but `api.userLikedbyUsername(name, count=34)` returns `[]`.
**The buggy code**
```
api.userLikedbyUsername(name, count=x)  # x = any number above the number of liked videos
```
**Expected behavior**
If there are fewer than `count` TikToks, it should return all of them.
**Desktop (please complete the following information):**
- OS: Windows 10
- TikTokApi Version 3.3.1
| closed | 2020-09-21T23:24:57Z | 2020-09-22T03:23:27Z | https://github.com/davidteather/TikTok-Api/issues/273 | [
"bug"
] | Coloradohusky | 3 |
airtai/faststream | asyncio | 1,170 | Feature: Adds support for the Redis cluster | **Is your feature request related to a problem? Please describe.**
We use redis cluster at our company, but there is no cluster mode support in faststream. When trying to connect to one of the nodes and send a message, faststream displays an error
```
redis.exceptions.ResponseError: MOVED 7304 127.0.0.1:1234
```
**Describe the solution you'd like**
I would like to get support for redis cluster mode. And specifically:
1. I am most interested in working with redis streams from cluster mode, so it would be cool if you added a RedisClusterBroker that would take as an argument a list of connections to clusters
2. The method of sending a message must correctly distribute messages across clusters [(you can read about this in the redis documentation)](https://redis.io/docs/management/scaling/) and have a parameter for the name of the consumer group when the "stream" parameter is specified
3. The consumer must create a stream in the cluster and correctly receive messages from different nodes
| open | 2024-01-25T04:50:50Z | 2025-03-05T19:23:38Z | https://github.com/airtai/faststream/issues/1170 | [
"enhancement",
"Redis"
] | powersemmi | 3 |
amidaware/tacticalrmm | django | 1,683 | Include Serial Number on MacOS (like Windows does) | **Is your feature request related to a problem? Please describe.**
Currently you can't search for serial number on MacOS, only Windows. You also can't view/retrieve the serial number from the summary tab of the agent.
**Describe the solution you'd like**
The agent queries the serialnumber from the mac host and populates the serial field in the summary view, consistent with how it is displayed on Windows hosts. The serial number is then searchable, also consistent with Windows hosts.
**Describe alternatives you've considered**
Manually add the serial# to the Description field of the agent.
Add the serial as a custom field and populate with a collector script (but it won't be searchable or easily viewed)
```bash
$ ioreg -l | grep IOPlatformSerialNumber | grep -o '"IOPlatformSerialNumber" = "[^"]*"' | awk -F'"' '{print $4}'
M6CXC70G21
```
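For the Linux case mentioned under Additional context, a hedged sketch that reads the DMI-exposed serial (many distros restrict this file to root):
```python
from pathlib import Path

# DMI-exposed serial on Linux; typically requires root to read.
print(Path("/sys/class/dmi/id/product_serial").read_text().strip())
```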
**Additional context**
This would be desirable on Linux hosts. | closed | 2023-11-16T17:25:10Z | 2024-01-27T02:55:16Z | https://github.com/amidaware/tacticalrmm/issues/1683 | [
"enhancement"
] | jesse2point0 | 2 |
clovaai/donut | nlp | 263 | Dataset Loader didn't work properly on Kaggle | Good afternoon,
This morning I was trying to run Donut on Kaggle. The structure of the dataset is similar to the one defined in the documentation. However, when I try to train the model, an error occurs saying that the "ground truth" doesn't exist. Checking a sample shows that `load_dataset` recognizes the folder name as the label and ignores the metadata.jsonl file inside the folder.

I can read the jsonl file via a command, though.

I prepare the Donut with this code:
```
!git clone https://github.com/clovaai/donut.git
!cd donut && pip install .
```
Thank you for your help | closed | 2023-10-25T03:46:36Z | 2023-10-25T12:42:00Z | https://github.com/clovaai/donut/issues/263 | [] | wdprsto | 1 |
mirumee/ariadne | graphql | 244 | Upgrade to GraphQL-core v3 | I'm getting the following deprecation warning. Is this something that is already on your radar / that you are planning to resolve for the next release?
>**DeprecationWarning**: GraphQL-core-next has been discontinued. It is now released as GraphQL-core v3 and newer. | closed | 2019-09-06T08:28:05Z | 2019-12-03T14:05:00Z | https://github.com/mirumee/ariadne/issues/244 | [
"enhancement",
"help wanted",
"roadmap"
] | tlil | 3 |
tfranzel/drf-spectacular | rest-api | 1,295 | Question: Missing "Security Requirement Object" | **Describe the bug**
I configured a OpenApiAuthenticationExtension class based on OAuth2 with Scopes. All information given there are present in the Schema. But I am missing the root security element, defined in the standard as "Security Requirement Object", with a list of defined scopes. Is it correct that it is not generated or am I doing something wrong?
**Expected behavior**
A root "Security Requirement Object" should be present in the generated schema.
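For what it's worth, I believe drf-spectacular exposes a `SECURITY` setting that appends Security Requirement Objects globally; a hedged sketch (the scheme name and scopes are placeholders and must match the registered security scheme):
```python
# settings.py (sketch)
SPECTACULAR_SETTINGS = {
    "SECURITY": [{"oauth2": ["read", "write"]}],
}
```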
| open | 2024-09-20T13:41:19Z | 2024-09-20T13:41:19Z | https://github.com/tfranzel/drf-spectacular/issues/1295 | [] | ichnoweb | 0 |
tensorly/tensorly | numpy | 507 | Error encountered when using tensorly.decomposition.parafac with high rank and GPU | #### Describe the bug
I encountered an error while using the tensorly.decomposition.parafac function in my code. The issue arises when the rank is set to a value larger than one of the dimensions of the tensor and the code is executed on a GPU.
#### Steps or Code to Reproduce
```python
import torch, tensorly
tensorly.set_backend('pytorch')
for device in ['cpu', 'cuda']:
x = torch.rand([100, 100, 3], device=device)
y = tensorly.decomposition.parafac(x, 4)
print(device, x)
```
#### Expected behavior
<!--A clear and concise description of what you expected to happen.-->
#### Actual result
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument tensors in method wrapper_cat)
#### Versions
Linux-3.10.0-1160.90.1.el7.x86_64-x86_64-with-glibc2.17
Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0]
NumPy 1.23.5
SciPy 1.9.3
TensorLy 0.8.0
Backend: PyTorch (`tensorly.set_backend('pytorch')`).