repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452)
---|---|---|---|---|---|---|---|---|---|---|---|
pywinauto/pywinauto | automation | 895 | Unable to find the HP Firmware Installer elements | ## Expected Behavior
To get the elements from HP Hook Dock
## Actual Behavior
## Steps to Reproduce the Problem
1. Not able to connect to the window
2.
3.
## Short Example of Code to Demonstrate the Problem
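A minimal sketch of the kind of connection attempt being described (the backend choice and window title pattern are assumptions, not details from the report):
```python
from pywinauto import Application

# Hypothetical example -- the real window title of the HP installer is not known here.
app = Application(backend="uia").connect(title_re=".*HP.*", timeout=10)
dlg = app.top_window()
dlg.print_control_identifiers()  # dump the UI elements pywinauto can actually see
```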
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.7.4 and 32
- Platform and OS: W2k12R2
| open | 2020-02-21T09:20:28Z | 2020-02-25T01:23:08Z | https://github.com/pywinauto/pywinauto/issues/895 | [] | Dega-Python | 1 |
agronholm/anyio | asyncio | 70 | merge policy? | anyio is being picked up by a lot of libraries 🚀
I know for Trio at least, any typical project using Trio is going to have a transitive dependency on anyio. At my workplace it's the case for a commercial product that uses Trio.
Given that, **I'm concerned that anyio code changes are committed without peer review in most cases.** Could it follow the lead of trio itself, trio-websocket, and other libraries at this low level in the stack and maintain a pull request review policy?
Unit tests and high code coverage are nice, but having at least one other person look at changes can really help maintain code quality and avoid subtle and latent bugs. | closed | 2019-09-08T02:12:16Z | 2020-08-24T19:59:36Z | https://github.com/agronholm/anyio/issues/70 | [] | belm0 | 16 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 34 | Temporal Action Localization | Bro, I really admire you. Could you find a chance to work on this paper, "Rethinking the Faster R-CNN Architecture for Temporal Action Localization"? The original authors did not provide code. | closed | 2020-07-15T08:32:54Z | 2020-07-19T22:46:54Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/34 | [] | KingQino | 2 |
vimalloc/flask-jwt-extended | flask | 310 | If the default user loader is called but not defined by the user, raise an exception | I have just spent a lot of time debugging an issue where the extension did not register the user loader with JWTManager.
Therefore my routes could not identify which user was trying to use them / were not able to retrieve any information about them, even with the @jwt_required decorator.
By returning `None` we are hiding the real issue from users. Where we try to use a feature we have not enabled, we should raise an exception - something like `NotImplementedError("You have not registered this loader, please ensure it is registered with the extension")`.
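A minimal sketch of the requested behavior (illustrative only, not the extension's actual internals):
```python
# Hypothetical default loader: fail loudly instead of silently returning None.
_user_loader_callback = None  # would be set by the user loader decorator when registered

def default_user_loader(*args, **kwargs):
    if _user_loader_callback is None:
        raise NotImplementedError(
            "You have not registered this loader, "
            "please ensure it is registered with the extension"
        )
    return _user_loader_callback(*args, **kwargs)
```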
| closed | 2020-01-23T15:54:36Z | 2020-02-22T00:15:33Z | https://github.com/vimalloc/flask-jwt-extended/issues/310 | [] | callamd | 3 |
marcomusy/vedo | numpy | 296 | Scaling a volume breaks rendering when the camera orbits around the volume | Before solving #295, this issue did not exist. Now this is what happens when I scale my volume by a factor of 100.
Pyvista has the very same problem.
When you click and hold the left mouse button, VTK switches to a low-fidelity rendering to go faster, but it seems this low-fidelity version is not scaled and stays close to the origin at a scale of 1 (that's what I guess is happening).
<img width="712" alt="Screenshot 2021-01-17 at 11 58 42" src="https://user-images.githubusercontent.com/10866779/104838656-bba23d80-58bc-11eb-85c5-58fd4afac9c7.png">
Releasing left mouse button shows the volume correctly.
<img width="712" alt="Screenshot 2021-01-17 at 11 58 45" src="https://user-images.githubusercontent.com/10866779/104838662-c8269600-58bc-11eb-90ed-0c9b8e4807f0.png">
Another view of the same effect:
<img width="712" alt="Screenshot 2021-01-17 at 11 58 28" src="https://user-images.githubusercontent.com/10866779/104838678-d5438500-58bc-11eb-9d95-2956a6ed9aba.png">
<img width="712" alt="Screenshot 2021-01-17 at 11 58 33" src="https://user-images.githubusercontent.com/10866779/104838687-decced00-58bc-11eb-81b2-fc409eb41188.png">
Code to reproduce this:
```
import nrrd
import numpy as np
import vedo
volume_data, header = nrrd.read('./annotation_100.nrrd')
vol = vedo.Volume(volume_data, mapper='smart').shade(False)
vol.cmap('viridis').alpha([0,1]).addScalarBar()
vol.scale([100, 100, 100])
vol.show(axes=1)
```
Associated volume data can be found in #295.
| closed | 2021-01-17T11:10:09Z | 2021-01-17T13:12:10Z | https://github.com/marcomusy/vedo/issues/296 | [] | nantille | 7 |
lepture/authlib | django | 694 | Proxy can not be set for AsyncOAuth2Client | **Describe the bug**
AsyncOAuth2Client accepts kwargs that are passed to the httpx.AsyncClient.
httpx.AsyncClient accepts `proxy` as a kwarg.
however:
`client_kwargs = self._extract_session_request_params(kwargs)`
drops anything that is not in here:
```python
HTTPX_CLIENT_KWARGS = [
    'headers', 'cookies', 'verify', 'cert', 'http1', 'http2',
    'proxies', 'timeout', 'follow_redirects', 'limits', 'max_redirects',
    'event_hooks', 'base_url', 'transport', 'app', 'trust_env',
]
```
`proxies` afaik is used in requests or other libraries to provide a dict of protocol/host, however httpx client seems to only expect `proxy` kwarg, which is a string with protocol://host format.
**Expected behavior**
passing `proxy="http://host.tld:8080"` as kwarg to `AsyncOAuth2Client` should be passed to the underlying httpx client
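For illustration, this is the kind of call being described (a sketch; the client credentials are placeholders):
```python
from authlib.integrations.httpx_client import AsyncOAuth2Client

# Expected: the proxy kwarg reaches the underlying httpx.AsyncClient instead of being dropped.
client = AsyncOAuth2Client(
    client_id="my-client-id",
    client_secret="my-client-secret",
    proxy="http://host.tld:8080",
)
```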
**Environment:**
- OS: osx
- Python Version: 3.12.7
- Authlib Version: 1.3.2
| closed | 2024-12-19T08:39:36Z | 2025-02-28T12:23:21Z | https://github.com/lepture/authlib/issues/694 | [
"bug",
"feature request",
"client"
] | sebastian-heinz | 2 |
encode/uvicorn | asyncio | 1,855 | Type of "run" is partially unknown | When using VSCode with `pylance` in "strict" mode:
```python
from uvicorn import run
```
Warning is shown: Type of "run" is partially unknown | closed | 2023-01-31T20:45:14Z | 2023-03-10T11:42:44Z | https://github.com/encode/uvicorn/issues/1855 | [
"typing"
] | AlexanderPodorov | 7 |
ultralytics/yolov5 | deep-learning | 12,900 | How to cite and make acknowledgements for YOLO v5 in a thesis? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have used YOLO v5 code to conduct research, but how should I cite and acknowledge YOLO v5 in my thesis? Please tell me the original references of YOLO v5 that should be cited. Also, are there any specific words or statements that should be included in the acknowledgements?
### Additional
_No response_ | closed | 2024-04-09T13:32:46Z | 2024-04-10T06:50:48Z | https://github.com/ultralytics/yolov5/issues/12900 | [
"question"
] | qruiwu | 3 |
xinntao/Real-ESRGAN | pytorch | 109 | netD models fail on my system | First of all, thank you very very much for making this a reality. You are allowing me to save so many images I thought were going to be unpleasant to see.
On to my problem:
When I try to run the netD trained models, such as `RealESRGAN_x2plus_netD.pth` or `RealESRGAN_x4plus_netD.pth`, the process fails at the very beginning with a
```
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for RRDBNet:
Missing key(s) in state_dict: "conv_first.weight", "conv_first.bias", "body.0.rdb1.conv1.weight", "body.0.rdb1.conv1.bias", "body.0.rdb1.conv2.weight", "body.0.rdb1.conv2.bias", "body.0.rdb1.conv3.weight", "body.0.rdb1.conv3.bias", "body.0.rdb1.conv4.weight", "body.0.rdb1.conv4.bias", "body.0.rdb1.conv5.weight", "body.0.rdb1.conv5.bias", "body.0.rdb2.conv1.weight", "body.0.rdb2.conv1.bias", "body.0.rdb2.conv2.weight", "body.0.rdb2.conv2.bias", "body.0.rdb2.conv3.weight", "body.0.rdb2.conv3.bias", "body.0.rdb2.conv4.weight", "body.0.rdb2.conv4.bias", "body.0.rdb2.conv5.weight", "body.0.rdb2.conv5.bias", "body.0.rdb3.conv1.weight", "body.0.rdb3.conv1.bias", "body.0.rdb3.conv2.weight", "body.0.rdb3.conv2.bias", "body.0.rdb3.conv3.weight", "body.0.rdb3.conv3.bias", "body.0.rdb3.conv4.weight", "body.0.rdb3.conv4.bias", "body.0.rdb3.conv5.weight", "body.0.rdb3.conv5.bias", "body.1.rdb1.conv1.weight", "body.1.rdb1.conv1.bias", "body.1.rdb1.conv2.weight", "body.1.rdb1.conv2.bias", "body.1.rdb1.conv3.weight", "body.1.rdb1.conv3.bias", "body.1.rdb1.conv4.weight", "body.1.rdb1.conv4.bias", "body.1.rdb1.conv5.weight", "body.1.rdb1.conv5.bias", "body.1.rdb2.conv1.weight", "body.1.rdb2.conv1.bias", "body.1.rdb2.conv2.weight", "body.1.rdb2.conv2.bias", "body.1.rdb2.conv3.weight", "body.1.rdb2.conv3.bias", "body.1.rdb2.conv4.weight", "body.1.rdb2.conv4.bias", "body.1.rdb2.conv5.weight", "body.1.rdb2.conv5.bias", "body.1.rdb3.conv1.weight", "body.1.rdb3.conv1.bias", "body.1.rdb3.conv2.weight", "body.1.rdb3.conv2.bias", "body.1.rdb3.conv3.weight", "body.1.rdb3.conv3.bias", "body.1.rdb3.conv4.weight", "body.1.rdb3.conv4.bias", "body.1.rdb3.conv5.weight", "body.1.rdb3.conv5.bias", "body.2.rdb1.conv1.weight", "body.2.rdb1.conv1.bias", "body.2.rdb1.conv2.weight", "body.2.rdb1.conv2.bias", "body.2.rdb1.conv3.weight", "body.2.rdb1.conv3.bias", "body.2.rdb1.conv4.weight", "body.2.rdb1.conv4.bias", "body.2.rdb1.conv5.weight", "body.2.rdb1.conv5.bias", "body.2.rdb2.conv1.weight", "body.2.rdb2.conv1.bias", "body.2.rdb2.conv2.weight", "body.2.rdb2.conv2.bias", "body.2.rdb2.conv3.weight", "body.2.rdb2.conv3.bias", "body.2.rdb2.conv4.weight", "body.2.rdb2.conv4.bias", "body.2.rdb2.conv5.weight", "body.2.rdb2.conv5.bias", "body.2.rdb3.conv1.weight", "body.2.rdb3.conv1.bias", "body.2.rdb3.conv2.weight", "body.2.rdb3.conv2.bias", "body.2.rdb3.conv3.weight", "body.2.rdb3.conv3.bias", "body.2.rdb3.conv4.weight", "body.2.rdb3.conv4.bias", "body.2.rdb3.conv5.weight", "body.2.rdb3.conv5.bias", "body.3.rdb1.conv1.weight", "body.3.rdb1.conv1.bias", "body.3.rdb1.conv2.weight", "body.3.rdb1.conv2.bias", "body.3.rdb1.conv3.weight", "body.3.rdb1.conv3.bias", "body.3.rdb1.conv4.weight", "body.3.rdb1.conv4.bias", "body.3.rdb1.conv5.weight", "body.3.rdb1.conv5.bias", "body.3.rdb2.conv1.weight", "body.3.rdb2.conv1.bias", "body.3.rdb2.conv2.weight", "body.3.rdb2.conv2.bias", "body.3.rdb2.conv3.weight", "body.3.rdb2.conv3.bias", "body.3.rdb2.conv4.weight", "body.3.rdb2.conv4.bias", "body.3.rdb2.conv5.weight", "body.3.rdb2.conv5.bias", "body.3.rdb3.conv1.weight", "body.3.rdb3.conv1.bias", "body.3.rdb3.conv2.weight", "body.3.rdb3.conv2.bias", "body.3.rdb3.conv3.weight", "body.3.rdb3.conv3.bias", "body.3.rdb3.conv4.weight", "body.3.rdb3.conv4.bias", "body.3.rdb3.conv5.weight", "body.3.rdb3.conv5.bias", "body.4.rdb1.conv1.weight", "body.4.rdb1.conv1.bias", "body.4.rdb1.conv2.weight", "body.4.rdb1.conv2.bias", "body.4.rdb1.conv3.weight", "body.4.rdb1.conv3.bias", "body.4.rdb1.conv4.weight", "body.4.rdb1.conv4.bias", "body.4.rdb1.conv5.weight", 
"body.4.rdb1.conv5.bias", "body.4.rdb2.conv1.weight", "body.4.rdb2.conv1.bias", "body.4.rdb2.conv2.weight", "body.4.rdb2.conv2.bias", "body.4.rdb2.conv3.weight", "body.4.rdb2.conv3.bias", "body.4.rdb2.conv4.weight", "body.4.rdb2.conv4.bias", "body.4.rdb2.conv5.weight", "body.4.rdb2.conv5.bias", "body.4.rdb3.conv1.weight", "body.4.rdb3.conv1.bias", "body.4.rdb3.conv2.weight", "body.4.rdb3.conv2.bias", "body.4.rdb3.conv3.weight", "body.4.rdb3.conv3.bias", "body.4.rdb3.conv4.weight", "body.4.rdb3.conv4.bias", "body.4.rdb3.conv5.weight", "body.4.rdb3.conv5.bias", "body.5.rdb1.conv1.weight", "body.5.rdb1.conv1.bias", "body.5.rdb1.conv2.weight", "body.5.rdb1.conv2.bias", "body.5.rdb1.conv3.weight", "body.5.rdb1.conv3.bias", "body.5.rdb1.conv4.weight", "body.5.rdb1.conv4.bias", "body.5.rdb1.conv5.weight", "body.5.rdb1.conv5.bias", "body.5.rdb2.conv1.weight", "body.5.rdb2.conv1.bias", "body.5.rdb2.conv2.weight", "body.5.rdb2.conv2.bias", "body.5.rdb2.conv3.weight", "body.5.rdb2.conv3.bias", "body.5.rdb2.conv4.weight", "body.5.rdb2.conv4.bias", "body.5.rdb2.conv5.weight", "body.5.rdb2.conv5.bias", "body.5.rdb3.conv1.weight", "body.5.rdb3.conv1.bias", "body.5.rdb3.conv2.weight", "body.5.rdb3.conv2.bias", "body.5.rdb3.conv3.weight", "body.5.rdb3.conv3.bias", "body.5.rdb3.conv4.weight", "body.5.rdb3.conv4.bias", "body.5.rdb3.conv5.weight", "body.5.rdb3.conv5.bias", "body.6.rdb1.conv1.weight", "body.6.rdb1.conv1.bias", "body.6.rdb1.conv2.weight", "body.6.rdb1.conv2.bias", "body.6.rdb1.conv3.weight", "body.6.rdb1.conv3.bias", "body.6.rdb1.conv4.weight", "body.6.rdb1.conv4.bias", "body.6.rdb1.conv5.weight", "body.6.rdb1.conv5.bias", "body.6.rdb2.conv1.weight", "body.6.rdb2.conv1.bias", "body.6.rdb2.conv2.weight", "body.6.rdb2.conv2.bias", "body.6.rdb2.conv3.weight", "body.6.rdb2.conv3.bias", "body.6.rdb2.conv4.weight", "body.6.rdb2.conv4.bias", "body.6.rdb2.conv5.weight", "body.6.rdb2.conv5.bias", "body.6.rdb3.conv1.weight", "body.6.rdb3.conv1.bias", "body.6.rdb3.conv2.weight", "body.6.rdb3.conv2.bias", "body.6.rdb3.conv3.weight", "body.6.rdb3.conv3.bias", "body.6.rdb3.conv4.weight", "body.6.rdb3.conv4.bias", "body.6.rdb3.conv5.weight", "body.6.rdb3.conv5.bias", "body.7.rdb1.conv1.weight", "body.7.rdb1.conv1.bias", "body.7.rdb1.conv2.weight", "body.7.rdb1.conv2.bias", "body.7.rdb1.conv3.weight", "body.7.rdb1.conv3.bias", "body.7.rdb1.conv4.weight", "body.7.rdb1.conv4.bias", "body.7.rdb1.conv5.weight", "body.7.rdb1.conv5.bias", "body.7.rdb2.conv1.weight", "body.7.rdb2.conv1.bias", "body.7.rdb2.conv2.weight", "body.7.rdb2.conv2.bias", "body.7.rdb2.conv3.weight", "body.7.rdb2.conv3.bias", "body.7.rdb2.conv4.weight", "body.7.rdb2.conv4.bias", "body.7.rdb2.conv5.weight", "body.7.rdb2.conv5.bias", "body.7.rdb3.conv1.weight", "body.7.rdb3.conv1.bias", "body.7.rdb3.conv2.weight", "body.7.rdb3.conv2.bias", "body.7.rdb3.conv3.weight", "body.7.rdb3.conv3.bias", "body.7.rdb3.conv4.weight", "body.7.rdb3.conv4.bias", "body.7.rdb3.conv5.weight", "body.7.rdb3.conv5.bias", "body.8.rdb1.conv1.weight", "body.8.rdb1.conv1.bias", "body.8.rdb1.conv2.weight", "body.8.rdb1.conv2.bias", "body.8.rdb1.conv3.weight", "body.8.rdb1.conv3.bias", "body.8.rdb1.conv4.weight", "body.8.rdb1.conv4.bias", "body.8.rdb1.conv5.weight", "body.8.rdb1.conv5.bias", "body.8.rdb2.conv1.weight", "body.8.rdb2.conv1.bias", "body.8.rdb2.conv2.weight", "body.8.rdb2.conv2.bias", "body.8.rdb2.conv3.weight", "body.8.rdb2.conv3.bias", "body.8.rdb2.conv4.weight", "body.8.rdb2.conv4.bias", "body.8.rdb2.conv5.weight", "body.8.rdb2.conv5.bias", 
"body.8.rdb3.conv1.weight", "body.8.rdb3.conv1.bias", "body.8.rdb3.conv2.weight", "body.8.rdb3.conv2.bias", "body.8.rdb3.conv3.weight", "body.8.rdb3.conv3.bias", "body.8.rdb3.conv4.weight", "body.8.rdb3.conv4.bias", "body.8.rdb3.conv5.weight", "body.8.rdb3.conv5.bias", "body.9.rdb1.conv1.weight", "body.9.rdb1.conv1.bias", "body.9.rdb1.conv2.weight", "body.9.rdb1.conv2.bias", "body.9.rdb1.conv3.weight", "body.9.rdb1.conv3.bias", "body.9.rdb1.conv4.weight", "body.9.rdb1.conv4.bias", "body.9.rdb1.conv5.weight", "body.9.rdb1.conv5.bias", "body.9.rdb2.conv1.weight", "body.9.rdb2.conv1.bias", "body.9.rdb2.conv2.weight", "body.9.rdb2.conv2.bias", "body.9.rdb2.conv3.weight", "body.9.rdb2.conv3.bias", "body.9.rdb2.conv4.weight", "body.9.rdb2.conv4.bias", "body.9.rdb2.conv5.weight", "body.9.rdb2.conv5.bias", "body.9.rdb3.conv1.weight", "body.9.rdb3.conv1.bias", "body.9.rdb3.conv2.weight", "body.9.rdb3.conv2.bias", "body.9.rdb3.conv3.weight", "body.9.rdb3.conv3.bias", "body.9.rdb3.conv4.weight", "body.9.rdb3.conv4.bias", "body.9.rdb3.conv5.weight", "body.9.rdb3.conv5.bias", "body.10.rdb1.conv1.weight", "body.10.rdb1.conv1.bias", "body.10.rdb1.conv2.weight", "body.10.rdb1.conv2.bias", "body.10.rdb1.conv3.weight", "body.10.rdb1.conv3.bias", "body.10.rdb1.conv4.weight", "body.10.rdb1.conv4.bias", "body.10.rdb1.conv5.weight", "body.10.rdb1.conv5.bias", "body.10.rdb2.conv1.weight", "body.10.rdb2.conv1.bias", "body.10.rdb2.conv2.weight", "body.10.rdb2.conv2.bias", "body.10.rdb2.conv3.weight", "body.10.rdb2.conv3.bias", "body.10.rdb2.conv4.weight", "body.10.rdb2.conv4.bias", "body.10.rdb2.conv5.weight", "body.10.rdb2.conv5.bias", "body.10.rdb3.conv1.weight", "body.10.rdb3.conv1.bias", "body.10.rdb3.conv2.weight", "body.10.rdb3.conv2.bias", "body.10.rdb3.conv3.weight", "body.10.rdb3.conv3.bias", "body.10.rdb3.conv4.weight", "body.10.rdb3.conv4.bias", "body.10.rdb3.conv5.weight", "body.10.rdb3.conv5.bias", "body.11.rdb1.conv1.weight", "body.11.rdb1.conv1.bias", "body.11.rdb1.conv2.weight", "body.11.rdb1.conv2.bias", "body.11.rdb1.conv3.weight", "body.11.rdb1.conv3.bias", "body.11.rdb1.conv4.weight", "body.11.rdb1.conv4.bias", "body.11.rdb1.conv5.weight", "body.11.rdb1.conv5.bias", "body.11.rdb2.conv1.weight", "body.11.rdb2.conv1.bias", "body.11.rdb2.conv2.weight", "body.11.rdb2.conv2.bias", "body.11.rdb2.conv3.weight", "body.11.rdb2.conv3.bias", "body.11.rdb2.conv4.weight", "body.11.rdb2.conv4.bias", "body.11.rdb2.conv5.weight", "body.11.rdb2.conv5.bias", "body.11.rdb3.conv1.weight", "body.11.rdb3.conv1.bias", "body.11.rdb3.conv2.weight", "body.11.rdb3.conv2.bias", "body.11.rdb3.conv3.weight", "body.11.rdb3.conv3.bias", "body.11.rdb3.conv4.weight", "body.11.rdb3.conv4.bias", "body.11.rdb3.conv5.weight", "body.11.rdb3.conv5.bias", "body.12.rdb1.conv1.weight", "body.12.rdb1.conv1.bias", "body.12.rdb1.conv2.weight", "body.12.rdb1.conv2.bias", "body.12.rdb1.conv3.weight", "body.12.rdb1.conv3.bias", "body.12.rdb1.conv4.weight", "body.12.rdb1.conv4.bias", "body.12.rdb1.conv5.weight", "body.12.rdb1.conv5.bias", "body.12.rdb2.conv1.weight", "body.12.rdb2.conv1.bias", "body.12.rdb2.conv2.weight", "body.12.rdb2.conv2.bias", "body.12.rdb2.conv3.weight", "body.12.rdb2.conv3.bias", "body.12.rdb2.conv4.weight", "body.12.rdb2.conv4.bias", "body.12.rdb2.conv5.weight", "body.12.rdb2.conv5.bias", "body.12.rdb3.conv1.weight", "body.12.rdb3.conv1.bias", "body.12.rdb3.conv2.weight", "body.12.rdb3.conv2.bias", "body.12.rdb3.conv3.weight", "body.12.rdb3.conv3.bias", "body.12.rdb3.conv4.weight", "body.12.rdb3.conv4.bias", 
"body.12.rdb3.conv5.weight", "body.12.rdb3.conv5.bias", "body.13.rdb1.conv1.weight", "body.13.rdb1.conv1.bias", "body.13.rdb1.conv2.weight", "body.13.rdb1.conv2.bias", "body.13.rdb1.conv3.weight", "body.13.rdb1.conv3.bias", "body.13.rdb1.conv4.weight", "body.13.rdb1.conv4.bias", "body.13.rdb1.conv5.weight", "body.13.rdb1.conv5.bias", "body.13.rdb2.conv1.weight", "body.13.rdb2.conv1.bias", "body.13.rdb2.conv2.weight", "body.13.rdb2.conv2.bias", "body.13.rdb2.conv3.weight", "body.13.rdb2.conv3.bias", "body.13.rdb2.conv4.weight", "body.13.rdb2.conv4.bias", "body.13.rdb2.conv5.weight", "body.13.rdb2.conv5.bias", "body.13.rdb3.conv1.weight", "body.13.rdb3.conv1.bias", "body.13.rdb3.conv2.weight", "body.13.rdb3.conv2.bias", "body.13.rdb3.conv3.weight", "body.13.rdb3.conv3.bias", "body.13.rdb3.conv4.weight", "body.13.rdb3.conv4.bias", "body.13.rdb3.conv5.weight", "body.13.rdb3.conv5.bias", "body.14.rdb1.conv1.weight", "body.14.rdb1.conv1.bias", "body.14.rdb1.conv2.weight", "body.14.rdb1.conv2.bias", "body.14.rdb1.conv3.weight", "body.14.rdb1.conv3.bias", "body.14.rdb1.conv4.weight", "body.14.rdb1.conv4.bias", "body.14.rdb1.conv5.weight", "body.14.rdb1.conv5.bias", "body.14.rdb2.conv1.weight", "body.14.rdb2.conv1.bias", "body.14.rdb2.conv2.weight", "body.14.rdb2.conv2.bias", "body.14.rdb2.conv3.weight", "body.14.rdb2.conv3.bias", "body.14.rdb2.conv4.weight", "body.14.rdb2.conv4.bias", "body.14.rdb2.conv5.weight", "body.14.rdb2.conv5.bias", "body.14.rdb3.conv1.weight", "body.14.rdb3.conv1.bias", "body.14.rdb3.conv2.weight", "body.14.rdb3.conv2.bias", "body.14.rdb3.conv3.weight", "body.14.rdb3.conv3.bias", "body.14.rdb3.conv4.weight", "body.14.rdb3.conv4.bias", "body.14.rdb3.conv5.weight", "body.14.rdb3.conv5.bias", "body.15.rdb1.conv1.weight", "body.15.rdb1.conv1.bias", "body.15.rdb1.conv2.weight", "body.15.rdb1.conv2.bias", "body.15.rdb1.conv3.weight", "body.15.rdb1.conv3.bias", "body.15.rdb1.conv4.weight", "body.15.rdb1.conv4.bias", "body.15.rdb1.conv5.weight", "body.15.rdb1.conv5.bias", "body.15.rdb2.conv1.weight", "body.15.rdb2.conv1.bias", "body.15.rdb2.conv2.weight", "body.15.rdb2.conv2.bias", "body.15.rdb2.conv3.weight", "body.15.rdb2.conv3.bias", "body.15.rdb2.conv4.weight", "body.15.rdb2.conv4.bias", "body.15.rdb2.conv5.weight", "body.15.rdb2.conv5.bias", "body.15.rdb3.conv1.weight", "body.15.rdb3.conv1.bias", "body.15.rdb3.conv2.weight", "body.15.rdb3.conv2.bias", "body.15.rdb3.conv3.weight", "body.15.rdb3.conv3.bias", "body.15.rdb3.conv4.weight", "body.15.rdb3.conv4.bias", "body.15.rdb3.conv5.weight", "body.15.rdb3.conv5.bias", "body.16.rdb1.conv1.weight", "body.16.rdb1.conv1.bias", "body.16.rdb1.conv2.weight", "body.16.rdb1.conv2.bias", "body.16.rdb1.conv3.weight", "body.16.rdb1.conv3.bias", "body.16.rdb1.conv4.weight", "body.16.rdb1.conv4.bias", "body.16.rdb1.conv5.weight", "body.16.rdb1.conv5.bias", "body.16.rdb2.conv1.weight", "body.16.rdb2.conv1.bias", "body.16.rdb2.conv2.weight", "body.16.rdb2.conv2.bias", "body.16.rdb2.conv3.weight", "body.16.rdb2.conv3.bias", "body.16.rdb2.conv4.weight", "body.16.rdb2.conv4.bias", "body.16.rdb2.conv5.weight", "body.16.rdb2.conv5.bias", "body.16.rdb3.conv1.weight", "body.16.rdb3.conv1.bias", "body.16.rdb3.conv2.weight", "body.16.rdb3.conv2.bias", "body.16.rdb3.conv3.weight", "body.16.rdb3.conv3.bias", "body.16.rdb3.conv4.weight", "body.16.rdb3.conv4.bias", "body.16.rdb3.conv5.weight", "body.16.rdb3.conv5.bias", "body.17.rdb1.conv1.weight", "body.17.rdb1.conv1.bias", "body.17.rdb1.conv2.weight", "body.17.rdb1.conv2.bias", 
"body.17.rdb1.conv3.weight", "body.17.rdb1.conv3.bias", "body.17.rdb1.conv4.weight", "body.17.rdb1.conv4.bias", "body.17.rdb1.conv5.weight", "body.17.rdb1.conv5.bias", "body.17.rdb2.conv1.weight", "body.17.rdb2.conv1.bias", "body.17.rdb2.conv2.weight", "body.17.rdb2.conv2.bias", "body.17.rdb2.conv3.weight", "body.17.rdb2.conv3.bias", "body.17.rdb2.conv4.weight", "body.17.rdb2.conv4.bias", "body.17.rdb2.conv5.weight", "body.17.rdb2.conv5.bias", "body.17.rdb3.conv1.weight", "body.17.rdb3.conv1.bias", "body.17.rdb3.conv2.weight", "body.17.rdb3.conv2.bias", "body.17.rdb3.conv3.weight", "body.17.rdb3.conv3.bias", "body.17.rdb3.conv4.weight", "body.17.rdb3.conv4.bias", "body.17.rdb3.conv5.weight", "body.17.rdb3.conv5.bias", "body.18.rdb1.conv1.weight", "body.18.rdb1.conv1.bias", "body.18.rdb1.conv2.weight", "body.18.rdb1.conv2.bias", "body.18.rdb1.conv3.weight", "body.18.rdb1.conv3.bias", "body.18.rdb1.conv4.weight", "body.18.rdb1.conv4.bias", "body.18.rdb1.conv5.weight", "body.18.rdb1.conv5.bias", "body.18.rdb2.conv1.weight", "body.18.rdb2.conv1.bias", "body.18.rdb2.conv2.weight", "body.18.rdb2.conv2.bias", "body.18.rdb2.conv3.weight", "body.18.rdb2.conv3.bias", "body.18.rdb2.conv4.weight", "body.18.rdb2.conv4.bias", "body.18.rdb2.conv5.weight", "body.18.rdb2.conv5.bias", "body.18.rdb3.conv1.weight", "body.18.rdb3.conv1.bias", "body.18.rdb3.conv2.weight", "body.18.rdb3.conv2.bias", "body.18.rdb3.conv3.weight", "body.18.rdb3.conv3.bias", "body.18.rdb3.conv4.weight", "body.18.rdb3.conv4.bias", "body.18.rdb3.conv5.weight", "body.18.rdb3.conv5.bias", "body.19.rdb1.conv1.weight", "body.19.rdb1.conv1.bias", "body.19.rdb1.conv2.weight", "body.19.rdb1.conv2.bias", "body.19.rdb1.conv3.weight", "body.19.rdb1.conv3.bias", "body.19.rdb1.conv4.weight", "body.19.rdb1.conv4.bias", "body.19.rdb1.conv5.weight", "body.19.rdb1.conv5.bias", "body.19.rdb2.conv1.weight", "body.19.rdb2.conv1.bias", "body.19.rdb2.conv2.weight", "body.19.rdb2.conv2.bias", "body.19.rdb2.conv3.weight", "body.19.rdb2.conv3.bias", "body.19.rdb2.conv4.weight", "body.19.rdb2.conv4.bias", "body.19.rdb2.conv5.weight", "body.19.rdb2.conv5.bias", "body.19.rdb3.conv1.weight", "body.19.rdb3.conv1.bias", "body.19.rdb3.conv2.weight", "body.19.rdb3.conv2.bias", "body.19.rdb3.conv3.weight", "body.19.rdb3.conv3.bias", "body.19.rdb3.conv4.weight", "body.19.rdb3.conv4.bias", "body.19.rdb3.conv5.weight", "body.19.rdb3.conv5.bias", "body.20.rdb1.conv1.weight", "body.20.rdb1.conv1.bias", "body.20.rdb1.conv2.weight", "body.20.rdb1.conv2.bias", "body.20.rdb1.conv3.weight", "body.20.rdb1.conv3.bias", "body.20.rdb1.conv4.weight", "body.20.rdb1.conv4.bias", "body.20.rdb1.conv5.weight", "body.20.rdb1.conv5.bias", "body.20.rdb2.conv1.weight", "body.20.rdb2.conv1.bias", "body.20.rdb2.conv2.weight", "body.20.rdb2.conv2.bias", "body.20.rdb2.conv3.weight", "body.20.rdb2.conv3.bias", "body.20.rdb2.conv4.weight", "body.20.rdb2.conv4.bias", "body.20.rdb2.conv5.weight", "body.20.rdb2.conv5.bias", "body.20.rdb3.conv1.weight", "body.20.rdb3.conv1.bias", "body.20.rdb3.conv2.weight", "body.20.rdb3.conv2.bias", "body.20.rdb3.conv3.weight", "body.20.rdb3.conv3.bias", "body.20.rdb3.conv4.weight", "body.20.rdb3.conv4.bias", "body.20.rdb3.conv5.weight", "body.20.rdb3.conv5.bias", "body.21.rdb1.conv1.weight", "body.21.rdb1.conv1.bias", "body.21.rdb1.conv2.weight", "body.21.rdb1.conv2.bias", "body.21.rdb1.conv3.weight", "body.21.rdb1.conv3.bias", "body.21.rdb1.conv4.weight", "body.21.rdb1.conv4.bias", "body.21.rdb1.conv5.weight", "body.21.rdb1.conv5.bias", 
"body.21.rdb2.conv1.weight", "body.21.rdb2.conv1.bias", "body.21.rdb2.conv2.weight", "body.21.rdb2.conv2.bias", "body.21.rdb2.conv3.weight", "body.21.rdb2.conv3.bias", "body.21.rdb2.conv4.weight", "body.21.rdb2.conv4.bias", "body.21.rdb2.conv5.weight", "body.21.rdb2.conv5.bias", "body.21.rdb3.conv1.weight", "body.21.rdb3.conv1.bias", "body.21.rdb3.conv2.weight", "body.21.rdb3.conv2.bias", "body.21.rdb3.conv3.weight", "body.21.rdb3.conv3.bias", "body.21.rdb3.conv4.weight", "body.21.rdb3.conv4.bias", "body.21.rdb3.conv5.weight", "body.21.rdb3.conv5.bias", "body.22.rdb1.conv1.weight", "body.22.rdb1.conv1.bias", "body.22.rdb1.conv2.weight", "body.22.rdb1.conv2.bias", "body.22.rdb1.conv3.weight", "body.22.rdb1.conv3.bias", "body.22.rdb1.conv4.weight", "body.22.rdb1.conv4.bias", "body.22.rdb1.conv5.weight", "body.22.rdb1.conv5.bias", "body.22.rdb2.conv1.weight", "body.22.rdb2.conv1.bias", "body.22.rdb2.conv2.weight", "body.22.rdb2.conv2.bias", "body.22.rdb2.conv3.weight", "body.22.rdb2.conv3.bias", "body.22.rdb2.conv4.weight", "body.22.rdb2.conv4.bias", "body.22.rdb2.conv5.weight", "body.22.rdb2.conv5.bias", "body.22.rdb3.conv1.weight", "body.22.rdb3.conv1.bias", "body.22.rdb3.conv2.weight", "body.22.rdb3.conv2.bias", "body.22.rdb3.conv3.weight", "body.22.rdb3.conv3.bias", "body.22.rdb3.conv4.weight", "body.22.rdb3.conv4.bias", "body.22.rdb3.conv5.weight", "body.22.rdb3.conv5.bias", "conv_body.weight", "conv_body.bias", "conv_up1.weight", "conv_up1.bias", "conv_up2.weight", "conv_up2.bias", "conv_hr.weight", "conv_hr.bias", "conv_last.weight", "conv_last.bias".
Unexpected key(s) in state_dict: "conv0.weight", "conv0.bias", "conv1.weight_orig", "conv1.weight_u", "conv1.weight_v", "conv2.weight_orig", "conv2.weight_u", "conv2.weight_v", "conv3.weight_orig", "conv3.weight_u", "conv3.weight_v", "conv4.weight_orig", "conv4.weight_u", "conv4.weight_v", "conv5.weight_orig", "conv5.weight_u", "conv5.weight_v", "conv6.weight_orig", "conv6.weight_u", "conv6.weight_v", "conv7.weight_orig", "conv7.weight_u", "conv7.weight_v", "conv8.weight_orig", "conv8.weight_u", "conv8.weight_v", "conv9.weight", "conv9.bias".
```
Am I doing something wrong or missing something I should have done?
The non-`netD` variants run just fine.
Here are my command line options:
```
--input
"43a8ca62e14c80.jpg"
--output
"output"
--model_path
"experiments/pretrained_models/RealESRGAN_x4plus_netD.pth"
--outscale 1
--tile 300
--face_enhance
```
I split the arguments into lines to make them easier for you to see
My machine has an AMD 4000 series CPU and an RTX2060.
Thank you in advance | closed | 2021-10-02T18:05:30Z | 2021-10-03T15:45:59Z | https://github.com/xinntao/Real-ESRGAN/issues/109 | [] | brunoais | 2 |
521xueweihan/HelloGitHub | python | 2,011 | Self-recommended project: Beerus | ## Project Recommendation
- Project URL: [https://github.com/yuyenews](https://github.com/yuyenews)
- Category: Go
- Future update plans: the existing components will keep being iterated on and upgraded, RPC is also on the roadmap, and the ultimate goal is to build out a web ecosystem for Go
- Project description:
A web solution developed in Go, including a web framework and a database management framework; RPC and other web-related components will be developed in the future
- Reason for recommendation: the web is still largely Java's territory and Java's web ecosystem is already very mature, while Go still falls a bit short. I want to try enriching Go's ecosystem so that this language can excel in more areas of development
## Example code:
### CRUD examples without writing SQL
***Single-table query by conditions***
```go
conditions := make([]*entity.Condition,0)
conditions = append(conditions, &entity.Condition{Key:"id > ?", Val: 10})
conditions = append(conditions, &entity.Condition{Key:"and user_name = ?", Val: "bee"})
conditions = append(conditions, &entity.Condition{Key: "order by create_time desc", Val: entity.NotWhere})
resultMap, err := operation.GetDBTemplate("Data source name").Select("table name", conditions)
```
***Update data by conditions***
```go
conditions := make([]*entity.Condition,0)
conditions = append(conditions, &entity.Condition{Key:"id = ?", Val: 1})
data := ResultStruct{UserName: "TestNoSqlUpdate"}
operation.GetDBTemplate("Data source name").Update("table name", dbutil.StructToMapIgnore(&data, data, true), conditions)
```
***Delete data by conditions***
```go
conditions := make([]*entity.Condition,0)
conditions = append(conditions, &entity.Condition{Key:"id = ?", Val: 2})
_, err := operation.GetDBTemplate("Data source name").Delete("table name", conditions)
```
***Insert a record***
```go
data := ResultStruct{
UserName: "TestNoSqlInsert",
UserEmail: "xxxxx@163.com",
UpdateTime: "2021-12-09 13:50:00",
}
result, err := operation.GetDBTemplate("Data source name").Insert("table name", dbutil.StructToMapIgnore(&data, data, true))
```
### Complex operations can be done by writing SQL
If you are interested, you can check the documentation: https://beeruscc.com
| closed | 2021-12-11T04:48:13Z | 2021-12-26T04:42:49Z | https://github.com/521xueweihan/HelloGitHub/issues/2011 | [] | yuyenews | 1 |
apache/airflow | machine-learning | 48,058 | DAG version is not interpolated in UI | ### Apache Airflow version
3.0.0 beta4
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
The DAG details page doesn't interpolate a DAG's version consistently on Airflow 3 beta4:

### What you think should happen instead?
The DAG version should be displayed correctly
### How to reproduce
Run Airflow 3 beta4, open DAG details page
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-21T14:58:43Z | 2025-03-22T05:55:43Z | https://github.com/apache/airflow/issues/48058 | [
"kind:bug",
"area:UI",
"affected_version:3.0.0beta"
] | BasPH | 3 |
holoviz/panel | plotly | 7,720 | Privé | **<!--**
Thanks for contacting us! Please read and follow these instructions carefully, then you can delete this introductory text. Note that the issue tracker is NOT the place for usage questions and technical assistance; post those at [Discourse](https://discourse.holoviz.org) instead. Issues without the required information below may be closed immediately.
-->
#### Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
#### Describe the solution you'd like
A clear and concise description of what you want to happen.
#### Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
#### Additional context
Add any other context or screenshots about the feature request here.
| closed | 2025-02-16T19:35:55Z | 2025-02-19T09:40:53Z | https://github.com/holoviz/panel/issues/7720 | [] | cabrin21 | 2 |
huggingface/transformers | tensorflow | 36,876 | <spam> | ### Model description
<!-- Failed to upload "Screen_Recording_20250320_044456_Chrome.mp4" -->
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | closed | 2025-03-21T08:28:30Z | 2025-03-21T11:50:47Z | https://github.com/huggingface/transformers/issues/36876 | [
"New model"
] | tjsexfunmoney664 | 0 |
ultralytics/yolov5 | deep-learning | 13,305 | Regarding the application establishment of preprocessing functions | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
YOLOv5 has various preprocessing functions, such as the Albumentations library, mosaic, mixup, random_perspective, etc. If you look at the hyp file, you will see the parameters for each preprocessing function, as shown below.
```
lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.01 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok)
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr: 0.1 # warmup initial bias lr
box: 0.05 # box loss gain
cls: 0.5 # cls loss gain
cls_pw: 1.0 # cls BCELoss positive_weight
obj: 1.0 # obj loss gain (scale with pixels)
obj_pw: 1.0 # obj BCELoss positive_weight
iou_t: 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
# anchors: 3 # anchors per output layer (0 to ignore)
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0.1 # image translation (+/- fraction)
scale: 0.5 # image scale (+/- gain)
shear: 0.0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.5 # image flip left-right (probability)
mosaic: 1.0 # image mosaic (probability)
mixup: 0.0 # image mixup (probability)
copy_paste: 0.0 # segment copy-paste (probability)
```
I think these represent the parameters of each preprocessing step.
But is the application of the preprocessing steps themselves deterministic? For example, are `random_perspective` and `mosaic` applied 100% to all images in each epoch, or is there some probability of application?
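To illustrate what I mean by "probability of application", here is a generic sketch of how such probability-style hyperparameters are often consumed (an illustration, not the actual YOLOv5 source):
```python
import random
import numpy as np

def maybe_flip_lr(img: np.ndarray, p: float) -> np.ndarray:
    # The transform fires only when a uniform draw falls below the probability value,
    # so p=1.0 would mean "always applied" and p=0.0 "never applied".
    if random.random() < p:
        img = np.fliplr(img)
    return img
```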
I would be grateful for your reply. Thanks in advance.
### Additional
_No response_ | open | 2024-09-11T06:29:08Z | 2024-09-13T10:14:39Z | https://github.com/ultralytics/yolov5/issues/13305 | [
"question"
] | K011-17 | 1 |
Asabeneh/30-Days-Of-Python | flask | 640 | Issue in Example:Integers on Day:03 Operators |
```python
# Arithmetic Operations in Python
# Integers
print('Addition: ', 1 + 2) # 3
print('Subtraction: ', 2 - 1) # 1
print('Multiplication: ', 2 * 3) # 6
print ('Division: ', 4 / 2) # 2.0 Division in Python gives floating number
print('Division: ', 6 / 2) # 3.0
print('Division: ', 7 / 2) # 3.5
print('Division without the remainder: ', 7 // 2) # 3, gives without the floating number or without the remaining
print ('Division without the remainder: ',7 // 3) # 2
print('Modulus: ', 3 % 2) # 1, Gives the remainder
print('Exponentiation: ', 2 ** 3) # 9 it means 2 * 2 * 2
```
The issue is in the comment: the output of `2 ** 3` is 8, not 9, so the comment in the code is incorrect. | open | 2025-01-12T18:13:39Z | 2025-01-21T21:59:09Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/640 | [] | BaskarTV | 1 |
cleanlab/cleanlab | data-science | 283 | CI: Add property-based testing | Let's adopt Quickcheck-style property-based testing throughout the library.
# Details
There are lots of opportunities in cleanlab to do property-based testing, because it has lots of functions where it's easy to write down a relational property about outputs that should always hold. As one simple example: we might have one function to compute the confident joint and another function to compute just the diagonal of the confident joint; we can write down the property `∀ pyx, valid_prob_matrix pyx → diagonal (compute_confident_joint pyx) = compute_confident_joint_diagonal pyx`.
Systematically adopting property-based testing could help us catch more bugs.
Some [existing tests](https://github.com/cleanlab/cleanlab/blob/master/tests/test_latent_algebra.py#L47) in cleanlab are essentially property-based tests, but they're only being evaluated on a single hard-coded input. Upgrading these to property-based testing should be easy after writing the appropriate generators.
I'd suggest using the [Hypothesis](https://hypothesis.readthedocs.io/en/latest/) library for this purpose. It even has some [support for NumPy testing](https://hypothesis.readthedocs.io/en/latest/numpy.html). | open | 2022-06-20T22:36:43Z | 2024-12-25T19:44:15Z | https://github.com/cleanlab/cleanlab/issues/283 | [
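A minimal sketch of what such a Hypothesis-based property test could look like (the two functions below are simple stand-ins for the pair of library functions being compared, not actual cleanlab APIs):
```python
import numpy as np
import hypothesis.strategies as st
from hypothesis import given
from hypothesis.extra.numpy import arrays

# Stand-in pair: a "full" computation and a cheaper "diagonal-only" variant.
def full_matrix(p: np.ndarray) -> np.ndarray:
    return p.T @ p

def matrix_diagonal_only(p: np.ndarray) -> np.ndarray:
    return np.einsum("ij,ij->j", p, p)

@given(arrays(np.float64, (20, 3), elements=st.floats(0.0, 1.0)))
def test_diagonal_agrees_with_full_matrix(p):
    # Relational property: the cheap variant must match the diagonal of the full result.
    assert np.allclose(np.diagonal(full_matrix(p)), matrix_diagonal_only(p))
```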
"good first issue",
"help-wanted"
] | anishathalye | 5 |
exaloop/codon | numpy | 344 | How to debug a C program that links a shared library built by Codon? | Codon version 0.15.5, OS Debian 10, GCC version 10.2, GDB version 10.1.
I built a shared library with:
```
codon build --relocation-model=pic --lib --debug --debug-entry-values --debugger-tune=gdb -o libmycodon.so my_codon.pyx
```
When I use GDB in the console, I get a segmentation fault:
```
#0 0x00007ffff79df6e2 in GC_find_limit_with_bound () from /usr/lib/codon/libcodonrt.so
#1 0x00007ffff79df500 in GC_init_linux_data_start () from /usr/lib/codon/libcodonrt.so
#2 0x00007ffff79dd6ed in GC_init () from /usr/lib/codon/libcodonrt.so
#3 0x00007ffff79e9b95 in GC_generic_malloc_inner () from /usr/lib/codon/libcodonrt.so
#4 0x00007ffff79e9f1d in GC_generic_malloc () from /usr/lib/codon/libcodonrt.so
#5 0x00007ffff79ea181 in GC_malloc_kind_global () from /usr/lib/codon/libcodonrt.so
```
My C program is quite simple: it just calls a `foo` function from Codon, like `int foo(int)`, but when I run the program it is OK.
| closed | 2023-04-12T07:22:41Z | 2023-07-26T21:09:28Z | https://github.com/exaloop/codon/issues/344 | [] | qjhuang | 1 |
robotframework/robotframework | automation | 5,185 | A process stops printing data in a while loop when I use "Process" library to start this process in background | OS: Windows 10
Python: 3.12.0
Robotframework: 7.0.1
I use "Process" library to start this process(print.py) in background. The process will keep writing data to a file in a while loop.
**Start Process python print.py**
However, the process will suddenly stop printing data to the file for an unknown reason while the test cases are running.
Could you help check and solve this issue? Thanks. | closed | 2024-08-16T10:34:34Z | 2024-08-23T12:34:05Z | https://github.com/robotframework/robotframework/issues/5185 | [] | fon1105 | 3 |
deezer/spleeter | tensorflow | 858 | Columns and DataType Not Explicitly Set on line 65 of test_train.py | Hello!
I found an AI-Specific Code smell in your project.
The smell is called: Columns and DataType Not Explicitly Set
You can find more information about it in this paper: https://dl.acm.org/doi/abs/10.1145/3522664.3528620.
According to the paper, the smell is described as follows:
| **Problem** | If the columns are not selected explicitly, it is not easy for developers to know what to expect in the downstream data schema. If the datatype is not set explicitly, it may silently continue the next step even though the input is unexpected, which may cause errors later. The same applies to other data importing scenarios. |
| ------------- | :------------- |
| **Solution** | **It is recommended to set the columns and DataType explicitly in data processing.** |
| **Impact** | **Readability** |
Example:
```diff
### Pandas Column Selection
import pandas as pd
df = pd.read_csv('data.csv')
+ df = df[['col1', 'col2', 'col3']]
### Pandas Set DataType
import pandas as pd
- df = pd.read_csv('data.csv')
+ df = pd.read_csv('data.csv', dtype={'col1': 'str', 'col2': 'int', 'col3': 'float'})
```
You can find the code related to this smell in this link: https://github.com/deezer/spleeter/blob/19b523f081d04f09926763c679de48354f4e52d6/tests/test_train.py#L55-L75.
I also found instances of this smell in other files, such as:
File: https://github.com/deezer/spleeter/blob/master/spleeter/__main__.py#L178-L188 Line: 183
File: https://github.com/deezer/spleeter/blob/master/spleeter/utils/tensor.py#L149-L159 Line: 154
.
I hope this information is helpful! | open | 2023-07-04T10:31:22Z | 2023-07-04T10:31:22Z | https://github.com/deezer/spleeter/issues/858 | [] | CodeSmileBot | 0 |
zihangdai/xlnet | tensorflow | 39 | Word Embeddings | Can we retrieve word embeddings from the model? | closed | 2019-06-24T04:07:40Z | 2019-08-17T16:22:26Z | https://github.com/zihangdai/xlnet/issues/39 | [] | gayatrivenugopal | 25 |
albumentations-team/albumentations | deep-learning | 2,116 | [New transform] Add RandomShear | Add RandomShear that is a child of Affine:
https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation.RandomShear | closed | 2024-11-08T16:08:05Z | 2024-11-18T01:39:18Z | https://github.com/albumentations-team/albumentations/issues/2116 | [
"enhancement"
] | ternaus | 1 |
aio-libs/aiopg | sqlalchemy | 179 | Question: Is there two phase commit support? | If yes - it would be good to have a quick example here.
Thank you.
| closed | 2016-10-09T17:44:58Z | 2016-10-09T18:07:29Z | https://github.com/aio-libs/aiopg/issues/179 | [] | ediskandarov | 3 |
JaidedAI/EasyOCR | pytorch | 543 | Missing chars from latin model | Hi! There are missing characters in the latin model, as I cannot see the `ő` and `Ő` characters, which are otherwise used in Hungarian. Can you add them and update your latin model?
OFF: The hungarian language file is incorrect, so I will provide a language update in a pull request later. | closed | 2021-09-21T07:21:22Z | 2022-05-31T12:03:41Z | https://github.com/JaidedAI/EasyOCR/issues/543 | [] | timurlenk07 | 3 |
huggingface/datasets | tensorflow | 6,840 | Delete uploaded files from the UI | ### Feature request
Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI.
### Motivation
Would be a useful addition
### Your contribution
Would love to help out with some guidance | open | 2024-04-25T22:33:57Z | 2025-01-21T09:44:22Z | https://github.com/huggingface/datasets/issues/6840 | [
"enhancement"
] | saicharan2804 | 1 |
scanapi/scanapi | rest-api | 117 | Create ScanAPI Github Action | ## Description
Create a Github Action for ScanAPI to make integrations easier. We already have a [docker image](https://github.com/scanapi/scanapi/blob/master/Dockerfile#L3) for it.
https://help.github.com/en/actions/building-actions/creating-a-docker-container-action
With this action we want to:
- Be able to use `scanapi` commands with args and options. i.e: `scanapi api.yaml`, `scanapi --help`, `scanapi api.yaml -c my_config_file.yaml` and so on.
- Be able to store the report as an artifact: https://github.com/actions/upload-artifact
We have already a repo for it: https://github.com/scanapi/github-action | closed | 2020-04-21T16:58:01Z | 2020-09-21T00:58:44Z | https://github.com/scanapi/scanapi/issues/117 | [
"Automation"
] | camilamaia | 3 |
jupyter/nbviewer | jupyter | 1,021 | Leading whitespace removed in code blocks with syntax highlighting | Leading whitespaces in codeblocks are removed if syntax highlighting is done. Seemingly regardless of the language used. It is common to want 6 spaces denoting input lines in APL. However, if I use APL syntax highlighting, these spaces are removed. Non-breaking spaces or additional newlines do not help.
Here is a test notebook rendered with nbviewer: https://nbviewer.org/gist/rikedyp/59230d88565c3459d88230b8f2e3c256
And source as a GitHub gist: https://gist.github.com/rikedyp/59230d88565c3459d88230b8f2e3c256
| open | 2022-10-04T07:11:14Z | 2025-01-06T21:42:57Z | https://github.com/jupyter/nbviewer/issues/1021 | [] | rikedyp | 2 |
aimhubio/aim | data-visualization | 3,148 | Multiple runs created for a single distributed training task with AIM | ## ❓Question
When using AIM for a distributed training task with multiple GPUs (e.g., 8 GPUs), I noticed that each GPU generates a separate run with its own hyperparameters and metrics. As a result, for a single distributed training task with 8 GPUs, a total of 8 runs are created.
However, my expectation is to have only one run for the entire distributed training task, regardless of the number of GPUs used. Is this behavior expected, or is there a way to consolidate the runs into a single run for the entire task?
Having multiple runs for a single task makes it difficult to track and analyze the overall performance and metrics. It would be more convenient and intuitive to have a single run that aggregates the data from all GPUs involved in the distributed training process.
Please let me know if this behavior is intended or if there is a configuration option or workaround to achieve a single run for distributed training tasks with AIM. | closed | 2024-05-28T03:05:32Z | 2025-03-13T11:54:29Z | https://github.com/aimhubio/aim/issues/3148 | [
"type / question"
] | zhiyxu | 5 |
JoeanAmier/XHS-Downloader | api | 157 | The packaged release keeps reporting that downloads fail after running | After running the packaged release of the program, it keeps reporting that it cannot download, as shown in the screenshot below:

| closed | 2024-08-18T18:55:22Z | 2024-08-30T05:24:50Z | https://github.com/JoeanAmier/XHS-Downloader/issues/157 | [] | zhengzhiwu-git | 1 |
dmlc/gluon-cv | computer-vision | 1,768 | PyTorch 2.0 Support | [PyTorch 2.0](https://pytorch.org/get-started/pytorch-2.0/) has been released but gluon-cv still checks for torch < 2.0.0 when I try to import it and generates a runtime error with gluoncv 0.10 installed:
```
RuntimeError: Legacy torch==2.0.0 detected, some modules may not work properly. torch>=1.4.0,<2.0.0 is required. You can use pip or conda to upgrade
```
Is there a timeline for supporting gluoncv in an environment with PyTorch 2.0 installed? | closed | 2023-04-01T00:01:44Z | 2023-07-07T06:34:18Z | https://github.com/dmlc/gluon-cv/issues/1768 | [
"Stale"
] | akowalsk | 1 |
deepspeedai/DeepSpeed | pytorch | 6,889 | Using zero3 on multiple nodes is slow | I have multiple nodes, each with 8 40GB A100 GPUs, and I want to train a 72B model.
When using zero3, the 72B model is partitioned across all GPUs of all nodes. Even with NVLink, the communication latency is still very high, resulting in slow training, much slower than using zero3 + offloading on a single node. The problem is that the more nodes there are, the slower the training. It would be better to use only a single node.
Is there a way to configure zero3 to partition model parameters only within a node, so that each node stores a complete copy of the model and only gradients are synchronized between nodes, to speed up training?
"bug",
"training"
] | HelloWorld506 | 8 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 497 | Problem with same tablename in different database binds | I use the same table name in different database binds, and my code is shown as follows:
```
class DataA(db.Model):
__bind_key__ = 'db_a'
__tablename__ = 'data'
id = db.Column(db.Integer, primary_key=True)
content = db.Column(db.String(500))
class DataB(db.Model):
__bind_key__ = 'db_b'
__tablename__ = 'data'
id = db.Column(db.Integer, primary_key=True)
content = db.Column(db.String(500))
```
And it raised an exception during startup:
```
sqlalchemy.exc.InvalidRequestError: Table 'data' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object.
```
| closed | 2017-05-10T08:33:33Z | 2020-12-05T20:55:51Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/497 | [] | Windfarer | 2 |
wkentaro/labelme | deep-learning | 747 | [Feature] XML output file format | As far as I know, the file format of LabelMe output is .json, how can I convert to the suitable .XML file format for [keras-yolov3](https://github.com/experiencor/keras-yolo3) ? | closed | 2020-08-09T10:01:10Z | 2020-08-20T15:52:59Z | https://github.com/wkentaro/labelme/issues/747 | [] | annhienktuit | 0 |
LAION-AI/Open-Assistant | python | 3,542 | Dataset release cycle | Is there any plan to release the dataset in cycles?
I think that, compared to the V1 dataset, it should have grown quite a bit by now.
"data"
] | flozi00 | 2 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,028 | [Feature Request]: Update Intel Extension for PyTorch to v2.1.30 | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Fix issues the current version has and increases performance.
### Proposed workflow
Just update from 2.0.10 to https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.1.30%2Bxpu
### Additional information
_No response_ | open | 2024-06-15T17:31:59Z | 2024-06-15T17:34:24Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16028 | [
"enhancement"
] | Pantonia4 | 0 |
dmlc/gluon-cv | computer-vision | 1,197 | Exporting object detection model to Android devices. | Hi there, I’ve been using GluonCV for object detection, and I was trying to figure out how to export these models to Android (I re-trained the algorithm on my own dataset, so I’d like to export that).
I found [this discussion](https://discuss.mxnet.io/t/how-can-i-export-ssd512-to-android-phone/3080) on the mxnet forum, but as you can see the first link in the answer is not working anymore, and instructions in the amalgamation repository are not particularly clear.
I tried to clone the repo and run make from amalgamation subfolder, but I’m repeatedly getting this error:
```
Makefile:80: recipe for target 'nnvm.d' failed
make[3]: *** [nnvm.d] Error 1
make[3]: Leaving directory '/home/lews/PycharmProjects/incubator-mxnet/amalgamation'
cp: cannot stat 'nnvm.d': No such file or directory
cat: nnvm.cc: No such file or directory
mv: cannot move 'temp' to '../../../../amalgamation/nnvm.cc': No such file or directory
```
If exporting the model is possible at all, could you please link some resource/tutorial on how to do it? | closed | 2020-02-21T15:25:34Z | 2021-06-07T07:04:45Z | https://github.com/dmlc/gluon-cv/issues/1197 | [
"Stale"
] | LewsTherin511 | 1 |
miguelgrinberg/flasky | flask | 122 | Hardcoded url in selenium tests | In test_selenium.py the URL is hard-coded, which works fine if you are testing on your local machine, but it will not work with Jenkins, where many projects might run and also use port 5000, or if you use Selenium Grid.
Is there a best practice on how to set the URL and port in the selenium tests?
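One approach I was thinking of (just a sketch; the environment variable name is made up) would be to read the base URL from an environment variable with a localhost default, so Jenkins or a Selenium Grid setup can override it per job:
```python
import os

# Hypothetical helper for the selenium test setup.
BASE_URL = os.environ.get("SELENIUM_BASE_URL", "http://localhost:5000/")

def url_for_path(path: str) -> str:
    return BASE_URL.rstrip("/") + "/" + path.lstrip("/")
```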
| closed | 2016-03-22T10:59:39Z | 2016-06-01T16:23:27Z | https://github.com/miguelgrinberg/flasky/issues/122 | [
"question"
] | pwfraley | 1 |
sinaptik-ai/pandas-ai | data-visualization | 758 | NameError: name 'eval' is not defined | WARNING:pandasai.helpers.logger:Error of executing code
WARNING:pandasai.helpers.logger:Failed to execute code with a correction framework [retry number: 1]
ERROR:pandasai.helpers.logger:Failed with error: Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/pandasai/smart_datalake/__init__.py", line 438, in chat
result = self._code_manager.execute_code(
File "/usr/local/lib/python3.10/dist-packages/pandasai/helpers/code_manager.py", line 286, in execute_code
return analyze_data(self._get_originals(dfs))
File "<string>", line 21, in analyze_data
NameError: name 'eval' is not defined
. Retrying | closed | 2023-11-16T11:24:52Z | 2024-10-09T12:24:00Z | https://github.com/sinaptik-ai/pandas-ai/issues/758 | [] | JunaidHassanCB | 1 |
horovod/horovod | machine-learning | 3,027 | CMake Error in horovod/torch/CMakeLists.txt: Target "pytorch" requires the language dialect "CXX14" , but CMake does not know the compile flags to use to enable it. | **Environment:**
1. Framework: (PyTorch,)
2. Framework version:
3. Horovod version:
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version:
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:
-- Configuring done
CMake Error in horovod/torch/CMakeLists.txt:
Target "pytorch" requires the language dialect "CXX14" , but CMake does not
know the compile flags to use to enable it.
-- Generating done
CMake Generate step failed. Build files cannot be regenerated correctly.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-3j8y4qov/horovod_155b0d6aeac74d1899be1b6ff9cb8742/setup.py", line 199, in <module>
'horovodrun = horovod.runner.launch:run_commandline'
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/site-packages/setuptools/__init__.py", line 163, in setup
return distutils.core.setup(**attrs)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 87, in run
_build_ext.run(self)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/tmp/pip-install-3j8y4qov/horovod_155b0d6aeac74d1899be1b6ff9cb8742/setup.py", line 95, in build_extensions
cwd=cmake_build_dir)
File "/home/xx/.conda/envs/dalle_test/lib/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/tmp/pip-install-3j8y4qov/horovod_155b0d6aeac74d1899be1b6ff9cb8742', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-install-3j8y4qov/horovod_155b0d6aeac74d1899be1b6ff9cb8742/build/lib.linux-x86_64-3.7', '-DPYTHON_EXECUTABLE:FILEPATH=/home/xx/.conda/envs/dalle_test/bin/python3.7']' returned non-zero exit status 1.
----------------------------------------
ERROR: Failed building wheel for horovod
| closed | 2021-07-08T08:03:34Z | 2021-08-03T10:13:52Z | https://github.com/horovod/horovod/issues/3027 | [
"bug"
] | Junzh821 | 1 |
Farama-Foundation/Gymnasium | api | 1,047 | [Bug Report] Code examples in Vector API documentation use deprecated "gym.vector.make" instead of "gym.make_vec" | ### Describe the bug
I found that code examples in the vector API documentation, at https://gymnasium.farama.org/api/vector/examples use the deprecated method `gym.vector.make` instead of the method `gym.make_vec`.
If this is intended feel free to close this issue, otherwise I would be glad to work on this since it seems like a simple change.
### Code example
```shell
## Current example for creating a vector env on the documentation
import gymnasium as gym
envs = gym.vector.make("CartPole-v1", num_envs=3)
envs.reset(seed=42)
```
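For comparison, here is a sketch of the non-deprecated call the docs should presumably show instead (assuming the `gym.make_vec` API mentioned above):
```python
import gymnasium as gym

# Replacement for the deprecated gym.vector.make call shown above.
envs = gym.make_vec("CartPole-v1", num_envs=3)
envs.reset(seed=42)
```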
### System info
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2024-05-11T16:10:33Z | 2024-05-16T02:36:16Z | https://github.com/Farama-Foundation/Gymnasium/issues/1047 | [
"bug"
] | LetteraUnica | 3 |
521xueweihan/HelloGitHub | python | 2,589 | [Open-source self-recommendation] LLaMA2-Accessory: An Open-source Toolkit for LLM Development 🚀 | ## Project Recommendation
- Project URL: https://github.com/Alpha-VLLM/LLaMA2-Accessory
- Category: Machine Learning
- Project title: An open-source toolkit designed for LLM development
<p align="center">
<img src="https://img.enderfga.cn/img/20230805213336.png" width="90%"/>
<br>
</p>
🚀**LLaMA2-Accessory** is an open-source toolkit for pre-training, fine-tuning and deployment of **Large Language Models (LLMs)** and **multimodal LLMs**. This repo is mainly inherited from [LLaMA-Adapter](https://github.com/OpenGVLab/LLaMA-Adapter) with more advanced features.🧠
## News
- **[2023.08.05]** We release the multimodal fine-tuning code and checkpoints🔥🔥🔥
- **[2023.07.23]** Initial release 📌
## Features
* **💡Support More Datasets and Tasks**
- 🎯 Pre-training with [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) and [StarCoder](https://github.com/bigcode-project/starcoder).
- 📚 Single-modal fine-tuning with [Alpaca](https://github.com/tatsu-lab/stanford_alpaca), [ShareGPT](https://github.com/domeccleston/sharegpt), [LIMA](https://arxiv.org/pdf/2305.11206.pdf), [UltraChat](https://github.com/thunlp/UltraChat) and [MOSS](https://github.com/OpenLMLab/MOSS).
  - 🌈 Multi-modal fine-tuning with image-text pairs ([LAION](https://laion.ai/blog/laion-5b/), [COYO](https://github.com/kakaobrain/coyo-dataset) and more), interleaved image-text data ([MMC4](https://github.com/allenai/mmc4) and [OBELISC](https://github.com/huggingface/OBELISC)) and visual instruction data ([LLaVA](https://github.com/haotian-liu/LLaVA), [Shikra](https://github.com/shikras/shikra), [Bard](https://bard.google.com/))
- 🔧 LLM for API Control ([GPT4Tools](https://github.com/StevenGrove/GPT4Tools) and [Gorilla](https://github.com/ShishirPatil/gorilla)).
* **⚡Efficient Optimization and Deployment**
  - 🚝 Parameter-efficient fine-tuning with [Zero-init Attention](https://github.com/OpenGVLab/LLaMA-Adapter) and [Bias-norm Tuning](https://github.com/OpenGVLab/LLaMA-Adapter).
- 💻 Fully Sharded Data Parallel ([FSDP](https://engineering.fb.com/2021/07/15/open-source/fsdp/)), [Flash Attention 2](https://github.com/Dao-AILab/flash-attention) and [QLoRA](https://github.com/artidoro/qlora).
* **🏋️♀️Support More Visual Encoders and LLMs**
- 👁🗨 Visual Encoders: [CLIP](https://github.com/openai/CLIP), [Q-Former](https://github.com/salesforce/LAVIS) and [ImageBind](https://github.com/facebookresearch/ImageBind).
- 🧩 LLMs: LLaMA and LLaMA2.
## Installation
See [docs/install.md](./docs/install.md).
## Training & Inference
See [docs/pretrain.md](./docs/pretrain.md) and [docs/finetune.md](./docs/finetune.md).
## Demos
* Instruction-tuned LLaMA2: [alpaca](https://alpha-vllm.github.io/demo_presentation/examples/finetune/sg/alpaca.html) & [gorilla](https://alpha-vllm.github.io/demo_presentation/examples/finetune/sg/gorilla.html).
* Chatbot LLaMA2: [dialog_sharegpt](https://alpha-vllm.github.io/demo_presentation/examples/finetune/sg/dialog_sharegpt.html) & [dialog_lima](https://alpha-vllm.github.io/demo_presentation/examples/finetune/sg/dialog_lima.html) & [llama2-chat](https://alpha-vllm.github.io/demo_presentation/examples/finetune/sg/llama2-chat.html).
* Multimodal LLaMA2: [in-context](https://alpha-vllm.github.io/demo_presentation/examples/finetune/mm/in-context.html)





## License
Llama 2 is licensed under the [LLAMA 2 Community License](LICENSE_llama2), Copyright (c) Meta Platforms, Inc. All Rights Reserved.
| closed | 2023-08-05T14:12:41Z | 2024-01-24T16:06:26Z | https://github.com/521xueweihan/HelloGitHub/issues/2589 | [
"机器学习"
] | Seagull619 | 0 |
cobrateam/splinter | automation | 708 | Unable to slice on ElementList in python3 | # Issue
Applying a slice to an ElementList causes a `TypeError: Object of type slice is not JSON serializable`.
This happens because of the `__getitem__` implementation:
``` python
# py3/lib/python3.7/site-packages/splinter/element_list.py
def __getitem__(self, index):
if not isinstance(index, int):
return self.first[index]
...
```
Slicing in Python 3 will cause index to be of type(slice). So when `if not isinstance(index, int)`
gets evaluated, the code will return `return self.first[index]`.
However, this is incorrect. We want to slice (take a subset of the ElementList), but the code being executed will try to slice on the first item of the list.
Instead the code should handle type(slice) independently. Here's an example implementation...
``` python
def __getitem__(self, index):
if isinstance(index, slice):
# Handle None values e.g. slice(None, 3, None) -> slice(0, 3, 1)
return [
self[i]
for i in range(index.start or 0, index.stop or len(self), index.step or 1)
]
...
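# (Editorial aside, not part of the original proposal) slice.indices() is a
# common alternative, since it also normalises negative and out-of-range values:
#   return [self[i] for i in range(*index.indices(len(self)))]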
``` | closed | 2019-08-08T16:18:44Z | 2020-02-28T22:24:02Z | https://github.com/cobrateam/splinter/issues/708 | [
"bug"
] | djmunro | 3 |
autogluon/autogluon | data-science | 4,441 | [BUG] feature_prune_kwargs={"force_prune": True} does not work when tuning_data is on for presets="medium_quality", | ```
Verbosity: 4 (Maximum Logging)
=================== System Info ===================
AutoGluon Version: 1.1.1
Python Version: 3.11.9
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
CPU Count: 8
GPU Count: 1
Memory Avail: 8.36 GB / 23.47 GB (35.6%)
Disk Space Avail: 326.38 GB / 911.84 GB (35.8%)
===================================================
Presets specified: ['medium_quality']
============ fit kwarg info ============
User Specified kwargs:
{'auto_stack': False,
'feature_prune_kwargs': {'force_prune': True},
'verbosity': 4}
Full kwargs:
{'_feature_generator_kwargs': None,
'_save_bag_folds': None,
'ag_args': None,
'ag_args_ensemble': None,
'ag_args_fit': None,
'auto_stack': False,
'calibrate': 'auto',
'ds_args': {'clean_up_fits': True,
'detection_time_frac': 0.25,
'enable_ray_logging': True,
'holdout_data': None,
'holdout_frac': 0.1111111111111111,
'memory_safe_fits': True,
'n_folds': 2,
'n_repeats': 1,
'validation_procedure': 'holdout'},
'excluded_model_types': None,
'feature_generator': 'auto',
'feature_prune_kwargs': {'force_prune': True},
'holdout_frac': None,
'hyperparameter_tune_kwargs': None,
'included_model_types': None,
'keep_only_best': False,
'name_suffix': None,
'num_bag_folds': None,
'num_bag_sets': None,
'num_stack_levels': None,
'pseudo_data': None,
'refit_full': False,
'save_bag_folds': None,
'save_space': False,
'set_best_to_refit_full': False,
'unlabeled_data': None,
'use_bag_holdout': False,
'verbosity': 4}
========================================
Warning: Training may take a very long time because `time_limit` was not specified and `train_data` is large (500000 samples, 88.0 MB).
Consider setting `time_limit` to ensure training finishes within an expected duration or experiment with a small portion of `train_data` to identify an ideal `presets` and `hyperparameters` configuration.
Saving /mnt/d/python_directory_2/models_temp/learner.pkl
Saving /mnt/d/python_directory_2/models_temp/predictor.pkl
Beginning AutoGluon training ...
AutoGluon will save models to "/mnt/d/python_directory_2/models_temp"
Train Data Rows: 500000
Train Data Columns: 41
Tuning Data Rows: 75000
Tuning Data Columns: 41
Label Column: target
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
2 unique label values: [0, 1]
If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during Predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression', 'quantile'])
Problem Type: binary
Preprocessing data ...
Selected class <--> label mapping: class 1 = 1, class 0 = 0
Using Feature Generators to preprocess the data ...
Performing general data preprocessing with merged train & validation data, so validation performance may not accurately reflect performance on new test data
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 8580.40 MB
Train Data (Original) Memory Usage: 92.13 MB (1.1% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Note: Converting 2 features to boolean dtype as they only contain 2 unique values.
Original Features (exact raw dtype, raw dtype):
('datetime64[ns]', 'datetime') : 1 | ['time_date']
('float32', 'float') : 37 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', ...]
Types of features in original data (raw dtype, special dtypes):
('datetime', []) : 1 | ['time_date']
('float', []) : 37 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', ...]
Types of features in processed data (exact raw dtype, raw dtype):
('datetime64[ns]', 'datetime') : 1 | ['time_date']
('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
Types of features in processed data (raw dtype, special dtypes):
('datetime', []) : 1 | ['time_date']
('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
0.2s = Fit runtime
38 features in original data used to generate 38 features in processed data.
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('datetime', []) : 1 | ['time_date']
('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
Types of features in processed data (exact raw dtype, raw dtype):
('datetime64[ns]', 'datetime') : 1 | ['time_date']
('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
Types of features in processed data (raw dtype, special dtypes):
('datetime', []) : 1 | ['time_date']
('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
0.1s = Fit runtime
38 features in original data used to generate 38 features in processed data.
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
Types of features in processed data (exact raw dtype, raw dtype):
('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
Types of features in processed data (raw dtype, special dtypes):
('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
0.1s = Fit runtime
37 features in original data used to generate 37 features in processed data.
Skipping CategoryFeatureGenerator: No input feature with required dtypes.
Fitting DatetimeFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('datetime', []) : 1 | ['time_date']
Types of features in processed data (exact raw dtype, raw dtype):
('int64', 'int') : 5 | ['time_date', 'time_date.year', 'time_date.month', 'time_date.day', 'time_date.dayofweek']
Types of features in processed data (raw dtype, special dtypes):
('int', ['datetime_as_int']) : 5 | ['time_date', 'time_date.year', 'time_date.month', 'time_date.day', 'time_date.dayofweek']
0.1s = Fit runtime
1 features in original data used to generate 5 features in processed data.
Skipping TextSpecialFeatureGenerator: No input feature with required dtypes.
Skipping TextNgramFeatureGenerator: No input feature with required dtypes.
Skipping IdentityFeatureGenerator: No input feature with required dtypes.
Skipping IsNanFeatureGenerator: No input feature with required dtypes.
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
('int', ['datetime_as_int']) : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek']
Types of features in processed data (exact raw dtype, raw dtype):
('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int64', 'int') : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek']
('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
Types of features in processed data (raw dtype, special dtypes):
('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
('int', ['datetime_as_int']) : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek']
0.2s = Fit runtime
41 features in original data used to generate 41 features in processed data.
Stage 5 Generators:
Fitting DropDuplicatesFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
('int', ['datetime_as_int']) : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek']
Types of features in processed data (exact raw dtype, raw dtype):
('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int64', 'int') : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek']
('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
Types of features in processed data (raw dtype, special dtypes):
('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
('int', ['datetime_as_int']) : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek']
0.1s = Fit runtime
41 features in original data used to generate 41 features in processed data.
Useless Original Features (Count: 3): ['STDDEV_30_n_03s__SC_FluctAnal_2_rsrangefit_50_1_logi_prop_r1', 'STOCHRSI_fastk_3_n_03m__CO_HistogramAMI_even_2_5', 'TEMA_3_n_15m__SC_FluctAnal_2_dfa_50_1_2_logi_prop_r1']
These features carry no predictive signal and should be manually investigated.
This is typically a feature which has the same value for all rows.
These features do not need to be present at inference time.
Types of features in original data (exact raw dtype, raw dtype):
('datetime64[ns]', 'datetime') : 1 | ['time_date']
('float32', 'float') : 37 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', ...]
Types of features in original data (raw dtype, special dtypes):
('datetime', []) : 1 | ['time_date']
('float', []) : 37 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', ...]
Types of features in processed data (exact raw dtype, raw dtype):
('float32', 'float') : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int64', 'int') : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek']
('int8', 'int') : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
Types of features in processed data (raw dtype, special dtypes):
('float', []) : 35 | ['DIV_5_n_01s__DN_Mean', 'KAMA_10_n_01s__CO_FirstMin_ac', 'MACDEXT_macd_10_n_03s__SP_Summaries_welch_rect_centroid', 'MIN_15_n_05s__DN_Spread_Std', 'CCI_3_n_10s__SP_Summaries_welch_rect_centroid', ...]
('int', ['bool']) : 2 | ['STOCH_slowk_3_n_01s__SB_BinaryStats_diff_longstretch0', 'MAX_3_n_30s__FC_LocalSimple_mean1_tauresrat']
('int', ['datetime_as_int']) : 4 | ['time_date', 'time_date.month', 'time_date.day', 'time_date.dayofweek']
1.0s = Fit runtime
38 features in original data used to generate 41 features in processed data.
Train Data (Processed) Memory Usage: 95.42 MB (1.1% of available memory)
Data preprocessing and feature engineering runtime = 1.11s ...
AutoGluon will gauge predictive performance using evaluation metric: 'roc_auc'
This metric expects predicted probabilities rather than predicted class labels, so you'll need to use predict_proba() instead of predict()
To change this, specify the eval_metric parameter of Predictor()
Saving /mnt/d/python_directory_2/models_temp/learner.pkl
User-specified model hyperparameters to be fit:
{
'GBM': [{'extra_trees': True, 'ag_args': {'name_suffix': 'XT'}, 'learning_rate': 0.45}, {'learning_rate': 0.45}],
'CAT': {'iterations': 10000, 'learning_rate': 0.75, 'allow_writing_files': False, 'eval_metric': 'Logloss', 'thread_count': 8},
}
Saving /mnt/d/python_directory_2/models_temp/utils/data/X.pkl
Saving /mnt/d/python_directory_2/models_temp/utils/data/y.pkl
Saving /mnt/d/python_directory_2/models_temp/utils/data/X_val.pkl
Saving /mnt/d/python_directory_2/models_temp/utils/data/y_val.pkl
Model configs that will be trained (in order):
LightGBMXT: {'extra_trees': True, 'ag_args': {'name_suffix': 'XT', 'model_type': <class 'autogluon.tabular.models.lgb.lgb_model.LGBModel'>, 'priority': 90}, 'learning_rate': 0.45}
LightGBM: {'learning_rate': 0.45, 'ag_args': {'model_type': <class 'autogluon.tabular.models.lgb.lgb_model.LGBModel'>, 'priority': 90}}
CatBoost: {'iterations': 10000, 'learning_rate': 0.75, 'allow_writing_files': False, 'eval_metric': 'Logloss', 'thread_count': 8, 'ag_args': {'model_type': <class 'autogluon.tabular.models.catboost.catboost_model.CatBoostModel'>, 'priority': 70}}
Fitting 3 L1 models ...
Fitting model: LightGBMXT ...
Dropped 0 of 41 features.
Fitting LightGBMXT with 'num_gpus': 0, 'num_cpus': 8
Fitting 10000 rounds... Hyperparameters: {'learning_rate': 0.45, 'extra_trees': True}
[1] valid_set's binary_logloss: 0.690857
[2] valid_set's binary_logloss: 0.68494
[3] valid_set's binary_logloss: 0.684353
[4] valid_set's binary_logloss: 0.686686
[5] valid_set's binary_logloss: 0.691233
[6] valid_set's binary_logloss: 0.691459
[7] valid_set's binary_logloss: 0.701168
[8] valid_set's binary_logloss: 0.712405
[9] valid_set's binary_logloss: 0.716977
[10] valid_set's binary_logloss: 0.717592
[11] valid_set's binary_logloss: 0.715944
[12] valid_set's binary_logloss: 0.700291
[13] valid_set's binary_logloss: 0.702016
[14] valid_set's binary_logloss: 0.706436
[15] valid_set's binary_logloss: 0.703444
[16] valid_set's binary_logloss: 0.702883
[17] valid_set's binary_logloss: 0.702753
[18] valid_set's binary_logloss: 0.70785
[19] valid_set's binary_logloss: 0.70979
[20] valid_set's binary_logloss: 0.711334
[21] valid_set's binary_logloss: 0.710002
[22] valid_set's binary_logloss: 0.709985
[23] valid_set's binary_logloss: 0.710674
[24] valid_set's binary_logloss: 0.711116
Saving /mnt/d/python_directory_2/models_temp/models/LightGBMXT/model.pkl
Saving /mnt/d/python_directory_2/models_temp/utils/attr/LightGBMXT/y_pred_proba_val.pkl
0.541 = Validation score (roc_auc)
1.13s = Training runtime
0.02s = Validation runtime
3801989.4 = Inference throughput (rows/s | 75000 batch size)
Saving /mnt/d/python_directory_2/models_temp/models/trainer.pkl
Fitting model: LightGBM ...
Dropped 0 of 41 features.
Fitting LightGBM with 'num_gpus': 0, 'num_cpus': 8
Fitting 10000 rounds... Hyperparameters: {'learning_rate': 0.45}
[1] valid_set's binary_logloss: 0.726226
[2] valid_set's binary_logloss: 0.741846
[3] valid_set's binary_logloss: 0.737792
[4] valid_set's binary_logloss: 0.791165
[5] valid_set's binary_logloss: 0.838001
[6] valid_set's binary_logloss: 0.838667
[7] valid_set's binary_logloss: 0.839008
[8] valid_set's binary_logloss: 0.861499
[9] valid_set's binary_logloss: 0.85956
[10] valid_set's binary_logloss: 0.847175
[11] valid_set's binary_logloss: 0.84703
[12] valid_set's binary_logloss: 0.846104
[13] valid_set's binary_logloss: 0.832776
[14] valid_set's binary_logloss: 0.833003
[15] valid_set's binary_logloss: 0.831728
[16] valid_set's binary_logloss: 0.834595
[17] valid_set's binary_logloss: 0.833374
[18] valid_set's binary_logloss: 0.833442
[19] valid_set's binary_logloss: 0.833311
[20] valid_set's binary_logloss: 0.824875
[21] valid_set's binary_logloss: 0.822567
Saving /mnt/d/python_directory_2/models_temp/models/LightGBM/model.pkl
Saving /mnt/d/python_directory_2/models_temp/utils/attr/LightGBM/y_pred_proba_val.pkl
0.5 = Validation score (roc_auc)
1.01s = Training runtime
0.02s = Validation runtime
3795109.1 = Inference throughput (rows/s | 75000 batch size)
Saving /mnt/d/python_directory_2/models_temp/models/trainer.pkl
Fitting model: CatBoost ...
Dropped 0 of 41 features.
Fitting CatBoost with 'num_gpus': 0, 'num_cpus': 8
Catboost model hyperparameters: {'iterations': 10000, 'learning_rate': 0.75, 'random_seed': 0, 'allow_writing_files': False, 'eval_metric': 'Logloss', 'thread_count': 8}
0: learn: 0.6533180 test: 0.7174133 best: 0.7174133 (0) total: 44.1ms remaining: 7m 20s
1: learn: 0.6298120 test: 0.7229527 best: 0.7174133 (0) total: 82.8ms remaining: 6m 53s
2: learn: 0.6126701 test: 0.7287912 best: 0.7174133 (0) total: 122ms remaining: 6m 46s
3: learn: 0.6073046 test: 0.7166248 best: 0.7166248 (3) total: 158ms remaining: 6m 35s
4: learn: 0.5929676 test: 0.7188954 best: 0.7166248 (3) total: 194ms remaining: 6m 27s
5: learn: 0.5770597 test: 0.7453000 best: 0.7166248 (3) total: 227ms remaining: 6m 18s
6: learn: 0.5684997 test: 0.7506057 best: 0.7166248 (3) total: 271ms remaining: 6m 26s
7: learn: 0.5586667 test: 0.6851480 best: 0.6851480 (7) total: 313ms remaining: 6m 31s
8: learn: 0.5533758 test: 0.6924916 best: 0.6851480 (7) total: 356ms remaining: 6m 35s
9: learn: 0.5475293 test: 0.6908779 best: 0.6851480 (7) total: 398ms remaining: 6m 37s
10: learn: 0.5413246 test: 0.6897852 best: 0.6851480 (7) total: 431ms remaining: 6m 31s
11: learn: 0.5372513 test: 0.6887706 best: 0.6851480 (7) total: 466ms remaining: 6m 27s
12: learn: 0.5334703 test: 0.6926530 best: 0.6851480 (7) total: 500ms remaining: 6m 24s
13: learn: 0.5298752 test: 0.6958383 best: 0.6851480 (7) total: 541ms remaining: 6m 25s
14: learn: 0.5268030 test: 0.6941996 best: 0.6851480 (7) total: 581ms remaining: 6m 26s
15: learn: 0.5180547 test: 0.6958168 best: 0.6851480 (7) total: 631ms remaining: 6m 33s
16: learn: 0.5125751 test: 0.6883260 best: 0.6851480 (7) total: 670ms remaining: 6m 33s
17: learn: 0.5112367 test: 0.6874500 best: 0.6851480 (7) total: 719ms remaining: 6m 38s
18: learn: 0.5060452 test: 0.6895496 best: 0.6851480 (7) total: 765ms remaining: 6m 42s
19: learn: 0.5014992 test: 0.6867924 best: 0.6851480 (7) total: 808ms remaining: 6m 43s
20: learn: 0.4988651 test: 0.6899766 best: 0.6851480 (7) total: 848ms remaining: 6m 43s
21: learn: 0.4955378 test: 0.6872165 best: 0.6851480 (7) total: 894ms remaining: 6m 45s
22: learn: 0.4904387 test: 0.6821777 best: 0.6821777 (22) total: 944ms remaining: 6m 49s
23: learn: 0.4883393 test: 0.6846079 best: 0.6821777 (22) total: 985ms remaining: 6m 49s
24: learn: 0.4851253 test: 0.6906374 best: 0.6821777 (22) total: 1.02s remaining: 6m 47s
25: learn: 0.4807459 test: 0.6861663 best: 0.6821777 (22) total: 1.07s remaining: 6m 49s
26: learn: 0.4782397 test: 0.6860673 best: 0.6821777 (22) total: 1.12s remaining: 6m 52s
27: learn: 0.4775118 test: 0.6907217 best: 0.6821777 (22) total: 1.16s remaining: 6m 54s
28: learn: 0.4745040 test: 0.7047771 best: 0.6821777 (22) total: 1.2s remaining: 6m 53s
29: learn: 0.4728311 test: 0.7234329 best: 0.6821777 (22) total: 1.24s remaining: 6m 53s
30: learn: 0.4691995 test: 0.7221445 best: 0.6821777 (22) total: 1.29s remaining: 6m 54s
31: learn: 0.4675475 test: 0.7224027 best: 0.6821777 (22) total: 1.33s remaining: 6m 53s
32: learn: 0.4658990 test: 0.7221766 best: 0.6821777 (22) total: 1.37s remaining: 6m 54s
33: learn: 0.4650485 test: 0.7219495 best: 0.6821777 (22) total: 1.4s remaining: 6m 51s
34: learn: 0.4638509 test: 0.7243998 best: 0.6821777 (22) total: 1.45s remaining: 6m 52s
35: learn: 0.4621719 test: 0.7230784 best: 0.6821777 (22) total: 1.49s remaining: 6m 52s
36: learn: 0.4615038 test: 0.7235578 best: 0.6821777 (22) total: 1.53s remaining: 6m 52s
37: learn: 0.4609860 test: 0.7225358 best: 0.6821777 (22) total: 1.57s remaining: 6m 51s
38: learn: 0.4590647 test: 0.7220221 best: 0.6821777 (22) total: 1.61s remaining: 6m 52s
39: learn: 0.4584479 test: 0.7252117 best: 0.6821777 (22) total: 1.66s remaining: 6m 52s
40: learn: 0.4566977 test: 0.7285681 best: 0.6821777 (22) total: 1.71s remaining: 6m 54s
41: learn: 0.4550106 test: 0.7298182 best: 0.6821777 (22) total: 1.76s remaining: 6m 58s
42: learn: 0.4537528 test: 0.7286350 best: 0.6821777 (22) total: 1.82s remaining: 7m 2s
43: learn: 0.4530702 test: 0.7217189 best: 0.6821777 (22) total: 1.91s remaining: 7m 13s
44: learn: 0.4515620 test: 0.7219945 best: 0.6821777 (22) total: 1.99s remaining: 7m 20s
45: learn: 0.4501892 test: 0.7196554 best: 0.6821777 (22) total: 2.07s remaining: 7m 27s
46: learn: 0.4476642 test: 0.7128948 best: 0.6821777 (22) total: 2.12s remaining: 7m 28s
47: learn: 0.4466513 test: 0.7867258 best: 0.6821777 (22) total: 2.17s remaining: 7m 29s
48: learn: 0.4461165 test: 0.7878465 best: 0.6821777 (22) total: 2.22s remaining: 7m 30s
49: learn: 0.4450580 test: 0.7877115 best: 0.6821777 (22) total: 2.27s remaining: 7m 32s
bestTest = 0.6821776997
bestIteration = 22
Shrink model to first 23 iterations.
Saving /mnt/d/python_directory_2/models_temp/models/CatBoost/model.pkl
Saving /mnt/d/python_directory_2/models_temp/utils/attr/CatBoost/y_pred_proba_val.pkl
0.6525 = Validation score (roc_auc)
2.64s = Training runtime
0.01s = Validation runtime
7341255.5 = Inference throughput (rows/s | 75000 batch size)
Saving /mnt/d/python_directory_2/models_temp/models/trainer.pkl
Loading: /mnt/d/python_directory_2/models_temp/models/LightGBMXT/model.pkl
Performing feature pruning with model: FeatureSelector_LightGBMXT, total time limit: 300s, stop threshold: 10, prune ratio: 0.05, prune threshold: noise.
Number of training samples 500000 is greater than 50000. Using 50000 samples as training data.
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/utils/decorators.py", line 31, in _call
return f(*gargs, **gkwargs)
^^^^^^^^^^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py", line 1167, in fit
self._fit(ag_fit_kwargs=ag_fit_kwargs, ag_post_fit_kwargs=ag_post_fit_kwargs)
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py", line 1173, in _fit
self._learner.fit(**ag_fit_kwargs)
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/tabular/learner/abstract_learner.py", line 159, in fit
return self._fit(X=X, X_val=X_val, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/tabular/learner/default_learner.py", line 122, in _fit
trainer.fit(
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/tabular/trainer/auto_trainer.py", line 125, in fit
self._train_multi_and_ensemble(
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 2589, in _train_multi_and_ensemble
model_names_fit = self.train_multi_levels(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 452, in train_multi_levels
base_model_names, aux_models = self.stack_new_level(
^^^^^^^^^^^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 600, in stack_new_level
core_models = self.stack_new_level_core(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 730, in stack_new_level_core
return self._train_multi(
^^^^^^^^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 2539, in _train_multi
model_names_trained = self._train_multi_initial(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 2422, in _train_multi_initial
candidate_features = self._proxy_model_feature_prune(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 2669, in _proxy_model_feature_prune
candidate_features = selector.select_features(**feature_prune_kwargs, **model_fit_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/utils/feature_selection.py", line 225, in select_features
X, y, X_val, y_val, X_fi, y_fi, prune_threshold, noise_columns, feature_metadata = self.setup(
^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/utils/feature_selection.py", line 549, in setup
X_train, _, y_train, _ = generate_train_test_split(X=X, y=y, problem_type=self.problem_type, random_state=random_state, test_size=drop_ratio)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/site-packages/autogluon/core/utils/utils.py", line 513, in generate_train_test_split
random.seed(random_state)
File "/home/artur/miniforge3/envs/py311_ag/lib/python3.11/random.py", line 160, in seed
raise TypeError('The only supported seed types are: None,\n'
TypeError: The only supported seed types are: None,
int, float, str, bytes, and bytearray.
``` | open | 2024-08-28T19:05:37Z | 2024-11-25T22:47:13Z | https://github.com/autogluon/autogluon/issues/4441 | [
"bug",
"module: tabular",
"Needs Triage"
] | arturdaraujo | 1 |
jina-ai/serve | fastapi | 5,749 | How to start gRPC and HTTP services at the same time | **Describe the feature**
<!-- A clear and concise description of what the feature is. -->
**Your proposal**
<!-- copy paste your code/pull request link -->
---
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. --> | closed | 2023-03-09T04:11:03Z | 2023-03-13T21:54:21Z | https://github.com/jina-ai/serve/issues/5749 | [] | yuanjie-ai | 1 |
tflearn/tflearn | data-science | 452 | Request for Gradient Noise | Any planned support for this simple and powerful enhancement? Or is there a way to do it now?
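For context, a minimal sketch of the technique (my own illustration based on the gradient-noise paper, not tflearn API): each gradient is perturbed with Gaussian noise whose variance decays over training steps.

```python
# Hypothetical sketch of annealed gradient noise (sigma^2 = eta / (1 + t)^gamma),
# assuming eta and gamma follow the values suggested in the paper.
import numpy as np

def noisy_gradient(grad, step, eta=0.3, gamma=0.55):
    sigma = np.sqrt(eta / (1.0 + step) ** gamma)
    return grad + np.random.normal(0.0, sigma, size=grad.shape)
```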
Thanks. | open | 2016-11-10T17:46:57Z | 2016-11-13T16:24:12Z | https://github.com/tflearn/tflearn/issues/452 | [
"enhancement",
"contributions welcome"
] | dcbarton | 1 |
flairNLP/flair | pytorch | 3,453 | [Question]: Multi-Task Learning with use_all_tasks | ### Question
How can I correctly train two tasks simultaneously on a single corpus using the parameter `use_all_tasks=True`? When I attempted to train two models together on one corpus, I encountered a `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`.
```python
multitask_dataset = CONLL_03_DUTCH()
tasks = ['ner', 'pos']
model_1 = initialize_tagger(multitask_dataset, shared_embedding, tasks[0])
model_2 = initialize_tagger(multitask_dataset, shared_embedding, tasks[1])
multitask_model = MultitaskModel([model_1, model_2], use_all_tasks=True, task_ids=tasks)
trainer = ModelTrainer(multitask_model, multitask_dataset)
trainer.fine_tune('resources/taggers/sota-ner-flert',
                  learning_rate=5.0e-6,
                  max_epochs=20)
```
╭─────────────────────────────────────── Traceback (most recent call last) ───────────────────────────────────────╮
│ in :60 │
│ │
│ ❱ 60 trainer.fine_tune('resources/taggers/sota-ner-flert', │
│ │
│ /pyzr/active_venv/lib/python3.10/site-packages/flair/trainers/trainer.py:253 in fine_tune │
│ │
│ ❱ 253 │ │ return self.train_custom( │
│ │
│ /pyzr/active_venv/lib/python3.10/site-packages/flair/trainers/trainer.py:606 in train_custom │
│ │
│ ❱ 606 │ │ │ │ │ │ │ self._backward(scaler.scale(loss)) │
│ │
│ /pyzr/active_venv/lib/python3.10/site-packages/flair/trainers/trainer.py:124 in _backward │
│ │
│ ❱ 124 │ │ loss.backward() │
│ │
│ /pyzr/active_venv/lib/python3.10/site-packages/torch/_tensor.py:487 in backward │
│ │
│ ❱ 487 │ │ torch.autograd.backward( │
│ │
│ /pyzr/active_venv/lib/python3.10/site-packages/torch/autograd/init.py:200 in backward │
│ │
│ ❱ 200 │ Variable._execution_engine.run_backward( # Calls into the C++ engine to run the bac │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
| closed | 2024-05-10T23:57:49Z | 2025-03-11T05:08:33Z | https://github.com/flairNLP/flair/issues/3453 | [
"question"
] | EmbedCrafter | 1 |
zappa/Zappa | flask | 530 | [Migrated] Support option for providing log config file | Originally from: https://github.com/Miserlou/Zappa/issues/1404 by [soloman1124](https://github.com/soloman1124)
Hi, it is essential in our use case to have a consistent logging format. We'd like to do this by supplying our own log config file if possible.
"no-activity",
"auto-closed"
] | jneves | 2 |
autokey/autokey | automation | 25 | Issue pasting into QT5 apps via Keyboard | Neither the original 0.90.4 nor the latest autokey-py3 git copes with pasting phrases or executing scripts in Qt5 applications like Kate, Konsole, or the KDE Plasma 5.6 authorization dialog box using the Keyboard method via a hotkey. Meanwhile, it works well in any other application except Qt5-based ones.
When you press the hotkey it either prints nothing, prints some garbage symbols (which are not even in the phrase), or does something unpredictable (like closing tabs).
I've tried the -l option but didn't see anything in the output that would explain the problem.
---
KDE Plasma 5.6.3
Kate and Konsole version 16.04.0
KDE Frameworks 5.21.0
Qt 5.6.0 (built against 5.6.0)
| closed | 2016-05-05T13:55:22Z | 2016-12-01T20:56:31Z | https://github.com/autokey/autokey/issues/25 | [] | roman-parkhunovskyi | 2 |
adbar/trafilatura | web-scraping | 144 | Wrong title? | I was expecting the title in the example below to be foo, but trafilatura returns bar...
```
$ cat foo.html
<html>
<head>
<title>foo</title>
</head>
<body>
<h1>bar</h1>
lorem ipsum
The quick brown fox jumps over the lazy dog
</body>
</html>
```
```
$ trafilatura --json <foo.html
{"title": "bar", "author": null, "hostname": null, "date": null, "categories": "", "tags": "",
"fingerprint": "8+8hT6tZdPEdu8WBvHcvas4gHfw=", "id": null, "license": null,
"raw-text": "bar lorem ipsum The quick brown fox jumps over the lazy dog",
"source": null, "source-hostname": null, "excerpt": null,
"text": "bar\nlorem ipsum\nThe quick brown fox jumps over the lazy dog", "comments": ""}
``` | closed | 2021-11-18T09:53:20Z | 2021-11-26T17:46:45Z | https://github.com/adbar/trafilatura/issues/144 | [
"question"
] | pieterhartel | 1 |
piccolo-orm/piccolo | fastapi | 482 | Transaction object has no method add | I am attempting to use a transaction in one of my migrations however, I am unable to add to the transaction due to the `add` method not existing. I believe I am following the docs [here](https://piccolo-orm.readthedocs.io/en/0.7.7/piccolo/query_types/transactions.html) correctly.
The error
>🚀 Running 1 migration:
running 2022-04-08T10:51:00:796722
The command failed.
'Transaction' object has no attribute 'add'
This is part of my migration
```python
async def forwards():
manager = MigrationManager(migration_id=ID, app_name="", description=DESCRIPTION)
transaction = Table._meta.db.transaction()
print(f"running {ID}")
transaction.add(
Table.raw(
"""ALTER TABLE task__logs DROP CONSTRAINT task__logs_pkey;"""
)
)
transaction.add(Table.raw("""ALTER TABLE task__logs DROP COLUMN id;"""))
transaction.add(
Table.raw(
"""
ALTER TABLE
task__logs
ADD CONSTRAINT task__logs_pkey
PRIMARY KEY (task_name, started_at);
"""
)
)
``` | closed | 2022-04-08T18:17:42Z | 2022-04-09T14:49:11Z | https://github.com/piccolo-orm/piccolo/issues/482 | [] | theelderbeever | 2 |
ExpDev07/coronavirus-tracker-api | fastapi | 120 | golang API Wrappers | go-corona is a Golang client library for accessing global coronavirus (COVID-19, SARS-CoV-2) outbreak data.
Github: https://github.com/itsksaurabh/go-corona | closed | 2020-03-21T08:24:01Z | 2020-03-21T08:46:26Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/120 | [] | itsksaurabh | 0 |
geopandas/geopandas | pandas | 3,332 | BUG: long, lat geometry swapped ! geopandas.to_file( "out.kml", driver="KML",engine='pyogrio' ) | - [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of geopandas.
geopandas 0.14.4
pyogrio 0.8.0
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
#### Code Sample, a copy-pastable example
```
import geopandas as gpd
import pandas as pd
import pyogrio
# Sample data
data = {
'Name': ['Location1', 'Location2'],
'Lat': [ 14, 15],
'Lng': [100, 101]
}
# Create a DataFrame
df = pd.DataFrame(data)
gdf = gpd.GeoDataFrame( df , crs='EPSG:4326' , geometry=gpd.points_from_xy( df.Lng, df.Lat ) )
# Save DataFrame to a KML file
gdf.to_file("./CACHE/MyLoc.gpkg", layer='NCDC_CORS', driver="GPKG", engine='pyogrio') # ok!
gdf.to_file("./CACHE/MyLoc.kml", layer='NCDC_CORS', driver="KML", engine='pyogrio') # bug!
print("File saved successfully!")
```
#### Problem description
In the KML file I got 14,100 instead of 100,14, and so on ...
<Point><coordinates>14,100</coordinates></Point>
<Point><coordinates>15,101</coordinates></Point>
#### Expected Output
<Point><coordinates>100,14</coordinates></Point>
<Point><coordinates>101,15</coordinates></Point>
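As a side note (an editorial sketch, not part of the original report), the in-memory geometry can be checked before writing, which pins the swap on the writer rather than on `points_from_xy`:

```python
# Hypothetical sanity check: geometry.x should hold longitudes, geometry.y latitudes.
assert list(gdf.geometry.x) == [100, 101]
assert list(gdf.geometry.y) == [14, 15]
```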
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.12.3 | packaged by Anaconda, Inc. | (main, May 6 2024, 19:46:43) [GCC 11.2.0]
executable : /home/phisan/miniconda3/envs/dbdf/bin/python3
machine : Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.8.0
GEOS lib : None
GDAL : 3.6.2
GDAL data dir: /home/phisan/miniconda3/envs/dbdf/share/gdal
PROJ : 9.3.1
PROJ data dir: /home/phisan/miniconda3/envs/dbdf/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.14.4
numpy : 1.26.4
pandas : 2.2.1
pyproj : 3.6.1
shapely : 2.0.1
fiona : 1.9.5
geoalchemy2: None
geopy : None
matplotlib : 3.8.4
mapclassify: 2.5.0
pygeos : None
pyogrio : 0.8.0
psycopg2 : None
pyarrow : None
rtree : 1.0.1
</details>
| closed | 2024-06-09T06:04:21Z | 2024-06-17T09:08:10Z | https://github.com/geopandas/geopandas/issues/3332 | [
"bug",
"upstream issue"
] | phisan-chula | 3 |
microsoft/nni | machine-learning | 5,540 | getting cuda_cores: Function Not Found | **Describe the issue**:
The nni process is not running with nni 3.0b1. I also tried more stable nni versions (2.10 and 2.8).
I get the following error:
```
root@9be40a2183c3:/app# nnictl create --config config.yml
Traceback (most recent call last):
File "/usr/local/bin/nnictl", line 8, in <module>
sys.exit(parse_args())
File "/usr/local/lib/python3.10/dist-packages/nni/tools/nnictl/nnictl.py", line 497, in parse_args
args.func(args)
File "/usr/local/lib/python3.10/dist-packages/nni/tools/nnictl/launcher.py", line 91, in create_experiment
exp.start(port, debug, RunMode.Detach)
File "/usr/local/lib/python3.10/dist-packages/nni/experiment/experiment.py", line 135, in start
self._start_impl(port, debug, run_mode, None, [])
File "/usr/local/lib/python3.10/dist-packages/nni/experiment/experiment.py", line 94, in _start_impl
config = self.config.canonical_copy()
File "/usr/local/lib/python3.10/dist-packages/nni/experiment/config/base.py", line 166, in canonical_copy
canon._canonicalize([])
File "/usr/local/lib/python3.10/dist-packages/nni/experiment/config/experiment_config.py", line 121, in _canonicalize
if algo is not None and algo.name == '_none_': # type: ignore
AttributeError: 'dict' object has no attribute 'name'
```
**Environment**:
- NNI version: nni3.0b1
- Training service (local|remote|pai|aml|etc): local
- Python version: python3.10
- PyTorch/TensorFlow version: 2.8
- Is running in Docker?: yes
**Configuration**:
- Experiment config (remember to remove secrets!):
```
searchSpaceFile: search_space.json
trialCommand: python3.10 model.py # NOTE: change "python3" to "python" if you are using Windows
trialGpuNumber: 1
trialConcurrency: 1
maxExperimentDuration: 156h
maxTrialNumber: 200
tuner:
name: TPE
classArgs:
optimize_mode: maximize
trainingService:
platform: local
useActiveGpu: True
```
- Search space:
```
{
"en_decoder": { "_type": "choice", "_value": [7,8,9] },
"k1" : { "_type": "choice", "_value": [3,5,7,9,11] },
"k2" : { "_type": "choice", "_value": [3,5,7,9,11] },
"k3" : { "_type": "choice", "_value": [3,5,7,9,11] },
"k4" : { "_type": "choice", "_value": [3,5,7,9,11] },
"k5" : { "_type": "choice", "_value": [3,5,7,9,11] },
"k6" : { "_type": "choice", "_value": [3,5,7,9,11] },
"k7" : { "_type": "choice", "_value": [3,5,7,9,11] },
"k8" : { "_type": "choice", "_value": [3,5,7,9,11] },
"k9" : { "_type": "choice", "_value": [3,5,7,9,11] },
"f1": { "_type": "choice", "_value": [8,16,32] },
"f2": { "_type": "choice", "_value": [8,16,32] },
"f3": { "_type": "choice", "_value": [8,16,32] },
"f4": { "_type": "choice", "_value": [8,16,32] },
"f5": { "_type": "choice", "_value": [8,16,32] },
"f6": { "_type": "choice", "_value": [8,16,32] },
"f7": { "_type": "choice", "_value": [8,16,32] },
"f8": { "_type": "choice", "_value": [8,16,32] },
"f9": { "_type": "choice", "_value": [8,16,32] },
"res_cnn": { "_type": "choice", "_value": [1,2,3] },
"res_f1": { "_type": "choice", "_value": [8,16,32] },
"res_f2": { "_type": "choice", "_value": [8,16,32] },
"res_f3": { "_type": "choice", "_value": [8,16,32] },
"res_k1": { "_type": "choice", "_value": [3,5] },
"res_k2": { "_type": "choice", "_value": [3,5] },
"res_k3": { "_type": "choice", "_value": [3,5] },
"res_drop1": {"_type": "uniform", "_value": [0.1,0.3]},
"res_drop2": {"_type": "uniform", "_value": [0.1,0.3]},
"res_drop3": {"_type": "uniform", "_value": [0.1,0.3]},
"bilstm": { "_type": "choice", "_value": [1,2]},
"u1": { "_type": "choice", "_value": [8,16] },
"u2": { "_type": "choice", "_value": [8,16] },
"drop": {"_type": "uniform", "_value": [0.1,0.3]},
"pu": { "_type": "choice", "_value": [8,16] },
"su": { "_type": "choice", "_value": [8,16] },
"batch_size": { "_type": "choice", "_value": [50,80,100]},
"epochs":{ "_type": "choice", "_value": [10,15,20,25,30] }
}
```
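As a cross-check (an editorial sketch, not from the original report), the same experiment can also be launched through NNI's Python `Experiment` API, which bypasses `nnictl`'s YAML parsing; whether this sidesteps the `'dict' object has no attribute 'name'` path above is an assumption:

```python
# Hypothetical sketch using the documented nni.experiment API for a local run.
import json
from nni.experiment import Experiment

experiment = Experiment('local')
experiment.config.trial_command = 'python3.10 model.py'
experiment.config.trial_code_directory = '.'
experiment.config.trial_gpu_number = 1
experiment.config.trial_concurrency = 1
experiment.config.max_trial_number = 200
with open('search_space.json') as f:
    experiment.config.search_space = json.load(f)
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
experiment.config.training_service.use_active_gpu = True
experiment.run(8080)
```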
**Log message**:
- nnimanager.log:
```
[2023-05-04 12:04:13] INFO (main) Start NNI manager
[2023-05-04 12:04:13] INFO (RestServer) Starting REST server at port 8080, URL prefix: "/"
[2023-05-04 12:04:13] INFO (RestServer) REST server started.
[2023-05-04 12:04:13] INFO (NNIDataStore) Datastore initialization done
[2023-05-04 12:04:14] INFO (NNIManager) Starting experiment: yajeqwud
[2023-05-04 12:04:14] INFO (NNIManager) Setup training service...
[2023-05-04 12:04:14] INFO (NNIManager) Setup tuner...
[2023-05-04 12:04:14] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2023-05-04 12:04:14] INFO (NNIManager) Add event listeners
[2023-05-04 12:04:14] INFO (LocalV3.local) Start
[2023-05-04 12:04:14] INFO (NNIManager) NNIManager received command from dispatcher: ID,
[2023-05-04 12:04:14] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"en_decoder": 8, "k1": 9, "k2": 11, "k3": 5, "k4": 11, "k5": 3, "k6": 5, "k7": 9, "k8": 5, "k9": 7, "f1": 32, "f2": 16, "f3": 16, "f4": 32, "f5": 16, "f6": 16, "f7": 8, "f8": 16, "f9": 16, "res_cnn": 3, "res_f1": 32, "res_f2": 32, "res_f3": 16, "res_k1": 5, "res_k2": 5, "res_k3": 3, "res_drop1": 0.15125745390112305, "res_drop2": 0.21885863079171017, "res_drop3": 0.19313110293876518, "bilstm": 2, "u1": 16, "u2": 8, "drop": 0.2758735965780924, "pu": 8, "su": 16, "batch_size": 80, "epochs": 15}, "parameter_index": 0}
[2023-05-04 12:04:15] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 0,
hyperParameters: {
value: '{"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"en_decoder": 8, "k1": 9, "k2": 11, "k3": 5, "k4": 11, "k5": 3, "k6": 5, "k7": 9, "k8": 5, "k9": 7, "f1": 32, "f2": 16, "f3": 16, "f4": 32, "f5": 16, "f6": 16, "f7": 8, "f8": 16, "f9": 16, "res_cnn": 3, "res_f1": 32, "res_f2": 32, "res_f3": 16, "res_k1": 5, "res_k2": 5, "res_k3": 3, "res_drop1": 0.15125745390112305, "res_drop2": 0.21885863079171017, "res_drop3": 0.19313110293876518, "bilstm": 2, "u1": 16, "u2": 8, "drop": 0.2758735965780924, "pu": 8, "su": 16, "batch_size": 80, "epochs": 15}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2023-05-04 12:04:15] INFO (GpuInfoCollector) Forced update: {
gpuNumber: 1,
driverVersion: '470.182.03',
cudaVersion: 11060,
gpus: [
{
index: 0,
model: 'NVIDIA A100-SXM4-80GB',
gpuMemory: 85198045184,
freeGpuMemory: 85197914112,
gpuCoreUtilization: 0,
gpuMemoryUtilization: 0
}
],
processes: [],
success: true,
failures: [ 'cuda_cores: Function Not Found', 'process: Function Not Found' ]
}
[2023-05-04 12:04:17] INFO (LocalV3.local) Register directory trial_code = /app
```
- dispatcher.log:
```
[2023-05-04 13:04:14] INFO (nni.tuner.tpe/MainThread) Using random seed 2140802229
[2023-05-04 13:04:14] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher started
[2023-05-04 13:04:14] INFO (nni.runtime.msg_dispatcher/Thread-1 (command_queue_worker)) Initial search space: {'en_decoder': {'_type': 'choice', '_value': [7, 8, 9]}, 'k1': {'_type': 'choice', '_value': [3, 5, 7, 9, 11]}, 'k2': {'_type': 'choice', '_value': [3, 5, 7, 9, 11]}, 'k3': {'_type': 'choice', '_value': [3, 5, 7, 9, 11]}, 'k4': {'_type': 'choice', '_value': [3, 5, 7, 9, 11]}, 'k5': {'_type': 'choice', '_value': [3, 5, 7, 9, 11]}, 'k6': {'_type': 'choice', '_value': [3, 5, 7, 9, 11]}, 'k7': {'_type': 'choice', '_value': [3, 5, 7, 9, 11]}, 'k8': {'_type': 'choice', '_value': [3, 5, 7, 9, 11]}, 'k9': {'_type': 'choice', '_value': [3, 5, 7, 9, 11]}, 'f1': {'_type': 'choice', '_value': [8, 16, 32]}, 'f2': {'_type': 'choice', '_value': [8, 16, 32]}, 'f3': {'_type': 'choice', '_value': [8, 16, 32]}, 'f4': {'_type': 'choice', '_value': [8, 16, 32]}, 'f5': {'_type': 'choice', '_value': [8, 16, 32]}, 'f6': {'_type': 'choice', '_value': [8, 16, 32]}, 'f7': {'_type': 'choice', '_value': [8, 16, 32]}, 'f8': {'_type': 'choice', '_value': [8, 16, 32]}, 'f9': {'_type': 'choice', '_value': [8, 16, 32]}, 'res_cnn': {'_type': 'choice', '_value': [1, 2, 3]}, 'res_f1': {'_type': 'choice', '_value': [8, 16, 32]}, 'res_f2': {'_type': 'choice', '_value': [8, 16, 32]}, 'res_f3': {'_type': 'choice', '_value': [8, 16, 32]}, 'res_k1': {'_type': 'choice', '_value': [3, 5]}, 'res_k2': {'_type': 'choice', '_value': [3, 5]}, 'res_k3': {'_type': 'choice', '_value': [3, 5]}, 'res_drop1': {'_type': 'uniform', '_value': [0.1, 0.3]}, 'res_drop2': {'_type': 'uniform', '_value': [0.1, 0.3]}, 'res_drop3': {'_type': 'uniform', '_value': [0.1, 0.3]}, 'bilstm': {'_type': 'choice', '_value': [1, 2]}, 'u1': {'_type': 'choice', '_value': [8, 16]}, 'u2': {'_type': 'choice', '_value': [8, 16]}, 'drop': {'_type': 'uniform', '_value': [0.1, 0.3]}, 'pu': {'_type': 'choice', '_value': [8, 16]}, 'su': {'_type': 'choice', '_value': [8, 16]}, 'batch_size': {'_type': 'choice', '_value': [50, 80, 100]}, 'epochs': {'_type': 'choice', '_value': [10, 15, 20, 25, 30]}}
[2023-05-04 13:05:14] ERROR (nni.runtime.command_channel.websocket.channel/MainThread) Failed to receive command. Retry in 0s
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 968, in transfer_data
message = await self.read_message()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 1038, in read_message
frame = await self.read_data_frame(max_size=self.max_size)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 1113, in read_data_frame
frame = await self.read_frame(max_size)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 1170, in read_frame
frame = await Frame.read(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/framing.py", line 69, in read
data = await reader(2)
File "/usr/lib/python3.10/asyncio/streams.py", line 708, in readexactly
await self._wait_for_data('readexactly')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 99, in _receive_command
command = conn.receive()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 103, in receive
msg = _wait(self._ws.recv())
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 568, in recv
await self.ensure_open()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/protocol.py", line 953, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: sent 1011 (unexpected error) keepalive ping timeout; no close frame received
[2023-05-04 13:05:34] ERROR (nni.runtime.command_channel.websocket.channel/MainThread) Failed to receive command. Retry in 1s
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 98, in _receive_command
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
[2023-05-04 13:05:55] ERROR (nni.runtime.command_channel.websocket.channel/MainThread) Failed to receive command. Retry in 2s
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 98, in _receive_command
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
[2023-05-04 13:06:17] ERROR (nni.runtime.command_channel.websocket.channel/MainThread) Failed to receive command. Retry in 3s
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 98, in _receive_command
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
[2023-05-04 13:06:40] ERROR (nni.runtime.command_channel.websocket.channel/MainThread) Failed to receive command. Retry in 4s
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 98, in _receive_command
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
[2023-05-04 13:06:44] WARNING (nni.runtime.command_channel.websocket.channel/MainThread) Failed to receive command. Last retry
[2023-05-04 13:07:04] INFO (nni.runtime.msg_dispatcher_base/MainThread) Report error to NNI manager: Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/__main__.py", line 61, in main
dispatcher.run()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/msg_dispatcher_base.py", line 69, in run
command, data = self._channel._receive()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/tuner_command_channel/channel.py", line 270, in _receive
command = self._channel.receive()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 59, in receive
command = self._receive_command()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 108, in _receive_command
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
[2023-05-04 13:07:04] ERROR (nni.runtime.command_channel.websocket.channel/MainThread) Failed to send command. Retry in 0s
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/__main__.py", line 61, in main
dispatcher.run()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/msg_dispatcher_base.py", line 69, in run
command, data = self._channel._receive()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/tuner_command_channel/channel.py", line 270, in _receive
command = self._channel.receive()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 59, in receive
command = self._receive_command()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 108, in _receive_command
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 45, in send
conn.send(command)
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 90, in send
_wait(self._ws.send(nni.dump(message)))
AttributeError: 'NoneType' object has no attribute 'send'
[2023-05-04 13:07:24] ERROR (nni.runtime.command_channel.websocket.channel/MainThread) Failed to send command. Retry in 1s
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 44, in send
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
[2023-05-04 13:07:46] ERROR (nni.runtime.command_channel.websocket.channel/MainThread) Failed to send command. Retry in 2s
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 44, in send
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
[2023-05-04 13:08:08] ERROR (nni.runtime.command_channel.websocket.channel/MainThread) Failed to send command. Retry in 3s
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 44, in send
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
[2023-05-04 13:08:31] ERROR (nni.runtime.command_channel.websocket.channel/MainThread) Failed to send command. Retry in 4s
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 44, in send
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
[2023-05-04 13:08:35] WARNING (nni.runtime.command_channel.websocket.channel/MainThread) Failed to send command {'type': 'ER', 'content': 'Traceback (most recent call last):\n File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__\n await protocol.handshake(\n File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake\n status_code, response_headers = await self.read_http_response()\n File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response\n status_code, reason, headers = await read_response(self.reader)\n File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response\n status_line = await read_line(stream)\n File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line\n line = await stream.readline()\n File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline\n line = await self.readuntil(sep)\n File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil\n await self._wait_for_data(\'readuntil\')\n File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data\n await self._waiter\nasyncio.exceptions.CancelledError\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for\n return fut.result()\nasyncio.exceptions.CancelledError\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File "/usr/local/lib/python3.10/dist-packages/nni/__main__.py", line 61, in main\n dispatcher.run()\n File "/usr/local/lib/python3.10/dist-packages/nni/runtime/msg_dispatcher_base.py", line 69, in run\n command, data = self._channel._receive()\n File "/usr/local/lib/python3.10/dist-packages/nni/runtime/tuner_command_channel/channel.py", line 270, in _receive\n command = self._channel.receive()\n File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 59, in receive\n command = self._receive_command()\n File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 108, in _receive_command\n conn = self._ensure_conn()\n File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn\n self._conn.connect()\n File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect\n self._ws = _wait(_connect_async(self._url))\n File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait\n return future.result()\n File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result\n return self.__get_result()\n File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result\n raise self._exception\n File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async\n return await websockets.connect(url, max_size=None) # type: ignore\n File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__\n return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)\n File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for\n raise exceptions.TimeoutError() from exc\nasyncio.exceptions.TimeoutError\n'}. 
Last retry
[2023-05-04 13:08:55] ERROR (nni.runtime.msg_dispatcher_base/MainThread) Connection to NNI manager is broken. Failed to report error.
[2023-05-04 13:08:55] ERROR (nni.main/MainThread)
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/nni/__main__.py", line 85, in <module>
main()
File "/usr/local/lib/python3.10/dist-packages/nni/__main__.py", line 61, in main
dispatcher.run()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/msg_dispatcher_base.py", line 69, in run
command, data = self._channel._receive()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/tuner_command_channel/channel.py", line 270, in _receive
command = self._channel.receive()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 59, in receive
command = self._receive_command()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 108, in _receive_command
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
```
- nnictl stdout and stderr:
```
--------------------------------------------------------------------------------
Experiment yajeqwud start: 2023-05-04 13:04:12.999020
--------------------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 120, in read_response
status_line = await read_line(stream)
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/http.py", line 194, in read_line
line = await stream.readline()
File "/usr/lib/python3.10/asyncio/streams.py", line 524, in readline
line = await self.readuntil(sep)
File "/usr/lib/python3.10/asyncio/streams.py", line 616, in readuntil
await self._wait_for_data('readuntil')
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
return fut.result()
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/dist-packages/nni/__main__.py", line 85, in <module>
main()
File "/usr/local/lib/python3.10/dist-packages/nni/__main__.py", line 61, in main
dispatcher.run()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/msg_dispatcher_base.py", line 69, in run
command, data = self._channel._receive()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/tuner_command_channel/channel.py", line 270, in _receive
command = self._channel.receive()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 59, in receive
command = self._receive_command()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 108, in _receive_command
conn = self._ensure_conn()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/channel.py", line 75, in _ensure_conn
self._conn.connect()
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 65, in connect
self._ws = _wait(_connect_async(self._url))
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 121, in _wait
return future.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/dist-packages/nni/runtime/command_channel/websocket/connection.py", line 135, in _connect_async
return await websockets.connect(url, max_size=None) # type: ignore
File "/usr/local/lib/python3.10/dist-packages/websockets/legacy/client.py", line 659, in __await_impl_timeout__
return await asyncio.wait_for(self.__await_impl__(), self.open_timeout)
File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
--------------------------------------------------------------------------------
Experiment yajeqwud start: 2023-05-04 13:04:12.999020
--------------------------------------------------------------------------------
```
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | open | 2023-05-04T11:12:48Z | 2023-05-17T02:08:53Z | https://github.com/microsoft/nni/issues/5540 | [] | TayyabaZainab0807 | 8 |
RayVentura/ShortGPT | automation | 105 | 🐛 [Bug]: Error during short creation | ### What happened?
Here is the error:
```
Checking requirements...
- Requirements : List of requirements and installed version:
edge-tts==6.1.8
ffmpeg==1.4
gradio==3.38.0==3.38.0
moviepy==1.0.3==1.0.3
openai==0.28.0
pillow==9.0.0==9.0.0
proglog==0.1.10
progress==1.6
protobuf==3.20.0==3.20.0
python-dotenv==1.0.0
questionary==2.0.1
tiktoken==0.3.3
tinydb==4.8.0
tinymongo==0.2.0
torch==2.0.1
torchaudio==2.0.2
whisper-timestamped==1.12.20
yt-dlp==2023.7.6
Running on local URL: http://0.0.0.0:31415
To create a public link, set `share=True` in `launch()`.
Step 1 _generateScript
Step 2 _generateTempAudio
Step 3 _speedUpAudio
ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
[mp3 @ 0x55617508c680] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from '.editing_assets/facts_shorts_assets/781d20ae10164a1594e0ff23/temp_audio_path.wav':
Duration: 00:01:22.29, start: 0.000000, bitrate: 127 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, mono, fltp, 128 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (mp3 (mp3float) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to '.editing_assets/facts_shorts_assets/781d20ae10164a1594e0ff23/audio_voice.wav':
Metadata:
ISFT : Lavf58.76.100
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, mono, s16, 705 kb/s
Metadata:
encoder : Lavc58.134.100 pcm_s16le
size= 4908kB time=00:00:56.96 bitrate= 705.8kbits/s speed= 399x
video:0kB audio:4908kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.001552%
Step 4 _timeCaptions
Detected language: Turkish
100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 5698/5698 [00:02<00:00, 2215.57frames/s]
Step 5 _generateImageSearchTerms
Step 6 _generateImageUrls
Search engine queries for images...: 100%|█████████████████████████████████████████████████████████████████████████| 21/21 [00:10<00:00, 2.06it/s]
Step 7 _chooseBackgroundMusic
Step 8 _chooseBackgroundVideo
Step 9 _prepareBackgroundAssets
https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1695499219/ei/c-8OZZHqGNCB8gO_y7eABg/ip/78.170.113.195/id/22d604eb2d333202/itag/616/source/youtube/requiressl/yes/ratebypass/yes/pfa/1/wft/1/sgovp/clen%3D120909901%3Bdur%3D364.830%3Bgir%3Dyes%3Bitag%3D356%3Blmt%3D1679719935054569/hls_chunk_host/rr3---sn-u0g3uxax3-pnus.googlevideo.com/mh/U5/mm/31,29/mn/sn-u0g3uxax3-pnus,sn-nv47zn7r/ms/au,rdu/mv/m/mvi/3/pl/24/initcwndbps/1077500/vprv/1/playlist_type/DVR/dover/13/txp/443C434/mt/1695477208/fvip/1/short_key/1/keepalive/yes/fexp/24007246/beids/24350017/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,pfa,wft,sgovp,vprv,playlist_type/sig/AOq0QJ8wRgIhAPdZUq9Bn30x3gvbuGdbHNjY1lmM6yARxJQRzXUZgA6DAiEAocIsemXZfWNFP-fwahQA_O2LogwdY6QJkoC13GZSfbg%3D/lsparams/hls_chunk_host,mh,mm,mn,ms,mv,mvi,pl,initcwndbps/lsig/AG3C_xAwRQIhAPvCaNGxcus2y_LgDFsy5bqdOyY65Mr17RHG6H4R96iCAiAAx3SS9gYQUSDSl87wR0QTgJGU_La8XvaMoPWYbag2QA%3D%3D/playlist/index.m3u8 364.83 56.981769 .editing_assets/facts_shorts_assets/781d20ae10164a1594e0ff23/clipped_background.mp4
Error File "/home/bc/Projects/OpenSource/shortgpt/gui/ui_tab_short_automation.py", line 103, in create_short
for step_num, step_info in shortEngine.makeContent():
File "/home/bc/Projects/OpenSource/shortgpt/shortGPT/engine/abstract_content_engine.py", line 74, in makeContent
self.stepDict[currentStep]()
File "/home/bc/Projects/OpenSource/shortgpt/shortGPT/engine/content_short_engine.py", line 109, in _prepareBackgroundAssets
self._db_background_trimmed = extract_random_clip_from_video(
File "/home/bc/Projects/OpenSource/shortgpt/shortGPT/editing_utils/handle_videos.py", line 55, in extract_random_clip_from_video
.input(video_url, ss=start_time, t=clip_duration)
```
### What type of browser are you seeing the problem on?
Firefox
### What type of Operating System are you seeing the problem on?
Linux
### Python Version
3.10.12
### Application Version
latest one from github
### Expected Behavior
Create a short video.
### Error Message
```shell
Checking requirements...
- Requirements : List of requirements and installed version:
edge-tts==6.1.8
ffmpeg==1.4
gradio==3.38.0==3.38.0
moviepy==1.0.3==1.0.3
openai==0.28.0
pillow==9.0.0==9.0.0
proglog==0.1.10
progress==1.6
protobuf==3.20.0==3.20.0
python-dotenv==1.0.0
questionary==2.0.1
tiktoken==0.3.3
tinydb==4.8.0
tinymongo==0.2.0
torch==2.0.1
torchaudio==2.0.2
whisper-timestamped==1.12.20
yt-dlp==2023.7.6
Running on local URL: http://0.0.0.0:31415
To create a public link, set `share=True` in `launch()`.
Step 1 _generateScript
Step 2 _generateTempAudio
Step 3 _speedUpAudio
ffmpeg version 4.4.2-0ubuntu0.22.04.1 Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 11 (Ubuntu 11.2.0-19ubuntu1)
configuration: --prefix=/usr --extra-version=0ubuntu0.22.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
[mp3 @ 0x55617508c680] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from '.editing_assets/facts_shorts_assets/781d20ae10164a1594e0ff23/temp_audio_path.wav':
Duration: 00:01:22.29, start: 0.000000, bitrate: 127 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, mono, fltp, 128 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (mp3 (mp3float) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to '.editing_assets/facts_shorts_assets/781d20ae10164a1594e0ff23/audio_voice.wav':
Metadata:
ISFT : Lavf58.76.100
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, mono, s16, 705 kb/s
Metadata:
encoder : Lavc58.134.100 pcm_s16le
size= 4908kB time=00:00:56.96 bitrate= 705.8kbits/s speed= 399x
video:0kB audio:4908kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.001552%
Step 4 _timeCaptions
Detected language: Turkish
100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 5698/5698 [00:02<00:00, 2215.57frames/s]
Step 5 _generateImageSearchTerms
Step 6 _generateImageUrls
Search engine queries for images...: 100%|█████████████████████████████████████████████████████████████████████████| 21/21 [00:10<00:00, 2.06it/s]
Step 7 _chooseBackgroundMusic
Step 8 _chooseBackgroundVideo
Step 9 _prepareBackgroundAssets
https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1695499219/ei/c-8OZZHqGNCB8gO_y7eABg/ip/78.170.113.195/id/22d604eb2d333202/itag/616/source/youtube/requiressl/yes/ratebypass/yes/pfa/1/wft/1/sgovp/clen%3D120909901%3Bdur%3D364.830%3Bgir%3Dyes%3Bitag%3D356%3Blmt%3D1679719935054569/hls_chunk_host/rr3---sn-u0g3uxax3-pnus.googlevideo.com/mh/U5/mm/31,29/mn/sn-u0g3uxax3-pnus,sn-nv47zn7r/ms/au,rdu/mv/m/mvi/3/pl/24/initcwndbps/1077500/vprv/1/playlist_type/DVR/dover/13/txp/443C434/mt/1695477208/fvip/1/short_key/1/keepalive/yes/fexp/24007246/beids/24350017/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,pfa,wft,sgovp,vprv,playlist_type/sig/AOq0QJ8wRgIhAPdZUq9Bn30x3gvbuGdbHNjY1lmM6yARxJQRzXUZgA6DAiEAocIsemXZfWNFP-fwahQA_O2LogwdY6QJkoC13GZSfbg%3D/lsparams/hls_chunk_host,mh,mm,mn,ms,mv,mvi,pl,initcwndbps/lsig/AG3C_xAwRQIhAPvCaNGxcus2y_LgDFsy5bqdOyY65Mr17RHG6H4R96iCAiAAx3SS9gYQUSDSl87wR0QTgJGU_La8XvaMoPWYbag2QA%3D%3D/playlist/index.m3u8 364.83 56.981769 .editing_assets/facts_shorts_assets/781d20ae10164a1594e0ff23/clipped_background.mp4
Error File "/home/bc/Projects/OpenSource/shortgpt/gui/ui_tab_short_automation.py", line 103, in create_short
for step_num, step_info in shortEngine.makeContent():
File "/home/bc/Projects/OpenSource/shortgpt/shortGPT/engine/abstract_content_engine.py", line 74, in makeContent
self.stepDict[currentStep]()
File "/home/bc/Projects/OpenSource/shortgpt/shortGPT/engine/content_short_engine.py", line 109, in _prepareBackgroundAssets
self._db_background_trimmed = extract_random_clip_from_video(
File "/home/bc/Projects/OpenSource/shortgpt/shortGPT/editing_utils/handle_videos.py", line 55, in extract_random_clip_from_video
.input(video_url, ss=start_time, t=clip_duration)
```
### Code to produce this issue.
_No response_
### Screenshots/Assets/Relevant links
_No response_ | open | 2023-09-23T14:03:23Z | 2023-09-24T13:03:22Z | https://github.com/RayVentura/ShortGPT/issues/105 | [
"bug"
] | MyraBaba | 2 |
strawberry-graphql/strawberry | asyncio | 3,041 | Switching from strawberry.union to Annotated Union results in unexpected type when resolving fields | ## Describe the Bug
I had a `strawberry.union`-style union type declaration and have been trying to migrate to the updated Annotated union style of declaration, but that change results in a TypeError.
The original code something looks like this:
```python
SomeStuff = strawberry.union(
"SomeStuff",
types=(
Resource,
Process,
),
)
```
and according to the [docs](https://strawberry.rocks/docs/types/union#defining-unions) it should be:
```python
SomeStuff = Annotated[Union[Resource, Process], strawberry.union("SomeStuff")]
```
The latter then results in the following error when the code starts to resolve the schema (say, when a `Thing` has a field of `SomeStuff` type):
```
../../../.pyenv/versions/3.10.11/envs/llm-ct/lib/python3.10/site-packages/graphql/type/definition.py:946: in fields
raise cls(f"{self.name} fields cannot be resolved. {error}") from error
E TypeError: Thing fields cannot be resolved. Unexpected type 'typing.Annotated[typing.Union[app.models.graphql.resource.Resource, app.models.graphql.process.Process], <strawberry.union.StrawberryUnion object at 0x10b00b0a0>]'
```
Any idea what I might be doing wrong or missing? Does anything else have to change as well, besides the definition of `SomeStuff`?
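
For reference, here is a minimal self-contained sketch of the declaration style I am trying to follow, without the lazy indirection. `Resource`, `Process` and `Query` here are placeholder types for illustration only, not my real models, so this may or may not reproduce the error; I include it only to show the pattern I understand the docs to describe:

```python
from typing import Annotated, Union

import strawberry


@strawberry.type
class Resource:
    name: str


@strawberry.type
class Process:
    pid: int


# New-style declaration, as in the docs: the union only exists via Annotated.
SomeStuff = Annotated[Union[Resource, Process], strawberry.union("SomeStuff")]


@strawberry.type
class Query:
    @strawberry.field
    def observed_object(self) -> SomeStuff:
        # Either member type can be returned; Strawberry picks the union member.
        return Resource(name="example")


schema = strawberry.Schema(query=Query)
print(schema)  # the printed SDL should contain: union SomeStuff = Resource | Process
```

The real setup differs in that the union members live in separate modules (hence the circular-import issue) and the field uses the lazy reference shown under Additional Context below.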
## System Information
- Operating system: MacOS
- Strawberry version (if applicable): 0.204.0
## Additional Context
The definition of `Thing` is something like this, not sure if the lazy loading has any effect on the outcome:
```python
@strawberry.interface
class Thing:
name: str
@strawberry.field
async def observed_object(
self, info: Info
) -> Annotated[
"SomeStuff",
strawberry.lazy("app.models.graphql.stuffs"),
]:
# [...snip...]
```
(Lazy loading is needed as the real case has some circularity in dependencies).
I wonder whether the problem is really here: that the union-type definition of `SomeStuff` would now be incorrect, going by the "resolving a union" [section](https://strawberry.rocks/docs/types/union#resolving-a-union) in the docs. But that would be quite a lot of boilerplate (in the real setup I have more than 2 types in the union). | open | 2023-08-16T08:15:17Z | 2025-03-20T15:56:20Z | https://github.com/strawberry-graphql/strawberry/issues/3041 | [
"bug"
] | imrehg | 0 |
pytest-dev/pytest-xdist | pytest | 493 | xdist causes INTERNAL errors | plugins: forked-1.1.3, xdist-1.30.0, parallel-0.0.9, rerunfailures-7.0, cov-2.8.1, logger-0.5.1, metadata-1.8.0, html-2.0.1, timeout-1.3.3, repeat-0.8.0, instafail-0.4.1
~~~~~~~~~~~~~~~~~~~~~ Stack of Thread-4 (140296258430720) ~~~~~~~~~~~~~~~~~~~~~~
File "/usr/lib/python3.6/threading.py", line 884, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/builds/admin-ftw15/admin/sct/src/testcases/common/memory_checker.py", line 43, in run
self._stop_event.wait(self._sleep_period)
File "/usr/lib/python3.6/threading.py", line 551, in wait
signaled = self._cond.wait(timeout)
File "/usr/lib/python3.6/threading.py", line 299, in wait
gotit = waiter.acquire(True, timeout)
~~~~~~~~~~~~~~~~~~~~~ Stack of Thread-3 (140296266823424) ~~~~~~~~~~~~~~~~~~~~~~
File "/usr/lib/python3.6/threading.py", line 884, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/viper/transports/admin_websocket/websocket_server_transport.py", line 78, in run
srv._server = srv._event_loop.run_forever()
File "/usr/lib/python3.6/asyncio/base_events.py", line 422, in run_forever
self._run_once()
File "/usr/lib/python3.6/asyncio/base_events.py", line 1398, in _run_once
event_list = self._selector.select(timeout)
File "/usr/lib/python3.6/selectors.py", line 445, in select
fd_event_list = self._epoll.poll(timeout, max_ev)
~~~~~~~~~~~~~~~~~~~~~ Stack of <unknown> (140296440977152) ~~~~~~~~~~~~~~~~~~~~~
File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/execnet/gateway_base.py", line 285, in _perform_spawn
reply.run()
File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/execnet/gateway_base.py", line 220, in run
self._result = func(*args, **kwargs)
File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/execnet/gateway_base.py", line 967, in _thread_receiver
msg = Message.from_io(io)
File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/execnet/gateway_base.py", line 432, in from_io
header = io.read(9) # type 1, channel 4, payload 4
File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/execnet/gateway_base.py", line 400, in read
data = self._read(numbytes - len(buf))
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/_pytest/main.py", line 213, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/_pytest/main.py", line 257, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/xdist/remote.py", line 70, in pytest_runtestloop
INTERNALERROR> self.run_one_test(torun)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/xdist/remote.py", line 87, in run_one_test
INTERNALERROR> self.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pytest_rerunfailures.py", line 176, in pytest_runtest_protocol
INTERNALERROR> reports = runtestprotocol(item, nextitem=nextitem, log=False)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/_pytest/runner.py", line 87, in runtestprotocol
INTERNALERROR> reports.append(call_and_report(item, "call", log))
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/_pytest/runner.py", line 169, in call_and_report
INTERNALERROR> report = hook.pytest_runtest_makereport(item=item, call=call)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/callers.py", line 203, in _multicall
INTERNALERROR> gen.send(outcome)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/src/testcases/conftest.py", line 169, in pytest_runtest_makereport
INTERNALERROR> browser().save_screenshot(destination_file)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 1055, in save_screenshot
INTERNALERROR> return self.get_screenshot_as_file(filename)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 1032, in get_screenshot_as_file
INTERNALERROR> png = self.get_screenshot_as_png()
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 1064, in get_screenshot_as_png
INTERNALERROR> return base64.b64decode(self.get_screenshot_as_base64().encode('ascii'))
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 1074, in get_screenshot_as_base64
INTERNALERROR> return self.execute(Command.SCREENSHOT)['value']
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 319, in execute
INTERNALERROR> response = self.command_executor.execute(driver_command, params)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/selenium/webdriver/remote/remote_connection.py", line 374, in execute
INTERNALERROR> return self._request(command_info[0], url, body=data)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/selenium/webdriver/remote/remote_connection.py", line 397, in _request
INTERNALERROR> resp = self._conn.request(method, url, body=body, headers=headers)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/urllib3/request.py", line 76, in request
INTERNALERROR> method, url, fields=fields, headers=headers, **urlopen_kw
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/urllib3/request.py", line 97, in request_encode_url
INTERNALERROR> return self.urlopen(method, url, **extra_kw)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/urllib3/poolmanager.py", line 330, in urlopen
INTERNALERROR> response = conn.urlopen(method, u.request_uri, **kw)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
INTERNALERROR> chunked=chunked,
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/urllib3/connectionpool.py", line 421, in _make_request
INTERNALERROR> six.raise_from(e, None)
INTERNALERROR> File "<string>", line 3, in raise_from
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
INTERNALERROR> httplib_response = conn.getresponse()
INTERNALERROR> File "/usr/lib/python3.6/http/client.py", line 1331, in getresponse
INTERNALERROR> response.begin()
INTERNALERROR> File "/usr/lib/python3.6/http/client.py", line 297, in begin
INTERNALERROR> version, status, reason = self._read_status()
INTERNALERROR> File "/usr/lib/python3.6/http/client.py", line 258, in _read_status
INTERNALERROR> line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
INTERNALERROR> File "/usr/lib/python3.6/socket.py", line 586, in readinto
INTERNALERROR> return self._sock.recv_into(b)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pytest_timeout.py", line 140, in handler
INTERNALERROR> timeout_sigalrm(item, params.timeout)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pytest_timeout.py", line 313, in timeout_sigalrm
INTERNALERROR> pytest.fail('Timeout >%ss' % timeout)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/_pytest/outcomes.py", line 111, in fail
INTERNALERROR> raise Failed(msg=msg, pytrace=pytrace)
INTERNALERROR> Failed: Timeout >90.0s
R [ 20%]
[ 20%]
src/testcases/tests_gui/configuration/commissioning_wizard/site/legacy_coverage/test_pnp_radios.py R [ 20%]
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/_pytest/main.py", line 213, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/_pytest/main.py", line 257, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/xdist/dsession.py", line 112, in pytest_runtestloop
INTERNALERROR> self.loop_once()
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/xdist/dsession.py", line 135, in loop_once
INTERNALERROR> call(**kwargs)
INTERNALERROR> File "/builds/admin-ftw15/admin/sct/.tox/test/lib/python3.6/site-packages/xdist/dsession.py", line 177, in worker_workerfinished
INTERNALERROR> assert not crashitem, (crashitem, node)
INTERNALERROR> AssertionError: ('src/testcases/tests_gui/configuration/commissioning_wizard/site/lte4500_AirScale_Indoor_SFN/test_lte_opr_ac_8483_site.py::test_lte_opr_ac_8483_site', <WorkerController gw3>)
INTERNALERROR> assert not 'src/testcases/tests_gui/configuration/commissioning_wizard/site/lte4500_AirScale_Indoor_SFN/test_lte_opr_ac_8483_site.py::test_lte_opr_ac_8483_site
| closed | 2019-12-10T01:20:01Z | 2019-12-10T11:35:34Z | https://github.com/pytest-dev/pytest-xdist/issues/493 | [] | aaronxiang0926 | 1 |
wkentaro/labelme | deep-learning | 825 | [BUG] Ubuntu standalone file - pyinstaller version problem | Problem:
Running `pyinstaller labelme.spec` creates an x-sharedlib file instead of an x-executable file.
Env:
- OS: Ubuntu 16.04 & 18.04
- Labelme Version 4.2.10
- newest spec file
Solution:
PyInstaller 4.1 creates an x-sharedlib file, which can only be run from the terminal.
PyInstaller 3.6 creates an x-executable file, which can be run by double-clicking.
I was stuck on this problem for several days and found no related information, so I figured it out myself.
I hope this helps anyone who runs into the same problem.
"issue::bug"
] | shutingh | 0 |
litestar-org/litestar | api | 3,487 | Bug: Test error | ### Description
From https://github.com/litestar-org/litestar/actions/runs/9039276158/job/24841832854?pr=3486
```
________________________________ test_sync_app _________________________________
[gw0] linux -- Python 3.8.18 /home/runner/work/litestar/litestar/.venv/bin/python
self = <sqlalchemy.engine.base.Connection object at 0x7efe10fc11f0>
def _rollback_impl(self) -> None:
if self._has_events or self.engine._has_events:
self.dispatch.rollback(self)
if self._still_open_and_dbapi_connection_is_valid:
if self._echo:
if self._is_autocommit_isolation():
self._log_info(
"ROLLBACK using DBAPI connection.rollback(), "
"DBAPI should ignore due to autocommit mode"
)
else:
self._log_info("ROLLBACK")
try:
> self.engine.dialect.do_rollback(self.connection)
.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py:1119:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sqlalchemy.dialects.sqlite.pysqlite.SQLiteDialect_pysqlite object at 0x7efe20072d60>
dbapi_connection = <sqlalchemy.pool.base._ConnectionFairy object at 0x7efe200bc220>
def do_rollback(self, dbapi_connection):
> dbapi_connection.rollback()
E sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 139626483770944 and this is thread id 139626466989632.
.venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py:692: ProgrammingError
The above exception was the direct cause of the following exception:
self = <litestar.middleware._internal.exceptions.middleware.ExceptionHandlerMiddleware object at 0x7efe20090880>
scope = {'_aa_connection_state': {'_sqlalchemy_db_session': <sqlalchemy.orm.session.Session object at 0x7efe2007beb0>}, 'app':...Litestar object at 0x7efe202f0420>, 'client': ('testclient', 50000), 'extensions': {'http.response.template': {}}, ...}
receive = <function TestClientTransport.create_receive.<locals>.receive at 0x7efe20071f70>
send = <function Litestar._wrap_send.<locals>.wrapped_send at 0x7efe10fc0280>
async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
"""ASGI-callable.
Args:
scope: The ASGI connection scope.
receive: The ASGI receive function.
send: The ASGI send function.
Returns:
None
"""
scope_state = ScopeState.from_scope(scope)
async def capture_response_started(event: Message) -> None:
if event["type"] == "http.response.start":
scope_state.response_started = True
await send(event)
try:
> await self.app(scope, receive, capture_response_started)
litestar/middleware/_internal/exceptions/middleware.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
litestar/_asgi/asgi_router.py:99: in __call__
await asgi_app(scope, receive, send)
litestar/routes/http.py:84: in handle
await response(scope, receive, send)
litestar/response/base.py:194: in __call__
await self.start_response(send=send)
litestar/response/base.py:165: in start_response
await send(event)
litestar/middleware/_internal/exceptions/middleware.py:155: in capture_response_started
await send(event)
litestar/app.py:864: in wrapped_send
await hook(message, scope)
litestar/concurrency.py:62: in sync_to_thread
return await _run_sync_asyncio(fn, *args, **kwargs)
litestar/concurrency.py:38: in _run_sync_asyncio
return await asyncio.get_running_loop().run_in_executor(get_asyncio_executor(), bound_fn) # pyright: ignore
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/concurrent/futures/thread.py:57: in run
result = self.fn(*self.args, **self.kwargs)
.venv/lib/python3.8/site-packages/advanced_alchemy/extensions/litestar/plugins/init/config/sync.py:52: in default_before_send_handler
session.close()
.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py:2468: in close
self._close_impl(invalidate=False)
.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py:2537: in _close_impl
transaction.close(invalidate)
.venv/lib/python3.8/site-packages/sqlalchemy/orm/state_changes.py:139: in _go
ret_value = fn(self, *arg, **kw)
.venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py:1362: in close
transaction.close()
.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py:2577: in close
self._do_close()
.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py:2715: in _do_close
self._close_impl()
.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py:2701: in _close_impl
self._connection_rollback_impl()
.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py:2693: in _connection_rollback_impl
self.connection._rollback_impl()
.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py:1121: in _rollback_impl
self._handle_dbapi_exception(e, None, None, None, None)
.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py:2344: in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
.venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py:1119: in _rollback_impl
self.engine.dialect.do_rollback(self.connection)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sqlalchemy.dialects.sqlite.pysqlite.SQLiteDialect_pysqlite object at 0x7efe20072d60>
dbapi_connection = <sqlalchemy.pool.base._ConnectionFairy object at 0x7efe200bc220>
def do_rollback(self, dbapi_connection):
> dbapi_connection.rollback()
E sqlalchemy.exc.ProgrammingError: (sqlite3.ProgrammingError) SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 139626483770944 and this is thread id 139626466989632.
E (Background on this error at: https://sqlalche.me/e/20/f405)
.venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py:692: ProgrammingError
The above exception was the direct cause of the following exception:
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7efe200fda30>
def test_sync_app(monkeypatch: MonkeyPatch) -> None:
from docs.examples.plugins.sqlalchemy_init_plugin import sqlalchemy_sync
monkeypatch.setattr(sqlalchemy_sync.sqlalchemy_config, "connection_string", "sqlite://")
with TestClient(app=sqlalchemy_sync.app) as client:
> res = client.get("/sqlalchemy-app")
tests/examples/test_plugins/test_sqlalchemy_init_plugin.py:16:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
litestar/testing/client/sync_client.py:193: in get
return Client.get(
.venv/lib/python3.8/site-packages/httpx/_client.py:1045: in get
return self.request(
litestar/testing/client/sync_client.py:149: in request
return Client.request(
.venv/lib/python3.8/site-packages/httpx/_client.py:821: in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
.venv/lib/python3.8/site-packages/httpx/_client.py:908: in send
response = self._send_handling_auth(
.venv/lib/python3.8/site-packages/httpx/_client.py:936: in _send_handling_auth
response = self._send_handling_redirects(
.venv/lib/python3.8/site-packages/httpx/_client.py:973: in _send_handling_redirects
response = self._send_single_request(request)
.venv/lib/python3.8/site-packages/httpx/_client.py:1009: in _send_single_request
response = transport.handle_request(request)
litestar/testing/transport.py:173: in handle_request
raise exc
litestar/testing/transport.py:165: in handle_request
portal.call(
.venv/lib/python3.8/site-packages/anyio/from_thread.py:288: in call
return cast(T_Retval, self.start_task_soon(func, *args).result())
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/concurrent/futures/_base.py:444: in result
return self.__get_result()
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/concurrent/futures/_base.py:389: in __get_result
raise self._exception
.venv/lib/python3.8/site-packages/anyio/from_thread.py:217: in _call_func
retval = await retval_or_awaitable
litestar/app.py:591: in __call__
await self.asgi_handler(scope, receive, self._wrap_send(send=send, scope=scope)) # type: ignore[arg-type]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <litestar.middleware._internal.exceptions.middleware.ExceptionHandlerMiddleware object at 0x7efe20090880>
scope = {'_aa_connection_state': {'_sqlalchemy_db_session': <sqlalchemy.orm.session.Session object at 0x7efe2007beb0>}, 'app':...Litestar object at 0x7efe202f0420>, 'client': ('testclient', 50000), 'extensions': {'http.response.template': {}}, ...}
receive = <function TestClientTransport.create_receive.<locals>.receive at 0x7efe20071f70>
send = <function Litestar._wrap_send.<locals>.wrapped_send at 0x7efe10fc0280>
async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
"""ASGI-callable.
Args:
scope: The ASGI connection scope.
receive: The ASGI receive function.
send: The ASGI send function.
Returns:
None
"""
scope_state = ScopeState.from_scope(scope)
async def capture_response_started(event: Message) -> None:
if event["type"] == "http.response.start":
scope_state.response_started = True
await send(event)
try:
await self.app(scope, receive, capture_response_started)
except Exception as e:
if scope_state.response_started:
> raise LitestarException("Exception caught after response started") from e
E litestar.exceptions.base_exceptions.LitestarException: Exception caught after response started
litestar/middleware/_internal/exceptions/middleware.py:161: LitestarException
------------------------------ Captured log call -------------------------------
ERROR sqlalchemy.pool.impl.SingletonThreadPool:base.py:381 Exception closing connection <sqlite3.Connection object at 0x7efe20890e40>
Traceback (most recent call last):
File "/home/runner/work/litestar/litestar/.venv/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 379, in _close_connection
self._dialect.do_close(connection)
File "/home/runner/work/litestar/litestar/.venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 701, in do_close
dbapi_connection.close()
sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 139626483770944 and this is thread id 139626584458816.
```
### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
PR into main
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-05-10T23:47:17Z | 2025-03-20T15:54:42Z | https://github.com/litestar-org/litestar/issues/3487 | [
"Bug :bug:"
] | peterschutt | 0 |
polarsource/polar | fastapi | 4,728 | Cannot delete organization | ### Description
I just created https://polar.sh/dashboard/python-icalendar
But I would like to delete it.
### Current Behavior
When I go to the settings of that organization, I see nothing to remove it.
https://polar.sh/dashboard/python-icalendar/settings
### Expected Behavior
When I go to the settings of that organization, I would expect a way to remove it.
### Screenshots

### Environment:
- Operating System: Linux
- Browser (if applicable): Firefox
 | closed | 2024-12-23T14:31:01Z | 2024-12-23T19:36:59Z | https://github.com/polarsource/polar/issues/4728 | [
"bug"
] | niccokunzmann | 1 |
microsoft/nni | data-science | 5,664 | Large performance drop after applying pruning + speedup | **Describe the issue**:
Hello, I applied the L1NormPruner as in the tutorial (to all Conv2d layers in my model), but the accuracy drops from ~50 mIoU to 0 and it doesn't seem to improve much when I retrain afterwards.
I have tried a total_sparsity of 0.9, 0.8, 0.7, 0.6 and 0.5 and it happens in all cases.
Pruning was supposed to slightly decrease the accuracy but not so drastically, right?
Do you have any guesses on why this is happening?
**Code that I'm using:**
```python
import torch
# NNI 2.x-style imports; `model` and `device` are defined elsewhere in my code.
from nni.compression.pytorch.pruning import L1NormPruner
from nni.compression.pytorch.speedup import ModelSpeedup

config_list = [{
    'op_types': ['Conv2d'],
    'total_sparsity': 0.9,
}]

pruner = L1NormPruner(model, config_list)

# compress the model and generate the masks
_, masks = pruner.compress()
# need to unwrap the model, if the model is wrapped before speedup
pruner._unwrap_model()

ModelSpeedup(model, torch.rand(1, 3, 480, 640).to(device), masks).speedup_model()
```
| open | 2023-08-17T10:35:45Z | 2023-08-17T10:36:08Z | https://github.com/microsoft/nni/issues/5664 | [] | CatarinaGouveia | 0 |
pytest-dev/pytest-django | pytest | 940 | Prioritizing pytest's --first flags over Django's "usual" test order breaks a mix of TestCase and TransactionTestCase with data migrations | This was introduced in #819. If I have data migrations in my project and two test cases, one `TestCase` and one `TransactionTestCase`, prioritizing the first flags over the order introduced in #214 can break the `TestCase`.
Running tests with `--new-first` and modifying the `TransactionTestCase` after the `TestCase` results in an empty database for the latter. If the project uses data migrations to create default data in the database, this breaks the `TestCase` that relies on that data. Running tests with `--failed-first` does the same if the `TransactionTestCase` is the last to have failed.
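A minimal sketch of the setup I mean (the model and the data migration are hypothetical, just to illustrate the ordering problem):

```python
# tests.py -- hypothetical example; assumes a data migration has already
# inserted Plan(name="default") via RunPython.
from django.test import TestCase, TransactionTestCase

from myapp.models import Plan  # hypothetical model


class UsesMigratedData(TestCase):
    def test_default_plan_exists(self):
        # Relies on the row created by the data migration.
        self.assertTrue(Plan.objects.filter(name="default").exists())


class TouchesDatabase(TransactionTestCase):
    def test_something(self):
        # TransactionTestCase flushes the database afterwards, which also
        # removes the rows created by data migrations.
        list(Plan.objects.all())
```

With `--new-first` (when the `TransactionTestCase` file was modified more recently) or `--failed-first` (when it failed last), `TouchesDatabase` runs before `UsesMigratedData`, which then sees an empty table.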
This means for a project using data migrations in this way and having both `TestCase`s and `TransactionTestCase`s the first flags cannot be used reliably currently. | open | 2021-07-13T15:04:18Z | 2021-07-13T15:11:32Z | https://github.com/pytest-dev/pytest-django/issues/940 | [] | dfn-certling | 0 |
qubvel-org/segmentation_models.pytorch | computer-vision | 328 | Some encoders have no shape - how to deal with this? | When building the encoder, it always needs the encoder shape (see figure 1), but some encoders, like MobileNet and EfficientNet, don't have it (see figure 2), which results in the error shown in figure 3. How can I deal with this problem? Thanks!



| closed | 2021-01-04T03:03:58Z | 2021-01-04T10:11:25Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/328 | [] | MiaoRain | 2 |
PrefectHQ/prefect | data-science | 17,517 | DaskTaskRunner issues unexpected "a future was garbage collected" warning | ### Bug summary
When submitting multiple dependent tasks in a for loop and only keeping the reference to the last one, I get a warning
```
"A future was garbage collected before it resolved. Please call `.wait()` or `.result()` on futures to ensure they resolve."
```
Here is an example to reproduce, which follows the test that tests for this warning introduced in #14148.
```
from prefect import task, flow
from prefect_dask import DaskTaskRunner
from prefect.cache_policies import NO_CACHE
from prefect.futures import wait
task_runner = DaskTaskRunner(cluster_kwargs={"n_workers": 1})
@task()
def task1():
return 42
@task()
def task2(test):
return test * 2
@flow(task_runner=task_runner)
def test_flow():
results = []
n = 10
for _ in range(n):
result1 = task1.submit()
result2 = task2.submit(result1)
results.append(result2)
wait(results)
if __name__ == "__main__":
test_flow()
```
- The warning gets issued `n-1` times, when each of the first `n-1` instances of `result1` goes out of scope
- My expectation would be that the instances of `task2` hold references to the futures, and therefore `__del__` would not be called
- I get the same behaviour with the `RayTaskRunner`
- The `ConcurrentTaskRunner` does not have this issue
### Version info
```Text
$ prefect version
Version: 3.2.9
API version: 0.8.4
Python version: 3.11.9
Git commit: 27eb408c
Built: Fri, Feb 28, 2025 8:12 PM
OS/Arch: linux/x86_64
Profile: prefect3
Server type: server
Pydantic version: 2.9.2
Integrations:
prefect-dask: 0.3.3
prefect-ray: 0.4.3
```
### Additional context
_No response_ | open | 2025-03-18T12:04:09Z | 2025-03-18T12:04:09Z | https://github.com/PrefectHQ/prefect/issues/17517 | [
"bug"
] | athrpf | 0 |
milesmcc/shynet | django | 69 | Fix the NPM_FILE_PATTERNS setting to work on Windows | Instead of hardcoding the "/" in the `NPM_FILE_PATTERNS` entries, you need to use `os.path.join`, as per: https://github.com/kevin1024/django-npm/issues/15
For example:
```
NPM_FILE_PATTERNS = {
"a17t": [os.path.join("dist", "a17t.css"), os.path.join("dist", "tailwind.css")],
"apexcharts": [os.path.join("dist", "apexcharts.min.js")],
"litepicker": [os.path.join("dist", "js", "main.js")],
"turbolinks": [os.path.join("dist", "turbolinks.js")],
"stimulus": [os.path.join("dist", "stimulus.umd.js")],
"inter-ui": [os.path.join("Inter (web)", "*")],
"@fortawesome": [os.path.join("fontawesome-free", "js", "all.min.js")],
}
```
TIA | closed | 2020-08-02T06:23:11Z | 2020-08-11T21:57:13Z | https://github.com/milesmcc/shynet/issues/69 | [
"bug",
"enhancement",
"good first issue"
] | spapas | 2 |
iperov/DeepFaceLab | deep-learning | 5,515 | device_lib.list_local_devices() doesn't return in the CUDA build up to 2080 | Any batch script hangs; I traced it, and it freezes in TensorFlow when it calls **device_lib.list_local_devices()**
In: C:\DFL\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\device.py
GPU: Geforce 750 Ti
Win 10
```
import tensorflow as tf
from tensorflow.python.client import device_lib
print(f"list_local_devices()={device_lib.list_local_devices()}")
```
I tried several things: I checked whether there was an incompatibility with the newer CUDA installed system-wide, but there shouldn't be, as the build has its own directory and uses an old TensorFlow 1.13. The paths are set by setenv.bat, but in addition I added them to the system's Path, and I also tried copying the .dll files both into the .bat folder and next to main.py.
I've been using the DirectX12 version as an alternative. The GPU is a 750 Ti and initially I thought that it was just too old, but I just discovered it's supposed to work, as it supports newer CUDA versions. Also, there's no error message, but the call to "list_local_devices" doesn't return.
If I run setenv.bat, call the build's Python, import tensorflow and call list_local_devices interactively, the function recognizes the GPU and prints correct output, but then the CLI session hangs. The system also has an integrated Intel HD 530 GPU.
I understand that this seems to be a TensorFlow or driver issue, but has anyone solved it? Thanks.
```
c:\DFL\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8>python
Python 3.6.8 (tags/v3.6.8:3c6b436a57, Dec 24 2018, 00:16:47) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
c:\DFL\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
c:\DFL\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
c:\DFL\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
c:\DFL\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
c:\DFL\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
c:\DFL\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
>>>
>>> from tensorflow.python.client import device_lib
>>> print(f"list_local_devices()={device_lib.list_local_devices()}")
2022-05-09 22:11:18.429936: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2022-05-09 22:11:18.551876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 750 Ti major: 5 minor: 0 memoryClockRate(GHz): 1.0845
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 194.50MiB
2022-05-09 22:11:18.552651: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
```
```python
@staticmethod
def _get_tf_devices_proc(q : multiprocessing.Queue):
print("_get_tf_devices_proc")
print(sys.platform[0:3])
if sys.platform[0:3] == 'win':
compute_cache_path = Path(os.environ['APPDATA']) / 'NVIDIA' / ('ComputeCache_ALL')
os.environ['CUDA_CACHE_PATH'] = str(compute_cache_path)
print("CUDA_CACHE_PATH={os.environ['CUDA_CACHE_PATH']}")
if not compute_cache_path.exists():
io.log_info("Caching GPU kernels...")
compute_cache_path.mkdir(parents=True, exist_ok=True)
import tensorflow
tf_version = tensorflow.version.VERSION
print(f"tf_version={tf_version}")
#if tf_version is None:
# tf_version = tensorflow.version.GIT_VERSION
if tf_version[0] == 'v':
tf_version = tf_version[1:]
if tf_version[0] == '2':
tf = tensorflow.compat.v1
else:
tf = tensorflow
import logging
# Disable tensorflow warnings
tf_logger = logging.getLogger('tensorflow')
tf_logger.setLevel(logging.ERROR)
from tensorflow.python.client import device_lib
print("AFTER: from tensorflow.python.client import device_lib")
devices = []
print(f"list_local_devices()={device_lib.list_local_devices()}") ### HANGS HERE ###
physical_devices = device_lib.list_local_devices()
physical_devices_f = {}
print("BEFORE: for dev in physical_devices:")
```
| open | 2022-05-09T19:24:31Z | 2023-06-09T13:52:12Z | https://github.com/iperov/DeepFaceLab/issues/5515 | [] | Twenkid | 4 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,199 | [Feature Request]: Add Ascend NPU npu_fusion_attention to accelerate training | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
- a simple description of npu_fusion_attention operator
- add Ascend NPU npu_fusion_attention to accelerate training
### Proposed workflow
1. Go to add description of npu_fusion_attention operator
2. add Ascend NPU npu_fusion_attention to accelerate training
### Additional information
_No response_ | open | 2024-07-12T08:52:41Z | 2024-07-24T08:48:30Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16199 | [
"enhancement"
] | kevin19891229 | 2 |
ray-project/ray | machine-learning | 51,129 | [RLLIB] Support for Gymnasium Graph Spaces | ### Description
Gymnasium [Graph Spaces](https://gymnasium.farama.org/api/spaces/composite/#gymnasium.spaces.Graph) describe graph-based environments, which are becoming more common. The Graph space has fixed node and edge spaces, but allows dynamically sized node/edge counts for those given spaces. This allows one space to describe all graphs with node/edge counts ranging into the thousands.
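For concreteness, a minimal sketch of the space in question (feature shapes are arbitrary here, and the exact `sample()` keywords are quoted from memory):

```python
# Illustration only: a Graph space with fixed per-node/per-edge feature spaces
# but a dynamic number of nodes and edges in each sampled graph.
from gymnasium.spaces import Box, Discrete, Graph

obs_space = Graph(
    node_space=Box(low=-1.0, high=1.0, shape=(3,)),  # fixed per-node features
    edge_space=Discrete(4),                          # fixed per-edge labels
)

small = obs_space.sample(num_nodes=5)    # a GraphInstance with 5 nodes
large = obs_space.sample(num_nodes=500)  # same space, very different size
print(small.nodes.shape, large.nodes.shape)
```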
While this space can practically be described in Ray through padding to a maximum node/edge size, this is undesirable, especially for GNN-based models. For instance, it requires setting a maximum edge/node size, which can be problematic as a primary benefit of using GNNs is scaling from smaller graphs to larger, unknown sizes. If the size is set arbitrarily large, then small graph examples will involve a lot of padding. While padding can be helpful performance-wise, for some graph problems it can be advantageous to allow dynamic node/edge sizes to limit the amount of data transfer.
For my personal project, I hacked a version of Ray to achieve graph spaces. The primary change was to re-create the tree library calls to support structures involving graphs. I have it implemented [here](https://github.com/dkupsh/ray), and it works, at least on a temporary, hacky basis.
### Use case
Support for Graph spaces makes it possible to directly use the Gymnasium Graph space: you don't have to pad graphs, and the nodes/edges can be dynamically sized. This will help improve GNN-based environments significantly. | open | 2025-03-06T17:42:21Z | 2025-03-11T15:54:54Z | https://github.com/ray-project/ray/issues/51129 | [
"enhancement",
"P3",
"rllib",
"rllib-env"
] | dkupsh | 0 |
plotly/dash | jupyter | 2,863 | [BUG] running does not support wildcards in ids | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.17.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dash-bootstrap-components 1.6.0
```
**Describe the bug**
Using wildcards in ids for Outputs in the running kwarg leads to an `itempath is undefined` error.
**Expected behavior**
Wildcards are properly resolved and matching components are updated while the callback is running.
**MVE**
```
from time import sleep
import dash_bootstrap_components as dbc
from dash import MATCH, Input, Output, callback
layout = [
dbc.Button(
"Test1",
id={"component": "button", "index": "1"},
),
dbc.Button(
"Test2",
id={"component": "button", "index": "2"},
),
]
@callback(
Output({"component": "button", "index": MATCH}, "color"),
Input({"component": "button", "index": MATCH}, "n_clicks"),
running=[
(Output({"component": "button", "index": MATCH}, "children"), "running", "finished"),
],
prevent_initial_call=True,
)
def test(_) -> str:
sleep(3)
return "warning"
```
| closed | 2024-05-20T14:53:45Z | 2024-06-12T13:04:15Z | https://github.com/plotly/dash/issues/2863 | [
"bug",
"sev-2"
] | tlauli | 1 |
minimaxir/textgenrnn | tensorflow | 56 | Unable to open largetext weights. | I successfully trained a model using train_from_largetext_file and saved the weights as 'weights/recipes.hdf5'.
However, I don't seem to be able to reopen the saved weights. When I try:
```python
recipes.save('weights/recipes.hdf5')
recipes2 = textgenrnn('weights/recipes.hdf5')
```
I get the following error message when I try to reopen the weights:
> >>>
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "/home/janelle_shane/textgenrnn/textgenrnn/textgenrnn.py", line 65, in __init__
> weights_path=weights_path)
> File "/home/janelle_shane/textgenrnn/textgenrnn/model.py", line 38, in textgenrnn_model
> model.load_weights(weights_path, by_name=True)
> File "/home/janelle_shane/.pyenv/versions/3.5.5/lib/python3.5/site-packages/keras/engine/network.py", line 1177, in load_weights
> reshape=reshape)
> File "/home/janelle_shane/.pyenv/versions/3.5.5/lib/python3.5/site-packages/keras/engine/saving.py", line 1018, in load_weights_from_hdf5_group_by_name
> str(weight_values[i].shape) + '.')
> ValueError: Layer #1 (named "embedding"), weight <tf.Variable 'embedding_8/embeddings:0' shape=(465, 100) dtype=float32_ref> has shape (465, 100), but the saved weight has shape (106, 100).
>
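(For context on the shape mismatch above: if I remember the README correctly, newly trained large-text models are reloaded by also passing the saved vocab and config files; the file names below are just my guess:)

```python
# Hedged sketch, not verified: reload using the vocab/config files that
# train_from_largetext_file(new_model=True) writes alongside the weights.
from textgenrnn import textgenrnn

recipes2 = textgenrnn(weights_path='weights/recipes.hdf5',
                      vocab_path='weights/recipes_vocab.json',    # assumed file name
                      config_path='weights/recipes_config.json')  # assumed file name
```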
The save/load combination at the top works for models trained with regular train_from_file, so I wonder if there's some problem with train_from_largetext_file? | open | 2018-07-29T02:45:02Z | 2018-10-15T00:41:21Z | https://github.com/minimaxir/textgenrnn/issues/56 | [] | janelleshane | 8 |
Farama-Foundation/Gymnasium | api | 882 | [Bug Report] save_video doesn't work | ### Describe the bug
I tried using the snippet for saving an episode given [here](https://gymnasium.farama.org/api/utils/#gymnasium.utils.save_video.save_video). I am running a Jupyter notebook on a server.
The above gives the following error:
```
{
"name": "TypeError",
"message": "must be real number, not NoneType",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[20], line 22
18 reward_eps += reward
20 # frames = env.render()
---> 22 save_video(env.render(), \"videos\",
23 fps=env.metadata['render_fps'],
24 step_starting_index=0,
25 episode_index=0)
27 env.close()
File ~/miniconda3/envs/vizdoom/lib/python3.9/site-packages/gymnasium/utils/save_video.py:97, in save_video(frames, video_folder, episode_trigger, step_trigger, video_length, name_prefix, episode_index, step_starting_index, **kwargs)
95 if episode_trigger is not None and episode_trigger(episode_index):
96 clip = ImageSequenceClip(frames[:video_length], **kwargs)
---> 97 clip.write_videofile(f\"{path_prefix}-episode-{episode_index}.mp4\")
99 if step_trigger is not None:
100 # skip the first frame since it comes from reset
101 for step_index, frame_index in enumerate(
102 range(1, len(frames)), start=step_starting_index
103 ):
File ~/miniconda3/envs/vizdoom/lib/python3.9/site-packages/decorator.py:232, in fun(*args, **kw)
230 evaldict = dict(_call_=caller, _func_=func)
231 es = ''
--> 232 for i, extra in enumerate(extras):
233 ex = '_e%d_' % i
234 evaldict[ex] = extra
File ~/miniconda3/envs/vizdoom/lib/python3.9/site-packages/moviepy/decorators.py:54, in requires_duration(f, clip, *a, **k)
52 raise ValueError(\"Attribute 'duration' not set\")
53 else:
---> 54 return f(clip, *a, **k)
File ~/miniconda3/envs/vizdoom/lib/python3.9/site-packages/decorator.py:232, in fun(*args, **kw)
230 evaldict = dict(_call_=caller, _func_=func)
231 es = ''
--> 232 for i, extra in enumerate(extras):
233 ex = '_e%d_' % i
234 evaldict[ex] = extra
File ~/miniconda3/envs/vizdoom/lib/python3.9/site-packages/moviepy/decorators.py:135, in use_clip_fps_by_default(f, clip, *a, **k)
130 new_a = [fun(arg) if (name=='fps') else arg
131 for (arg, name) in zip(a, names)]
132 new_kw = {k: fun(v) if k=='fps' else v
133 for (k,v) in k.items()}
--> 135 return f(clip, *new_a, **new_kw)
File ~/miniconda3/envs/vizdoom/lib/python3.9/site-packages/decorator.py:232, in fun(*args, **kw)
230 evaldict = dict(_call_=caller, _func_=func)
231 es = ''
--> 232 for i, extra in enumerate(extras):
233 ex = '_e%d_' % i
234 evaldict[ex] = extra
File ~/miniconda3/envs/vizdoom/lib/python3.9/site-packages/moviepy/decorators.py:22, in convert_masks_to_RGB(f, clip, *a, **k)
20 if clip.ismask:
21 clip = clip.to_RGB()
---> 22 return f(clip, *a, **k)
File ~/miniconda3/envs/vizdoom/lib/python3.9/site-packages/moviepy/video/VideoClip.py:300, in VideoClip.write_videofile(self, filename, fps, codec, bitrate, audio, audio_fps, preset, audio_nbytes, audio_codec, audio_bitrate, audio_bufsize, temp_audiofile, rewrite_audio, remove_temp, write_logfile, verbose, threads, ffmpeg_params, logger)
292 if make_audio:
293 self.audio.write_audiofile(audiofile, audio_fps,
294 audio_nbytes, audio_bufsize,
295 audio_codec, bitrate=audio_bitrate,
296 write_logfile=write_logfile,
297 verbose=verbose,
298 logger=logger)
--> 300 ffmpeg_write_video(self, filename, fps, codec,
301 bitrate=bitrate,
302 preset=preset,
303 write_logfile=write_logfile,
304 audiofile=audiofile,
305 verbose=verbose, threads=threads,
306 ffmpeg_params=ffmpeg_params,
307 logger=logger)
309 if remove_temp and make_audio:
310 if os.path.exists(audiofile):
File ~/miniconda3/envs/vizdoom/lib/python3.9/site-packages/moviepy/video/io/ffmpeg_writer.py:213, in ffmpeg_write_video(clip, filename, fps, codec, bitrate, preset, withmask, write_logfile, audiofile, verbose, threads, ffmpeg_params, logger)
211 logfile = None
212 logger(message='Moviepy - Writing video %s\
' % filename)
--> 213 with FFMPEG_VideoWriter(filename, clip.size, fps, codec = codec,
214 preset=preset, bitrate=bitrate, logfile=logfile,
215 audiofile=audiofile, threads=threads,
216 ffmpeg_params=ffmpeg_params) as writer:
218 nframes = int(clip.duration*fps)
220 for t,frame in clip.iter_frames(logger=logger, with_times=True,
221 fps=fps, dtype=\"uint8\"):
File ~/miniconda3/envs/vizdoom/lib/python3.9/site-packages/moviepy/video/io/ffmpeg_writer.py:88, in FFMPEG_VideoWriter.__init__(self, filename, size, fps, codec, audiofile, preset, bitrate, withmask, logfile, threads, ffmpeg_params)
77 self.ext = self.filename.split(\".\")[-1]
79 # order is important
80 cmd = [
81 get_setting(\"FFMPEG_BINARY\"),
82 '-y',
83 '-loglevel', 'error' if logfile == sp.PIPE else 'info',
84 '-f', 'rawvideo',
85 '-vcodec', 'rawvideo',
86 '-s', '%dx%d' % (size[0], size[1]),
87 '-pix_fmt', 'rgba' if withmask else 'rgb24',
---> 88 '-r', '%.02f' % fps,
89 '-an', '-i', '-'
90 ]
91 if audiofile is not None:
92 cmd.extend([
93 '-i', audiofile,
94 '-acodec', 'copy'
95 ])
TypeError: must be real number, not NoneType"
}
```
Can someone point out the problem? When I manually save the frames from env.render(), it works fine.
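(Roughly what I mean by "manually saving"; imageio is just an illustrative choice on my side, not something save_video uses, and it needs the imageio-ffmpeg plugin for mp4 output:)

```python
# Manual fallback that works for me (sketch; assumes imageio + imageio-ffmpeg are installed).
import gymnasium as gym
import imageio

env = gym.make("CartPole-v1", render_mode="rgb_array_list")
env.reset()
terminated = truncated = False
while not (terminated or truncated):
    _, _, terminated, truncated, _ = env.step(env.action_space.sample())

frames = env.render()  # list of RGB frames collected for the episode
imageio.mimsave("episode.mp4", frames, fps=env.metadata.get("render_fps", 30))
env.close()
```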
### Code example
```python
import gymnasium as gym
from gymnasium.utils.save_video import save_video

terminated = False
truncated = False
reward_eps = 0.0
env = gym.make('CartPole-v1', render_mode='rgb_array_list')
env.reset()
while not (terminated or truncated):
action = env.action_space.sample()
observation, reward, terminated, truncated, info = env.step(action)
reward_eps += reward
save_video(env.render(), "videos",
fps=env.metadata['render_fps'],
step_starting_index=0,
episode_index=0)
env.close()
```
### System info
1. Gymnasium was installed using pip
2. gymnasium.__version__ == 0.29.1
3. Run on a server (Linux)
4. Python version == 3.9.18
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2024-01-19T19:01:01Z | 2024-01-25T19:27:06Z | https://github.com/Farama-Foundation/Gymnasium/issues/882 | [
"bug"
] | Acejoy | 4 |
Anjok07/ultimatevocalremovergui | pytorch | 1,670 | Demucs: v4 htdemucs_ft - M4 GPU | Previously I had no issue with this ensemble; now it crashes every time.
Using an M4 Pro Mac Mini with GPU processing. Demucs: v4 htdemucs_ft seems to be the issue.
It works without GPU processing.
Raw Error Details:
RuntimeError: "Invalid buffer size: 17.34 GB"
Traceback Error: "
File "UVR.py", line 6584, in process_start
File "separate.py", line 826, in seperate
File "separate.py", line 971, in demix_demucs
File "demucs/apply.py", line 196, in apply_model
File "demucs/apply.py", line 222, in apply_model
File "demucs/apply.py", line 256, in apply_model
File "demucs/utils.py", line 490, in result
File "demucs/apply.py", line 271, in apply_model
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "demucs/htdemucs.py", line 593, in forward
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "demucs/transformer.py", line 667, in forward
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "demucs/transformer.py", line 365, in forward
File "torch/nn/modules/transformer.py", line 715, in _sa_block
x = self.self_attn(x, x, x,
^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/activation.py", line 1241, in forward
attn_output, attn_output_weights = F.multi_head_attention_forward(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/functional.py", line 5440, in multi_head_attention_forward
attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"
Error Time Stamp [2024-12-18 10:24:02]
Full Application Settings:
vr_model: 1_HP-UVR
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: 70
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Kim Vocal 1
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
cuda_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 5
demucs_stems: All Stems
mdx_stems: All Stems | open | 2024-12-18T10:30:48Z | 2024-12-26T11:06:04Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1670 | [] | Cerimor | 2 |
huggingface/transformers | python | 36,485 | Gemma2 (quantized) inference is broken - torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment. | ### System Info
```
- `transformers` version: 4.49.0
- Platform: Linux-5.4.0-156-generic-x86_64-with-glibc2.39
- Python version: 3.10.14
- Huggingface_hub version: 0.29.1
- Safetensors version: 0.5.3
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-40GB
```
### Who can help?
@SunMarc @MekkCyber
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following quantized example (taken from the official docs):
```
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You should get the following error:
```
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 1024, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 774, in call_method
return self.call_apply(tx, args, kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 723, in call_apply
return variables.UserFunctionVariable(fn, source=source).call_function(
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1692, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 560, in inner
raise exc.UserError(
torch._dynamo.exc.UserError: Dynamic control flow is not supported at the moment. Please use functorch.experimental.control_flow.cond to explicitly capture the control flow. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#cond-operands
from user code:
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/transformers/models/gemma2/modeling_gemma2.py", line 887, in forward
outputs = self.model(
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/transformers/models/gemma2/modeling_gemma2.py", line 667, in forward
layer_outputs = decoder_layer(
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/transformers/models/gemma2/modeling_gemma2.py", line 321, in forward
hidden_states, self_attn_weights = self.self_attn(
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/transformers/models/gemma2/modeling_gemma2.py", line 216, in forward
query_states = self.q_proj(hidden_states).view(hidden_shape).transpose(1, 2)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/bitsandbytes/nn/modules.py", line 990, in forward
out = bnb.matmul(x, self.weight, bias=self.bias, state=self.state)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py", line 509, in matmul
return MatMul8bitLt.apply(A, B, out, bias, state)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py", line 326, in forward
CA, SCA, outlier_cols = F.int8_vectorwise_quant(A.to(torch.float16), threshold=state.threshold)
File "/opt/miniconda3/envs/video-pipe-12-4/lib/python3.10/site-packages/bitsandbytes/functional.py", line 2786, in int8_vectorwise_quant
if outliers.any():
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
Please note, it does work without problems under `transformers==4.48.3`.
### Expected behavior
Text generation should work correctly. | closed | 2025-03-01T01:27:02Z | 2025-03-17T10:38:23Z | https://github.com/huggingface/transformers/issues/36485 | [
"bug"
] | royvelich | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,409 | cannot call Transaction.rollback(): the underlying connection is closed | ### Describe the bug
Using async_session with a context manager raises the error.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
sqlalchemy = "^2.0.21"
### DBAPI (i.e. the database driver)
asyncpg = "^0.28.0"
### Database Vendor and Major Version
PostgreSQL 13
### Python Version
python = "^3.10"
### Operating system
Macos ventura 13.5.1
### To Reproduce
```python
from pydantic import PostgresDsn
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession, async_sessionmaker
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine

database_url = PostgresDsn.build(
    scheme='postgresql+asyncpg',
    host="127.0.0.1",
    username="user",
    port=5432,
    password="secret",
    path="database",
)

test_db_uri = str(database_url).replace('+asyncpg', '')
testing_engine = create_engine(test_db_uri, pool_pre_ping=True)
test_async_engine = create_async_engine(
    str(database_url), pool_pre_ping=True, echo=False,
)

TestingSessionLocal = sessionmaker(
    expire_on_commit=True,
    # twophase=True,
    autoflush=False,
    autocommit=False,
    bind=testing_engine
)
test_sync_maker = sessionmaker()
AsyncTestingSessionLocal = async_sessionmaker(
    class_=AsyncSession,
    expire_on_commit=False,
    # twophase=True,
    autoflush=False,
    autocommit=False,
    bind=test_async_engine,
    future=True,
    sync_session_class=test_sync_maker
)


async def init_db(async_db: "AsyncSession"):
    # Tables should be created with Alembic migrations
    # But if you don't want to use migrations, create
    # the tables un-commenting the next line
    # Base.metadata.create_all(bind=engine)
    user = await user_repo.first(async_db, params={'email': settings.FIRST_SUPERUSER})
    if not user:
        user_in = {
            'email': settings.FIRST_SUPERUSER,
            "full_name": "Admin",
            'password': settings.FIRST_SUPERUSER_PASSWORD,
            'is_active': True,
            'is_superuser': True,
        }
        user = await user_repo.create(async_db, obj_in=user_in)  # noqa: F841
        logger.info("User successfully created")


async def async_db():
    from app.db.session import AsyncTestingSessionLocal, test_async_engine
    from app.db.init_db import init_db
    from app.db.models import metadata
    from sqlalchemy_utils import database_exists, create_database, drop_database

    database_url = test_async_engine.url.render_as_string(hide_password=False).replace('+asyncpg', '')
    if not database_exists(database_url):
        create_database(database_url)

    # connect to the database
    is_echo = test_async_engine.echo
    test_async_engine.echo = False
    async with test_async_engine.begin() as conn:
        await conn.run_sync(metadata.create_all)  # Create the tables.
    test_async_engine.echo = is_echo

    async with test_async_engine.connect() as conn:
        async with AsyncTestingSessionLocal(bind=conn) as session:
            await init_db(session)
            yield session
            # rollback - everything that happened with the
            # Session above (including calls to commit())
            # is rolled back.
            # await conn.rollback()

    # for AsyncEngine created in function scope, close and
    # clean-up pooled connections
    await test_async_engine.dispose()

    # Drop test database
    if database_exists(database_url):
        drop_database(database_url)
```
### Error
sqlalchemy.exc.InterfaceError: (sqlalchemy.dialects.postgresql.asyncpg.InterfaceError) <class 'asyncpg.exceptions._base.InterfaceError'>: cannot call Transaction.rollback(): the underlying connection is closed
### Additional context
_No response_ | closed | 2023-10-03T12:40:02Z | 2023-10-03T13:11:16Z | https://github.com/sqlalchemy/sqlalchemy/issues/10409 | [] | xykylikuf001 | 0 |
google/seq2seq | tensorflow | 188 | sys.excepthook is missing when running "bash wmt16_en_de.sh" | /nmt_data/wmt16_de_en/newstest2014.tok.clean.bpe.32000.de
/nmt_data/wmt16_de_en/newstest2015.tok.clean.bpe.32000.de
/nmt_data/wmt16_de_en/newstest2016.tok.clean.bpe.32000.de
/nmt_data/wmt16_de_en/train.tok.clean.bpe.32000.de
cut: stdin: Illegal byte sequence
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
| open | 2017-04-24T02:28:22Z | 2017-08-01T06:17:34Z | https://github.com/google/seq2seq/issues/188 | [] | SeekPoint | 0 |
miguelgrinberg/python-socketio | asyncio | 402 | simple solution for how to connect to external socketio server? | Hi,
I have built a simple Flask-SocketIO server, a JavaScript Socket.IO client, and a Python Socket.IO client; running everything on localhost:5000 works. I deployed the Flask server and JavaScript client to a remote Heroku server and those two work fine, but I cannot figure out how to connect to the remote server from my local Python Socket.IO client. The Socket.IO connection attempted with
`sio.connect('https://[myapp].heroku.com/')`
is refused by the server. I have a feeling I need to either do additional config on the server or alter the python socketio client code...but can't find any clear directions. (for ex: code for the javascript socketio client had to be changed from
`var socket = io.connect('http://127.0.0.1:5000');`
to
`socket = io.connect('https://' + document.domain + ':' + location.port);`
to make it work on Heroku). Anyone have suggestion? | closed | 2019-12-29T22:08:09Z | 2019-12-30T00:35:00Z | https://github.com/miguelgrinberg/python-socketio/issues/402 | [] | JMA6971 | 1 |
aminalaee/sqladmin | sqlalchemy | 447 | Not working with the latest fastapi 0.94.1 release | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Hello,
I am using sqladmin with my project, the source of which you can find here: https://github.com/mavroprovato/fuelpricesgr
When I try to update to fastapi version 0.93.0 or later, the admin crashes. With previous versions it works. I have attached the stack trace.
### Steps to reproduce the bug
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Debugging material
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 436, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/fastapi/applications.py", line 276, in __call__
await super().__call__(scope, receive, send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/middleware/cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 443, in handle
await self.app(scope, receive, send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/middleware/sessions.py", line 86, in __call__
await self.app(scope, receive, send_wrapper)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/sqladmin/authentication.py", line 56, in wrapper_decorator
return await func(*args, **kwargs)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/sqladmin/application.py", line 367, in list
return self.templates.TemplateResponse(model_view.list_template, context)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/templating.py", line 113, in TemplateResponse
return _TemplateResponse(
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/starlette/templating.py", line 39, in __init__
content = template.render(context)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/sqladmin/templates/list.html", line 1, in top-level template code
{% extends "layout.html" %}
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/sqladmin/templates/layout.html", line 1, in top-level template code
{% extends "base.html" %}
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/sqladmin/templates/base.html", line 17, in top-level template code
{% block body %}
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/sqladmin/templates/layout.html", line 65, in block 'body'
{% block content %} {% endblock %}
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/sqladmin/templates/list.html", line 102, in block 'content'
<a href="#" data-name="{{ model_view.name }}" data-pk="{{ model_view.get_prop_value(row, model_view.pk_column) }}" data-url="{{ model_view._url_for_delete(request, row) }}" data-bs-toggle="modal" data-bs-target="#modal-delete" title="Delete">
File "/home/kostas/.cache/pypoetry/virtualenvs/fuelpricesgr-BkUJlFTl-py3.10/lib/python3.10/site-packages/sqladmin/models.py", line 758, in _url_for_delete
return url + "?" + query_params
TypeError: unsupported operand type(s) for +: 'URL' and 'str'
### Environment
OS: Ubuntu 22.04.2
Python : 3.10
SQLAdmin: 0.9.0
### Additional context
_No response_ | closed | 2023-03-14T19:22:14Z | 2023-03-15T09:42:52Z | https://github.com/aminalaee/sqladmin/issues/447 | [] | mavroprovato | 3 |
onnx/onnx | pytorch | 6,103 | Spec for ReduceSumSquare is incorrect when noop_with_empty_axes == 1 | # Bug Report
### Is the issue related to model conversion?
<!-- If the ONNX checker reports issues with this model then this is most probably related to the converter used to convert the original framework model to ONNX. Please create this bug in the appropriate converter's GitHub repo (pytorch, tensorflow-onnx, sklearn-onnx, keras-onnx, onnxmltools) to get the best help. -->
No
### Describe the bug
When noop_with_empty_axes == 1 and axes is empty, the ONNX spec says the input tensor is returned directly.
But the ONNX reference implementation does not match: it returns np.square of the input tensor.

### System information
<!--
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*):
- ONNX version (*e.g. 1.13*):
- Python version:
- GCC/Compiler version (if compiling from source):
- CMake version:
- Protobuf version:
- Visual Studio version (if applicable):-->
### Reproduction instructions
<!--
- Describe the code to reproduce the behavior.
```
import onnx
model = onnx.load('model.onnx')
...
```
- Attach the ONNX model to the issue (where applicable)-->
### Expected behavior
return input tensor directly
### Notes
<!-- Any additional information -->
| open | 2024-04-29T02:43:06Z | 2025-03-21T00:18:10Z | https://github.com/onnx/onnx/issues/6103 | [
"bug",
"topic: documentation",
"topic: spec clarification",
"contributions welcome"
] | RunnerZhong | 8 |
ets-labs/python-dependency-injector | asyncio | 790 | Use of __self__ (possibly in conjunction with providers.Container) seems to cause memory leaks | Here's a (pretty sloppy) sort of minimal reproduction, together with some investigation that I've been doing:
```
import gc
import time
import weakref

import objgraph
from dependency_injector import containers, providers


class Parent(containers.DeclarativeContainer):
    __self__ = providers.Self()
    config = providers.Configuration(default={})
    a_factory = providers.Factory(lambda container: True, __self__)


class Child(containers.DeclarativeContainer):
    a_parent = providers.Container(Parent)


def test_parent_container_allocation(testing_container):
    parent = Parent()
    count = 500
    for _ in range(count):
        Child(a_parent=parent)

    time.sleep(1)
    gc.collect()

    random_container = weakref.ref(objgraph.by_type("DynamicContainer"))
    print(objgraph.show_backrefs(random_container(), max_depth=20))
    chain = objgraph.find_backref_chain(random_container, objgraph.is_proper_module)

    import ipdb
    ipdb.set_trace()

    assert len(objgraph.by_type("DynamicContainer")) < 50
``` | open | 2024-03-02T18:22:54Z | 2024-03-02T18:22:54Z | https://github.com/ets-labs/python-dependency-injector/issues/790 | [] | colonelpanic8 | 0 |
albumentations-team/albumentations | deep-learning | 1,830 | [Tech Debt, Speedup] Move to_float to albucore | closed | 2024-07-04T01:43:29Z | 2024-09-12T02:23:30Z | https://github.com/albumentations-team/albumentations/issues/1830 | [
"Speed Improvements",
"Tech debt"
] | ternaus | 0 |
|
JaidedAI/EasyOCR | machine-learning | 1,013 | question about these two parameters' difference: text_threshold and low_text | I have a question regarding the parameters of the readtext method for Text Detection. There are two parameters, **text_threshold** and **low_text**, described as follows:
text_threshold (float, default = 0.7): Text confidence threshold.
low_text (float, default = 0.4): Text low-bound score.
My intuition is that the value of text_threshold should be higher than the value of low_text. Since the confidence scores of the text regions to be saved are already higher than text_threshold, they should certainly be higher than low_text as well. I'm not quite clear on the distinction between these two parameters. Thanks for your answer.
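For reference, a minimal sketch of how the two thresholds are passed to `readtext` (the image path and language list are placeholders; the comments only restate the documented descriptions above):

```python
import easyocr

reader = easyocr.Reader(['en'])  # placeholder language list
results = reader.readtext(
    'sample.png',        # placeholder image path
    text_threshold=0.7,  # text confidence threshold
    low_text=0.4,        # text low-bound score
)
print(results)
```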
| open | 2023-05-10T06:46:18Z | 2024-12-18T05:13:08Z | https://github.com/JaidedAI/EasyOCR/issues/1013 | [] | JeremyGe07 | 4 |
itamarst/eliot | numpy | 471 | Let `Action.continue_task` to take `action_type` and `fields`. | I've been experimenting with capturing logs from a eliot-using service in integration tests, and want to have a more descriptive action than `eliot:remote_task` as the root action of the service. | closed | 2021-07-01T15:46:18Z | 2021-09-19T20:55:57Z | https://github.com/itamarst/eliot/issues/471 | [
"API enhancement"
] | tomprince | 1 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 565 | [Feature request] Add a 2K/4K video quality download option | #529 I saw a related issue where someone has already posted modified code for this; I hope that code can be integrated into the new Docker release versions. Many thanks.
| open | 2025-02-26T08:44:15Z | 2025-02-26T08:44:15Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/565 | [
"enhancement"
] | Mak0760 | 0 |
pennersr/django-allauth | django | 3,956 | self.request.session[INTERNAL_RESET_SESSION_KEY] not persisted | Enabling CSRF protection in our Django app causes the value of self.request.session[INTERNAL_RESET_SESSION_KEY] to be "lost" | closed | 2024-07-09T14:15:26Z | 2024-07-09T14:53:43Z | https://github.com/pennersr/django-allauth/issues/3956 | [] | davidu1975 | 1 |
NVIDIA/pix2pixHD | computer-vision | 167 | Can't run the test | I installed the code following the instructions, but the test doesn't work.
------------ Options -------------
aspect_ratio: 1.0
batchSize: 1
checkpoints_dir: ./checkpoints
cluster_path: features_clustered_010.npy
data_type: 32
dataroot: ./datasets/cityscapes/
display_winsize: 512
engine: None
export_onnx: None
feat_num: 3
fineSize: 512
fp16: False
gpu_ids: [0]
how_many: 50
input_nc: 3
instance_feat: False
isTrain: False
label_feat: False
label_nc: 35
loadSize: 1024
load_features: False
local_rank: 0
max_dataset_size: inf
model: pix2pixHD
nThreads: 2
n_blocks_global: 9
n_blocks_local: 3
n_clusters: 10
n_downsample_E: 4
n_downsample_global: 4
n_local_enhancers: 1
name: label2city_1024p
nef: 16
netG: local
ngf: 32
niter_fix_global: 0
no_flip: False
no_instance: False
norm: instance
ntest: inf
onnx: None
output_nc: 3
phase: test
resize_or_crop: none
results_dir: ./results/
serial_batches: False
tf_log: False
use_dropout: False
use_encoded_image: False
verbose: False
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [AlignedDataset] was created
LocalEnhancer(
(model): Sequential(
(0): ReflectionPad2d((3, 3, 3, 3))
(1): Conv2d(36, 64, kernel_size=(7, 7), stride=(1, 1))
(2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(5): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(6): ReLU(inplace=True)
(7): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(8): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(9): ReLU(inplace=True)
(10): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(11): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(12): ReLU(inplace=True)
(13): Conv2d(512, 1024, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(14): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(15): ReLU(inplace=True)
(16): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(17): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(18): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(19): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(20): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(21): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(22): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(23): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(24): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(25): ConvTranspose2d(1024, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
(26): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(27): ReLU(inplace=True)
(28): ConvTranspose2d(512, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
(29): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(30): ReLU(inplace=True)
(31): ConvTranspose2d(256, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
(32): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(33): ReLU(inplace=True)
(34): ConvTranspose2d(128, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
(35): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(36): ReLU(inplace=True)
)
(model1_1): Sequential(
(0): ReflectionPad2d((3, 3, 3, 3))
(1): Conv2d(36, 32, kernel_size=(7, 7), stride=(1, 1))
(2): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(5): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(6): ReLU(inplace=True)
)
(model1_2): Sequential(
(0): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(1): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(2): ResnetBlock(
(conv_block): Sequential(
(0): ReflectionPad2d((1, 1, 1, 1))
(1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))
(2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(3): ReLU(inplace=True)
(4): ReflectionPad2d((1, 1, 1, 1))
(5): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))
(6): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
)
)
(3): ConvTranspose2d(64, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))
(4): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
(5): ReLU(inplace=True)
(6): ReflectionPad2d((3, 3, 3, 3))
(7): Conv2d(32, 3, kernel_size=(7, 7), stride=(1, 1))
(8): Tanh()
)
(downsample): AvgPool2d(kernel_size=3, stride=2, padding=[1, 1])
)
Traceback (most recent call last):
File "test.py", line 59, in <module>
generated = model.inference(data['label'], data['inst'], data['image'])
File "/workspace/lzy/pix2pixHD/models/pix2pixHD_model.py", line 198, in inference
input_label, inst_map, real_image, _ = self.encode_input(Variable(label), Variable(inst), image, infer=True)
File "/workspace/lzy/pix2pixHD/models/pix2pixHD_model.py", line 126, in encode_input
edge_map = self.get_edges(inst_map)
File "/workspace/lzy/pix2pixHD/models/pix2pixHD_model.py", line 264, in get_edges
edge[:,:,:,1:] = edge[:,:,:,1:] | (t[:,:,:,1:] != t[:,:,:,:-1])
RuntimeError: Expected object of scalar type Byte but got scalar type Bool for argument #2 'other' in call to _th_or
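A possible direction for this dtype mismatch on newer PyTorch releases — offered only as an unverified sketch, not as the repository's official fix — is to cast the boolean comparison back to the Byte dtype of `edge`. A standalone illustration of the failing pattern and the cast:

```python
import torch

t = torch.randint(0, 3, (1, 1, 4, 4))
edge = torch.zeros(t.size(), dtype=torch.uint8)  # Byte tensor, as the error message implies

# Pattern from the traceback: on newer PyTorch the comparison yields a Bool tensor,
# so casting it with .byte() (or making `edge` Bool instead) avoids the dtype error.
edge[:, :, :, 1:] = edge[:, :, :, 1:] | (t[:, :, :, 1:] != t[:, :, :, :-1]).byte()
print(edge)
```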
| closed | 2019-12-03T08:22:52Z | 2020-03-21T16:05:12Z | https://github.com/NVIDIA/pix2pixHD/issues/167 | [] | LizzzMax | 2 |
huggingface/datasets | pytorch | 6,887 | FAISS load to None | ### Describe the bug
I've used FAISS with Datasets and saved the FAISS index.
Then I load the saved FAISS index: there is no error, but then `ds` is None.
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
### Steps to reproduce the bug
# 1.
```python
ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transforms(example['image']).unsqueeze(0)).squeeze()}, batch_size=64)
ds_with_embeddings.add_faiss_index(column='embeddings')
ds_with_embeddings.save_faiss_index('embeddings', 'index.faiss')
```
# 2.
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
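If the None comes from re-assigning the return value (just a guess from the description), note that in a usage sketch like the one below `load_faiss_index` is called only for its side effect on the dataset (the query vector is a placeholder):

```python
import numpy as np

# Keep using the original `ds` reference; do not assign the return value.
ds.load_faiss_index('embeddings', 'my_index.faiss')

query_embedding = np.random.rand(512).astype("float32")  # placeholder query vector
scores, examples = ds.get_nearest_examples('embeddings', query_embedding, k=5)
```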
### Expected behavior
Add column in Datasets.
### Environment info
Google Colab, SageMaker Notebook | open | 2024-05-09T02:43:50Z | 2024-05-16T20:44:23Z | https://github.com/huggingface/datasets/issues/6887 | [] | brainer3220 | 1 |
quantmind/pulsar | asyncio | 184 | Release app bug | ProcessName setting is not set and causes a failure on Linux when the actor tries to set the process name.
| closed | 2015-12-10T16:51:41Z | 2016-01-13T11:23:30Z | https://github.com/quantmind/pulsar/issues/184 | [
"bug"
] | swlab2 | 0 |
nvbn/thefuck | python | 932 | How can I debug thefuck using IDE? | <!-- If you have any issue with The Fuck, sorry about that, but we will do what we
can to fix that. Actually, maybe we already have, so first thing to do is to
update The Fuck and see if the bug is still there. -->
<!-- If it is (sorry again), check if the problem has not already been reported and
if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with
the following basic information: -->
The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
The Fuck 3.29 using Python 3.7.4 and ZSH 5.3
Your system (Debian 7, ArchLinux, Windows, etc.):
OSX Mojave, 10.14.5 (18F203)
How to reproduce the bug:
Not a bug, below sections are deleted for that reason.
---
Hi, I was surprised that golang hasn't been added to _thefuck_ so decided to add it and open a PR ny myself.
As an introductory step I'm planning to basically copypaste another ruleset&testcase and start from there. However, I can't figure out how to inspect a variable. I often times put a breakpoint, pause the program, inspect the variable and its methods to get used to source code.
I finally found a way: `logger.debug(command)` and `thefuck --debug` to read the console log, but that doesn't tell me call stacks, the method I can use with `command`, etc.
So, is there a way to debug _thefuck_ using IDE or, at least, manually call _thefuck_ from cli? | closed | 2019-07-18T04:14:48Z | 2019-07-19T04:34:02Z | https://github.com/nvbn/thefuck/issues/932 | [] | ik1ne | 2 |
vaexio/vaex | data-science | 1,279 | [BUG-REPORT] import vaex raises an error when unable to write to '~/.vaex' | I am running an application on an hadoop cluster with YARN and shipping a conda environment to it.
For some reason (not clear to me yet), when [this](https://github.com/vaexio/vaex/blob/master/packages/vaex-core/vaex/utils.py#L203) is executed, `~/.vaex` is expanded to `/home/.vaex` and the app fails because the user doesn't have write permissions on `/home`.
Matplotlib solves this same issue by creating a [temporary folder](https://github.com/matplotlib/matplotlib/blob/d5d2b2a6caf75fc2f884c1bcde3c456db10229ec/lib/matplotlib/__init__.py#L428) and logging a warning.
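A rough sketch of what that matplotlib-style fallback could look like here (the function name and prefix are illustrative only, not vaex's actual code):

```python
import os
import tempfile
import warnings


def resolve_private_dir(path="~/.vaex"):
    target = os.path.expanduser(path)
    try:
        os.makedirs(target, exist_ok=True)
        return target
    except OSError:
        # Fall back to a writable temporary directory and warn,
        # instead of failing at import time.
        fallback = tempfile.mkdtemp(prefix="vaex-")
        warnings.warn(f"{target} is not writable; using {fallback} instead")
        return fallback
```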
**Software information**
- Vaex version (`import vaex; vaex.__version__)`:
```
{'vaex-core': '4.1.0',
'vaex-viz': '0.5.0',
'vaex-hdf5': '0.7.0',
'vaex-server': '0.4.0',
'vaex-astro': '0.8.0',
'vaex-jupyter': '0.6.0',
'vaex-ml': '0.11.1'}
```
- Vaex was installed via: conda-forge
- OS: Linux CentOS 7
| closed | 2021-03-22T17:37:19Z | 2021-03-31T07:14:43Z | https://github.com/vaexio/vaex/issues/1279 | [] | dzanaga | 2 |
sammchardy/python-binance | api | 821 | function get_sub_account_assets error unexpected keyword argument 'version' | I have simple code requesting a user's sub-account assets using the function get_sub_account_assets, and I am getting the error "unexpected keyword argument 'version'".
Here is the code:
```python
from binance.client import Client


def fetch_data1(credentials=None):
    credentials = credentials if credentials else \
        {
            'apiKey': '111',
            'secret': '111',
        }
    client = Client(credentials["apiKey"], credentials["secret"])
    datas = []
    try:
        balance = client.get_sub_account_assets(email='jose.xmargin@gmail.com')
        print(balance)
    except Exception as err:
        print(f"Exception occured, path /wapi/v1/sub-account/list: {err}")
    return datas


def main():
    """Main function
    """
    datas = fetch_data1()
    print(datas)


if __name__ == "__main__":
    main()
``` | open | 2021-05-05T13:02:46Z | 2021-08-18T12:12:37Z | https://github.com/sammchardy/python-binance/issues/821 | [] | israel-gonzalezmedina | 3 |
strawberry-graphql/strawberry-django | graphql | 389 | Optimiser not working with custom prefetch query | The strawberry django optimiser is not working as expected when we provide hints (like custom prefetch) for a field in a type.
It results in n+1 queries when we do that
## Describe the Bug
Suppose I have two models, Model and ModelVariable; ModelVariable has a foreign key reference to Model. So, in Django, we can refer to ModelVariable from Model using the `related_name`.
Consider my models to be something like this:
```
class Model(models.Model):
    id = models.UUIDField(primary_key=True)
    name = models.CharField(max_length=255)


class ModelVariable(models.Model):
    id = models.UUIDField(primary_key=True)
    value = models.CharField(max_length=255)
    related_model = models.ForeignKey(
        "Model",
        on_delete=models.CASCADE,
        related_name="model_variables"
    )
    related_field = models.ForeignKey(
        "RelatedField",
        on_delete=models.CASCADE,
        related_name="model_variables"
    )


class RelatedField(models.Model):
    id = models.UUIDField(primary_key=True)
    is_editable = models.BooleanField(default=False)
```
Since `ModelVariable` has a foreign key to `Model`, I can get all the `ModelVariable` objects for a `Model` using `Model.model_variables.all()`.
Now, for the above models I created strawberry django types in the following way
```
@strawberry_django.type(Model)
class ModelType(Node):
    id: NodeID

    @strawberry_django.field(prefetch_related=[
        Prefetch(
            "model_variables",
            queryset=ModelVariable.objects.filter(
                related_field__is_editable=True
            ),
            to_attr="editable_model_variables"
        )
    ])
    def editable_model_variables(self) -> list["ModelVariableType"]:
        return self.editable_model_variables


@strawberry_django.type(ModelVariable)
class ModelVariableType(Node):
    id: NodeID
    value: auto
    related_model: "ModelType"
    related_field: "RelatedFieldType"


@strawberry_django.type(RelatedField)
class RelatedFieldType(Node):
    id: NodeID
    is_editable: auto
```
Now, in my ModelType, I want the `editable_model_variables` field to return all ModelVariables that are editable, so I added a prefetch_related hint for it and updated the query, which works.
However, when I execute a query that also selects `related_field`, it causes n+1 queries. The obvious reason is that in my prefetch queryset I did not select_related the `related_field`.
AFAIK the optimiser should take these hints into account rather than simply overriding its optimised queries.
I also tried passing `field_name="model_variables"` to the `@strawberry_django.field` decorator for hinting, but that didn't work either.
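For reference, the manual workaround I would expect to avoid the extra queries — a sketch based on the models above, not a description of how the optimizer itself behaves — is to select the related field inside the prefetch queryset:

```python
from django.db.models import Prefetch

# Hypothetical manual hint: with select_related on the prefetch queryset,
# resolving `related_field` on each prefetched variable no longer hits the DB.
editable_variables_prefetch = Prefetch(
    "model_variables",
    queryset=ModelVariable.objects.filter(
        related_field__is_editable=True
    ).select_related("related_field"),
    to_attr="editable_model_variables",
)
```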
## System Information
- Operating system: Ubuntu 22.04
- Strawberry version (if applicable):
- strawberry-graphql-django==0.20.0
- strawberry-graphql==0.209.5 | closed | 2023-10-11T10:06:24Z | 2025-03-20T15:57:21Z | https://github.com/strawberry-graphql/strawberry-django/issues/389 | [
"enhancement"
] | AlphaRishi1229 | 6 |
ultralytics/ultralytics | computer-vision | 19,137 | How to set the values of cls and box according to my needs at the beginning of training | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
My application requires more accuracy in classification and less in box regression. How do I set the values of box and cls at the beginning of training? I see that the default values in the documentation are 7.5 and 0.5, and I don't understand whether there is a proportionality between them or what the range of their values is.
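For context, a minimal sketch of where these loss gains are typically passed as training arguments (the checkpoint and dataset names are placeholders, and the exact values are only an example):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder checkpoint
# box and cls are relative loss-gain weights; raising cls relative to box
# biases training toward classification accuracy.
model.train(data="coco8.yaml", epochs=100, box=7.5, cls=1.5)
```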
### Additional
_No response_ | open | 2025-02-08T13:23:38Z | 2025-02-13T22:02:03Z | https://github.com/ultralytics/ultralytics/issues/19137 | [
"question",
"detect"
] | blood7ao | 5 |
Sanster/IOPaint | pytorch | 359 | [Feature Request]support SD-XL |
is there any plan to support SD-XL? | open | 2023-08-18T22:42:12Z | 2025-03-15T02:01:47Z | https://github.com/Sanster/IOPaint/issues/359 | [] | dsp6414 | 5 |
gee-community/geemap | streamlit | 487 | ee_to_geojson export file that is not compliant with __geo_interface__ | ### Description
If the `ee_to_geojson` function is used as an export function, the resulting file gets a strange structure.
Example using the following asset: 'users/bornToBeAlive/aoi_sandan'
### What I Did
```
import ee
import geemap
ee.Initialize()
featurecol = ee.FeatureCollection('users/bornToBeAlive/aoi_sandan')
geemap.ee_to_geojson(featurecol, 'test.geojson')
```
The output of the function is perfect but in the file there is a duplication of the 'featurecollection' key.
PR coming in a minute
| closed | 2021-05-21T14:15:28Z | 2021-05-21T14:33:15Z | https://github.com/gee-community/geemap/issues/487 | [
"bug"
] | 12rambau | 0 |
roboflow/supervision | computer-vision | 1,376 | mAP for small, medium and large objects | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
I saw an issue online where a user requested calculation of MeanAveragePrecision for small, medium and large objects for HBB and OBB detection models. I thought it might be a good idea to add this to the supervision library.
We can discuss this issue and a potential approach for how to achieve it.
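One possible direction, shown as a generic sketch using COCO-style area thresholds (plain NumPy on purpose, not tied to supervision's API):

```python
import numpy as np

# COCO convention: small < 32**2, medium < 96**2, large >= 96**2 (areas in pixels^2).
AREA_RANGES = {
    "small": (0, 32 ** 2),
    "medium": (32 ** 2, 96 ** 2),
    "large": (96 ** 2, float("inf")),
}


def size_bucket(boxes_xyxy: np.ndarray) -> np.ndarray:
    """Return the COCO-style size bucket name for each xyxy box."""
    areas = (boxes_xyxy[:, 2] - boxes_xyxy[:, 0]) * (boxes_xyxy[:, 3] - boxes_xyxy[:, 1])
    buckets = np.empty(len(areas), dtype=object)
    for name, (lo, hi) in AREA_RANGES.items():
        buckets[(areas >= lo) & (areas < hi)] = name
    return buckets


print(size_bucket(np.array([[0, 0, 10, 10], [0, 0, 200, 200]])))  # ['small' 'large']
```

Detections and ground truth could then be filtered per bucket before computing mAP for each size range separately.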
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-07-17T15:15:41Z | 2024-07-17T16:16:32Z | https://github.com/roboflow/supervision/issues/1376 | [
"enhancement"
] | Bhavay-2001 | 2 |
Lightning-AI/pytorch-lightning | machine-learning | 20,622 | fabric.save docs/examples are wrong for latest version | ### 📚 Documentation
All docs seem to mention that fabric.save should be called with
`fabric.save(model.to_state_dict(), checkpoint_path)`, but it seems that in the latest version the parameters have been switched and checkpoint_path needs to be passed first.
example doc: https://lightning.ai/docs/pytorch/LTS/fabric/api/fabric_methods.html
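For reference, a minimal sketch of the newer argument order being described (treat the exact signature as something to verify against the current docs rather than as authoritative):

```python
import torch
from lightning.fabric import Fabric

fabric = Fabric(accelerator="cpu")
model = torch.nn.Linear(4, 2)

state = {"model": model.state_dict()}
# Reported newer order: checkpoint path first, then the state to save.
fabric.save("checkpoint.ckpt", state)
```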
cc @lantiga @borda | open | 2025-03-06T07:54:40Z | 2025-03-06T07:55:01Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20622 | [
"docs",
"needs triage"
] | ppisljar | 0 |
mwaskom/seaborn | pandas | 2,768 | Rotated x tick labels cropped in saved PNG | Consider the following minimal working example
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_excel('https://figshare.com/ndownloader/files/24413336')
df.rename(columns=lambda column_name: column_name.replace('\n', ''), inplace=True)
guangdong_df = df.loc[df['Province_name'] == 'Guangdong']
ax = sns.countplot(
    data=guangdong_df.sort_values(by=['City_name']),
    x="City_name"
)
ax.set(
    xlabel='City',
    ylabel='Number of Studies'
)
ax.set_xticklabels(
    ax.get_xticklabels(),
    rotation=90
)
plt.yticks([0, 5, 10, 15, 20], [0, 5, 10, 15, 20])
ax.figure.savefig("mosquito-cities.png")
```
# Actual Results

# Expected Results

# Environment Information
- Seaborn version 0.11.2
- matplotlib version 3.4.3 | closed | 2022-03-22T02:46:10Z | 2022-03-22T10:33:34Z | https://github.com/mwaskom/seaborn/issues/2768 | [] | rgaiacs | 2 |
graphistry/pygraphistry | pandas | 639 | [BUG] user reports scale of scene settings does not match the UI ex: .5 in settings changes to 50 in UI | **Describe the bug**
- point_size and edge_size - range from 0-100 but in the scene settings, it's required to pass in 0.0 to 1.0 - user requests that these numbers align
- edge_curvature, edge_opacity and point_opacity are percentages in the UI, so these seem to make sense to enter as 0.0 to 1.0
from user:
>Mismatch in names on UI and attributes in Graphistry API. Also, mismatch in the scale of the UI input and what is available on the API. Either update documentation or make the UI and API like-for-like
somewhat related: https://github.com/graphistry/pygraphistry/issues/633
**To Reproduce**
```python
import graphistry
import pandas as pd
#graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
df = pd.read_csv('https://raw.githubusercontent.com/graphistry/pygraphistry/master/demos/data/honeypot.csv')
g = graphistry.edges(df, 'attackerIP', 'victimIP')
g2 = g.scene_settings(
    point_size=0.2,
    edge_curvature=0.3,
    edge_opacity=0.4,
    point_opacity=0.5
)
g2.plot()
```
**Screenshots**

| open | 2025-01-15T19:54:39Z | 2025-01-15T19:54:39Z | https://github.com/graphistry/pygraphistry/issues/639 | [
"bug"
] | DataBoyTX | 0 |
vitalik/django-ninja | django | 799 | ninja could work like the Django Rest Framework (DRF) to automatically capture error types | One optimization suggestion: ninja could work like the Django Rest Framework (DRF) to automatically capture error types and hints and return them through the interface. This would help users of the interface to quickly locate errors.
| closed | 2023-07-20T03:01:16Z | 2023-07-20T07:02:38Z | https://github.com/vitalik/django-ninja/issues/799 | [] | MaxwellEdisons | 1 |
strawberry-graphql/strawberry | fastapi | 2,992 | Strawberry must provide server side ping messages | <!-- Provide a general summary of the bug in the title above. -->
Server side ping messages are necessary to keep the websocket connection open on all types of platforms.
The particular platform I'm working with is react-native on Android
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
React-Native on Android Websockets close due to no server side PING messages within 8-10 seconds.
You can follow the crux of the discussion here: https://discord.com/channels/689806334337482765/1134350180653740065
I have verified the issue with the author of `graphql-ws` repo.
<!-- A clear and concise description of what the bug is. -->
## System Information
- Operating system:
- Strawberry version (if applicable):
## Additional Context
I would recommend sending PINGs from strawberry every 6 seconds to account for most types of client websocket timeouts.
Probably would require changes to handlers.py within these lines:
```py
async def handle_connection_init(self, message: ConnectionInitMessage) -> None:
    if self.connection_timed_out:
        # No way to reliably excercise this case during testing
        return  # pragma: no cover

    if self.connection_init_timeout_task:
        self.connection_init_timeout_task.cancel()

    if message.payload is not UNSET and not isinstance(message.payload, dict):
        await self.close(code=4400, reason="Invalid connection init payload")
        return

    self.connection_params = message.payload

    if self.connection_init_received:
        reason = "Too many initialisation requests"
        await self.close(code=4429, reason=reason)
        return

    self.connection_init_received = True
    await self.send_message(ConnectionAckMessage())
    self.connection_acknowledged = True


async def handle_ping(self, message: PingMessage) -> None:
    await self.send_message(PongMessage())


async def handle_pong(self, message: PongMessage) -> None:
    pass
```
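To make the suggestion concrete, a purely illustrative keep-alive sketch (not an actual patch; `PingMessage`, `send_message` and `connection_acknowledged` refer to the handler code quoted above, and the 6-second interval is the recommendation from this report):

```python
import asyncio

KEEP_ALIVE_INTERVAL = 6  # seconds, per the recommendation above


async def keep_alive(handler) -> None:
    # Hypothetical task started once the connection is acknowledged:
    # periodically emit a PingMessage so intermediaries and clients
    # (e.g. react-native on Android) don't drop an idle websocket.
    while handler.connection_acknowledged:
        await handler.send_message(PingMessage())
        await asyncio.sleep(KEEP_ALIVE_INTERVAL)
```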
<!-- Add any other relevant information about the problem here. --> | open | 2023-07-30T09:20:59Z | 2025-03-20T15:56:19Z | https://github.com/strawberry-graphql/strawberry/issues/2992 | [
"bug"
] | XChikuX | 6 |
tfranzel/drf-spectacular | rest-api | 528 | raw schema dict renders incorrect response schema | **Describe the bug**
According to these [API docs](https://github.com/tfranzel/drf-spectacular/blob/master/drf_spectacular/utils.py#L223) I should be able to pass in a raw schema dict. I tried a few formats as I couldn't find any examples.
However the swagger ui always interprets the response dict as a string.
**To Reproduce**
```python
class APIDoTheThing(APIView):
    @extend_schema(responses={201: {'test': 'value'}},
                   methods=['POST'])
    def post(self, request, external_id, format=None):
        return Response(
            {'sender': 'blbla',
             'receiver': 'sdfdas'},
            status=201)
```
**Expected behavior**
The swagger-ui should document the response parameters as {"string": "string"}
**Actual behavior**
The swagger-ui documents the response parameters as "string"

Is there any way to explicitly define my response schema without using a serializer?
Thanks
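For reference, a plain OpenAPI-style schema object is one form that may be what the raw-dict path expects — this is an untested sketch, not confirmed drf-spectacular behaviour:

```python
from drf_spectacular.utils import extend_schema
from rest_framework.views import APIView
from rest_framework.response import Response


class APIDoTheThing(APIView):
    @extend_schema(
        methods=['POST'],
        # Assumption: a raw dict is passed through as a literal OpenAPI schema object.
        responses={201: {
            'type': 'object',
            'properties': {
                'sender': {'type': 'string'},
                'receiver': {'type': 'string'},
            },
        }},
    )
    def post(self, request, external_id, format=None):
        return Response({'sender': 'blbla', 'receiver': 'sdfdas'}, status=201)
```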
| closed | 2021-09-19T02:34:17Z | 2021-09-19T14:02:57Z | https://github.com/tfranzel/drf-spectacular/issues/528 | [] | devonhk | 4 |
ultralytics/yolov5 | deep-learning | 12,410 | ImportError when trying to implement DeepSort_YOLOv5_Pytorch | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
**Issue:**
I am currently attempting to implement DeepSort_YOLOv5_Pytorch in my project. As part of the setup process, I have cloned the yolo_v5 repository into my project directory. However, when I try to run the code, I encounter the following error:
```
from utils import TryExcept, emojis
ModuleNotFoundError: No module named 'utils'
```
Upon investigating the issue, I suspect that the problem may be related to the `utils` module within the yolo_v5 repository, as I cannot find the import statement for it.
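One hedged possibility (a guess, not a confirmed diagnosis): if the cloned yolov5 folder is not on `sys.path`, its internal `from utils import ...` statements fail, so adding the folder explicitly sometimes helps:

```python
import sys
from pathlib import Path

# Assumption: the yolov5 repository was cloned into ./yolov5 next to this script.
YOLOV5_ROOT = Path(__file__).resolve().parent / "yolov5"
if str(YOLOV5_ROOT) not in sys.path:
    sys.path.insert(0, str(YOLOV5_ROOT))  # lets `from utils import ...` resolve inside yolov5
```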
**Context:**
I have followed the setup instructions for DeepSort_YOLOv5_Pytorch, including cloning the yolo_v5 repository into my project directory. The goal is to combine YOLOv5 for object detection with DeepSort for object tracking. However, it seems that the code is unable to find the required `utils` module from the yolo_v5 repository.
**Expected Behavior:**
I expect the code to run without any ImportError related to the `utils` module, allowing me to successfully implement DeepSort_YOLOv5_Pytorch in my project.
**Steps to Reproduce:**
1. Clone the DeepSort_YOLOv5_Pytorch repository.
2. Clone the yolo_v5 repository into the project directory.
3. Run the code.
**Additional Information:**
- Project Repository: (https://github.com/HowieMa/DeepSORT_YOLOv5_Pytorch)
- Operating System: Ubuntu 22.04
- Python Version: 3.11.4
### Additional
Please let me know if there are any specific steps or configurations I should follow to resolve this issue or if there's any additional information needed to assist in troubleshooting. | closed | 2023-11-21T19:22:37Z | 2024-10-20T19:32:08Z | https://github.com/ultralytics/yolov5/issues/12410 | [
"question",
"Stale"
] | PG-9-9 | 3 |