repo_name (string, len 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, len 1-976) | body (string, len 0-254k) | state (string, 2 classes) | created_at (string, len 20) | updated_at (string, len 20) | url (string, len 38-105) | labels (sequence, len 0-9) | user_login (string, len 1-39) | comments_count (int64, 0-452)
---|---|---|---|---|---|---|---|---|---|---|---|
deezer/spleeter | tensorflow | 557 | pip install spleeter -> broken (on Archlinux) -> using virtualenv python 3.8 + poetry is fine ! | - [x] I didn't find a similar issue already open.
- [x] I read the documentation (README AND Wiki)
- [x] I have installed FFMpeg
- [x] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
Pip install of spleeter 2.1.0 is broken on Arch Linux.
It searches for tensorflow 2.3.0 -> not available.
## Steps to reproduce
1. Installed using `pip install spleeter`
2. Ran as `user` and/or `root`
3. Got `ERROR: Could not find a version that satisfies the requirement tensorflow==2.3.0 (from spleeter) (from versions: none)
ERROR: No matching distribution found for tensorflow==2.3.0 (from spleeter)` error
## Output
~ $ pip install spleeter
Defaulting to user installation because normal site-packages is not writeable
Collecting spleeter
Downloading spleeter-2.1.0-py3-none-any.whl (50 kB)
|████████████████████████████████| 50 kB 3.4 MB/s
Collecting typer<0.4.0,>=0.3.2
Downloading typer-0.3.2-py3-none-any.whl (21 kB)
Collecting ffmpeg-python==0.2.0
Downloading ffmpeg_python-0.2.0-py3-none-any.whl (25 kB)
Collecting librosa==0.8.0
Downloading librosa-0.8.0.tar.gz (183 kB)
|████████████████████████████████| 183 kB 5.8 MB/s
Collecting pandas==1.1.2
Downloading pandas-1.1.2.tar.gz (5.2 MB)
|████████████████████████████████| 5.2 MB 6.3 MB/s
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting httpx[http2]<0.17.0,>=0.16.1
Downloading httpx-0.16.1-py3-none-any.whl (65 kB)
|████████████████████████████████| 65 kB 1.5 MB/s
Collecting norbert==0.2.1
Downloading norbert-0.2.1-py2.py3-none-any.whl (11 kB)
ERROR: Could not find a version that satisfies the requirement tensorflow==2.3.0 (from spleeter) (from versions: none)
ERROR: No matching distribution found for tensorflow==2.3.0 (from spleeter)
## Environment
| Key | Value |
| ----------------- | ------------------------------- |
| OS | Linux Archlinux |
| Installation type | pip |
| RAM available | 8 GB |
| Hardware spec | Intel Celeron integrated GPU |
## Additional context
-
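The workaround from the title, as a minimal shell sketch (assuming a `python3.8` binary is available; tensorflow 2.3.0 publishes wheels for Python 3.5-3.8, not for Arch's newer default Python):
```bash
python3.8 -m venv spleeter-env
source spleeter-env/bin/activate
pip install spleeter   # tensorflow==2.3.0 now resolves inside the 3.8 venv
```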
| closed | 2021-01-12T12:32:18Z | 2021-03-13T23:46:48Z | https://github.com/deezer/spleeter/issues/557 | [
"bug",
"invalid"
] | jujudusud | 7 |
falconry/falcon | api | 2,238 | Evaluate CodSpeed for benchmarking in CI | I talked to @adriencaccia from [CodSpeed](https://codspeed.io) during EuroPython 2024, and it turns out that they productified the same Gist by @itamarst that https://github.com/falconry/falcon/pull/1825 is based upon :smiling_imp:
CodSpeed is free for open source projects, and it has even been adopted by projects like Ruff. | open | 2024-07-12T14:53:50Z | 2024-11-11T06:52:35Z | https://github.com/falconry/falcon/issues/2238 | [
"maintenance",
"perf"
] | vytas7 | 1 |
tensorflow/tensor2tensor | deep-learning | 1,106 | Is it a good idea to apply NER for translation | This may not be directly related to tensor2tensor, but I am curious to what extent NER could improve translation quality in general. Here are some examples where NER could apply.
1. Numbers, especially long digit strings, e.g. 100,000,000. To my understanding, such a number would be split into a series of tokens, each element being one digit. If it could be replaced by a special token such as _number, the length would be reduced to one.
2. Dates, for example **2 October 2018**, would also become one token if converted properly.
The benefit of doing this is to shorten sentences, yielding a simpler sentence structure. On the other hand, there are also downsides: it relies on a good NER system, and it may cause trouble when post-processing those NEs after translation into another language, e.g. retaining the order of NEs as in the source sentences.
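A toy sketch of the masking idea (my own illustration, not tensor2tensor functionality; the `_number_`/`_date_` placeholder tokens are made up):
```python
import re

def mask_entities(sentence):
    """Replace simple dates and long numbers with single placeholder tokens."""
    entities = []

    def stash(match, tag):
        entities.append(match.group(0))
        return f"_{tag}_"

    # handle dates like "2 October 2018" first, so their digits survive intact
    months = ("January|February|March|April|May|June|July|"
              "August|September|October|November|December")
    sentence = re.sub(rf"\b\d{{1,2}} (?:{months}) \d{{4}}\b",
                      lambda m: stash(m, "date"), sentence)
    # comma-grouped or long digit runs like "100,000,000"
    sentence = re.sub(r"\b\d{1,3}(?:,\d{3})+\b|\b\d{5,}\b",
                      lambda m: stash(m, "number"), sentence)
    return sentence, entities

masked, ents = mask_entities("Sales hit 100,000,000 on 2 October 2018.")
# masked == "Sales hit _number_ on _date_."
# ents   == ["2 October 2018", "100,000,000"]
```
The stashed entities can then be re-inserted after translation, which is exactly where the ordering problem mentioned above shows up.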
Any comments, suggestions? | closed | 2018-10-02T09:09:55Z | 2018-11-13T04:33:20Z | https://github.com/tensorflow/tensor2tensor/issues/1106 | [] | lkluo | 4 |
JaidedAI/EasyOCR | machine-learning | 723 | Question about using EasyOCR | Hi,
We are just wondering whether EasyOCR is free to use, or whether we can execute only 30 operations (for the Community version, according to the website: https://www.jaided.ai/price/)?
Many thanks! | closed | 2022-05-09T11:11:01Z | 2022-05-09T17:39:25Z | https://github.com/JaidedAI/EasyOCR/issues/723 | [] | Di-Ma-S21 | 3 |
mirumee/ariadne | graphql | 173 | Look into supporting OpenTracing | OpenTracing (https://opentracing.io/) looks like a great way to implement generic APM support.
OpenTracing would also allow us to track requests across multiple servers (via forwarding trace/span IDs, for example using HTTP headers). | closed | 2019-05-10T15:36:59Z | 2019-08-07T16:43:33Z | https://github.com/mirumee/ariadne/issues/173 | [] | patrys | 3 |
ScottfreeLLC/AlphaPy | pandas | 19 | AI to Metatrader 4 | Hello,
I'm interested in your application.
I want the AI for MetaTrader 4. My idea is to use Google servers and this technology from Google to run a website that clones the MetaTrader 4 library. To clone the website you have to import the code listed on this site: https://www.mql5.com/de/articles/3024?utm_source=MetaTrader+4+WebTerminal&utm_campaign=Share+WebTerminal
Another forum about this project: https://www.mql5.com/de/forum/223405
Can you do this as open source? | closed | 2018-01-17T13:12:19Z | 2018-06-23T10:53:40Z | https://github.com/ScottfreeLLC/AlphaPy/issues/19 | [] | ConsultingSecurty | 2 |
amisadmin/fastapi-amis-admin | sqlalchemy | 146 | ➜ andy pip install fastapi_amis_admin[cli] zsh: no matches found: fastapi_amis_admin[cli] | pip install fastapi_amis_admin[cli] -> zsh: no matches found: fastapi_amis_admin[cli]. Asking the experts: how should I handle this? | open | 2023-12-04T09:48:17Z | 2024-01-08T08:11:43Z | https://github.com/amisadmin/fastapi-amis-admin/issues/146 | [] | brucel11qwe | 1 |
coleifer/sqlite-web | flask | 42 | fails to display table with foreign keys | Using the latest version from pip, I have the following issue:
When I click on a table that contains a foreign key, an Internal Server Error occurs, with the following traceback:
```
[2018-04-26 10:56:55,863] ERROR in app: Exception on /expression/ [GET]
Traceback (most recent call last):
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/flask/_compat.py", line 33,in reraise
raise value
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/sqlite_web/sqlite_web.py", line 201, in inner
return fn(table, *args, **kwargs)
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/sqlite_web/sqlite_web.py", line 217, in table_structure
ds_table = dataset[table]
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/playhouse/dataset.py", line63, in __getitem__
self.update_cache(table)
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/playhouse/dataset.py", line92, in update_cache
literal_column_names=True)
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/playhouse/reflection.py", line 598, in generate_models
literal_column_names=literal_column_names)
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/playhouse/reflection.py", line 586, in introspect
related_name=related_names.get(src))
File "/storage/apps/easybuild/software/Anaconda3/5.0.1/lib/python3.6/site-packages/playhouse/reflection.py", line 120, in set_foreign_key
self.rel_model = model_names[foreign_key.dest_table]
KeyError: 'sample'
```
The SQL of the table looks like
```sql
create table sample (sample varchar(80)
, patient varchar(80)
, cohort varchar(10)
, sample_type varchar(4)
, PRIMARY KEY (sample)
, FOREIGN KEY (cohort) references cohort(cohort)
, FOREIGN KEY (sample_type) references sample_type(sample_type))
```
The SQL of the referenced tables looks like
```sql
create table cohort (cohort varchar(10)
, name text
, PRIMARY KEY (cohort));
create table sample_type (numeric_code varchar(2)
, sample_type varchar(8)
, definition text
, PRIMARY KEY (sample_type)
, UNIQUE(numeric_code))
```
| closed | 2018-04-26T11:04:01Z | 2018-05-04T13:57:08Z | https://github.com/coleifer/sqlite-web/issues/42 | [] | grst | 12 |
flairNLP/fundus | web-scraping | 204 | [Todo] Finish the migration to the new parser spec format | There are some quirks left after #201 has taken place:
- [ ] Update the documentation
- [ ] Rework/Simplify the Sitemap vs Newsmap Situation
- [ ] Explain what the Sitemap Parameters do | closed | 2023-05-13T07:38:34Z | 2023-06-27T05:39:55Z | https://github.com/flairNLP/fundus/issues/204 | [] | Weyaaron | 2 |
dadadel/pyment | numpy | 130 | Feature Request: Publish/Deploy Sphinx Documentation Website | Hi,
This is a feature request to Publish/Deploy the Sphinx Documentation Website.
GitHub Pages might be a good option for this. | open | 2023-08-01T04:32:08Z | 2023-08-01T04:32:08Z | https://github.com/dadadel/pyment/issues/130 | [] | jgarte | 0 |
predict-idlab/plotly-resampler | plotly | 83 | update example notebooks | Main points:
- [x] add `register_plotly_resampler` / `unregister_plotly_resampler`
- [x] show `register_plotly_resampler` + `pd.options.plotting.backend` = 'plotly'
- [x] show how `add_traces` can be used
- [x] show how `add_trace` and `add_traces` can be used with dict_input
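A tiny sketch covering the first two checklist items above (assuming the current public API; the `mode` argument value is from memory):
```python
import numpy as np
import pandas as pd
from plotly_resampler import register_plotly_resampler, unregister_plotly_resampler

register_plotly_resampler(mode="auto")   # new figures are wrapped automatically
pd.options.plotting.backend = "plotly"

s = pd.Series(np.random.randn(1_000_000).cumsum())
fig = s.plot()   # pandas plotting now yields a resampled figure
fig.show()

unregister_plotly_resampler()            # restore plain plotly behavior
```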
**to discuss**: maybe also add the hack that, when the trace type is not specified with a dict input, we assume a scatter**gl** will be used instead of the default scatter behavior. | closed | 2022-06-24T08:21:20Z | 2022-08-12T07:18:57Z | https://github.com/predict-idlab/plotly-resampler/issues/83 | [
"documentation",
"enhancement"
] | jonasvdd | 4 |
scikit-image/scikit-image | computer-vision | 7,053 | Address new deprecations in SciPy 1.12.0.dev0 | Our nightly test job is doing its job and flagging pending deprecation warnings: https://github.com/scikit-image/scikit-image/actions/runs/5485938623/jobs/9995372984#step:5:3983
```
FAILED segmentation/tests/test_random_walker.py::test_2d_cg[float16] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_2d_cg[float32] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_2d_cg[float64] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_2d_cg_mg[float16] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_2d_cg_mg[float32] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_2d_cg_mg[float64] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_types - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_3d[float32] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_3d[float64] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_3d_inactive - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_multispectral_2d[float32-0] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_multispectral_2d[float32-1] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_multispectral_2d[float32--1] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_multispectral_2d[float64-0] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_multispectral_2d[float64-1] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_multispectral_2d[float64--1] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_multispectral_3d[float32] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_multispectral_3d[float64] - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_spacing_0 - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_spacing_1 - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_isolated_seeds - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_isolated_area - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
FAILED segmentation/tests/test_random_walker.py::test_prob_tol - ValueError: Unexpected warning: 'scipy.sparse.linalg.cg' keyword argument 'tol' is deprecated in favor of 'rtol' and will be removed in SciPy v.1.14.0. Until then, if set, it will override 'rtol'.
```
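For reference, a hedged sketch of the usual compatibility shim (my own, not scikit-image's actual fix): pick the keyword based on the installed SciPy version.
```python
import numpy as np
import scipy
from scipy.sparse import identity
from scipy.sparse.linalg import cg

A = identity(4, format="csr")
b = np.ones(4)

# SciPy >= 1.12 deprecates cg's `tol` keyword in favor of `rtol`
new_scipy = tuple(int(p) for p in scipy.__version__.split(".")[:2]) >= (1, 12)
kwargs = {"rtol": 1e-5} if new_scipy else {"tol": 1e-5}
x, info = cg(A, b, **kwargs)
```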
Don't have time to look into it right now. | closed | 2023-07-07T14:37:40Z | 2023-07-08T15:22:28Z | https://github.com/scikit-image/scikit-image/issues/7053 | [
":wrench: type: Maintenance"
] | lagru | 0 |
Significant-Gravitas/AutoGPT | python | 8,811 | Home Page become a creator does not work | closed | 2024-11-27T13:02:54Z | 2024-12-09T16:30:51Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8811 | [
"bug",
"UI",
"platform/frontend"
] | Swiftyos | 0 |
keras-team/keras | machine-learning | 20,552 | Creating a model in a loop with model.fit() reliably leaks memory | I have tried various methods, but the memory is definitely leaking; it seems that memory release cannot keep up. The logs show periodic memory recycling, but over time there is still a clear upward trend.
Name: keras
Version: 3.6.0
Please find the [gist](https://colab.sandbox.google.com/gist/Venkat6871/30fb12f2188a7826e2c649bbd945dbda/80753_tf_2-18-0-nightly-v.ipynb) here for your reference.
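A hedged sketch of a common mitigation (my assumption: models are built inside the loop and global graph state accumulates; `build_model`, `x_train`, and `y_train` are hypothetical placeholders):
```python
import gc
import tensorflow as tf

for trial in range(10):
    model = build_model()                # hypothetical user-defined factory
    model.fit(x_train, y_train, epochs=1, verbose=0)
    del model
    tf.keras.backend.clear_session()     # drop accumulated global graph state
    gc.collect()                         # force collection of freed objects
```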
https://github.com/tensorflow/tensorflow/issues/80753#issuecomment-2503203801 | open | 2024-11-27T08:21:34Z | 2024-11-28T06:42:29Z | https://github.com/keras-team/keras/issues/20552 | [
"type:bug/performance"
] | phpYj | 1 |
CatchTheTornado/text-extract-api | api | 37 | [feat] support returning images | Marker sometimes leaves image placeholders like `` in the text - it would be great to return these images + bounding boxes as another option as well | open | 2024-11-15T11:23:31Z | 2025-01-12T17:01:55Z | https://github.com/CatchTheTornado/text-extract-api/issues/37 | [
"feature"
] | pkarw | 2 |
Kanaries/pygwalker | pandas | 448 | [BUG] Interpret Data meet Oops! | **Describe the bug**
I rendered a bar graph; when I click 'Interpret Data' I get an Oops error.
**To Reproduce**
Steps to reproduce the behavior:
1. Render a bar graph
2. Click 'Interpret Data' on a bar; an Oops error appears
**Expected behavior**
**Screenshots**


**Versions**
- pygwalker version: 0.4.6
- python version: 3.8
- browser: chrome latest
**Additional context**
| closed | 2024-02-29T08:58:48Z | 2024-03-09T03:54:33Z | https://github.com/Kanaries/pygwalker/issues/448 | [
"bug",
"P1"
] | DonYum | 2 |
ultralytics/yolov5 | machine-learning | 13,410 | what is the width_multiple? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi, always grateful to use your YOLO.
I have a question.
I modified 'yolov5.yaml' by changing "s = [0.50, 0.50, 1024]" to "s = [0.44, 0.33, 1024]".
I understand that depth_multiple controls the number of Darknet bottleneck blocks.
However, I don't fully understand what width_multiple does. I know it generally affects the convolution channels, but I want to know exactly what it controls and how.
Could you explain this in detail? Thanks.
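For context, a sketch of how the width multiplier is typically applied when YOLOv5 parses the model YAML (paraphrased from memory, not the verbatim source): each layer's output channel count is multiplied by width_multiple and rounded up to a multiple of 8.
```python
import math

def make_divisible(x, divisor=8):
    """Round a channel count up to the nearest multiple of `divisor`."""
    return math.ceil(x / divisor) * divisor

base_channels = 1024          # channel count written in the YAML
width_multiple = 0.33         # the value you set
print(make_divisible(base_channels * width_multiple))  # -> 344
```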
### Additional
_No response_ | closed | 2024-11-10T13:43:10Z | 2024-11-10T20:42:26Z | https://github.com/ultralytics/yolov5/issues/13410 | [
"question"
] | sihogu | 2 |
pyro-ppl/numpyro | numpy | 1,032 | Error in trying to run Example: MCMC Methods for Tall Data | I tried to run the code at
http://num.pyro.ai/en/stable/examples/covtype.html?highlight=GPU#sphx-glr-download-examples-covtype-py
TypeError: Argument 'None' of type '<class 'NoneType'>' is not a valid JAX type | closed | 2021-05-07T15:11:29Z | 2021-05-27T22:11:09Z | https://github.com/pyro-ppl/numpyro/issues/1032 | [
"question"
] | srijaniiserprinceton | 10 |
donnemartin/system-design-primer | python | 752 | Sdp | open | 2023-03-11T17:51:15Z | 2023-05-23T01:31:32Z | https://github.com/donnemartin/system-design-primer/issues/752 | [
"needs-review"
] | aminegui19 | 2 |
prkumar/uplink | rest-api | 120 | Add a way to override the execution order of certain class-level and method-level response handlers | **Is your feature request related to a problem? Please describe.**
Here's the original question raised by @liiight on [gitter](https://gitter.im/python-uplink/Lobby?at=5beddea66b9822140d36ce8f):
> so i have a class response handler to handle request errors, which basically just does `raise_for_status()`
> I have another response handler that I want to use in order to retry 404 status code via a retry lib I use
> I set the 2nd response handler directly on the relevant method but it seems that the 1st one is the one that actually catches the exception
> is there a way to decide on the order of those?
**Describe the solution you'd like**
There should be a way to specify that a particular method-level response handler should run before any or certain class-level handlers.
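To make the two levels concrete, here is a hedged sketch (a hypothetical consumer; `retry_on_404` stands in for the retry logic from the original question, and under the current behavior the class-level handler runs first):
```python
from uplink import Consumer, get, response_handler

def raise_for_status(response):
    response.raise_for_status()
    return response

def retry_on_404(response):
    # hypothetical: hand 404s to a retry library instead of raising
    return response

@response_handler(raise_for_status)      # class-level: currently runs first
class GitHub(Consumer):

    @response_handler(retry_on_404)      # method-level: currently runs second
    @get("users/{username}")
    def get_user(self, username):
        """Fetch a single user."""
```
The feature request is essentially a way to flip or customize that ordering per method.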
**Additional context**
Here's my response to the original question:
> currently, class-level decorators run before method-level decorators, as you noticed in your usage. #72 (v0.4.1) details some of the rationale for this. Currently, Uplink doesn't give you a way to decide on the order between class-level and method-level decorators. From what I can tell, there are two existing workarounds, but both have drawbacks. First, you could make the retry response handler a class-level decorator. If you don't want all methods to be retried, the other workaround is to apply the raise_for_status decorator on each method, but this makes things more verbose. | open | 2018-11-23T04:41:11Z | 2019-07-06T00:41:43Z | https://github.com/prkumar/uplink/issues/120 | [
"Feature Request"
] | prkumar | 1 |
opengeos/streamlit-geospatial | streamlit | 115 | https://huggingface.co/spaces/giswqs/Streamlit | Exception: module 'ipyleaflet' has no attribute 'TileLayer'
Traceback:
File "/home/user/.local/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/home/user/app/app.py", line 2, in <module>
import leafmap.foliumap as leafmap
File "/home/user/.local/lib/python3.8/site-packages/leafmap/__init__.py", line 42, in <module>
raise Exception(e) | closed | 2023-04-01T18:39:01Z | 2023-04-18T15:05:56Z | https://github.com/opengeos/streamlit-geospatial/issues/115 | [] | paritoshk | 1 |
seleniumbase/SeleniumBase | pytest | 2,328 | SeleniumBase Driver switches to another tab | After opening those three tabs, the driver switches from the third tab to the second one or the first one.
It doesn't happen with either the plain Selenium driver or the uc_chromedriver.
I managed to reproduce the bug with the following code:
happened on Python 3.9 & 3.11.6
```python
from seleniumbase import Driver

AMAZON_PRIME_LOGIN = "https://www.amazon.com/ap/signin?openid.pape.max_auth_age=3600&openid.return_to=https%3A%2F%2Fgaming.amazon.com%2Fprime%2Fsignup%2Feg%3Fingress%3Damzn%26ref_%3Dsm_w_ics_m_f_all&openid.identity=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.assoc_handle=amzn_respawn_desktop_us&openid.mode=checkid_setup&openid.claimed_id=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0%2Fidentifier_select&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&ssoResponse=eyJ6aXAiOiJERUYiLCJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiQTI1NktXIn0.yf9Wgo4I2tZaftdNDV9dGzDE3WBRtCwlofy9T0xdJFn6Z8J9GkkQ2A.YfhrqNQaRSrDgXpJ.5jj055CVEHpYJa2zcCUKxxPSxxcSeVjvQFpUjEP-_kOek_h1S8Zy6jujXVJSGJtsliAleSPGnrlvysESKkSEXnAWFOvJRcE9JepYQJulvu"
AMAZON_PRIME_GAMING = "https://gaming.amazon.com/prime-gaming-capsule-nov-23/dp/amzn1.pg.item.fe075900-6304-4e90-a13d-d5e04635dca9?ingress=amzn&ref_=SM_LeagueofLegends_S13_D09_CRWN"
AMAZON_NUMBER_SETTING = "https://www.amazon.com/ap/profile/mobilephone?ref_=ax_am_landing_add_mobile&openid.assoc_handle=usflex&referringAppAction=CNEP"

driver = Driver(uc=True)
driver.get(AMAZON_PRIME_LOGIN)
driver.switch_to.new_window('tab')
driver.get(AMAZON_PRIME_GAMING)
driver.switch_to.new_window('tab')
driver.get(AMAZON_NUMBER_SETTING)
input()
driver.quit()
```
"question",
"UC Mode / CDP Mode"
] | ZoneLover | 6 |
miguelgrinberg/Flask-Migrate | flask | 163 | How to call apis(upgrade, current) in python script | I have a database `db`. I want to check whether `flask_migrate` has created the tables in `db`; if not, run `upgrade` on `db`.
There are CLI commands, but no examples of calling `migrate` or `upgrade` from a Python script.
The test files in flask_migrate also run commands, such as:
`(o, e, s) = run_cmd('python app.py db migrate')`
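For what it's worth, a minimal sketch of calling the APIs from a script (my own sketch, not from the docs; assumes `app` and `db` are your existing Flask app and SQLAlchemy instance):
```python
from flask_migrate import Migrate, current, upgrade

migrate = Migrate(app, db)

with app.app_context():
    current()   # prints the revision the database is currently at
    upgrade()   # applies pending migrations, like `flask db upgrade`
```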
| closed | 2017-07-07T05:06:28Z | 2021-08-05T18:04:38Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/163 | [
"question"
] | zhangjiajie023 | 5 |
dmlc/gluon-nlp | numpy | 680 | vocabulary set_embedding(glove) maps all OOV terms to the same vector if no lookup provided | When attaching a pre-trained embedding to a built vocab, namely `vocab.set_embedding(glove_em)`, by design all vocabulary entries that are not present in the pre-trained embedding are mapped to `<unk>` using init_unknown_vec (default `nd.zeros`).
This is a bit awkward since `<unk>` was already defined when building the vocabulary (e.g., using a minimal term-frequency threshold), and one may expect these OOV terms to at least be mapped to different random vectors instead of the same vector.
Of course, using FastText with ngrams enabled, or providing `unknown_lookup`, could resolve it. However, as the default behavior it is still a bit counter to convention.
```
import gluonnlp
from mxnet import nd

text_data = "Computing-Tabulating-Recording \n affective-motivational \n teacher-dealers"
counter = gluonnlp.data.count_tokens(text_data.split())  # count_tokens expects a token list
vocab = gluonnlp.Vocab(counter, unknown_token='<unk>', min_freq=1)  # was self.min_freq
em = gluonnlp.embedding.create('GloVe',
                               unknown_token='<unk>',
                               source='glove.840B.300d',
                               allow_extend=True,
                               init_unknown_vec=nd.random.uniform)
vocab.set_embedding(em)
``` | open | 2019-04-25T11:46:37Z | 2019-06-23T21:41:18Z | https://github.com/dmlc/gluon-nlp/issues/680 | [
"enhancement"
] | khui | 6 |
tensorpack/tensorpack | tensorflow | 1,059 | dump-model-params.py fail for Fasterrcnn horovod checkpt | If you're asking about an unexpected problem you met, use this template.
__PLEASE DO NOT DELETE THIS TEMPLATE, FILL IT__:
### 1. What you did:
(1) **If you're using examples, what's the command you run:**
dump-model-params.py --meta mymeta myinputchpt outputnpz.npz
(2) **If you're using examples, have you made any changes to the examples? Paste `git diff` here:**
(3) **If not using examples, tell us what you did:**
Note that we may not be able to investigate it if there is no reproducible code.
It's always better to paste what you did instead of describing them.
### 2. What you observed:
(1) **Include the ENTIRE logs here:**
Traceback (most recent call last):
File "dump-model-params.py", line 25, in <module>
tf.train.import_meta_graph(args.meta, clear_devices=True)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1674, in import_meta_graph
meta_graph_or_file, clear_devices, import_scope, **kwargs)[0]
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1696, in _import_meta_graph_with_return_elements
**kwargs))
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 391, in import_graph_def
_RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 158, in _RemoveDefaultAttrs
op_def = op_dict[node.op]
KeyError: 'HorovodAllreduce'
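A hedged sketch of the usual fix for this particular KeyError (my assumption: the checkpoint's graph contains Horovod ops, whose kernels must be registered before the meta graph is imported):
```python
import tensorflow as tf
import horovod.tensorflow as hvd  # importing registers HorovodAllreduce and friends

tf.train.import_meta_graph("mymeta", clear_devices=True)
```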
It's always better to paste what you observed instead of describing them.
It's always better to paste **as much as possible**, although sometimes a partial log is OK.
Tensorpack typically saves stdout to its training log.
If stderr is relevant, you can run a command with `CMD 2>&1 | tee logs.txt`
to save both stdout and stderr to one file.
(2) **Other observations, if any:**
For example, CPU/GPU utilization, output images, tensorboard curves, if relevant to your issue.
### 3. What you expected, if not obvious.
If you expect higher speed, please first read http://tensorpack.readthedocs.io/en/latest/tutorial/performance-tuning.html
If you expect higher accuracy, only in one of the two conditions can we help with it:
(1) You're unable to match the accuracy documented in tensorpack examples.
(2) It appears to be a tensorpack bug.
Otherwise, how to get high accuracy is a machine learning question and is
not our responsibility to figure out.
### 4. Your environment:
+ Python version:
+ TF version: `python -c 'import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)'`.
+ Tensorpack version: `python -c 'import tensorpack; print(tensorpack.__version__);'`.
You can install Tensorpack master by `pip install -U git+https://github.com/ppwwyyxx/tensorpack.git`
and see if your issue is already solved.
+ If you're not using tensorpack under a normal command line shell (e.g.,
using an IDE or jupyter notebook), please retry under a normal command line shell.
+ Hardware information, e.g. number of GPUs used.
Feel free to add extra information related to your issue, but
please try to provide the above information __accurately__ to save effort in the investigation.
| closed | 2019-01-22T18:11:06Z | 2019-04-13T07:27:28Z | https://github.com/tensorpack/tensorpack/issues/1059 | [
"usage"
] | bluerythem | 2 |
ResidentMario/missingno | data-visualization | 73 | What does the right Y axis ticks in missingno represent? | According to the example run by the author in the docs, I thought Y axis represented the number of entries according to the percentage in the right Y axis. However, values are way discrepant in my case:

Could anyone explain what the right Y axis ticks really represent?
| closed | 2018-07-17T12:45:28Z | 2019-03-17T04:25:24Z | https://github.com/ResidentMario/missingno/issues/73 | [] | aguinaldoabbj | 1 |
recommenders-team/recommenders | data-science | 2,149 | [BUG] No detailed information in testing error | ### Description
<!--- Describe your issue/bug/request in detail -->
The recent nightly build logs (https://github.com/recommenders-team/recommenders/actions/runs/10335387467/job/28609908093) don't provide sufficient information about the error.

### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
In the testing workflow.
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Willingness to contribute
<!--- Go over all the following points, and put an `x` in the box that apply. -->
- [x] Yes, I can contribute for this issue independently.
- [ ] Yes, I can contribute for this issue with guidance from Recommenders community.
- [ ] No, I cannot contribute at this time.
### Other Comments
| closed | 2024-08-19T13:29:25Z | 2024-08-20T03:23:26Z | https://github.com/recommenders-team/recommenders/issues/2149 | [
"bug"
] | SimonYansenZhao | 1 |
ray-project/ray | deep-learning | 51,527 | [Train] Crash at end of training | ### What happened + What you expected to happen
Recently I've been starting to experience a crash at the end of training. The backtrace is always the same:
```
Training completed after 1 iterations at 2025-03-05 04:39:56. Total running time: 8min 51s
2025-03-05 04:39:56,506 INFO tune.py:1009 -- Wrote the latest version of all result files and experiment state to 'earthdaily-pathfinders-scaleai/venus/afm-profiling/train/experimental-2025-03-05_04-31-02_a458' in 0.5035s.
(TorchTrainer pid=439, ip=10.212.157.221) *** SIGSEGV received at time=1741149596 on cpu 63 ***
(TorchTrainer pid=439, ip=10.212.157.221) PC: @ 0x7f60b5a3c7be (unknown) ray::gcs::TaskInfoAccessor::AsyncAddTaskEventData()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b6ee7050 1824 (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b591d975 1392 ray::core::worker::TaskEventBufferImpl::FlushEvents()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b58a66ec 1488 ray::core::CoreWorker::Disconnect()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b58a6a9d 1152 ray::core::CoreWorker::ForceExit()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b58a6ecf 1680 ray::core::CoreWorker::HandleKillActor()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b589e3d4 192 ray::rpc::ServerCallImpl<>::HandleRequestImpl()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5c2bbc8 1168 EventTracker::RecordExecution()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5c0fffe 48 std::_Function_handler<>::_M_invoke()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5c10476 112 boost::asio::detail::completion_handler<>::do_complete()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b62d68db 128 boost::asio::detail::scheduler::do_run_one()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b62d8259 288 boost::asio::detail::scheduler::run()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b62d8962 96 boost::asio::io_context::run()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b57ff0b1 1280 ray::core::CoreWorker::RunIOService()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5d1d4e0 64 thread_proxy
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b6f341c4 (unknown) (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: *** SIGSEGV received at time=1741149596 on cpu 63 ***
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: PC: @ 0x7f60b5a3c7be (unknown) ray::gcs::TaskInfoAccessor::AsyncAddTaskEventData()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b6ee7050 1824 (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b591d975 1392 ray::core::worker::TaskEventBufferImpl::FlushEvents()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b58a66ec 1488 ray::core::CoreWorker::Disconnect()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b58a6a9d 1152 ray::core::CoreWorker::ForceExit()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b58a6ecf 1680 ray::core::CoreWorker::HandleKillActor()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b589e3d4 192 ray::rpc::ServerCallImpl<>::HandleRequestImpl()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b5c2bbc8 1168 EventTracker::RecordExecution()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b5c0fffe 48 std::_Function_handler<>::_M_invoke()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b5c10476 112 boost::asio::detail::completion_handler<>::do_complete()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b62d68db 128 boost::asio::detail::scheduler::do_run_one()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b62d8259 288 boost::asio::detail::scheduler::run()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b62d8962 96 boost::asio::io_context::run()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b57ff0b1 1280 ray::core::CoreWorker::RunIOService()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,569 E 439 479] logging.cc:484: @ 0x7f60b5d1d4e0 64 thread_proxy
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,569 E 439 479] logging.cc:484: @ 0x7f60b6f341c4 (unknown) (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) Fatal Python error: Segmentation fault
(TorchTrainer pid=439, ip=10.212.157.221)
(TorchTrainer pid=439, ip=10.212.157.221)
(TorchTrainer pid=439, ip=10.212.157.221) Extension modules: msgpack._cmsgpack, google._upb._message, psutil._psutil_linux, psutil._psutil_posix, setproctitle, yaml._yaml, charset_normalizer.md, ray._raylet, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.tslib, pandas._libs.lib, pandas._libs.hashing, pyarrow.lib, pandas._libs.ops, pyarrow._compute, bottleneck.move, bottleneck.nonreduce, bottleneck.nonreduce_axis, bottleneck.reduce, pandas._libs.arrays, pandas._libs.index, pandas._libs.join, pandas._libs.sparse, pandas._libs.reduction, pandas._libs.indexing, pandas._libs.internals, pandas._libs.writers, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.tslibs.strptime, pandas._libs.groupby, pandas._libs.testing, pandas._libs.parsers, pandas._libs.json, pyarrow._fs, pyarrow._azurefs, pyarrow._hdfs, pyarrow._gcsfs, pyarrow._s3fs, pyarrow._parquet, torch._C, torch._C._dynamo.autograd_compiler, torch._C._dynamo.eval_frame, torch._C._dynamo.guards, torch._C._dynamo.utils, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, pydantic.typing, pydantic.errors, pydantic.version, pydantic.utils, pydantic.class_validators, pydantic.config, pydantic.color, pydantic.datetime_parse, pydantic.validators, pydantic.networks, pydantic.types, pydantic.json, pydantic.error_wrappers, pydantic.fields, pydantic.parse, pydantic.schema, pydantic.main, pydantic.dataclasses, pydantic.annotated_types, pydantic.decorator, pydantic.env_settings, pydantic.tools, pydantic, pyarrow._json, lazy_object_proxy.cext, matplotlib._c_internal_utils, PIL._imaging, matplotlib._path, kiwisolver._cext, matplotlib._image, _cffi_backend, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg._matfuncs_expm, scipy.linalg._linalg_pythran, scipy.linalg.cython_blas, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack, scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, multidict._multidict, yarl._quoting_c, 
propcache._helpers_c, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket.mask, aiohttp._websocket.reader_c, frozenlist._frozenlist, sklearn.__check_build._check_build, sklearn.utils.murmurhash, scipy.spatial._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._cython_nnls, scipy._lib._uarray._uarray, scipy.linalg._decomp_interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.interpolate._fitpack, scipy.interpolate._dfitpack, scipy.interpolate._dierckx, scipy.interpolate._ppoly, scipy.interpolate._interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.interpolate._bspl, scipy.special.cython_special, scipy.stats._stats, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._biasedurn, scipy.stats._stats_pythran, scipy.stats._levy_stable.levyst, scipy.stats._ansari_swilk_statistics, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.ndimage._nd_image, scipy.ndimage._rank_filter_1d, _ni_label, scipy.ndimage._ni_label, sklearn.utils._openmp_helpers, sklearn.utils._logistic_sigmoid, sklearn.utils.sparsefuncs_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.utils._typedefs, sklearn.utils._readonly_array_wrapper, sklearn.metrics._dist_metrics, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.utils._cython_blas, sklearn.utils._heap, sklearn.utils._sorting, sklearn.utils._vector_sentinel, sklearn.metrics._pairwise_distances_reduction, sklearn.metrics._pairwise_fast, sklearn.utils._random, markupsafe._speedups, scipy.fftpack.convolve, tornado.speedups, greenlet._greenlet (total: 228)
```
I am experiencing this regardless of the number of workers I use (one or multiple). I am always using the DDP strategy though. This is how I am initializing the PyTorch Lightning trainer in my training loop:
```
trainer = Trainer(
strategy=ray.train.lightning.RayDDPStrategy(),
plugins=[ray.train.lightning.RayLightningEnvironment()],
)
```
Beyond that, I'm not sure what information would be relevant, but I am happy to provide more info about the way I am running my training jobs upon request.
The same backtrace has been reported before in a comment on [this issue](https://github.com/ray-project/ray/issues/49998), however the original description of that issue seems unrelated, so I am creating a new issue here.
### Versions / Dependencies
Ray: 2.43.0
I think I've only experienced this with Ray 2.43.0 and not in an older version.
### Reproduction script
This is hard to reproduce - it happens occasionally at the end of training.
### Issue Severity
Medium: It is a significant difficulty but I can work around it. | open | 2025-03-19T16:53:00Z | 2025-03-19T22:00:57Z | https://github.com/ray-project/ray/issues/51527 | [
"bug",
"triage",
"train"
] | jleben | 0 |
plotly/dash | data-science | 3,036 | [Feature Request] Optionally pass errors raised from a callback specific error handler to the global error handler | Thanks so much for your interest in Dash!
Before posting an issue here, please check the Dash [community forum](https://community.plotly.com/c/dash) to see if the topic has already been discussed. The community forum is also great for implementation questions. When in doubt, please feel free to just post the issue here :)
**Is your feature request related to a problem? Please describe.**
Currently if I define `on_error` for both the `Dash()` class and a for a specific callback, the callback specific `on_error` overwrites the global `on_error`. It would be nice to have an option to chain these error handlers, so that exceptions raised from the callback specific `on_error` are passed on to the global `on_error`.
Example I can think of:
- callback specific on_error: handles user incorrectly filling in data
- global on_error: catches unexpected errors (ie. bugs) and notifies the developer
**Describe the solution you'd like**
Perhaps a `bool` argument to the `callback()` decorator that would enable passing uncaught exceptions from the local `on_error` to the global `on_error`, and the same argument to the `Dash()` class which would be used as a default for all callbacks?
**Describe alternatives you've considered**
- Wrapping the callback-specific on_error in a try/except block and calling the global error handler manually (see the sketch below).
- Wrapping the body of a callback in a try/except block and calling the callback-specific on_error manually.
I think both of these approaches are unnecessary boilerplate.
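A sketch of the first workaround (a hypothetical helper, not a Dash API; it assumes the handler receives the raised exception as its only argument):
```python
def chain_error_handlers(local_handler, global_handler):
    """Run local_handler first; anything it raises falls through to global_handler."""
    def handler(err):
        try:
            return local_handler(err)
        except Exception as unhandled:
            return global_handler(unhandled)
    return handler

# usage sketch (handler names are made up):
# @callback(..., on_error=chain_error_handlers(handle_bad_user_input, notify_developer))
```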
**Additional context**
Add any other context or screenshots about the feature request here.
| open | 2024-10-15T09:59:29Z | 2024-10-15T09:59:29Z | https://github.com/plotly/dash/issues/3036 | [] | tlauli | 0 |
paulpierre/RasaGPT | fastapi | 21 | Any plan to support Slack bot? | Hi,
Kindly ask if you could consider adding support for Slack bot
| open | 2023-05-15T18:18:34Z | 2023-05-15T18:18:34Z | https://github.com/paulpierre/RasaGPT/issues/21 | [] | JosephShenV | 0 |
Johnserf-Seed/TikTokDownload | api | 77 | Batch download errors in like mode | **The same account downloads fine in post mode, but like mode throws an error!**
For batch download just press Enter; for a single video just paste the video link!
----Finished reading config----
----Downloading multiple videos for you----
----User sec_id=MS4wLjABAAAAs_Dkw8_CynCMjVN601UkAa8M3TGfDgLJqYUn2tKeyy_iEDch9ifarviMtWSjD4qN----
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-2-6256bfbbea5e> in <module>
1 import TikTokMulti as MTK
2
----> 3 MTK.TikTok()
4
5 # single-video download
~\Desktop\TikTokDownload-main\TikTokMulti.py in __init__(self)
99
100 print('----Finished reading config----\r')
--> 101 self.judge_link()
102
103 def out_Print(self):
~\Desktop\TikTokDownload-main\TikTokMulti.py in judge_link(self)
146 response = requests.get(url = api_post_url,headers = self.headers)
147 html = json.loads(response.content.decode())
--> 148 self.nickname = html['aweme_list'][0]['author']['nickname']
149 if not os.path.exists(self.save + self.mode + "\\" + self.nickname):
150 os.makedirs(self.save + self.mode + "\\" + self.nickname)
IndexError: list index out of range | closed | 2022-01-17T23:21:09Z | 2022-08-27T08:13:04Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/77 | [] | adam2rdam | 9 |
jmcnamara/XlsxWriter | pandas | 552 | Feature request: Support shapes | Hi, as you suggested in #107 I'm opening an issue for this. Other issue are quite old, I understand that some infrastructure are here to support shape, do you have updated stuff in the code to help supporting shapes. This shapes support might not be a great place for a first contrib :-/
| closed | 2018-08-13T13:45:41Z | 2018-08-15T11:54:10Z | https://github.com/jmcnamara/XlsxWriter/issues/552 | [
"feature request",
"someday"
] | RemiDesgrange | 1 |
unionai-oss/pandera | pandas | 1,919 | Datasets that fail DataFrameModel validation unexpectedly alter field properties | **Describe the bug**
When a dataframe fails validation, it can corrupt the state of the `coerce` attribute for a particular field.
In my case, a field is defined as coerce=True, but after trying to validate the bad dataset, the field now has coerce=False.
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandera.
- [ ] (optional) I have confirmed this bug exists on the main branch of pandera.
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import pandas as pd
import pandera as pa
from pandera.typing import Series
class Table(pa.DataFrameModel):
"""Simple table with 2 columns."""
chr: Series[str] = pa.Field(nullable=False, description="Chromosome", str_length=dict(min_value=1), coerce=True)
start: Series[int] = pa.Field(nullable=False, ge=0, description="0-based inclusive start position of region")
assert Table.to_schema().columns["chr"].coerce # Passes as expected
Table.validate(pd.DataFrame({"chr": ["chr1", "chr2"], "start": [0, 10]}))
assert Table.to_schema().columns["chr"].coerce # Still passes as expected
try:
Table.validate(pd.DataFrame({"chr": ["", "chr1"], "start": [0, 10]}))
raise AssertionError("Dataframe should fail validation as str_length constraint not met")
except pa.errors.SchemaError:
...
# Unexpectedly fails. coerce is now False for this Field.
# Failed validation essentially corrupted the state of the class
assert Table.to_schema().columns["chr"].coerce
```
#### Expected behavior
The state of a DataFrameModel should not be changed by failed dataframe manipulation.
#### Additional context
I believe the problem is caused by these lines: https://github.com/unionai-oss/pandera/blob/main/pandera/backends/pandas/container.py#L221-L223
The coerce attribute of the field is being changed during dataframe validation, and is intended to be restored once validation completes. But if an exception is raised, the attributes are not properly reverted as the code just jumps to the except blocks.
A simple fix is just to move these reversion lines outside of the try-except block, after the last of the exception blocks (so that reversion is always applied regardless of validation success). Alternatively, perhaps it's cleaner to deepcopy the schema_component to avoid the complicated logic regarding changing and reverting these various attributes.
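A minimal sketch of the restore-in-finally pattern being suggested (hypothetical names, not pandera's actual internals):
```python
def validate_with_coercion(column, do_validate):
    """Temporarily force coercion, restoring the flag even if validation raises."""
    original_coerce = column.coerce
    column.coerce = True
    try:
        return do_validate(column)
    finally:
        column.coerce = original_coerce  # runs on success *and* on exception
```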
| closed | 2025-02-28T15:11:24Z | 2025-03-06T03:48:32Z | https://github.com/unionai-oss/pandera/issues/1919 | [
"bug"
] | tfwillems | 2 |
chiphuyen/stanford-tensorflow-tutorials | nlp | 49 | 03_linear_regression_sol.py can't run with errors as follows | Traceback (most recent call last):
File "D:/stanford_tensorflow_tutorials/tf_oreilly/01_linear_regression_sol.py", line 19, in <module>
book = xlrd.open_workbook(DATA_FILE, encoding_override="utf-8")
File "C:\Anaconda3\lib\site-packages\xlrd\__init__.py", line 441, in open_workbook
ragged_rows=ragged_rows,
File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 107, in open_workbook_xls
bk.fake_globals_get_sheet()
File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 687, in fake_globals_get_sheet
self.get_sheets()
File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 678, in get_sheets
self.get_sheet(sheetno)
File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 669, in get_sheet
sh.read(self)
File "C:\Anaconda3\lib\site-packages\xlrd\sheet.py", line 1475, in read
self.update_cooked_mag_factors()
File "C:\Anaconda3\lib\site-packages\xlrd\sheet.py", line 1543, in update_cooked_mag_factors
elif not (10 <= zoom <= 400):
TypeError: unorderable types: int() <= NoneType() | open | 2017-08-12T13:35:06Z | 2017-08-12T13:35:06Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/49 | [] | WoNiuHu | 0 |
dask/dask | numpy | 11,455 | dask-expr increasing calculation time | Hi guys!
I would like to report a possible bug in dask-expr.
Running this code with dask-expr takes ~10 seconds on my machine.
Running without dask-expr takes ~2 seconds.
I've attached a sample CSV file as a data sample.
[a (1).csv](https://github.com/user-attachments/files/17492243/a.1.csv)
```
import dask.dataframe as dd
import pandas as pd
df = dd.read_csv('a (1).csv')
df.head()
novas_colunas = [
'CD_5_AJST_PVS', 'CD_5_DT_CTB_AJST_PVS', 'CD_6_RCBT_RC', 'CD_6_DT_CTB_RCBT_RC',
'CD_7_RVSA_RCBT_RC', 'CD_7_DT_CTB_RVSA_RCBT_RC', 'CD_8_PGTO_CBAC', 'CD_9_RVSA_PGTO_CBAC'
]
for coluna in novas_colunas:
df[coluna] = pd.Series(dtype='datetime64[ns]')
df['CD_5_AJST_PVS'] = df['CD_5_AJST_PVS'].mask(cond=(df['codigoTipoEvento'] == 5), other=df['dataHoraEvento'])
df['CD_5_DT_CTB_AJST_PVS'] = df['CD_5_DT_CTB_AJST_PVS'].mask(cond=(df['codigoTipoEvento'] == 5), other=df['dataHoraEvento'])
df['CD_6_RCBT_RC'] = df['CD_6_RCBT_RC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df['CD_6_DT_CTB_RCBT_RC'] = df['CD_6_DT_CTB_RCBT_RC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df['CD_7_RVSA_RCBT_RC'] = df['CD_7_RVSA_RCBT_RC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df['CD_7_DT_CTB_RVSA_RCBT_RC'] = df['CD_7_DT_CTB_RVSA_RCBT_RC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df['CD_8_PGTO_CBAC'] = df['CD_8_PGTO_CBAC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df['CD_9_RVSA_PGTO_CBAC'] = df['CD_9_RVSA_PGTO_CBAC'].mask(cond=(df['codigoTipoEvento'] == 6), other=df['dataHoraEvento'])
df = df.drop(columns = ['codigoTipoEvento', 'dataHoraEvento'])
df.compute()
```
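As a hedged workaround sketch for comparing timings (config key from memory; worth verifying against the installed dask version):
```python
import dask

# opt out of the dask-expr query planner before importing dask.dataframe
dask.config.set({"dataframe.query-planning": False})

import dask.dataframe as dd  # now uses the legacy (non-expr) implementation
```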
**Environment**:
- Dask version: 2024.10.0
- dask-expr : 1.1.16
- Python version: 3.11
- Operating System: Windows
- Install method (conda, pip, source): Pip
| closed | 2024-10-23T13:58:46Z | 2024-11-14T10:00:16Z | https://github.com/dask/dask/issues/11455 | [
"dask-expr"
] | frbelotto | 2 |
google-research/bert | tensorflow | 984 | token_type_id has only two possible values, yes? | I read the code and found the following in the comments:
token_type_ids = tf.constant([[0, 0, 1], [0, 2, 0]])
But I think token_type_ids can only be 1 or 0; am I wrong? | open | 2020-01-08T07:50:02Z | 2020-03-28T14:02:25Z | https://github.com/google-research/bert/issues/984 | [] | novas-meng | 1 |
horovod/horovod | deep-learning | 2,935 | Allreduce hang in yolo3 training on multiple machines | **Environment:**
1. Framework: MXNet
2. Framework version:1.6.0.post0
3. Horovod version:0.22.0
4. MPI version:3.1.0
5. CUDA version:10.2
6. NCCL version:2.9.6
7. Python version:3.6
8. Spark / PySpark version:
9. Ray version:
10. OS and version: Ubuntu 18.04.5 LTS \n \l
11. GCC version:7.5.0
12. CMake version:3.20.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
Hi, all
I got warning like this.
When I am using 2 node and 2 GPUs for distributed training, there are some infos following:
envs:
```
docker image : gluonai/gluon-cv:gpu-latest
python:3.6
mxnet-cu102:1.6.0.post0
horovod:0.22.0
```
run_multi_node.sh in the gluon-cv directory is:
```
export CUDA_VISIBLE_DEVICES=2
export NCCL_DEBUG=info
export NCCL_SOCKET_IFNAME=eth0
export NCCL_IB_DISABLE=1
rm -rf log
mkdir log
export MXNET_CUDNN_AUTOTUNE_DEFAULT=0
export MXNET_EXEC_ENABLE_ADDTO=1
python3 ./scripts/detection/yolo/train_yolo3.py \
--network darknet53 \
--dataset=coco \
--batch-size=4 \
--horovod \
--num-workers 8 \
--log-interval 10 \
--lr-decay-epoch 220,250 \
--epochs 280 \
--warmup-epochs 2 \
--mixup \
--no-mixup-epochs 20 \
--label-smooth --no-wd \
--save-interval 1 \
--val-interval 1 \
--syncbn \
--save-prefix log/
```
and the log is:
```
root@yq01-sys-hic-v100-box-a223-0155:/home/users/liuyuhui/workspace/gluon-cv# /usr/bin/mpirun bash run_multi_node.sh
Mon May 24 06:42:09 2021[1,1]<stdout>:loading annotations into memory...
Mon May 24 06:42:09 2021[1,0]<stdout>:loading annotations into memory...
Mon May 24 06:42:25 2021[1,0]<stdout>:Done (t=16.53s)
Mon May 24 06:42:25 2021[1,0]<stdout>:creating index...
Mon May 24 06:42:26 2021[1,0]<stdout>:index created!
Mon May 24 06:42:27 2021[1,1]<stdout>:Done (t=17.78s)
Mon May 24 06:42:27 2021[1,1]<stdout>:creating index...
Mon May 24 06:42:27 2021[1,1]<stdout>:index created!
Mon May 24 06:42:51 2021[1,0]<stdout>:loading annotations into memory...
Mon May 24 06:42:52 2021[1,0]<stdout>:Done (t=0.44s)
Mon May 24 06:42:52 2021[1,0]<stdout>:creating index...
Mon May 24 06:42:52 2021[1,0]<stdout>:index created!
Mon May 24 06:42:52 2021[1,1]<stdout>:loading annotations into memory...
Mon May 24 06:42:52 2021[1,1]<stdout>:Done (t=0.44s)
Mon May 24 06:42:52 2021[1,1]<stdout>:creating index...
Mon May 24 06:42:52 2021[1,1]<stdout>:index created!
Mon May 24 06:43:03 2021[1,0]<stderr>:INFO:root:Namespace(amp=False, batch_size=4, data_shape=416, dataset='coco', epochs=280, gpus='0', horovod=True, label_smooth=True, log_interval=10, lr=0.001, lr_decay=0.1, lr_decay_epoch='220,250', lr_decay_period=0, lr_mode='step', mixup=True, momentum=0.9, network='darknet53', no_mixup_epochs=20, no_random_shape=False, no_wd=True, num_samples=117266, num_workers=8, resume='', save_interval=1, save_prefix='log/yolo3_darknet53_coco', seed=233, start_epoch=0, syncbn=True, val_interval=1, warmup_epochs=2, warmup_lr=0.0, wd=0.0005)
Mon May 24 06:43:03 2021[1,0]<stderr>:INFO:root:Start training from [Epoch 0]
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Bootstrap : Using eth0:10.255.100.13<0>
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO NET/Socket : Using [0]eth0:10.255.100.13<0>
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Using network Socket
Mon May 24 06:43:04 2021[1,0]<stdout>:NCCL version 2.9.6+cuda10.2
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Bootstrap : Using xgbe0:10.127.28.15<0>
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO NET/Socket : Using [0]xgbe0:10.127.28.15<0>
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Using network Socket
Mon May 24 06:43:04 2021[1,1]<stderr>:INFO:root:Namespace(amp=False, batch_size=4, data_shape=416, dataset='coco', epochs=280, gpus='0', horovod=True, label_smooth=True, log_interval=10, lr=0.001, lr_decay=0.1, lr_decay_epoch='220,250', lr_decay_period=0, lr_mode='step', mixup=True, momentum=0.9, network='darknet53', no_mixup_epochs=20, no_random_shape=False, no_wd=True, num_samples=117266, num_workers=8, resume='', save_interval=1, save_prefix='log/yolo3_darknet53_coco', seed=233, start_epoch=0, syncbn=True, val_interval=1, warmup_epochs=2, warmup_lr=0.0, wd=0.0005)
Mon May 24 06:43:04 2021[1,1]<stderr>:INFO:root:Start training from [Epoch 0]
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 00/02 : 0 1
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 01/02 : 0 1
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Setting affinity for GPU 2 to 0fffff
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Setting affinity for GPU 2 to ffffff
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 00 : 1[41000] -> 0[42000] [receive] via NET/Socket/0
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Channel 00 : 0[42000] -> 1[41000] [receive] via NET/Socket/0
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 01 : 1[41000] -> 0[42000] [receive] via NET/Socket/0
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Channel 01 : 0[42000] -> 1[41000] [receive] via NET/Socket/0
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 00 : 0[42000] -> 1[41000] [send] via NET/Socket/0
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Channel 00 : 1[41000] -> 0[42000] [send] via NET/Socket/0
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Channel 01 : 0[42000] -> 1[41000] [send] via NET/Socket/0
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Channel 01 : 1[41000] -> 0[42000] [send] via NET/Socket/0
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Connected all rings
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO Connected all trees
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
Mon May 24 06:43:04 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:5377:5532 [0] NCCL INFO comm 0x7f440c35e630 rank 1 nranks 2 cudaDev 0 busId 41000 - Init COMPLETE
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Connected all rings
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Connected all trees
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO comm 0x7f9f9c37d480 rank 0 nranks 2 cudaDev 0 busId 42000 - Init COMPLETE
Mon May 24 06:43:04 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:5328:5459 [0] NCCL INFO Launch mode Parallel
Mon May 24 06:43:09 2021[1,0]<stderr>:INFO:root:[Epoch 0][Batch 9], LR: 1.71E-07, Speed: 19.202 samples/sec, ObjLoss=6658.178, BoxCenterLoss=20.613, BoxScaleLoss=24.689, ClassLoss=435.474
Mon May 24 06:43:12 2021[1,0]<stderr>:INFO:root:[Epoch 0][Batch 19], LR: 3.41E-07, Speed: 17.696 samples/sec, ObjLoss=7650.106, BoxCenterLoss=16.439, BoxScaleLoss=18.395, ClassLoss=339.499
Mon May 24 06:43:14 2021[1,0]<stderr>:INFO:root:[Epoch 0][Batch 29], LR: 5.12E-07, Speed: 19.079 samples/sec, ObjLoss=6205.702, BoxCenterLoss=18.171, BoxScaleLoss=19.766, ClassLoss=378.784
Mon May 24 06:43:17 2021[1,0]<stderr>:INFO:root:[Epoch 0][Batch 39], LR: 6.82E-07, Speed: 9.979 samples/sec, ObjLoss=5282.987, BoxCenterLoss=17.172, BoxScaleLoss=18.486, ClassLoss=358.852
Mon May 24 06:47:08 2021[1,0]<stderr>:[Mon May 24 06:47:08 2021[1,0]<stderr>:2021-05-24 06:47:08.577207: W /tmp/pip-install-8i6hkcw7/horovod_78663a94386f4ebabc36dedd9415f5c2/horovod/common/stall_inspector.cc:105] One or more tensors were submitted to be reduced, gathered or broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting tensors, which will cause deadlock.
Mon May 24 06:47:08 2021[1,0]<stderr>:Missing ranks:
Mon May 24 06:47:08 2021[1,0]<stderr>:1: [horovod_allreduce.0, horovod_allreduce.1, horovod_allreduce.100, horovod_allreduce.101, horovod_allreduce.104, horovod_allreduce.105 ...]
```
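For reference, the 60-second threshold in this warning only controls when the stall inspector starts reporting. A sketch for raising it while debugging which rank falls behind, assuming Horovod's stall-inspector environment variables:
```
# Assumed Horovod env vars: warn after 120s instead of 60s, and only shut
# the job down if a tensor is still missing after 600s.
export HOROVOD_STALL_CHECK_TIME_SECONDS=120
export HOROVOD_STALL_SHUTDOWN_TIME_SECONDS=600
```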
It seems that the allreduce times out. However, the MNIST demo provided by Horovod runs fine across the same machines.
Its run.sh is as follows:
```
export NCCL_DEBUG=info
export NCCL_SOCKET_IFNAME=eth0
export NCCL_IB_DISABLE=1
python3 mxnet_mnist.py
```
```
root@yq01-sys-hic-v100-box-a223-0155:/home/users/liuyuhui/workspace/horovod/examples/mxnet# /usr/bin/mpirun bash run.sh
Mon May 24 07:12:45 2021[1,1]<stderr>:INFO:root:Namespace(batch_size=64, dtype='float32', epochs=5, gradient_predivide_factor=1.0, lr=0.01, momentum=0.9, no_cuda=False)
Mon May 24 07:12:45 2021[1,0]<stderr>:INFO:root:Namespace(batch_size=64, dtype='float32', epochs=5, gradient_predivide_factor=1.0, lr=0.01, momentum=0.9, no_cuda=False)
Mon May 24 07:12:45 2021[1,0]<stderr>:INFO:root:data-0/mnist.zip exists, skipping download
Mon May 24 07:12:45 2021[1,1]<stderr>:INFO:root:data-1/mnist.zip exists, skipping download
Mon May 24 07:12:46 2021[1,1]<stderr>:[07:12:46] src/io/iter_mnist.cc:Mon May 24 07:12:46 2021[1,1]<stderr>:113: MNISTIter: load 30000 images, shuffle=1, shape=[64,1,28,28]
Mon May 24 07:12:46 2021[1,0]<stderr>:[07:12:46Mon May 24 07:12:46 2021[1,0]<stderr>:] src/io/iter_mnist.cc:113: MNISTIter: load 30000 images, shuffle=1, shape=[64,1,28,28]
Mon May 24 07:12:46 2021[1,1]<stderr>:[07:12:46] src/io/iter_mnist.cc:113: MNISTIter: load 10000 images, shuffle=1, shape=[Mon May 24 07:12:46 2021[1,1]<stderr>:64,1,28,28]
Mon May 24 07:12:47 2021[1,0]<stderr>:[07:12:47] src/io/iter_mnist.cc:113: MNISTIter: load 10000 images, shuffle=Mon May 24 07:12:47 2021[1,0]<stderr>:1, shape=[64,1,28,28]
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Bootstrap : Using eth0:10.255.100.13<0>
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO NET/Socket : Using [0]eth0:10.255.100.13<0>
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Using network Socket
Mon May 24 07:12:50 2021[1,0]<stdout>:NCCL version 2.9.6+cuda10.2
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Bootstrap : Using xgbe0:10.127.28.15<0>
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO NET/Socket : Using [0]xgbe0:10.127.28.15<0>
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Using network Socket
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 00/02 : 0 1
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 01/02 : 0 1
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Setting affinity for GPU 0 to 0fffff
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Setting affinity for GPU 0 to ffffff
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 00 : 1[3f000] -> 0[40000] [receive] via NET/Socket/0
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 00 : 0[40000] -> 1[3f000] [receive] via NET/Socket/0
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 01 : 1[3f000] -> 0[40000] [receive] via NET/Socket/0
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 01 : 0[40000] -> 1[3f000] [receive] via NET/Socket/0
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 00 : 0[40000] -> 1[3f000] [send] via NET/Socket/0
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 00 : 1[3f000] -> 0[40000] [send] via NET/Socket/0
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Channel 01 : 0[40000] -> 1[3f000] [send] via NET/Socket/0
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Channel 01 : 1[3f000] -> 0[40000] [send] via NET/Socket/0
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Connected all rings
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Connected all trees
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Connected all rings
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO comm 0x7ff8d8346190 rank 0 nranks 2 cudaDev 0 busId 40000 - Init COMPLETE
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO Connected all trees
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
Mon May 24 07:12:50 2021[1,1]<stdout>:yq01-sys-hic-k8s-v100-box-a225-0562:6159:6315 [0] NCCL INFO comm 0x7f22c834adb0 rank 1 nranks 2 cudaDev 0 busId 3f000 - Init COMPLETE
Mon May 24 07:12:50 2021[1,0]<stdout>:yq01-sys-hic-v100-box-a223-0155:6243:6375 [0] NCCL INFO Launch mode Parallel
Mon May 24 07:12:50 2021[1,0]<stderr>:[07:12:50] Mon May 24 07:12:50 2021[1,0]<stderr>:src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
Mon May 24 07:12:50 2021[1,1]<stderr>:[07:12:50] Mon May 24 07:12:50 2021[1,1]<stderr>:src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
Mon May 24 07:12:51 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 100] Training: accuracy=0.867500
Mon May 24 07:12:51 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 100] Training: accuracy=0.865469
Mon May 24 07:12:51 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 200] Training: accuracy=0.914297
Mon May 24 07:12:51 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 200] Training: accuracy=0.915547
Mon May 24 07:12:51 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 300] Training: accuracy=0.934219
Mon May 24 07:12:51 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 300] Training: accuracy=0.934010
Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:[Epoch 0 Batch 400] Training: accuracy=0.944688
Mon May 24 07:12:52 2021[1,1]<stderr>:INFO:root:[Epoch 0 Batch 400] Training: accuracy=0.944688
Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:Epoch[0] Speed=28557.88 samples/s Time cost=2.097635
Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:Epoch[0] Train: accuracy=0.949753 Validation: accuracy=0.983373
Mon May 24 07:12:52 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 100] Training: accuracy=0.984375
Mon May 24 07:12:52 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 100] Training: accuracy=0.981094
Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 200] Training: accuracy=0.985078
Mon May 24 07:12:53 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 200] Training: accuracy=0.981016
Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 300] Training: accuracy=0.985000
Mon May 24 07:12:53 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 300] Training: accuracy=0.981563
Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:[Epoch 1 Batch 400] Training: accuracy=0.985117
Mon May 24 07:12:53 2021[1,1]<stderr>:INFO:root:[Epoch 1 Batch 400] Training: accuracy=0.982500
Mon May 24 07:12:53 2021[1,0]<stderr>:INFO:root:Epoch[1] Speed=40224.99 samples/s Time cost=1.489223
Mon May 24 07:12:54 2021[1,0]<stderr>:INFO:root:Epoch[1] Train: accuracy=0.985544 Validation: accuracy=0.986378
Mon May 24 07:12:54 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 100] Training: accuracy=0.990469
Mon May 24 07:12:54 2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 100] Training: accuracy=0.989219
Mon May 24 07:12:54 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 200] Training: accuracy=0.990703
Mon May 24 07:12:54 2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 200] Training: accuracy=0.988047
Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 300] Training: accuracy=0.990417
Mon May 24 07:12:55 2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 300] Training: accuracy=0.988229
Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:[Epoch 2 Batch 400] Training: accuracy=0.990156
Mon May 24 07:12:55 2021[1,1]<stderr>:INFO:root:[Epoch 2 Batch 400] Training: accuracy=0.988711
Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:Epoch[2] Speed=40484.51 samples/s Time cost=1.479677
Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:Epoch[2] Train: accuracy=0.990151 Validation: accuracy=0.988482
Mon May 24 07:12:55 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 100] Training: accuracy=0.993594
Mon May 24 07:12:55 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 100] Training: accuracy=0.993437
Mon May 24 07:12:56 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 200] Training: accuracy=0.992266
Mon May 24 07:12:56 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 200] Training: accuracy=0.993359
Mon May 24 07:12:56 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 300] Training: accuracy=0.993229
Mon May 24 07:12:56 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 300] Training: accuracy=0.992344
Mon May 24 07:12:56 2021[1,0]<stderr>:INFO:root:[Epoch 3 Batch 400] Training: accuracy=0.993203
Mon May 24 07:12:56 2021[1,1]<stderr>:INFO:root:[Epoch 3 Batch 400] Training: accuracy=0.992500
Mon May 24 07:12:57 2021[1,0]<stderr>:INFO:root:Epoch[3] Speed=34536.84 samples/s Time cost=1.734496
Mon May 24 07:12:57 2021[1,0]<stderr>:INFO:root:Epoch[3] Train: accuracy=0.993356 Validation: accuracy=0.989083
Mon May 24 07:12:57 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 100] Training: accuracy=0.996094
Mon May 24 07:12:57 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 100] Training: accuracy=0.995469
Mon May 24 07:12:58 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 200] Training: accuracy=0.995547
Mon May 24 07:12:58 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 200] Training: accuracy=0.994531
Mon May 24 07:12:58 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 300] Training: accuracy=0.995469
Mon May 24 07:12:58 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 300] Training: accuracy=0.994375
Mon May 24 07:12:58 2021[1,0]<stderr>:INFO:root:[Epoch 4 Batch 400] Training: accuracy=0.995273
Mon May 24 07:12:58 2021[1,1]<stderr>:INFO:root:[Epoch 4 Batch 400] Training: accuracy=0.994609
Mon May 24 07:12:59 2021[1,0]<stderr>:INFO:root:Epoch[4] Speed=34617.01 samples/s Time cost=1.730479
Mon May 24 07:12:59 2021[1,0]<stderr>:INFO:root:Epoch[4] Train: accuracy=0.995259 Validation: accuracy=0.990084
root@yq01-sys-hic-v100-box-a223-0155:/home/users/liuyuhui/workspace/horovod/examples/mxnet#
```
Any suggestions would be highly welcome!
| open | 2021-05-24T08:03:09Z | 2021-05-24T08:03:09Z | https://github.com/horovod/horovod/issues/2935 | [
"bug"
] | vslyu | 0 |
axnsan12/drf-yasg | django | 744 | ModuleNotFoundError: No module named 'jsonschema.compat' | drf_yasg==1.20.0
swagger_spec_validator==2.7.3
```python
Internal Server Error: /
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 204, in _get_response
response = response.render()
File "/usr/local/lib/python3.9/site-packages/django/template/response.py", line 105, in render
self.content = self.rendered_content
File "/usr/local/lib/python3.9/site-packages/rest_framework/response.py", line 70, in rendered_content
ret = renderer.render(self.data, accepted_media_type, context)
File "/usr/local/lib/python3.9/site-packages/drf_yasg/renderers.py", line 35, in render
return codec.encode(data)
File "/usr/local/lib/python3.9/site-packages/drf_yasg/codecs.py", line 73, in encode
VALIDATORS[validator](copy.deepcopy(spec))
File "/usr/local/lib/python3.9/site-packages/drf_yasg/codecs.py", line 29, in _validate_swagger_spec_validator
from swagger_spec_validator.common import SwaggerValidationError as SSVErr
File "/usr/local/lib/python3.9/site-packages/swagger_spec_validator/__init__.py", line 8, in <module>
from swagger_spec_validator.util import validate_spec_url
File "/usr/local/lib/python3.9/site-packages/swagger_spec_validator/util.py", line 9, in <module>
from swagger_spec_validator import validator12
File "/usr/local/lib/python3.9/site-packages/swagger_spec_validator/validator12.py", line 29, in <module>
from swagger_spec_validator.ref_validators import default_handlers
File "/usr/local/lib/python3.9/site-packages/swagger_spec_validator/ref_validators.py", line 14, in <module>
from jsonschema.compat import iteritems
ModuleNotFoundError: No module named 'jsonschema.compat'
```
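A possible mitigation sketch, assuming the failure comes from an older swagger-spec-validator that still imports the `jsonschema.compat` module (removed in newer jsonschema releases):
```
pip install "jsonschema<4"             # keep the compat module importable
# or
pip install -U swagger-spec-validator  # a release that drops the compat import
```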
| open | 2021-10-03T01:59:03Z | 2025-03-07T12:11:13Z | https://github.com/axnsan12/drf-yasg/issues/744 | [
"triage"
] | spavlovich001 | 1 |
jina-ai/serve | deep-learning | 5,671 | jina export kubernetes CLI should be able to export a Deploment YAML file to kubernetes | closed | 2023-02-08T14:19:19Z | 2023-02-08T21:12:47Z | https://github.com/jina-ai/serve/issues/5671 | [] | alaeddine-13 | 0 |
|
OpenBB-finance/OpenBB | python | 6904 | [🕹️]Starry-eyed Supporter | ### What side quest or challenge are you solving?
[🕹️]Starry-eyed Supporter
### Points
150
### Description
Starred OpenBB repo
### Provide proof that you've completed the task
1.

2.

3.

4.

5.

| closed | 2024-10-28T16:24:05Z | 2024-10-30T20:51:16Z | https://github.com/OpenBB-finance/OpenBB/issues/6904 | [] | nepalankit | 2 |
ageitgey/face_recognition | machine-learning | 1,164 | Need Help with Group Photo Handling | * face_recognition version:
* Python version: 3.6
* Operating System: Windows 10
### Description
I want to compare two photos. The first has the face of one individual. The second is a group photo with many faces. I want to see if the individual from the first photo appears in the second photo.
### What I Did
I tried:
```
face_locations = face_recognition.face_locations(img2_loaded)
for face in face_locations:
    top, right, bottom, left = face
    # Crop the detected face region (raw pixel array) out of the group photo
    face_img = img2_loaded[top:bottom, left:right]
    face_recognition.compare_faces(img1_loaded, face_img)
```
And I'm getting an error: operands could not be broadcast together with shapes (3088,2316,3) (90,89,3). Any tips would be much appreciated.
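Is something like the following the intended usage? This is just a sketch on my part, assuming `compare_faces` expects a list of 128-d face encodings plus one encoding to check (rather than raw pixel arrays), and that the first photo contains exactly one face:
```
import face_recognition

# Encode the single face in the reference photo.
ref_encoding = face_recognition.face_encodings(img1_loaded)[0]

# Encode every face detected in the group photo, then compare each one.
group_encodings = face_recognition.face_encodings(img2_loaded)
matches = face_recognition.compare_faces(group_encodings, ref_encoding)
print(any(matches))  # True if the individual appears in the group photo
```
 | open | 2020-06-22T01:11:48Z | 2020-10-09T05:27:23Z | https://github.com/ageitgey/face_recognition/issues/1164 | [] | Basant206 | 1 |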
OthersideAI/self-operating-computer | automation | 147 | [BUG] Brief Description of the Issue | Found a bug? Please fill out the sections below.
### Describe the bug
A clear and concise description of what the bug is.
### Steps to Reproduce
1. When starting the project in VS Code and running it locally, it just opens the local Git window and the project crashes.
2. The screen opens for about 2 seconds and then disappears.
### Expected Behavior
A brief description of what you expected to happen.
I expected a window to open where I can write something and instruct it to perform the task, as others are doing.
### Actual Behavior:
What actually happened: mentioned above.
### Environment
- OS: Windows 10
- Model Used (e.g., GPT-4v, Gemini Pro Vision):
- Framework Version (optional):
### Screenshots
If applicable, add screenshots to help explain your problem.
### Additional context
Add any other context about the problem here. | open | 2024-01-26T00:40:24Z | 2024-01-26T00:42:44Z | https://github.com/OthersideAI/self-operating-computer/issues/147 | [
"bug"
] | Praharsh1109 | 1 |
microsoft/qlib | deep-learning | 1659 | Dynamic stock pool in rolling prediction | I would like to implement a feature: a dynamic stock pool during rolling prediction.
For example, at each roll, the stock pool for that roll would be determined dynamically as the 300 stocks with the largest market capitalization on the rolling date.
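A rough sketch of the idea in Python; `rolling_windows`, `get_top_market_cap`, and `train_and_predict` are hypothetical placeholders here, not existing qlib APIs:
```python
for roll_start, roll_end in rolling_windows:
    # Hypothetical helper: the top-300 universe by market cap on the roll date.
    pool = get_top_market_cap(date=roll_start, n=300)
    # Hypothetical helper: train and predict on this roll using only that pool.
    train_and_predict(instruments=pool, start=roll_start, end=roll_end)
```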
| open | 2023-09-29T11:46:06Z | 2023-09-29T11:46:06Z | https://github.com/microsoft/qlib/issues/1659 | [
"enhancement"
] | quant2008 | 0 |
matplotlib/matplotlib | data-visualization | 29,746 | [Doc]: Add uv and pixi install instructions | ### Documentation Link
https://matplotlib.org/devdocs/#install
### Problem
Since uv and pixi are shaking up the package management landscape, they should be included in our install instructions.
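For reference, the commands would presumably look like this (an assumption based on the current uv and pixi CLIs):
```
uv pip install matplotlib   # uv's pip-compatible interface
pixi add matplotlib         # adds matplotlib to a pixi project (conda-forge)
```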
### Suggested improvement
_No response_ | closed | 2025-03-12T17:01:12Z | 2025-03-18T21:34:49Z | https://github.com/matplotlib/matplotlib/issues/29746 | [
"Documentation"
] | timhoffm | 1 |
plotly/dash | data-science | 2,965 | run black and other linting tools on generated code | In a comment on #2276, @alexcjohnson suggested:
> I wonder, now that we're Py3-only, maybe we should call black on the generated files as part of the generation process? We already do that in the core component packages as a separate step.
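A minimal sketch of what that could look like in the generator; this assumes `black` is on PATH, and `output_dir` is a placeholder for the directory of generated files:
```python
import subprocess

# Format everything the generator just wrote; fail loudly if black errors out.
subprocess.run(["black", str(output_dir)], check=True)
```
 | open | 2024-08-26T14:37:47Z | 2024-08-26T14:38:21Z | https://github.com/plotly/dash/issues/2965 | [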
"infrastructure",
"feature",
"P3"
] | gvwilson | 0 |
albumentations-team/albumentations | machine-learning | 1510 | Multiple instances of keypoint labels per image | ## 🐛 Bug
I have an image that contains multiple faces and multiple sets of landmarks. The number of faces and landmark instances is variable. The problem is that `alb.KeypointParams` allows passing only a single instance of keypoints.
## To Reproduce
```python
image = cv2.cvtColor(
cv2.imread("couple.jpg"),
cv2.COLOR_BGR2RGB,
)
# Two instances of bounding boxes for faces
boxes = (
(332, 128, 542, 424),
(542, 232, 726, 498),
)
# Two instances of keypoint sets (facial landmarks)
keypoints = (
[
[410.562, 223.625],
[482.817, 268.089],
[436.5, 286.616],
[364.246, 301.438],
[443.911, 344.049],
],
[
[590.205, 329.531],
[676.795, 337.857],
[633.5, 381.152],
[580.214, 417.786],
[668.469, 429.442],
],
)
transform = alb.Compose(
bbox_params=alb.BboxParams(
format="pascal_voc",
label_fields=["category_ids"],
),
keypoint_params=alb.KeypointParams(
format="xy",
),
p=1,
transforms=[
alb.Resize(height=1024, width=1024, p=1),
],
)
sample = transform(
image=image,
bboxes=boxes,
category_ids=np.ones(len(boxes)),
keypoints=keypoints,
)
```
This script yields the following error:
```
Traceback (most recent call last):
File "example.py", line 49, in <module>
sample = transform(
File "/python3.10/site-packages/albumentations/core/composition.py", line 207, in __call__
p.preprocess(data)
File "/python3.10/site-packages/albumentations/core/utils.py", line 83, in preprocess
data[data_name] = self.check_and_convert(data[data_name], rows, cols, direction="to")
File "/python3.10/site-packages/albumentations/core/utils.py", line 91, in check_and_convert
return self.convert_to_albumentations(data, rows, cols)
File "/python3.10/site-packages/albumentations/core/keypoints_utils.py", line 140, in convert_to_albumentations
return convert_keypoints_to_albumentations(
File "/python3.10/site-packages/albumentations/core/keypoints_utils.py", line 269, in convert_keypoints_to_albumentations
return [
File "/python3.10/site-packages/albumentations/core/keypoints_utils.py", line 270, in <listcomp>
convert_keypoint_to_albumentations(kp, source_format, rows, cols, check_validity, angle_in_degrees)
File "/python3.10/site-packages/albumentations/core/keypoints_utils.py", line 220, in convert_keypoint_to_albumentations
check_keypoint(keypoint, rows, cols)
File "/python3.10/site-packages/albumentations/core/keypoints_utils.py", line 153, in check_keypoint
if not 0 <= value < size:
TypeError: '<=' not supported between instances of 'int' and 'list'
```
## Expected behavior
This should work, it looks like a "natural" thing to have: to treat keypoints the same way as bounding boxes. Currently I managed to make it work with the following snippet:
```python
sample = transform(
image=image,
bboxes=boxes,
category_ids=np.ones(len(boxes)),
keypoints=np.asarray(keypoints).reshape(-1, 2), # Merge keypoints
)
keypoints = np.asarray(sample["keypoints"]).reshape(-1, 5, 2) # Transform/Reshape them back
```
However, this feels like a dirty workaround. Perhaps there should be an easier way to achieve the same result, without bugs.

## Environment
- Albumentations version (e.g., 0.1.8): `1.3.1`
- Python version (e.g., 3.7): `3.10`
- OS (e.g., Linux): any
- How you installed albumentations (`conda`, `pip`, source): pip
- Any other relevant information:
| open | 2024-01-28T07:54:42Z | 2024-01-28T07:54:42Z | https://github.com/albumentations-team/albumentations/issues/1510 | [] | kqf | 0 |
huggingface/diffusers | deep-learning | 10,893 | [Feature request] CogvideoX Controlnet integration for 5B / 2B | **Is your feature request related to a problem? Please describe.**
Came across and would be useful addition
https://github.com/TheDenk/cogvideox-controlnet
**Describe the solution you'd like.**
If possible add the controlnet support for CogVideoX. The existing code is based on diffusers only
**Describe alternatives you've considered.**
N.A.
**Additional context.**
N.A.
| open | 2025-02-24T18:15:02Z | 2025-02-24T18:15:18Z | https://github.com/huggingface/diffusers/issues/10893 | [] | nitinmukesh | 0 |
matterport/Mask_RCNN | tensorflow | 2,332 | Error "TypeError: 'NoneType' object is not iterable" triggered by modellib.MaskRCNN in demo.ipynb | Hi,
I have an error when I try to run the demo.ipynb file. I guess it's due to modellib.MasrRCNN function (see error below).
My first guess is that it's due to tensorflow version that i'm using which is 2.2.0. But when I changed that to an older version 1.3.0, there's an incompatibility between keras and tensorflow as keras requires at least tensorflow version 2.2.
Can anyone help please?
Thanks,
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-de1010a949a4> in <module>
1 # Create model object in inference mode.
----> 2 model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
3 # Load weights trained on MS-COCO
4 model.load_weights(COCO_MODEL_PATH, by_name=True)
~\Documents\1_USMBA\sw\Mask_RCNN\mrcnn\model.py in __init__(self, mode, config, model_dir)
1835 self.model_dir = model_dir
1836 self.set_log_dir()
-> 1837 self.keras_model = self.build(mode=mode, config=config)
1838
1839 def build(self, mode, config):
~\Documents\1_USMBA\sw\Mask_RCNN\mrcnn\model.py in build(self, mode, config)
1899 else:
1900 _, C2, C3, C4, C5 = resnet_graph(input_image, config.BACKBONE,
-> 1901 stage5=True, train_bn=config.TRAIN_BN)
1902 # Top-down Layers
1903 # TODO: add assert to varify feature map sizes match what's in config
~\Documents\1_USMBA\sw\Mask_RCNN\mrcnn\model.py in resnet_graph(input_image, architecture, stage5, train_bn)
178 # Stage 1
179 x = KL.ZeroPadding2D((3, 3))(input_image)
--> 180 x = KL.Conv2D(64, (7, 7), strides=(2, 2), name='conv1', use_bias=True)(x)
181 x = BatchNorm(name='bn_conv1')(x, training=train_bn)
182 x = KL.Activation('relu')(x)
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs)
895 # Build layer if applicable (if the `build` method has been
896 # overridden).
--> 897 self._maybe_build(inputs)
898 cast_inputs = self._maybe_cast_inputs(inputs)
899
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in _maybe_build(self, inputs)
2414 # operations.
2415 with tf_utils.maybe_init_scope(self):
-> 2416 self.build(input_shapes) # pylint:disable=not-callable
2417 # We must set also ensure that the layer is marked as built, and the build
2418 # shape is stored since user defined build functions may not be calling
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\layers\convolutional.py in build(self, input_shape)
161 constraint=self.kernel_constraint,
162 trainable=True,
--> 163 dtype=self.dtype)
164 if self.use_bias:
165 self.bias = self.add_weight(
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in add_weight(self, name, shape, dtype, initializer, regularizer, trainable, constraint, partitioner, use_resource, synchronization, aggregation, **kwargs)
575 synchronization=synchronization,
576 aggregation=aggregation,
--> 577 caching_device=caching_device)
578 if regularizer is not None:
579 # TODO(fchollet): in the future, this should be handled at the
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\training\tracking\base.py in _add_variable_with_custom_getter(self, name, shape, dtype, initializer, getter, overwrite, **kwargs_for_getter)
741 dtype=dtype,
742 initializer=initializer,
--> 743 **kwargs_for_getter)
744
745 # If we set an initializer and the variable processed it, tracking will not
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\engine\base_layer_utils.py in make_variable(name, shape, dtype, initializer, trainable, caching_device, validate_shape, constraint, use_resource, collections, synchronization, aggregation, partitioner)
139 synchronization=synchronization,
140 aggregation=aggregation,
--> 141 shape=variable_shape if variable_shape else None)
142
143
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\variables.py in __call__(cls, *args, **kwargs)
257 def __call__(cls, *args, **kwargs):
258 if cls is VariableV1:
--> 259 return cls._variable_v1_call(*args, **kwargs)
260 elif cls is Variable:
261 return cls._variable_v2_call(*args, **kwargs)
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\variables.py in _variable_v1_call(cls, initial_value, trainable, collections, validate_shape, caching_device, name, variable_def, dtype, expected_shape, import_scope, constraint, use_resource, synchronization, aggregation, shape)
218 synchronization=synchronization,
219 aggregation=aggregation,
--> 220 shape=shape)
221
222 def _variable_v2_call(cls,
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\variables.py in <lambda>(**kwargs)
196 shape=None):
197 """Call on Variable class. Useful to force the signature."""
--> 198 previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
199 for _, getter in ops.get_default_graph()._variable_creator_stack: # pylint: disable=protected-access
200 previous_getter = _make_getter(getter, previous_getter)
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\variable_scope.py in default_variable_creator(next_creator, **kwargs)
2596 synchronization=synchronization,
2597 aggregation=aggregation,
-> 2598 shape=shape)
2599 else:
2600 return variables.RefVariable(
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\variables.py in __call__(cls, *args, **kwargs)
261 return cls._variable_v2_call(*args, **kwargs)
262 else:
--> 263 return super(VariableMetaclass, cls).__call__(*args, **kwargs)
264
265
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint, distribute_strategy, synchronization, aggregation, shape)
1432 aggregation=aggregation,
1433 shape=shape,
-> 1434 distribute_strategy=distribute_strategy)
1435
1436 def _init_from_args(self,
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py in _init_from_args(self, initial_value, trainable, collections, caching_device, name, dtype, constraint, synchronization, aggregation, distribute_strategy, shape)
1565 with ops.name_scope("Initializer"), device_context_manager(None):
1566 initial_value = ops.convert_to_tensor(
-> 1567 initial_value() if init_from_fn else initial_value,
1568 name="initial_value", dtype=dtype)
1569 if shape is not None:
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\keras\engine\base_layer_utils.py in <lambda>()
119 (type(init_ops.Initializer), type(init_ops_v2.Initializer))):
120 initializer = initializer()
--> 121 init_val = lambda: initializer(shape, dtype=dtype)
122 variable_dtype = dtype.base_dtype
123 if use_resource is None:
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\init_ops_v2.py in __call__(self, shape, dtype)
556 else:
557 limit = math.sqrt(3.0 * scale)
--> 558 return self._random_generator.random_uniform(shape, -limit, limit, dtype)
559
560 def get_config(self):
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\init_ops_v2.py in random_uniform(self, shape, minval, maxval, dtype)
1066 op = random_ops.random_uniform
1067 return op(
-> 1068 shape=shape, minval=minval, maxval=maxval, dtype=dtype, seed=self.seed)
1069
1070 def truncated_normal(self, shape, mean, stddev, dtype):
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\ops\random_ops.py in random_uniform(shape, minval, maxval, dtype, seed, name)
280 maxval = 1
281 with ops.name_scope(name, "random_uniform", [shape, minval, maxval]) as name:
--> 282 shape = tensor_util.shape_tensor(shape)
283 # In case of [0,1) floating results, minval and maxval is unused. We do an
284 # `is` comparison here since this is cheaper than isinstance or __eq__.
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\tensor_util.py in shape_tensor(shape)
1013 # not convertible to Tensors because of mixed content.
1014 shape = tuple(map(tensor_shape.dimension_value, shape))
-> 1015 return ops.convert_to_tensor(shape, dtype=dtype, name="shape")
1016
1017
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1339
1340 if ret is None:
-> 1341 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1342
1343 if ret is NotImplemented:
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
319 as_ref=False):
320 _ = as_ref
--> 321 return constant(v, dtype=dtype, name=name)
322
323
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\constant_op.py in constant(value, dtype, shape, name)
260 """
261 return _constant_impl(value, dtype, shape, name, verify_shape=False,
--> 262 allow_broadcast=True)
263
264
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
268 ctx = context.context()
269 if ctx.executing_eagerly():
--> 270 t = convert_to_eager_tensor(value, ctx, dtype)
271 if shape is None:
272 return t
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\framework\constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
93 except AttributeError:
94 dtype = dtypes.as_dtype(dtype).as_datatype_enum
---> 95 ctx.ensure_initialized()
96 return ops.EagerTensor(value, ctx.device_name, dtype)
97
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\eager\context.py in ensure_initialized(self)
500 opts = pywrap_tfe.TFE_NewContextOptions()
501 try:
--> 502 config_str = self.config.SerializeToString()
503 pywrap_tfe.TFE_ContextOptionsSetConfig(opts, config_str)
504 if self._device_policy is not None:
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\eager\context.py in config(self)
878 """Return the ConfigProto with all runtime deltas applied."""
879 # Ensure physical devices have been discovered and config has been imported
--> 880 self._initialize_physical_devices()
881
882 config = config_pb2.ConfigProto()
c:\users\youssef\anaconda3\envs\maskrcnn\lib\site-packages\tensorflow\python\eager\context.py in _initialize_physical_devices(self)
1167 self._physical_devices = [
1168 PhysicalDevice(name=d.decode(),
-> 1169 device_type=d.decode().split(":")[1]) for d in devs]
1170 # Construct the visible device list from all physical devices but ignore
1171 # XLA devices
TypeError: 'NoneType' object is not iterable
```
| open | 2020-08-22T08:46:47Z | 2020-08-22T08:46:47Z | https://github.com/matterport/Mask_RCNN/issues/2332 | [] | YoussefAlj | 0 |
yihong0618/running_page | data-visualization | 129 | Garmin login is broken, I will try to fix it as soon as possible (Garmin changed its login method; I will try to resolve it as soon as possible!) | closed | 2021-05-18T00:54:48Z | 2021-05-21T04:36:30Z | https://github.com/yihong0618/running_page/issues/129 | [] | yihong0618 | 9 |
|
iterative/dvc | data-science | 10,691 | speed up checksum checks for large files | Hey DVC devs!
First off, a huge thank you for creating this brilliant project. It has been a game-changer for our data science projects.
In our work with a lot of large input files (2-50+ GB), we've noticed that the checksum checks for these files are incredibly slow. Is there any plan or interest in speeding up the checksum checks? Are there any options to use integrated file-system checksums or faster checksum implementations?
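For context, here is a rough micro-benchmark sketch comparing MD5 with a faster non-cryptographic hash; it assumes the third-party `xxhash` package (`pip install xxhash`) and uses a placeholder file name:
```python
import hashlib
import time

import xxhash  # assumption: third-party package, pip install xxhash

def hash_file(path, hasher, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks so memory use stays flat.
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            hasher.update(chunk)
    return hasher.hexdigest()

for name, factory in [("md5", hashlib.md5), ("xxh64", xxhash.xxh64)]:
    start = time.perf_counter()
    digest = hash_file("big_input.bin", factory())  # placeholder path
    print(f"{name}: {time.perf_counter() - start:.2f}s {digest[:12]}...")
```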
Thanks a ton!
Tom | closed | 2025-02-20T05:13:41Z | 2025-03-14T04:04:50Z | https://github.com/iterative/dvc/issues/10691 | [] | tschwarzl | 9 |
man-group/arctic | pandas | 501 | False positive integrity check failures when the balancer is running and moving chunks | #### Arctic Version
1.58
#### Arctic Store
VersionStore, NdarrayStore
#### Description of problem and/or code sample that reproduces the issue
We hit a DataIntegrityException where the number of updated segments (with the new parent in their set) is double the number of segments that were meant to change.
Our suspicion is that this happens when the balancer runs at the same time, so segments/documents may briefly exist on more than one shard (replica set) while chunks are being moved.
This is even more likely because the spec we use for the update_many query only uses _id:
[relevant code here](https://github.com/manahl/arctic/blob/v1.58.0/arctic/store/_ndarray_store.py#L368)
This doesn't use the sharding key (symbol), so in effect it is broadcast to all replica set servers.
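For illustration, a spec that also carries the shard key would look roughly like the following (a hypothetical pymongo sketch, not Arctic's actual code; `segment_ids` and `parent_id` are placeholders), and should keep the update routed to a single shard rather than broadcast, avoiding the failure below:
```python
# Hypothetical shard-key-aware variant of the update_many linked above.
result = collection.update_many(
    {"symbol": symbol, "_id": {"$in": segment_ids}},  # shard key narrows routing
    {"$addToSet": {"parent": parent_id}},
)
assert result.matched_count == len(segment_ids)
```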
```
INFO - return arctic.decorators.mongo_retry(f)(*args, **kwargs)
[2018-01-31 03:34:18,158] INFO - File "build/bdist.linux-x86_64/egg/arctic/decorators.py", line 50, in f_retry
[2018-01-31 03:34:18,158] INFO - return f(*args, **kwargs)
[2018-01-31 03:34:18,158] INFO - File "build/bdist.linux-x86_64/egg/xxxxxxxxxxxxxxx/version_store.py", line 224, in write
[2018-01-31 03:34:18,159] INFO - prune_previous_version=prune_previous_version, **kwargs)
[2018-01-31 03:34:18,159] INFO - File "build/bdist.linux-x86_64/egg/arctic/decorators.py", line 50, in f_retry
[2018-01-31 03:34:18,159] INFO - return f(*args, **kwargs)
[2018-01-31 03:34:18,159] INFO - File "build/bdist.linux-x86_64/egg/arctic/store/version_store.py", line 574, in write
[2018-01-31 03:34:18,159] INFO - handler.write(self._arctic_lib, version, symbol, data, previous_version, **kwargs)
[2018-01-31 03:34:18,159] INFO - File "build/bdist.linux-x86_64/egg/xxxxxxxxxxxxxxxx/_ts_ndarray_store.py", line 130, in write
[2018-01-31 03:34:18,159] INFO - super(TimeSeriesNdarrayStore, self).write(mongoose_lib, version, symbol, item, previous_version, **kwargs)
[2018-01-31 03:34:18,159] INFO - File "build/bdist.linux-x86_64/egg/arctic/store/_ndarray_store.py", line 414, in write
[2018-01-31 03:34:18,160] INFO - self._do_append(collection, version, symbol, item[previous_version['up_to']:], previous_version, dirty_append=True)
[2018-01-31 03:34:18,160] INFO - File "build/bdist.linux-x86_64/egg/arctic/store/_ndarray_store.py", line 315, in _do_append
[2018-01-31 03:34:18,160] INFO - self._concat_and_rewrite(collection, version, symbol, item, previous_version)
[2018-01-31 03:34:18,160] INFO - File "build/bdist.linux-x86_64/egg/arctic/store/_ndarray_store.py", line 372, in _concat_and_rewrite
[2018-01-31 03:34:18,160] INFO - len(unchanged_segment_ids)
[2018-01-31 03:34:18,160] INFO - DataIntegrityException: Symbol: GETF_median_volume_20d:97 update_many updated 2 segments instead of 1
``` | closed | 2018-02-06T17:03:59Z | 2018-02-06T22:41:59Z | https://github.com/man-group/arctic/issues/501 | [] | dimosped | 0 |
mwaskom/seaborn | data-visualization | 3,728 | Incorrect plotting of exactly overlapping scatter with `hue` and `hue_order` | While working with `sns.scatterplot` for representing locations on a grid, I discovered an issue where using `hue` and `hue_order` produces an incorrect plot: markers that should be perfectly overlapping (they have identical (`x`, `y`) coordinates) are drawn at a small offset, such that the edge of one can be seen intersecting the other. Here's a minimal example that reproduces the issue with `matplotlib 3.9.1` and `seaborn 0.13.2`:
```python
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
df = pd.DataFrame.from_dict({
'x': [6.3, 6.3, 6.3, 6.3, 6.633333, 6.633333, 6.633333, 6.633333, 33.48, 33.48, 33.48, 33.48, 33.813333, 33.813333, 33.813333, 33.813333],
'y': [-12.42, -12.42, -4.0, -4.0, -12.42, -12.42, -4.0, -4.0, -12.42, -12.42, -4.0, -4.0, -12.42, -12.42, -4.0, -4.0],
'locid': ['loc1', 'loc1', 'loc1', 'loc1', 'loc2', 'loc2', 'loc2', 'loc2', 'loc1', 'loc1', 'loc1', 'loc1', 'loc2', 'loc2', 'loc2', 'loc2']
})
sns.scatterplot(
data=df,
x='x',
y='y',
marker="o",
hue='locid',
hue_order=['loc1'],
)
print('Pandas version: ', pd.__version__) # 2.2.2
print('Matplotlib version: ', matplotlib.__version__) # 3.9.1
print('Seaborn version: ', sns.__version__) # 0.13.2
```
That code produces the following plot:

where at each corner, the edge of the second marker is clearly seen to intersect the face of the first.
From my brief dive into this problem:
1. As in the example, it doesn't take a tall stack of overlapping markers: there are only two points with the exact (6.3, -12.42) coordinates, and the problem is already there.
2. The issue is seaborn-specific. Using matplotlib's `plt.scatter` does yield a correct plot.
3. Both `hue` and `hue_order` need to be used in order for the issue to appear. Slicing the data with `df[df.locid == 'loc1']` makes a correct plot.
4. The problem persists even with `marker='.' `, `marker='s'`, `marker='v'` and `marker='d'`, but not with `marker='x'`. | open | 2024-07-12T14:04:03Z | 2024-07-15T15:23:55Z | https://github.com/mwaskom/seaborn/issues/3728 | [] | eloyvallinaes | 3 |
littlecodersh/ItChat | api | 55 | How do I proactively send a message to a specific contact? | I've read the docs; they basically only cover passively receiving messages and replying. How do I proactively send a message to someone? How do I supply the username value in the send call?
I tried itchat.get_friends(), but I don't know how to handle the return value.
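For reference, the pattern I was expecting, as a minimal sketch (assuming `get_friends()` returns dict-like entries with `UserName`/`NickName` fields and that `send()` accepts `toUserName`):
```python
import itchat

itchat.auto_login()
friends = itchat.get_friends(update=True)
# Each entry is a dict; 'UserName' is the opaque id that send() expects.
target = next(f for f in friends if f.get("NickName") == "some nickname")
itchat.send("hello", toUserName=target["UserName"])
```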
| closed | 2016-08-01T09:34:14Z | 2017-03-24T01:37:05Z | https://github.com/littlecodersh/ItChat/issues/55 | [
"question"
] | KennethYeah | 13 |
wandb/wandb | data-science | 9,238 | [Bug]: subprocess.Popen gets stuck when running sweep | ### Describe the bug
wandb Version: 0.19.2
Python 3.10.12
Ubuntu 22.04
Inside my python script I spawn a subprocess like:
```
command = ["bash", "-c", "ffmpeg -framerate 24 -i frame_%d.png output.mp4"]
subprocess.Popen(command)
```
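For reference, a variant that drains the child's output explicitly; a full pipe buffer is a classic cause of a "stuck" `Popen`, though whether that is what happens under the sweep agent is only my guess:
```python
import subprocess

command = ["bash", "-c", "ffmpeg -framerate 24 -i frame_%d.png output.mp4"]
# communicate() reads to EOF, so the child can never block on a full pipe,
# and the full (untruncated) output is captured in one string.
proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
out, _ = proc.communicate()
print(out)
```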
And it works perfectly fine if I run it directly. But if I create a sweep and start an agent in the CLI it gets stuck indefinitely. Also the output of this ffmpeg subprocess is always truncated no matter how I try to get it - print in console or redirect to file. I'm not a specialist in multiprocessing but apparently wandb does something that leads to the subprocess failing? | closed | 2025-01-10T02:51:42Z | 2025-01-13T16:15:09Z | https://github.com/wandb/wandb/issues/9238 | [
"ty:bug",
"a:sdk"
] | simplerick | 1 |
encode/httpx | asyncio | 2,924 | requests and httpx return inconsistent results for the same request | ```python
import httpx

he = {
"cookie": "csrf_session_id=31bba393b7ed2c621edac0317048ba0a;tt_scid=VauYpHMBxVl.JRELyMyZUOvGbNLE0LM2FWR3I.-7y9ztQDU8xszugxiCKVYoVEWz84d0;ttcid=f38b7c0cc7634a3fb3e8fd56f40f60d542;msToken=5lVMnWk_WrZpmZi7yYDiwJQ0JUzVKb5U9VYKrWS47FHKMNVj04JvrtFjMLhKjCHRzx_EUIp8D-afCEqOk_kSRdGMQfkeYfjMZSUPaLaR;passport_csrf_token=49a20b5140e0a8d953007c360a3d48bd;passport_csrf_token_default=49a20b5140e0a8d953007c360a3d48bd;ttwid=1%7CtsDAT1Sn8VEDfr5JM0SuBwNAUl5KOKZvICLEf_13E28%7C1698977950%7Cb3159479b03f4c3a297cb9f6df8dd4b1c1dc23268a29b0a2a1c3724722a7efb3;__ac_nonce=06544589e0034d97e1fa;",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36",
"referer": "https://live.douyin.com/",
}
res = httpx.get(
'https://live.douyin.com/webcast/im/fetch/?resp_content_type=protobuf&did_rule=3&device_id=&app_name=douyin_web&endpoint=live_pc&support_wrds=1&user_unique_id=7297054683247429129&identity=audience&need_persist_msg_count=15&room_id=7297021771922918171&version_code=180800&last_rtt=-1&live_id=1&aid=6383&fetch_rule=1&cursor=&internal_ext=&device_platform=web&cookie_enabled=true&screen_width=1920&screen_height=1080&browser_language=zh-CN&browser_platform=Win32&browser_name=Mozilla&browser_version=5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/117.0.0.0%20Safari/537.36&browser_online=true&tz_name=Asia/Shanghai&msToken=A9V7O6PityCrBntXftiZIDr7HXeYcho1z84ZmsyM1w9wbUIFrtfVrkcafKuV6dtVhFmhzSQdcnYI_bqeCF2ic9dU56raNUGiuQOFZhuJWLEMiZ8ZIA==&X-Bogus=DFSzswVYdTUANykmtFdl0M9WX7jQ',
headers=he,)
print(res.text)
```
With httpx the response body is empty, while requests returns protobuf data.
| closed | 2023-11-03T07:29:51Z | 2023-11-03T12:39:45Z | https://github.com/encode/httpx/issues/2924 | [] | General-Denom | 0 |
learning-at-home/hivemind | asyncio | 198 | Support gradient accumulation in ExpertBackend | Larger Transformer models are trained with larger batches, so it's probably beneficial to accumulate gradients from several backward requests before making a step. This can be implemented in `ExpertBackend.apply_gradients()`, and the number of processed examples is already available.
The tricky part is to implement loss averaging correctly: since we might have batches of different sizes (at least at the server side), we might need to scale the gradients from each batch according to its relative size.
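For illustration, a minimal sketch of the size-weighted averaging described above; the names (`grad_buffers`, `batch_size`) are illustrative, not hivemind's actual internals:
```python
import torch

def accumulate(grad_buffers, parameters, batch_size, accumulated_examples):
    # Weight each batch's gradients by its example count so that the eventual
    # step equals a single pass over the concatenated batch.
    for buf, param in zip(grad_buffers, parameters):
        buf.add_(param.grad, alpha=float(batch_size))
    return accumulated_examples + batch_size

def apply_step(grad_buffers, parameters, optimizer, accumulated_examples):
    for buf, param in zip(grad_buffers, parameters):
        param.grad = buf / accumulated_examples  # average over all examples seen
        buf.zero_()
    optimizer.step()
```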
| open | 2021-03-29T07:16:20Z | 2021-03-29T07:16:20Z | https://github.com/learning-at-home/hivemind/issues/198 | [
"enhancement"
] | mryab | 0 |
gevent/gevent | asyncio | 1,799 | Unable to build gevent in aws codebuild agents. Help identify the poblem | * gevent version: gevent==21.1.2 installed via pip
* Python version: python:3.7-alpine docker image
* Operating System: python:3.7-alpine
### Description:
Unable to build gevent when running the build on an AWS CodeBuild agent. It builds fine locally, and it's unclear to me why. I have also opened a ticket with AWS support.
### Stack trace
```python-traceback
running build_ext
--
generating cffi module 'build/temp.linux-x86_64-3.7/gevent.libuv._corecffi.c'
creating build/temp.linux-x86_64-3.7
Running '(cd "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/deps/libev" && sh ./configure -C > configure-output.txt )' in /tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136
config.status: error: in `/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/deps/libev':
config.status: error: Something went wrong bootstrapping makefile fragments
for automatic dependency tracking. Try re-running configure with the
'--disable-dependency-tracking' option to at least be able to build
the package (albeit without support for automatic dependency tracking).
See `config.log' for more details
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 280, in <module>
main()
File "/usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 205, in build_wheel
metadata_directory)
File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 222, in build_wheel
wheel_directory, config_settings)
File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 207, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 259, in run_setup
self).run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 150, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 479, in <module>
run_setup(EXT_MODULES)
File "setup.py", line 463, in run_setup
"signal_os_incompat = gevent.monkey:_subscribe_signal_os",
File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/local/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/usr/local/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/local/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/cffi/setuptools_ext.py", line 143, in run
ext.sources[0] = make_mod(self.build_temp, pre_run)
File "/tmp/pip-build-env-_7wi_zvn/overlay/lib/python3.7/site-packages/cffi/setuptools_ext.py", line 128, in make_mod
pre_run(ext, ffi)
File "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/_setuputils.py", line 364, in pre_run
action()
File "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/_setuplibev.py", line 55, in configure_libev
system(libev_configure_command)
File "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/_setuputils.py", line 195, in system
if _system(cmd, cwd=cwd, env=env, **kwargs):
File "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/_setuputils.py", line 191, in _system
return check_call(cmd, cwd=cwd, env=env, **kwargs)
File "/usr/local/lib/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '(cd "/tmp/pip-install-fw15y3vy/gevent_27411c98f48148e38e31acc7df88a136/deps/libev" && sh ./configure -C > configure-output.txt )' returned non-zero exit status 1.
----------------------------------------
  ERROR: Failed building wheel for gevent
```
### What I've run:
```docker
FROM python:3.7-alpine
RUN mkdir /cb_f
WORKDIR /cb_f
ENV PYTHONPATH "${PYTHONPATH}:/cb_f"
RUN \
apk add --no-cache postgresql-libs bash supervisor && \
apk add --no-cache --virtual .build-deps gcc libc-dev g++ postgresql-dev python3-dev libffi-dev musl-dev make build-base git alpine-sdk automake autoconf
RUN apk add libtool
ADD requirements.txt /cb_f/
RUN pip install -r requirements.txt --no-cache-dir && \
apk --purge del .build-deps
ADD configs/supervisord.conf /etc/
ADD configs/supervisord_2.conf /etc/
ADD configs/supervisord_3.conf /etc/
COPY . /cb_f/
EXPOSE 5000
```
| closed | 2021-06-29T15:49:31Z | 2021-07-19T13:01:44Z | https://github.com/gevent/gevent/issues/1799 | [] | KursLabIgor | 14 |
matplotlib/matplotlib | data-science | 29,594 | [MNT]: Consolidate tick API | ### Summary
The Tick class/concept consists of a *ticklabel*, a *tickline* (the marker) and a *gridline*. In addition there's the *location* as a fundamental property.
We have a lot of methods to handle ticks. There are methods on Axis to configure all these properties, often in flavors of major/minor and global (i.e. a function that lets you select major/minor/both). These methods would typically be used via `ax.xaxis.get_majorticklabels()`. Additionally, a subset of these wrappers is exposed at the Axes level, there in x/y flavors.

Overall we have 16 such methods on Axes and 14 on Axis that directly work on Tick instances or parts of them.
### Proposed fix
We should discourage direct interaction with ticks or their components (ticklabel, tickline, gridline). As ticks are volatile configuring the instances explicitly may not persist if the plot is changed later.
Therefore, I would like to get rid of the high-level Axes methods that give access to these components: `get_xticklabels()`, `get_xmajorticklabels()`, `get_xminorticklabels()`, `get_xgridlines()`, `get_xticklines()` (and same for y).
People should use `tick_params()` instead where possible, e.g. `ax.tick_params(labelsize=10)` instead of `for label in ax.get_xticklabels(): label.set_fontsize(10)`. This is not only shorter but also configures the common/universal tick property instead of individual volatile instances.
Since `tick_params()` does not currently, and likely never will, provide full control over all aspects (e.g. [this example](https://matplotlib.org/stable/gallery/event_handling/pick_event_demo.html#simple-picking-lines-rectangles-and-text) makes tick labels pickable), users should use the underlying Axis functions if they really must access the individual tick (components), i.e. use `ax.xaxis.get_ticklabels()` instead of `ax.get_xticklabels()`.
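For illustration, the two levels side by side; the pickable-label case is exactly the kind of thing `tick_params()` cannot express:
```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()

# Preferred: configure the universal tick properties, not volatile instances.
ax.tick_params(axis="x", which="major", labelsize=10, labelrotation=45)

# Instance-level control that tick_params() cannot express: go through the Axis.
for label in ax.xaxis.get_ticklabels():
    label.set_picker(True)  # e.g. make the current tick labels pickable
```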
While removing this bunch of wrapper functions on Axes is a massive API change, I think we eventually want to go there (slowly, through a pending deprecation), because these functions nowadays encourage bad practice. The weaker alternative would be to only discourage their use.
Concretely:
- Pending-deprecate the `Axes` methods: `get_xticklabels()`, `get_xmajorticklabels()`, `get_xminorticklabels()`, `get_xgridlines()`, `get_xticklines()` (and same for y).
- Recommend to use `Axes.tick_params()` instead where possible.
- Recommend to use the respective `Axis` methods if more control is needed, e.g. `ax.xaxis.get_ticklabels()`
- On all methods that return Tick components, warn that this only affects the current instances and future changes to the plot may create new ticks.
---
Usage statistics from GitHub query
Query string: `/\.get_gridlines\(/ NOT path:_base.py NOT path:axes.py NOT path:axis.py language:Python NOT is:fork NOT repo:matplotlib/matplotlib`
 | open | 2025-02-08T09:38:33Z | 2025-02-26T07:39:09Z | https://github.com/matplotlib/matplotlib/issues/29594 | [
"Maintenance"
] | timhoffm | 0 |
StratoDem/sd-material-ui | dash | 600 | material-ui in dash | <!--- Provide a general summary of your changes in the Title above -->
<!--- MANDATORY -->
<!--- Always fill out a description, even if you are reporting a simple issue. If it is something truly trivial or simple, it is okay to keep it short and sweet. -->
## Description
<!--- A clear and concise description of what the issue is about. Include things like expected/desired behavior, actual behavior, motivation or rational for a new feature, what files it concerns, etc. -->
I would like to know if this implementation is necessary, because can't I use the [material-ui](https://github.com/mui-org/material-ui) lib directly in Dash ([Dash supports React components](https://dash.plotly.com/plugins))?
quantmind/pulsar | asyncio | 241 | Start Pulsar WSGI Server from Docker | Hey, I am using Pulsar for JSON-RPC and everything works fine locally, but I can't send calls from outside the Docker container where the WSGI server is running. Do I need to expose other ports than the one I expose with `--bind`, or what could be the issue?
I was hoping I could simply expose a JSON-RPC API with pulsar, like I do it with flask and consume that with other pulsar or node apps.
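For context, roughly how I start it; this is a minimal sketch, and both the `bind` keyword and the `0.0.0.0:8060` value are my assumptions about what should make the server reachable from outside the container (inside Docker, a server bound to 127.0.0.1 is only reachable from within):
```python
from pulsar.apps import wsgi

def app(environ, start_response):
    # Placeholder WSGI callable standing in for the JSON-RPC site.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

if __name__ == "__main__":
    # Bind on all interfaces and publish the port, e.g. `docker run -p 8060:8060 ...`.
    wsgi.WSGIServer(callable=app, bind="0.0.0.0:8060").start()
```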
| closed | 2016-08-26T14:37:44Z | 2016-09-08T11:49:49Z | https://github.com/quantmind/pulsar/issues/241 | [] | MrLoh | 5 |
MorvanZhou/tutorials | numpy | 25 | initialize_all_variables is deprecated | for the code of tf14 and tf15
WARNING:tensorflow:From full_code.py:62 in <module>.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead. | closed | 2016-11-29T12:45:17Z | 2016-11-29T13:11:33Z | https://github.com/MorvanZhou/tutorials/issues/25 | [] | xuyuji9000 | 1 |
dpgaspar/Flask-AppBuilder | flask | 1,893 | Need to include inputs in existing templates from the ModelView class. |
### Environment
Flask-Appbuilder version: 3.4.4
### Describe the expected results
I wanted to modify the edit template such that I can include my own inputs. But the existing functions/methods don't allow that. So instead I did the below instead of the conventional edit_template = 'xx.html'.
```python
class SQLDepoModelView(ModelView):
@expose("/edit")
@has_access
def edit(self):
keywords = [keyword_obj.keyword for keyword_obj in db.session.query(sql_depo_keyword_helper).all()]
return self.render_template('edit_template.html', keywords=keywords)
```
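For comparison, a sketch of what I think the override needs to look like to keep the stock template working; this assumes FAB 3.x, where the internal `_edit()` helper builds the `widgets` dict the template expects, and treats the exact calls as my reading of `baseviews.py` rather than a verified fix:
```python
class SQLDepoModelView(ModelView):
    @expose("/edit/<pk>", methods=["GET", "POST"])
    @has_access
    def edit(self, pk):
        keywords = [keyword_obj.keyword for keyword_obj in db.session.query(sql_depo_keyword_helper).all()]
        self.extra_args = {"keywords": keywords}  # merged into the template context by render_template
        widgets = self._edit(pk)  # builds the edit form widgets the template expects
        if not widgets:
            return self.post_edit_redirect()  # the POST succeeded
        return self.render_template(
            self.edit_template,
            title=self.edit_title,
            widgets=widgets,
            related_views=self._related_views,
        )
```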
### Describe the actual results
I get a `jinja2.exceptions.UndefinedError: 'widgets' is undefined` instead.
```pytb
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
C:\ProgramData\Anaconda3\lib\site-packages\flask_sqlalchemy\__init__.py:873: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
2022-07-19 13:07:17,207:INFO:flask_appbuilder.base:Registering class IndexView on menu
2022-07-19 13:07:17,207:INFO:flask_appbuilder.baseviews:Registering route / ('GET',)
2022-07-19 13:07:17,223:INFO:flask_appbuilder.base:Registering class UtilView on menu
2022-07-19 13:07:17,223:INFO:flask_appbuilder.baseviews:Registering route /back ('GET',)
2022-07-19 13:07:17,238:INFO:flask_appbuilder.base:Registering class LocaleView on menu
2022-07-19 13:07:17,238:INFO:flask_appbuilder.baseviews:Registering route /lang/<string:locale> ('GET',)
2022-07-19 13:07:17,238:INFO:flask_appbuilder.base:Registering class SecurityApi on menu
2022-07-19 13:07:17,238:INFO:flask_appbuilder.api:Registering route /api/v1/security/login ['POST']
2022-07-19 13:07:17,238:INFO:flask_appbuilder.api:Registering route /api/v1/security/refresh ['POST']
2022-07-19 13:07:17,254:INFO:flask_appbuilder.base:Registering class ResetPasswordView on menu
2022-07-19 13:07:17,254:INFO:flask_appbuilder.baseviews:Registering route /resetpassword/form ['GET']
2022-07-19 13:07:17,254:INFO:flask_appbuilder.baseviews:Registering route /resetpassword/form ['POST']
2022-07-19 13:07:17,285:INFO:flask_appbuilder.base:Registering class ResetMyPasswordView on menu
2022-07-19 13:07:17,287:INFO:flask_appbuilder.baseviews:Registering route /resetmypassword/form ['GET']
2022-07-19 13:07:17,287:INFO:flask_appbuilder.baseviews:Registering route /resetmypassword/form ['POST']
2022-07-19 13:07:17,303:INFO:flask_appbuilder.base:Registering class UserInfoEditView on menu
2022-07-19 13:07:17,303:INFO:flask_appbuilder.baseviews:Registering route /userinfoeditview/form ['GET']
2022-07-19 13:07:17,303:INFO:flask_appbuilder.baseviews:Registering route /userinfoeditview/form ['POST']
2022-07-19 13:07:17,319:INFO:flask_appbuilder.base:Registering class AuthDBView on menu
2022-07-19 13:07:17,319:INFO:flask_appbuilder.baseviews:Registering route /login/ ['GET', 'POST']
2022-07-19 13:07:17,319:INFO:flask_appbuilder.baseviews:Registering route /logout/ ('GET',)
2022-07-19 13:07:17,335:INFO:flask_appbuilder.base:Registering class UserDBModelView on menu List Users
2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/action/<string:name>/<pk> ['GET', 'POST']
2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/action_post ['POST']
2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/add ['GET', 'POST']
2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/api ['GET']
2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/api/column/add/<col_name> ['GET']
2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/api/column/edit/<col_name> ['GET']
2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/api/create ['POST']
2022-07-19 13:07:17,335:INFO:flask_appbuilder.baseviews:Registering route /users/api/delete/<pk> ['DELETE']
2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/api/get/<pk> ['GET']
2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/api/read ['GET']
2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/api/readvalues ['GET']
2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/api/update/<pk> ['PUT']
2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/delete/<pk> ['GET', 'POST']
2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/download/<string:filename> ('GET',)
2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/edit/<pk> ['GET', 'POST']
2022-07-19 13:07:17,350:INFO:flask_appbuilder.baseviews:Registering route /users/list/ ('GET',)
2022-07-19 13:07:17,366:INFO:flask_appbuilder.baseviews:Registering route /users/show/<pk> ['GET']
2022-07-19 13:07:17,366:INFO:flask_appbuilder.baseviews:Registering route /users/userinfo/ ('GET',)
2022-07-19 13:07:17,414:INFO:flask_appbuilder.base:Registering class RoleModelView on menu List Roles
2022-07-19 13:07:17,414:INFO:flask_appbuilder.baseviews:Registering route /roles/action/<string:name>/<pk> ['GET', 'POST']
2022-07-19 13:07:17,466:INFO:flask_appbuilder.baseviews:Registering route /roles/action_post ['POST']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/add ['GET', 'POST']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api ['GET']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/column/add/<col_name> ['GET']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/column/edit/<col_name> ['GET']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/create ['POST']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/delete/<pk> ['DELETE']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/get/<pk> ['GET']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/read ['GET']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/readvalues ['GET']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/api/update/<pk> ['PUT']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/delete/<pk> ['GET', 'POST']
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/download/<string:filename> ('GET',)
2022-07-19 13:07:17,482:INFO:flask_appbuilder.baseviews:Registering route /roles/edit/<pk> ['GET', 'POST']
2022-07-19 13:07:17,498:INFO:flask_appbuilder.baseviews:Registering route /roles/list/ ('GET',)
2022-07-19 13:07:17,591:INFO:flask_appbuilder.baseviews:Registering route /roles/show/<pk> ['GET']
2022-07-19 13:07:17,668:INFO:flask_appbuilder.base:Registering class UserStatsChartView on menu User's Statistics
2022-07-19 13:07:17,668:INFO:flask_appbuilder.baseviews:Registering route /userstatschartview/chart/ ('GET',)
2022-07-19 13:07:17,684:INFO:flask_appbuilder.baseviews:Registering route /userstatschartview/chart/<group_by> ('GET',)
2022-07-19 13:07:17,731:INFO:flask_appbuilder.base:Registering class PermissionModelView on menu Base Permissions
2022-07-19 13:07:17,731:INFO:flask_appbuilder.baseviews:Registering route /permissions/action/<string:name>/<pk> ['GET', 'POST']
2022-07-19 13:07:17,746:INFO:flask_appbuilder.baseviews:Registering route /permissions/action_post ['POST']
2022-07-19 13:07:17,746:INFO:flask_appbuilder.baseviews:Registering route /permissions/add ['GET', 'POST']
2022-07-19 13:07:17,746:INFO:flask_appbuilder.baseviews:Registering route /permissions/api ['GET']
2022-07-19 13:07:17,746:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/column/add/<col_name> ['GET']
2022-07-19 13:07:17,762:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/column/edit/<col_name> ['GET']
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/create ['POST']
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/delete/<pk> ['DELETE']
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/get/<pk> ['GET']
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/read ['GET']
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/readvalues ['GET']
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/api/update/<pk> ['PUT']
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/delete/<pk> ['GET', 'POST']
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/download/<string:filename> ('GET',)
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/edit/<pk> ['GET', 'POST']
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/list/ ('GET',)
2022-07-19 13:07:17,825:INFO:flask_appbuilder.baseviews:Registering route /permissions/show/<pk> ['GET']
2022-07-19 13:07:17,871:INFO:flask_appbuilder.base:Registering class ViewMenuModelView on menu Views/Menus
2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/action/<string:name>/<pk> ['GET', 'POST']
2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/action_post ['POST']
2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/add ['GET', 'POST']
2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api ['GET']
2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/column/add/<col_name> ['GET']
2022-07-19 13:07:17,903:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/column/edit/<col_name> ['GET']
2022-07-19 13:07:17,918:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/create ['POST']
2022-07-19 13:07:17,983:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/delete/<pk> ['DELETE']
2022-07-19 13:07:17,983:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/get/<pk> ['GET']
2022-07-19 13:07:17,983:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/read ['GET']
2022-07-19 13:07:17,996:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/readvalues ['GET']
2022-07-19 13:07:17,996:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/api/update/<pk> ['PUT']
2022-07-19 13:07:17,996:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/delete/<pk> ['GET', 'POST']
2022-07-19 13:07:17,996:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/download/<string:filename> ('GET',)
2022-07-19 13:07:18,012:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/edit/<pk> ['GET', 'POST']
2022-07-19 13:07:18,090:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/list/ ('GET',)
2022-07-19 13:07:18,090:INFO:flask_appbuilder.baseviews:Registering route /viewmenus/show/<pk> ['GET']
2022-07-19 13:07:18,137:INFO:flask_appbuilder.base:Registering class PermissionViewModelView on menu Permission on Views/Menus
2022-07-19 13:07:18,137:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/action/<string:name>/<pk> ['GET', 'POST']
2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/action_post ['POST']
2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/add ['GET', 'POST']
2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api ['GET']
2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/column/add/<col_name> ['GET']
2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/column/edit/<col_name> ['GET']
2022-07-19 13:07:18,168:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/create ['POST']
2022-07-19 13:07:18,184:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/delete/<pk> ['DELETE']
2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/get/<pk> ['GET']
2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/read ['GET']
2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/readvalues ['GET']
2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/api/update/<pk> ['PUT']
2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/delete/<pk> ['GET', 'POST']
2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/download/<string:filename> ('GET',)
2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/edit/<pk> ['GET', 'POST']
2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/list/ ('GET',)
2022-07-19 13:07:18,200:INFO:flask_appbuilder.baseviews:Registering route /permissionviews/show/<pk> ['GET']
2022-07-19 13:07:18,262:INFO:flask_appbuilder.base:Registering class MenuApi on menu
2022-07-19 13:07:18,305:INFO:flask_appbuilder.api:Registering route /api/v1/menu/ ['GET']
2022-07-19 13:07:18,570:INFO:flask_appbuilder.base:Registering class SQLDepoModelView on menu All SQLs
2022-07-19 13:07:18,570:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/action/<string:name>/<pk> ['GET', 'POST']
2022-07-19 13:07:18,586:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/action_post ['POST']
2022-07-19 13:07:18,593:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/add ['GET', 'POST']
2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api ['GET']
2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/column/add/<col_name> ['GET']
2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/column/edit/<col_name> ['GET']
2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/create ['POST']
2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/delete/<pk> ['DELETE']
2022-07-19 13:07:18,631:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/get/<pk> ['GET']
2022-07-19 13:07:18,647:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/read ['GET']
2022-07-19 13:07:18,647:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/readvalues ['GET']
2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/api/update/<pk> ['PUT']
2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/delete/<pk> ['GET', 'POST']
2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/download/<string:filename> ('GET',)
2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/edit ('GET',)
2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/list/ ('GET',)
2022-07-19 13:07:18,662:INFO:flask_appbuilder.baseviews:Registering route /sqldepomodelview/show/<pk> ['GET']
2022-07-19 13:07:18,712:INFO:flask_appbuilder.base:Registering class KeywordView on menu Keyword
2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/action/<string:name>/<pk> ['GET', 'POST']
2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/action_post ['POST']
2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/add ['GET', 'POST']
2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api ['GET']
2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/column/add/<col_name> ['GET']
2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/column/edit/<col_name> ['GET']
2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/create ['POST']
2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/delete/<pk> ['DELETE']
2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/get/<pk> ['GET']
2022-07-19 13:07:18,724:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/read ['GET']
2022-07-19 13:07:18,740:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/readvalues ['GET']
2022-07-19 13:07:18,740:INFO:flask_appbuilder.baseviews:Registering route /keywordview/api/update/<pk> ['PUT']
2022-07-19 13:07:18,740:INFO:flask_appbuilder.baseviews:Registering route /keywordview/delete/<pk> ['GET', 'POST']
2022-07-19 13:07:18,758:INFO:flask_appbuilder.baseviews:Registering route /keywordview/download/<string:filename> ('GET',)
2022-07-19 13:07:18,849:INFO:flask_appbuilder.baseviews:Registering route /keywordview/edit/<pk> ['GET', 'POST']
2022-07-19 13:07:18,849:INFO:flask_appbuilder.baseviews:Registering route /keywordview/list/ ('GET',)
2022-07-19 13:07:18,849:INFO:flask_appbuilder.baseviews:Registering route /keywordview/show/<pk> ['GET']
2022-07-19 13:07:18,952:INFO:werkzeug: * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
2022-07-19 13:07:21,836:ERROR:app:Exception on /sqldepomodelview/edit [GET]
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\flask\app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "C:\ProgramData\Anaconda3\lib\site-packages\flask\app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\ProgramData\Anaconda3\lib\site-packages\flask\app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "C:\ProgramData\Anaconda3\lib\site-packages\flask\_compat.py", line 39, in reraise
raise value
File "C:\ProgramData\Anaconda3\lib\site-packages\flask\app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "C:\ProgramData\Anaconda3\lib\site-packages\flask\app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\security\decorators.py", line 148, in wraps
return f(self, *args, **kwargs)
File "C:\MY CRM\PycharmProjects\sql_depository_app\sql_depo_app\app\views.py", line 31, in edit
return self.render_template('edit_template.html', keywords=keywords)
File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\baseviews.py", line 288, in render_template
template, **dict(list(kwargs.items()) + list(self.extra_args.items()))
File "C:\ProgramData\Anaconda3\lib\site-packages\flask\templating.py", line 140, in render_template
ctx.app,
File "C:\ProgramData\Anaconda3\lib\site-packages\flask\templating.py", line 120, in _render
rv = template.render(context)
File "C:\ProgramData\Anaconda3\lib\site-packages\jinja2\environment.py", line 1291, in render
self.environment.handle_exception()
File "C:\ProgramData\Anaconda3\lib\site-packages\jinja2\environment.py", line 925, in handle_exception
raise rewrite_traceback_stack(source=source)
File "C:\MY CRM\PycharmProjects\sql_depository_app\sql_depo_app\app\templates\edit_template.html", line 1, in top-level template code
{% extends "appbuilder/general/model/edit.html" %}
File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\general\model\edit.html", line 2, in top-level template code
{% import 'appbuilder/general/lib.html' as lib %}
File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\base.html", line 1, in top-level template code
{% extends base_template %}
File "C:\MY CRM\PycharmProjects\sql_depository_app\sql_depo_app\app\templates\base_template.html", line 1, in top-level template code
{% extends 'appbuilder/baselayout.html' %}
File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\baselayout.html", line 2, in top-level template code
{% import 'appbuilder/baselib.html' as baselib %}
File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\init.html", line 37, in top-level template code
{% block body %}
File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\baselayout.html", line 19, in block 'body'
{% block content %}
File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\general\model\edit.html", line 23, in block 'content'
{% block edit_form %}
File "C:\MY CRM\PycharmProjects\sql_depository_app\sql_depo_app\app\templates\edit_template.html", line 4, in block 'edit_form'
{{ super() }}
File "C:\ProgramData\Anaconda3\lib\flask_appbuilder\templates\appbuilder\general\model\edit.html", line 25, in block 'edit_form'
{{ widgets.get('edit')(form_action=form_action)|safe }}
File "C:\ProgramData\Anaconda3\lib\site-packages\jinja2\environment.py", line 474, in getattr
return getattr(obj, attribute)
jinja2.exceptions.UndefinedError: 'widgets' is undefined
2022-07-19 13:07:21,853:INFO:werkzeug:127.0.0.1 - - [19/Jul/2022 13:07:21] "GET /sqldepomodelview/edit?pk=1 HTTP/1.1" 500 -
```
### Steps to reproduce
After setting up model.py, view.py, and any other preparations, I ran `flask run` in the command prompt.
| closed | 2022-07-19T05:15:55Z | 2022-07-19T09:53:05Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1893 | [] | XiangYang95 | 2 |
jazzband/django-oauth-toolkit | django | 764 | Please release new version - Cannot revoke all tokens: AccessToken matching query does not exist | I am getting a DoesNotExist exception trying to clean up when a user logs out.
django-oauth-toolkit==1.2.0
```
from itertools import chain
from oauth2_provider.models import get_access_token_model, get_application_model, get_refresh_token_model
access_tokens = get_access_token_model().objects.filter(user=user)
refresh_tokens = get_refresh_token_model().objects.filter(user=user)
for token in chain(access_tokens, refresh_tokens):
token.revoke()
```
It appears this is fixed in master: https://github.com/jazzband/django-oauth-toolkit/blob/master/oauth2_provider/models.py#L397
It looks like the fix is a year old: https://github.com/jazzband/django-oauth-toolkit/commit/5b51da74019046ef4c8c81c9975db029a2113d52#diff-ac3d3b1e30eb6e828386263c3a1256ca
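Until then, a hedged interim workaround is to tolerate the race explicitly, roughly mirroring what the fix on master does:
```python
from itertools import chain

from oauth2_provider.models import get_access_token_model, get_refresh_token_model

access_tokens = get_access_token_model().objects.filter(user=user)
refresh_tokens = get_refresh_token_model().objects.filter(user=user)
for token in chain(access_tokens, refresh_tokens):
    try:
        token.revoke()
    except get_access_token_model().DoesNotExist:
        # The refresh token's paired access token was already gone; skip it.
        pass
```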
Please release a new version. | closed | 2019-11-24T18:12:58Z | 2020-03-23T14:33:08Z | https://github.com/jazzband/django-oauth-toolkit/issues/764 | [] | thenewguy | 3 |
HumanSignal/labelImg | deep-learning | 747 | image size problem | When I use the PascalVOC format to save the XML result, the `<width>` and `<height>` elements are saved as 0 when the image size is 256*256. | closed | 2021-05-10T08:27:36Z | 2021-06-06T14:52:13Z | https://github.com/HumanSignal/labelImg/issues/747 | [] | Wangbenzhi | 1
ultralytics/ultralytics | deep-learning | 19,193 | I want to integrate yolov8's detection + classification into a network and turn it into a multi-task network. Is there any existing case for reference when I do this? | ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
I want to integrate yolov8's detection + classification into a network and turn it into a multi-task network. Is there any existing case for reference when I do this?
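For context, the shape of what I mean, as a generic PyTorch sketch with made-up module names (not Ultralytics code): a shared backbone feeding both a detection head and an image-level classification head.
```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, backbone: nn.Module, detect_head: nn.Module, num_classes: int, feat_dim: int = 512):
        super().__init__()
        self.backbone = backbone        # shared feature extractor
        self.detect_head = detect_head  # existing detection head, unchanged
        self.cls_head = nn.Sequential(  # extra classification branch on the shared features
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_dim, num_classes)
        )

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)        # (B, feat_dim, H, W)
        return self.detect_head(feats), self.cls_head(feats)
```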
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | open | 2025-02-12T00:47:48Z | 2025-02-12T00:48:16Z | https://github.com/ultralytics/ultralytics/issues/19193 | [
"enhancement",
"detect",
"classify"
] | hu874 | 1 |
Miserlou/Zappa | django | 1,526 | Update to support werkzeug >= 0.12 | > Could not find a version that matches werkzeug==0.12,>=0.14
> - Werkzeug [required: ==0.12, installed: 0.14.1]
This is colliding with everything that is using the new version.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.45.1
* Operating System and Python version: Ubuntu 16.04 TLS
* The output of `pip freeze`:
`absl-py==0.2.2
argcomplete==1.9.2
astor==0.6.2
base58==0.2.4
bleach==1.5.0
boto3==1.7.35
botocore==1.10.35
certifi==2018.4.16
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
docutils==0.14
durationpy==0.5
Flask==1.0.2
future==0.16.0
gast==0.2.0
grpcio==1.12.1
h5py==2.8.0
hjson==3.0.1
html5lib==0.9999999
idna==2.6
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
Keras==2.2.0
Keras-Applications==1.0.2
Keras-Preprocessing==1.0.1
lambda-packages==0.19.0
Markdown==2.6.11
MarkupSafe==1.0
numpy==1.14.4
pandas==0.23.0
placebo==0.8.1
protobuf==3.5.2.post1
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2018.4
PyYAML==3.12
requests==2.18.4
s3transfer==0.1.13
scipy==1.1.0
six==1.11.0
tensorboard==1.8.0
tensorflow==1.8.0
termcolor==1.1.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.3.0
Unidecode==1.0.22
urllib3==1.22
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
zappa==0.45.1
```
| closed | 2018-06-10T15:23:12Z | 2019-06-04T15:12:22Z | https://github.com/Miserlou/Zappa/issues/1526 | [] | aminhusni | 3 |
mwaskom/seaborn | matplotlib | 3,650 | Doubts about using two types of graphics to draw simultaneously | When I use two different functions to plot the same data, the different functions produce different graphs. May I ask why? Thank you in advance! I am using seaborn 0.13.
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import seaborn.objects as so
sns.set('paper','ticks',font_scale=1.8,font='Arial',palette=sns.color_palette('tab10'),rc=({'svg.fonttype':'none'}))
fig,ax=plt.subplots(figsize=(4,4))
sns.despine()
sns.pointplot(dfstand,x='A',y='B',ax=ax)
sns.scatterplot(dfu3,x='GC3',y='Nc',ax=ax)
```

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import seaborn.objects as so
sns.set('paper','ticks',font_scale=1.8,font='Arial',palette=sns.color_palette('tab10'),rc=({'svg.fonttype':'none'}))
fig,ax=plt.subplots(figsize=(4,4))
sns.despine()
sns.lineplot(dfstand,x='A',y='B',ax=ax)
sns.scatterplot(dfu3,x='GC3',y='Nc',ax=ax)
```

| closed | 2024-03-09T08:43:19Z | 2024-03-09T16:48:33Z | https://github.com/mwaskom/seaborn/issues/3650 | [] | z626093820 | 2 |
Johnserf-Seed/TikTokDownload | api | 406 | [BUG]TypeError: Profile.__init__() missing 1 required positional argument: 'headers' | **Describe the bug**
A clear and concise description of the bug.
Parsing a user's homepage from the GUI page fails; the backend reports: TypeError: Profile.__init__() missing 1 required positional argument: 'headers'
**Reproducing the bug**
Steps to reproduce the behavior:
macOS
| closed | 2023-04-21T02:17:44Z | 2023-08-07T14:16:13Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/406 | [
"ๆ
้(bug)",
"้ขๅคๆฑๅฉ(help wanted)",
"ๆ ๆ(invalid)"
] | pipishousi | 0 |
CatchTheTornado/text-extract-api | api | 41 | [feat] Test `pixtral` as an OCR strategy | https://mistral.ai/news/pixtral-12b/ | open | 2024-11-18T12:18:15Z | 2025-01-09T17:20:47Z | https://github.com/CatchTheTornado/text-extract-api/issues/41 | [
"feature"
] | pkarw | 0 |
huggingface/text-generation-inference | nlp | 3,105 | google/gemma-3-27b-it context lenght issue | i have deployed the google/gemma-3-27b-it model on 4 H100 GPUS, it only supports 23k context length, when i increased to support 128k context window as it supports, i endup with following errors
i even tried with 64k context window, it went into cuda out of memeory issues
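For what it's worth, a back-of-envelope sketch that reproduces the failed ~30.52 GiB allocation in the logs below; the per-GPU head count and the fp32 score buffer are my assumptions:
```python
# Hypothetical estimate of the S x S attention-score buffer warmup materializes per GPU.
heads_per_gpu = 32 // 4   # assuming 32 query heads, 4-way tensor parallel
seq_len = 32_000          # matches max_input_tokens / max_batch_prefill_tokens above
bytes_per_score = 4       # assuming scores materialized in fp32 by the non-flash SDPA path
buf = heads_per_gpu * seq_len * seq_len * bytes_per_score
print(f"{buf / 2**30:.2f} GiB")  # -> 30.52 GiB
```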
2025-03-13T08:36:37.262517Z INFO text_generation_launcher: Runtime environment:
Target: x86_64-unknown-linux-gnu
Cargo version: 1.85.0
Commit sha: 411a28288de9218e2684dccbace481a1abdb0cef
Docker label: sha-411a282
nvidia-smi:
Thu Mar 13 08:36:36 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.08 Driver Version: 550.127.08 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA H100 80GB HBM3 On | 00000000:45:00.0 Off | 0 |
| N/A 29C P0 70W / 700W | 1MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA H100 80GB HBM3 On | 00000000:4E:00.0 Off | 0 |
| N/A 29C P0 69W / 700W | 1MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA H100 80GB HBM3 On | 00000001:1B:00.0 Off | 0 |
| N/A 31C P0 71W / 700W | 1MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA H100 80GB HBM3 On | 00000001:24:00.0 Off | 0 |
| N/A 28C P0 73W / 700W | 1MiB / 81559MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
xpu-smi:
N/A
hpu-smi:
N/A
2025-03-13T08:36:37.262563Z INFO text_generation_launcher: Args {
model_id: "google/gemma-3-27b-it",
revision: None,
validation_workers: 2,
sharded: Some(
true,
),
num_shard: Some(
4,
),
quantize: None,
speculate: None,
dtype: None,
kv_cache_dtype: None,
trust_remote_code: false,
max_concurrent_requests: 128,
max_best_of: 2,
max_stop_sequences: 4,
max_top_n_tokens: 5,
max_input_tokens: Some(
32000,
),
max_input_length: None,
max_total_tokens: Some(
64000,
),
waiting_served_ratio: 0.3,
max_batch_prefill_tokens: Some(
32000,
),
max_batch_total_tokens: None,
max_waiting_tokens: 20,
max_batch_size: None,
cuda_graphs: None,
hostname: "gemma-3-27b-it-5d7964566c-xnkck",
port: 8000,
shard_uds_path: "/tmp/text-generation-server",
master_addr: "localhost",
master_port: 29500,
huggingface_hub_cache: Some(
"/huggingface/hub",
),
weights_cache_override: None,
disable_custom_kernels: false,
cuda_memory_fraction: 1.0,
rope_scaling: None,
rope_factor: None,
json_output: false,
otlp_endpoint: None,
otlp_service_name: "text-generation-inference.router",
cors_allow_origin: [],
api_key: None,
watermark_gamma: None,
watermark_delta: None,
ngrok: false,
ngrok_authtoken: None,
ngrok_edge: None,
tokenizer_config_path: None,
disable_grammar_support: false,
env: true,
max_client_batch_size: 1,
lora_adapters: None,
usage_stats: Off,
payload_limit: 2000000,
enable_prefill_logprobs: false,
}
2025-03-13T08:36:40.043396Z INFO text_generation_launcher: Using attention flashinfer - Prefix caching False
2025-03-13T08:36:40.043429Z INFO text_generation_launcher: Sharding model on 4 processes
2025-03-13T08:36:40.043433Z INFO text_generation_launcher: Using default cuda graphs [1, 2, 4, 8, 16, 32]
2025-03-13T08:36:40.043785Z INFO download: text_generation_launcher: Starting check and download process for google/gemma-3-27b-it
2025-03-13T08:36:43.498233Z INFO text_generation_launcher: Files are already present on the host. Skipping download.
2025-03-13T08:36:44.060714Z INFO download: text_generation_launcher: Successfully downloaded weights for google/gemma-3-27b-it
2025-03-13T08:36:44.061471Z INFO shard-manager: text_generation_launcher: Starting shard rank=0
2025-03-13T08:36:44.590395Z INFO shard-manager: text_generation_launcher: Starting shard rank=1
2025-03-13T08:36:45.196166Z INFO shard-manager: text_generation_launcher: Starting shard rank=2
2025-03-13T08:36:45.867258Z INFO shard-manager: text_generation_launcher: Starting shard rank=3
2025-03-13T08:36:47.973482Z INFO text_generation_launcher: Using prefix caching = False
2025-03-13T08:36:47.973534Z INFO text_generation_launcher: Using Attention = flashinfer
2025-03-13T08:36:54.083888Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2025-03-13T08:36:54.609747Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2025-03-13T08:36:55.216572Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2025-03-13T08:36:55.888966Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2025-03-13T08:37:04.091352Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2025-03-13T08:37:04.617169Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2025-03-13T08:37:05.224253Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2025-03-13T08:37:05.896938Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2025-03-13T08:37:14.098533Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2025-03-13T08:37:14.624769Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2025-03-13T08:37:15.231953Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2025-03-13T08:37:15.904796Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2025-03-13T08:37:24.105963Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2025-03-13T08:37:24.632677Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2025-03-13T08:37:25.239656Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2025-03-13T08:37:25.912803Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2025-03-13T08:37:34.113333Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2025-03-13T08:37:34.641461Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2025-03-13T08:37:35.247092Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2025-03-13T08:37:35.920604Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2025-03-13T08:37:44.120842Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2025-03-13T08:37:44.649364Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2025-03-13T08:37:45.254347Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2025-03-13T08:37:45.928487Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2025-03-13T08:37:54.128489Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2025-03-13T08:37:54.657147Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2025-03-13T08:37:55.261709Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2025-03-13T08:37:55.936555Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2025-03-13T08:38:04.135901Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2025-03-13T08:38:04.664958Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2025-03-13T08:38:05.269205Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2025-03-13T08:38:05.944561Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2025-03-13T08:38:14.143354Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2025-03-13T08:38:14.672706Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2025-03-13T08:38:15.276730Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2025-03-13T08:38:15.952321Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2025-03-13T08:38:18.500055Z INFO text_generation_launcher: Using prefill chunking = False
2025-03-13T08:38:19.085091Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-1
2025-03-13T08:38:19.176301Z INFO shard-manager: text_generation_launcher: Shard ready in 94.574638951s rank=1
2025-03-13T08:38:21.300395Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-2
2025-03-13T08:38:21.301426Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-0
2025-03-13T08:38:21.301937Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-3
2025-03-13T08:38:21.348798Z INFO shard-manager: text_generation_launcher: Shard ready in 97.272539231s rank=0
2025-03-13T08:38:21.356498Z INFO shard-manager: text_generation_launcher: Shard ready in 95.475191243s rank=3
2025-03-13T08:38:21.385097Z INFO shard-manager: text_generation_launcher: Shard ready in 96.176034962s rank=2
2025-03-13T08:38:22.958763Z INFO text_generation_launcher: Starting Webserver
2025-03-13T08:38:23.126019Z INFO text_generation_router_v3: backends/v3/src/lib.rs:125: Warming up model
2025-03-13T08:38:23.330948Z INFO text_generation_launcher: Using optimized Triton indexing kernels.
2025-03-13T08:38:25.345859Z ERROR text_generation_launcher: Method Warmup encountered an error.
Traceback (most recent call last):
File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1585, in warmup
_, _batch, _ = self.generate_token(batch)
File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1971, in generate_token
out, speculative_logits = self.forward(batch, adapter_data)
File "/usr/src/server/text_generation_server/models/vlm_causal_lm.py", line 482, in forward
logits, speculative_logits = self.model.forward(
File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 888, in forward
hidden_states = self.text_model.model(
File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 547, in forward
hidden_states, residual = layer(
File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 449, in forward
attn_output = self.self_attn(
File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/src/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/src/server/text_generation_server/models/custom_modeling/flash_gemma3_modeling.py", line 296, in forward
attn_output = F.scaled_dot_product_attention(
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.52 GiB. GPU 3 has a total capacity of 79.10 GiB of which 14.37 GiB is free. Process 3342359 has 64.72 GiB memory in use. 79.10 GiB allowed; Of the allocated memory 62.19 GiB is allocated by PyTorch, and 1.00 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/.venv/bin/text-generation-server", line 10, in <module>
sys.exit(app())
File "/usr/src/.venv/lib/python3.11/site-packages/typer/main.py", line 323, in __call__
return get_command(self)(*args, **kwargs)
File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
File "/usr/src/.venv/lib/python3.11/site-packages/typer/core.py", line 743, in main
return _main(
File "/usr/src/.venv/lib/python3.11/site-packages/typer/core.py", line 198, in _main
rv = self.invoke(ctx)
File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/src/.venv/lib/python3.11/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "/usr/src/.venv/lib/python3.11/site-packages/typer/main.py", line 698, in wrapper
return callback(**use_params)
File "/usr/src/server/text_generation_server/cli.py", line 119, in serve
server.serve(
File "/usr/src/server/text_generation_server/server.py", line 315, in serve
asyncio.run(
File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete
self.run_forever()
File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 608, in run_forever
self._run_once()
File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once
handle._run()
File "/root/.local/share/uv/python/cpython-3.11.11-linux-x86_64-gnu/lib/python3.11/asyncio/events.py", line 84, in _run
self._context.run(self._callback, *self._args)
File "/usr/src/.venv/lib/python3.11/site-packages/grpc_interceptor/server.py", line 165, in invoke_intercept_method
return await self.intercept(
> File "/usr/src/server/text_generation_server/interceptor.py", line 24, in intercept
return await response
File "/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 120, in _unary_interceptor
raise error
File "/usr/src/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 111, in _unary_interceptor
return await behavior(request_or_iterator, context)
File "/usr/src/server/text_generation_server/server.py", line 144, in Warmup
self.model.warmup(batch, max_input_tokens, max_total_tokens)
File "/usr/src/server/text_generation_server/models/flash_causal_lm.py", line 1587, in warmup
raise RuntimeError(
RuntimeError: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens`
2025-03-13T08:38:25.349736Z ERROR text_generation_launcher: Method Warmup encountered an error.
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.52 GiB. GPU 1 has a total capacity of 79.10 GiB of which 14.37 GiB is free. Process 3342101 has 64.72 GiB memory in use. 79.10 GiB allowed; Of the allocated memory 62.19 GiB is allocated by PyTorch, and 1.00 GiB is reserved by PyTorch but unallocated.
RuntimeError: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens`
2025-03-13T08:38:25.350178Z ERROR text_generation_launcher: Method Warmup encountered an error.
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.52 GiB. GPU 2 has a total capacity of 79.10 GiB of which 14.37 GiB is free. Process 3342216 has 64.72 GiB memory in use. 79.10 GiB allowed; Of the allocated memory 62.19 GiB is allocated by PyTorch, and 1.00 GiB is reserved by PyTorch but unallocated.
RuntimeError: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens`
2025-03-13T08:38:25.350698Z ERROR text_generation_launcher: Method Warmup encountered an error.
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 30.52 GiB. GPU 0 has a total capacity of 79.10 GiB of which 14.37 GiB is free. Process 3342032 has 64.72 GiB memory in use. 79.10 GiB allowed; Of the allocated memory 62.19 GiB is allocated by PyTorch, and 1.00 GiB is reserved by PyTorch but unallocated.
RuntimeError: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens`
2025-03-13T08:38:25.358791Z ERROR warmup{max_input_length=Some(32000) max_prefill_tokens=32000 max_total_tokens=Some(64000) max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens`
2025-03-13T08:38:25.370414Z ERROR warmup{max_input_length=Some(32000) max_prefill_tokens=32000 max_total_tokens=Some(64000) max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens`
2025-03-13T08:38:25.381723Z ERROR warmup{max_input_length=Some(32000) max_prefill_tokens=32000 max_total_tokens=Some(64000) max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens`
2025-03-13T08:38:25.392642Z ERROR warmup{max_input_length=Some(32000) max_prefill_tokens=32000 max_total_tokens=Some(64000) max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens`
Error: Backend(Warmup(Generation("Not enough memory to handle 32000 prefill tokens. You need to decrease `--max-batch-prefill-tokens`")))
2025-03-13T08:38:25.403245Z ERROR text_generation_launcher: Webserver Crashed
2025-03-13T08:38:25.403260Z INFO text_generation_launcher: Shutting down shards
2025-03-13T08:38:25.452182Z INFO shard-manager: text_generation_launcher: Terminating shard rank=0
2025-03-13T08:38:25.452239Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=0
2025-03-13T08:38:25.459966Z INFO shard-manager: text_generation_launcher: Terminating shard rank=3
2025-03-13T08:38:25.462190Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=3
2025-03-13T08:38:25.481703Z INFO shard-manager: text_generation_launcher: Terminating shard rank=1
2025-03-13T08:38:25.481742Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=1
2025-03-13T08:38:25.488581Z INFO shard-manager: text_generation_launcher: Terminating shard rank=2
2025-03-13T08:38:25.488620Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=2
2025-03-13T08:38:25.862773Z INFO shard-manager: text_generation_launcher: shard terminated rank=3
2025-03-13T08:38:27.053688Z INFO shard-manager: text_generation_launcher: shard terminated rank=0
2025-03-13T08:38:27.290200Z INFO shard-manager: text_generation_launcher: shard terminated rank=2
2025-03-13T08:38:27.583555Z INFO shard-manager: text_generation_launcher: shard terminated rank=1
Error: WebserverFailed | open | 2025-03-13T09:14:50Z | 2025-03-17T10:17:51Z | https://github.com/huggingface/text-generation-inference/issues/3105 | [] | nskpro-cmd | 6 |
pyeve/eve | flask | 1,275 | List index out of range in validation.py | I want to apologize in advance if this issue is a bit incoherent.
I'm not yet 100% sure what is happening or exactly how to reproduce it, so I'll try to explain what I've seen instead.
### Expected Behavior
No exception happening
### Actual Behavior
I've gotten this stacktrace (but only sometimes)
```pytb
[2019-05-28 09:18:24,962] ERROR in patch: list index out of range
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/eve/methods/patch.py", line 179, in patch_internal
updates, object_id, original, normalize_document
File "/usr/local/lib/python3.7/site-packages/eve/validation.py", line 44, in validate_update
document, update=True, normalize=normalize_document
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 990, in validate
self.__normalize_mapping(self.document, self.schema)
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 671, in __normalize_mapping
self.__normalize_containers(mapping, schema)
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 757, in __normalize_containers
self.__normalize_sequence_per_schema(field, mapping, schema)
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 826, in __normalize_sequence_per_schema
result = validator.normalized(document, always_return_document=True)
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 646, in normalized
self.__normalize_mapping(self.document, self.schema)
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 671, in __normalize_mapping
self.__normalize_containers(mapping, schema)
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 748, in __normalize_containers
self.__normalize_mapping_per_schema(field, mapping, schema)
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 812, in __normalize_mapping_per_schema
result_value = validator.normalized(mapping[field], always_return_document=True)
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 646, in normalized
self.__normalize_mapping(self.document, self.schema)
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 669, in __normalize_mapping
self.__normalize_default_fields(mapping, schema)
File "/usr/local/lib/python3.7/site-packages/cerberus/validator.py", line 922, in __normalize_default_fields
self._normalize_default(mapping, schema, field)
File "/usr/local/lib/python3.7/site-packages/eve/validation.py", line 72, in _normalize_default
challenge = challenge[sub_field]
IndexError: list index out of range
```
I have been digging through the Eve code and added a few prints in validation.py to figure out what is going on. I'm fairly sure something is confused about how Eve handles the persisted_document in validation.py when I PATCH a document (perhaps only in a specific case).
What I have in my schema is a field that is a list of dictionaries (a minimal sketch of the setup follows the steps below).
- The first POST adds one item to this list; so far all is good.
- On the next PATCH, when I try to update the list to contain two items instead, the exception above happens.
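For concreteness, here is a minimal sketch of the shape I mean; the resource and field names are invented for illustration, and the nested `default` is my guess at a relevant detail, not my exact schema:
```python
# settings.py (hypothetical): an Eve resource whose "items" field
# is a list of dicts, matching the shape described above.
DOMAIN = {
    "things": {
        "schema": {
            "items": {
                "type": "list",
                "schema": {
                    "type": "dict",
                    "schema": {
                        "name": {"type": "string"},
                        "count": {"type": "integer", "default": 0},
                    },
                },
            },
        },
    },
}

# 1) POST  {"items": [{"name": "a"}]}                 -> 201, one item stored
# 2) PATCH {"items": [{"name": "a"}, {"name": "b"}]}  -> IndexError above
```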
My theory is that somewhere in the validation (normalization) logic, Eve takes the keys from the new data (the keys for items 0 and 1 of the list in the PATCH request payload) but then tries to read the corresponding data from the persisted_document, which at that point, in the middle of the PATCH request, still contains only one item.
As explained above, I added prints in validation.py and saw that the "challenge" object on line 72 is a list of one item instead of two; I also see in the code that challenge is initialized from whatever is in persisted_document.
I'm not sure I understand the Eve code well enough (yet), but shouldn't this code be dealing with the document from the request instead of the persisted one? | closed | 2019-05-28T11:05:28Z | 2019-06-07T13:14:30Z | https://github.com/pyeve/eve/issues/1275 | [] | kallqvist | 2
keras-team/keras | python | 21,080 | Will Keras welcome new backend contributions? | Suppose we could develop a brand-new backend for Keras, such as [Paddle](https://github.com/PaddlePaddle/Paddle). Would Keras welcome our new backend contributions?
I understand that adding a new backend would increase the maintenance burden on the Keras team, so I would like to ask for the team's opinion. | closed | 2025-03-21T14:19:45Z | 2025-03-21T17:25:58Z | https://github.com/keras-team/keras/issues/21080 | [] | pass-lin | 1
yvann-ba/Robby-chatbot | streamlit | 36 | Error: module 'langchain' has no attribute 'verbose' | I'm receiving this error consistently while attempting to use the App.
Is there an issue with the latest version of langchain? | closed | 2023-05-16T05:26:04Z | 2023-05-27T14:16:23Z | https://github.com/yvann-ba/Robby-chatbot/issues/36 | [] | lameds | 1 |
Yorko/mlcourse.ai | plotly | 661 | Jupyter Images are not rendering | Hi
Thanks for launching such an awesome course. I am really enjoying working with this material. However, I am facing a small issue right now.
After cloning the repo and running the notebooks, the images are not rendered. I have not moved the images, and I have verified that they exist at the locations referenced in the img src attributes.


I have attached screenshots that might help. I am working with the notebooks locally in Jupyter Notebook, along with Python 3.6.9 | closed | 2020-04-05T13:45:45Z | 2020-04-16T17:34:26Z | https://github.com/Yorko/mlcourse.ai/issues/661 | [] | blaine12100 | 7
Skyvern-AI/skyvern | api | 1,846 | Error connecting workflow to prompt - Network error trying to host locally on vultr using the docker template | 
Hello all, I love this tool. I'm trying to get it hosted locally on this machine; I got it all set up with `docker compose up -d` and it seems to be up and running.
I put in the IP address of my Vultr server with port 8080 (shown as http://0.0.0.0:8080 here, with the real address removed for security), but using the local IP address I get to this:

Not sure what I am missing | closed | 2025-02-27T00:21:05Z | 2025-03-11T13:29:28Z | https://github.com/Skyvern-AI/skyvern/issues/1846 | [] | TheMindExpansionNetwork | 7 |
healthchecks/healthchecks | django | 157 | In API Update, make it possible to set specific channels | In the API currently we're only able to set all channels or no channels on a check. We need to be able to set specific channels programmatically through the API, and not only through the Web-UI.
PS: If this already works, the API docs need to be updated with information on how to do it. | closed | 2018-03-09T14:47:02Z | 2018-11-21T18:44:45Z | https://github.com/healthchecks/healthchecks/issues/157 | [] | RobertStigsson | 1
exaloop/codon | numpy | 160 | Please provide evidence for performance claims | The readme claims "Typical speedups over Python are on the order of 10-100x or more, on a single thread."
Where do these numbers come from?
Please provide the benchmarks used, and any additional information needed to reproduce the result. | closed | 2023-01-10T10:28:18Z | 2023-01-12T18:52:08Z | https://github.com/exaloop/codon/issues/160 | [] | markshannon | 15 |
ydataai/ydata-profiling | data-science | 1,548 | Upgrade Visions library | ### Missing functionality
Visions 0.7.5 installs a dependency that consumes 1.8 GB of hard-disk space; in the new version of visions this dependency is removed.
### Proposed feature
Upgrade to the latest visions version
### Alternatives considered
_No response_
### Additional context
_No response_ | open | 2024-02-19T07:35:43Z | 2024-02-19T07:35:56Z | https://github.com/ydataai/ydata-profiling/issues/1548 | [
"needs-triage"
] | damiles | 0 |
codertimo/BERT-pytorch | nlp | 74 | How to use BERT model to fine-tune a cloze-style task? | I mean: not just using a BERT model to predict the answers, but also training it on the task, something like the generic masked-LM objective sketched below.
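A generic PyTorch sketch of the cloze (masked-LM) objective I mean; this is not this repo's exact API, and `model` is assumed to return per-token vocabulary logits:
```python
import torch
import torch.nn.functional as F

def cloze_step(model, input_ids, mask_token_id, optimizer, mask_prob=0.15):
    # Randomly mask some tokens and train the model to recover them.
    labels = input_ids.clone()
    mask = torch.rand(input_ids.shape) < mask_prob
    labels[~mask] = -100                     # ignore unmasked positions in the loss
    masked_inputs = input_ids.clone()
    masked_inputs[mask] = mask_token_id
    logits = model(masked_inputs)            # (batch, seq_len, vocab_size), assumed
    loss = F.cross_entropy(
        logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```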
| open | 2020-02-13T01:42:33Z | 2020-02-13T01:42:33Z | https://github.com/codertimo/BERT-pytorch/issues/74 | [] | OrchidXu | 0 |
dgtlmoon/changedetection.io | web-scraping | 1,982 | [feature] When viewing a diff with multiple changes since last view, default to the last viewed date | **Version and OS**
v0.45.5
**Is your feature request related to a problem? Please describe.**
When I see a bolded watch in the watch overview, I click it and the page auto-scrolls to the diff text. The issue is that it only includes the diff between the latest two copies of the watch, while multiple changes may have occurred since my last view; because the selected diff date is scrolled out of frame, it may not be obvious that other changes are hidden behind separate diff dates.
**Describe the solution you'd like**
Since it seems last_viewed is stored in the database, it should be possible to select the left side of the comparison to be the newest history item the user has already seen (based on the last_viewed date) and diff it against the most recent version (a rough sketch below).
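A rough Python sketch of the selection logic I have in mind; the function and field names are invented for illustration and are not changedetection.io's actual internals:
```python
def pick_default_diff(history, last_viewed):
    """history: list of (timestamp, snapshot) pairs sorted oldest to newest."""
    # Baseline: the newest snapshot the user has already seen,
    # falling back to the oldest snapshot if nothing predates last_viewed.
    seen = [item for item in history if item[0] <= last_viewed]
    baseline = seen[-1] if seen else history[0]
    newest = history[-1]
    return baseline, newest  # diff these two instead of the latest pair
```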
**Describe the use-case and give concrete real-world examples**
Page A is set to be checked every hour, and it does change every hour. Twelve hours go by since the user last viewed the diff; the default diff link only shows the change from the last hour, so the user has to scroll up and select the earlier date.
| closed | 2023-11-15T04:00:40Z | 2023-11-17T16:21:53Z | https://github.com/dgtlmoon/changedetection.io/issues/1982 | [
"enhancement"
] | jonoff | 2 |
youfou/wxpy | api | 158 | Why does the received file show ERROR_MESSAGE_MAIN when sending a .docx with send_file()? | When using the send_file() method, it is not that the file fails to send; it raises an error:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/uband/git/bot-env/lib/python3.6/site-packages/wxpy/api/chats/chat.py", line 54, in wrapped
ret = do_send()
File "/Users/uband/git/bot-env/lib/python3.6/site-packages/wxpy/utils/misc.py", line 72, in wrapped
smart_map(check_response_body, ret)
File "/Users/uband/git/bot-env/lib/python3.6/site-packages/wxpy/utils/misc.py", line 207, in smart_map
return func(i, *args, **kwargs)
File "/Users/uband/git/bot-env/lib/python3.6/site-packages/wxpy/utils/misc.py", line 53, in check_response_body
raise ResponseError(err_code=err_code, err_msg=err_msg)
wxpy.exceptions.ResponseError: err_code: 1; err_msg:
>>>
The received file, when opened, just shows ERROR_MESSAGE_MAIN. | open | 2017-08-19T08:10:06Z | 2017-08-19T08:10:06Z | https://github.com/youfou/wxpy/issues/158 | [] | GreenGitHuber | 0
vitalik/django-ninja | django | 697 | ModelSchema with lower camel case results? | **Is your feature request related to a problem? Please describe.**
I have Django models that are snake case, as is traditional (e.g., full_name). The client already specified lower camel case (e.g., fullName). So, for every API call I have, I want to use lower camel case.
**Describe the solution you'd like**
I'd like to be able to use alias_generator=to_lower_camel.
This sort of code does not currently work for me:
```
from pydantic.utils import to_lower_camel
from .models import MyObj
from ninja import ModelSchema
class MyObjSchema(ModelSchema):
class Config:
model = MyObj
model_fields = ["id", "name", "full_name"]
# make names lower camel case
alias_generator = to_lower_camel
```
It results in `full_name: null` in all the results. If I comment out the alias_generator, it returns `full_name` populated, so I know the alias_generator is the cause.
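For comparison, plain pydantic aliasing behaves as I expect; a minimal sketch, assuming pydantic v1 (where `to_lower_camel` lives in `pydantic.utils`):
```python
from pydantic import BaseModel
from pydantic.utils import to_lower_camel

class Person(BaseModel):
    full_name: str

    class Config:
        alias_generator = to_lower_camel
        allow_population_by_field_name = True  # accept snake_case too

person = Person(fullName="Ada Lovelace")
print(person.dict(by_alias=True))  # {'fullName': 'Ada Lovelace'}
```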
| closed | 2023-03-08T19:08:44Z | 2023-03-09T15:54:12Z | https://github.com/vitalik/django-ninja/issues/697 | [] | boxydog | 2 |
sherlock-project/sherlock | python | 2,001 | Sherlock | Https://github.com/sherlock project/sherlock | closed | 2024-02-18T23:41:33Z | 2024-02-23T00:39:27Z | https://github.com/sherlock-project/sherlock/issues/2001 | [] | Johnnysoltan | 1 |
iperov/DeepFaceLab | deep-learning | 5,660 | Readme outdated video | The video of Queen Elisabeth in the readme is private. Please remove!
THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS
POST ONLY ISSUES RELATED TO BUGS OR CODE
## Expected behavior
*Describe, in some detail, what you are trying to do and what the output is that you expect from the program.*
## Actual behavior
*Describe, in some detail, what the program does instead. Be sure to include any error message or screenshots.*
## Steps to reproduce
*Describe, in some detail, the steps you tried that resulted in the behavior described above.*
## Other relevant information
- **Command lined used (if not specified in steps to reproduce)**: main.py ...
- **Operating system and version:** Windows, macOS, Linux
- **Python version:** 3.5, 3.6.4, ... (if you are not using prebuilt windows binary) | closed | 2023-04-13T23:34:39Z | 2023-06-08T16:34:17Z | https://github.com/iperov/DeepFaceLab/issues/5660 | [] | DylanM5 | 2 |
getsentry/sentry | python | 86,875 | Bug: Linear-Sentry Integration Not Working Despite Successful Activation | ### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
## Bug: Linear-Sentry Integration Not Working Despite Successful Activation
### Description
I successfully activated the Sentry integration through Linear (confirmation received), but I'm unable to create Linear tickets from Sentry. The issue persists across multiple Linear workspaces (both EU and US hosted instances). When attempting to use the integration, an error message appears stating "Unable to connect to Linear."
### Steps to Reproduce
1. Initiated the Sentry integration workflow from Linear
2. Received confirmation of successful integration activation
3. Attempted to create Linear tickets from Sentry
4. Observed error message: "Unable to connect to Linear"
5. Ticket creation fails
### Environment Details
- Linear workspaces tested: 2 (one EU-hosted, one US-hosted)
### Additional Information
- The integration appears as active in both Linear and Sentry interfaces
- The error occurs consistently across different Linear workspaces





### Expected Result
Should be able to create Linear tickets directly from Sentry after successful integration activation.
### Actual Result
Unable to create Linear tickets from Sentry despite the integration showing as successfully activated. A specific error message "Unable to connect to Linear" is displayed when attempting to use the integration.
### Product Area
Settings - Integrations
### Link
_No response_
### DSN
_No response_
### Version
_No response_ | open | 2025-03-12T09:38:22Z | 2025-03-13T17:47:06Z | https://github.com/getsentry/sentry/issues/86875 | [
"Product Area: Settings - Integrations"
] | LorenzoGentile | 2 |
akfamily/akshare | data-science | 5,909 | A question about version updates | First of all, thank you for maintaining this open-source project.
By comparison, this library is genuinely practical for fetching stock data,
and versions are updated very frequently as well. Special thanks for that!
I would like to make a suggestion:
since releases come out fairly often,
please note in each release which issues it fixes or which features it adds.
That way, users can decide for themselves whether to update the library.
Thanks again to the author! | closed | 2025-03-17T08:54:47Z | 2025-03-17T13:19:22Z | https://github.com/akfamily/akshare/issues/5909 | [
"bug"
] | gyarmy | 2 |
grillazz/fastapi-sqlalchemy-asyncpg | pydantic | 3 | boost logging with rich | closed | 2021-08-30T07:33:47Z | 2021-09-01T07:52:04Z | https://github.com/grillazz/fastapi-sqlalchemy-asyncpg/issues/3 | [
"enhancement"
] | grillazz | 0 |
pydantic/pydantic | pydantic | 11,363 | Subclassing generic dataclass loses type of parent class fields | ### Initial Checks
- [x] I confirm that I'm using Pydantic V2
### Description
This might be a non-issue if dataclasses are not intended to be subclassed.
When extending a generic dataclass, the fields from the superclass have no validation done on them.
```python
@dataclass
class A(Generic[T]):
a: T
@dataclass
class B(A[U]):
b: U
```
I would expect here that deserialising something to `B[int]` would validate that `a` was an `int` but this doesn't appear to happen.
### Example Code
```Python
from pydantic.dataclasses import dataclass
from pydantic import TypeAdapter
from typing import TypeVar, Generic
T = TypeVar('T')
U = TypeVar('U')
@dataclass
class A(Generic[T]):
a: T
@dataclass
class B(A[U]):
b: U
b = TypeAdapter(B[int]).validate_python({"a": ["not", "an", "int"], "b": "42"})
# ^^^^^^^^^^^^^^^
# This does not fail even though the type of a is not valid
print(b)
assert b.b == 42 # <- Passes as "42" has been converted to an int correctly
assert type(b.a) == int # <- This fails as a is a list
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.6
pydantic-core version: 2.27.2
pydantic-core build: profile=release pgo=false
install path: /path/to/.venv/lib/python3.12/site-packages/pydantic
python version: 3.12.7 (main, Oct 8 2024, 00:20:25) [Clang 18.1.8 ]
platform: Linux-6.6.72-1-lts-x86_64-with-glibc2.40
related packages: pydantic-settings-2.7.1 fastapi-0.115.7 pyright-1.1.392.post0 typing_extensions-4.12.2
commit: unknown
``` | open | 2025-01-30T17:34:34Z | 2025-02-03T14:16:14Z | https://github.com/pydantic/pydantic/issues/11363 | [
"bug V2",
"topic-generics"
] | tpoliaw | 1 |
ultralytics/ultralytics | pytorch | 19,816 | `single_cls` training dies quietly during 1st epoch | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Train
### Bug
I have a dataset of roughly 95 categories on which I'm trying to train a binary object-detection model for my tagging utility, so it can automatically recommend bounding boxes. I'm using the `single_cls` command-line option to "flatten" my training data into a single class.
When running the training session on a `Yolo11n` model, the training process dies quietly while training the first epoch.
I can confirm that training on the same dataset, with the same base model, but without the `single_cls=true` option, completes successfully.
### Environment
```
Ultralytics 8.3.94 🚀 Python-3.12.3 torch-2.6.0+cu126 CUDA:0 (NVIDIA GeForce RTX 3090, 24253MiB)
Setup complete ✅ (32 CPUs, 31.3 GB RAM, 133.3/1831.7 GB disk)
OS Linux-6.8.0-55-generic-x86_64-with-glibc2.39
Environment Linux
Python 3.12.3
Install pip
Path /home/enusbaum/yolo/venv/lib/python3.12/site-packages/ultralytics
RAM 31.26 GB
Disk 133.3/1831.7 GB
CPU AMD Ryzen 9 5950X 16-Core Processor
CPU count 32
GPU NVIDIA GeForce RTX 3090, 24253MiB
GPU count 1
CUDA 12.6
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.1>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.0.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.6.0+cu126>=1.8.0
torch ✅ 2.6.0+cu126!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0+cu126>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 7.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
```
yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
```
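The Python-API equivalent of the command above, which I would expect to behave the same way (the data file and weights are from my setup):
```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.train(
    data="data.yaml",
    epochs=200,
    imgsz=640,
    patience=25,
    batch=-1,
    plots=True,
    workers=12,
    single_cls=True,  # flatten all ~95 categories into one class
)
```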
### Additional
While I can see the Python processes are still running:
```
enusbaum 2302 5.6 2.1 13017132 696784 pts/1 Sl 11:32 0:31 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2303 5.5 2.1 13015644 707028 pts/1 Sl 11:32 0:30 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2304 5.4 2.1 13017552 714692 pts/1 Sl 11:32 0:29 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2324 3.9 0.2 13093984 91624 pts/1 Sl 11:32 0:21 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2327 2.7 0.2 13097044 92048 pts/1 Sl 11:32 0:15 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2328 3.8 0.2 13094032 90900 pts/1 Sl 11:32 0:21 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2329 3.8 0.2 13094044 92320 pts/1 Sl 11:32 0:20 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2334 2.8 0.2 13097128 92176 pts/1 Sl 11:32 0:15 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2335 3.8 0.2 13094116 92028 pts/1 Sl 11:32 0:21 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2336 3.7 0.2 13094128 91428 pts/1 Sl 11:32 0:20 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2338 3.9 0.2 13094152 91688 pts/1 Sl 11:32 0:21 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2341 2.7 0.2 13097212 92076 pts/1 Sl 11:32 0:15 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2342 3.7 0.2 13094200 91976 pts/1 Sl 11:32 0:20 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
enusbaum 2343 3.8 0.2 13094212 90960 pts/1 Sl 11:32 0:20 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train data=data.yaml model=yolo11n.pt epochs=200 imgsz=640 patience=25 batch=-1 plots=true workers=12 single_cls=true
```
They seem to be dead as they're not consuming any CPU, and are still holding on to GPU memory:
```
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2302 enusbaum 20 0 12.4g 696784 94344 S 0.0 2.1 0:31.06 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2303 enusbaum 20 0 12.4g 707028 96184 S 0.0 2.2 0:30.38 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2304 enusbaum 20 0 12.4g 714692 96904 S 0.0 2.2 0:29.89 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2324 enusbaum 20 0 12.5g 91624 75944 S 0.0 0.3 0:21.43 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2327 enusbaum 20 0 12.5g 92048 76424 S 0.0 0.3 0:15.26 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2328 enusbaum 20 0 12.5g 90900 75912 S 0.0 0.3 0:21.05 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2329 enusbaum 20 0 12.5g 92320 76168 S 0.0 0.3 0:20.93 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2334 enusbaum 20 0 12.5g 92176 75912 S 0.0 0.3 0:15.43 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2335 enusbaum 20 0 12.5g 92028 76168 S 0.0 0.3 0:21.06 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2336 enusbaum 20 0 12.5g 91428 75912 S 0.0 0.3 0:20.39 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2338 enusbaum 20 0 12.5g 91688 75912 S 0.0 0.3 0:21.45 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2341 enusbaum 20 0 12.5g 92076 75912 S 0.0 0.3 0:15.17 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2342 enusbaum 20 0 12.5g 91976 75912 S 0.0 0.3 0:20.60 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
2343 enusbaum 20 0 12.5g 90960 75656 S 0.0 0.3 0:20.81 /home/enusbaum/yolo/venv/bin/python3 /home/enusbaum/yolo/venv/bin/yolo detect train +
```
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.120 Driver Version: 550.120 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3090 Off | 00000000:0A:00.0 Off | N/A |
| 30% 44C P2 109W / 350W | 17529MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
```
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-03-21T11:44:56Z | 2025-03-22T17:31:59Z | https://github.com/ultralytics/ultralytics/issues/19816 | [
"bug",
"detect"
] | enusbaum | 7 |
ansible/awx | automation | 15,504 | [next_ui] Jobs limit query display filter | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Feature type
Enhancement to Existing Feature
### Feature Summary
Hello,
We mostly use the AWX job template provisioning callback. There should be a way to filter by and display the `limit` info in the `Jobs` UI.
I can see the `limit` info in the job detail API `/api/v2/jobs/<job_id>`.
Please check the image below for it.

Thanks
### Select the relevant components
- [X] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Steps to reproduce
In the Jobs UI, there is no way to display the limit info.
### Current results
There is no way to filter on the `limit` info.
### Sugested feature result
There should be a way to filter/query on `limit`.
### Additional information
_No response_ | open | 2024-09-12T15:56:36Z | 2024-09-12T15:57:09Z | https://github.com/ansible/awx/issues/15504 | [
"type:enhancement",
"component:ui",
"needs_triage",
"community"
] | hungpr0 | 0 |
AntonOsika/gpt-engineer | python | 150 | AttributeError: 'tuple' object has no attribute 'expandtabs' | I'm getting the following error when running `python -m gpt_engineer.main`. I'm using Python 3.11.
```
File "/opt/miniconda3/envs/gpt-eng/lib/python3.11/inspect.py", line 873, in cleandoc
lines = doc.expandtabs().split('\n')
^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'expandtabs'
``` | closed | 2023-06-18T13:03:31Z | 2023-06-18T13:37:43Z | https://github.com/AntonOsika/gpt-engineer/issues/150 | [
"bug"
] | gchlebus | 4 |
babysor/MockingBird | deep-learning | 887 | Mismatch after replacing the pretrained model, asking for help | After replacing the pretrained model with the pretrained-11-7-21_75k checkpoint provided by another user, a mismatch error occurs.
Found 266 samples
+----------------+------------+---------------+------------------+
| Steps with r=2 | Batch Size | Learning Rate | Outputs/Step (r) |
+----------------+------------+---------------+------------------+
| 85k Steps | 12 | 5e-06 | 2 |
+----------------+------------+---------------+------------------+
E:\MockingBird\synthesizer\synthesizer_dataset.py:84: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\utils\tensor_new.cpp:248.)
embeds = torch.tensor(embeds)
E:\MockingBird\synthesizer\synthesizer_dataset.py:84: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\utils\tensor_new.cpp:248.)
embeds = torch.tensor(embeds)
Traceback (most recent call last):
File "synthesizer_train.py", line 37, in <module>
train(**vars(args))
File "E:\MockingBird\synthesizer\train.py", line 208, in train
optimizer.step()
File "C:\Users\Administrator\anaconda3\envs\mock\lib\site-packages\torch\optim\optimizer.py", line 280, in wrapper
out = func(*args, **kwargs)
File "C:\Users\Administrator\anaconda3\envs\mock\lib\site-packages\torch\optim\optimizer.py", line 33, in _use_grad
ret = func(self, *args, **kwargs)
File "C:\Users\Administrator\anaconda3\envs\mock\lib\site-packages\torch\optim\adam.py", line 141, in step
adam(
File "C:\Users\Administrator\anaconda3\envs\mock\lib\site-packages\torch\optim\adam.py", line 281, in adam
func(params,
File "C:\Users\Administrator\anaconda3\envs\mock\lib\site-packages\torch\optim\adam.py", line 446, in _multi_tensor_adam
torch._foreach_add_(device_exp_avgs, device_grads, alpha=1 - beta1)
RuntimeError: The size of tensor a (1024) must match the size of tensor b (3) at non-singleton dimension 3
```
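A guess at the cause for anyone hitting the same trace (an assumption, not a confirmed diagnosis): Adam's per-parameter state restored from one checkpoint no longer matches the shapes of the swapped-in pretrained weights. Loading only the model weights and re-creating the optimizer with fresh state avoids the shape clash; the file path, stand-in module, and checkpoint key are placeholders:

```python
import torch

# Stand-in for the synthesizer built by train.py; any nn.Module works here.
model = torch.nn.Linear(1024, 80)

checkpoint = torch.load("pretrained-11-7-21_75k.pt", map_location="cpu")
model.load_state_dict(checkpoint.get("model_state", checkpoint), strict=False)

# Re-create the optimizer instead of restoring its saved state dict, so the
# exp_avg / exp_avg_sq buffers are re-initialized at the right shapes.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-6)
```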
**Env & To Reproduce**
Describe the environment, code version, and model you used:
Python 3.8.16, pretrained model
**Screenshots (if any)**
If applicable, add screenshots to help explain your problem.
| open | 2023-04-27T18:40:22Z | 2023-04-27T18:40:22Z | https://github.com/babysor/MockingBird/issues/887 | [] | zhangmm2 | 0 |
gradio-app/gradio | python | 10,224 | Disabling queue will display only the first character in chatbot streaming | ### Describe the bug
I want to disable the queue for `bot()` in chatbot streaming:
- https://www.gradio.app/docs/gradio/chatbot#demos
- chatbot_streaming
### Have you searched existing issues? ๐
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import random
import time
with gr.Blocks() as demo:
chatbot = gr.Chatbot(type="messages")
msg = gr.Textbox()
clear = gr.Button("Clear")
def user(user_message, history: list):
return "", history + [{"role": "user", "content": user_message}]
def bot(history: list):
bot_message = random.choice(["How are you?", "I love you", "I'm very hungry"])
history.append({"role": "assistant", "content": ""})
for character in bot_message:
history[-1]['content'] += character
time.sleep(0.05)
yield history
# OK
# msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
# bot, chatbot, chatbot
# )
# NG
msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
bot, chatbot, chatbot, queue=False
)
clear.click(lambda: None, None, chatbot, queue=False)
if __name__ == "__main__":
demo.launch(debug=True)
```
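A plausible explanation (my reading of the behavior, not a statement from the gradio docs): `bot` is a generator, and gradio streams its intermediate yields through the queue's event stream; with `queue=False` the event degrades to a single request/response, so only the first partial update reaches the UI. Keeping the queue enabled for the generator step while disabling it only for the plain `user` function preserves both behaviors:

```python
# Sketch of the working wiring (the "# OK" variant above):
msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
    bot, chatbot, chatbot  # generator callback: leave the queue enabled
)
```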
### Screenshot
OK | NG
-- | --
 | 
### Logs
```shell
* Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.9.1
gradio_client version: 1.5.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.7.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.4.0
gradio-client==1.5.2 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.0
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.2.0
orjson: 3.10.12
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.10.3
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.8.3
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.28.1
huggingface-hub: 0.27.0
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.1
```
### Severity
Blocking usage of gradio | closed | 2024-12-18T06:41:41Z | 2024-12-18T17:12:37Z | https://github.com/gradio-app/gradio/issues/10224 | [
"bug"
] | chai3 | 1 |
deezer/spleeter | tensorflow | 157 | [Bug] Spleeter crashes at "KMP_AFFINITY" | ## Description
On some songs Spleeter decides to exit too early, rendering nothing.
## Step to reproduce
It's hard to provide exact instructions due to potential piracy concerns. I will not share the song I'm trying to process, but basically: load a song, and it exits right after printing the "Affinity" messages.
## Output
```
INFO:spleeter:Audio data loaded successfully
INFO:spleeter:Audio data loaded successfully
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0-11
OMP: Info #213: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #276: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info
OMP: Info #156: KMP_AFFINITY: 12 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #191: KMP_AFFINITY: 1 socket x 6 cores/socket x 2 threads/core (6 total cores)
OMP: Info #215: KMP_AFFINITY: OS proc to physical thread map:
OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to socket 0 core 0 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 1 maps to socket 0 core 0 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 2 maps to socket 0 core 1 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 3 maps to socket 0 core 1 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 4 maps to socket 0 core 2 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 5 maps to socket 0 core 2 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 6 maps to socket 0 core 3 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 7 maps to socket 0 core 3 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 8 maps to socket 0 core 4 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 9 maps to socket 0 core 4 thread 1
OMP: Info #171: KMP_AFFINITY: OS proc 10 maps to socket 0 core 5 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 11 maps to socket 0 core 5 thread 1
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 16216 thread 0 bound to OS proc set 0
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 23460 thread 1 bound to OS proc set 2
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 8744 thread 2 bound to OS proc set 4
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 19660 thread 3 bound to OS proc set 6
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 18976 thread 4 bound to OS proc set 8
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 26180 thread 5 bound to OS proc set 10
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 5280 thread 6 bound to OS proc set 1
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 26188 thread 7 bound to OS proc set 3
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 26176 thread 8 bound to OS proc set 5
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 10776 thread 9 bound to OS proc set 7
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 22556 thread 10 bound to OS proc set 9
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 1656 thread 11 bound to OS proc set 11
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 5664 thread 13 bound to OS proc set 2
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 26120 thread 14 bound to OS proc set 4
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 17960 thread 12 bound to OS proc set 0
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 17492 thread 15 bound to OS proc set 6
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 24596 thread 17 bound to OS proc set 10
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 9968 thread 16 bound to OS proc set 8
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 4584 thread 18 bound to OS proc set 1
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 14316 thread 19 bound to OS proc set 3
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 21612 thread 20 bound to OS proc set 5
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 19120 thread 21 bound to OS proc set 7
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 25808 thread 22 bound to OS proc set 9
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 25704 thread 23 bound to OS proc set 11
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 18908 thread 24 bound to OS proc set 0
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 25836 thread 25 bound to OS proc set 2
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 24272 thread 26 bound to OS proc set 4
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 22248 thread 27 bound to OS proc set 6
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 24932 thread 28 bound to OS proc set 8
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 20220 thread 29 bound to OS proc set 10
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 19272 thread 30 bound to OS proc set 1
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 14016 thread 31 bound to OS proc set 3
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 10340 thread 32 bound to OS proc set 5
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 3204 thread 33 bound to OS proc set 7
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 3996 thread 34 bound to OS proc set 9
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 20376 thread 35 bound to OS proc set 11
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 7404 thread 36 bound to OS proc set 0
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 3736 thread 38 bound to OS proc set 4
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 21108 thread 37 bound to OS proc set 2
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 25812 thread 40 bound to OS proc set 8
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 15844 thread 39 bound to OS proc set 6
OMP: Info #251: KMP_AFFINITY: pid 12040 tid 6948 thread 41 bound to OS proc set 10
```
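For what it's worth, the `KMP_AFFINITY` lines are informational output from Intel's OpenMP runtime, not errors in themselves; silencing them makes the real failure easier to spot. A hedged workaround (the variables must be set before TensorFlow is imported):

```python
import os

os.environ["KMP_AFFINITY"] = "disabled"  # stop the thread-pinning chatter
os.environ["KMP_WARNINGS"] = "off"       # silence remaining libiomp notices

from spleeter.separator import Separator  # import only after setting the vars
```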
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 |
| Installation type | Conda |
| RAM available | 16GB |
## Additional context
It's weird that it works on some songs but not on others. | closed | 2019-12-03T17:33:49Z | 2019-12-04T04:59:45Z | https://github.com/deezer/spleeter/issues/157 | [
"bug",
"invalid"
] | aidv | 2 |
snooppr/snoop | web-scraping | 14 | Crash in WSL | The script crashes when executed in WSL because it does not have access to the PC's audio hardware. Please fix this or don't use `playsound`. | closed | 2020-02-18T21:17:04Z | 2020-02-19T10:55:08Z | https://github.com/snooppr/snoop/issues/14 | [
"bug"
] | SunnyCapt | 1 |
pandas-dev/pandas | data-science | 60,602 | QST: Does the project consider DataFrame.query() arbitrary code execution to be a security vulnerability? | ### Research
- [X] I have searched the [[pandas] tag](https://stackoverflow.com/questions/tagged/pandas) on StackOverflow for similar questions.
- [X] I have asked my usage related question on [StackOverflow](https://stackoverflow.com).
### Link to question on StackOverflow
https://stackoverflow.com/questions/79304226/should-i-manually-patch-the-pandas-dataframe-query-vulnerability-or-wait-for-a
(To clarify, this question was written by another user.)
### Question about pandas
Hi, I saw [this question on StackOverflow](https://stackoverflow.com/questions/79304226/should-i-manually-patch-the-pandas-dataframe-query-vulnerability-or-wait-for-a), which is about a public CVE, [CVE-2024-9880](https://huntr.com/bounties/a49baae1-4652-4d6c-a179-313c21c41a8d).
The basic premise of the CVE is that if an attacker controls the `expr` argument to DataFrame.query(), then arbitrary code execution can be achieved.
The example given in the CVE is
<details>
```python
import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3], 'b': ['error_details', 'confidential_info', 'normal']})
query = '@pd.core.frame.com.builtins.__import__("os").system("""ping google.com #""")'
try:
engine = "python"
result = df.query(query,local_dict={},engine="python",).index
except Exception as e:
print(f'Error: {e}')
```
</details>
However, this is not minimal, and a more minimal construction would be
```python
import pandas as pd
df = pd.DataFrame()
expr = '@pd.compat.os.system("""echo foo""")'
result = df.query(expr, engine='python')
```
(The report also says that `engine='python'` is required, but both `engine='python'` and `engine='numexpr'` worked in my testing.)
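Before getting to my question: if one must accept user-supplied filters today, a defensive sketch might look like the following. This is illustrative only; a token blacklist is not a sandbox, and it also rejects legitimate expressions containing dots (e.g. float literals):

```python
import re
import pandas as pd

def safe_query(df: pd.DataFrame, expr: str) -> pd.DataFrame:
    # Reject '@' (local/global resolution), attribute access, and dunders.
    if re.search(r"[@.]|__", expr):
        raise ValueError("disallowed token in query expression")
    return df.query(expr)

df = pd.DataFrame({"a": [1, 2, 3]})
print(safe_query(df, "a > 1"))  # fine
# safe_query(df, '@pd.compat.os.system("echo foo")')  # raises ValueError
```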
My question is about Pandas's security model. What security guarantees does Pandas make about DataFrame.query() with an attacker-controlled `expr`?
My intuition about this is "none, don't do that," but I'm wondering what the Pandas project thinks. | closed | 2024-12-24T19:46:22Z | 2025-03-22T19:12:20Z | https://github.com/pandas-dev/pandas/issues/60602 | [
"Usage Question",
"expressions",
"Closing Candidate"
] | nickodell | 21 |
thtrieu/darkflow | tensorflow | 789 | How can I use .txt annotation files instead of xml? | I was studying darknet, but now I am trying darkflow.
In darknet, my custom image dataset's annotation files are in txt format.
Each line is composed of: "object-class" "x" "y" "width" "height".
So my question is: how can I use these txt files directly?
If I can't, how can I convert them to .xml files?
Many thanks,
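A minimal conversion sketch, assuming the txt files use the usual darknet convention (class index plus center-x, center-y, width, height, all normalized to [0, 1]) and that the image dimensions are known; the class list and paths are placeholders:

```python
import os

def yolo_txt_to_voc_xml(txt_path, img_w, img_h, classes, xml_path):
    objects = []
    with open(txt_path) as f:
        for line in f:
            cls, cx, cy, w, h = line.split()
            cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
            xmin = int((cx - w / 2) * img_w)  # denormalize to pixel corners
            ymin = int((cy - h / 2) * img_h)
            xmax = int((cx + w / 2) * img_w)
            ymax = int((cy + h / 2) * img_h)
            objects.append(
                "  <object>\n"
                f"    <name>{classes[int(cls)]}</name>\n"
                "    <bndbox>\n"
                f"      <xmin>{xmin}</xmin><ymin>{ymin}</ymin>\n"
                f"      <xmax>{xmax}</xmax><ymax>{ymax}</ymax>\n"
                "    </bndbox>\n"
                "  </object>\n"
            )
    with open(xml_path, "w") as f:
        f.write(
            "<annotation>\n"
            f"  <filename>{os.path.basename(txt_path).replace('.txt', '.jpg')}</filename>\n"
            f"  <size><width>{img_w}</width><height>{img_h}</height><depth>3</depth></size>\n"
            + "".join(objects)
            + "</annotation>\n"
        )

# yolo_txt_to_voc_xml("img001.txt", 640, 480, ["person", "car"], "img001.xml")
```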
| closed | 2018-06-03T11:05:23Z | 2018-06-07T02:59:00Z | https://github.com/thtrieu/darkflow/issues/789 | [] | Jeongseop-Yun | 5 |
Avaiga/taipy | automation | 1,748 | Stop support for Python 3.8 | Stop supporting version 3.8 of Python. | closed | 2024-09-05T07:27:20Z | 2024-09-21T06:49:17Z | https://github.com/Avaiga/taipy/issues/1748 | [
"๐ฅ Priority: Critical",
"๐ง Devops",
"๐ Staff only"
] | jrobinAV | 2 |
InstaPy/InstaPy | automation | 6,791 | Insta.py | | open | 2024-02-03T20:32:24Z | 2024-02-03T20:32:24Z | https://github.com/InstaPy/InstaPy/issues/6791 | [] | lokotr0n | 0 |
axnsan12/drf-yasg | rest-api | 544 | Error AttributeError: 'AnonymousUser' | I have this trouble when I open my documentation:
view's MyViewSet raised exception during schema generation; use `getattr(self, 'swagger_fake_view', False)` to detect and short-circuit this
line 33, in get_queryset
return MyModel.objects.filter(container=self.request.user.container)
AttributeError: 'AnonymousUser' object has no attribute 'container'
Does anybody know how to solve this, please?
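For anyone landing here: the hint in the log message points at the usual fix. During schema generation drf-yasg calls the view with a fake, unauthenticated request, so `get_queryset` can short-circuit on the `swagger_fake_view` attribute (a sketch based on that hint; `MyModel`/`container` mirror the traceback):

```python
def get_queryset(self):
    if getattr(self, "swagger_fake_view", False):
        # drf-yasg is introspecting the view; there is no real user.
        return MyModel.objects.none()
    return MyModel.objects.filter(container=self.request.user.container)
```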
| closed | 2020-02-11T14:14:32Z | 2023-10-20T08:11:52Z | https://github.com/axnsan12/drf-yasg/issues/544 | [] | danilocastelhano1 | 5 |