repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
polarsource/polar | fastapi | 4,353 | No webhook timeout duration information in the docs. | ### Description
I can't find any information about the webhook timeout duration from the documentation.
### Current Behavior
No info regarding webhook timeout duration.
### Expected Behavior
Maybe we could consider adding the timeout duration information in the docs.
### Screenshots
The only information given is about the retry count.
https://docs.polar.sh/api/webhooks
<img width="1105" alt="image" src="https://github.com/user-attachments/assets/d13f5f81-2200-488c-8785-4702b01ad936">
### Environment:
- Operating System: -
- Browser (if applicable): -
---
| closed | 2024-11-01T07:01:53Z | 2024-11-01T08:13:50Z | https://github.com/polarsource/polar/issues/4353 | [
"bug"
] | reynaldichernando | 1 |
matplotlib/matplotlib | data-visualization | 28,929 | [MNT]: Clarify whether the values of an AxesImage are known as "data" or as "array" | ### Summary
The docstring of `AxesImage.set_array` says "Retained for backwards compatibility - use set_data instead.", but on the getter side, only `get_array` (inherited from ScalarMappable) exists -- `get_data` doesn't even exist.
### Proposed fix
Be consistent as to which name, "data" or "array", is used. I suspect they could even be made aliases of one another at the ScalarMappable level...
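A quick sketch of the asymmetry as seen from user code (the commented-out call is the one that is missing):
```python
import numpy as np
import matplotlib.pyplot as plt

im = plt.imshow(np.zeros((2, 2)))
im.set_data(np.ones((2, 2)))   # the recommended setter
im.set_array(np.ones((2, 2)))  # "retained for backwards compatibility"
vals = im.get_array()          # the only getter, inherited from ScalarMappable
# im.get_data()                # no such method -- hence the inconsistency
```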
(Perhaps also distantly related to the colorizer-related API changes.) | open | 2024-10-03T09:36:23Z | 2024-10-14T09:10:16Z | https://github.com/matplotlib/matplotlib/issues/28929 | [
"API: consistency",
"Maintenance"
] | anntzer | 13 |
xonsh/xonsh | data-science | 4,911 | `xonfig tutorial` runs nothing | `xonfig tutorial` runs nothing.
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| closed | 2022-08-04T16:13:01Z | 2022-08-04T18:06:56Z | https://github.com/xonsh/xonsh/issues/4911 | [
"xonfig",
"windows-wsl",
"xonfig-tutorial"
] | anki-code | 1 |
pytest-dev/pytest-cov | pytest | 520 | Race condition in pytest-dev hangs coverage 6.3 | See https://github.com/nedbat/coveragepy/issues/1310 . Removing pytest-cov solves the problem. What is pytest-cov doing that causes this problem? | open | 2022-01-30T02:39:27Z | 2022-01-30T13:25:50Z | https://github.com/pytest-dev/pytest-cov/issues/520 | [] | nedbat | 3 |
graphql-python/graphene | graphql | 953 | Native schema stitching | Hi
Can anyone provide an example code how to do schema stitching in python using graphene?
Thanks
| closed | 2019-04-26T11:50:18Z | 2019-10-03T20:47:11Z | https://github.com/graphql-python/graphene/issues/953 | [
"✨ enhancement",
"wontfix"
] | arindam04 | 20 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,272 | Update Validation Curve Docs | **Describe the issue**
We updated the Validation curve so that you can change the marker style. #1258
Now, we need to update the Validation curve documentation to state this new capability.
@DistrictDataLabs/team-oz-maintainers
| open | 2022-08-07T22:24:08Z | 2022-08-07T22:24:08Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1272 | [
"type: documentation"
] | lwgray | 0 |
explosion/spaCy | machine-learning | 13,625 | Cannot install spaCy 3.8 in python3.8 environment |
## How to reproduce the behaviour
On a clean environment with python 3.8 and pip, try pip install spaCy==3.8
## Your Environment
* Operating System: windows
* Python Version Used: 3.8
* spaCy Version Used: 3.8
* Environment Information: clean environment with python 3.8 and pip

| open | 2024-09-12T23:06:39Z | 2024-10-25T04:20:08Z | https://github.com/explosion/spaCy/issues/13625 | [
"deprecated",
"resolved"
] | jianlins | 6 |
polakowo/vectorbt | data-visualization | 555 | Error in flex_simulate_nb function and Documentation | https://vectorbt.dev/api/portfolio/nb/#vectorbt.portfolio.nb.flex_simulate_nb
```python
from vectorbt.portfolio.nb import (
    get_col_elem_nb,
    order_nb,
    order_nothing_nb,
    flex_simulate_nb,
    flex_simulate_row_wise_nb,
    sort_call_seq_out_nb,
)
```
This import list is missing `build_call_seq`.
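For reference, the import block I ended up using looks roughly like this (a sketch, assuming `build_call_seq` is still exported from `vectorbt.portfolio.nb`):
```python
from vectorbt.portfolio.nb import (
    build_call_seq,  # the name missing from the documented import list
    get_col_elem_nb,
    order_nb,
    order_nothing_nb,
    flex_simulate_nb,
    flex_simulate_row_wise_nb,
    sort_call_seq_out_nb,
)
```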
Also, when I added it and tried to run the example in the documentation, I got this error:

It's interesting because `simulate_nb` works but the flex version doesn't.
The good news is that I went through all the documentation code on the nb API and it all works except the flex simulate nb ... so once this is fixed, all should be good. | closed | 2023-01-27T19:16:48Z | 2023-11-23T12:56:23Z | https://github.com/polakowo/vectorbt/issues/555 | [] | quantfreedom | 3 |
tensorflow/tensor2tensor | machine-learning | 1,210 | Shuffle buffer causes OOM error on CPU (1.10.0) | I noticed that with 1.10.0 a shuffle buffer gets built up before training:
```
2018-11-09 11:48:04.525172: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 391 of 512
2018-11-09 11:48:14.233178: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 396 of 512
2018-11-09 11:48:29.700824: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 400 of 512
2018-11-09 11:48:33.617605: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 402 of 512
2018-11-09 11:48:50.017594: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 406 of 512
2018-11-09 11:48:56.350018: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 407 of 512
```
However, for one of my larger t2t problems this seems to cause an OOM error (CPU RAM). I am not sure whether this operation happened before 1.10.0, but in any case I'd like to do something about this OOM error.
Why is there a shuffle buffer being built up, and can I disable it or at least control its size so that it fits into memory?
----
Error output:
```
2018-11-09 11:49:16.324220: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 413 of 512
2018-11-09 11:49:25.588304: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 415 of 512
2018-11-09 11:49:33.819391: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 419 of 512
./train.sh: line 96: 712 Killed t2t-trainer --generate_data --t2t_usr_dir=$USER_DIR --worker_gpu=$WORKER_GPU --data_dir=$DATA_DIR --tmp_dir=$TMP_DIR --problem=$PROBLEM --model=$MODEL --hparams_set=$HPARAMS --output_dir=$TRAIN_DIR --train_steps=50000000 --save_checkpoints_secs=3600 --keep_checkpoint_max=5
``` | closed | 2018-11-09T10:57:18Z | 2019-02-13T09:00:03Z | https://github.com/tensorflow/tensor2tensor/issues/1210 | [] | stefan-falk | 7 |
strawberry-graphql/strawberry | django | 3,517 | default_factory doesn't work | Hi!
I use default_factory to initialize my variable, but the variable always gets the same value. It seems like default_factory doesn't work: it always returns the same result of the factory function.
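Roughly the pattern I mean (a hypothetical field, not the exact playground example):
```python
import strawberry
from datetime import datetime, timezone

@strawberry.input
class EventInput:
    # Expected: a fresh timestamp every time the input is instantiated.
    # Observed: the factory result from the first call is reused every time.
    created_at: str = strawberry.field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```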
Here is an example to reproduce it:
https://play.strawberry.rocks/?gist=a7a5e62ffe4e68696b44456398d11104 | open | 2024-05-27T15:13:17Z | 2025-03-20T15:56:45Z | https://github.com/strawberry-graphql/strawberry/issues/3517 | [
"bug"
] | ShtykovaAA | 11 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 183 | cannot import name 'engines' from 'clickhouse_sqlalchemy' | using pyinstaller to pack project code, after packing, start the package, raise error:
cannot import name 'engines' from 'clickhouse_sqlalchemy'
extra clue is now we use gcc 10.3.0, then occur this issue.
before we use gcc 4.8.5 without this issue.
want to ask, is this issue related with gcc version? | closed | 2022-06-28T08:58:28Z | 2022-11-29T17:29:09Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/183 | [] | flyly0755 | 0 |
jupyterlab/jupyter-ai | jupyter | 1,083 | Config UI should allow saving when API key is passed via an environment variable | ## Description
I'm trying to use OpenAI GPT4 and struggle a bit with configuration. I'm using openai and have an environment variable configured. Hence, I do not need to configure my API key in Jupyter AI. It should work without.
## Reproduce
The configuration dialog says the API key is optional, but there is an error message saying I must enter it.

## Expected behavior
Make the API key optional, and remove the error message.
## Context
- Operating System and version: Windows 10
- Browser and version: <!-- e.g. Chrome 92 -->
- JupyterLab version: Version 4.2.5 | open | 2024-11-02T16:07:06Z | 2024-11-02T23:03:19Z | https://github.com/jupyterlab/jupyter-ai/issues/1083 | [
"bug"
] | haesleinhuepf | 2 |
plotly/dash | data-visualization | 2,818 | [BUG] Dash Testing: `wait_for_text_to_equal` may incorrectly succeed when used with text `"None"` | **Describe your context**
- replace the result of `pip list | grep dash` below
```
dash 2.16.1
dash-core-components 2.0.0
dash-dangerously-set-inner-html 0.0.2
dash-flow-example 0.0.5
dash-html-components 2.0.0
dash-table 5.0.0
dash-testing-stub 0.0.2
```
**Describe the bug**
When `wait_for_text_to_equal` is used to wait for the text `"None"`, the function will often succeed even when you would reasonably expect it to fail.
I think this is part of the reason why the regression in #2733 wasn't caught by the tests.
This behavior is demonstrated by the following test case:
```python
import dash
from dash import html
def test_wait_for_text_to_equal_none(dash_duo):
    app = dash.Dash(__name__)
    app.layout = html.Div(id="my-div", children="Hello world")

    dash_duo.start_server(app)
    dash_duo.wait_for_text_to_equal("#my-div", "None", timeout=4)
```
**Expected behavior**
The test should fail because the contents of the `#my-div` div are never equal to `None` or `"None"`.
**Actual behavior**
The test passes.
**Explanation**
This happens because `wait_for_text_to_equal` checks not only the text content of the element, but also the value of the `value` attribute. ([see here](https://github.com/plotly/dash/blob/f7f8fb4c5893506e35cdeaec141310a95fe1486a/dash/testing/wait.py#L110C13-L113C14)).
If `value` is not defined we get a value of `None`, which is then converted to a string and therefore matches the string `"None"`.
So `dash_duo.wait_for_text_to_equal("#my-div", "None")` _always_ succeeds unless the target element has a defined `value`.
**Proposed solutions**
IMO the cleanest solution would be to modify `wait_for_text_to_equal` to check _only_ the element's text, and add a new function `wait_for_value_to_equal` which checks the value (or a generalized `wait_for_attr_to_equal` function). This would break backwards compatibility.
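A rough sketch of what the text-only variant could look like (hypothetical name, not an existing Dash testing API):
```python
from dash.testing.wait import until

def wait_for_text_only_to_equal(dash_duo, selector, text, timeout=10):
    # Compare only the rendered text, never the `value` attribute, so waiting
    # for the literal string "None" cannot match an undefined value.
    until(lambda: dash_duo.find_element(selector).text == text, timeout)
```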
Alternatively we could have `wait_for_text_to_equal` ignore `value` if value is not defined, or issue a warning when used with the text `"None"`. | closed | 2024-03-27T19:22:12Z | 2024-04-19T16:32:10Z | https://github.com/plotly/dash/issues/2818 | [] | emilykl | 1 |
skypilot-org/skypilot | data-science | 4,757 | [AWS] Fail to detect cuda using H100 and default image | A user reported that CUDA fails to be detected by torch when using `p5.48xlarge` and our default AMI, but it works with the official Ubuntu PyTorch deep learning AMI. | open | 2025-02-19T17:29:15Z | 2025-02-19T17:48:45Z | https://github.com/skypilot-org/skypilot/issues/4757 | [
"triage"
] | Michaelvll | 0 |
miguelgrinberg/Flask-SocketIO | flask | 745 | cannot run socketio on ubuntu 16.04 | Hi @miguelgrinberg, I am trying to set up a Socket.IO server on a completely new Ubuntu 16.04 machine, following this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-16-04
But I can't make it work. I have already tried every configuration I found and nothing seems to work. These are my files:
**run.py**
```python
from flask import Flask, session, request
from flask_socketio import SocketIO, emit, disconnect

app = Flask(__name__)
...
socketio = SocketIO(app, async_mode='gevent')

@socketio.on('list_event', namespace='/list')
def list(data):
    emit('list_response', {'data': 'data'}, broadcast=True)

if __name__ == '__main__':
    socketio.run(app)
```
**list of packages:**
```
Package Version
------------------ --------
alembic 1.0.0
aniso8601 3.0.2
asn1crypto 0.24.0
blinker 1.4
cffi 1.11.5
click 6.7
cryptography 2.3
Flask 1.0.2
Flask-Cors 3.0.6
Flask-JWT-Extended 3.11.0
Flask-Mail 0.9.1
Flask-Migrate 2.2.1
Flask-RESTful 0.3.6
Flask-SocketIO 3.0.1
Flask-SQLAlchemy 2.3.2
gevent 1.3.5
gevent-websocket 0.10.1
greenlet 0.4.14
idna 2.7
itsdangerous 0.24
Jinja2 2.10
Mako 1.0.7
MarkupSafe 1.0
passlib 1.7.1
pip 10.0.1
pkg-resources 0.0.0
pycparser 2.18
PyJWT 1.6.4
PyMySQL 0.9.2
python-dateutil 2.7.3
python-editor 1.0.3
python-engineio 2.2.0
python-socketio 2.0.0
pytz 2018.5
setuptools 40.0.0
six 1.11.0
SQLAlchemy 1.2.10
uWSGI 2.0.17.1
Werkzeug 0.14.1
wheel 0.31.1
```
**/etc/nginx/sites-available/site:**
```nginx
server {
    listen 8080;
    server_name 70.32.30.196;
    # return 301 $scheme://70.32.30.196:8080$request_uri;

    location / {
        include proxy_params;
        include uwsgi_params;
        uwsgi_pass http://70.32.30.196:8080;
    }

    location /socket.io/ {
        include proxy_params;
        try_files $uri $uri/ =404;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        # proxy_pass http://70.32.30.196:8080/socket.io;
    }
}
```
**site.ini**
```ini
[uwsgi]
module = run:app
master = true
processes = 5
socket = site.sock
chmod-socket = 660
vacuum = true
http-timeout = 3600000
buffer-size=32768
die-on-term = true
logto = /var/log/nginx/%n.log
```
**/etc/systemd/system/site.service:**
```ini
[Unit]
Description=uWSGI instance to serve site
After=network.target

[Service]
User=root
Group=www-data
WorkingDirectory=/var/www/html/site
Environment="PATH=/var/www/html/site/site/bin"
ExecStart=/var/www/html/site/site/bin/uwsgi --ini /var/www/html/site/site.ini

[Install]
WantedBy=multi-user.target
```
With all of these configurations the server always responds with a **404** error, a **301**, or an **invalid host in upstream** error.
As I said, I can't make it work and I can't find any guide anywhere. Please, can you help me?
Thanks | closed | 2018-07-20T03:36:01Z | 2018-09-29T09:39:18Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/745 | [
"question"
] | nexthor | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 1,076 | Xception is unavailable | When Deeplabv3+ uses Xception as the encoder, the following problem occurs.
 | open | 2025-02-24T05:56:33Z | 2025-03-05T09:42:43Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/1076 | [] | QingXiaoYan | 1 |
cchen156/Learning-to-See-in-the-Dark | tensorflow | 75 | how to test the model on my own dataset | open | 2019-04-03T08:16:41Z | 2020-10-02T14:19:30Z | https://github.com/cchen156/Learning-to-See-in-the-Dark/issues/75 | [] | AksChunara | 29 |
|
databricks/spark-sklearn | scikit-learn | 47 | Question about further support | Hi,
I am looking for roadmap information on this project. Specifically, I would like to know whether there is any ongoing work or plans to support using a model generated with scikit-learn in a Spark PipelineModel, and also whether there are plans to support other ML algorithms like decision trees, k-means, etc.
Could anyone shed some light on this?
Thanks
Praveen | closed | 2016-11-16T09:56:53Z | 2018-12-08T12:18:52Z | https://github.com/databricks/spark-sklearn/issues/47 | [] | praveend | 1 |
torchbox/wagtail-grapple | graphql | 81 | Update for Preview documentation | I am currently working on setting up my first project with Wagtail as the CMS and a Gatsby frontend and got quite confused when I was trying to get the preview to work.
In my setup I closely followed the instructions in the docs and added all the settings for the subscriptions with `channels`. This unfortunately did not quite work out for me (as is documented in [another issue](https://github.com/GrappleGQL/wagtail-grapple/issues/50)).
Because of the phrasing in the docs I assumed that I definitely need the subscriptions to work to enable the preview functionality.
> Grapple’s Headless Preview is built-on GraphQL Subscriptions which means that your client subscribes to the preview page and any changes in the Admin will be pushed to your client via WebSockets. This allows you to add preview support to any client whether that be a SPA or Native App.
Luckily, in my debugging attempts I discovered the example Grapple project in the repo. I checked if the subscriptions would work with that project, hoping that something about my own projects Wagtail setup caused the subscription to fail. Unfortunately I got the same error message. So my own projects setup seems to reflect that of the Grapple example project.
I don't know why exactly, but I thought to still try out the preview button in the admin of the Grapple example project. At first it was only showing the browsers 404 page. Which is not surprising at this point there was no frontend that would handle the preview.
Even though I had already checked that the subscriptions did not work, I went ahead and created an extremely simple Gatsby frontend for the Grapple example project. This turned out to be a great decision, because I discovered that the preview was working! It did not update on every keystroke in the Wagtail admin, but the preview tab would reflect changes in the admin after the preview button was pressed again.
This was huge to me. Until this point I thought I would have to find a different (most certainly hacky) solution for the preview.
I went back to the docs to see if I had missed something and focusing on the subscriptions was a wrong turn I took at some point, but the documentation (to me at least) seems to suggest that working subscriptions a definitely necessary to make the preview work. This does not seem to be the case.
Especially when working with the `gatsby-source-wagtail` package for the frontend the preview works basically out of the box with no extra settings for `channels` required.
My proposal here would be to update the preview documentation and split basic setup (in which the clicking of the preview button is still required) and the setup for the "live" preview (which requires the `channels` package settings).
Also, in either case, a crucial bit of documentation that is currently not mentioned at all is the settings for the `django-cors-headers`. These settings are necessary to make even the basic preview work. I luckily discovered these settings in the Grapple example project and found them mentioned in the [Wagtail Headless Preview](https://github.com/torchbox/wagtail-headless-preview) README. I think it would be extremely helpful to directly reference these in the Grapple docs to streamline the setup for new users.
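For illustration, the kind of settings I mean (only a sketch — the origin here is the Gatsby dev server in my setup, and older django-cors-headers releases call the last setting `CORS_ORIGIN_WHITELIST`):
```python
# settings.py
INSTALLED_APPS = [
    # ... existing apps ...
    "corsheaders",
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",
    # ... existing middleware ...
]

CORS_ALLOWED_ORIGINS = ["http://localhost:8000"]  # Gatsby dev server
```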
@NathHorrigan @zerolab If you in general agree with this proposal, I am happy to implement the changes and create a PR. Any feedback obviously welcome!
| open | 2020-07-20T06:40:05Z | 2020-10-06T07:28:58Z | https://github.com/torchbox/wagtail-grapple/issues/81 | [
"documentation"
] | tbrlpld | 2 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 548 | Encoder training does not use GPU | Hi, I have a TITAN RTX and Cuda 10.2 installed correctly on my system. Nevertheless, the encoder training happens on CPU. Is there a way to make it use the GPU? I could not find any option. | closed | 2020-10-06T20:09:24Z | 2020-10-07T22:58:38Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/548 | [] | padmalcom | 3 |
jwkvam/bowtie | plotly | 237 | add helpful messages for yarn and node | - [ ] determine minimum yarn version
- [x] determine minimum node version
- [x] raise helpful message when building if yarn or node are missing or out of date | open | 2018-09-19T19:07:12Z | 2018-10-13T05:44:23Z | https://github.com/jwkvam/bowtie/issues/237 | [
"user experience"
] | jwkvam | 0 |
tflearn/tflearn | tensorflow | 1,105 | how to get the output of any specific layer | I am trying to get the output of a fully-connected layer, but i can't find any api I can use. can anyone help me? thank u | closed | 2018-12-16T03:42:47Z | 2019-01-07T14:55:03Z | https://github.com/tflearn/tflearn/issues/1105 | [] | WuChannn | 2 |
ploomber/ploomber | jupyter | 895 | `_suggest_command` does not work for some examples | See https://github.com/ploomber/ploomber/pull/825#issuecomment-1171692773
`_suggest_command` returns `None` for the input `grid` but we expect `cookbook/grid`. There may be some other cases that do not work as well. | closed | 2022-07-06T19:40:55Z | 2022-09-02T22:56:15Z | https://github.com/ploomber/ploomber/issues/895 | [] | 94rain | 1 |
graphistry/pygraphistry | pandas | 295 | [FEA] Control the nodes and relationship properties displayed in the graphistry graphs | Request to include a function which filters all properties of a node or a relationship, where we just mention the property name and only those mentioned in the function are displayed when the graphs are shown.

Taking this image as an example
Mentioning -
"Color"
"id"
"name"
Would only show those specific 3 properties in the output when the particular node is selected.
If there are nodes with different labels and properties, we can also mention the required properties for specific labeled node.
Default value is showing all properties | open | 2021-12-27T12:14:31Z | 2021-12-27T18:24:19Z | https://github.com/graphistry/pygraphistry/issues/295 | [
"enhancement"
] | Parth-Joshi-6669 | 1 |
pyjanitor-devs/pyjanitor | pandas | 881 | Speed up pytest | # Brief Description
When I run the tests locally, I find that only a single core is used.
That makes the test run **very slow**.
To speed this up we could add some options such as `-n`.
# Example API
From
https://github.com/pyjanitor-devs/pyjanitor/blob/53c86113b62d3245937c5596332f262172d916bc/Makefile#L21-L23
to
```Makefile
test:
	@echo "Running test suite..."
	pytest -v -n auto --color=yes
``` | closed | 2021-08-18T07:09:43Z | 2021-08-20T12:14:46Z | https://github.com/pyjanitor-devs/pyjanitor/issues/881 | [] | Zeroto521 | 1 |
nicodv/kmodes | scikit-learn | 149 | .whl file for kmode algo ? | We're struggling to deploy this algorithm on a secure server where there is no internet connection, so I just want to know if there is any .whl file available for the kmodes algorithm, or any other way through which we can deploy kmodes on a secure server (having no internet connectivity).
| closed | 2020-10-24T19:58:57Z | 2021-02-13T03:44:10Z | https://github.com/nicodv/kmodes/issues/149 | [
"question"
] | omairmauz | 1 |
miguelgrinberg/flasky | flask | 152 | How to recreate development database at the end of ch 9? | At the end of ch9, it is mentioned that we need to recreate/update the db. How do we update it?
Do I just have to open the manage.py python shell and run:
`Role.insert_roles()`
Do I need to explicitly assign roles to users in the shell? If so, what is the command for that?
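(Something like the following is what I have in mind for the shell session — the email is just a placeholder and I am guessing the attribute names from the book's models:)
```python
# inside `python manage.py shell`, where User, Role and db are already available
Role.insert_roles()
u = User.query.filter_by(email='admin@example.com').first()
u.role = Role.query.filter_by(name='Administrator').first()
db.session.add(u)
db.session.commit()
```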
And then I run the following in the terminal:
`python manage.py db migrate`
`python manage.py db upgrade`
In case I want to recreate the entire db, how do I do it?
| closed | 2016-06-07T11:13:26Z | 2017-03-17T19:00:42Z | https://github.com/miguelgrinberg/flasky/issues/152 | [] | priyankamandikal | 6 |
Urinx/WeixinBot | api | 173 | How can I tell which group a group-chat message belongs to? | Hi!
If the robot has joined multiple group chats, how can it tell which group an incoming group-chat message belongs to? For example: group 1 has four members a, d, c and the robot, and group 2 has a, f, g and the robot. If a (who is in both group 1 and group 2) sends a message, how can the robot tell whether that message was sent in group 1 or in group 2?
I looked through the API but could not find any method or attribute to distinguish them.
Thanks in advance! | open | 2017-04-01T06:45:56Z | 2017-04-01T06:45:56Z | https://github.com/Urinx/WeixinBot/issues/173 | [] | callmelu | 0
keras-team/keras | machine-learning | 20,359 | How to apply a threshold filter to a layer? | Having an array like this :
`input = np.array([[0.04, -0.8, -1.2, 1.3, 0.85, 0.09, -0.08, 0.2]])`
I want to change all the values (of the last dimension) between -0.1 and 0.1 to zero and change the rest to 1
`filtred = [[0, 1, 1, 1, 1, 0, 0, 1]]`
Using the Lambda layer is not my favourite choice (I would prefer to find a solution with a native layer which could be easily converted to TfLite without activating the `SELECT_TF_OPS` or the `TFLITE_BUILTINS` options) but I tried it anyway:
```
layer = tf.keras.layers.Lambda(lambda x: 0 if x <0.1 and x>-0.1 else 1)
layer(input)
```
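(That attempt fails with the error below; an element-wise sketch that avoids Python `and` on tensors would be something like this, though I'd still prefer a non-Lambda solution:)
```python
import tensorflow as tf

# |x| <= 0.1 -> 0, otherwise 1, computed element-wise
layer = tf.keras.layers.Lambda(lambda x: tf.cast(tf.abs(x) > 0.1, x.dtype))
```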
I am getting :
```
ValueError: Exception encountered when calling Lambda.call().
The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Arguments received by Lambda.call():
• inputs=tf.Tensor(shape=(6,), dtype=float32)
• mask=None
• training=None
``` | closed | 2024-10-15T17:53:16Z | 2024-10-27T14:33:05Z | https://github.com/keras-team/keras/issues/20359 | [
"type:Bug"
] | nassimus26 | 4 |
axnsan12/drf-yasg | rest-api | 304 | Error in schema generator | getting this error after updating to 1.13.0 (everything was fine with 1.12.1):
```
Traceback (most recent call last):
File "/lib/python3.6/site-packages/django/core/handlers/exception.py", line 35, in inner
response = get_response(request)
File "/lib/python3.6/site-packages/django/core/handlers/base.py", line 128, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/lib/python3.6/site-packages/django/core/handlers/base.py", line 126, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/lib/python3.6/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/lib/python3.6/site-packages/django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "/lib/python3.6/site-packages/rest_framework/views.py", line 483, in dispatch
response = self.handle_exception(exc)
File "/lib/python3.6/site-packages/rest_framework/views.py", line 443, in handle_exception
self.raise_uncaught_exception(exc)
File "/lib/python3.6/site-packages/rest_framework/views.py", line 480, in dispatch
response = handler(request, *args, **kwargs)
File "/lib/python3.6/site-packages/drf_yasg/views.py", line 95, in get
schema = generator.get_schema(request, self.public)
File "/lib/python3.6/site-packages/drf_yasg/generators.py", line 244, in get_schema
paths, prefix = self.get_paths(endpoints, components, request, public)
File "/lib/python3.6/site-packages/drf_yasg/generators.py", line 402, in get_paths
operation = self.get_operation(view, path, prefix, method, components, request)
File "/lib/python3.6/site-packages/drf_yasg/generators.py", line 444, in get_operation
operation = view_inspector.get_operation(operation_keys)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/view.py", line 40, in get_operation
body = self.get_request_body_parameters(consumes)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/view.py", line 93, in get_request_body_parameters
schema = self.get_request_body_schema(serializer)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/view.py", line 150, in get_request_body_schema
return self.serializer_to_schema(serializer)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/base.py", line 438, in serializer_to_schema
self.field_inspectors, 'get_schema', serializer, {'field_inspectors': self.field_inspectors}
File "/lib/python3.6/site-packages/drf_yasg/inspectors/base.py", line 118, in probe_inspectors
result = method(obj, **kwargs)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/field.py", line 39, in get_schema
return self.probe_field_inspectors(serializer, openapi.Schema, self.use_definitions)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/base.py", line 238, in probe_field_inspectors
swagger_object_type=swagger_object_type, use_references=use_references, **kwargs
File "/lib/python3.6/site-packages/drf_yasg/inspectors/base.py", line 118, in probe_inspectors
result = method(obj, **kwargs)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/field.py", line 125, in field_to_swagger_object
actual_schema = definitions.setdefault(ref_name, make_schema_definition)
File "/lib/python3.6/site-packages/drf_yasg/openapi.py", line 679, in setdefault
ret = maker()
File "/lib/python3.6/site-packages/drf_yasg/inspectors/field.py", line 101, in make_schema_definition
child, ChildSwaggerType, use_references, **prop_kwargs
File "/lib/python3.6/site-packages/drf_yasg/inspectors/base.py", line 238, in probe_field_inspectors
swagger_object_type=swagger_object_type, use_references=use_references, **kwargs
File "/lib/python3.6/site-packages/drf_yasg/inspectors/base.py", line 118, in probe_inspectors
result = method(obj, **kwargs)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/field.py", line 79, in field_to_swagger_object
child_schema = self.probe_field_inspectors(field.child, ChildSwaggerType, use_references)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/base.py", line 238, in probe_field_inspectors
swagger_object_type=swagger_object_type, use_references=use_references, **kwargs
File "/lib/python3.6/site-packages/drf_yasg/inspectors/base.py", line 118, in probe_inspectors
result = method(obj, **kwargs)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/field.py", line 125, in field_to_swagger_object
actual_schema = definitions.setdefault(ref_name, make_schema_definition)
File "/lib/python3.6/site-packages/drf_yasg/openapi.py", line 679, in setdefault
ret = maker()
File "/lib/python3.6/site-packages/drf_yasg/inspectors/field.py", line 101, in make_schema_definition
child, ChildSwaggerType, use_references, **prop_kwargs
File "/lib/python3.6/site-packages/drf_yasg/inspectors/base.py", line 238, in probe_field_inspectors
swagger_object_type=swagger_object_type, use_references=use_references, **kwargs
File "/lib/python3.6/site-packages/drf_yasg/inspectors/base.py", line 118, in probe_inspectors
result = method(obj, **kwargs)
File "/lib/python3.6/site-packages/drf_yasg/inspectors/field.py", line 628, in field_to_swagger_object
values_type = get_basic_type_info_from_hint(next(iter(enum_value_types)))
File "/lib/python3.6/site-packages/drf_yasg/inspectors/field.py", line 502, in get_basic_type_info_from_hint
if typing and get_origin_type(hint_class) == typing.Union:
File "/lib/python3.6/typing.py", line 760, in __eq__
return self._subs_tree() == other
File "/lib/python3.6/typing.py", line 760, in __eq__
return self._subs_tree() == other
File "/lib/python3.6/typing.py", line 760, in __eq__
return self._subs_tree() == other
[Previous line repeated 215 more times]
File "/lib/python3.6/typing.py", line 759, in __eq__
if not isinstance(other, _Union):
RecursionError: maximum recursion depth exceeded while calling a Python object
```
Had no chance to dig deeper yet, most likely related to https://github.com/axnsan12/drf-yasg/pull/272 | closed | 2019-02-01T08:15:03Z | 2019-02-21T23:00:15Z | https://github.com/axnsan12/drf-yasg/issues/304 | [
"bug"
] | rsichnyi | 8 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 1,189 | __bind_key__ not working | Hello,
the bind key is not working for me. Is this a bug, or a problem with my code?
All data is written to `database.db`, but shold be seperated to the two databases. The `database_logging.db` was created but is empty.
The relevant extract of the code. I need the declarative_base because I want to seperate the table definitions over multiple files.
database.py
```
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
```
app.py
```
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(basedir, 'database/database.db')
app.config['SQLALCHEMY_BINDS'] = {
    'logging': 'sqlite:///' + os.path.join(basedir, 'database/database_logging.db')
}
db = SQLAlchemy(app, model_class=Base)

with app.app_context():
    db.create_all()
```
data.py
```
from sqlalchemy import *
from database.database import Base
class atable(Base):
    __bind_key__ = "logging"
    __tablename__ = "a"

    id = Column(Integer, primary_key=True)
    abc = Column(Text, nullable=False, index=True)

    def __repr__(self):
        return f'<a {self.abc}>'

class btable(Base):
    __tablename__ = "b"

    id = Column(Integer, primary_key=True)
    abc = Column(Text, nullable=False, index=True)

    def __repr__(self):
        return f'<b {self.abc}>'
```
Environment:
- Python version: 3.10
- Flask-SQLAlchemy version: 3.0.3
- SQLAlchemy version: 2.0.9
| closed | 2023-04-08T17:22:21Z | 2023-04-23T01:10:28Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1189 | [] | Laserlicht | 2 |
CatchTheTornado/text-extract-api | api | 68 | [feat] Change routing from ocr to extract | [ ] Change routing from ocr to extract with alias
[ ] Clean main.py - move routes to submodules
```
router = APIRouter()
@router.post("/extract", tags=["Extract"])
@router.post("/ocr", include_in_schema=False)
(...)
```
| open | 2025-01-11T00:19:39Z | 2025-01-19T16:55:22Z | https://github.com/CatchTheTornado/text-extract-api/issues/68 | [] | choinek | 0 |
K3D-tools/K3D-jupyter | jupyter | 319 | z-fighting | Is there a solution to mitigate z-fighting in k3d?
I have some mesh faces very close together which leads to z-fighting in some areas
Moving the camera closer to the mesh fixed the issue, but is there a way to fix it in all camera positions/angles?

| closed | 2021-11-20T22:27:42Z | 2022-04-05T23:39:30Z | https://github.com/K3D-tools/K3D-jupyter/issues/319 | [
"Next release"
] | esalehim | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 673 | Feature Request: Argument to set cache directory | Hi,
While using your models, I found out that there is no straightforward way to store downloaded weights in a specific directory.
Currently, to store weights in a specific directory, I have to download the weights separately into my directory and then load them using the load_state_dict function.
```
wget url.to.download.efficientnet_b2.weights
```
```python
import torch

model = smp.Unet(encoder_name="efficientnet-b2", encoder_weights=None)
model.encoder.load_state_dict(torch.load("/path/to/downloaded/efficientnet_b2_weights.pth"))  # load_state_dict expects a state dict, not a path
```
Now, I want there to be a parameter in the model class itself to give dir for downloading weights (just like in huggingface's transformers library) example:
```python
model = smp.Unet(encoder_name="efficientnet-b2", encoder_weights='imagenet', cache_dir='./cache/')
```
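In the meantime, the workaround I have settled on is pointing `TORCH_HOME` at my directory before the weights are fetched (a sketch — assuming smp still downloads pretrained encoder weights through `torch.hub`):
```python
import os
os.environ["TORCH_HOME"] = "./cache"  # torch.hub then caches under ./cache/hub/checkpoints

import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="efficientnet-b2", encoder_weights="imagenet")
```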
| closed | 2022-10-15T18:19:35Z | 2022-12-23T01:59:31Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/673 | [
"Stale"
] | mrFahrenhiet | 2 |
open-mmlab/mmdetection | pytorch | 12,028 | How to compute custom iou threshold map from cocoapi? | How can I compute mAP at a custom IoU threshold from cocoapi? I want to compute mAP@0.20, since my task is detecting tiny objects whose area is lower than 256 pixels. | open | 2024-11-05T03:04:58Z | 2024-11-05T04:32:40Z | https://github.com/open-mmlab/mmdetection/issues/12028 | [] | risingClouds | 0
inducer/pudb | pytest | 531 | Incorrect display size which does not adapt in XFCE terminal | When I insert this code to the python script
script.sh
```
#!/usr/bin/env python
...code...
from pudb import set_trace; set_trace()
```
Then using Xfce session run through FastX2 in xfce4-terminal with bash shell in an conda environment I run
`./script.sh`
The pudb interface opens always so that its graphics fits default dimensions of the xfce4-terminal even if I resize or maximize it before or after running the script.
The default size of the terminal when I open xfce4-terminal is
`echo "LINES=$LINES COLUMNS=$COLUMNS"`
LINES=24 COLUMNS=80
and when I maximize terminal
`echo "LINES=$LINES COLUMNS=$COLUMNS"`
LINES=52 COLUMNS=201
In maximized mode the following code
```
import shutil
print(shutil.get_terminal_size(), flush=True)
```
provides correct dimensions (201,52).
Whatever I do pudb uses just default 24 LINES and 80 COLUMNS to render its graphical interface which is not enough for convenient debugging.
It would be nice if I could
`set_trace(term_size=(201,52))`
but this functionality seems not to be available for non remote pudb.
Might be related to the issue
https://github.com/getgauge/gauge-python/issues/130
Screenshot

| closed | 2022-07-19T13:31:40Z | 2022-07-20T13:49:02Z | https://github.com/inducer/pudb/issues/531 | [] | kulvait | 7 |
frol/flask-restplus-server-example | rest-api | 80 | Removing all log handers is skipping some | There is a defect in the `Logging` class in the file `app/extensions/logging/__init__.py` where it attempts to remove existing Flask log handlers on lines `26` & `27` but it has a flaw in it's logic and only removes 1 of the 2 default log handlers.
The reason is that the code is modifying the collection that it is iterating over which is always a _bad idea_ because the iterator will skip members as the collection changes. The same is true for lines `40` & `41` which attempt to remove the SQLAlchemy log handlers.
**How to reproduce:**
This code snippet using lines 26 & 27 with some print statements show the logic error:
```python
''' Example showing iteration defect '''
from flask import Flask
app = Flask(__name__)
if __name__ == '__main__':
print('Log handler count before {}'.format(len(app.logger.handlers)))
for handler in app.logger.handlers:
app.logger.removeHandler(handler)
print('Log handler count after {}'.format(len(app.logger.handlers)))
```
The results are:
```shell
Log handler count before 2
Log handler count after 1
```
Which shows that the code does not do what was intended if the intent was to remove _all_ of the log handlers.
**How to fix:**
The fix is to always make a copy of _any_ collection that you intend to modify in a loop and iterate of over the _copy_ (which remains constant) as you modify the _real_ collection:
```python
''' Example showing correct iteration technique '''
from flask import Flask
app = Flask(__name__)
if __name__ == '__main__':
print('Log handler count before {}'.format(len(app.logger.handlers)))
handler_list = list(app.logger.handlers)
for handler in handler_list:
app.logger.removeHandler(handler)
print('Log handler count after {}'.format(len(app.logger.handlers)))
```
The now results are:
```shell
Log handler count before 2
Log handler count after 0
```
Which I believe is the desired result.
| closed | 2017-11-06T13:07:05Z | 2017-11-07T00:52:47Z | https://github.com/frol/flask-restplus-server-example/issues/80 | [] | rofrano | 2 |
scikit-learn/scikit-learn | machine-learning | 30,571 | extra dependency needed for update lockfiles script | https://github.com/scikit-learn/scikit-learn/blob/6c163c68c8f6fbe6015d6e2ccc545eff98f655ff/build_tools/update_environments_and_lock_files.py#L28-L32
You also need `conda`. Without it I see `FileNotFoundError: [Errno 2] No such file or directory: 'conda'`
Developers who use micromamba may not have conda installed. | closed | 2025-01-02T17:57:43Z | 2025-01-03T18:23:53Z | https://github.com/scikit-learn/scikit-learn/issues/30571 | [
"Needs Triage"
] | lucascolley | 0 |
pydantic/pydantic-ai | pydantic | 1,146 | Tool return types and docstrings are not used | ### Initial Checks
- [x] I confirm that I'm using the latest version of Pydantic AI
### Description
I noticed that the return type annotations and docstrings of tools are not actually used by pydantic-ai. I was struggling to make my agent understand the instructions and only now noticed that my agent didn't even see the instructions because (part of it) was placed under the "Returns" section of the docstring.
I was wondering if this was left out intentionally, and if so, what was the reason to do so?
If this is indeed intentional, I think it should be mentioned in the documentation somewhere.
### Example Code
```Python
```
### Python, Pydantic AI & LLM client version
```Text
python 3.12
pydantic==2.10.6
pydantic-ai==0.0.40
pydantic-ai-slim==0.0.40
pydantic-graph==0.0.40
pydantic_core==2.27.2
``` | closed | 2025-03-17T09:56:00Z | 2025-03-22T10:00:58Z | https://github.com/pydantic/pydantic-ai/issues/1146 | [
"documentation",
"Feature request",
"help wanted"
] | Krogager | 5 |
miguelgrinberg/python-socketio | asyncio | 810 | The emit call in client managers does not implement the `to` argument | The `emit()` in the `Server` classes accepts both `to` and `room` for the recipient, but the client managers only have `room`. | closed | 2021-10-24T12:04:00Z | 2021-10-24T18:54:29Z | https://github.com/miguelgrinberg/python-socketio/issues/810 | [
"bug"
] | miguelgrinberg | 0 |
ckan/ckan | api | 8,289 | postgres 12 | postgres 10 has been unsupported for 2 years https://www.postgresql.org/support/versioning/
Let's suggest using and test against postgres 12, the oldest supported postgres version (supported until November this year) | closed | 2024-06-20T16:26:09Z | 2024-06-26T12:51:14Z | https://github.com/ckan/ckan/issues/8289 | [] | wardi | 0 |
miguelgrinberg/python-socketio | asyncio | 965 | Exception in MacOs while its working in Windows 10 | **Describe the bug**
I'm trying to consume the socket.io used in https://pypi.org/project/breeze-connect/ and in MacOs its throwing the following error.
`Traceback (most recent call last):
File "/Users/suresh/PycharmProjects/icicialgo/Algo.py", line 64, in <module>
breeze.ws_connect()
File "/Users/suresh/PycharmProjects/icicialgo/venv/lib/python3.10/site-packages/breeze_connect/breeze_connect.py", line 52, in ws_connect
self.sio_handler.connect()
File "/Users/suresh/PycharmProjects/icicialgo/venv/lib/python3.10/site-packages/breeze_connect/breeze_connect.py", line 19, in connect
self.sio.connect(self.hostname, headers={"User-Agent":"python-socketio[client]/socket"}, auth=auth, transports="websocket", wait_timeout=30)
File "/Users/suresh/PycharmProjects/icicialgo/venv/lib/python3.10/site-packages/socketio/client.py", line 338, in connect
raise exceptions.ConnectionError(exc.args[0]) from None
socketio.exceptions.ConnectionError: Connection error
```
The same code works well in Windows 10 without any change.
python-engineio==4.3.1
python-socketio==5.5.2
The above are the versions used.
| closed | 2022-07-17T04:28:41Z | 2023-02-17T10:43:35Z | https://github.com/miguelgrinberg/python-socketio/issues/965 | [
"question"
] | suresh-j2 | 3 |
pytest-dev/pytest-html | pytest | 787 | Logs before rerun appear in html report | ```py
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
seq_num = 0
def seq():
    global seq_num
    seq_num += 1
    return seq_num

def test_fail():
    logger.info(f'Test log {seq()}')
    assert False
```
pytest --reruns 1 --html=report.html
The report has logs of both runs:
```
def test_fail():
logger.info(f'Test log {seq()}')
> assert False
E assert False
test_my.py:17: AssertionError
------------------------------ Captured log call -------------------------------
INFO test_my:test_my.py:16 Test log 1
------------------------------ Captured log call -------------------------------
INFO test_my:test_my.py:16 Test log 2
```
It looks like there is logic in `_process_logs` to skip rerun logs, but this function is not called (maybe it was changed in pytest).
```py
# Don't add captured output to reruns
if report.outcome != "rerun":
``` | open | 2023-12-22T13:42:54Z | 2024-01-08T19:56:04Z | https://github.com/pytest-dev/pytest-html/issues/787 | [] | orgads | 8 |
Kanaries/pygwalker | matplotlib | 475 | How to hide the first row "preview" in data tab | Hi team,
I would like to hide the first row that shows the distribution in the data tab. How can I do that?
I tried setting use_preview = False but it's not working.
```
walker = pyg.to_html(df,default_tab="data",use_preview=False)
```
The error returned:
TypeError: pygwalker.api.pygwalker.PygWalker() got multiple values for keyword argument 'use_preview' | open | 2024-03-09T08:43:40Z | 2024-03-09T09:15:38Z | https://github.com/Kanaries/pygwalker/issues/475 | [
"enhancement",
"good first issue"
] | thienphuoc86 | 3 |
kornia/kornia | computer-vision | 3,083 | ruff (D): adopt some the ignored rules for the docstrings | we need an issue to discuss and check which of these rules we may want to have
_Originally posted by @johnnv1 in https://github.com/kornia/kornia/pull/3082#discussion_r1868460841_
The ignored rules are (from https://docs.astral.sh/ruff/rules/#pydocstyle-d):
- 'D100' : Missing docstring in public module
- 'D101' : Missing docstring in public class
- 'D102' : Missing docstring in public method
- 'D103' : Missing docstring in public function
- 'D104' : Missing docstring in public package
- 'D105' : Missing docstring in magic method
- 'D107' : Missing docstring in __init__
- 'D203' : 1 blank line required before class docstring
- 'D204' : 1 blank line required after class docstring
- 'D205' : 1 blank line required between summary line and description
- 'D213' : Multi-line docstring summary should start at the second line
- 'D400' : First line should end with a period
- 'D401' : First line of docstring should be in imperative mood: "{first_line}"
- 'D404' : First word of the docstring should not be "This"
- 'D406' : Section name should end with a newline ("{name}")
- 'D407' : Missing dashed underline after section ("{name}")
- 'D415' : First line should end with a period, question mark, or exclamation point
- 'D417' : Missing argument description in the docstring for {definition}: {name}
maybe we should adopt some default style used by the community (numpy or google) cc @edgarriba @shijianjian @ducha-aiki
- https://docs.astral.sh/ruff/faq/#does-ruff-support-numpy-or-google-style-docstrings
- https://docs.astral.sh/ruff/formatter/#docstring-formatting
---
Missing rules to be enable after #3088
- [ ] 'D100'
- [ ] 'D101'
- [ ] 'D102'
- [ ] 'D103
- [ ] 'D104',
- [ ] 'D105'
- [ ] 'D107'
- [ ] 'D417'
| open | 2024-12-03T22:46:55Z | 2024-12-17T07:17:57Z | https://github.com/kornia/kornia/issues/3083 | [
"help wanted",
"good first issue",
"docs :books:"
] | johnnv1 | 8 |
thtrieu/darkflow | tensorflow | 691 | identifying persons | Is there any way that to extend the person detection to identify the person. Example If we train the program using some known person image, how to extend this project to identify those persons from the video ? | closed | 2018-04-03T09:16:10Z | 2018-04-04T03:24:00Z | https://github.com/thtrieu/darkflow/issues/691 | [] | vins2win | 2 |
sktime/pytorch-forecasting | pandas | 1,029 | question: MultiHorizonMetric temporal-based error-weighting | - PyTorch-Forecasting version: **0.10.2**
### Question
I wonder whether it's possible (and if possible - how) to pass **shared temporal-based weights** to calculate loss, inherited from [MultiHorizonMetric](https://pytorch-forecasting.readthedocs.io/en/stable/_modules/pytorch_forecasting/metrics/base_metrics.html#MultiHorizonMetric) ?
Say, with prediction horizon = 60 points
- we need to pay more attention to **recent future** errors (predictions 1-20)
- than to **the latest** (21-60).
- for **each created series from train dataset** to train models on (aka shared weights)
I see we can indirectly use `weight` column in TimeSeriesDataset to set, i.e., exponential decay given `relative_time_idx` for particular timeseries and weight **data rows** differently (each row that starts from such `relative_time_idx`)
- given particular series, that starts from p21 to p80 (of length 60), we can weight this row by `w`
- given particular series, that starts from p22 to p81 (of length 60), we can weight this row by `0.9*w` and so on
But I haven't noticed any hints in the code for using it separately, **column-wise and shared for all the series**?
i.e. given forecasting horizon = 5, pass a list/array of `[w1, w2, w3, w4, w5]` to be used in loss reduction across temporal axis?
If there are no workarounds to do it, maybe some `enhancement` in metrics/loss `__init__` should be planned in future releases?
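The closest workaround I could come up with is subclassing a metric myself — a rough sketch (not sure this is the intended way):
```python
import torch
from pytorch_forecasting.metrics import MAE

class TimeWeightedMAE(MAE):
    """MAE whose per-step losses are scaled by a fixed horizon weight vector shared across all series."""

    def __init__(self, horizon_weights, **kwargs):
        super().__init__(**kwargs)
        self.horizon_weights = torch.as_tensor(horizon_weights, dtype=torch.float)

    def loss(self, y_pred, target):
        unweighted = super().loss(y_pred, target)  # shape: (batch, n_timesteps)
        return unweighted * self.horizon_weights.to(unweighted.device)
```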
Any help/hints/ideas are very appreciated
Thanks in advance!
| open | 2022-06-14T10:49:01Z | 2022-06-14T21:13:32Z | https://github.com/sktime/pytorch-forecasting/issues/1029 | [] | fnavruzov | 0 |
twopirllc/pandas-ta | pandas | 617 | variance - sample or population variance? | ta.variance can cover both variances: population variance and sample variance:
ddof = 0 => population variance
ddof = 1 => sample variance
TALIB's VAR function can calculate only _population_ variance (when ddof = 0)
**Bug**
the default is ddof=1, instead ddof=0 - therefore ta.variance returns:
- _sample_ variance when TALIB is present, but
- _population_ variance when TALIB is not installed
To add to the confusion: the comment at the bottom of ta.variance code claims that
**default ddof = 0** (as it should be) but the code sets **default ddof = 1**
**Expected behavior**
the default variance should return **population** variance - with or without TALIB installed. Sample variance should be returned only when explicitly adding ddof=1
**Recommended fix**
Line 10 of [variance.py](https://github.com/twopirllc/pandas-ta/blob/main/pandas_ta/statistics/variance.py) should read:
`ddof = int(ddof) if isinstance(ddof, int) and ddof >= 0 and ddof < length else 0` | closed | 2022-11-14T20:31:03Z | 2023-08-30T17:55:30Z | https://github.com/twopirllc/pandas-ta/issues/617 | [
"bug"
] | mihakralj | 1 |
onnx/onnx | machine-learning | 6,185 | Shape Inference crash on Gemm | # Bug Report
### Describe the bug
```
import onnx
import onnx.parser
model = """
<
ir_version: 10,
opset_import: ["" : 6]
>
graph (double[2,1] in0, double in1, double[2] in2) => () {
out0 = Gemm <alpha: float = 1, beta: float = -693.752, broadcast: int = -436, transB: int = 823> (in0, in1, in2)
}
"""
onnx.shape_inference.infer_shapes(onnx.parser.parse_model(model))
```
crashes with a segmentation fault.
### System information
- OS Platform and Distribution (*. Linux Ubuntu 20.04*): Linux Ubuntu 20.04
- ONNX version (*e.g. 1.13*): 1.16.1
- Python version: 3.10.12
### Expected behavior
No crash, but an error that the model is invalid | closed | 2024-06-17T09:33:29Z | 2024-07-25T15:27:03Z | https://github.com/onnx/onnx/issues/6185 | [
"bug",
"module: shape inference"
] | mgehre-amd | 0 |
microsoft/hummingbird | scikit-learn | 160 | Cleanup SKL LabelEncoder Converter Code | We have the original draft of the SKL LabelEncoder code uploaded in [this branch](https://github.com/microsoft/hummingbird/tree/cleanup/label-encoder). All credit here goes to @scnakandala, the original author and brains behind this.
It contains an un-edited implementation of LabelEncoder [here](https://github.com/microsoft/hummingbird/blob/cleanup/label-encoder/hummingbird/ml/operator_converters/skl_label_encoder.py).
There is a test file [here](https://github.com/microsoft/hummingbird/blob/cleanup/label-encoder/tests/test_sklearn_label_encoder_converter.py) that needs to be cleaned up and passing (and also tests added for errors).
**Notes:**
* We had to treat strings a bit differently because of the way they are represented in tensors
* We should add more expansive tests
* In the numeric label encoder, there is a tricky spot [here](https://github.com/microsoft/hummingbird/blob/cleanup/label-encoder/hummingbird/ml/operator_converters/skl_label_encoder.py#L46) related to running on GPU vs CPU.
* There may be some bugfixes/improvements that are available now. Our CI/CD pipeline is not currently GPU-enabled, but it would be helpful to be able to run this on a GPU-enabled machine when developing. | closed | 2020-06-19T18:55:18Z | 2020-12-18T00:42:34Z | https://github.com/microsoft/hummingbird/issues/160 | [] | ksaur | 3 |
unit8co/darts | data-science | 2,189 | [BUG] historical_forecasts() method gives empty list in v0.27.2 | **Describe the bug**
After upgrading my darts package from v0.24.0 to v0.27.2, historical_forecasts() method for the Random Forest, XGBoost and LightGBM models gives empty list which leads to NaN in computing the backtest error.
**To Reproduce**
code snippet to reproduce the issue:
```
import pandas as pd
from darts import TimeSeries
from darts.models.forecasting.random_forest import RandomForest
from darts.metrics import mae, rmse
# sample data
time_series = [298.0, 225.0, 228.0, 234.0, 211.0, 253.0, 312.0, 256.0, 267.0, 206.0, 292.0, 336.0, 273.0, 201.0, 287.0, 232.0, 125.0, 98.0, 248.0, 20.0, 326.0, 178.0, 238.0, 243.0, 298.0, 225.0, 228.0, 234.0, 211.0, 253.0, 312.0, 256.0, 267.0, 206.0, 292.0, 336.0, ]
# Create the Index
index_ = pd.date_range('2021-01-01', periods=len(time_series), freq='M')
timeseries = pd.Series(time_series, index=index_, name='time_series')
darts_timeseries = TimeSeries.from_series(timeseries)  # from_series takes the pandas Series created above
m = RandomForest(
    lags=12,
    output_chunk_length=12,
    use_static_covariates=False,
    multi_models=False,
)
min_train_len = max(m.min_train_series_length, 2 * 12)
historical_forecast = m.historical_forecasts(
    series=darts_timeseries,
    train_length=min_train_len + 1,
    forecast_horizon=int(12 / 2.0),
    retrain=True,
    last_points_only=False,
)
m.backtest(
    series=darts_timeseries,
    historical_forecasts=historical_forecast,
    train_length=min_train_len + 1,
    forecast_horizon=int(12 / 2.0),
    retrain=True,
    last_points_only=False,
    metric=[rmse, mae],
)
```
**Expected behavior**
Backtest error should give something like:
```
array([46.57663223, 40.65416667])
```
and historical_forecasts() method should give non-empty list.
**System (please complete the following information):**
- Python version: 3.10.11
- darts version: 0.27.2
**Additional context**
Add any other context about the problem here.
| open | 2024-01-26T08:39:39Z | 2024-02-09T12:45:55Z | https://github.com/unit8co/darts/issues/2189 | [
"bug"
] | VivCh14 | 0 |
ijl/orjson | numpy | 475 | update issue - orjson 3.10.1 on Almalinux 9 | I cannot update orjson 3.10.1 on Almalinux 9...
H.
```
poetry update
- Updating orjson (3.10.0 -> 3.10.1): Failed
RuntimeError
Unable to find installation candidates for orjson (3.10.1)
at env/lib/python3.11/site-packages/poetry/installation/chooser.py:74 in choose_for
70│
71│ links.append(link)
72│
73│ if not links:
→ 74│ raise RuntimeError(f"Unable to find installation candidates for {package}")
75│
76│ # Get the best link
77│ chosen = max(links, key=lambda link: self._sort_key(package, link))
78│
Cannot install orjson.
``` | closed | 2024-04-16T16:55:17Z | 2024-04-26T08:01:51Z | https://github.com/ijl/orjson/issues/475 | [
"Stale"
] | jankrnavek | 0 |
OpenBB-finance/OpenBB | python | 6,969 | [IMPROVE] `obb.equity.screener`: Make Input Of Country & Exchange Uniform Across Providers | In the `economy` module, the `country` parameter can be entered as names or two-letter ISO codes. The same treatment should be applied to the `obb.equity.screener` endpoint. Additionally, the "exchange" parameter should reference ISO MICs - i.e, XNAS instead of NASDAQ, XNYS instead of NYSE, etc.
| open | 2024-11-27T04:48:54Z | 2024-11-27T04:48:54Z | https://github.com/OpenBB-finance/OpenBB/issues/6969 | [
"enhancement",
"platform"
] | deeleeramone | 0 |
jupyter-book/jupyter-book | jupyter | 2,204 | singlehtml and pdfhtml formatting and images from code not showing | I'd like to get a single PDF copy of some UC Berkeley textbooks including https://github.com/data-8/textbook and https://github.com/prob140/textbook/.
Building the books locally with the basic `jupyter-book build <path-to-book>` command works just fine, however when I use `--builder singlehtml` or `--builder pdfhtml`, I end up with fairly ugly output and am missing images that are generated from code. The images don't appear, I just get a little image icon and a file path for where the image should be. When I use `--builder singlehtml`, I can see that the images do exist in `_images`.
Here's the PDF I'm generating now. You can see the issues with images in section 1.3.1: [ugly-textbook.pdf](https://github.com/user-attachments/files/16949240/book.pdf).
Two years ago someone else generated a PDF copy of the Data 8 textbook that didn't have these issues:
[old-textbook.pdf](https://github.com/user-attachments/files/16949259/textbook.pdf).
| open | 2024-09-10T18:38:26Z | 2024-09-10T18:38:26Z | https://github.com/jupyter-book/jupyter-book/issues/2204 | [] | pancakereport | 0 |
horovod/horovod | machine-learning | 4,021 | Unexpected Worker Failure when using Elastic Horovod + Process Sets | **Environment:**
1. Framework: PyTorch
2. Framework version: 1.9.0+cu102
3. Horovod version: 0.28.1
4. MPI version: N/A
5. CUDA version: cu102
6. NCCL version: 2708
7. Python version: 3.9.18
8. Spark / PySpark version: N/A
9. Ray version: N/A
10. OS and version: Linux SMP x86_64 x86_64 x86_64 GNU/Linux
11. GCC version: 7.3.1
12. CMake version: 3.14
**Bug report:**
```
import horovod.torch as hvd
import time
worker_1_process_set = hvd.ProcessSet([1])
worker_2_process_set = hvd.ProcessSet([0, 2])
hvd.init(process_sets="dynamic")
hvd.add_process_set(worker_1_process_set)
hvd.add_process_set(worker_2_process_set)
@hvd.elastic.run
def main(state):
    rank = hvd.rank()
    size = hvd.size()
    if rank == 0:
        while True:
            print(f"Sleeping for 1 second: {rank}", flush=True)
            time.sleep(1)
    elif rank == 1:
        while True:
            print(f"Sleeping for 1 second: {rank}", flush=True)
            time.sleep(1)
    elif rank == 2:
        while True:
            print(f"Sleeping for 1 second: {rank}", flush=True)
            time.sleep(1)


if __name__ == '__main__':
    print(f"Initialized with rank {hvd.rank()}", flush=True)
    # Initialize the TorchState
    state = hvd.elastic.TorchState()
    print(f"Running main with rank {hvd.rank()}", flush=True)
    main(state)
    print(f"Finished running main with rank {hvd.rank()}", flush=True)
    print(f"Joined with rank {hvd.rank()}", flush=True)
```
I am running the code above with elastic Horovod and the process sets shown above, using the command in the log below to run all 3 workers on a single node. After I kill one of the processes from a terminal, all of the remaining processes are killed as well (see the log). If I run the same workflow with the same command BUT WITHOUT process sets, the remaining 2 workers are not terminated after one process is killed. In other words, I expected that with process sets a single worker failure would not terminate the remaining processes, yet that is exactly what happens in the log below, while without process sets the remaining workers stay alive as expected. What could be the reason here? Is this a bug, or am I missing something about how process sets should be used? Please help.
Similar issues:
1. https://github.com/horovod/horovod/issues/2484
```
(horovod-setup) (miniconda3) [pgadikar@ip-10-20-1-15 experiments]$ horovodrun -np 3 --min-np 2 --host-discovery-script discover-hosts.sh --elastic-timeout 5 --network-interfaces eth0,lo python mast
er-child-exp.py
[1]<stdout>:Initialized with rank 1
[1]<stdout>:Running main with rank 1
[2]<stdout>:Initialized with rank 2
[2]<stdout>:Running main with rank 2
[0]<stdout>:Initialized with rank 0
[0]<stdout>:Running main with rank 0
[1]<stdout>:Sleeping for 1 second: 1
[2]<stdout>:Sleeping for 1 second: 2
[0]<stdout>:Sleeping for 1 second: 0
[2]<stderr>:[2024-02-07 04:16:27.910743: E /tmp/pip-install-ozxdndi9/horovod_e8e6eba6ed5e495cb7b495d7bb552c01/horovod/common/operations.cc:697] [2]: Horovod background loop uncaught exception: [/tmp/pip-install-ozxdndi9/horovod_e8e6eba6ed5e495cb7b495d7bb552c01/third_party/compatible_gloo/gloo/transport/tcp/pair.cc:589] Read error [10.20.1.15]:20903: Connection reset by peer
[0]<stderr>:[2024-02-07 04:16:27.910752: E /tmp/pip-install-ozxdndi9/horovod_e8e6eba6ed5e495cb7b495d7bb552c01/horovod/common/operations.cc:697] [0]: Horovod background loop uncaught exception: [/tmp/pip-install-ozxdndi9/horovod_e8e6eba6ed5e495cb7b495d7bb552c01/third_party/compatible_gloo/gloo/transport/tcp/pair.cc:589] Read error [10.20.1.15]:49541: Connection reset by peer
[2]<stderr>:terminate called after throwing an instance of 'gloo::IoException'
[0]<stderr>:terminate called after throwing an instance of 'gloo::IoException'
[2]<stderr>: what(): [/tmp/pip-install-ozxdndi9/horovod_e8e6eba6ed5e495cb7b495d7bb552c01/third_party/compatible_gloo/gloo/transport/tcp/pair.cc:589] Read error [10.20.1.15]:20903: Connection reset by peer
[0]<stderr>: what(): [/tmp/pip-install-ozxdndi9/horovod_e8e6eba6ed5e495cb7b495d7bb552c01/third_party/compatible_gloo/gloo/transport/tcp/pair.cc:589] Read error [10.20.1.15]:49541: Connection reset by peer
Process 1 exit with status code 143.
Process 2 exit with status code 134.
Process 0 exit with status code 134.
ERROR:root:failure count == 3 -> stop running
Traceback (most recent call last):
File "/home/pgadikar/miniconda3/envs/horovod-setup/bin/horovodrun", line 8, in <module>
sys.exit(run_commandline())
File "/home/pgadikar/miniconda3/envs/horovod-setup/lib/python3.9/site-packages/horovod/runner/launch.py", line 837, in run_commandline
_run(args)
File "/home/pgadikar/miniconda3/envs/horovod-setup/lib/python3.9/site-packages/horovod/runner/launch.py", line 825, in _run
return _run_elastic(args)
File "/home/pgadikar/miniconda3/envs/horovod-setup/lib/python3.9/site-packages/horovod/runner/launch.py", line 738, in _run_elastic
return gloo_run_elastic(settings, env, args.run_func if args.run_func else args.command, executable)
File "/home/pgadikar/miniconda3/envs/horovod-setup/lib/python3.9/site-packages/horovod/runner/gloo_run.py", line 380, in gloo_run_elastic
return launch_gloo_elastic(command_or_func, exec_command, settings, env, get_common_interfaces, rendezvous, executable)
File "/home/pgadikar/miniconda3/envs/horovod-setup/lib/python3.9/site-packages/horovod/runner/gloo_run.py", line 351, in launch_gloo_elastic
raise RuntimeError('Horovod detected that one or more processes exited with non-zero '
RuntimeError: Horovod detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was:
Process name: ip-10-20-1-15.us-east-2.compute.internal[1]
Exit code: 143
(horovod-setup) (miniconda3) [pgadikar@ip-10-20-1-15 experiments]$
``` | open | 2024-02-07T04:38:46Z | 2024-02-07T04:38:46Z | https://github.com/horovod/horovod/issues/4021 | [
"bug"
] | Pranavug | 0 |
noirbizarre/flask-restplus | flask | 506 | ImportError: No module named enum | I upgraded to a new version of flask-restplus(0.11.0) and I got error: "ImportError: No module named enum"
HTTPStatus presumably relies on enum, which Python 2 only provides through the enum34 backport.
please add to the setup.py dependencies "enum34" | closed | 2018-08-01T11:02:48Z | 2018-09-21T15:54:20Z | https://github.com/noirbizarre/flask-restplus/issues/506 | [] | aspir | 2 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 112 | Serialize enum fields | 
I ran into a problem. When I use instance, I get a field type of enum.
```
print(UserSchema().load({}, instance=user).data.status)
# <StatusType.active: 1>
```
But if I use data, I get a field type of string.
```
print(UserSchema().load({'status': 'active'}, instance=user).data.status)
# active
```
I have a solution.
```
from marshmallow import fields
from marshmallow_sqlalchemy import ModelConverter
from sqlalchemy.types import Enum


class EnumField(fields.Field):
    def __init__(self, *args, **kwargs):
        self.column = kwargs.get('column')
        super(EnumField, self).__init__(*args, **kwargs)

    def _serialize(self, value, attr, obj):
        field = super(EnumField, self)._serialize(value, attr, obj)
        return field.name if field else field

    def deserialize(self, value, attr=None, data=None):
        field = super(EnumField, self).deserialize(value, attr, data)
        if isinstance(field, str) and self.column is not None:
            return self.column.type.python_type[field]
        return field


class ExtendModelConverter(ModelConverter):
    # Map SQLAlchemy Enum columns to the custom field above
    ModelConverter.SQLA_TYPE_MAPPING[Enum] = EnumField

    def _add_column_kwargs(self, kwargs, column):
        super(ExtendModelConverter, self)._add_column_kwargs(kwargs, column)
        if hasattr(column.type, 'enums'):
            kwargs['column'] = column
```
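A minimal sketch of how this converter could be wired into a schema (the `User` model and `session` names here are assumed placeholders, not part of the code above):
```python
from marshmallow_sqlalchemy import ModelSchema


class UserSchema(ModelSchema):
    class Meta:
        model = User                            # assumed SQLAlchemy model with an Enum `status` column
        model_converter = ExtendModelConverter  # plug in the custom converter defined above
        sqla_session = session                  # assumed active SQLAlchemy session
```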
But one place still confuses me: the `marshmallow.pre_load` decorator, where I still have to work with the raw string.
What do you think about it? This problem exists? What are the options for solving it? | closed | 2017-06-17T07:08:57Z | 2024-08-14T17:58:14Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/112 | [] | MyGodIsHe | 5 |
xonsh/xonsh | data-science | 5,716 | Pressing backspace causes crash? | ## Current Behavior
<!---
For general xonsh issues, please try to replicate the failure using `xonsh --no-rc --no-env`.
Short, reproducible code snippets are highly appreciated.
You can use `$XONSH_SHOW_TRACEBACK=1`, `$XONSH_TRACE_SUBPROC=2`, or `$XONSH_DEBUG=1`
to collect more information about the failure.
-->
Traceback (if applicable):
```xsh
Unhandled exception in event loop:
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\site-packages\xonsh\shells\ptk_shell\__init__.py", line 415, in cmdloop
line = self.singleline(auto_suggest=auto_suggest)
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\site-packages\xonsh\shells\ptk_shell\__init__.py", line 383, in singleline
line = self.prompter.prompt(**prompt_args)
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\site-packages\prompt_toolkit\shortcuts\prompt.py", line 1035, in prompt
return self.app.run(
~~~~~~~~~~~~^
set_exception_handler=set_exception_handler,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
inputhook=inputhook,
^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\site-packages\prompt_toolkit\application\application.py", line 1002, in run
return asyncio.run(coro)
~~~~~~~~~~~^^^^^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\site-packages\nest_asyncio.py", line 30, in run
return loop.run_until_complete(task)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\site-packages\nest_asyncio.py", line 92, in run_until_complete
self._run_once()
~~~~~~~~~~~~~~^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\site-packages\nest_asyncio.py", line 133, in _run_once
handle._run()
~~~~~~~~~~~^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\asyncio\events.py", line 89, in _run
self._context.run(self._callback, *self._args)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\asyncio\tasks.py", line 386, in __wakeup
self.__step()
~~~~~~~~~~~^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\asyncio\tasks.py", line 293, in __step
self.__step_run_and_handle_result(exc)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\asyncio\tasks.py", line 304, in __step_run_and_handle_result
result = coro.send(None)
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\site-packages\prompt_toolkit\application\application.py", line 886, in run_async
return await _run_async(f)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\site-packages\prompt_toolkit\application\application.py", line 746, in _run_async
result = await f
^^^^^^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\asyncio\futures.py", line 288, in __await__
yield self # This tells Task to wait for completion.
^^^^^^^^^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\asyncio\tasks.py", line 375, in __wakeup
future.result()
~~~~~~~~~~~~~^^
File "C:\Users\-\AppData\Local\Programs\Python\Python313\Lib\asyncio\futures.py", line 200, in result
raise self._exception.with_traceback(self._exception_tb)
Exception
Press ENTER to continue...
```
## Expected Behavior
Pressing backspace should delete the character before the cursor instead of raising an unhandled exception.
## xonfig
```
# XONSH WEBCONFIG START
import datetime
import nest_asyncio
from prompt_toolkit import prompt
from prompt_toolkit.history import InMemoryHistory
from prompt_toolkit.key_binding import KeyBindings
from prompt_toolkit.keys import Keys
nest_asyncio.apply()
bindings = KeyBindings()
@bindings.add(Keys.Up)
def history_search_backward(event):
    event.current_buffer.history_backward()

@bindings.add(Keys.Down)
def history_search_forward(event):
    event.current_buffer.history_forward()
$PROMPT = '{INTENSE_GREEN}[{YELLOW}{user}{RESET}@{BLUE}{hostname}{RESET}:{cwd}─{INTENSE_YELLOW}[' + str(datetime.date.today()) + ']─{INTENSE_GREEN}[{localtime}]{INTENSE_RED}──>{RESET} '
$XONSH_HISTORY_BACKEND = 'sqlite'
$FORCE_POSIX_PATHS = True
$XONSH_SHOW_TRACEBACK = True
$INTENSIFY_COLORS_ON_WIN
```
```xsh
+-----------------------------+---------------------+
| xonsh | 0.18.3 |
| Python | 3.13.0 |
| PLY | 3.11 |
| have readline | False |
| prompt toolkit | 3.0.48 |
| shell type | prompt_toolkit |
| history backend | sqlite |
| pygments | 2.18.0 |
| on posix | False |
| on linux | False |
| on darwin | False |
| on windows | True |
| on cygwin | False |
| on msys2 | False |
| is superuser | True |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file 1 | C:\Users\-\.xonshrc |
| UPDATE_OS_ENVIRON | False |
| XONSH_CAPTURE_ALWAYS | False |
| XONSH_SUBPROC_OUTPUT_FORMAT | stream_lines |
| THREAD_SUBPROCS | True |
| XONSH_CACHE_SCRIPTS | True |
+-----------------------------+---------------------+
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| open | 2024-10-30T01:08:00Z | 2024-10-30T13:19:22Z | https://github.com/xonsh/xonsh/issues/5716 | [
"prompt-toolkit",
"windows"
] | brokedarius | 0 |
ivy-llc/ivy | numpy | 28,214 | Fix Ivy Failing Test: jax - shape.shape__bool__ | closed | 2024-02-07T18:29:28Z | 2024-02-09T09:27:05Z | https://github.com/ivy-llc/ivy/issues/28214 | [
"Sub Task"
] | fnhirwa | 0 |
|
PokeAPI/pokeapi | graphql | 654 | CORS - access-control-allow-origin hardcoded | Steps to Reproduce:
1. Clone this repository which utilizes graphql pokeapi
2. Install dependencies via `npm install`
3. Run development server via `npm run dev`
4. Error should be visible under Network tab in Chrome Developer Tools
Error preview: https://imgur.com/a/yIqbC8G | closed | 2021-09-16T12:00:15Z | 2021-09-16T18:46:31Z | https://github.com/PokeAPI/pokeapi/issues/654 | [
"invalid"
] | patryknawolski | 3 |
marimo-team/marimo | data-science | 3,302 | Autocomplete does not find suggestions in polars namespaces | ### Describe the bug
When using polars the marimo autocomplete is unable to find any suggestions for names that are within namespaces. For example, here is what the autocomplete shows for the `dt` namespace.

Here is what VS Code shows:

### Environment
{
"marimo": "0.10.7",
"OS": "Darwin",
"OS Version": "24.2.0",
"Processor": "arm",
"Python Version": "3.13.1",
"Binaries": {
"Browser": "--",
"Node": "v23.5.0"
},
"Dependencies": {
"click": "8.1.3",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.19.1",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.18.0",
"pymdown-extensions": "10.13",
"pyyaml": "6.0.2",
"ruff": "0.6.9",
"starlette": "0.42.0",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.1"
},
"Optional Dependencies": {
"altair": "5.5.0",
"duckdb": "1.1.3",
"pandas": "2.2.3",
"polars": "1.17.1",
"pyarrow": "18.1.0"
}
}
### Code to reproduce
```python
import polars as pl
pl.col("col_name").dt
``` | open | 2024-12-27T21:03:15Z | 2025-01-03T10:17:17Z | https://github.com/marimo-team/marimo/issues/3302 | [
"bug",
"upstream"
] | kjgoodrick | 3 |
Anjok07/ultimatevocalremovergui | pytorch | 713 | Not opening on macOS Monterey M1 | Tried using the installer and instructions given for if the app doesn't open, and it still doesn't work. The app icon bounces once and doesn't open. | closed | 2023-08-01T19:39:32Z | 2024-07-27T17:45:29Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/713 | [] | GageHoweTamu | 1 |
hpcaitech/ColossalAI | deep-learning | 5,992 | llama3 pretrian TypeError: launch_from_torch() missing 1 required positional argument: 'config' | 
| closed | 2024-08-12T09:12:56Z | 2024-08-13T02:22:35Z | https://github.com/hpcaitech/ColossalAI/issues/5992 | [] | wuduher | 1 |
jpjacobpadilla/Stealth-Requests | web-scraping | 2 | AsyncStealthSession Windows | Anytime I try to use AsyncStealthSession I get this warning which I can't seem to remove:
RuntimeWarning:
Proactor event loop does not implement add_reader family of methods required.
Registering an additional selector thread for add_reader support.
To avoid this warning use:
asyncio.set_event_loop_policy(WindowsSelectorEventLoopPolicy())
self.loop = _get_selector(loop if loop is not None else asyncio.get_running_loop())
| closed | 2024-11-25T09:59:51Z | 2025-01-23T18:46:12Z | https://github.com/jpjacobpadilla/Stealth-Requests/issues/2 | [] | MrGrasss | 0 |
jupyter/nbviewer | jupyter | 302 | nbviewer : render git repository served by webgit | It would be a nice enhancement for nbviewer to be able to render IPython notebooks from a directory stored in a “non github” git repository,
the same way it currently works with GitHub repositories.
The official webviewer provided by GIT is [gitweb](http://git-scm.com/book/en/Git-on-the-Server-GitWeb), I temporary added the notebook examples available [here](http://webgit.epinux.com/?p=project.git;a=tree;f=notebook)
Nbviewer works great using the text input box in nbviewer to point to the notebook url, e.g. :
- gitweb url : http://webgit.epinux.com/?p=project.git;a=blob_plain;f=notebook/Basic+Output.ipynb;hb=HEAD
- nbviewer rendering : http://nbviewer.ipython.org/url/webgit.epinux.com//%3Fp%3Dproject.git%3Ba%3Dblob_plain%3Bf%3Dnotebook/Basic%2BOutput.ipynb
but it doesn’t work if I try to:
- point nbviewer to the notebook directory :
http://webgit.epinux.com/?p=project.git;a=tree
- manually composing the nbviewer + notebook URL:
http://nbviewer.ipython.org/url/webgit.epinux.com/?p=project.git;a=blob_plain;f=notebook/Basic+Output.ipynb;hb=HEAD
| open | 2014-06-19T20:37:56Z | 2015-03-03T17:59:35Z | https://github.com/jupyter/nbviewer/issues/302 | [
"type:Enhancement",
"tag:Provider"
] | epifanio | 3 |
HumanSignal/labelImg | deep-learning | 1,006 | Feature Request: Root Folder Flag | I would like to request a new feature for labelimg: a root folder flag. This flag would allow users to specify the root folder for their images, labels, and classes.txt file when running labelimg from the command line.
For example, running `labelimg --root=/path/to/folder/` would mean that the images folder is located at */path/to/folder/images/*, the labels save folder is located at */path/to/folder/labels/*, and the classes.txt file is located at */path/to/folder/labels/classes.txt*. If a classes.txt file already exists in the specified location, it should be used.
Visually, the file structure would look like:
```
/path/to/folder
|- images/
|- *.jpg, *.png, etc.
|- labels/
|- classes.txt
|- *.txt - YOLO annotation label files
```
This feature would make it easier to organize images and annotations.
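For illustration, a minimal sketch of the path convention this flag implies (the helper below is hypothetical, not existing labelImg code):
```python
from pathlib import Path


def resolve_paths(root: str):
    """Hypothetical helper: derive labelImg's working paths from --root."""
    root_dir = Path(root)
    image_dir = root_dir / "images"            # *.jpg, *.png, ...
    label_dir = root_dir / "labels"            # YOLO *.txt annotation files
    classes_file = label_dir / "classes.txt"   # reused if it already exists
    return image_dir, label_dir, classes_file
```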
| open | 2023-08-03T18:25:32Z | 2023-08-03T18:25:32Z | https://github.com/HumanSignal/labelImg/issues/1006 | [] | sohang3112 | 0 |
ultralytics/ultralytics | machine-learning | 18,795 | Object Detection: Precision 1 Recall 0 | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
I am training an object detector; however, the results show a precision of 1 and a recall of 0 for some classes.
I am not certain whether this is a bug or not.
It would be helpful to know what causes this and/or how to mitigate it.
<img width="811" alt="Image" src="https://github.com/user-attachments/assets/736bf807-8e37-47d6-b415-8459263769a5" />
### Environment
```
Ultralytics 8.3.65 🚀 Python-3.11.11 torch-2.5.1+cu121 CUDA:0 (NVIDIA A100-SXM4-40GB, 40514MiB)
Setup complete ✅ (12 CPUs, 83.5 GB RAM, 87.2/235.7 GB disk)
OS Linux-6.1.85+-x86_64-with-glibc2.35
Environment Colab
Python 3.11.11
Install pip
RAM 83.48 GB
Disk 87.2/235.7 GB
CPU Intel Xeon 2.20GHz
CPU count 12
GPU NVIDIA A100-SXM4-40GB, 40514MiB
GPU count 1
CUDA 12.1
```
### Minimal Reproducible Example
```
# import YOLO model
from ultralytics import YOLO
# Load a model
model = YOLO('yolo11n.pt') # load a pretrained model (recommended for training)
# Train the model
model.train(data='/content/data.yaml', epochs=20, imgsz=800,
classes=[2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,
38,39,43,44,46,48])
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-01-21T09:52:44Z | 2025-01-21T10:17:19Z | https://github.com/ultralytics/ultralytics/issues/18795 | [
"detect"
] | fninsiima | 4 |
deepfakes/faceswap | deep-learning | 1,247 | Unable to use GPU | Excuse me, everyone. I installed the GPU version of the software, but when I run it, it uses the CPU instead of the GPU. What is going on, and how can I solve it? Thank you!


| closed | 2022-07-14T23:36:34Z | 2022-07-15T00:54:11Z | https://github.com/deepfakes/faceswap/issues/1247 | [] | sdasdasddaaaa | 1 |
ray-project/ray | python | 51,173 | [RFC] GPU object store support in Ray Core | # GPU Support in Ray Core
Authors: @stephanie-wang @edoakes
TLDR: We discuss a design for GPU objects (specifically `torch.Tensors`) in the Ray Core API.
# Requirements
The goal of this API proposal is to add support for GPU “objects” and direct GPU-GPU communication in the Ray Core API.
**Goals**:
* \[P0\] Performance overhead: \<1ms latency overhead to launch collectives. Overlapping compute and communication.
* Note that this means that this API will be primarily suitable for cases such as inter-node KV-cache transfer or weight-syncing, where the size of the data transfer can amortize the system overhead. It is not suitable for latency-sensitive cases such as tensor-parallel inference.
* Features: Extend current Ray API with
* \[P0\] P2p GPU communication through NCCL and/or CUDA IPC
* \[P1\] Collective GPU communication through NCCL
* \[P1\] CPU-GPU data movement, via pinned memory
* \[P0\] Basic memory management:
* Garbage collection
* No caching or spilling (for now). GPU memory will be GCed as early as possible and a user error will be thrown if buffered memory exceeds a configurable threshold
* \[P0\] Interoperability
* User can continue to use SPMD-style collectives in actor definitions
* Works with all other Ray Core APIs
* Eventually, other methods of transport such as RDMA or non-NVIDIA GPU
* Works with PyTorch APIs such as torch.compile
(Current) limitations:
* Actors only \- in the future, we could additionally support task-actor and/or task-task communication
* GPU data limited to torch.Tensors
* Only the process that creates the actors can specify GPU object transfers between the actors. If the actor handle is passed to another worker, the new caller will not be able to call methods that return or take in a GPU object reference, i.e. GPUObjectRefs cannot be borrowed.
* Also implies that all actors that use GPU-GPU communication need to be created by the same process.
* The process that creates the actors could be a task, but for simplicity, we’ll call it the “driver” for the rest of this doc.
* This simplifies failure handling: if the driver crashes unexpectedly, its child actors will also exit, so we don’t need to worry about dangling references.
* User needs to be aware of GPU execution to write correct and efficient code. For example:
* Add type hints to all Ray tasks that return GPU tensors.
* If torch.Tensors returned by a Ray task are modified by the user after the task returns without proper stream synchronization, the behavior is undefined.
* Deadlock prevention is not guaranteed if user-defined collectives create a cycle or if the program invokes user-defined collectives through a different actor handle
# Background
This doc is motivated by recent evidence that Compiled Graphs may have limited applicability to current applications. Its main use cases are online/offline inference (which is currently bottlenecked on vLLM development) and distributed training (which will take a while to develop). Meanwhile, we have other applications such as RLHF that we would like to support. These applications can use the Compiled Graphs API, but it requires significant developer effort and they are structured in such a way that the added performance gain is negligible. See [Ray Compiled Graphs Q1 2025 update](https://docs.google.com/document/d/19iE_HgtF3PzJDHrRqi4hZ4otOwb0-2cZPwx91Dducek/edit?tab=t.0#heading=h.hs3lldec5k9i) for more information.
Therefore, our goal is to introduce an “interpreted” version of the Ray Compiled Graphs API that enables direct GPU-GPU movement of torch.Tensors between Ray actors. This has been a common user request for almost as long as Ray has been around. This is to support the [single-controller dataflow model](https://arxiv.org/pdf/2203.12533) for orchestrating GPU devices, in contrast to the current options with Ray:
* Orchestrating p2p and collectives inside actor definitions, using NCCL, torch.distributed or [ray.util.collective](https://docs.ray.io/en/latest/ray-more-libs/ray-collective.html#module-ray.util.collective.collective)
* Fully adopting the new Ray Compiled Graphs API \- this option can provide better performance but restricts the user to static dataflow graphs and currently does not play well with non-Compiled Graphs Ray code
Ultimately, the goal is for this API to be consistent with the existing Ray Compiled Graphs API. Ideally, it should be simple to change between these APIs.
Other useful docs:
* \[optional\] Compiled Graphs design: [Design: Accelerated execution in Ray 3.0](https://docs.google.com/document/d/1zHpJ2vvCHTYhZ1GlMcYvZoXIt7J0T5-bcEpHVKHmN-U/edit?tab=t.0#heading=h.17dss3b9evbj)
* \[required\] RFC for the API proposed in this design: [\[PUBLIC\] RFC: GPU objects in Ray Core API](https://docs.google.com/document/d/1jSvtmxEsIR8Oe-JiKs70wXZET1v3p6QYLiye8ZIPYo4/edit?tab=t.0)
# APIs
See [\[PUBLIC\] RFC: GPU objects in Ray Core API](https://docs.google.com/document/d/1jSvtmxEsIR8Oe-JiKs70wXZET1v3p6QYLiye8ZIPYo4/edit?tab=t.0).
# Proposed Design
The actor’s creator will be responsible for coordinating the transfers between actors. For simplicity, we will call this creator process the “driver”, although it may not be the driver of the overall Ray job. The driver will order all transfers between actors to ensure that collective operations are scheduled on actors in a consistent order, to avoid deadlock.
Each actor will locally store the tensors that they are sending/receiving in Python. We will extend each Ray actor with the following Python state (a minimal sketch follows the list below):
* communicators: `Dict[CommID, Communicator]`: A map of (NCCL) communicators that the actor is a participant in
* `tensor_store`: `Dict[Tuple[ObjectID, tensor_index], Tuple[torch.Tensor, int]]`: A map from ObjectRef to the torch.Tensor and its current reference count. Tensor\_index is used for objects that may contain multiple torch.Tensors
* The reference count should be \> 0 IFF (the driver still has the corresponding GPUObjectRef in scope OR there is a pending communication op that uses the tensor OR there is a pending task that takes this tensor as an argument)
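For illustration, a minimal Python sketch of this per-actor state (the class and method names here are ours, not a committed API):
```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

import torch


@dataclass
class GPUTensorStore:
    # (object_id, tensor_index) -> (tensor, reference count)
    tensors: Dict[Tuple[str, int], Tuple[torch.Tensor, int]] = field(default_factory=dict)

    def put(self, key: Tuple[str, int], tensor: torch.Tensor) -> None:
        # Initial ref count of 1 accounts for the driver's GPUObjectRef.
        self.tensors[key] = (tensor, 1)

    def incr(self, key: Tuple[str, int]) -> None:
        tensor, count = self.tensors[key]
        self.tensors[key] = (tensor, count + 1)

    def decr(self, key: Tuple[str, int]) -> None:
        tensor, count = self.tensors[key]
        if count == 1:
            del self.tensors[key]  # drop the last reference so the GPU memory can be freed early
        else:
            self.tensors[key] = (tensor, count - 1)
```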
## Collective group initialization and destruction
Collective group initialization and destruction is accomplished by having the driver send a creation/destruction task to each actor. For example, if the user code looks like this:
```python
# Setup.
A, B, C = [Actor.options(num_gpus=1).remote(i) for i in range(3)]
# Each actor is assigned rank according to its order in the list.
group : NcclGroup = ray.util.collectives.init_group([A, B, C])
# Wait until group is ready, same as in placement groups.
ray.get(group.ready())
```
Then, during init_group, the driver will launch a pre-defined task to each actor that:
1. Creates a NCCL communicator, using ray.util.collective
2. Stores the handle in `self.communicators` (see the sketch below)
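A rough driver-side sketch of that dispatch (illustrative only; `_setup_communicator` is an assumed actor method, not an existing Ray API):
```python
def init_group(actors, group_name: str = "gpu_group"):
    """Driver-side sketch: dispatch a communicator-setup task to each actor in rank order."""
    world_size = len(actors)
    return [
        # `_setup_communicator` is an assumed actor method; inside the actor it would call
        # ray.util.collective.init_collective_group(world_size, rank, backend="nccl",
        # group_name=group_name) and record the handle in self.communicators[group_name].
        actor._setup_communicator.remote(world_size, rank, group_name)
        for rank, actor in enumerate(actors)
    ]
```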
## Example: GPU-GPU communication via NCCL
Suppose we have example code like this that sends a torch.Tensor from actor A to actor B:
```python
@ray.remote(num_gpus=1)
class Actor:
    @ray.method(
        tensor_transport="auto",
        tensor_shape=torch.Size([N]),
    )
    def foo(self):
        return torch.randn(N, device="cuda")

    def bar(self, t: torch.Tensor):
        ...
A, B = Actor.remote(), Actor.remote()
group : NcclGroup = ray.util.collectives.init_group([A, B])
x : GPUObjectRef = A.foo.remote()
y = B.bar.remote(x)
del x
ray.get(y)
```
In this case, the steps on the driver are:
1. A.foo.remote():
1. Driver sends ExecuteTask RPC to A to dispatch A.foo.remote() task.
2. B.bar.remote().
1. Driver sends BeginSend RPC to A to begin sending the tensor with ID (x.id, 0) to B.
2. Driver sends BeginRecv RPC to B to begin receiving a tensor of size N, and to store the result in B.tensor_store[(x.id, 0)]
3. Driver sends ExecuteTask RPC to B to dispatch B.bar.remote() task. Note that due to Ray’s task execution order, this will get ordered after B’s receive task.
3. Del x:
1. Driver sends DecrementRefCount RPC to A to decrement the tensor’s ref count.
4. ray.get(y)
1. The usual Ray Core protocol.
On actor A:
1. A receives A.foo.remote() ExecuteTask:
1. A executes foo()
2. A serializes foo()’s return value, and extracts any GPU tensors. The GPU tensors are replaced with a tensor placeholder.
3. A asserts that the tensor has size N.
4. A stores the tensor in self.tensor\_store, with initial ref count=1, for the driver’s reference.
2. A receives BeginSend RPC:
1. A begins sending the tensor to actor B, using the correct NCCL communicator and B’s rank.
2. A increments the tensor’s ref count, to indicate that there is a pending send.
3. A receives DecrementRefCount RPC
1. A decrements the tensor’s ref count. If the ref count \== 0, delete.
4. Upon completion of the send to B:
1. A decrements the tensor’s ref count. If the ref count \== 0, delete.
On actor B:
1. B receives BeginRecv RPC:
1. B begins receiving the tensor from actor A, using the correct NCCL communicator and A’s rank.
2. B initializes the tensor ref count to 1, indicating that there is a pending task that requires this tensor as an argument.
2. B receives ExecuteTask RPC:
1. NOTE: At this point, the tensor should already have been received.
2. B deserializes the task’s arguments, replacing any tensor placeholders with the tensor from self.tensor\_store.
3. Decrement ref count for any found tensor placeholders. If the ref count \== 0, delete.
The flow of messages looks something like this. Steps that have the same number can proceed concurrently:

The protocol for collective communication is similar. The only difference is that the driver must dispatch to **all** actors in the group, and we would use a BeginCollective RPC instead of BeginSend/BeginRecv.
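Putting the driver-side steps above into pseudocode form (a sketch only; `submit_rpc`, `ranks`, and the RPC names mirror the design above rather than any existing Ray API):
```python
class DriverProtocolSketch:
    """Pseudocode for the steps above; `submit_rpc` and `ranks` are stand-ins, not Ray APIs."""

    def __init__(self, submit_rpc, ranks):
        self.submit_rpc = submit_rpc  # callable(actor, rpc_name, **kwargs) -- assumed
        self.ranks = ranks            # actor -> NCCL rank mapping -- assumed

    def pass_gpu_object(self, A, B, obj_id, shape):
        # Steps 2a/2b: post the NCCL send on A and the matching recv on B (these can overlap).
        self.submit_rpc(A, "BeginSend", key=(obj_id, 0), dst_rank=self.ranks[B])
        self.submit_rpc(B, "BeginRecv", key=(obj_id, 0), src_rank=self.ranks[A], shape=shape)
        # Step 2c: dispatch the dependent task last, so Ray's per-actor ordering on B runs it
        # only after the receive.
        self.submit_rpc(B, "ExecuteTask", method="bar", args=[(obj_id, 0)])
```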
## WARNING: Ensuring data consistency
One caveat of this approach is that the user may still have a pointer to the tensor while it’s in the tensor\_store and pending transfers or collectives to other nodes. This can lead to data inconsistency if the user modifies the tensor while or before it is sent to other actors.
Whether the user still holds such a pointer is also hard to detect. Tracking Python references is not sufficient because different torch.Tensors could share the same physical data, etc.
Therefore, the user needs to be careful when sending tensors. Ideally, we should expose an API to allow the user to synchronize with any ongoing sends/collectives, so that they know when it’s safe to write the data. This kind of synchronization would only be possible for actors with concurrency enabled, because otherwise synchronization could hang the actor.
One possibility is to provide a future-like syntax, keyed by torch.Tensor. For example:
```python
@ray.remote(num_gpus=1)
class Actor:
    @ray.method(tensor_transport=group)
    def foo(self):
        self.tensor = torch.randn(N, device="cuda")
        return self.tensor

    def modify_tensor(self):
        # Wait until any ongoing communication ops involving self.tensor have finished.
        # self._tensor_store = {...: copy(self.tensor)}
        ray.wait(self.tensor)
        self.tensor += 1
```
This program could hang if the GPUObjectRef corresponding to \`self.tensor\` never goes out of scope at the driver. One way to fix this is to allow copy-on-write: copy self.tensor back into the actor’s tensor storage after a timeout, allowing the user to use the original copy.
## WARNING: Deadlock prevention
TODO
## Dynamic tensor shapes
If the tensor shape is not known, then the driver needs to wait until A has finished and extracted all GPU tensors before sending to B. This looks something like this:

If there are multiple tensors in the value, the user can specify them using a “key” into the value. For example, if the returned value is a TensorDict, then the user would use the key strings to distinguish different tensors. Also, the tensor shape(s) can be specified on-the-fly, per method invocation instead of per method definition. For example, the following code specifies the shapes of two different tensors that are nested inside one Python object:
```python
x : GPUObjectRef = A.foo.options(tensor_shape={
    "layer1": (N, M),
    "layer2": (M, O),
}).remote()
```
## Memory management
The protocol must hook into Ray Core’s reference counting protocol (C++). In particular, if the driver’s GPUObjectRef goes out of scope, then we should send DecrementRefCount RPCs to the actor(s) that stored the original copy of this object. We can find these actors by storing weak refs to these actors’ handles inside the GPUObjectRef.
We should support configuration of each actor’s maximum allowed GPU memory for its `self.tensor_store`. If the actor tries to place a GPU object in its store and it would exceed the store’s capacity, then the actor should throw an OutOfMemoryError. This error should get propagated to all downstream tasks.
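A sketch of that capacity check (the tensor store is assumed to map keys to (tensor, ref count) pairs, and the limit name is made up, not a real Ray setting):
```python
import torch

MAX_TENSOR_STORE_BYTES = 2 * 1024**3  # assumed per-actor limit; the real threshold would be configurable


def check_capacity(tensors: dict, incoming: torch.Tensor) -> None:
    """Raise before buffering a tensor that would push the actor's store over its limit."""
    used = sum(t.element_size() * t.nelement() for t, _refcount in tensors.values())
    needed = incoming.element_size() * incoming.nelement()
    if used + needed > MAX_TENSOR_STORE_BYTES:
        # The design above surfaces this as an OutOfMemoryError propagated to downstream tasks.
        raise MemoryError("per-actor GPU tensor store capacity exceeded")
```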
In the future, we can consider more advanced memory management such as:
* Waiting for current memory to be consumed
* Offloading to CPU memory and/or disk
The same tensor may be passed as a task argument multiple times to the same actor. If the tensor must be received from a different actor, then we have two options:
1. Driver asks receiving actor if it still has the copy, then decides whether it needs to trigger another BeginSend/Recv. This requires the driver to remember **all** actors that may have a copy of a tensor, not just the one that originated the copy.
2. Driver always triggers another BeginSend/Recv.
We will favor the second option initially since this is simpler, but less efficient if significant data needs to be transferred.
## Overlapping compute and communication
This is a critical performance feature in most distributed GPU applications. To support this, we can use a similar design as Ray Compiled Graphs: [\[PUBLIC\] Design: Overlapping GPU communication in aDAG](https://docs.google.com/document/d/1AkAqrMPadk1rMyjKE4VN4bq058z36fgBcx0i4dHIW20/edit?tab=t.0#heading=h.ame2b8s8p2d2). The main difference would be that we cannot rearrange the order of tasks before execution; instead the driver will guarantee a consistent global order by submitting operations one at a time.
To avoid blocking on the CPU, we may need to use Python or potentially C++ multithreading to handle the BeginSend/BeginRecv/BeginCollective RPCs. Also, we may need to rate-limit the pending communication ops to avoid memory buildup.
## Other communication transports
### Intra-actor: Skipping serialization/communication
If a `GPUObjectRef` is passed back to a task on the same actor that created the data, then we can avoid serialization. This optimization is already done in Ray Compiled Graphs but has not been possible in Ray Core because we always serialize the data into the object store.
```python
@ray.remote(num_gpus=1)
class Actor:
    @ray.method(tensor_shape=torch.Size([N]))
    def foo(self):
        return torch.randn(N, device="cuda")


A = Actor.remote()
x : GPUObjectRef = A.foo.remote()
# One option is to avoid serializing x and pass it directly to y.
y : ObjectRef = A.bar.remote(x)
```
### Intra-node: CUDA memcpy and IPC
TODO
### CPU-GPU
TODO
### Driver-specific communication
- Can/should the driver have access to GPUs?
- ray.put(), ray.get() results from actor
### Dynamic/autoscaling actor groups: RDMA / NVSHMEM / etc
NCCL only supports static actor groups. If the membership of the group needs to be changed, e.g., for autoscaling or upon failure, then the NCCL group needs to be recreated. NCCL group creation is also quite slow. This is a known issue for NCCL users in autoscaling environments.
Initially, we plan to use NCCL because it is the industry standard for NVIDIA GPU communication. However, in the future, we can consider adding support for dynamic actor groups. This includes two different features:
1. Actors could be dynamically added or removed from a NCCL group. Actor failures could be handled smoothly by removing that actor from any NCCL groups it participated in.
2. Peer-to-peer communications between actors would not require specifying a NCCL group beforehand.
A simple version of Feature 1 is to simply re-create a NCCL group upon actor addition or deletion. If it happens relatively infrequently, the performance overhead is okay.
Feature 2 is more challenging. NCCL group (re)creation is more likely to be a bottleneck when there is an elastic group of actors and many possible actor-actor pairs. Options:
1. A high-level library like [UCX](https://docs.nvidia.com/networking/display/hpcxv215/unified+communication+-+x+framework+library). This requires benchmarking to determine overheads.
2. Build our own transport over low-level primitives like RDMA or [NVSHMEM](https://developer.nvidia.com/nvshmem). This will bring up some new complexity around how to:
1. Set up the connection. Typically this will require some kind of initialization call on both actors to map a shared region of memory.
2. Schedule the transfer. These are lower-level APIs compared to NCCL and we would likely want to perform chunking ourselves.
3. Tear down the connection. We may want to cache the connection, but also need to be aware of possible failures, out-of-memory conditions, etc.
## Deadlock prevention
TODO
# Implementation
## Ray Compiled Graphs
Our top priority is to support vLLM. The secondary priority is to support distributed training, the development of which is primarily happening at UW.
To support these applications, we must harden the following features, some of which are shared with this current design proposal:
* Inter-node communication performance
* \[shared\] Compute/communication overlap
* \[shared\] Collectives support
* Usability \- remove execution timeouts
Specifically, this also means that we will de-prioritize the following features that were originally planned. This timeline also matches the current proposal better, in that it gives us more time to develop the current proposal before deciding how the two APIs can be flexibly used together.
* DAG flexibility
* DAG concurrency \- ability to execute multiple DAGs on the same actor
* Ability to interoperate with normal Ray Core API \- allow passing inputs and outputs to non-compiled tasks through Ray object store
* Ability to run any compiled DAG code in non-compiled mode. Useful for development and debugging.
## Project Timeline
Target applications:
* veRL
* Data transfer
* weight syncing
* Riot SEED RL algorithm (?)
* Ray Data (?)
* GPU-GPU actor communication
* Possibly, CPU-GPU
Initial prototype:
* P2p example works. Includes:
* Creation of one collective group
* GC of torch.Tensors
* Tensor shape is known
* \+ Tensor shape is unknown
* \+ Object contains multiple tensors
* \+ What level of actor-driver support do we need?
* veRL prototype
Checkpoint: veRL prototype complete, Ray Data GPU-GPU prototype possible?
Remaining features:
* Correctness:
* Ability to synchronize torch.Tensors
* Features
* GPUObjectRef is sent to multiple different actor tasks
* Collectives API supported
* CPU-GPU support
* Performance
* Intra-actor: Skipping serialization/communication
* Intra-node: CUDA IPC
* Overlapping compute and communication | open | 2025-03-07T21:10:09Z | 2025-03-17T23:14:27Z | https://github.com/ray-project/ray/issues/51173 | [
"enhancement",
"RFC",
"core"
] | richardliaw | 6 |
erdewit/ib_insync | asyncio | 167 | ib_insync throws "RuntimeError: There is no current event loop in thread" | I'm trying to create an IB object in a non main thread and it is throwing "RuntimeError: There is no current event loop in thread 'Thread-X'"
```
import threading
from ib_insync import IB
class Strategy(threading.Thread):
    def run(self):
        ib = IB()
        ib.connect('127.0.0.1', 7495, clientId=15)
        ...
``` | closed | 2019-07-09T01:11:31Z | 2019-07-09T15:08:25Z | https://github.com/erdewit/ib_insync/issues/167 | [] | fvalenti1 | 2 |
numba/numba | numpy | 9,764 | Docs: note move of CUDA target to NVIDIA | For tracking in the 0.61 milestone. | closed | 2024-10-22T14:52:51Z | 2024-11-19T22:50:22Z | https://github.com/numba/numba/issues/9764 | [
"Task",
"doc"
] | gmarkall | 0 |
cvat-ai/cvat | tensorflow | 8,388 | Can't deploy CVAT on AWS. | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Used the docs to install CVAT on EC2 instance.
2. EXPORT my public IP and run the containers. Note: I used my IPV4 not the `compute.amazonaws.com` format IP.
3. Set inbound rules in my AWS, added custom TCP with 8080 port to anywhere 0.0.0.0/0
4. Go to my public ip with 8080 port. It shows `404 page not found`
### Expected Behavior
The login page should open when I access my public IP on port 8080, but instead I get a 404 error.
### Possible Solution
Tried to manually set a static value in cvat/cvat/settings/base.py: I changed the CVAT_HOST parameter's default value to my public IP instead of localhost, so that I don't have to export it every time. The issue was not resolved. I have cross-checked everything on my side, such as the EC2 instance and AWS settings, but it still doesn't work.
### Context
Please update the docs with the fix; many people have faced the same problem.
### Environment
```Markdown
Docker version 27.2.0
sudo docker logs traefik --------------------------------------------------------
time="2024-09-02T07:01:58Z" level=info msg="Configuration loaded from environment variables."
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:01:59Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:01:59Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:01:59Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:01:59Z"}
{"ClientAddr":"103.67.97.179:61312","ClientHost":"103.67.97.179","ClientPort":"61312","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":48560,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":48560,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":1,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:02:00.023558053Z","StartUTC":"2024-09-02T07:02:00.023558053Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:02:00Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:02:02Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:02:02Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:02:02Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:02:02Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:02:03Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:02:03Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:02:03Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:02:03Z"}
{"entryPointName":"web","level":"error","msg":"accept tcp [::]:8080: use of closed network connection","time":"2024-09-02T07:08:58Z"}
{"entryPointName":"web","level":"error","msg":"Error while starting server: accept tcp [::]:8080: use of closed network connection","time":"2024-09-02T07:08:58Z"}
time="2024-09-02T07:09:09Z" level=info msg="Configuration loaded from environment variables."
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:09:11Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:09:11Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:09:11Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:09:11Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:09:12Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:09:12Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:09:12Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:09:12Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:09:14Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:09:14Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:09:14Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:09:14Z"}
{"ClientAddr":"103.67.97.179:62520","ClientHost":"103.67.97.179","ClientPort":"62520","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":35306,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":35306,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":1,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:09:18.869683676Z","StartUTC":"2024-09-02T07:09:18.869683676Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:09:18Z"}
{"ClientAddr":"103.67.97.179:62520","ClientHost":"103.67.97.179","ClientPort":"62520","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":28117,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":28117,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":2,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:09:20.446667573Z","StartUTC":"2024-09-02T07:09:20.446667573Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:09:20Z"}
{"ClientAddr":"103.67.97.179:62874","ClientHost":"103.67.97.179","ClientPort":"62874","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":30905,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":30905,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":3,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:21:18.255956707Z","StartUTC":"2024-09-02T07:21:18.255956707Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:21:18Z"}
{"ClientAddr":"103.67.97.179:62874","ClientHost":"103.67.97.179","ClientPort":"62874","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":55988,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":55988,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":4,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:21:19.831547518Z","StartUTC":"2024-09-02T07:21:19.831547518Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:21:19Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":33494,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":33494,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":5,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:26.584114154Z","StartUTC":"2024-09-02T07:30:26.584114154Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:26Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":29050,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":29050,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":6,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:27.038927254Z","StartUTC":"2024-09-02T07:30:27.038927254Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:27Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":39274,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":39274,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":7,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:27.518017168Z","StartUTC":"2024-09-02T07:30:27.518017168Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:27Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":35694,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":35694,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":8,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:27.863068403Z","StartUTC":"2024-09-02T07:30:27.863068403Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:27Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":33139,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":33139,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":9,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:28.213668303Z","StartUTC":"2024-09-02T07:30:28.213668303Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:28Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":100971,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":100971,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":10,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:28.561789523Z","StartUTC":"2024-09-02T07:30:28.561789523Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:28Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":62992,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":62992,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":11,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:28.903873223Z","StartUTC":"2024-09-02T07:30:28.903873223Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:28Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":53130,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":53130,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":12,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:29.245628462Z","StartUTC":"2024-09-02T07:30:29.245628462Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:29Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":25235,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":25235,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":13,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:29.601375841Z","StartUTC":"2024-09-02T07:30:29.601375841Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:29Z"}
{"ClientAddr":"103.67.97.179:64164","ClientHost":"103.67.97.179","ClientPort":"64164","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":22885,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":22885,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":14,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:43.865705817Z","StartUTC":"2024-09-02T07:30:43.865705817Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:43Z"}
{"ClientAddr":"103.67.97.179:64164","ClientHost":"103.67.97.179","ClientPort":"64164","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":30567,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":30567,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":15,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/favicon.ico","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:30:44.113169857Z","StartUTC":"2024-09-02T07:30:44.113169857Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:30:44Z"}
{"ClientAddr":"103.67.97.179:64164","ClientHost":"103.67.97.179","ClientPort":"64164","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":38168,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":38168,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":16,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/login","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:31:44.240112713Z","StartUTC":"2024-09-02T07:31:44.240112713Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:31:44Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":40828,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":40828,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":17,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:32:52.806172349Z","StartUTC":"2024-09-02T07:32:52.806172349Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:32:52Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":26735,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":26735,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":18,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:34:53.11361724Z","StartUTC":"2024-09-02T07:34:53.11361724Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:34:53Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":29765,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":29765,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":19,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:34:54.073708403Z","StartUTC":"2024-09-02T07:34:54.073708403Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:34:54Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":32910,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":32910,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":20,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:34:54.420223606Z","StartUTC":"2024-09-02T07:34:54.420223606Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:34:54Z"}
{"ClientAddr":"103.67.97.179:64538","ClientHost":"103.67.97.179","ClientPort":"64538","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":34216,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":34216,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":21,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/login","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:34:57.09053894Z","StartUTC":"2024-09-02T07:34:57.09053894Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:34:57Z"}
{"ClientAddr":"103.67.97.179:64538","ClientHost":"103.67.97.179","ClientPort":"64538","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":35264,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":35264,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":22,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/login","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:34:57.321902136Z","StartUTC":"2024-09-02T07:34:57.321902136Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:34:57Z"}
{"ClientAddr":"103.67.97.179:64538","ClientHost":"103.67.97.179","ClientPort":"64538","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":37569,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":37569,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":23,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/login","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:34:57.662612233Z","StartUTC":"2024-09-02T07:34:57.662612233Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:34:57Z"}
{"ClientAddr":"103.67.97.179:64538","ClientHost":"103.67.97.179","ClientPort":"64538","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":31379,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":31379,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":24,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/login","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:34:57.98330094Z","StartUTC":"2024-09-02T07:34:57.98330094Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:34:57Z"}
{"ClientAddr":"103.67.97.179:64538","ClientHost":"103.67.97.179","ClientPort":"64538","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":29462,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":29462,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":25,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/login","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:34:58.303076989Z","StartUTC":"2024-09-02T07:34:58.303076989Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:34:58Z"}
{"ClientAddr":"103.67.97.179:64538","ClientHost":"103.67.97.179","ClientPort":"64538","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":41029,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":41029,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":26,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/login","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:34:58.62559481Z","StartUTC":"2024-09-02T07:34:58.62559481Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:34:58Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":27251,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":27251,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":27,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:35:01.760289793Z","StartUTC":"2024-09-02T07:35:01.760289793Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:35:01Z"}
{"ClientAddr":"103.67.97.179:64158","ClientHost":"103.67.97.179","ClientPort":"64158","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":349929,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":349929,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":28,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:35:02.081698075Z","StartUTC":"2024-09-02T07:35:02.081698075Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:35:02Z"}
{"entryPointName":"web","level":"error","msg":"accept tcp [::]:8080: use of closed network connection","time":"2024-09-02T07:36:25Z"}
{"entryPointName":"web","level":"error","msg":"Error while starting server: accept tcp [::]:8080: use of closed network connection","time":"2024-09-02T07:36:25Z"}
{"level":"error","msg":"Failed to list containers for docker, error Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json?limit=0\": context canceled","providerName":"docker","time":"2024-09-02T07:36:25Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:36:27Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:36:27Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:36:27Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:36:27Z"}
time="2024-09-02T07:43:30Z" level=info msg="Configuration loaded from environment variables."
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:43:31Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:43:31Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:43:31Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:43:31Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:43:33Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:43:33Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:43:33Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:43:33Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:43:34Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:43:34Z"}
{"entryPointName":"websecure","level":"error","msg":"entryPoint \"websecure\" doesn't exist","routerName":"grafana_https@file","time":"2024-09-02T07:43:34Z"}
{"level":"error","msg":"no valid entryPoint for this router","routerName":"grafana_https@file","time":"2024-09-02T07:43:34Z"}
{"ClientAddr":"103.67.97.179:64982","ClientHost":"103.67.97.179","ClientPort":"64982","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":29924,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":29924,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":1,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:43:39.714706869Z","StartUTC":"2024-09-02T07:43:39.714706869Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:43:39Z"}
{"ClientAddr":"103.67.97.179:64982","ClientHost":"103.67.97.179","ClientPort":"64982","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":71186,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":71186,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":2,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:43:40.95752995Z","StartUTC":"2024-09-02T07:43:40.95752995Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:43:40Z"}
{"ClientAddr":"103.67.97.179:64982","ClientHost":"103.67.97.179","ClientPort":"64982","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":152845,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":152845,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":3,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:43:42.186896145Z","StartUTC":"2024-09-02T07:43:42.186896145Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:43:42Z"}
{"ClientAddr":"103.67.97.179:64982","ClientHost":"103.67.97.179","ClientPort":"64982","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":29938,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":29938,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":4,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:43:42.882375347Z","StartUTC":"2024-09-02T07:43:42.882375347Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:43:42Z"}
{"ClientAddr":"103.67.97.179:64982","ClientHost":"103.67.97.179","ClientPort":"64982","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":28114,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":28114,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":5,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:43:43.211399973Z","StartUTC":"2024-09-02T07:43:43.211399973Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:43:43Z"}
{"ClientAddr":"103.67.97.179:65000","ClientHost":"103.67.97.179","ClientPort":"65000","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":29283,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":29283,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":6,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/login","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:43:45.585564426Z","StartUTC":"2024-09-02T07:43:45.585564426Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:43:45Z"}
{"ClientAddr":"103.67.97.179:65000","ClientHost":"103.67.97.179","ClientPort":"65000","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":47066,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":47066,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":7,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/login","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:43:45.907747303Z","StartUTC":"2024-09-02T07:43:45.907747303Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:43:45Z"}
{"ClientAddr":"103.67.97.179:65000","ClientHost":"103.67.97.179","ClientPort":"65000","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":49799,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":49799,"RequestAddr":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com:8080","RequestContentSize":0,"RequestCount":8,"RequestHost":"ec2-13-51-167-65.eu-north-1.compute.amazonaws.com","RequestMethod":"GET","RequestPath":"/login","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:43:46.375232962Z","StartUTC":"2024-09-02T07:43:46.375232962Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:43:46Z"}
{"ClientAddr":"45.156.129.80:38626","ClientHost":"45.156.129.80","ClientPort":"38626","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":30813,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":30813,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":9,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:45:56.576759564Z","StartUTC":"2024-09-02T07:45:56.576759564Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:45:56Z"}
{"ClientAddr":"103.67.97.179:65247","ClientHost":"103.67.97.179","ClientPort":"65247","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":34962,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":34962,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":10,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:49:07.704415559Z","StartUTC":"2024-09-02T07:49:07.704415559Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:49:07Z"}
{"ClientAddr":"103.67.97.179:65247","ClientHost":"103.67.97.179","ClientPort":"65247","ClientUsername":"-","DownstreamContentSize":19,"DownstreamStatus":404,"Duration":31677,"GzipRatio":0,"OriginContentSize":0,"OriginDuration":0,"OriginStatus":0,"Overhead":31677,"RequestAddr":"13.51.167.65:8080","RequestContentSize":0,"RequestCount":11,"RequestHost":"13.51.167.65","RequestMethod":"GET","RequestPath":"/","RequestPort":"8080","RequestProtocol":"HTTP/1.1","RequestScheme":"http","RetryAttempts":0,"StartLocal":"2024-09-02T07:49:08.274347839Z","StartUTC":"2024-09-02T07:49:08.274347839Z","entryPointName":"web","level":"info","msg":"","time":"2024-09-02T07:49:08Z"}
```
| closed | 2024-09-02T07:59:05Z | 2024-09-03T08:55:16Z | https://github.com/cvat-ai/cvat/issues/8388 | [
"need info"
] | Somvit09 | 6 |
Gozargah/Marzban | api | 1,343 | Cross Distro Dockerfile | Some users reported problems with installing Marzban on Linux distros other than Ubuntu.
Since Marzban's Dockerfile uses apt-get, we must add `FROM ubuntu:24.04` to the Dockerfile header,
or even switch to Alpine to reduce the image and container size of the Marzban Docker image.
I guess that for Alpine we must change `apt-get` to `apk` and `bash` to `RUN /bin/sh` or `RUN apk add --no-cache bash`.
share your test results and opinions here | closed | 2024-10-07T09:34:45Z | 2024-10-07T10:23:28Z | https://github.com/Gozargah/Marzban/issues/1343 | [] | fodhelper | 1 |
piskvorky/gensim | nlp | 2,746 | Word2Vec ns_exponent cannot be changed from default | #### Problem description
I am trying to train Word2Vec and tune the `ns_exponent` hyperparameter. When I initialize the model, I set `ns_exponent = 0.5`, but find that it has reset to the default of `ns_exponent = 0.75` immediately after initializing.
I looked through the Word2Vec source code for any mentions of `ns_exponent`, but found no reason for the class to ignore my argument. I suspected the Vocabulary initialization may have something to do with it, but that seems to take its argument straight from the `__init__`. Neither do I believe that I am overriding the `ns_exponent` setting with one of the other parameters, because this occurs even when `ns_exponent` is the only one explicitly set.
#### Steps/code/corpus to reproduce
```
model = Word2Vec(ns_exponent = 0.5)
print(model.ns_exponent)
```
The printed output is:
```
0.75
```
and the resulting model's `ns_exponent` attribute is set to 0.75 as well.
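As a next diagnostic step, I also plan to check whether the value survives on the internal vocabulary object (sketch below; I have not verified that this attribute exists, hence the `getattr` fallbacks):
```
model = Word2Vec(ns_exponent=0.5)
print(model.ns_exponent)  # prints 0.75 instead of 0.5

# Check the internal vocabulary object, which receives ns_exponent in __init__
# (the attribute names here are an assumption on my part):
vocab = getattr(model, "vocabulary", None)
print(getattr(vocab, "ns_exponent", "no ns_exponent attribute on vocabulary"))
```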
#### Versions
```
Windows-10-10.0.18362-SP0
Python 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
NumPy 1.16.0
SciPy 1.1.0
gensim 3.6.0
FAST_VERSION 0
```
| closed | 2020-02-06T15:17:19Z | 2020-07-14T02:58:19Z | https://github.com/piskvorky/gensim/issues/2746 | [
"bug"
] | coopwilliams | 3 |
saulpw/visidata | pandas | 2,629 | VisiData counts "plots" instead of "points" | As a side effect of #2626, I looked at the screenshot:

And noticed that the count on the bottom right is "6012 plots". Shouldn't it be "points" instead? | closed | 2024-12-07T08:34:12Z | 2025-01-04T06:27:09Z | https://github.com/saulpw/visidata/issues/2629 | [
"bug",
"fixed"
] | cool-RR | 0 |
fbdesignpro/sweetviz | pandas | 174 | VisibleDeprecationWarning Exception | When I try to analyze a dataframe I get the following exception:
```python
AttributeError Traceback (most recent call last)
Cell In[7], line 1
----> 1 sweetviz.analyze(df)
File ~/Documentos/Projects_Software/reg/.venv/lib/python3.12/site-packages/sweetviz/sv_public.py:12, in analyze(source, target_feat, feat_cfg, pairwise_analysis)
8 def analyze(source: Union[pd.DataFrame, Tuple[pd.DataFrame, str]],
9 target_feat: str = None,
10 feat_cfg: FeatureConfig = None,
11 pairwise_analysis: str = 'auto'):
---> 12 report = sweetviz.DataframeReport(source, target_feat, None,
13 pairwise_analysis, feat_cfg)
14 return report
File ~/Documentos/Projects_Software/reg/.venv/lib/python3.12/site-packages/sweetviz/dataframe_report.py:277, in DataframeReport.__init__(self, source, target_feature_name, compare, pairwise_analysis, fc, verbosity)
274 for f in features_to_process:
275 # start = time.perf_counter()
276 self.progress_bar.set_description_str(f"Feature: {f.source.name}")
--> 277 self._features[f.source.name] = sa.analyze_feature_to_dictionary(f)
278 self.progress_bar.update(1)
279 # print(f"DONE FEATURE------> {f.source.name}"
280 # f" {(time.perf_counter() - start):.2f} {self._features[f.source.name]['type']}")
281 # self.progress_bar.set_description_str('[FEATURES DONE]')
282 # self.progress_bar.close()
283
284 # Wrap up summary
File ~/Documentos/Projects_Software/reg/.venv/lib/python3.12/site-packages/sweetviz/series_analyzer.py:142, in analyze_feature_to_dictionary(to_process)
140 # Perform full analysis on source/compare/target
141 if returned_feature_dict["type"] == FeatureType.TYPE_NUM:
--> 142 sweetviz.series_analyzer_numeric.analyze(to_process, returned_feature_dict)
143 elif returned_feature_dict["type"] == FeatureType.TYPE_CAT:
144 sweetviz.series_analyzer_cat.analyze(to_process, returned_feature_dict)
File ~/Documentos/Projects_Software/reg/.venv/lib/python3.12/site-packages/sweetviz/series_analyzer_numeric.py:102, in analyze(to_process, feature_dict)
98 do_stats_numeric(to_process.compare, compare_dict)
100 do_detail_numeric(to_process.source, to_process.source_counts, to_process.compare_counts, feature_dict)
--> 102 feature_dict["minigraph"] = GraphNumeric("mini", to_process)
103 feature_dict["detail_graphs"] = list()
104 for num_bins in [0, 5, 15, 30]:
File ~/Documentos/Projects_Software/reg/.venv/lib/python3.12/site-packages/sweetviz/graph_numeric.py:71, in GraphNumeric.__init__(self, which_graph, to_process)
67 normalizing_weights = norm_source
69 gap_percent = config["Graphs"].getfloat("summary_graph_categorical_gap")
---> 71 warnings.filterwarnings('ignore', category=np.VisibleDeprecationWarning)
72 self.hist_specs = axs.hist(plot_data, weights = normalizing_weights, bins=self.num_bins, \
73 rwidth = (100.0 - gap_percent) / 100.0)
74 warnings.filterwarnings('once', category=np.VisibleDeprecationWarning)
File ~/Documentos/Projects_Software/reg/.venv/lib/python3.12/site-packages/numpy/__init__.py:410, in __getattr__(attr)
407 import numpy.char as char
408 return char.chararray
--> 410 raise AttributeError("module {!r} has no attribute "
411 "{!r}".format(__name__, attr))
AttributeError: module 'numpy' has no attribute 'VisibleDeprecationWarning'
```
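One stopgap that might work until sweetviz is updated (untested on my side; it assumes NumPy ≥ 2.0, where the class moved to `np.exceptions`) is to restore the removed attribute before importing sweetviz:
```python
import numpy as np

# NumPy 2.0 removed np.VisibleDeprecationWarning; it now lives in np.exceptions.
# Re-alias it so sweetviz's warnings.filterwarnings(...) calls keep working.
if not hasattr(np, "VisibleDeprecationWarning"):
    np.VisibleDeprecationWarning = np.exceptions.VisibleDeprecationWarning

import sweetviz
report = sweetviz.analyze(df)  # same dataframe as above
```
Pinning `numpy<2` would presumably also avoid the error.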
Is there any way to workaround this bug? | open | 2024-07-13T13:48:50Z | 2024-08-05T23:49:09Z | https://github.com/fbdesignpro/sweetviz/issues/174 | [] | fccoelho | 2 |
FlareSolverr/FlareSolverr | api | 429 | [audiences] (updating) The cookies provided by FlareSolverr are not valid | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-07-16T14:14:34Z | 2022-07-17T00:37:08Z | https://github.com/FlareSolverr/FlareSolverr/issues/429 | [
"invalid"
] | baozaodetudou | 1 |
autogluon/autogluon | computer-vision | 3,913 | [BUG] Setting `ag_args_ensemble` `num_folds` to 0 results in error when `num_bag_folds >= 2` is set in the `.fit()`. | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [X] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [X] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
If you try to set `"ag_args_ensemble": {"num_folds": 0}` in the hyperparameters dictionary while `num_bag_folds=2` is set in `.fit()`, it results in an error. A minimal example is shown below (swap in your own dataset). I think that if `num_folds` is `0` it should skip the k-fold loop entirely and just train a single model, but this doesn't happen. Also, going the other way — leaving `num_bag_folds` unset and adding `num_folds: 2` to `ag_args_ensemble` — doesn't run bagged models.
I want this because I prefer to bag and run hyperparameter tuning on the boosted models, as that is quick, and not do so on any of the neural network models. I couldn't figure out how to specify that correctly in the settings.
**Expected behavior**
I expected it to break out of the k_fold loop and just train a single model. Instead it errors out with the logs in the section below under screenshots. I think it should check for the `num_folds` argument before creating the `BaggedEnsemble`.
**To Reproduce**
```python
import autogluon as ag
from autogluon.tabular import TabularDataset, TabularPredictor
hyperparams = {
    'GBM': {
        "ag_args": {"hyperparameter_tune_kwargs": {'num_trials': 2, 'scheduler': 'local', 'searcher': 'auto'}},
        "ag_args_ensemble": {"num_folds": 0}
    },
    # 'AG_AUTOMM': {
    #     "ag_args": {"hyperparameter_tune_kwargs": {'num_trials': 0}},
    #     "ag_args_fit": {"max_time_limit": 60, "num_gpus": 1},
    #     "ag_args_ensemble": {"num_folds": 0}
    # },
}

predictor = TabularPredictor(problem_type="regression",
                             label='label',
                             path="~/data/models/temp",
                             eval_metric="root_mean_squared_error").fit(
    train_data=df_train[feature_cols].iloc[0:1000],
    hyperparameters=hyperparams,
    num_bag_folds=2,
)
```
**Screenshots / Logs**
```
k_fold_end must be greater than k_fold_start, values: (0, 0)
Traceback (most recent call last):
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/abstract/model_trial.py", line 37, in model_trial
model = fit_and_save_model(
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/abstract/model_trial.py", line 96, in fit_and_save_model
model.fit(**fit_args, time_limit=time_left)
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/abstract/abstract_model.py", line 838, in fit
out = self._fit(**kwargs)
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/ensemble/stacker_ensemble_model.py", line 165, in _fit
return super()._fit(X=X, y=y, time_limit=time_limit, **kwargs)
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/ensemble/bagged_ensemble_model.py", line 211, in _fit
self._validate_bag_kwargs(
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/ensemble/bagged_ensemble_model.py", line 336, in _validate_bag_kwargs
raise ValueError(f"k_fold_end must be greater than k_fold_start, values: ({k_fold_end}, {k_fold_start})")
ValueError: k_fold_end must be greater than k_fold_start, values: (0, 0)
k_fold_end must be greater than k_fold_start, values: (0, 0)
Traceback (most recent call last):
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/abstract/model_trial.py", line 37, in model_trial
model = fit_and_save_model(
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/abstract/model_trial.py", line 96, in fit_and_save_model
model.fit(**fit_args, time_limit=time_left)
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/abstract/abstract_model.py", line 838, in fit
out = self._fit(**kwargs)
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/ensemble/stacker_ensemble_model.py", line 165, in _fit
return super()._fit(X=X, y=y, time_limit=time_limit, **kwargs)
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/ensemble/bagged_ensemble_model.py", line 211, in _fit
self._validate_bag_kwargs(
File "/home/ubuntu/miniconda/envs/opcity-ml/lib/python3.9/site-packages/autogluon/core/models/ensemble/bagged_ensemble_model.py", line 336, in _validate_bag_kwargs
raise ValueError(f"k_fold_end must be greater than k_fold_start, values: ({k_fold_end}, {k_fold_start})")
ValueError: k_fold_end must be greater than k_fold_start, values: (0, 0)
No model was trained during hyperparameter tuning LightGBM_BAG_L1... Skipping this model.
No base models to train on, skipping auxiliary stack level 2...
```

<details>
<summary>Installed Versions</summary>

```
INSTALLED VERSIONS
------------------
date : 2024-02-11
time : 05:13:25.987318
python : 3.9.16.final.0
OS : Linux
OS-release : 5.15.0-1053-aws
Version : #58~20.04.1-Ubuntu SMP Mon Jan 22 17:15:01 UTC 2024
machine : x86_64
processor : x86_64
num_cores : 64
cpu_ram_mb : 255018
cuda version : 12.525.147.05
num_gpus : 1
gpu_ram_mb : [21576]
avail_disk_size_mb : 425362
accelerate : 0.16.0
autogluon : 0.8.2
autogluon.common : 0.8.2
autogluon.core : 0.8.2
autogluon.features : 0.8.2
autogluon.multimodal : 0.8.2
autogluon.tabular : 0.8.2
autogluon.timeseries : 0.8.2
boto3 : 1.24.34
catboost : 1.2
defusedxml : 0.7.1
evaluate : 0.3.0
fastai : 2.7.12
gluonts : 0.13.3
grpcio : 1.46.3
hyperopt : 0.2.7
imodels : 1.3.18
jinja2 : 3.1.2
joblib : 1.3.1
jsonschema : 4.17.3
lightgbm : 3.3.5
matplotlib : 3.7.2
mlforecast : 0.7.3
networkx : 3.1
nlpaug : 1.1.11
nltk : 3.6.7
nptyping : 2.4.1
numpy : 1.22.4
omegaconf : 2.2.3
onnxruntime-gpu : 1.13.1
openmim : 0.3.9
pandas : 1.5.1
Pillow : 9.4.0
psutil : 5.9.4
pydantic : 1.10.12
PyMuPDF : None
pytesseract : 0.3.10
pytorch-lightning : 1.9.5
pytorch-metric-learning: 1.7.3
ray : 2.3.1
requests : 2.28.0
scikit-image : 0.19.3
scikit-learn : 1.1.1
scikit-learn-intelex : 2023.1.1
scipy : 1.8.1
seqeval : 1.2.2
setuptools : 59.5.0
skl2onnx : 1.13
statsforecast : 1.4.0
statsmodels : 0.13.2
tabpfn : 0.1.10
tensorboard : 2.9.0
text-unidecode : 1.3
timm : 0.9.5
torch : 1.12.0
torchmetrics : 0.11.4
torchvision : 0.13.0
tqdm : 4.65.0
transformers : 4.26.1
ujson : 5.3.0
vowpalwabbit : 9.4.0
xgboost : 1.7.4
E0211 05:13:25.987650804 52419 fork_posix.cc:76] Other threads are currently calling into gRPC, skipping fork() handlers
```
</details>
| closed | 2024-02-11T05:20:59Z | 2025-01-10T21:42:25Z | https://github.com/autogluon/autogluon/issues/3913 | [
"API & Doc",
"module: tabular",
"Needs Triage"
] | JSpenced | 1 |
miguelgrinberg/python-socketio | asyncio | 82 | Horizontal scaling and publishing to a SID | The notes on horizontal scaling say (**emphasis mine**):
> If multiple Socket.IO servers are connected to the same message queue, **they automatically communicate with each other and manage a combined client list**, without any need for additional configuration. When a load balancer such as nginx is used, **this provides virtually unlimited scaling capabilities for the server**.
I'm concerned that emitting to "SID rooms" is expensive. Here's my scenario, where I'm handling and transforming the pub/sub of events for individual clients:
* two clients subscribe to an event, `user:1:updated`; one requests the fields `[id, name]` while the other requests `[id, name, location, friends: {id, name}]`
* I can manage the relationship between SIDs and event subscriptions at the process level
* when publishing the data to a SID room, other processes are notified of the message even if they're not hosting the SID (right?).
This reflects the current approach of GraphQL subscription frameworks (specific clients can request specific formats). My understanding then is that if I have 20k clients evenly split across 4 processes all subscribed to 1 event, each process will emit 5k messages, and be interrupted by irrelevant data 15k times.
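To make that concrete, the pattern I have in mind looks roughly like this (sketch only; the event names and the process-local `subscriptions` dict are illustrative):
```python
import socketio

mgr = socketio.AsyncRedisManager('redis://localhost:6379/0')
sio = socketio.AsyncServer(client_manager=mgr)

# process-local bookkeeping: sid -> fields that client asked for
subscriptions = {}

@sio.on('subscribe')
async def subscribe(sid, data):
    subscriptions[sid] = data['fields']

async def publish_user_updated(user):
    # one emit per subscribed sid; each emit goes through the shared Redis
    # queue, so every server process sees it, not only the one hosting that sid
    for sid, fields in subscriptions.items():
        await sio.emit('user:1:updated', {f: user[f] for f in fields}, room=sid)
```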
I'm looking at the [`AsyncRedisManager`](https://github.com/miguelgrinberg/python-socketio/blob/master/socketio/asyncio_redis_manager.py#L72) and the [`Server`](https://github.com/miguelgrinberg/python-socketio/blob/master/socketio/server.py#L220) that [`AsyncServer`](https://github.com/miguelgrinberg/python-socketio/blob/master/socketio/asyncio_server.py#L108) subclasses from.
If I understand this right, SocketIO is unable to scale horizontally when publishing primarily to SIDs with the current implementation. Is this right? | closed | 2017-03-12T03:08:26Z | 2020-02-17T23:50:55Z | https://github.com/miguelgrinberg/python-socketio/issues/82 | [
"enhancement"
] | dfee | 14 |
nonebot/nonebot2 | fastapi | 2,863 | Plugin: ba-tools | ### PyPI 项目名
nonebot-plugin-ba-tools
### 插件 import 包名
nonebot_plugin_ba_tools
### 标签
[{"label":"蔚蓝档案","color":"#00fcf8"}]
### 插件配置项
_No Response_ | closed | 2024-08-10T08:19:35Z | 2024-08-13T11:22:48Z | https://github.com/nonebot/nonebot2/issues/2863 | [
"Plugin"
] | hanasa2023 | 9 |
graphql-python/graphene-mongo | graphql | 76 | Bug in version 0.2.0 all_posts = MongoengineConnectionField(Posts) | After updating graphene-mongo from 0.18 to 0.2.0, I could no longer get `ListField(StringField())` types in the query, so I reverted back to 0.18.
For example:
query{
allPosts(here I can have all fields of posts collection except 'selectedtags' and 'postpics')
}
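The schema is wired up roughly like this (sketch; the type and module names here are illustrative, not my exact code):
```python
import graphene
from graphene_mongo import MongoengineObjectType, MongoengineConnectionField
from post_model import Posts

class PostsType(MongoengineObjectType):
    class Meta:
        model = Posts
        interfaces = (graphene.relay.Node,)

class Query(graphene.ObjectType):
    all_posts = MongoengineConnectionField(PostsType)

schema = graphene.Schema(query=Query)
```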
in post_model.py I have following
```python
class Posts(Document):
    meta = {'collection': 'posts'}
    categoryid = ReferenceField(Categories)
    cityid = ReferenceField(City)
    pdate = DateTimeField()
    content = DictField(required=True)
    selectedtags = ListField(StringField())
    postpics = ListField(StringField())
    featured = BooleanField()
```
 | closed | 2019-04-05T04:55:22Z | 2019-04-16T10:38:54Z | https://github.com/graphql-python/graphene-mongo/issues/76 | [
"work in progress"
] | anilwarbhe | 6 |
CTFd/CTFd | flask | 1,797 | CTFd pages route is relative when it shouldn't be | For some reason CTFd page routes are being generated in the navbar as relative when they shouldn't be. E.g. (`page` instead of `/page`). | closed | 2021-02-09T01:17:49Z | 2021-02-09T08:03:18Z | https://github.com/CTFd/CTFd/issues/1797 | [
"easy"
] | ColdHeat | 0 |
CTFd/CTFd | flask | 2,310 | Undocumented and unauthenticated API notifications endpoint ? | **Environment**:
- CTFd Version/Commit: `2474d6000dcaa9bec64d40aebcb8b6818dbe629c`, version `3.5.2`
- Operating System: Linux (6.1.27-1kali1 x86_64)
- Web Browser and Version: `Firefox 102.10.0esr`, trusted not relevant here
**What happened?**
When reviewing the API for [a Golang client](https://github.com/pandatix/go-ctfd), I discovered [the /api/v1/notifications endpoints](https://docs.ctfd.io/docs/api/redoc#tag/notifications).
Nevertheless, it does not contain the HEAD method, which is documented in the Swagger UI (screenshot follows).
<img src="https://imgur.com/Hu8UlDD.png">
Using the undocumented `HEAD` one, you can issue requests that don't require authentication. An example without parameters follows.
``` bash
$ curl -I http://127.0.0.1:4000/api/v1/notifications
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Result-Count: 1
Set-Cookie: session=9ddc61ea-2fdd-4f08-9a9a-2ea8546aa41f.mRLdOxPbeXrDbi3zNSWL5p95ZxE; HttpOnly; Path=/; SameSite=Lax
Content-Length: 0
Server: Werkzeug/1.0.1 Python/3.11.2
Date: Wed, 24 May 2023 22:52:50 GMT
```
Thanks to the query parameters, you could blindly fetch notification content, the same way blind SQL injections are performed.
<img src="https://imgur.com/J6GqcVP.png">
For instance, let's say you iterate over candidate values of the `title` parameter. The response header `Result-Count` should stay at 0 until you match one or more notification titles.
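To illustrate, a blind enumeration loop could look like the following (sketch only; the target URL and candidate titles are made up):
```python
import requests

BASE = "http://ctfd.example/api/v1/notifications"  # hypothetical target

def title_exists(title: str) -> bool:
    # Unauthenticated HEAD request: Result-Count > 0 means at least one
    # notification matches this title.
    r = requests.head(BASE, params={"title": title})
    return int(r.headers.get("Result-Count", "0")) > 0

for candidate in ["Challenge hints released", "Maintenance window", "Flag format update"]:
    if title_exists(candidate):
        print("matched:", candidate)
```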
I have to admit that I don't know whether we have to consider it a vulnerability, as it leaks data without authentication and goes against the expected and documented behavior, or just a mechanism needed to make the notifications work.
If we consider it an attack, the attacker requires no account (there may be a problem here) and a blind brute-force attack on the endpoint to get the title/content/user_id/team_id/... Let's say we write notification titles with an alphabet of 62 chars (26 lowercase, 26 uppercase, 10 digits) and a minimum length of 10 chars: the worst case is 62^10 combinations until the blind brute-force works, possibly taking forever. Apply it to the most valuable parameter, i.e. `content`, and you see it is unlikely to happen. This may produce the CVSS v3.1 vector [CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N) with a score of 5.3 (MEDIUM).
**What did you expect to happen?**
Get redirected to the login form, maybe ? :shrug:
**How to reproduce your issue**
Steps 1 to 3 are optional if you have a running CTFd instance nearby.
1. Clone the repository
2. `cd CTFd && make serve`
3. Setup CTFd with whatever configuration
4. Make the previous curl call
| open | 2023-05-24T23:10:54Z | 2023-08-07T15:53:52Z | https://github.com/CTFd/CTFd/issues/2310 | [] | pandatix | 3 |
sigmavirus24/github3.py | rest-api | 1,121 | Support for X-GitHub-Api-Version HTTP header | GitHub recently announced REST API versioning [1], along with planned breaking changes to the API. To help clients avoid breakage, they created the X-GitHub-Api-Version header so that we can pin an API version and migrate on our own schedule. It would be good to have this supported in this library.
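In the meantime, a workaround might be to set the header on the underlying session directly (sketch; untested, token handling elided):
```python
import github3

gh = github3.login(token="<token>")
# Stopgap until the library exposes this natively: pin the REST API version
# on every request made through this client.
gh.session.headers.update({"X-GitHub-Api-Version": "2022-11-28"})
```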
[1] https://github.blog/2022-11-28-to-infinity-and-beyond-enabling-the-future-of-githubs-rest-api-with-api-versioning/ | open | 2022-11-29T10:49:13Z | 2023-03-02T16:27:49Z | https://github.com/sigmavirus24/github3.py/issues/1121 | [] | cknowles-moz | 3 |
KrishnaswamyLab/PHATE | data-visualization | 110 | conda install phate fails | <!--
If you are using PHATE in R, please submit an issue at https://github.com/KrishnaswamyLab/phateR
-->
**Describe the bug**
Can't install `phate` from `bioconda`.
**To Reproduce**
`mamba install -c bioconda phate`... mamba and conda are (mostly) interchangeable so this error should occur in `conda` as well.
**Expected behavior**
An installed version of phate
**Actual behavior**
````
Encountered problems while solving:
- nothing provides graphtools >=1.3.1 needed by phate-0.4.5-py_0
````
I assume that graphtools is a pip requirement of phate. It may be a version that got bumped and didn't get updated on pypi, or the pip requirement is specified as a conda requirement in the bioconda repo.
Just figured you guys should know. I can work around it by building my own graphtools.
| closed | 2022-02-02T00:11:18Z | 2022-02-02T02:52:15Z | https://github.com/KrishnaswamyLab/PHATE/issues/110 | [
"bug"
] | stanleyjs | 2 |
firerpa/lamda | automation | 98 | Does lamda support Python 3.12? | Hello author, I'd like to ask: does the new version of lamda support a Python 3.12.x environment? | closed | 2024-11-23T09:42:13Z | 2025-01-18T04:39:07Z | https://github.com/firerpa/lamda/issues/98 | [] | xxx252525 | 1 |
gunthercox/ChatterBot | machine-learning | 1,615 | is it possible to generate reply based on multiple questions | Bot: what is your favorite color?
User: red
Bot: do you like sweet?
User: yes
Bot: do you like fruits?
User: yes
Bot: i guess you like apple?
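For reference, I can feed the whole exchange to `ListTrainer` as one linear training conversation (sketch below), but as far as I can tell the reply is still selected from the latest input alone:
```python
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

bot = ChatBot("FruitBot")
trainer = ListTrainer(bot)

# the exchange above, flattened into one training conversation
trainer.train([
    "what is your favorite color?", "red",
    "do you like sweet?", "yes",
    "do you like fruits?", "yes",
    "i guess you like apple?",
])

print(bot.get_response("yes"))  # only this latest input seems to drive the match
```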
Is ChatterBot able to perform this kind of conversation?
| closed | 2019-02-09T09:38:40Z | 2019-12-11T11:35:28Z | https://github.com/gunthercox/ChatterBot/issues/1615 | [] | kangaroocancode | 4 |
ShishirPatil/gorilla | api | 394 | RAFT Add support for resuming `raft.py` dataset generation in case of interruption | The `raft.py` dataset generation script takes a long time to run and can be interrupted for various reasons (laptop going to sleep, network errors, API unavailable temporarily, ...). In that case, the script needs to be run all over again.
The `raft.py` script should be able to resume progress after an interruption to avoid starting all over again.
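One possible shape for this (sketch only, not tied to the current `raft.py` internals; `generate_for_chunk` and the checkpoint path are illustrative):
```python
import json, os

CHECKPOINT = "raft_checkpoint.jsonl"  # illustrative path

def generate_for_chunk(chunk):
    # placeholder for the existing per-chunk QA generation in raft.py
    ...

def completed_chunk_ids():
    if not os.path.exists(CHECKPOINT):
        return set()
    with open(CHECKPOINT) as f:
        return {json.loads(line)["chunk_id"] for line in f}

def generate_all(chunks):
    done = completed_chunk_ids()
    with open(CHECKPOINT, "a") as ckpt:
        for chunk in chunks:
            if chunk["id"] in done:
                continue  # generated before the interruption, skip it
            qa = generate_for_chunk(chunk)
            ckpt.write(json.dumps({"chunk_id": chunk["id"], "qa": qa}) + "\n")
            ckpt.flush()  # flush so an interruption loses at most the current chunk
```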
| closed | 2024-04-27T17:10:58Z | 2024-05-04T05:59:47Z | https://github.com/ShishirPatil/gorilla/issues/394 | [
"enhancement"
] | cedricvidal | 0 |
wkentaro/labelme | deep-learning | 673 | how to solve this problem? | usage: labelme2coco.py [-h] --labels LABELS input_dir output_dir
labelme2coco.py: error: the following arguments are required: input_dir, output_dir, --labels | closed | 2020-05-30T08:15:28Z | 2020-06-04T08:08:19Z | https://github.com/wkentaro/labelme/issues/673 | [] | Aiz-wallenstein | 2 |
scikit-learn/scikit-learn | python | 30,952 | Improve TargetEncoder predict time for single rows and many categories | As reported [here](https://tiago.rio.br/work/willbank/account/patching-scikit-learn-improve-api-performance/), `TargetEncoder.transform` is optimized for large `n_samples`. But in deployment mode, it might be single rows that matter. Combined with high cardinality of the categories, `transform` can be slow, but has room for improvement. | open | 2025-03-06T21:57:03Z | 2025-03-13T17:24:02Z | https://github.com/scikit-learn/scikit-learn/issues/30952 | [
"Performance",
"module:preprocessing"
] | lorentzenchr | 1 |
s3rius/FastAPI-template | asyncio | 174 | Project starts with Docker error. | When starting the project with `docker-compose -f deploy/docker-compose.yml --project-directory . up --build`, an error occurs:
Status: Downloaded newer image for postgres:13.8-bullseye
Pulling migrator (hrm:latest)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]y
Pulling migrator (hrm:latest)...
ERROR: pull access denied for hrm, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
(hrm-py3.10) lab42@lab42-Linux:~/Рабочий стол/dev/hrm$
| open | 2023-06-22T09:04:10Z | 2024-07-26T07:51:42Z | https://github.com/s3rius/FastAPI-template/issues/174 | [] | shpilevskiyevgeniy | 23 |
widgetti/solara | jupyter | 808 | Update our code snippets | Many of the old code examples on our website, like https://github.com/widgetti/solara/blob/master/solara/website/pages/documentation/examples/utilities/countdown_timer.py (found at https://solara.dev/documentation/examples/utilities/countdown_timer ),
use old styles:
* `from solara.alias import rv` — instead, we can use `solara.v`
* `with solara.VBox() as main:` + `return main` can be completely skipped. We don't need a return value, and the VBox (which should be solara.Column instead) will be added automatically. A before/after sketch follows below.
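A minimal before/after illustration (the component body is made up, just to show both points):
```python
# old style
import solara
from solara.alias import rv

@solara.component
def Page():
    with solara.VBox() as main:
        rv.Btn(children=["Click me"])
    return main
```
```python
# new style
import solara

@solara.component
def Page():
    with solara.Column():
        solara.v.Btn(children=["Click me"])
```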
I think we should update all of them, so people and LLM's don't learn the old way of doing things. | open | 2024-10-02T14:48:33Z | 2024-10-28T12:00:26Z | https://github.com/widgetti/solara/issues/808 | [] | maartenbreddels | 2 |
roboflow/supervision | deep-learning | 743 | Using ByteTrack to track frames with moving detections AND moving backgrounds (i.e camera movement). | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hi,
I'm having a possible issue using ByteTrack on a 63-frame test clip to track two objects while both the objects and the camera observing them move. The bounding box model works very well, but when implementing ByteTrack the bounding boxes the tracker returns are not usable.
There is only one instance of each object in each frame and detection confidence is set to 0.5. I've attached a screenshot of a printout of the detections before and after using ByteTrack. "Without" in the image means without ByteTrack; those detections are all over 90% confidence (to my eye, close to 100%). Notably, the last frame ignores an object. This is with match_thresh=0.8, and it is also several frames in.

The tracker correctly tracks the objects when match_thresh is set very close to 1, but it grossly misestimates the size of the boxes instead of using the originals. Otherwise (at the default) it either entirely ignores a proper detection or starts a new track. I'm trying to find more in the documentation, but I'm struggling to understand why the boxes are being resized rather than the original detections being used. So, without going deeper and testing, I am asking here.
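For reference, the tracking loop is essentially this (sketch; `detections_per_frame` stands in for my per-frame `sv.Detections` objects):
```python
import supervision as sv

tracker = sv.ByteTrack(match_thresh=0.8)  # the parameter I've been varying

detections_per_frame = []  # filled with one sv.Detections per frame in my real code
for frame_detections in detections_per_frame:
    tracked = tracker.update_with_detections(frame_detections)
    print(tracked.xyxy, tracked.tracker_id)  # boxes and track ids per frame
```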
I have workarounds for my use case, but I am more curious to see if there is something I am missing or if there is an issue with the implementation (I haven't used ByteTrack from scratch, but will probably get around to it in the next week or so and update if needed).
Any pointers, questions, or just a "read / do this you dummy" are welcome.
### Additional
_No response_ | closed | 2024-01-18T07:07:10Z | 2024-01-19T02:17:04Z | https://github.com/roboflow/supervision/issues/743 | [
"question"
] | adaiale | 2 |
tensorflow/tensor2tensor | deep-learning | 1,298 | Python 3 compatibility issue with `self_generated_targets` | ### Description
I get `AssertionError` when trying to return `self_generated_targets` in a model.
### Environment information
```
OS: macOS 10.13.4
$ pip freeze | grep tensor
mesh-tensorflow==0.0.4
tensor2tensor==1.11.0
tensorboard==1.12.0
tensorflow==1.12.0
tensorflow-metadata==0.9.0
tensorflow-probability==0.5.0
$ python -V
Python 3.6.4
```
### For bugs: error logs
```
File "/Users/ywkim/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 1266, in wrapping_model_fn
use_tpu=use_tpu)
File "/Users/ywkim/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 1340, in estimator_model_fn
"Found {}".format(logits.keys())
AssertionError: Expect only key 'logits' when there is 'self_generated_targets'. Found dict_keys(['logits'])
```
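My guess at the root cause is the Python 2 vs 3 difference in how `dict.keys()` compares to a list (illustration below; I haven't confirmed this is the exact assertion used):
```python
logits = {"logits": None}

print(logits.keys() == ["logits"])       # True on Python 2, False on Python 3 (dict_keys vs list)
print(set(logits.keys()) == {"logits"})  # True on both
```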
| closed | 2018-12-13T06:46:14Z | 2018-12-14T02:38:56Z | https://github.com/tensorflow/tensor2tensor/issues/1298 | [] | ywkim | 0 |
polakowo/vectorbt | data-visualization | 104 | Feature Request: Walk Forward Analysis | Would it be possible to integrate/support walk forward analysis in a future version of vectorbt? | closed | 2021-02-21T21:37:42Z | 2021-03-09T16:20:37Z | https://github.com/polakowo/vectorbt/issues/104 | [
"enhancement"
] | elliottvillars | 1 |
talkpython/modern-apis-with-fastapi | rest-api | 5 | Because UV acorns looks cool but can't serve Python code... :) | ## TRANSCRIPT CORRECTIONS SUGGESTIONS
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300303
```diff
- 4:38 we're gonna come down here and say "uviacorn, run this application". And we could
+ 4:38 we're gonna come down here and say "uvicorn, run this application". And we could
- 5:09 FastAPI and uviacorn, create a simple method and call uviacorn
+ 5:09 FastAPI and uvicorn, create a simple method and call uvicorn
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300304
```diff
- 0:14 so we're gonna use uviacorn,
+ 0:14 so we're gonna use uvicorn,
- 0:35 And then we just say "uviacorn.run"
+ 0:35 And then we just say "uvicorn.run"
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300408
```diff
- 1:48 Uviacorn is one of them.
+ 1:48 Uvicorn is one of them.
- 3:04 Uviacorn. And That's a pretty awesome logo.
+ 3:04 Uvicorn. And That's a pretty awesome logo.
- 3:34 is based upon. So here's Uviacorn, we're gonna be using that. This is one
+ 3:34 is based upon. So here's Uvicorn, we're gonna be using that. This is one
- 4:12 so we have uviacorn,
+ 4:12 so we have uvicorn,
- 4:58 might as well run it on uviacorn,
+ 4:58 might as well run it on uvicorn,
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300502
```diff
- 1:05 we'll have FastAPI, uviacorn.
+ 1:05 we'll have FastAPI, uvicorn.
- 1:28 thing going. So we'll import FastAPI and we'll import uviacorn,
+ 1:28 thing going. So we'll import FastAPI and we'll import uvicorn,
- 1:56 come down here and we'll do a uviacorn, run, API, port,
+ 1:56 come down here and we'll do a uvicorn, run, API, port,
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300506
```diff
- 5:46 Come down here. We would say "uviacorn",
+ 5:46 Come down here. We would say "uvicorn",
- 5:53 like this. There we go. uviacorn.
+ 5:53 like this. There we go. uvicorn.
- 6:25 Now I go down to the terminal and I uviacorn it. It's back to working, okay.
+ 6:25 Now I go down to the terminal and I uvicorn it. It's back to working, okay.
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300805
```diff
- 1:02 You saw that we use you uviacorn,
+ 1:02 You saw that we use you uvicorn,
- 1:41 application over in uviacorn,
+ 1:41 application over in uvicorn,
- 1:53 It's gonna be this uviacorn process. And in fact,
+ 1:53 It's gonna be this uvicorn process. And in fact,
- 2:59 We're gonna install uviacorn and our Python Web app,
+ 2:59 We're gonna install uvicorn and our Python Web app,
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300807
```diff
- 3:15 We don't have the libraries needed to run, set up like uviacorn, or
+ 3:15 We don't have the libraries needed to run, set up like uvicorn, or
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300808
```diff
- 3:10 So uviacorn is running on our server as we hoped.
+ 3:10 So uvicorn is running on our server as we hoped.
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300810
```diff
- 0:33 We're going to run four copies of uviacorn as workers.
+ 0:33 We're going to run four copies of uvicorn as workers.
- 0:51 but a ASGI, uviacorn ones.
+ 0:51 but a ASGI, uvicorn ones.
- 1:36 and uviacorn on the server require these two libraries as well.
+ 1:36 and uvicorn on the server require these two libraries as well.
- 3:56 still. So perfect. We've now set up Gunicorn And uviacorn to always
+ 3:56 still. So perfect. We've now set up Gunicorn And uvicorn to always
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300811
```diff
- 1:12 which will fan it out to the uviacorn workers,
+ 1:12 which will fan it out to the uvicorn workers,
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300902
```diff
- 0:10 so we're using uviacorn as the server.
+ 0:10 so we're using uvicorn as the server.
- 0:38 we just have to call uviacorn dot run and pass it
+ 0:38 we just have to call uvicorn dot run and pass it
```
https://training.talkpython.fm/courses/transcript/getting-started-with-fastapi/lecture/300909
```diff
- 0:19 Gunicorn is going to run a bunch of uviacorn worker processes
+ 0:19 Gunicorn is going to run a bunch of uvicorn worker processes
- 0:25 like this and over in uviacorn,
+ 0:25 like this and over in uvicorn,
``` | closed | 2021-03-12T20:27:16Z | 2021-03-16T23:42:08Z | https://github.com/talkpython/modern-apis-with-fastapi/issues/5 | [] | denis-roy | 2 |
ahmedfgad/GeneticAlgorithmPython | numpy | 64 | Solution_Fitness array and solutions array have different lengths. | I am using pygad (for GA) to find a combination of solutions that satisfies my conditions. I have code that runs 15 generations with a population of 40. When the GA stops running, the size of the <solutions> array is 640, whereas the <fitness> array is 600. I am looking for a single array that holds the solutions for all trials with the fitness array next to it; however, I was expecting the two to be equal in length. Maybe I am doing something wrong? | open | 2021-09-13T07:23:42Z | 2021-09-28T17:37:28Z | https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/64 | [
"bug"
] | javid-b | 4 |
ultralytics/yolov5 | deep-learning | 13,203 | Overlapping Bounding Boxes of Different Classes | ### Search before asking
- [ ] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I annotate the small Class B bounding box so that it is covered by the Class A bounding box:

Despite increasing the amount of training data, detection after training produces a Class B bounding box of the same size as the Class A bounding box.

Here are my questions:
Is this issue the same as the one discussed in Issue #2172?
Do you know if this has been improved in YOLOv8?
### Additional
_No response_ | open | 2024-07-20T03:00:00Z | 2024-10-20T19:50:24Z | https://github.com/ultralytics/yolov5/issues/13203 | [
"question"
] | toigni | 2 |
strawberry-graphql/strawberry-django | graphql | 541 | Global ID mapping does not work with manual id assignment on type in relay django | According to the documentation, you can define a type like this:
```py
@strawberry_django.type(Author, filters=AuthorFilter, order=AuthorOrder)
class AuthorNode(strawberry.relay.Node):
id: strawberry.relay.GlobalID = strawberry_django.field()
books_connection: ListConnectionWithTotalCount[Annotated["BookNode", strawberry.lazy(
"graph_api.gql.nodes.book_node"
)]] = strawberry_django.connection(
field_name="books",
extensions=[IsAuthenticated()],
)
```
The response comes back with primary-key values as ids:
```json
{
"data": {
"authorsConnection": {
"edges": [
{
"node": {
"id": "1",
}
},
{
"node": {
"id": "2",
}
}
],
"totalCount": 2
}
}
}
```
However, this works properly:
```py
@strawberry_django.type(Author, filters=AuthorFilter, order=AuthorOrder, fields=["id"]) # note that I have set the id via fields parameter instead of assigning it as a field
class AuthorNode(strawberry.relay.Node):
books_connection: ListConnectionWithTotalCount[Annotated["BookNode", strawberry.lazy(
"graph_api.gql.nodes.book_node"
)]] = strawberry_django.connection(
field_name="books",
extensions=[IsAuthenticated()],
)
```
| open | 2024-06-03T10:20:37Z | 2024-06-10T18:29:59Z | https://github.com/strawberry-graphql/strawberry-django/issues/541 | [
"question"
] | Elawphant | 3 |
pallets/quart | asyncio | 262 | `quart.redirect` documentation and source code mismatch | The documentation says that [`quart.redirect`](https://pgjones.gitlab.io/quart/reference/source/quart.html?highlight=redirect#quart.redirect) has 3 parameters:
- location – the location the response should redirect to.
- code – the redirect status code. defaults to 302.
- Response (class) – a Response class to use when instantiating a response. The default is werkzeug.wrappers.Response if unspecified.
However, when [inspecting the source code](https://github.com/pallets/quart/blob/4bba6b4bb00600283e8eb2d264a22ca3037784cf/src/quart/helpers.py#L401C4-L401C4), this does not seem to be true:
```python
def redirect(location: str, code: int = 302) -> WerkzeugResponse:
"""Redirect to the location with the status code."""
if current_app:
return current_app.redirect(location, code=code)
return werkzeug_redirect(location, code=code)
``` | closed | 2023-08-17T09:23:00Z | 2023-11-08T00:17:35Z | https://github.com/pallets/quart/issues/262 | [] | fabge | 2 |