repo_name
stringlengths 9
75
| topic
stringclasses 30
values | issue_number
int64 1
203k
| title
stringlengths 1
976
| body
stringlengths 0
254k
| state
stringclasses 2
values | created_at
stringlengths 20
20
| updated_at
stringlengths 20
20
| url
stringlengths 38
105
| labels
sequencelengths 0
9
| user_login
stringlengths 1
39
| comments_count
int64 0
452
|
---|---|---|---|---|---|---|---|---|---|---|---|
arnaudmiribel/streamlit-extras | streamlit | 96 | Open in GitHub sticky button | Would be great to have an extra that makes an app play well with GitHub, and:
- adds a sticky github button to reach the source code of an app `Open in GitHub` sorta
- optionally adds the `.` trigger to open github.dev, too
API:
- `add_sticky_github_button(label="Open in GitHub", owner="arnaudmiribel", repository="streamlit-extras")`
- `add_sticky_github_button(label="Open in GitHub", repository="arnaudmiribel/streamlit-extras")` | open | 2022-11-25T14:16:57Z | 2022-12-09T11:14:11Z | https://github.com/arnaudmiribel/streamlit-extras/issues/96 | [
"new-extra"
] | arnaudmiribel | 3 |
MaartenGr/BERTopic | nlp | 1,763 | How to stabilize number of topics across runs? | Hi, there! Thank you very much for the great package!
I understand that BERTopic is stochastic so it's impossible to keep the topic number constant across runs. However, while keeping input and all hyperparameters constant (including random_state for UMAP), BERTopic can generate vastly different number of topics across runs (e.g. 8 topics for one iteration and 20 for the next). Is there any way to help stabilize the number of topics across runs?
| closed | 2024-01-20T14:37:37Z | 2024-01-21T05:04:39Z | https://github.com/MaartenGr/BERTopic/issues/1763 | [] | MaggieMeow | 2 |
tortoise/tortoise-orm | asyncio | 876 | Slow query log |
I want to save the tortoise's log to a file
But I found that db_client logs are all debug query logs
I hope the config of slow query and threshold can be added to the tortoise config
In db_client use logging.WARNING
| open | 2021-08-25T07:02:49Z | 2021-08-25T07:02:49Z | https://github.com/tortoise/tortoise-orm/issues/876 | [] | Ailibert | 0 |
JaidedAI/EasyOCR | deep-learning | 1,053 | height_ths does not work | All the boxes are almost the same height, but no matter how small height_ths I set, it just does not work.


and it will detect a box like:

| open | 2023-06-18T17:21:05Z | 2023-06-19T10:50:21Z | https://github.com/JaidedAI/EasyOCR/issues/1053 | [] | Allen-Cee | 1 |
keras-team/keras | pytorch | 20,052 | Is 2D CNN's description correct ? | > This layer creates a convolution kernel that is convolved with the layer
input over a single spatial (or temporal) dimension to produce a tensor of
outputs. If `use_bias` is True, a bias vector is created and added to the
outputs. Finally, if `activation` is not `None`, it is applied to the
outputs as well.
[keras docs](https://github.com/keras-team/keras/blob/v3.4.1/keras/src/layers/convolutional/conv2d.py#L5)
Isn't convolved over *2 spatial dimensions* ? | closed | 2024-07-28T07:25:48Z | 2024-07-29T15:31:01Z | https://github.com/keras-team/keras/issues/20052 | [
"type:docs"
] | newresu | 1 |
awesto/django-shop | django | 444 | 0.9.3 upgrade issue with djangocms-cascade==0.11.0 | After upgrading to 0.9.3 i get an error regarding djangocms-cascade.
In my setup I'm not using any cascade plugins so my settings for djangocms-cascade is `CMSPLUGIN_CASCADE = {}` since it's required to specify.
```
Traceback (most recent call last):
File "manage.py", line 14, in <module>
execute_from_command_line(sys.argv)
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/django/core/management/__init__.py", line 327, in execute
django.setup()
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/django/apps/registry.py", line 115, in populate
app_config.ready()
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/django/contrib/admin/apps.py", line 22, in ready
self.module.autodiscover()
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/django/contrib/admin/__init__.py", line 26, in autodiscover
autodiscover_modules('admin', register_to=site)
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/django/utils/module_loading.py", line 50, in autodiscover_modules
import_module('%s.%s' % (app_config.name, module_to_search))
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/cms/admin/__init__.py", line 11, in <module>
plugin_pool.plugin_pool.discover_plugins()
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/cms/plugin_pool.py", line 32, in discover_plugins
load('cms_plugins')
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/cms/utils/django_load.py", line 57, in load
get_module(app, modname, verbose, failfast)
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/cms/utils/django_load.py", line 41, in get_module
module = import_module(module_name)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/cmsplugin_cascade/cms_plugins.py", line 23, in <module>
import_module('{}.cms_plugins'.format(module))
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/cmsplugin_cascade/generic/cms_plugins.py", line 160, in <module>
class FontIconPlugin(CascadePluginBase):
File "/Users/dino/.virtualenvs/gibi-trieste.com/lib/python2.7/site-packages/cmsplugin_cascade/plugin_base.py", line 125, in __new__
attrs['fields'] += (('save_shared_glossary', 'save_as_identifier'), 'shared_glossary',)
KeyError: u'fields'
``` | closed | 2016-11-03T11:21:34Z | 2016-11-03T17:29:19Z | https://github.com/awesto/django-shop/issues/444 | [] | dinoperovic | 5 |
OFA-Sys/Chinese-CLIP | computer-vision | 314 | 这个可以在华为的服务器部署吗? | open | 2024-05-22T09:48:01Z | 2024-05-23T02:13:47Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/314 | [] | intjun | 1 |
|
yinkaisheng/Python-UIAutomation-for-Windows | automation | 27 | 一款非常值得推荐的软件,好些开源 | 比pywinauto用着顺手! | closed | 2017-10-22T16:53:15Z | 2017-10-23T00:00:55Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/27 | [] | 1006079161 | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 395 | SIBR viewer build error on ubuntu | [ 88%] Linking CXX executable SIBR_texturedMesh_app
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/12/../../../x86_64-linux-gnu/libgtk-3.so: undefined reference to `g_task_set_static_name'
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/12/../../../x86_64-linux-gnu/libgtk-3.so: undefined reference to `g_string_free_and_steal'
/usr/bin/ld: /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0: undefined reference to `g_pattern_spec_match_string'
collect2: error: ld returned 1 exit status
make[2]: *** [src/projects/basic/apps/texturedMesh/CMakeFiles/SIBR_texturedMesh_app.dir/build.make:187: src/projects/basic/apps/texturedMesh/SIBR_texturedMesh_app] Error 1
make[1]: *** [CMakeFiles/Makefile2:1189: src/projects/basic/apps/texturedMesh/CMakeFiles/SIBR_texturedMesh_app.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
| open | 2023-10-26T16:27:05Z | 2023-10-29T09:11:35Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/395 | [] | ZejunMa | 1 |
taverntesting/tavern | pytest | 164 | Support for a timeout parameter for requests | I would like to be about to specify a timeout parameter for each of my requests, this would be useful for endpoints that must return a given timeout. https://github.com/taverntesting/tavern/blob/2ff7f97e3c705b9a935ab05aef52545a85b332cb/tavern/_plugins/rest/request.py#L64-L67 seems like a good place to slot it in with a possible default of `None`. I wouldn't mind making a PR for this if this route seems like a good idea. | closed | 2018-07-30T15:29:21Z | 2018-08-26T10:18:23Z | https://github.com/taverntesting/tavern/issues/164 | [
"Type: Enhancement"
] | justinfay | 2 |
sigmavirus24/github3.py | rest-api | 328 | Python 2.7.6 - ImportError: cannot import name URITemplate | I cant figure out what's the problem here. It appears to be a circular dependency but I'm not sure. This does not occur with python 3. I tested the version installed with pip (0.9.3) as well as the alpha (1.0.0a1) version and got the same result. What am I missing?
``` trace
>>> from github3 import login
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "github3/__init__.py", line 20, in <module>
from .api import (
File "github3/api.py", line 11, in <module>
from .github import GitHub, GitHubEnterprise
File "github3/github.py", line 17, in <module>
from .gists import Gist
File "github3/gists/__init__.py", line 16, in <module>
from .gist import Gist
File "github3/gists/gist.py", line 14, in <module>
from .comment import GistComment
File "github3/gists/comment.py", line 12, in <module>
from ..users import User
File "github3/users.py", line 12, in <module>
from uritemplate import URITemplate
ImportError: cannot import name URITemplate
```
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/7076314-python-2-7-6-importerror-cannot-import-name-uritemplate?utm_campaign=plugin&utm_content=tracker%2F183477&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F183477&utm_medium=issues&utm_source=github).
</bountysource-plugin> | closed | 2014-12-17T08:44:58Z | 2021-01-16T09:53:24Z | https://github.com/sigmavirus24/github3.py/issues/328 | [
"question"
] | lots0logs | 11 |
google-deepmind/graph_nets | tensorflow | 25 | physics demo loss function | The calculation of the loss function, why is the difference between the nodes' output of the model and the velocity dimension's feature, that is, why the output of the node is linearly transformed into two-dimensional. | closed | 2018-11-22T05:46:10Z | 2019-06-24T09:15:49Z | https://github.com/google-deepmind/graph_nets/issues/25 | [] | kingwmk | 2 |
PokemonGoF/PokemonGo-Bot | automation | 6,317 | No issue Charizard | open | 2023-07-13T11:07:29Z | 2023-07-13T11:07:29Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/6317 | [] | Ranchiro | 0 |
|
graphdeco-inria/gaussian-splatting | computer-vision | 597 | Just single Pixel rendering | Hi @grgkopanas @kwea123 @Snosixtyboo
Is it possible to render just single pixel which returns the color of each single ray from different colmap camera positions using diff-gaussian-rasterization?
I am interested in getting final color at particular location for different view directions, by providing 3DGS pointcloud, location of target pixel for tracking in first fully rasterized image using first colmap camera pose from colmap pose sequence or 3D location in scene (how to find it?), and colmap camera poses.
Can you please guide me to achieve this? | closed | 2024-01-05T14:42:34Z | 2024-03-06T10:57:53Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/597 | [] | pknmax | 2 |
pallets/flask | flask | 5,412 | CHIPS support | Add an argument for `set_cookie` to add `Partitioned;` attribute to the Set-Cookie header.
According to Google, third-party cookies will soon be deprecated. [link](https://developers.google.com/privacy-sandbox/3pcd/prepare/prepare-for-phaseout)
For cross-site cookies which store data on a per site basis, like an embed, will need to migrate to [CHIPS](https://developers.google.com/privacy-sandbox/3pcd/chips), which uses `partitioned` storage to store cookies. | closed | 2024-02-12T20:16:07Z | 2024-02-27T00:06:20Z | https://github.com/pallets/flask/issues/5412 | [] | SimonLiu423 | 1 |
plotly/dash | data-visualization | 2,811 | dash title design issues | Title logic problem, when instantiating dash, title was set, when registering route, it was not set, should title when instantiating dash be used instead of module name? You can set the title of a single page when registering routes, but if the title of all pages is the same, is the code redundant, is there any other way to set the title globally, and only need to set it once | closed | 2024-03-24T10:04:36Z | 2024-04-03T15:59:25Z | https://github.com/plotly/dash/issues/2811 | [] | jaxonister | 3 |
microsoft/hummingbird | scikit-learn | 138 | rename master to main | [following this example](https://www.hanselman.com/blog/EasilyRenameYourGitDefaultBranchFromMasterToMain.aspx)
If someone has a local clone, then can update their locals like this:
```
$ git checkout master
$ git branch -m master main
$ git fetch
$ git branch --unset-upstream
$ git branch -u origin/main
$ git symbolic-ref refs/remotes/origin/HEAD refs/remotes/origin/main
``` | closed | 2020-06-15T17:21:03Z | 2020-11-04T22:52:10Z | https://github.com/microsoft/hummingbird/issues/138 | [] | ksaur | 6 |
JaidedAI/EasyOCR | machine-learning | 1,368 | Plot train/eval accuracy per epoch | Silly question but how do i plot the graph for train/eval accuracy per epoch during training? | open | 2025-01-18T15:05:08Z | 2025-01-18T15:05:08Z | https://github.com/JaidedAI/EasyOCR/issues/1368 | [] | Mehdi-i | 0 |
netbox-community/netbox | django | 18,305 | Make ContactsMixin available for plugins | ### NetBox version
v4.1.10
### Feature type
Other
### Triage priority
I volunteer to perform this work (if approved)
### Proposed functionality
1. Add `ContactsMixin` and related mixin classes (column, filterset, …) to the plugin documentation.
2. Add `ObjectContactsView` to [feature-view auto-registration](https://github.com/netbox-community/netbox/blob/develop/netbox/netbox/models/features.py#L639-L655)
### Use case
Plugins may want to use contacts for custom models. Making the mixin class public for use in the API will prevent plugins from reinventing the wheel and better integrate with NetBox, as they can implement a consistent UI with the NetBox core.
### Database changes
None.
### External dependencies
None. | open | 2025-01-05T12:29:47Z | 2025-03-24T15:30:03Z | https://github.com/netbox-community/netbox/issues/18305 | [
"status: accepted",
"type: feature",
"complexity: low"
] | alehaa | 0 |
errbotio/errbot | automation | 1,649 | Error while starting errbot for Slack | **Need help with**
- [ ] Running the bot
**Issue description**
So I think I am at the final step and keep getting an error when trying to run my errbot.
**Environment (please complete the following information):**
- Errbot version: 6.1.9
- OS version: Amazon Linux 2023
- Python version: 3.9.16
- Using a virtual environment: yes
- Using Docker: no
**Additional info**
Error I get is:
```
11:57:05 ERROR errbot.plugin_manager Error loading Webserver.
Traceback (most recent call last):
File "/opt/errbot/virtualenv/lib64/python3.9/site-packages/errbot/plugin_manager.py", line 444, in activate_non_started_plugins
if not plugin.is_activated:
AttributeError: 'NoneType' object has no attribute 'is_activated'
11:57:05 INFO errbot.plugin_manager Activate plugin: Example.
11:57:05 INFO errbot.core_plugins.wsvie Checking Example for webhooks
11:57:06 WARNING errbot.core Some plugins failed to start during bot startup:
Traceback (most recent call last):
File "/opt/errbot/virtualenv/lib64/python3.9/site-packages/errbot/plugin_manager.py", line 289, in _load_plugins_generic
plugin_classes = plugin_info.load_plugin_classes(
File "/opt/errbot/virtualenv/lib64/python3.9/site-packages/errbot/plugin_info.py", line 100, in load_plugin_classes
spec.loader.exec_module(modu1e)
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/opt/errbot/virtualenv/lib64/python3.9/site-packages/errbot/core_plugins/webserver.py", line 9, in <module>
from OpenSSL import crypto
File "/opt/errbot/virtualenv/lib64/python3.9/site-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import crypto, SSL
File "/opt/errbot/virtualenv/lib64/python3.9/site-packages/OpenSSL/crypto.py", line 3279, in <module>
_lib.OpenSSL_add_all_algorithms()
AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms'
Error: Webserver failed to activate: 'NoneType' object has no attribute 'is_activated'.
```
Any help to make this work will be greatly appreciated. Thank you in advance!
| closed | 2023-07-14T12:08:25Z | 2024-01-27T06:48:23Z | https://github.com/errbotio/errbot/issues/1649 | [
"type: support/question"
] | coder-kush | 4 |
plotly/dash-bio | dash | 507 | Clustergram - dendrograms not appearing? | Look at https://dash.plotly.com/dash-bio/clustergram - the dendrograms don't appear, only the heatmap.
cc @nicholas-esterer - related to the work you're doing in plotly.py on dendrograms? | closed | 2020-07-09T16:54:46Z | 2020-08-12T15:44:24Z | https://github.com/plotly/dash-bio/issues/507 | [] | alexcjohnson | 4 |
iam-abbas/FastAPI-Production-Boilerplate | rest-api | 6 | Celery Task Did Not Register | I made a small example to test out celery workers. It should get universiies based on the country I query. However, it did not seem to register my task. Do you have any ideas why that would be ?
I created a `test_celery.py` in `tasks` directory:
```python
from celery import shared_task
from typing import List, Optional
import json
import httpx
from pydantic import BaseModel
from worker import celery_app
@celery_app.task(name="get_all_universities_task", bind=True)
def get_all_universities_task(self, country):
return get_all_universities_for_country(country)
url = 'http://universities.hipolabs.com/search'
def get_all_universities_for_country(country: str) -> dict:
print('get_all_universities_for_country ', country)
params = {'country': country}
client = httpx.Client()
response = client.get(url, params=params)
response_json = json.loads(response.text)
universities = []
for university in response_json:
university_obj = University.parse_obj(university)
universities.append(university_obj)
return {country: universities}
class University(BaseModel):
"""Model representing a university."""
country: Optional[str] = None # Optional country name
web_pages: List[str] = [] # List of web page URLs
name: Optional[str] = None # Optional university name
alpha_two_code: Optional[str] = None # Optional alpha two-code
domains: List[str] = [] # List of domain names
```
And I created an API endpoint and `api/v1` directory. That endpoint triggers the tasks
```python
from fastapi import APIRouter
from worker.tasks.test_celery import get_all_universities_task
from celery.result import AsyncResult
import redis
test_celery_router = APIRouter()
@test_celery_router.get("/", status_code=200)
async def test_celery(country):
task = get_all_universities_task.delay(country)
return {"task_id": task.id, "task_status": task.status}
@test_celery_router.get("/status/{task_id}", status_code=200)
async def test_celery_status(task_id):
async_result = AsyncResult(task_id)
if async_result.ready():
result_value = async_result.get()
return {"task_id": task_id, "task_status": async_result.status, "task_result": result_value}
else:
return {"task_id": task_id, "task_status": async_result.status, "task_result": "Not ready yet!"}
```
When I ran `make celery-worker`, it seems to me that it did not register the task "get_all_univerisity_task":
```zsh
❯ make celery-worker
poetry run celery -A worker worker -l info
celery@Quans-MBP.lan v5.3.1 (emerald-rush)
macOS-10.16-x86_64-i386-64bit 2023-06-20 17:58:19
[config]
.> app: worker:0x10cb337d0
.> transport: amqp://guest:**@localhost:5672//
.> results: redis://localhost:6379/0
.> concurrency: 10 (prefork)
.> task events: OFF (enable -E to monitor tasks in this worker)
[queues]
.> celery exchange=celery(direct) key=celery
[tasks]
[2023-06-20 17:58:19,735: WARNING/MainProcess] /Users/quankhuc/anaconda3/envs/TradingBot/lib/python3.11/site-packages/celery/worker/consumer/consumer.py:498: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2023-06-20 17:58:19,807: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2023-06-20 17:58:19,808: WARNING/MainProcess] /Users/quankhuc/anaconda3/envs/TradingBot/lib/python3.11/site-packages/celery/worker/consumer/consumer.py:498: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2023-06-20 17:58:19,813: INFO/MainProcess] mingle: searching for neighbors
[2023-06-20 17:58:20,840: INFO/MainProcess] mingle: all alone
[2023-06-20 17:58:20,864: INFO/MainProcess] celery@Quans-MBP.lan ready.
[2023-06-20 17:58:21,142: INFO/MainProcess] Events of group {task} enabled by remote.
[2023-06-20 18:05:38,090: ERROR/MainProcess] Received unregistered task of type 'get_all_universities_task'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you're using relative imports?
Please see
https://docs.celeryq.dev/en/latest/internals/protocol.html
for more information.
The full contents of the message body was:
'[["vietnam"], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]' (86b)
The full contents of the message headers:
{'lang': 'py', 'task': 'get_all_universities_task', 'id': '18094bcf-9a99-4330-94b8-96acc928edca', 'shadow': None, 'eta': None, 'expires': None, 'group': None, 'group_index': None, 'retries': 0, 'timelimit': [None, None], 'root_id': '18094bcf-9a99-4330-94b8-96acc928edca', 'parent_id': None, 'argsrepr': "('vietnam',)", 'kwargsrepr': '{}', 'origin': 'gen10874@Quans-MBP.lan', 'ignore_result': False, 'stamped_headers': None, 'stamps': {}}
The delivery info for this task is:
{'consumer_tag': 'None4', 'delivery_tag': 1, 'redelivered': False, 'exchange': '', 'routing_key': 'celery'}
Traceback (most recent call last):
File "/Users/quankhuc/anaconda3/envs/TradingBot/lib/python3.11/site-packages/celery/worker/consumer/consumer.py", line 642, in on_task_received
strategy = strategies[type_]
~~~~~~~~~~^^^^^^^
KeyError: 'get_all_universities_task'
```
| closed | 2023-06-20T23:06:40Z | 2023-07-11T17:37:12Z | https://github.com/iam-abbas/FastAPI-Production-Boilerplate/issues/6 | [] | quankhuc | 0 |
httpie/cli | rest-api | 1,242 | No syntax highlighting possibly for content-types with charset | ```bash
$ https pie.dev
``` | closed | 2021-12-14T15:14:14Z | 2021-12-29T09:08:51Z | https://github.com/httpie/cli/issues/1242 | [
"bug",
"new"
] | jkbrzt | 0 |
DistrictDataLabs/yellowbrick | scikit-learn | 340 | Code Conventions Guide for Documentation and Examples | In our documentation and code examples, we have several different styles around referring to the workflow and how to format code examples.
It would be helpful to identify and establish a handful of code conventions that we follow to reduce the cognitive load for using this library.
Code Examples:
- Should always include the import path of the visualizer
| closed | 2018-03-16T14:09:16Z | 2018-04-10T17:33:24Z | https://github.com/DistrictDataLabs/yellowbrick/issues/340 | [
"type: documentation"
] | ndanielsen | 11 |
pallets/flask | python | 5,160 | Switch to importlib breaks scripts with `app.run()` | With a trivial script [using `app.run()`](https://flask.palletsprojects.com/en/2.3.x/server/#in-code) such as:
```python3
from flask import Flask
app = Flask(__name__)
if __name__ == "__main__":
app.run(debug=True)
```
The current git `main` breaks with:
```pytb
Traceback (most recent call last):
File "/home/florian/tmp/flask/app.py", line 3, in <module>
app = Flask(__name__)
^^^^^^^^^^^^^^^
File "/home/florian/tmp/flask/src/flask/app.py", line 376, in __init__
instance_path = self.auto_find_instance_path()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/florian/tmp/flask/src/flask/app.py", line 630, in auto_find_instance_path
prefix, package_path = find_package(self.import_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/florian/tmp/flask/src/flask/scaffold.py", line 898, in find_package
package_path = _find_package_path(import_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/florian/tmp/flask/src/flask/scaffold.py", line 858, in _find_package_path
spec = importlib.util.find_spec(root_mod_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib.util>", line 114, in find_spec
ValueError: __main__.__spec__ is None
```
This seems to be a regression due to 84e11a1e827c0f55f9b9ee15952eddcf8a6492e0 from #5157.
Environment:
- Python version: 3.11.4
- Flask version: git main
| closed | 2023-06-09T10:56:29Z | 2023-06-24T00:07:17Z | https://github.com/pallets/flask/issues/5160 | [] | The-Compiler | 3 |
littlecodersh/ItChat | api | 437 | 打印二维码失败 | 你好!
我在使用itchat的时候,打印二维码的阶段失败了,后台信息如下:
`sudo python3 test.py`
Getting uuid of QR code.
Downloading QR code.
Error: no "view" mailcap rules found for type "image/png"
Can't call method "get_value" on an undefined value at /usr/bin/mimeopen line 162.
/usr/bin/xdg-open: 461: /usr/bin/xdg-open: links2: not found
/usr/bin/xdg-open: 461: /usr/bin/xdg-open: links: not found
/usr/bin/xdg-open: 461: /usr/bin/xdg-open: lynx: not found
(这里是一段空白)
Hit any key to quit w3m:
不知道是什么问题,打印失败。
在这之前安装过:
`sudo apt-get install xdg-utils `
`sudo apt-get install desktop-file-utils ` | closed | 2017-06-30T08:25:16Z | 2017-06-30T09:28:31Z | https://github.com/littlecodersh/ItChat/issues/437 | [] | lihengzkj | 1 |
dmlc/gluon-nlp | numpy | 1,388 | bertpass_gpu.cc does not support MXNet 1.8 | Due to API change https://github.com/apache/incubator-mxnet/issues/19135 | open | 2020-10-08T20:57:13Z | 2020-10-12T16:25:37Z | https://github.com/dmlc/gluon-nlp/issues/1388 | [
"bug"
] | leezu | 3 |
Lightning-AI/pytorch-lightning | data-science | 20,054 | Dataloader with >0 workers when using DDP causes a crash | ### Bug description
Having a dataloader with >0 workers causes a crash. This behavior occurs both with custom datasets, and even standard huggingface datasets, and torchvision datasets.
The dataloaders work fine standalone with many workers, and also work with accelerate just fine.
The run general works until the first validation step at which point it crashes. Interestingly, num_sanity_val_steps works fine [e.g., `num_sanity_val_steps=10`].
Working version:
```
def main(config):
"""Main entry point for training."""
_print_config(config, resolve=True, save_cfg=True)
tokenizer = get_tokenizer(config)
train_dataloader, val_dataloader = get_dataloaders(config, tokenizer=tokenizer, valid_seed=config.seed)
for i in range(10):
for batch in tqdm(train_dataloader):
pass
for batch in tqdm(val_dataloader):
pass
if __name__ == "__main__":
main()
```
Not working:
```
trainer.fit(model, train_ds, valid_ds)
```
### What version are you seeing the problem on?
v2.2, master
### How to reproduce the bug
_No response_
### Error messages and logs
Traceback:
```
terminate called after throwing an instance of 'c10::Error' | 0/? [00:00<?, ?it/s]
what(): CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /opt/conda/conda-bld/pytorch_1716905979055/work/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7f99b0d897 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f7f99abdb25 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7f7f99be7718 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x1d6f6 (0x7f7f99bb26f6 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x1f5e3 (0x7f7f99bb45e3 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x1f922 (0x7f7f99bb4922 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x5a5950 (0x7f7fe82d8950 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x6a36f (0x7f7f99af236f in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #8: c10::TensorImpl::~TensorImpl() + 0x21b (0x7f7f99aeb1cb in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #9: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f7f99aeb379 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #10: <unknown function> + 0x851088 (0x7f7fe8584088 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #11: THPVariable_subclass_dealloc(_object*) + 0x2f6 (0x7f7fe8584406 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /opt/conda/conda-bld/pytorch_1716905979055/work/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f7c3193d897 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f7c318edb25 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7f7c31a17718 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x1d6f6 (0x7f7c319e26f6 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x1f5e3 (0x7f7c319e45e3 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x1f922 (0x7f7c319e4922 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x5a5950 (0x7f7c80108950 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x6a36f (0x7f7c3192236f in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #8: c10::TensorImpl::~TensorImpl() + 0x21b (0x7f7c3191b1cb in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #9: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f7c3191b379 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #10: <unknown function> + 0x851088 (0x7f7c803b4088 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #11: THPVariable_subclass_dealloc(_object*) + 0x2f6 (0x7f7c803b4406 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0x124633 (0x55d22c939633 in /homedir/envs/envname/bin/python)
frame #13: <unknown function> + 0x13d697 (0x55d22c952697 in /homedir/envs/envname/bin/python)
frame #14: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #15: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #16: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #17: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #18: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #19: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #20: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #21: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #22: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #23: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #24: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #25: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #26: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #27: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #28: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #29: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #30: <unknown function> + 0x13d77b (0x55d22c95277b in /homedir/envs/envname/bin/python)
frame #31: <unknown function> + 0x14dcf6 (0x55d22c962cf6 in /homedir/envs/envname/bin/python)
frame #32: <unknown function> + 0x129739 (0x55d22c93e739 in /homedir/envs/envname/bin/python)
frame #33: <unknown function> + 0x12763d (0x55d22c93c63d in /homedir/envs/envname/bin/python)
frame #34: <unknown function> + 0x1d418b (0x55d22c9e918b in /homedir/envs/envname/bin/python)
frame #35: _PyObject_GC_NewVar + 0x23f (0x55d22c93147f in /homedir/envs/envname/bin/python)
frame #36: PyTuple_New + 0x117 (0x55d22c938aa7 in /homedir/envs/envname/bin/python)
frame #37: <unknown function> + 0x1320b5 (0x55d22c9470b5 in /homedir/envs/envname/bin/python)
frame #38: <unknown function> + 0x1321d1 (0x55d22c9471d1 in /homedir/envs/envname/bin/python)
frame #39: <unknown function> + 0x131e4e (0x55d22c946e4e in /homedir/envs/envname/bin/python)
frame #40: <unknown function> + 0x1d7844 (0x55d22c9ec844 in /homedir/envs/envname/bin/python)
frame #41: <unknown function> + 0x1ea6eb (0x55d22c9ff6eb in /homedir/envs/envname/bin/python)
frame #42: <unknown function> + 0x143e8a (0x55d22c958e8a in /homedir/envs/envname/bin/python)
frame #43: _PyEval_EvalFrameDefault + 0x4c12 (0x55d22c94e142 in /homedir/envs/envname/bin/python)
frame #44: _PyFunction_Vectorcall + 0x6c (0x55d22c959a2c in /homedir/envs/envname/bin/python)
frame #45: _PyEval_EvalFrameDefault + 0x13ca (0x55d22c94a8fa in /homedir/envs/envname/bin/python)
frame #46: _PyFunction_Vectorcall + 0x6c (0x55d22c959a2c in /homedir/envs/envname/bin/python)
frame #47: _PyEval_EvalFrameDefault + 0x72c (0x55d22c949c5c in /homedir/envs/envname/bin/python)
frame #48: _PyFunction_Vectorcall + 0x6c (0x55d22c959a2c in /homedir/envs/envname/bin/python)
frame #49: _PyEval_EvalFrameDefault + 0x72c (0x55d22c949c5c in /homedir/envs/envname/bin/python)
frame #50: _PyFunction_Vectorcall + 0x6c (0x55d22c959a2c in /homedir/envs/envname/bin/python)
frame #51: _PyEval_EvalFrameDefault + 0x320 (0x55d22c949850 in /homedir/envs/envname/bin/python)
frame #52: _PyFunction_Vectorcall + 0x6c (0x55d22c959a2c in /homedir/envs/envname/bin/python)
frame #53: _PyEval_EvalFrameDefault + 0x320 (0x55d22c949850 in /homedir/envs/envname/bin/python)
frame #54: _PyFunction_Vectorcall + 0x6c (0x55d22c959a2c in /homedir/envs/envname/bin/python)
frame #55: <unknown function> + 0x144208 (0x55d22c959208 in /homedir/envs/envname/bin/python)
frame #56: _PyObject_CallMethodIdObjArgs + 0x169 (0x55d22c967419 in /homedir/envs/envname/bin/python)
frame #57: <unknown function> + 0x75187 (0x55d22c88a187 in /homedir/envs/envname/bin/python)
frame #58: _PyEval_EvalFrameDefault + 0x3e3b (0x55d22c94d36b in /homedir/envs/envname/bin/python)
frame #59: <unknown function> + 0x1d7c60 (0x55d22c9ecc60 in /homedir/envs/envname/bin/python)
frame #60: PyEval_EvalCode + 0x87 (0x55d22c9ecba7 in /homedir/envs/envname/bin/python)
frame #61: <unknown function> + 0x1dedaa (0x55d22c9f3daa in /homedir/envs/envname/bin/python)
frame #62: <unknown function> + 0x144bf3 (0x55d22c959bf3 in /homedir/envs/envname/bin/python)
frame #63: _PyEval_EvalFrameDefault + 0x5cd5 (0x55d22c94f205 in /homedir/envs/envname/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /opt/conda/conda-bld/pytorch_1716905979055/work/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7fe326374897 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7fe326324b25 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7fe32644e718 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x1d6f6 (0x7fe3264196f6 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x1f5e3 (0x7fe32641b5e3 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x1f922 (0x7fe32641b922 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x5a5950 (0x7fe374b3f950 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x6a36f (0x7fe32635936f in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #8: c10::TensorImpl::~TensorImpl() + 0x21b (0x7fe3263521cb in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #9: c10::TensorImpl::~TensorImpl() + 0x9 (0x7fe326352379 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #10: <unknown function> + 0x851088 (0x7fe374deb088 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #11: THPVariable_subclass_dealloc(_object*) + 0x2f6 (0x7fe374deb406 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0x124633 (0x563677bbe633 in /homedir/envs/envname/bin/python)
frame #13: <unknown function> + 0x13d697 (0x563677bd7697 in /homedir/envs/envname/bin/python)
frame #14: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #15: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #16: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #17: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #18: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #19: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #20: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #21: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #22: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #23: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #24: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #25: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #26: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #27: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #28: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #29: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #30: <unknown function> + 0x13d77b (0x563677bd777b in /homedir/envs/envname/bin/python)
frame #31: <unknown function> + 0x14dcf6 (0x563677be7cf6 in /homedir/envs/envname/bin/python)
frame #32: <unknown function> + 0x129739 (0x563677bc3739 in /homedir/envs/envname/bin/python)
frame #33: <unknown function> + 0x12763d (0x563677bc163d in /homedir/envs/envname/bin/python)
frame #34: <unknown function> + 0x1d418b (0x563677c6e18b in /homedir/envs/envname/bin/python)
frame #35: _PyObject_GC_NewVar + 0x23f (0x563677bb647f in /homedir/envs/envname/bin/python)
frame #36: PyTuple_New + 0x117 (0x563677bbdaa7 in /homedir/envs/envname/bin/python)
frame #37: <unknown function> + 0x1320b5 (0x563677bcc0b5 in /homedir/envs/envname/bin/python)
frame #38: <unknown function> + 0x1321d1 (0x563677bcc1d1 in /homedir/envs/envname/bin/python)
frame #39: <unknown function> + 0x131e4e (0x563677bcbe4e in /homedir/envs/envname/bin/python)
frame #40: <unknown function> + 0x1d7844 (0x563677c71844 in /homedir/envs/envname/bin/python)
frame #41: <unknown function> + 0x1ea6eb (0x563677c846eb in /homedir/envs/envname/bin/python)
frame #42: <unknown function> + 0x143e8a (0x563677bdde8a in /homedir/envs/envname/bin/python)
frame #43: _PyEval_EvalFrameDefault + 0x4c12 (0x563677bd3142 in /homedir/envs/envname/bin/python)
frame #44: _PyFunction_Vectorcall + 0x6c (0x563677bdea2c in /homedir/envs/envname/bin/python)
frame #45: _PyEval_EvalFrameDefault + 0x13ca (0x563677bcf8fa in /homedir/envs/envname/bin/python)
frame #46: _PyFunction_Vectorcall + 0x6c (0x563677bdea2c in /homedir/envs/envname/bin/python)
frame #47: _PyEval_EvalFrameDefault + 0x72c (0x563677bcec5c in /homedir/envs/envname/bin/python)
frame #48: _PyFunction_Vectorcall + 0x6c (0x563677bdea2c in /homedir/envs/envname/bin/python)
frame #49: _PyEval_EvalFrameDefault + 0x72c (0x563677bcec5c in /homedir/envs/envname/bin/python)
frame #50: _PyFunction_Vectorcall + 0x6c (0x563677bdea2c in /homedir/envs/envname/bin/python)
frame #51: _PyEval_EvalFrameDefault + 0x320 (0x563677bce850 in /homedir/envs/envname/bin/python)
frame #52: _PyFunction_Vectorcall + 0x6c (0x563677bdea2c in /homedir/envs/envname/bin/python)
frame #53: _PyEval_EvalFrameDefault + 0x320 (0x563677bce850 in /homedir/envs/envname/bin/python)
frame #54: _PyFunction_Vectorcall + 0x6c (0x563677bdea2c in /homedir/envs/envname/bin/python)
frame #55: <unknown function> + 0x144208 (0x563677bde208 in /homedir/envs/envname/bin/python)
frame #56: _PyObject_CallMethodIdObjArgs + 0x169 (0x563677bec419 in /homedir/envs/envname/bin/python)
frame #57: <unknown function> + 0x75187 (0x563677b0f187 in /homedir/envs/envname/bin/python)
frame #58: _PyEval_EvalFrameDefault + 0x3e3b (0x563677bd236b in /homedir/envs/envname/bin/python)
frame #59: <unknown function> + 0x1d7c60 (0x563677c71c60 in /homedir/envs/envname/bin/python)
frame #60: PyEval_EvalCode + 0x87 (0x563677c71ba7 in /homedir/envs/envname/bin/python)
frame #61: <unknown function> + 0x1dedaa (0x563677c78daa in /homedir/envs/envname/bin/python)
frame #62: <unknown function> + 0x144bf3 (0x563677bdebf3 in /homedir/envs/envname/bin/python)
frame #63: _PyEval_EvalFrameDefault + 0x5cd5 (0x563677bd4205 in /homedir/envs/envname/bin/python)
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: initialization error
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at /opt/conda/conda-bld/pytorch_1716905979055/work/c10/cuda/CUDAException.cpp:43 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f549635d897 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f549630db25 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x7f5496437718 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x1d6f6 (0x7f54964026f6 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #4: <unknown function> + 0x1f5e3 (0x7f54964045e3 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #5: <unknown function> + 0x1f922 (0x7f5496404922 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10_cuda.so)
frame #6: <unknown function> + 0x5a5950 (0x7f54e4b28950 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x6a36f (0x7f549634236f in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #8: c10::TensorImpl::~TensorImpl() + 0x21b (0x7f549633b1cb in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #9: c10::TensorImpl::~TensorImpl() + 0x9 (0x7f549633b379 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #10: <unknown function> + 0x851088 (0x7f54e4dd4088 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #11: THPVariable_subclass_dealloc(_object*) + 0x2f6 (0x7f54e4dd4406 in /homedir/envs/envname/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0x124633 (0x5557111ab633 in /homedir/envs/envname/bin/python)
frame #13: <unknown function> + 0x13d697 (0x5557111c4697 in /homedir/envs/envname/bin/python)
frame #14: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #15: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #16: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #17: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #18: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #19: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #20: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #21: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #22: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #23: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #24: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #25: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #26: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #27: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #28: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #29: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #30: <unknown function> + 0x13d77b (0x5557111c477b in /homedir/envs/envname/bin/python)
frame #31: <unknown function> + 0x14dcf6 (0x5557111d4cf6 in /homedir/envs/envname/bin/python)
frame #32: <unknown function> + 0x129739 (0x5557111b0739 in /homedir/envs/envname/bin/python)
frame #33: <unknown function> + 0x12763d (0x5557111ae63d in /homedir/envs/envname/bin/python)
frame #34: <unknown function> + 0x1d418b (0x55571125b18b in /homedir/envs/envname/bin/python)
frame #35: _PyObject_GC_NewVar + 0x23f (0x5557111a347f in /homedir/envs/envname/bin/python)
frame #36: PyTuple_New + 0x117 (0x5557111aaaa7 in /homedir/envs/envname/bin/python)
frame #37: <unknown function> + 0x1320b5 (0x5557111b90b5 in /homedir/envs/envname/bin/python)
frame #38: <unknown function> + 0x1321d1 (0x5557111b91d1 in /homedir/envs/envname/bin/python)
frame #39: <unknown function> + 0x131e4e (0x5557111b8e4e in /homedir/envs/envname/bin/python)
frame #40: <unknown function> + 0x1d7844 (0x55571125e844 in /homedir/envs/envname/bin/python)
frame #41: <unknown function> + 0x1ea6eb (0x5557112716eb in /homedir/envs/envname/bin/python)
frame #42: <unknown function> + 0x143e8a (0x5557111cae8a in /homedir/envs/envname/bin/python)
frame #43: _PyEval_EvalFrameDefault + 0x4c12 (0x5557111c0142 in /homedir/envs/envname/bin/python)
frame #44: _PyFunction_Vectorcall + 0x6c (0x5557111cba2c in /homedir/envs/envname/bin/python)
frame #45: _PyEval_EvalFrameDefault + 0x13ca (0x5557111bc8fa in /homedir/envs/envname/bin/python)
frame #46: _PyFunction_Vectorcall + 0x6c (0x5557111cba2c in /homedir/envs/envname/bin/python)
frame #47: _PyEval_EvalFrameDefault + 0x72c (0x5557111bbc5c in /homedir/envs/envname/bin/python)
frame #48: _PyFunction_Vectorcall + 0x6c (0x5557111cba2c in /homedir/envs/envname/bin/python)
frame #49: _PyEval_EvalFrameDefault + 0x72c (0x5557111bbc5c in /homedir/envs/envname/bin/python)
frame #50: _PyFunction_Vectorcall + 0x6c (0x5557111cba2c in /homedir/envs/envname/bin/python)
frame #51: _PyEval_EvalFrameDefault + 0x320 (0x5557111bb850 in /homedir/envs/envname/bin/python)
frame #52: _PyFunction_Vectorcall + 0x6c (0x5557111cba2c in /homedir/envs/envname/bin/python)
frame #53: _PyEval_EvalFrameDefault + 0x320 (0x5557111bb850 in /homedir/envs/envname/bin/python)
frame #54: _PyFunction_Vectorcall + 0x6c (0x5557111cba2c in /homedir/envs/envname/bin/python)
frame #55: <unknown function> + 0x144208 (0x5557111cb208 in /homedir/envs/envname/bin/python)
frame #56: _PyObject_CallMethodIdObjArgs + 0x169 (0x5557111d9419 in /homedir/envs/envname/bin/python)
frame #57: <unknown function> + 0x75187 (0x5557110fc187 in /homedir/envs/envname/bin/python)
frame #58: _PyEval_EvalFrameDefault + 0x3e3b (0x5557111bf36b in /homedir/envs/envname/bin/python)
frame #59: <unknown function> + 0x1d7c60 (0x55571125ec60 in /homedir/envs/envname/bin/python)
frame #60: PyEval_EvalCode + 0x87 (0x55571125eba7 in /homedir/envs/envname/bin/python)
frame #61: <unknown function> + 0x1dedaa (0x555711265daa in /homedir/envs/envname/bin/python)
frame #63: _PyEval_EvalFrameDefault + 0x5cd5 (0x5557111c1205 in /homedir/envs/envname/bin/python)
frame #62: <unknown function> + 0x144bf3 (0x5557111cbbf3 in /homedir/envs/envname/bin/python)
Error executing job with overrides: [''loader.num_workers=4', 'trainer.val_check_interval=2']
Traceback (most recent call last):
File "/homedir/envs/envname/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/homedir/envs/envname/lib/python3.10/queue.py", line 180, in get
self.not_empty.wait(remaining)
File "/homedir/envs/envname/lib/python3.10/threading.py", line 324, in wait
gotit = waiter.acquire(True, timeout)
File "/homedir/envs/envname/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 1076120) is killed by signal: Aborted.
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA RTX A6000
- NVIDIA RTX A6000
- available: True
- version: 12.1
* Lightning:
- lightning: 2.3.2
- lightning-utilities: 0.11.2
- pytorch-lightning: 2.3.1
- torch: 2.3.1
- torch-fidelity: 0.3.0
- torch-tb-profiler: 0.4.3
- torchaudio: 2.3.1
- torchmetrics: 1.4.0.post0
- torchvision: 0.18.1
- torchx: 0.6.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.14
- release: 4.18.0-372.32.1.el8_6.x86_64
</details>
### More info
_No response_
cc @justusschock @awaelchli | open | 2024-07-05T20:32:42Z | 2024-07-07T10:59:14Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20054 | [
"bug",
"data handling",
"repro needed",
"ver: 2.2.x"
] | alexanderswerdlow | 3 |
localstack/localstack | python | 11,805 | feature request: Implement describe_listener_attributes for ELBv2 | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
We hope you will support describe_listener_attributes for ELBv2. Since Pulumi AWS v6.57.0 they expect that this is available and we use Localstack to test pulumi deploys before they go to AWS.
### 🧑💻 Implementation
_No response_
### Anything else?
_No response_ | closed | 2024-11-07T14:45:06Z | 2024-11-29T22:11:01Z | https://github.com/localstack/localstack/issues/11805 | [
"type: feature",
"aws:apigatewayv2",
"status: backlog"
] | rsanting | 12 |
influxdata/influxdb-client-python | jupyter | 571 | Should I retry if get error "hinted handoff queue not empty"? | We are using InfluxDB Enterprise, I can understand how [hinted handoff queue](https://www.influxdata.com/blog/eventual-consistency-hhq/) works.
And if we see this error "hinted handoff queue not empty" in the data node log, it is fine.
Because based on [this](https://docs.influxdata.com/enterprise_influxdb/v1.8/troubleshooting/frequently_asked_questions/#why-am-i-seeing-hinted-handoff-queue-not-empty-errors-in-my-data-node-logs)
> This error is informational only and does not necessarily indicate a problem in the cluster. It indicates that the node handling the write request currently has data in its local hinted handoff queue for the destination node. Coordinating nodes will not attempt direct writes to other nodes until the hinted handoff queue for the destination node has fully drained.
So the data is still in current data node, and just won't sync to the other node until the hinted handoff queue in the other node clear, it will try later.
Here is my question, we are using this influxdb-client-python to write data. I am wondering if we get the same error from this Python client which returns from data node. In this case, it means the data has not been written to InfluxDB successfully and need retry, right? Thanks!
```shell
(500)
Reason: Internal Server Error
HTTP response headers: ...
HTTP response body: b'{"error":"write failed: hinted handoff queue not empty"}\n'
``` | open | 2023-04-04T17:16:45Z | 2023-04-04T17:18:09Z | https://github.com/influxdata/influxdb-client-python/issues/571 | [] | hongbo-miao | 0 |
piskvorky/gensim | nlp | 3,465 | ModuleNotFoundError: No module named 'gensim.models.deprecated' | #### Problem description
I'm trying to load a keyed vectors to vectorize a text. I'm following all the steps outlined in the [documentation](https://radimrehurek.com/gensim/models/keyedvectors.html#how-to-obtain-word-vectors).
#### Steps/code/corpus to reproduce
I've tried:
```python
from gensim.models.keyedvectors import KeyedVectors
vectors = KeyedVectors.load("complete.kv", mmap="r")
```
And:
```python
from gensim.models import KeyedVectors
vectors = KeyedVectors.load("complete.kv", mmap="r")
```
Also:
```python
from gensim.models.keyedvectors import KeyedVectors
vectors = KeyedVectors.load("complete.kv")
```
The error is always the same:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[21], line 1
----> 1 vectors = KeyedVectors.load("complete.kv")
File [~/miniconda3/envs/textmining/lib/python3.10/site-packages/gensim/utils.py:486](https://file+.vscode-resource.vscode-cdn.net/home/juanpy/Projects/TextMining/notebooks/~/miniconda3/envs/textmining/lib/python3.10/site-packages/gensim/utils.py:486), in SaveLoad.load(cls, fname, mmap)
482 logger.info("loading %s object from %s", cls.__name__, fname)
484 compress, subname = SaveLoad._adapt_by_suffix(fname)
--> 486 obj = unpickle(fname)
487 obj._load_specials(fname, mmap, compress, subname)
488 obj.add_lifecycle_event("loaded", fname=fname)
File [~/miniconda3/envs/textmining/lib/python3.10/site-packages/gensim/utils.py:1461](https://file+.vscode-resource.vscode-cdn.net/home/juanpy/Projects/TextMining/notebooks/~/miniconda3/envs/textmining/lib/python3.10/site-packages/gensim/utils.py:1461), in unpickle(fname)
1447 """Load object from `fname`, using smart_open so that `fname` can be on S3, HDFS, compressed etc.
1448
1449 Parameters
(...)
1458
1459 """
1460 with open(fname, 'rb') as f:
-> 1461 return _pickle.load(f, encoding='latin1')
ModuleNotFoundError: No module named 'gensim.models.deprecated'
```
#### Versions
Linux-6.2.6-76060206-generic-x86_64-with-glibc2.35
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0]
Bits 64
NumPy 1.24.2
SciPy 1.10.1
gensim 4.3.1
FAST_VERSION 0
(I've also tried running it in Google Colab and I get the same error)
**Thanks.** | closed | 2023-04-14T11:09:42Z | 2023-04-17T17:18:05Z | https://github.com/piskvorky/gensim/issues/3465 | [] | juan-op | 2 |
pywinauto/pywinauto | automation | 774 | ImportError: cannot import name 'Desktop' from 'pywinauto' | ## Expected Behavior
On a new pc, I installed pywinauto and then try to run:
from pywinauto import Desktop, Application
It's not supposed to throw error but...
## Actual Behavior
Here's the immediate error I get when running that:
ImportError: cannot import name 'Desktop' from 'pywinauto' (`C:\ProgramData\Anaconda3\lib\site-packages\pywinauto\__init__.py`)
## Steps to Reproduce the Problem
1. `pip install pywinauto==0.5.4`
2. `from pywinauto import Desktop, Application`
3. get error: `ImportError: cannot import name 'Desktop' from 'pywinauto'` (`C:\ProgramData\Anaconda3\lib\site-packages\pywinauto\__init__.py`)
## Short Example of Code to Demonstrate the Problem
`from pywinauto import Desktop`
## Specifications
- Pywinauto version:
- Python version and bitness:
- Platform and OS:
| closed | 2019-07-15T18:43:51Z | 2019-07-15T20:01:26Z | https://github.com/pywinauto/pywinauto/issues/774 | [
"invalid"
] | doverradio | 1 |
Miserlou/Zappa | flask | 1,395 | Zappa crashes when running "zappa" | <!--- Provide a general summary of the issue in the Title above -->
## Context
After two days and two different OS', I finally got Zappa working. For about 5 minutes, until I had to reset my virtual env (Zappa was packaging my entire virtual env, including /lib/, which resulted in ~100mb deploy zips.)
When I reinstalled, I now get this:
```
camer@DESKTOP-VRG88TF:/mnt/c/Users/camer/Desktop/Development/ryze/content-engine/api$ zappa
Oh no! An error occurred! :(
==============
Traceback (most recent call last):
File "/home/camer/.local/share/virtualenvs/api-GJXpcMFr/lib/python3.6/site-packages/zappa/cli.py", line 2610, in handle
sys.exit(cli.handle())
File "/home/camer/.local/share/virtualenvs/api-GJXpcMFr/lib/python3.6/site-packages/zappa/cli.py", line 505, in handle
self.dispatch_command(self.command, stage)
File "/home/camer/.local/share/virtualenvs/api-GJXpcMFr/lib/python3.6/site-packages/zappa/cli.py", line 520, in dispatch_command
if not self.vargs['json']:
KeyError: 'json'
==============
Need help? Found a bug? Let us know! :D
File bug reports on GitHub here: https://github.com/Miserlou/Zappa
And join our Slack channel here: https://slack.zappa.io
Love!,
~ Team Zappa!
```
## Expected Behavior
Zappa should output the usual list of help commands
## Actual Behavior
Zappa breaks
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Create a virtual env with `pipenv` on python 3.6
2. Run `zappa`
3. It breaks
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.45.1
* Operating System and Python version: Python 3.6.3, Ubuntu 16.04.3
* Your `zappa_settings.py`:
```
{
"dev": {
"app_function": "app.app",
"aws_region": null,
"profile_name": "zappa",
"project_name": "api",
"runtime": "python3.6",
"s3_bucket": "zappa-i38d9iob8"
}
}
``` | open | 2018-02-13T19:03:41Z | 2018-04-13T08:02:47Z | https://github.com/Miserlou/Zappa/issues/1395 | [
"bug",
"windows",
"needs-info"
] | cameronk | 1 |
stanfordnlp/stanza | nlp | 450 | UnsupportedOperation: fileno when trying to start corenlp server via stanza :( | Extremely confused on how to start corenlp client via stanza. I cannot get it to work on both, my windows pc and ubuntu pc. Envromental variables seem to be ok for me, as on the "Starting server with command: java [...]" it gets the correct path on both systems(as seen below).
Heres a log from windows, im using jupyter notebook with python 3.7 and anaconda. Yes java is installed and its the build 1.8.0_261-b12
```
2020-08-23 16:19:39 INFO: Writing properties to tmp file: corenlp_server-cb875580c6b14b81.props
2020-08-23 16:19:39 INFO: Starting server with command: java -Xmx4G -cp C:\Users\mikol\stanza_corenlp\* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 30000 -threads 5 -maxCharLength 100000 -quiet False -serverProperties corenlp_server-cb875580c6b14b81.props -annotators tokenize,ssplit,pos,lemma,ner,parse,depparse,coref -preload -outputFormat serialized
---------------------------------------------------------------------------
UnsupportedOperation Traceback (most recent call last)
<ipython-input-3-8480433fb1e5> in <module>
4 annotators=['tokenize','ssplit','pos','lemma','ner', 'parse', 'depparse','coref'],
5 timeout=30000,
----> 6 memory='4G') as client:
7 ann = client.annotate(test_doc)
8 print(ann)
C:\ProgramData\Anaconda3\lib\site-packages\stanza\server\client.py in __enter__(self)
174
175 def __enter__(self):
--> 176 self.start()
177 return self
178
C:\ProgramData\Anaconda3\lib\site-packages\stanza\server\client.py in start(self)
146 self.server = subprocess.Popen(self.start_cmd,
147 stderr=stderr,
--> 148 stdout=stderr)
149
150 def atexit_kill(self):
C:\ProgramData\Anaconda3\lib\subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors, text)
751 (p2cread, p2cwrite,
752 c2pread, c2pwrite,
--> 753 errread, errwrite) = self._get_handles(stdin, stdout, stderr)
754
755 # We wrap OS handles *before* launching the child, otherwise a
C:\ProgramData\Anaconda3\lib\subprocess.py in _get_handles(self, stdin, stdout, stderr)
1084 else:
1085 # Assuming file-like object
-> 1086 c2pwrite = msvcrt.get_osfhandle(stdout.fileno())
1087 c2pwrite = self._make_inheritable(c2pwrite)
1088
UnsupportedOperation: fileno
```
The error code looks the same on both machines really, only with different filepaths.
Please if anybody can help i would really appreciate it, without the corenlp tools theres not much i can do on my project atm. | closed | 2020-08-23T14:27:55Z | 2020-08-23T23:44:39Z | https://github.com/stanfordnlp/stanza/issues/450 | [
"bug"
] | GitHubUser97 | 6 |
nonebot/nonebot2 | fastapi | 3,278 | Plugin: 追番小工具 | ### PyPI 项目名
nonebot-plugin-track-anime
### 插件 import 包名
nonebot_plugin_track_anime
### 标签
[{"label":"追番工具","color":"#c78787"}]
### 插件配置项
```dotenv
```
### 插件测试
- [ ] 如需重新运行插件测试,请勾选左侧勾选框 | closed | 2025-01-26T07:03:44Z | 2025-02-10T12:34:23Z | https://github.com/nonebot/nonebot2/issues/3278 | [
"Plugin",
"Publish"
] | lbsucceed | 5 |
google-research/bert | nlp | 1,256 | Scripts to Reproduce the "Well-Read Students Learn Better" Results | Hello!
Would it be possible to release the collaterals (scripts, hyper parameters, etc.) to reproduce the pretraining distillation (PD) results you've presented in the Well-Read Students Learn Better paper (the way the 24 smaller models checkpoints were trained)?
Thank you so much! | open | 2021-08-20T21:50:56Z | 2021-08-20T21:55:17Z | https://github.com/google-research/bert/issues/1256 | [] | VenkatKS | 0 |
seleniumbase/SeleniumBase | web-scraping | 3,437 | how do i press shitf and and letter ussing cdp gui?? | I need to be able to insure capital letter typing | closed | 2025-01-19T22:31:04Z | 2025-01-19T23:10:02Z | https://github.com/seleniumbase/SeleniumBase/issues/3437 | [
"question",
"UC Mode / CDP Mode"
] | SlimeBswervin | 5 |
plotly/dash | data-science | 3,037 | add dash-uploader to dash core components | https://github.com/fohrloop/dash-uploader/issues/4 | open | 2024-10-15T14:57:01Z | 2024-10-15T14:57:01Z | https://github.com/plotly/dash/issues/3037 | [
"feature",
"P3",
"community"
] | gvwilson | 0 |
huggingface/diffusers | pytorch | 10,382 | [Bug] Encoder in diffusers.models.autoencoders.vae's forward method return type mismatch leads to AttributeError | ### Describe the bug
**Issue Description:**
When using the Encoder from the` diffusers.models.autoencoders.vae module`, calling its forward method returns a value type mismatch, resulting in an AttributeError during subsequent processing. Specifically, when calling the Encoder's forward method, the returned result is a tuple, while the subsequent code expects to receive a tensor.
### Reproduction
Please use the following code to reproduce the issue
```python
from diffusers.models.autoencoders.vae import Encoder
import torch
encoder = Encoder(
down_block_types=["DownBlock2D", "DownBlock2D"],
block_out_channels=[64, 64],
)
encoder(torch.randn(1, 3, 256, 256)).shape
```
**Expected Behavior:**
The Encoder's forward method in `diffusers.models.autoencoders.vae` should return a tensor for further processing.
**Actual Behavior:**
Running the above code results in the following error:
```txt
AttributeError: 'tuple' object has no attribute 'dim'
```
**Additional Information:**
- Error log:
```txt
Traceback (most recent call last):
File "main.py", line 9, in <module>
encoder(torch.randn(1, 3, 256, 256)).shape
...
File "python3.11/site-packages/diffusers/models/autoencoders/vae.py", line 172, in forward
sample = down_block(sample)
...
File "python3.11/site-packages/diffusers/models/autoencoders/vae.py", line 172, in forward
hidden_states = resnet(hidden_states, temb)
...
File "python3.11/site-packages/diffusers/models/autoencoders/vae.py", line 172, in forward
hidden_states = self.norm1(hidden_states)
File "python3.11/site-packages/torch/nn/modules/normalization.py", line 313, in forward
return F.group_norm(input, self.num_groups, self.weight, self.bias, self.eps)
File "python3.11/site-packages/torch/nn/functional.py", line 2947, in group_norm
if input.dim() < 2:
AttributeError: 'tuple' object has no attribute 'dim'
```
- **Relevant code snippet:**
- In` diffusers/models/autoencoders/vae.py`, lines 171-173:
```python
for down_block in self.down_blocks:
sample = down_block(sample)
```
- `DownBlock2D`'s `forward `method declaration:
```python
def forward(
self, hidden_states: torch.Tensor, temb: Optional[torch.Tensor] = None, *args, **kwargs
) -> Tuple[torch.Tensor, Tuple[torch.Tensor, ...]]:
```
### Logs
_No response_
### System Info
- 🤗 Diffusers version: 0.31.0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Running on Google Colab?: No
- Python version: 3.11.11
- PyTorch version (GPU?): 2.5.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.26.5
- Transformers version: 4.47.0
- Accelerate version: 1.2.1
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 3090, 24576 MiB
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@DN6 @sayakpaul | open | 2024-12-25T13:34:30Z | 2025-02-07T15:03:04Z | https://github.com/huggingface/diffusers/issues/10382 | [
"bug",
"stale"
] | mq-yuan | 3 |
ydataai/ydata-profiling | jupyter | 1,313 | index -9223372036854775808 is out of bounds for axis 0 with size 2 | ### Current Behaviour
IndexError Traceback (most recent call last)
<ipython-input-34-c0d0834d8e2d> in <module>
----> 1 profile_report.get_description()
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/profile_report.py in get_description(self)
315 Dict containing a description for each variable in the DataFrame.
316 """
--> 317 return self.description_set
318
319 def get_rejected_variables(self) -> set:
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/typeguard/__init__.py in wrapper(*args, **kwargs)
1031 memo = _CallMemo(python_func, _localns, args=args, kwargs=kwargs)
1032 check_argument_types(memo)
-> 1033 retval = func(*args, **kwargs)
1034 try:
1035 check_return_type(retval, memo)
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/profile_report.py in description_set(self)
251 self.summarizer,
252 self.typeset,
--> 253 self._sample,
254 )
255 return self._description_set
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/describe.py in describe(config, df, summarizer, typeset, sample)
70 pbar.total += len(df.columns)
71 series_description = get_series_descriptions(
---> 72 config, df, summarizer, typeset, pbar
73 )
74
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/multimethod/__init__.py in __call__(self, *args, **kwargs)
313 func = self[tuple(func(arg) for func, arg in zip(self.type_checkers, args))]
314 try:
--> 315 return func(*args, **kwargs)
316 except TypeError as ex:
317 raise DispatchError(f"Function {func.__code__}") from ex
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/pandas/summary_pandas.py in pandas_get_series_descriptions(config, df, summarizer, typeset, pbar)
98 with multiprocessing.pool.ThreadPool(pool_size) as executor:
99 for i, (column, description) in enumerate(
--> 100 executor.imap_unordered(multiprocess_1d, args)
101 ):
102 pbar.set_postfix_str(f"Describe variable:{column}")
~/anaconda3/envs/py3.7/lib/python3.7/multiprocessing/pool.py in next(self, timeout)
746 if success:
747 return value
--> 748 raise value
749
750 __next__ = next # XXX
~/anaconda3/envs/py3.7/lib/python3.7/multiprocessing/pool.py in worker(inqueue, outqueue, initializer, initargs, maxtasks, wrap_exception)
119 job, i, func, args, kwds = task
120 try:
--> 121 result = (True, func(*args, **kwds))
122 except Exception as e:
123 if wrap_exception and func is not _helper_reraises_exception:
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/pandas/summary_pandas.py in multiprocess_1d(args)
77 """
78 column, series = args
---> 79 return column, describe_1d(config, series, summarizer, typeset)
80
81 pool_size = config.pool_size
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/multimethod/__init__.py in __call__(self, *args, **kwargs)
313 func = self[tuple(func(arg) for func, arg in zip(self.type_checkers, args))]
314 try:
--> 315 return func(*args, **kwargs)
316 except TypeError as ex:
317 raise DispatchError(f"Function {func.__code__}") from ex
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/pandas/summary_pandas.py in pandas_describe_1d(config, series, summarizer, typeset)
55
56 typeset.type_schema[series.name] = vtype
---> 57 return summarizer.summarize(config, series, dtype=vtype)
58
59
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/summarizer.py in summarize(self, config, series, dtype)
37 object:
38 """
---> 39 _, _, summary = self.handle(str(dtype), config, series, {"type": str(dtype)})
40 return summary
41
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/handler.py in handle(self, dtype, *args, **kwargs)
60 funcs = self.mapping.get(dtype, [])
61 op = compose(funcs)
---> 62 return op(*args)
63
64
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/handler.py in func2(*x)
19 return f(*x)
20 else:
---> 21 return f(*res)
22
23 return func2
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/handler.py in func2(*x)
19 return f(*x)
20 else:
---> 21 return f(*res)
22
23 return func2
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/handler.py in func2(*x)
19 return f(*x)
20 else:
---> 21 return f(*res)
22
23 return func2
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/handler.py in func2(*x)
15 def func(f: Callable, g: Callable) -> Callable:
16 def func2(*x) -> Any:
---> 17 res = g(*x)
18 if type(res) == bool:
19 return f(*x)
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/multimethod/__init__.py in __call__(self, *args, **kwargs)
313 func = self[tuple(func(arg) for func, arg in zip(self.type_checkers, args))]
314 try:
--> 315 return func(*args, **kwargs)
316 except TypeError as ex:
317 raise DispatchError(f"Function {func.__code__}") from ex
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/summary_algorithms.py in inner(config, series, summary)
63 if not summary["hashable"]:
64 return config, series, summary
---> 65 return fn(config, series, summary)
66
67 return inner
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/summary_algorithms.py in inner(config, series, summary)
80 series = series.dropna()
81
---> 82 return fn(config, series, summary)
83
84 return inner
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/pandas/describe_numeric_pandas.py in pandas_describe_numeric_1d(config, series, summary)
118
119 if chi_squared_threshold > 0.0:
--> 120 stats["chi_squared"] = chi_square(finite_values)
121
122 stats["range"] = stats["max"] - stats["min"]
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/ydata_profiling/model/summary_algorithms.py in chi_square(values, histogram)
50 ) -> dict:
51 if histogram is None:
---> 52 histogram, _ = np.histogram(values, bins="auto")
53 return dict(chisquare(histogram)._asdict())
54
<__array_function__ internals> in histogram(*args, **kwargs)
~/anaconda3/envs/py3.7/lib/python3.7/site-packages/numpy/lib/histograms.py in histogram(a, bins, range, normed, weights, density)
854 # The index computation is not guaranteed to give exactly
855 # consistent results within ~1 ULP of the bin edges.
--> 856 decrement = tmp_a < bin_edges[indices]
857 indices[decrement] -= 1
858 # The last bin includes the right edge. The other bins do not.
IndexError: index -9223372036854775808 is out of bounds for axis 0 with size 2
### Expected Behaviour
return data profiling for this table
### Data Description
SUM_TIMER_READ_WRITE
0 10950043000000000
### Code that reproduces the bug
```Python
import pandas as pd
from ydata_profiling import ProfileReport
b = {'SUM_TIMER_READ_WRITE': [10950043000000000]}
table = pd.DataFrame.from_dict(b)
profile_report = ProfileReport(
table,
progress_bar=False,
infer_dtypes=False,
missing_diagrams=None,
correlations=None,
interactions=None,
# duplicates=None,
samples=None)
description = profile_report.get_description()
```
### pandas-profiling version
v4.1.1
### Dependencies
```Text
pandas==1.3.5
ydata-profiling==4.1.1
```
### OS
Linux dsp-X299-WU8 5.15.0-69-generic #76~20.04.1-Ubuntu SMP Mon Mar 20 15:54:19 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2023-04-18T02:46:42Z | 2023-05-08T17:00:33Z | https://github.com/ydataai/ydata-profiling/issues/1313 | [
"question/discussion ❓"
] | zhoujianch | 1 |
HIT-SCIR/ltp | nlp | 181 | 实体识别怎么在没有B的情况下就出现了I? | 测试文本如下:
[15.txt](https://github.com/HIT-SCIR/ltp/files/417406/15.txt)
输出结果

| closed | 2016-08-15T00:59:19Z | 2020-06-25T11:20:57Z | https://github.com/HIT-SCIR/ltp/issues/181 | [
"bug"
] | rulongchen | 4 |
piskvorky/gensim | data-science | 3,388 | FastTextKeyedVectors.add_vectors is not adding vectors | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I have been trying to create a `FastTextKeyedVectors` and adding vectors to it using either `add_vector` or `add_vectors` but the methods are not adding anything. After looking at the implementation of those methods, I think there is an error while checking if a key has already been added.
#### Steps/code/corpus to reproduce
I create a `FastTextKeyedVectors` using the defaults used by the `FastText` model, then try to add vectors to it using `add_vector` or `add_vectors`:
```
wv = FastTextKeyedVectors(vector_size=2, min_n=3, max_n=6, bucket=2000000)
wv.add_vector("test", [0.5, 0.5])
print(wv.key_to_index)
>> {}
print(wv.index_to_key)
>> []
print(wv.vectors)
>> []
wv.add_vectors(["test"], [[0.5, 0.5]])
print(wv.key_to_index)
>> {}
print(wv.index_to_key)
>> []
print(wv.vectors)
>> []
```
`wv.key_to_index`, `wv.index_to_key` and `wv.vectors` are all empty.
`FastTextKeyedVectors` is a child of `KeyedVectors` where the `add_vector/s` methods are implemented. `add_vector` does a few checks then calls `add_vectors`.
In `add_vectors`, there is an `in_vocab_mask`, which is a list of booleans indicating if a key is already present in the KeyedVectors.
```
in_vocab_mask = np.zeros(len(keys), dtype=bool)
for idx, key in enumerate(keys):
if key in self:
in_vocab_mask[idx] = True
```
Since Gensim 4.0, `key in wv` will always return True with FastText by design. The proper way of checking if a key exists is by calling `key in wv.key_to_index` (See https://github.com/RaRe-Technologies/gensim/wiki/Migrating-from-Gensim-3.x-to-4#10-check-if-a-word-is-fully-oov-out-of-vocabulary-for-fasttext)
So replacing the above code by
```
in_vocab_mask = np.zeros(len(keys), dtype=bool)
for idx, key in enumerate(keys):
if key in self.key_to_index:
in_vocab_mask[idx] = True
```
seems to fix the issue.
```
wv = FastTextKeyedVectors(vector_size=2, min_n=3, max_n=6, bucket=2000000)
wv.add_vectors(["test"], [[0.5, 0.5]])
print(wv.key_to_index)
>> {'test': 0}
print(wv.index_to_key)
>> ['test']
print(wv.vectors)
>> [[0.5 0.5]]
```
I am not sure how `FastText` models are able to add vectors to `FastTextKeyedVectors` the proper way when training without encountering this issue as I have not looked at the training code in detail.
#### Versions
Linux-5.10.0-17-amd64-x86_64-with-glibc2.31
Python 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0]
Bits 64
NumPy 1.21.6
SciPy 1.7.3
gensim 4.2.0
FAST_VERSION 1
| open | 2022-09-28T09:13:24Z | 2022-09-29T13:57:41Z | https://github.com/piskvorky/gensim/issues/3388 | [
"bug"
] | globba | 2 |
nvbn/thefuck | python | 1,179 | Thefuck not initialised by .bashrc line on WSL (Ubuntu) | <!-- If you have any issue with The Fuck, sorry about that, but we will do what we
can to fix that. Actually, maybe we already have, so first thing to do is to
update The Fuck and see if the bug is still there. -->
<!-- If it is (sorry again), check if the problem has not already been reported and
if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with
the following basic information: -->
The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
The Fuck 3.30 using Python 3.9.2 and Bash 5.0.17(1)-release
Your system (Debian 7, ArchLinux, Windows, etc.):
Ubuntu 20.04.1 LTS on Windows 10 WSL
How to reproduce the bug:
Follow instructions to install thefuck:
```
sudo apt update
sudo apt install python3-dev python3-pip python3-setuptools
sudo pip3 install thefuck
```
Add the recommended line to .bashrc:
```
eval $(thefuck --alias)
```
start a new shell: the following error will appear:
```
Command 'thefuck' not found, but can be installed with:
sudo apt install thefuck
```
This happens anytime a new shell session is started, regardless of what is in .bashrc.
If you try to use thefuck:
```
$ fuck
Seems like fuck alias already configured!
For applying changes run source ~/.bashrc or restart your shell.
$
```
Once I `source ~/.bashrc` I can finally use thefuck, without issues. But if I start a new shell session, I'm back to square one and need to `source ~/.bashrc` again.
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
```
Command 'thefuck' not found, but can be installed with:
sudo apt install thefuck
$ fuck
Seems like fuck alias already configured!
For applying changes run source ~/.bashrc or restart your shell.
$ source ~/.bashrc
$ fuck
DEBUG: Run with settings: {'alter_history': True,
'debug': True,
'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},
'exclude_rules': [],
'history_limit': None,
'instant_mode': False,
'no_colors': False,
'num_close_matches': 3,
'priority': {},
'repeat': False,
'require_confirmation': True,
'rules': [<const: All rules enabled>],
'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],
'user_dir': PosixPath('/home/username/.config/thefuck'),
'wait_command': 3,
'wait_slow_command': 15}
DEBUG: Received output: /bin/sh: 1: source: not found
DEBUG: Call: source ~/.bashrc; with env: {'SHELL': '/bin/bash', 'TF_SHELL': 'bash', 'NVM_INC': '/home/username/.nvm/versions/node/v15.8.0/include/node', 'WSL_DISTRO_NAME': 'Ubuntu', 'WT_SESSION': '5e46141a-92ee-4b26-9e10-efe97d3a3f36', 'HOMEBREW_PREFIX': '/home/linuxbrew/.linuxbrew', 'GTK_MODULES': 'appmenu-gtk-module', 'NAME': 'MACHINE', 'PWD': '/mnt/c/Users/trphx05', 'LOGNAME': 'username', 'MANPATH': '/home/username/.nvm/versions/node/v15.8.0/share/man:/home/linuxbrew/.linuxbrew/share/man:', 'HOME': '/home/username', 'LANG': 'C', 'WSL_INTEROP': '/run/WSL/1455_interop', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'TF_ALIAS': 'fuck', 'INFOPATH': '/home/linuxbrew/.linuxbrew/share/info:', 'NVM_DIR': '/home/username/.nvm', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'TERM': 'xterm-256color', 'TF_HISTORY': '\t fuck\n\t cd ~\n\t sudo apt-get install thefuck\n\t export THEFUCK_DEBUG=true\n\t fuck\n\t source ~/.bashrc\n\t fuck\n\t nano ~/.bashrc\n\t fuck\n\t source ~/.bashrc', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'USER': 'username', 'PYTHONIOENCODING': 'utf-8', 'TF_SHELL_ALIASES': 'alias alert=\'notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e \'\\\'\'s/^\\s*[0-9]\\+\\s*//;s/[;&|]\\s*alert$//\'\\\'\')"\'\nalias egrep=\'egrep --color=auto\'\nalias fgrep=\'fgrep --color=auto\'\nalias grep=\'grep --color=auto\'\nalias l=\'ls -CF\'\nalias la=\'ls -A\'\nalias ll=\'ls -alF\'\nalias ls=\'ls --color=auto\'', 'HOMEBREW_CELLAR': '/home/linuxbrew/.linuxbrew/Cellar', 'SHLVL': '1', 'NVM_CD_FLAGS': '', 'HOMEBREW_REPOSITORY': '/home/linuxbrew/.linuxbrew/Homebrew', 'UBUNTU_MENUPROXY': '1', 'WSLENV': 'WT_SESSION::WT_PROFILE_ID', 'XDG_DATA_DIRS': '/usr/local/share:/usr/share:/var/lib/snapd/desktop', 'PATH': '/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin:/home/username/.nvm/versions/node/v15.8.0/bin:/home/username/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Python39/Scripts/:/mnt/c/Python39/:/mnt/c/Windows/system32:/mnt/c/Windows:/mnt/c/Windows/System32/Wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0/:/mnt/c/Windows/System32/OpenSSH/:/mnt/c/Program Files (x86)/NVIDIA Corporation/PhysX/Common:/mnt/c/Program Files/NVIDIA Corporation/NVIDIA NvDLISR:/mnt/c/ProgramData/chocolatey/bin:/mnt/c/Program Files/Microsoft VS Code/bin:/mnt/c/Program Files (x86)/Plantronics/Spokes3G/:/mnt/c/Program Files/Docker/Docker/resources/bin:/mnt/c/ProgramData/DockerDesktop/version-bin:/mnt/c/Users/trphx05/scoop/apps/dotnet-sdk/current:/mnt/c/Users/trphx05/scoop/apps/perl/current/perl/site/bin:/mnt/c/Users/trphx05/scoop/apps/perl/current/perl/bin:/mnt/c/Users/trphx05/scoop/apps/perl/current/c/bin:/mnt/c/Users/trphx05/scoop/apps/yarn/current/global/node_modules/.bin:/mnt/c/Users/trphx05/scoop/apps/yarn/current/Yarn/bin:/mnt/c/Users/trphx05/scoop/apps/rustup-msvc/current/.cargo/bin:/mnt/c/Users/trphx05/scoop/apps/nvm/current/nodejs/nodejs:/mnt/c/Users/trphx05/scoop/shims:/mnt/c/Users/trphx05/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/trphx05/AppData/Roaming/Python/Python39/Scripts:/mnt/c/Users/trphx05/.dotnet/tools:/snap/bin', 'THEFUCK_DEBUG': 'true', 'NVM_BIN': '/home/username/.nvm/versions/node/v15.8.0/bin', 'HOSTTYPE': 'x86_64', 'WT_PROFILE_ID': '{2c4de342-38b7-51cf-b940-2309a097f518}', '_': '/home/linuxbrew/.linuxbrew/bin/thefuck', 'LC_ALL': 'C', 'GIT_TRACE': '1'}; is slow: False took: 0:00:00.053087
DEBUG: Importing rule: adb_unknown_command; took: 0:00:00.000210
DEBUG: Importing rule: ag_literal; took: 0:00:00.000364
DEBUG: Importing rule: apt_get; took: 0:00:00.000498
DEBUG: Importing rule: apt_get_search; took: 0:00:00.000258
DEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000673
DEBUG: Importing rule: apt_list_upgradable; took: 0:00:00.000372
DEBUG: Importing rule: apt_upgrade; took: 0:00:00.000529
DEBUG: Importing rule: aws_cli; took: 0:00:00.000243
DEBUG: Importing rule: az_cli; took: 0:00:00.000238
DEBUG: Importing rule: brew_cask_dependency; took: 0:00:00.000473
DEBUG: Importing rule: brew_install; took: 0:00:00.000094
DEBUG: Importing rule: brew_link; took: 0:00:00.000235
DEBUG: Importing rule: brew_reinstall; took: 0:00:00.000493
DEBUG: Importing rule: brew_uninstall; took: 0:00:00.000240
DEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000105
DEBUG: Importing rule: brew_update_formula; took: 0:00:00.000237
DEBUG: Importing rule: brew_upgrade; took: 0:00:00.000082
DEBUG: Importing rule: cargo; took: 0:00:00.000125
DEBUG: Importing rule: cargo_no_command; took: 0:00:00.000238
DEBUG: Importing rule: cat_dir; took: 0:00:00.000234
DEBUG: Importing rule: cd_correction; took: 0:00:00.000957
DEBUG: Importing rule: cd_mkdir; took: 0:00:00.000423
DEBUG: Importing rule: cd_parent; took: 0:00:00.000081
DEBUG: Importing rule: chmod_x; took: 0:00:00.000083
DEBUG: Importing rule: choco_install; took: 0:00:00.104599
DEBUG: Importing rule: composer_not_command; took: 0:00:00.000491
DEBUG: Importing rule: cp_create_destination; took: 0:00:00.000367
DEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000447
DEBUG: Importing rule: cpp11; took: 0:00:00.000234
DEBUG: Importing rule: dirty_untar; took: 0:00:00.000969
DEBUG: Importing rule: dirty_unzip; took: 0:00:00.000847
DEBUG: Importing rule: django_south_ghost; took: 0:00:00.000085
DEBUG: Importing rule: django_south_merge; took: 0:00:00.000100
DEBUG: Importing rule: dnf_no_such_command; took: 0:00:00.050721
DEBUG: Importing rule: docker_image_being_used_by_container; took: 0:00:00.000448
DEBUG: Importing rule: docker_login; took: 0:00:00.000307
DEBUG: Importing rule: docker_not_command; took: 0:00:00.058180
DEBUG: Importing rule: dry; took: 0:00:00.000180
DEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000629
DEBUG: Importing rule: fix_alt_space; took: 0:00:00.000326
DEBUG: Importing rule: fix_file; took: 0:00:00.001864
DEBUG: Importing rule: gem_unknown_command; took: 0:00:00.052363
DEBUG: Importing rule: git_add; took: 0:00:00.000608
DEBUG: Importing rule: git_add_force; took: 0:00:00.000231
DEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000244
DEBUG: Importing rule: git_branch_delete; took: 0:00:00.000245
DEBUG: Importing rule: git_branch_delete_checked_out; took: 0:00:00.000233
DEBUG: Importing rule: git_branch_exists; took: 0:00:00.000314
DEBUG: Importing rule: git_branch_list; took: 0:00:00.000245
DEBUG: Importing rule: git_checkout; took: 0:00:00.000258
DEBUG: Importing rule: git_commit_amend; took: 0:00:00.000303
DEBUG: Importing rule: git_commit_reset; took: 0:00:00.000249
DEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000236
DEBUG: Importing rule: git_diff_staged; took: 0:00:00.000226
DEBUG: Importing rule: git_fix_stash; took: 0:00:00.000234
DEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000241
DEBUG: Importing rule: git_help_aliased; took: 0:00:00.000228
DEBUG: Importing rule: git_merge; took: 0:00:00.000226
DEBUG: Importing rule: git_merge_unrelated; took: 0:00:00.000239
DEBUG: Importing rule: git_not_command; took: 0:00:00.000227
DEBUG: Importing rule: git_pull; took: 0:00:00.000226
DEBUG: Importing rule: git_pull_clone; took: 0:00:00.000220
DEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000243
DEBUG: Importing rule: git_push; took: 0:00:00.000228
DEBUG: Importing rule: git_push_different_branch_names; took: 0:00:00.000222
DEBUG: Importing rule: git_push_force; took: 0:00:00.000236
DEBUG: Importing rule: git_push_pull; took: 0:00:00.000224
DEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000270
DEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000231
DEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000185
DEBUG: Importing rule: git_remote_delete; took: 0:00:00.000220
DEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000154
DEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000220
DEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000282
DEBUG: Importing rule: git_rm_staged; took: 0:00:00.000225
DEBUG: Importing rule: git_stash; took: 0:00:00.000227
DEBUG: Importing rule: git_stash_pop; took: 0:00:00.000267
DEBUG: Importing rule: git_tag_force; took: 0:00:00.000218
DEBUG: Importing rule: git_two_dashes; took: 0:00:00.000220
DEBUG: Importing rule: go_run; took: 0:00:00.000244
DEBUG: Importing rule: go_unknown_command; took: 0:00:00.053827
DEBUG: Importing rule: gradle_no_task; took: 0:00:00.000539
DEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000253
DEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000236
DEBUG: Importing rule: grep_recursive; took: 0:00:00.000231
DEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000422
DEBUG: Importing rule: gulp_not_task; took: 0:00:00.000260
DEBUG: Importing rule: has_exists_script; took: 0:00:00.000233
DEBUG: Importing rule: heroku_multiple_apps; took: 0:00:00.000237
DEBUG: Importing rule: heroku_not_command; took: 0:00:00.000228
DEBUG: Importing rule: history; took: 0:00:00.000100
DEBUG: Importing rule: hostscli; took: 0:00:00.000420
DEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000308
DEBUG: Importing rule: java; took: 0:00:00.000270
DEBUG: Importing rule: javac; took: 0:00:00.000250
DEBUG: Importing rule: lein_not_task; took: 0:00:00.000367
DEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000246
DEBUG: Importing rule: ln_s_order; took: 0:00:00.000249
DEBUG: Importing rule: long_form_help; took: 0:00:00.000083
DEBUG: Importing rule: ls_all; took: 0:00:00.000241
DEBUG: Importing rule: ls_lah; took: 0:00:00.000259
DEBUG: Importing rule: man; took: 0:00:00.000239
DEBUG: Importing rule: man_no_space; took: 0:00:00.000080
DEBUG: Importing rule: mercurial; took: 0:00:00.000238
DEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000108
DEBUG: Importing rule: mkdir_p; took: 0:00:00.000231
DEBUG: Importing rule: mvn_no_command; took: 0:00:00.000233
DEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000238
DEBUG: Importing rule: nixos_cmd_not_found; took: 0:00:00.054193
DEBUG: Importing rule: no_command; took: 0:00:00.000366
DEBUG: Importing rule: no_such_file; took: 0:00:00.000100
DEBUG: Importing rule: npm_missing_script; took: 0:00:00.000508
DEBUG: Importing rule: npm_run_script; took: 0:00:00.000323
DEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000393
DEBUG: Importing rule: open; took: 0:00:00.000310
DEBUG: Importing rule: pacman; took: 0:00:00.157218
DEBUG: Importing rule: pacman_not_found; took: 0:00:00.000172
DEBUG: Importing rule: path_from_history; took: 0:00:00.000131
DEBUG: Importing rule: php_s; took: 0:00:00.000320
DEBUG: Importing rule: pip_install; took: 0:00:00.000312
DEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000309
DEBUG: Importing rule: port_already_in_use; took: 0:00:00.000176
DEBUG: Importing rule: prove_recursively; took: 0:00:00.000246
DEBUG: Importing rule: pyenv_no_such_command; took: 0:00:00.053834
DEBUG: Importing rule: python_command; took: 0:00:00.000305
DEBUG: Importing rule: python_execute; took: 0:00:00.000307
DEBUG: Importing rule: quotation_marks; took: 0:00:00.000177
DEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000536
DEBUG: Importing rule: remove_shell_prompt_literal; took: 0:00:00.000088
DEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000117
DEBUG: Importing rule: rm_dir; took: 0:00:00.000239
DEBUG: Importing rule: rm_root; took: 0:00:00.000334
DEBUG: Importing rule: scm_correction; took: 0:00:00.000261
DEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000248
DEBUG: Importing rule: sl_ls; took: 0:00:00.000101
DEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000242
DEBUG: Importing rule: sudo; took: 0:00:00.000091
DEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000248
DEBUG: Importing rule: switch_lang; took: 0:00:00.000142
DEBUG: Importing rule: systemctl; took: 0:00:00.000444
DEBUG: Importing rule: terraform_init; took: 0:00:00.000240
DEBUG: Importing rule: test.py; took: 0:00:00.000084
DEBUG: Importing rule: tmux; took: 0:00:00.000253
DEBUG: Importing rule: touch; took: 0:00:00.000235
DEBUG: Importing rule: tsuru_login; took: 0:00:00.000233
DEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000246
DEBUG: Importing rule: unknown_command; took: 0:00:00.000087
DEBUG: Importing rule: unsudo; took: 0:00:00.000080
DEBUG: Importing rule: vagrant_up; took: 0:00:00.000235
DEBUG: Importing rule: whois; took: 0:00:00.000362
DEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000304
DEBUG: Importing rule: yarn_alias; took: 0:00:00.000319
DEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.045713
DEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000514
DEBUG: Importing rule: yarn_help; took: 0:00:00.000262
DEBUG: Importing rule: yum_invalid_operation; took: 0:00:00.054748
DEBUG: Trying rule: path_from_history; took: 0:00:00.000385
DEBUG: Trying rule: dry; took: 0:00:00.000064
DEBUG: Trying rule: git_stash_pop; took: 0:00:00.000024
DEBUG: Trying rule: test.py; took: 0:00:00.000002
DEBUG: Trying rule: adb_unknown_command; took: 0:00:00.000015
DEBUG: Trying rule: ag_literal; took: 0:00:00.000016
DEBUG: Trying rule: apt_get_search; took: 0:00:00.000014
DEBUG: Trying rule: apt_invalid_operation; took: 0:00:00.000017
DEBUG: Trying rule: apt_list_upgradable; took: 0:00:00.000016
DEBUG: Trying rule: apt_upgrade; took: 0:00:00.000014
DEBUG: Trying rule: aws_cli; took: 0:00:00.000018
DEBUG: Trying rule: az_cli; took: 0:00:00.000014
DEBUG: Trying rule: brew_cask_dependency; took: 0:00:00.000013
DEBUG: Trying rule: brew_install; took: 0:00:00.000003
DEBUG: Trying rule: brew_link; took: 0:00:00.000015
DEBUG: Trying rule: brew_reinstall; took: 0:00:00.000012
DEBUG: Trying rule: brew_uninstall; took: 0:00:00.000013
DEBUG: Trying rule: brew_unknown_command; took: 0:00:00.000002
DEBUG: Trying rule: brew_update_formula; took: 0:00:00.000013
DEBUG: Trying rule: brew_upgrade; took: 0:00:00.000003
DEBUG: Trying rule: cargo; took: 0:00:00.000003
DEBUG: Trying rule: cargo_no_command; took: 0:00:00.000014
DEBUG: Trying rule: cat_dir; took: 0:00:00.000016
DEBUG: Trying rule: cd_correction; took: 0:00:00.000016
DEBUG: Trying rule: cd_mkdir; took: 0:00:00.000019
DEBUG: Trying rule: cd_parent; took: 0:00:00.000002
DEBUG: Trying rule: chmod_x; took: 0:00:00.000004
DEBUG: Trying rule: composer_not_command; took: 0:00:00.000014
DEBUG: Trying rule: cp_create_destination; took: 0:00:00.000014
DEBUG: Trying rule: cp_omitting_directory; took: 0:00:00.000016
DEBUG: Trying rule: cpp11; took: 0:00:00.000015
DEBUG: Trying rule: dirty_untar; took: 0:00:00.000014
DEBUG: Trying rule: dirty_unzip; took: 0:00:00.000014
DEBUG: Trying rule: django_south_ghost; took: 0:00:00.000003
DEBUG: Trying rule: django_south_merge; took: 0:00:00.000002
DEBUG: Trying rule: docker_image_being_used_by_container; took: 0:00:00.000014
DEBUG: Trying rule: docker_login; took: 0:00:00.000017
DEBUG: Trying rule: docker_not_command; took: 0:00:00.000014
DEBUG: Trying rule: fab_command_not_found; took: 0:00:00.000015
DEBUG: Trying rule: fix_alt_space; took: 0:00:00.000004
DEBUG: Trying rule: fix_file; took: 0:00:00.000011
DEBUG: Trying rule: gem_unknown_command; took: 0:00:00.000014
DEBUG: Trying rule: git_add; took: 0:00:00.000013
DEBUG: Trying rule: git_add_force; took: 0:00:00.000011
DEBUG: Trying rule: git_bisect_usage; took: 0:00:00.000012
DEBUG: Trying rule: git_branch_delete; took: 0:00:00.000011
DEBUG: Trying rule: git_branch_delete_checked_out; took: 0:00:00.000016
DEBUG: Trying rule: git_branch_exists; took: 0:00:00.000012
DEBUG: Trying rule: git_branch_list; took: 0:00:00.000012
DEBUG: Trying rule: git_checkout; took: 0:00:00.000012
DEBUG: Trying rule: git_commit_amend; took: 0:00:00.000012
DEBUG: Trying rule: git_commit_reset; took: 0:00:00.000011
DEBUG: Trying rule: git_diff_no_index; took: 0:00:00.000011
DEBUG: Trying rule: git_diff_staged; took: 0:00:00.000013
DEBUG: Trying rule: git_fix_stash; took: 0:00:00.000012
DEBUG: Trying rule: git_flag_after_filename; took: 0:00:00.000012
DEBUG: Trying rule: git_help_aliased; took: 0:00:00.000012
DEBUG: Trying rule: git_merge; took: 0:00:00.000011
DEBUG: Trying rule: git_merge_unrelated; took: 0:00:00.000014
DEBUG: Trying rule: git_not_command; took: 0:00:00.000012
DEBUG: Trying rule: git_pull; took: 0:00:00.000011
DEBUG: Trying rule: git_pull_clone; took: 0:00:00.000012
DEBUG: Trying rule: git_pull_uncommitted_changes; took: 0:00:00.000012
DEBUG: Trying rule: git_push; took: 0:00:00.000012
DEBUG: Trying rule: git_push_different_branch_names; took: 0:00:00.000011
DEBUG: Trying rule: git_push_pull; took: 0:00:00.000015
DEBUG: Trying rule: git_push_without_commits; took: 0:00:00.000012
DEBUG: Trying rule: git_rebase_merge_dir; took: 0:00:00.000012
DEBUG: Trying rule: git_rebase_no_changes; took: 0:00:00.000012
DEBUG: Trying rule: git_remote_delete; took: 0:00:00.000012
DEBUG: Trying rule: git_remote_seturl_add; took: 0:00:00.000016
DEBUG: Trying rule: git_rm_local_modifications; took: 0:00:00.000012
DEBUG: Trying rule: git_rm_recursive; took: 0:00:00.000011
DEBUG: Trying rule: git_rm_staged; took: 0:00:00.000012
DEBUG: Trying rule: git_stash; took: 0:00:00.000014
DEBUG: Trying rule: git_tag_force; took: 0:00:00.000011
DEBUG: Trying rule: git_two_dashes; took: 0:00:00.000011
DEBUG: Trying rule: go_run; took: 0:00:00.000016
DEBUG: Trying rule: go_unknown_command; took: 0:00:00.000013
DEBUG: Trying rule: gradle_no_task; took: 0:00:00.000014
DEBUG: Trying rule: gradle_wrapper; took: 0:00:00.000014
DEBUG: Trying rule: grep_arguments_order; took: 0:00:00.000014
DEBUG: Trying rule: grep_recursive; took: 0:00:00.000014
DEBUG: Trying rule: grunt_task_not_found; took: 0:00:00.000017
DEBUG: Trying rule: gulp_not_task; took: 0:00:00.000014
DEBUG: Trying rule: has_exists_script; took: 0:00:00.000223
DEBUG: Trying rule: heroku_multiple_apps; took: 0:00:00.000017
DEBUG: Trying rule: heroku_not_command; took: 0:00:00.000014
DEBUG: Trying rule: hostscli; took: 0:00:00.000015
DEBUG: Trying rule: ifconfig_device_not_found; took: 0:00:00.000014
DEBUG: Trying rule: java; took: 0:00:00.000014
DEBUG: Trying rule: javac; took: 0:00:00.000015
DEBUG: Trying rule: lein_not_task; took: 0:00:00.000014
DEBUG: Trying rule: ln_no_hard_link; took: 0:00:00.000004
DEBUG: Trying rule: ln_s_order; took: 0:00:00.000004
DEBUG: Trying rule: ls_all; took: 0:00:00.000014
DEBUG: Trying rule: ls_lah; took: 0:00:00.000012
DEBUG: Trying rule: man; took: 0:00:00.000016
DEBUG: Trying rule: mercurial; took: 0:00:00.000013
DEBUG: Trying rule: mkdir_p; took: 0:00:00.000004
DEBUG: Trying rule: mvn_no_command; took: 0:00:00.000013
DEBUG: Trying rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000012
DEBUG: Trying rule: no_such_file; took: 0:00:00.000473
DEBUG: Trying rule: npm_missing_script; took: 0:00:00.000018
DEBUG: Trying rule: npm_run_script; took: 0:00:00.000014
DEBUG: Trying rule: npm_wrong_command; took: 0:00:00.000013
DEBUG: Trying rule: open; took: 0:00:00.000015
DEBUG: Trying rule: php_s; took: 0:00:00.000014
DEBUG: Trying rule: pip_install; took: 0:00:00.000017
DEBUG: Trying rule: pip_unknown_command; took: 0:00:00.000015
DEBUG: Trying rule: port_already_in_use; took: 0:00:00.000383
DEBUG: Trying rule: prove_recursively; took: 0:00:00.000017
DEBUG: Trying rule: pyenv_no_such_command; took: 0:00:00.000014
DEBUG: Trying rule: python_command; took: 0:00:00.000005
DEBUG: Trying rule: python_execute; took: 0:00:00.000014
DEBUG: Trying rule: quotation_marks; took: 0:00:00.000002
DEBUG: Trying rule: react_native_command_unrecognized; took: 0:00:00.000014
DEBUG: Trying rule: remove_shell_prompt_literal; took: 0:00:00.000003
DEBUG: Trying rule: remove_trailing_cedilla; took: 0:00:00.000003
DEBUG: Trying rule: rm_dir; took: 0:00:00.000004
DEBUG: Trying rule: scm_correction; took: 0:00:00.000014
DEBUG: Trying rule: sed_unterminated_s; took: 0:00:00.000014
DEBUG: Trying rule: sl_ls; took: 0:00:00.000002
DEBUG: Trying rule: ssh_known_hosts; took: 0:00:00.000015
DEBUG: Trying rule: sudo; took: 0:00:00.000008
DEBUG: Trying rule: sudo_command_from_user_path; took: 0:00:00.000013
DEBUG: Trying rule: switch_lang; took: 0:00:00.000031
DEBUG: Trying rule: systemctl; took: 0:00:00.000018
DEBUG: Trying rule: terraform_init; took: 0:00:00.000024
DEBUG: Trying rule: tmux; took: 0:00:00.000014
DEBUG: Trying rule: touch; took: 0:00:00.000013
DEBUG: Trying rule: tsuru_login; took: 0:00:00.000014
DEBUG: Trying rule: tsuru_not_command; took: 0:00:00.000012
DEBUG: Trying rule: unknown_command; took: 0:00:00.000092
DEBUG: Trying rule: unsudo; took: 0:00:00.000005
DEBUG: Trying rule: vagrant_up; took: 0:00:00.000013
DEBUG: Trying rule: whois; took: 0:00:00.000014
DEBUG: Trying rule: workon_doesnt_exists; took: 0:00:00.000013
DEBUG: Trying rule: yarn_alias; took: 0:00:00.000014
DEBUG: Trying rule: yarn_command_not_found; took: 0:00:00.000013
DEBUG: Trying rule: yarn_command_replaced; took: 0:00:00.000014
DEBUG: Trying rule: yarn_help; took: 0:00:00.000016
DEBUG: Trying rule: man_no_space; took: 0:00:00.000004
DEBUG: Trying rule: no_command; took: 0:00:00.077364
source ~/.bashrc [enter/↑/↓/ctr]
```
Anything else you think is relevant:
FILL THIS IN
<!-- It's only with enough information that we can do something to fix the problem. -->
| closed | 2021-03-29T09:29:57Z | 2021-07-22T09:44:16Z | https://github.com/nvbn/thefuck/issues/1179 | [] | LoZeno | 1 |
OWASP/Nettacker | automation | 75 | args_loader upgrade | Hello,
according to #60 issue, the `core/args_loader.py` is not working well and does not consider all possible commands. we can add a task to replace `--method-args port_scan_stealth` with `--method-args port_scan_stealth=True`. the default values for any `key` would be `True`.
e.g. `--method-args port_scan_stealth&port_scan_ports=1,2,3&dir_scan_random_agent` would be equal to `--method-args port_scan_stealth=True&port_scan_ports=1,2,3&dir_scan_random_agent=True`
let me know if anyone would like to work on this.
Best Regards. | closed | 2018-03-11T17:58:13Z | 2018-03-11T23:24:03Z | https://github.com/OWASP/Nettacker/issues/75 | [
"bug",
"help wanted",
"done",
"bug fixed"
] | Ali-Razmjoo | 3 |
Evil0ctal/Douyin_TikTok_Download_API | api | 498 | 能否新增获取用户主页详细粉丝数的接口 | **您的功能请求是否与问题相关? 如有,请描述。**
如:我在使用xxx时觉得如果可以改进xxx的话会更好。
**描述您想要的解决方案**
如:对您想要发生的事情的清晰简洁的描述。
**描述您考虑过的替代方案**
如:对您考虑过的任何替代解决方案或功能的清晰简洁的描述。
**附加上下文**
在此处添加有关功能请求的任何其他上下文或屏幕截图。
| closed | 2024-11-08T14:34:33Z | 2024-11-08T14:35:23Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/498 | [
"enhancement"
] | jianhudiyi | 0 |
modoboa/modoboa | django | 2,638 | Unable to send mail from mailclient | I remember i had this problem in the past, but i'm not sure what and how..
I setup a standard MODOBOA server on Debian 10, i can logon to the webmail and send email to everyone. But when i try to use Thunderbird i get an error: 550 5.1.1 recipient address rejected user unknown
does anyone have solved this? | closed | 2022-10-13T12:12:03Z | 2022-10-13T12:14:57Z | https://github.com/modoboa/modoboa/issues/2638 | [] | jhjacobs81 | 0 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,171 | Send button disabled when file upload questions are marked as mandatory | **Describe the bug**
Send button remains disabled at last step if mandatory fields are not filled within the questionnaire.
**To Reproduce**
Steps to reproduce the behavior:
1. set a questionnaire with at leat 1 mandatory field
2. fill the questionnaire but the mandatory one --> send turns to disabled
3. fill the mandatory field --> send button does not light on
**Expected behavior**
A clear and concise description of what you expected to happen.
**Desktop (please complete the following information):**
- OS: ubuntu
- Browser: firefox 97
- Globaleaks Version: 4.7.10
| closed | 2022-02-10T15:43:47Z | 2022-02-17T18:20:21Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3171 | [
"T: Bug",
"C: Client"
] | larrykind | 4 |
pytest-dev/pytest-cov | pytest | 19 | Module-level code imported from conftest files is reported as 'uncovered' | This is because `pytest-cov` doesn't actually start coverage measurement until `pytest_sessionstart`.
I think it could perhaps be fixed by starting coverage measurement instead in `pytest_load_initial_conftests`, but a few things about this make me nervous:
1) If people are setting up their coverage options via some method other than command-line args (e.g. modifying the config in another hook or plugin) it would no longer take effect soon enough.
2) I'm not familiar with the distributed-coverage aspect of `pytest-cov` and `cov-core`, so I'm not sure what implications there might be there.
| closed | 2014-10-16T18:58:43Z | 2014-10-17T17:19:54Z | https://github.com/pytest-dev/pytest-cov/issues/19 | [] | carljm | 6 |
Lightning-AI/pytorch-lightning | data-science | 20,539 | `top_k` parameter of `ModelCheckpoint` default value | ### Description & Motivation
I believe that the `top_k` would highly benefit from the improvement in its default value. Currently, it defaults to `1`. However, sometimes, like for example when I am trying to save models every n train steps, it doesn't make sense to save the `top_k=1` model. I would presumably like to have it save all the models.
### Pitch
So, I would suggest that the default value should be `None`. In case where `every_n_train_steps` or `every_n_epochs` or `train_time_interval` has some value with `monitor=None` and the `top_k` value is not given, i.e., still bears `None` it should change to -1. However in cases where the `monitor` parameter is set the `top_k` value should be changed to 1, if not specified.
### Alternatives
_No response_
### Additional context
_No response_
cc @lantiga @borda | open | 2025-01-09T12:31:00Z | 2025-01-14T22:18:50Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20539 | [
"feature"
] | arijit-hub | 6 |
strawberry-graphql/strawberry | asyncio | 3,176 | Prefetch_related field name gets continuously appended when type is used as a related field | <!-- Provide a general summary of the bug in the title above. -->
Subsequent queries fail with error `invalid parameter to prefetch_related()` when attempting to query for a field `bar` on a type `foo` where `foo` is a related field and `bar` defines a prefetch via `@strawberry_django.field(prefetch_related=Prefetch(...))`
If the `prefetch_related` is removed, this issue does not occur.
## Describe the Bug
Given
```
@strawberry.django.type(Baz)
class BazType:
foo: FooType
@strawberry.django.type(Foo)
class FooType:
@strawberry_django.field(
prefetch_related=Prefetch(
"bars",
queryset=Bar.objects.active().select_related("qux")
to_attr="_bar_prefetch",
)
)
def bar(self) -> Optional[
Annotated["BarType", strawberry.lazy("..qux")]
]:
return self.bar
```
A query for `baz` that attempts to retrieve `bar` on `foo`:
```
query GetBaz {
results {
id
foo {
bar {
id
}
}
```
Succeeds on the first execution, but on the second execution fails with
`"Cannot find 'foo' on Foo object, 'foo_foo_bars' is an invalid parameter to prefetch_related()"`, then on the third execution fails with `"Cannot find 'foo' on Foo object, 'foo_foo_foo_bars' is an invalid parameter to prefetch_related()"`, on the fourth with `"Cannot find 'foo' on Foo object, 'foo_foo_foo_foo_bars' is an invalid parameter to prefetch_related()"` and so on...
However, once the `prefetch_related` is removed from the `bar` field definition, this issue does not occur.
## System Information
- Operating system: MacOS/Linux
- Python Version: `3.9`
- Strawberry version (if applicable): `==0.209.1`
## Additional Context
We thought that this issue was fixed with the latest library update, but it has persisted.
<!-- Add any other relevant information about the problem here. --> | closed | 2023-10-27T16:06:30Z | 2025-03-20T15:56:27Z | https://github.com/strawberry-graphql/strawberry/issues/3176 | [
"bug"
] | hlschmidbauer | 3 |
deezer/spleeter | deep-learning | 263 | Tensorflow-gpu support | Is it possible for this to run on tensorflow-gpu on native python?
| closed | 2020-02-08T05:16:36Z | 2020-02-08T13:59:12Z | https://github.com/deezer/spleeter/issues/263 | [
"enhancement",
"feature"
] | glennford49 | 1 |
Yorko/mlcourse.ai | numpy | 370 | locally built docker image doesn't work | I've created docker image locally, using docker image build and then tried to run it like this:
`python run_docker_jupyter.py -t mlc_local`
got this:
```
Running command
docker run -it --rm -p 5022:22 -p 4545:4545 -v "/home/egor/private/mlcourse.ai":/notebooks -w /notebooks mlc_local jupyter
Command: jupyter
[I 12:44:17.454 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/traitlets/traitlets.py", line 528, in get
value = obj._trait_values[self.name]
KeyError: 'allow_remote_access'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 869, in _default_allow_remote
addr = ipaddress.ip_address(self.ip)
File "/usr/lib/python3.5/ipaddress.py", line 54, in ip_address
address)
ValueError: '' does not appear to be an IPv4 or IPv6 address
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/dist-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 1629, in initialize
self.init_webapp()
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 1379, in init_webapp
self.jinja_environment_options,
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 158, in __init__
default_url, settings_overrides, jinja_env_options)
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 251, in init_settings
allow_remote_access=jupyter_app.allow_remote_access,
File "/usr/local/lib/python3.5/dist-packages/traitlets/traitlets.py", line 556, in __get__
return self.get(obj, cls)
File "/usr/local/lib/python3.5/dist-packages/traitlets/traitlets.py", line 535, in get
value = self._validate(obj, dynamic_default())
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 872, in _default_allow_remote
for info in socket.getaddrinfo(self.ip, self.port, 0, socket.SOCK_STREAM):
File "/usr/lib/python3.5/socket.py", line 732, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -5] No address associated with hostname
```
| closed | 2018-10-10T12:50:06Z | 2018-10-11T13:59:36Z | https://github.com/Yorko/mlcourse.ai/issues/370 | [
"enhancement"
] | eignatenkov | 7 |
roboflow/supervision | pytorch | 1,338 | Add Gpu support to time in zone solutions ! | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
it not support executes the code in the gpu/cuda
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-07-10T04:52:50Z | 2024-07-10T08:47:07Z | https://github.com/roboflow/supervision/issues/1338 | [
"enhancement"
] | Rasantis | 1 |
jupyter-widgets-contrib/ipycanvas | jupyter | 243 | Proposal: javascript thread with callback | ipycanvas struggles with threading in Google Colab notebooks and the threading examples don't behave properly. To mitigate, and perhaps for other reasons, it would be useful to have a thread running on the javascript side of things at a particular framerate and posting callbacks to be caught on the python side. That way, the python side will not need run multiple threads. This does seem to only be a problem with Google Colab notebooks, however. (Why Google Colab? I'd like to be able to use Google's free GPUs to train neural networks that interact with pygame via ipycanvas.) Thanks for considering my weird request! | open | 2022-01-24T23:05:26Z | 2022-01-25T15:29:06Z | https://github.com/jupyter-widgets-contrib/ipycanvas/issues/243 | [] | markriedl | 2 |
cvat-ai/cvat | pytorch | 8,700 | When importing labelled images in "Datumaro 1.0" format they are downloaded in point cloud format | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
when I try to import the dataset to cvat I get
```
"{
"name": "MediaTypeError",
"message": "Unexpected media type of a dataset item '000000'. Expected '<class 'datumaro.components.media.Image'>', actual '<class 'datumaro.components.media.PointCloudFromFile'>' ",
"stack": "---------------------------------------------------------------------------
......
......
MediaTypeError: Unexpected media type of a dataset item '000000'. Expected '<class 'datumaro.components.media.Image'>', actual '<class 'datumaro.components.media.PointCloudFromFile'>' "
}
```
This is the same if I used the API or the Web GUI
### Expected Behavior
the dataset be in "datumaro.components.media.Image" format. instead of Pointcloud
### Possible Solution
Use the correct Datumaro version in CVAT or fix the bug.
### Context
I updated to CAT 2.22.0 Yesterday but the issue still persists. not sure if the bit need for this needs to be updated separately or because the annotations were done before the update was made and was saved in the old format. I re exported few times after the update but in the downloaded annotation file I still get
```
...
[],"attr":{"frame":23},"point_cloud":{"path":""}},{"id":"000024","annotations":[]...
...
```
This issue is referenced [#5924](https://github.com/cvat-ai/cvat/issues/5924) but the solution is noted as updating CVAT (which I did) not sure if I'm doing it incorrectly or some residual issue.
### Environment
_No response_ | closed | 2024-11-14T11:46:46Z | 2024-11-25T13:50:25Z | https://github.com/cvat-ai/cvat/issues/8700 | [
"bug",
"need info"
] | ganindu7 | 7 |
RobertCraigie/prisma-client-py | asyncio | 202 | Support configuring the CLI binary path | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Unlike the engine binary paths, the CLI binary path cannot be dynamically set by the user, we should support this to improve the user experience on unsupported architectures.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Use an environment variable `PRISMA_CLI_BINARY` to represent the CLI binary path.
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
Related issue: #195
We should also look into loading dotenv files before running CLI commands so that binary paths can be configured in a `.env` file instead of having to be set by the user's shell but that should be a separate issue.
| closed | 2022-01-01T15:30:47Z | 2022-02-01T12:08:36Z | https://github.com/RobertCraigie/prisma-client-py/issues/202 | [
"kind/improvement"
] | RobertCraigie | 0 |
horovod/horovod | machine-learning | 3,810 | TF/Keras 2.11 isn’t currently working with `KerasEstimator` in horovod 0.26.1 even using legacy optimizer | **Environment:**
1. Framework: keras
2. Framework version: 2.11
3. Horovod version: 0.26.1
4. MPI version: 4.1.4
5. CUDA version: 11.0.3-1
6. NCCL version: 2.10.3-1
7. Python version: 3.8
**Bug report:**
With keras=2.11 and horovod 0.26.1, `horovod.spark.keras.KerasEstimator` doesn't work even when using legacy optimizer. It has the following error message
```
Traceback (most recent call last):
[1,2]<stderr>: File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
[1,2]<stderr>: return _run_code(code, main_globals, None,
[1,2]<stderr>: File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
[1,2]<stderr>: exec(code, run_globals)
[1,2]<stderr>: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/horovod/spark/task/mpirun_exec_fn.py", line 52, in <module>
[1,2]<stderr>: main(codec.loads_base64(sys.argv[1]), codec.loads_base64(sys.argv[2]))
[1,2]<stderr>: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/horovod/spark/task/mpirun_exec_fn.py", line 45, in main
[1,2]<stderr>: task_exec(driver_addresses, settings, 'OMPI_COMM_WORLD_RANK', 'OMPI_COMM_WORLD_LOCAL_RANK')
[1,2]<stderr>: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/horovod/spark/task/__init__.py", line 61, in task_exec
[1,2]<stderr>: result = fn(*args, **kwargs)
[1,2]<stderr>: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/horovod/spark/keras/remote.py", line 136, in train
[1,2]<stderr>: model = deserialize_keras_model(
[1,2]<stderr>: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/horovod/spark/keras/remote.py", line 299, in deserialize_keras_model
[1,2]<stderr>: return load_model_fn(f)
[1,2]<stderr>: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/horovod/spark/keras/remote.py", line 137, in <lambda>
[1,2]<stderr>: serialized_model, lambda x: hvd.load_model(x))
[1,2]<stderr>: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/horovod/tensorflow/keras/__init__.py", line 274, in load_model
[1,2]<stderr>: return _impl.load_model(keras, wrap_optimizer, _OPTIMIZER_MODULES, filepath, custom_optimizers, custom_objects)
[1,2]<stderr>: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/horovod/_keras/__init__.py", line 272, in load_model
[1,2]<stderr>: return keras.models.load_model(filepath, custom_objects=horovod_objects)
[1,2]<stderr>: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
[1,2]<stderr>: raise e.with_traceback(filtered_tb) from None
[1,2]<stderr>: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/horovod/tensorflow/keras/__init__.py", line 273, in <lambda>
[1,2]<stderr>: return lambda **kwargs: DistributedOptimizer(cls(**kwargs), compression=compression)
[1,2]<stderr>:ValueError[1,2]<stderr>:: decay is deprecated in the new Keras optimizer, pleasecheck the docstring for valid arguments, or use the legacy optimizer, e.g., tf.keras.optimizers.legacy.Adadelta.
```
We found this [PR](https://sourcegraph.com/github.com/horovod/horovod/-/commit/02685064a2b6201a4250f2e25ec7418ea8a59d8f?visible=5) seems to solve the issue. And if we install horovod from master it works. Given this, could we make a patch release that include the linked PR?
| closed | 2023-01-07T01:23:34Z | 2023-02-01T18:13:14Z | https://github.com/horovod/horovod/issues/3810 | [
"bug"
] | wenfeiy-db | 1 |
svc-develop-team/so-vits-svc | pytorch | 46 | 训练过程报错_pickle.UnpicklingError | **系统平台**: CentOS 7.9
**出现问题的环节**: 训练
**Python版本**: 3.8.13
**PyTorch版本**: Version: 1.13.1+cu116
**所用分支**: 4.0-v2
**所用数据集**: 本人
授权证明截图:
**问题描述**: 训练开始后报错,无法继续,报错如下,网络搜索得到的结果是与torch版本有关,但在我跟换了torch版本之后似乎也没有改变。可能是由于torch.load()引起。
**日志**:
在slurm环境中使用srun运行 4卡
前面都是正常的...
(部分文件夹涉及隐私用星号代替了)
```
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
INFO:44k:Train Epoch: 1 [0%]
INFO:44k:Losses: [4.583632469177246, 2.16941237449646, 11.800090789794922, 124.89070129394531, 616.9237060546875], step: 0, lr: 0.0002
/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/autograd/__init__.py:197: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
grad.sizes() = [32, 1, 4], strides() = [4, 1, 1]
bucket_view.sizes() = [32, 1, 4], strides() = [4, 4, 1] (Triggered internally at ../torch/csrc/distributed/c10d/reducer.cpp:325.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
INFO:44k:Saving model and optimizer state at iteration 1 to ./logs/44k/G_0.pth
INFO:44k:Saving model and optimizer state at iteration 1 to ./logs/44k/D_0.pth
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/autograd/__init__.py:197: UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
grad.sizes() = [32, 1, 4], strides() = [4, 1, 1]
bucket_view.sizes() = [32, 1, 4], strides() = [4, 4, 1] (Triggered internally at ../torch/csrc/distributed/c10d/reducer.cpp:325.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "train.py", line 310, in <module>
main()
File "train.py", line 51, in main
mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
while not context.join():
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 2 terminated with the following error:
Traceback (most recent call last):
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/scratch/****/so-vits-svc/train.py", line 122, in run
train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
File "/scratch/****/so-vits-svc/train.py", line 141, in train_and_evaluate
for batch_idx, items in enumerate(train_loader):
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
data = self._next_data()
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
return self._process_data(data)
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
data.reraise()
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/_utils.py", line 543, in reraise
raise exception
_pickle.UnpicklingError: Caught UnpicklingError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/scratch/****/so-vits-svc/data_utils.py", line 88, in __getitem__
return self.get_audio(self.audiopaths[index][0])
File "/scratch/****/so-vits-svc/data_utils.py", line 51, in get_audio
spec = torch.load(spec_filename)
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/serialization.py", line 795, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/****/project/conda_envs/sov/lib/python3.8/site-packages/torch/serialization.py", line 1002, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.
```
之后就exit code 1了
同样的数据集在windows平台也尝试过,没有报错。 | closed | 2023-03-18T12:18:51Z | 2023-03-18T16:09:13Z | https://github.com/svc-develop-team/so-vits-svc/issues/46 | [] | Yuxiza | 11 |
JoshuaC215/agent-service-toolkit | streamlit | 165 | Make appear tools calls from the api | i want to have the tools calls on the reponse api but i get this: {"type":"ai","content":"J'ai saisi \"new for congo google.com\" dans le champ d'ID 4. Veuillez noter que je n'ai pas réellement effectué de recherche sur Google ; j'ai simulé l'action en utilisant l'API fournie. \n","tool_calls":[],"tool_call_id":null,"run_id":"6a9d1f07-3fc3-45cb-ac4d-cd6ccbde8d96","response_metadata":{"prompt_feedback":{"block_reason":0,"safety_ratings":[]},"finish_reason":"STOP","safety_ratings":[]},"custom_data":{}}
But in langsmith i got this:

Thanks ! | closed | 2025-02-05T23:43:22Z | 2025-03-03T07:21:09Z | https://github.com/JoshuaC215/agent-service-toolkit/issues/165 | [] | Louis454545 | 1 |
zama-ai/concrete-ml | scikit-learn | 94 | What is the difference between two bit? | 1.the bit in Brevitas model
<img width="509" alt="1688733334096" src="https://github.com/zama-ai/concrete-ml/assets/127387074/f0b3cefd-f9ec-4471-b8ed-95f82c7c9525">
2.the bit after fhe_compatibility
<img width="931" alt="1688733478729" src="https://github.com/zama-ai/concrete-ml/assets/127387074/d134e140-9fa0-4df9-bd77-d205cf447e50">
What is the difference between two bit?
| closed | 2023-07-07T12:39:01Z | 2023-07-10T08:22:10Z | https://github.com/zama-ai/concrete-ml/issues/94 | [] | maxwellgodv | 2 |
pyeve/eve | flask | 912 | BulkWrite error raised on DuplicateKey compound index | If this is the intended behaviour, sorry for bothering you.
I set up a compound index on one of the resources:
```
db.create_index(
[
("full_name.first_name", pymongo.ASCENDING),
("full_name.last_name", pymongo.DESCENDING)
], unique=True
)
```
While testing with app.test_client(), when posting an insert request with the same "full_name" as one already in the database, it returns, as expected a 409, with the expected Duplicate key message but also raises a BulkWriteError.
Is there something wrong with my set-up? Is this supposed to work this way?
Thank you for your time.
| closed | 2016-09-09T16:08:12Z | 2018-05-18T18:19:53Z | https://github.com/pyeve/eve/issues/912 | [
"stale"
] | ghost | 1 |
ultralytics/yolov5 | pytorch | 13,099 | runtime error:permission denied | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi
I want to run train with GPU like this.
01:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)
My command line:
$ python -m torch.distributed.run --nproc_per_node 2 --master_port 1 segment/train.py --data ./data/gnrDataset_polygon.yaml --weights ~/mdh_share/yolov5s-seg.pt --img 640 --device 0,1
error log:
WARNING:__main__:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
ERROR:torch.distributed.elastic.multiprocessing.errors.error_handler:{
"message": {
"message": "RuntimeError: Permission denied",
"extraInfo": {
"py_callstack": "Traceback (most recent call last):\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 345, in wrapper\n return f(*args, **kwargs)\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/run.py\", line 719, in main\n run(args)\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/run.py\", line 710, in run\n elastic_launch(\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/launcher/api.py\", line 131, in __call__\n return launch_agent(self._config, self._entrypoint, list(args))\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/launcher/api.py\", line 252, in launch_agent\n result = agent.run()\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py\", line 125, in wrapper\n result = f(*args, **kwargs)\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py\", line 709, in run\n result = self._invoke_run(role)\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py\", line 837, in _invoke_run\n self._initialize_workers(self._worker_group)\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py\", line 125, in wrapper\n result = f(*args, **kwargs)\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py\", line 678, in _initialize_workers\n self._rendezvous(worker_group)\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py\", line 125, in wrapper\n result = f(*args, **kwargs)\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py\", line 538, in _rendezvous\n store, group_rank, group_world_size = spec.rdzv_handler.next_rendezvous()\n File \"/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/static_tcp_rendezvous.py\", line 55, in next_rendezvous\n self._store = TCPStore( # type: ignore[call-arg]\nRuntimeError: Permission denied\n",
"timestamp": "1718680548"
}
}
}
Traceback (most recent call last):
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/run.py", line 723, in <module>
main()
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/run.py", line 719, in main
run(args)
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 252, in launch_agent
result = agent.run()
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
result = f(*args, **kwargs)
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 709, in run
result = self._invoke_run(role)
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 837, in _invoke_run
self._initialize_workers(self._worker_group)
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
result = f(*args, **kwargs)
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 678, in _initialize_workers
self._rendezvous(worker_group)
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
result = f(*args, **kwargs)
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/agent/server/api.py", line 538, in _rendezvous
store, group_rank, group_world_size = spec.rdzv_handler.next_rendezvous()
File "/home/wise/anaconda3/envs/yolov5/lib/python3.9/site-packages/torch/distributed/elastic/rendezvous/static_tcp_rendezvous.py", line 55, in next_rendezvous
self._store = TCPStore( # type: ignore[call-arg]
RuntimeError: Permission denied
Thanks a lot.
### Additional
_No response_ | closed | 2024-06-18T03:27:27Z | 2024-10-20T19:48:06Z | https://github.com/ultralytics/yolov5/issues/13099 | [
"question",
"Stale"
] | mdh31 | 3 |
dynaconf/dynaconf | flask | 368 | Document the usage with `python -m` | My project structure:
gfa/
├── config
│ └── settings.yaml
├── __init__.py
├── __main__.py
├── resources
│ ├── exon_body.bed
│ ├── exon_end.bed
│ ├── exon_start.bed
├── src
│ ├── annotators
│ │ ├── get_annots.py
│ │ └── __init__.py
│ ├── gfa.py
│ ├── preprocess
└── tests
├── fixtures
│ └── test_bkpt.tsv
├── __init__.py
└── test_get_annots.py
I have a settings.yaml in the config folder with the content below
```yaml
---
exon_region_dict
start : '../resources/exon_start.bed'
stop : '../resources/exon_stop.bed'
body : '../resources/exon_body.bed'
```
I call settings.EXON_REGION_DICT in get_annots.py but I get this error
line 113, in __getattr__
return getattr(self._wrapped, name)
AttributeError: 'Settings' object has no attribute 'EXON_REGION_DICT' | closed | 2020-07-02T16:33:13Z | 2022-07-02T20:12:33Z | https://github.com/dynaconf/dynaconf/issues/368 | [
"wontfix",
"hacktoberfest",
"Docs"
] | gopi1616 | 11 |
vitalik/django-ninja | pydantic | 468 | How to unify exception returns | I want to unify the return value
like the following
{
"code": 200,
"data": [],
"message": "xxxx"
}
Every time I need to use try except in the view function, it feels very annoying
| closed | 2022-06-10T08:00:19Z | 2022-07-02T15:27:20Z | https://github.com/vitalik/django-ninja/issues/468 | [] | zhiming429438709 | 1 |
PokemonGoF/PokemonGo-Bot | automation | 5,989 | [Feature request] auto-snipe from a discord channel | Is this possible? | closed | 2017-03-31T23:02:17Z | 2017-04-03T10:56:11Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5989 | [] | WayneUong | 1 |
recommenders-team/recommenders | data-science | 1,994 | [FEATURE] Make get cuda and get cudnn consistent | ### Description
<!--- Describe your expected feature in detail -->
See comment here: https://github.com/recommenders-team/recommenders/pull/1989/files#r1328626117
### Expected behavior with the suggested feature
<!--- For example: -->
<!--- *Adding algorithm xxx will help people understand more about xxx use case scenarios. -->
### Other Comments
| open | 2023-09-18T13:52:29Z | 2023-09-18T13:52:29Z | https://github.com/recommenders-team/recommenders/issues/1994 | [
"enhancement"
] | miguelgfierro | 0 |
onnx/onnx | pytorch | 6,326 | n-bit data type | # Ask a Question
### Question
<!-- Explain your question here. -->
Can you describe how ONNX will support n-bit data types, e.g. [4-bit data type](https://github.com/onnx/onnx/pull/5811). Will it be through making ops like [matmulnbits](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/contrib_ops/cpu/quantization/matmul_nbits.cc#L82) or through other ways like expressing the n-bit ops using dequantizelinear and quantizelinear? Thanks.
| closed | 2024-08-29T02:08:49Z | 2024-12-10T22:52:40Z | https://github.com/onnx/onnx/issues/6326 | [
"question"
] | jeyblu | 4 |
littlecodersh/ItChat | api | 265 | you can't get access to internet or wechat domain, so exit. | 题主你好,我是python初学者,参考readme写出的demo报错,报错截图如下

我的开发环境: win7
python版本: 3.6.0
执行的python脚本 2.py 源代码为:
import itchat
itchat.auto_login()
itchat.send('hello filehelper', toUserName='filehelper')
为什么会报这个错呢?是我的执行方式有错误吗,如果是,正确的执行方式是怎样的呢?
非常感谢 :)
| closed | 2017-03-07T09:57:37Z | 2018-12-15T08:11:37Z | https://github.com/littlecodersh/ItChat/issues/265 | [
"question"
] | rhodesiaxlo | 7 |
ccxt/ccxt | api | 25,399 | Bybit OHLCV volume discrepancy between fetch_ohlcv and watch_ohlcv_for_symbols | ### Operating System
windows
### Programming Languages
Python
### CCXT Version
4.4.64
### Description
**Description:**
I'm using CCXT to retrieve OHLCV data from Bybit, but I noticed an inconsistency in the volume values returned by fetch_ohlcv() and watch_ohlcv_for_symbols().
fetch_ohlcv() returns the volume as the traded amount in BTC.
watch_ohlcv_for_symbols() returns the volume as the traded amount in USDT.
This discrepancy makes it difficult to maintain consistency when processing OHLCV data.
**Here’s an example:**
Data from `fetch_ohlcv('BTC/USD:BTC', timeframe='15m'):`
```
[[1741044600000, 86400.5, 86598.0, 86191.5, 86493.5, 21.45504114],
[1741045500000, 86493.5, 86493.5, 85900.5, 86108.5, 41.00255361],
[1741046400000, 86108.5, 86124.0, 85756.5, 86080.0, 21.29571666]]
```
(Volume is in BTC.)
Data from `watch_ohlcv_for_symbols([['BTC/USD:BTC', '15m']]):`
`{'BTC/USD:BTC': {'15m': [[1741046400000, 86108.5, 86124.0, 85756.5, 86071.0, 1829475.0]]}}`
(Volume is in USDT.)
**Expected Behavior:**
The volume values should be consistent across both methods, either both in BTC or both in USDT.
**Questions:**
Is this an intentional difference in Bybit’s API, or is this a bug in CCXT?
If this is intended, is there a way to specify whether watch_ohlcv_for_symbols() should return volume in BTC instead of USDT?
What would be the recommended approach to standardize the volume format?
Would appreciate any guidance on this! Thanks.
### Code
```
```
| closed | 2025-03-04T00:20:16Z | 2025-03-04T09:32:47Z | https://github.com/ccxt/ccxt/issues/25399 | [
"bug"
] | big100 | 1 |
Farama-Foundation/Gymnasium | api | 1,084 | [Bug Report] The env checker does not detect when the environment is not deterministic after the first reset. | ### Describe the bug
From the [doc](https://gymnasium.farama.org/api/env/#gymnasium.Env.reset)
> Therefore, `reset()` should (in the typical use case) be called with a seed right after initialization and then never again.
This implies that the reset with a seed, followed by a reset without a seed, must be deterministic.
However, when an environment does not respect this rule, the environment checker does not fail.
### Code example
```python
import gymnasium as gym
from gymnasium.utils.env_checker import check_env
import random
class MyEnv(gym.Env):
def __init__(self):
super().__init__()
self.observation_space = gym.spaces.Discrete(5)
self.action_space = gym.spaces.Discrete(5)
def reset(self, seed=None, options=None):
super().reset(seed=seed)
if seed is not None:
obs = self.np_random.integers(5)
else:
# generate a random observations, but not based on the inner rng -> bad => non deterministic
obs = int(random.random() * 5)
return obs, {}
def step(self, action):
obs = self.np_random.integers(5)
return obs, 0, False, False, {}
register = gym.register(
id="MyEnv-v0",
entry_point=MyEnv,
max_episode_steps=1000,
)
check_env(gym.make("MyEnv-v0")) # Passes but:
env = gym.make("MyEnv-v0")
env.reset(seed=0)
obs1, _ = env.reset()
env.reset(seed=0)
obs2, _ = env.reset()
assert obs1 == obs2 # Fails
```
### System info
gymnasium 0.29.1 (also tested with 1.0.0a2)
MacOS
### Additional context
Context: realized panda-gym was in the case described above: https://github.com/qgallouedec/panda-gym/issues/94
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2024-06-13T16:16:01Z | 2024-07-02T09:50:57Z | https://github.com/Farama-Foundation/Gymnasium/issues/1084 | [
"bug"
] | qgallouedec | 2 |
jupyter/nbviewer | jupyter | 425 | nbviewer not synching with github | I received a 400 error on a notebook that had rendered fine in the past. Then looking at some older notebooks, the nbviewer notebook is not the same as what is on github. Even downloading a notebook from nbviewer is different from what is displayed on nbviewer. It seems that nbviewer is not synching with github...
Example notebook from nbviewer:
http://nbviewer.ipython.org/github/nejohnson2/modeling-waste/blob/master/notebooks/CT_Income.ipynb
| closed | 2015-03-13T21:03:49Z | 2015-09-01T00:41:45Z | https://github.com/jupyter/nbviewer/issues/425 | [
"status:Resolved"
] | nejohnson2 | 3 |
google/seq2seq | tensorflow | 179 | QXcbConnection: Could not connect to display | I just pulled the latest master branch and run unit test, but got this error. Can you please check? | closed | 2017-04-19T16:49:22Z | 2017-10-11T07:14:03Z | https://github.com/google/seq2seq/issues/179 | [] | wolfshow | 2 |
eriklindernoren/ML-From-Scratch | deep-learning | 11 | You have a lot of problem with your code | In many line, it has print inessario that assures that it does not execute. I tried it in many distro and nothing | closed | 2017-03-02T18:38:01Z | 2017-03-04T13:10:34Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/11 | [] | Anon23456 | 3 |
geopandas/geopandas | pandas | 2,941 | ENH: add attribute checking for an active geometry column | It may be useful to have an attribute on a `GeoDataFrame` that tells a user if an active geometry column is set or not. In a use case in OSMnx (https://github.com/gboeing/osmnx/pull/1012), they pass dict-based data to the constructor but they do not know if the dict has a `geometry` or not. So they're then checking for a presence of a `"geometry"` column, which works but feels like we could do more to support such cases. If you are dealing with a more generic input, you may have an active geometry column called differently and checking for that is a bit unwelcoming at the moment.
I would propose to add an attribute like `GeoDataFrame.has_geometry` or `.has_active_geometry` that would check the contents of `_geometry_column_name` and return a boolean depending if it is `None` or a string. You can then safely retrieve `GeoDataFrame.geometry` or set a new column, depending on what you need. | closed | 2023-06-27T08:33:13Z | 2024-01-05T16:50:45Z | https://github.com/geopandas/geopandas/issues/2941 | [] | martinfleis | 3 |
Gerapy/Gerapy | django | 142 | 部署时候,后台出现卡死scrapyd进程! | 按照文档进行项目部署的时候: 打包没问题,,点击部署,后台出现了
这样的一个进程,之后访问scrapyd都访问不了,这个进程被kill掉以后,scrapyd一切正常,,日志里面有一个错误:[04/Mar/2020 00:30:33] "POST /1/api/daemonstatus/ HTTP/1.1" 404 9572,,我应该怎么做去解决这个问题呢? | open | 2020-03-03T16:34:21Z | 2020-03-03T16:34:21Z | https://github.com/Gerapy/Gerapy/issues/142 | [] | lvouran | 0 |
plotly/plotly.py | plotly | 4,353 | Error bars in px.scatter should inherit color and opacity from markers by default | Error bars do correctly show with the same color (not opacity) as markers when using categorical data to set the color parameter:
```
import plotly.express as px
df = px.data.iris()
df["e_plus"] = df["sepal_width"]/100
df["e_minus"] = df["sepal_width"]/40
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species",
error_y="e_plus", error_y_minus="e_minus")
fig.show()
```
<img width="961" alt="image" src="https://github.com/plotly/plotly.py/assets/12720117/cffbcfbc-1d27-4005-b8a3-8e66a5ab05a8">
However, setting the opacity parameter only changes the opacity of the markers:
```
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species",
error_y="e_plus", error_y_minus="e_minus", opacity=0.6)
fig.show()
```
<img width="953" alt="image" src="https://github.com/plotly/plotly.py/assets/12720117/389685ce-1e8c-4926-af9f-e29700b1caf8">
And using numerical data to set the color parameter, the error bars will not have the same color as the markers, but instead renders in black:
```
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="petal_width",
error_y="e_plus", error_y_minus="e_minus")
fig.show()
```
<img width="963" alt="image" src="https://github.com/plotly/plotly.py/assets/12720117/72b0b8eb-d4c7-4c20-8ec7-e213f4526151">
Fixing this would be super helpful, since the only current solution seems to be to manually add or update the trace _for every single data point_ to set rgba-values for the error bars. My scatter plot had a few thousand data points, making that a very time consuming for-loop. | open | 2023-09-12T19:11:24Z | 2024-08-12T21:06:19Z | https://github.com/plotly/plotly.py/issues/4353 | [
"bug",
"P3"
] | rickynilsson | 1 |
OFA-Sys/Chinese-CLIP | nlp | 161 | M1 Macbook有办法使用吗? | 您好,因为M1本身无法安装CUDA,能只用CPU跑吗?
PS:经过测试M1跑 OpenAI [ViT-B/32](https://github.com/openai/CLIP) model 没啥问题。
| closed | 2023-07-14T02:04:04Z | 2023-07-14T02:50:43Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/161 | [] | andforce | 1 |
ultralytics/ultralytics | python | 19,234 | "Transferred 119/649 items from pretrained weights" in Docker | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hey there,
I want to retrain a previously trained Yolo11 object detection model on new images (with the same classes).
I set my model with `model = YOLO("path/best.pt")` and use `model.train(data="path/data.yaml", epochs=1, imgsz=512, pretrained = true)` (of course I would set the epochs higher in the final version). In my local Pythonprogram this works just fine and even after one epoch I have a mAP as high as before.
But when I try to include this in the real program, where we are using a docker container (not ultralytics I think) it states "**Transferred 119/649 items from pretrained weights**" and it has a mAP of 0 for all except 3 classes.
Why isn't it loading all pretrained things in docker when I use the same data and model als locally?
### Additional
_No response_ | open | 2025-02-13T12:29:35Z | 2025-02-13T19:40:43Z | https://github.com/ultralytics/ultralytics/issues/19234 | [
"question",
"dependencies",
"detect"
] | sunny42838 | 9 |
plotly/plotly.py | plotly | 4,386 | UpdateMenu breaks px.Bar figure when Color Argument Specified | Hello All,
I'm trying to plot a bar chart and separate the values of each bar by the value in a separate column. I also want to add a dropdown / button array using updatemenu which allows me to update how the data is filtered.
This code works acceptably:
```
import numpy
import plotly.express as px
df = px.data.gapminder()
dfstart = df[df['year'] == 1967]
bar = px.bar(
x = dfstart["continent"],
text= dfstart["country"],
y= dfstart["pop"]
)
df67 = df[df['year'] == 1967]
df72 = df[df['year'] == 1972]
yearbuttons = [
{'label': "67", 'method': 'update',
'args':
[{'x': [df67['continent']],
'y': [df67['pop']],
'text': [df67['country']]
},{'title': 'Top 5'}]},
{'label': "72", 'method': 'update',
'args':
[{'x': [df72['continent']],
'y': [df72['pop']],
'text': [df72['country']]
},{'title': '72'}]}
]
bar.update_layout(
updatemenus=[
dict(
type = "dropdown",
direction = "down",
buttons=yearbuttons,
x=1,
xanchor="left",
y=1.3,
yanchor="top",
active = 0
)
]
)
bar.show()
```
However, if I would like to separate the value of each bar by color, and replace 'text' with 'color' in the above example, then the data breaks as soon as I click a button and the data blows up. It seems like the color argument is still pointing at the original graph and plotting all colour values in the original graph against all values in the updated graph.
I posted on the forum in case there was an answer there and didn't get far: [See Forum Post](https://community.plotly.com/t/use-updatemenu-buttons-to-filter-data-for-a-given-value-in-a-column-of-the-underlying-dataframe/79247/8) | closed | 2023-10-17T14:50:22Z | 2024-07-11T17:19:32Z | https://github.com/plotly/plotly.py/issues/4386 | [] | ehardgrave | 1 |
Yorko/mlcourse.ai | scikit-learn | 371 | Validation form is out of date for the demo assignment 3 | Questions 3.6 and 3.7 in the [validation form ](https://docs.google.com/forms/d/1wfWYYoqXTkZNOPy1wpewACXaj2MZjBdLOL58htGWYBA/edit) for demo assignment 3 are incorrect. The questions are valid for the previous version of the assignment that is accessible by commit 152a534428d59648ebce250fd876dea45ad00429.
| closed | 2018-10-10T13:58:54Z | 2018-10-16T11:32:43Z | https://github.com/Yorko/mlcourse.ai/issues/371 | [
"enhancement"
] | fralik | 3 |
gradio-app/gradio | machine-learning | 10,007 | Add support for FBX files in `Model3D` | - [x] I have searched to see if a similar issue already exists.
**Describe the solution you'd like**
Please add support for displaying FBX files in `Model3D`, since FBX is one of the most commonly used formats of 3D assets.
**Additional context**
> The FBX format is used to provide interoperability between digital content creation applications and game engines such as Blender, Maya, Autodesk, Unity, Unreal and many others. It supports many features such as 3D models, scene hierarchy, materials, lighting, animations, bones and more.
It seems that three.js supports FBX format: [FBX Loader - Three.js Tutorials](https://sbcode.net/threejs/loaders-fbx/).
| open | 2024-11-21T06:58:11Z | 2024-11-22T00:31:58Z | https://github.com/gradio-app/gradio/issues/10007 | [
"enhancement"
] | jasongzy | 0 |
JaidedAI/EasyOCR | pytorch | 986 | Fine-Tune on Korean handwritten dataset | Dear @JaidedTeam
I would like to fine-tune the EASY OCR in the handwritten Korean language, I am assuming that the pre-trained model is already trained in Korean and English vocabulary and I will enhance the Korean handwritten accuracy on EASY OCR. How do I achieve it? I know how to train custom models but due to the large size of English datasets, I don't want to train in Korean and English from scratch. I have already 10 M KOREAN handwritten images.
Regards,
Khawar | open | 2023-04-10T06:31:13Z | 2024-01-10T00:52:55Z | https://github.com/JaidedAI/EasyOCR/issues/986 | [] | khawar-islam | 6 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,674 | RunTimeError : Size of Tensros must match except in dimension 1 | Hi, I'm having trouble with training pix2pix model with my data.
I have a run time error that I've never seen it before when I use other datasets.
is there any solution to figure this out?
python train.py --dataroot .\datasets\soolsool_5760_2880_60_30 --name soolsool_v1 --model pix2pix --direction BtoA --load_size 1440 --crop_size 1440 --preprocess none --no_dropout --input_nc 4 --output_nc 4
error message : Sizes of tensors must match except in dimension 1. Expected size 45 but got size 44 for tensor number 1 in the list.
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\pix2pix_model.py", line 117, in optimize_parameters
self.forward() # compute fake images: G(A)
^^^^^^^^^^^^^^
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\pix2pix_model.py", line 88, in forward
self.fake_B = self.netG(self.real_A) # G(A)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\parallel\data_parallel.py", line 184, in forward
return self.module(*inputs[0], **module_kwargs[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\networks.py", line 466, in forward
return self.model(input)
^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\networks.py", line 534, in forward
return self.model(x)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\container.py", line 219, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\networks.py", line 536, in forward
return torch.cat([x, self.model(x)], 1)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\container.py", line 219, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\networks.py", line 536, in forward
return torch.cat([x, self.model(x)], 1)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\container.py", line 219, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\networks.py", line 536, in forward
return torch.cat([x, self.model(x)], 1)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\container.py", line 219, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\networks.py", line 536, in forward
return torch.cat([x, self.model(x)], 1)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\container.py", line 219, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\networks.py", line 536, in forward
return torch.cat([x, self.model(x)], 1)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\container.py", line 219, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\networks.py", line 536, in forward
return torch.cat([x, self.model(x)], 1)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\container.py", line 219, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\anaconda3\envs\pix2pix\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\04_pix2pix\pytorch-CycleGAN-and-pix2pix-master\models\networks.py", line 536, in forward
return torch.cat([x, self.model(x)], 1) | open | 2024-09-12T06:52:45Z | 2025-03-24T04:33:30Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1674 | [] | Jiwno | 3 |
christabor/flask_jsondash | flask | 84 | Consider tiny db for json file adapter feature | https://github.com/msiemens/tinydb/blob/master/README.rst | open | 2017-02-11T01:15:49Z | 2017-02-11T01:15:49Z | https://github.com/christabor/flask_jsondash/issues/84 | [] | christabor | 0 |
lexiforest/curl_cffi | web-scraping | 330 | Use more JA3 fingerprints from another repository | **Is your feature request related to a problem? Please describe.**
I'm currently searching for a Python library capable of using the same JA3 fingerprints of the most important browsers to prevent Cloudflare from detecting my bots. I found your repository and another based on this one (https://github.com/rawandahmad698/noble-tls). The other allows me to impersonate very recent versions like chrome_124 or even using fingerprints of okhttp, which works for preventing the detection (old versions are blocked), and periodically that repository gets updated to include more fingerprints.
**Describe the solution you'd like**
However, I prefer using your repository because it has a nice syntax very similar to the requests library. Also, your repository works very good with files, but the other can't manage them. Since that repository is based on yours, it's possible for you to use it to upgrade your repository adding more fingerprints? Sincerely I didn't investigate how yours projects work exactly.
By the way, thank you for developing this project. You helped me a lot :)
Edit: Before while testing on a website protected with Cloudflare I noticed that if I enable using a random tls extension order (option available in the noble-tls project and in tls-client) I would be able to bypass the security, but if disabled, not. So maybe the solution for the problem I detected would be adding that option too, combined with allowing more JA3 fingerprints. | closed | 2024-06-25T14:52:51Z | 2024-06-29T02:37:41Z | https://github.com/lexiforest/curl_cffi/issues/330 | [
"enhancement"
] | ghost | 2 |
streamlit/streamlit | python | 10,029 | st.radio label alignment and label_visibility issues in the latest versions | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
The st.radio widget has two unexpected behaviors in the latest versions of Streamlit:
The label is centered instead of left-aligned, which is inconsistent with previous versions.
The label_visibility option 'collapsed' does not work as expected, and the label remains visible.
### Reproducible Code Example
```Python
import streamlit as st
# Store the initial value of widgets in session state
if "visibility" not in st.session_state:
st.session_state.visibility = "visible"
st.session_state.disabled = False
st.session_state.horizontal = False
col1, col2 = st.columns(2)
with col1:
st.checkbox("Disable radio widget", key="disabled")
st.checkbox("Orient radio options horizontally", key="horizontal")
with col2:
st.radio(
"Set label visibility 👇",
["visible", "hidden", "collapsed"],
key="visibility",
label_visibility=st.session_state.visibility,
disabled=st.session_state.disabled,
horizontal=st.session_state.horizontal,
)
```
### Steps To Reproduce
please play with the options of the above snippet, and inspect label
### Expected Behavior
The label of the st.radio widget should be left-aligned, consistent with previous versions.
When label_visibility='collapsed' is set, the label should not be visible.
### Current Behavior
The label is centered.
Setting label_visibility='collapsed' does not hide the label; it remains visible.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.1
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | closed | 2024-12-16T12:28:53Z | 2024-12-16T16:06:20Z | https://github.com/streamlit/streamlit/issues/10029 | [
"type:bug",
"feature:st.radio",
"status:awaiting-team-response"
] | Panosero | 4 |
erdewit/ib_insync | asyncio | 530 | Time notation change | IB is going to change the time notation.
Please read this. https://groups.io/g/twsapi/topic/adding_timezone_to_request/93271054
I have not confirmed that it works, but I think formatIBDatetime needs to be changed this way.
```Python
def formatIBDatetime(dt: Union[date, datetime, str, None]) -> str:
"""Format date or datetime to string that IB uses."""
if not dt:
s = ''
elif isinstance(dt, datetime):
# convert to UTC timezone
dt = dt.astimezone(tz=timezone.utc)
s = dt.strftime('%Y%m%d-%H:%M:%S UTC')
elif isinstance(dt, date):
dt = datetime(dt.year, dt.month, dt.day, 23, 59, 59).astimezone(tz=timezone.utc)
s = dt.strftime('%Y%m%d-%H:%M:%S UTC')
else:
s = dt
return s
```
| closed | 2022-11-23T06:00:51Z | 2022-11-29T23:44:34Z | https://github.com/erdewit/ib_insync/issues/530 | [] | stock888777 | 5 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,370 | About G_A,G_B,D_A,D_B,idt_A,idt_B,cycleA,cycleB at wandb. | Hello!
I'm trying style transfer like horse2zebra by CycleGAN.
I used wandb and I got 8 graphs(G_A,G_B,D_A,D_B,idt_A,idt_B,cycleA,cycleB).
I saw FAQ and your paper but I can't understand perfectly about graphs.
These are the correspondence between graph and paper.
Is my recognition correct?
[graphs name : the name in your paper at Sec2 Fig3]
G_A : loss of G in (a)
G_B : loss of F in (a)
D_A : loss of Dx in (a)
D_B : loss of Dy in (a)
idt_A,idt_B : the identity loss
cycleA : the consistency loss in (b)
cycleB : the consistency loss in (c)
<img width="1050" alt="スクリーンショット 2022-01-23 11 30 44" src="https://user-images.githubusercontent.com/55468858/150662394-855fdcba-4d8a-4e22-9540-199a60fe066b.png">
| closed | 2022-01-23T02:39:16Z | 2022-02-15T20:50:33Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1370 | [] | lejelly | 1 |
Significant-Gravitas/AutoGPT | python | 9,190 | Documentation - Add note about Docker on Windows | You might want to add a note to both, the ReadMe.md and the documentation page that when installing Docker on Windows, you should not choose to run it under Hyper-V but WSL2 only, because supabase (at least the version you are using) appears to have an issue with that (you get "unhealthy" for subabase-db).
Tried two times but after switching Docker to WSL2 it worked fine. | open | 2025-01-05T00:24:54Z | 2025-03-07T08:43:14Z | https://github.com/Significant-Gravitas/AutoGPT/issues/9190 | [
"documentation"
] | softworkz | 1 |
flaskbb/flaskbb | flask | 431 | Add items from flaskbb.core to documentation | Need to add doc strings and add them to the documentation as well. Filing this as an issue so I remember to do it because y'all will see it and shame me for not doing it. | closed | 2018-03-27T23:59:57Z | 2018-04-05T09:05:58Z | https://github.com/flaskbb/flaskbb/issues/431 | [
"important"
] | justanr | 1 |
hbldh/bleak | asyncio | 1,071 | Unable to connect to device using Windows | * bleak version: unsure, I think latest version
* Python version: 3.10
* Operating System: Windows
* BlueZ version (`bluetoothctl -v`) in case of Linux:
### Description
I'm just trying to get my Bleak script, which I developed and tested on my MacOS (where it works perfectly) to also work on my Windows 11 desktop. But I can't get it to run, and even trying to connect to my device using the bleak examples gives me all kinds of errors without working successfully.
### What I Did
Ran this simple example
```
import asyncio
from bleak import discover
devices = []
async def scan():
dev = await discover()
for i in range(0,len(dev)):
print("["+str(i)+"]"+str(dev[i]))
devices.append(dev[i])
from bleak import BleakClient
async def connect(address, loop):
async with BleakClient(address, loop=loop) as client:
services = await client.get_services()
for ser in services:
print(ser.uuid)
loop = asyncio.get_event_loop()
loop.run_until_complete(scan())
index = input('please select device from 0 to '+str(len(devices))+":")
index = int(index)
loop.run_until_complete(connect(devices[index].address, loop))
```
My device is listed as number 18 on the output, so I just selected it as shown below and I get these errors. Please note, this works on my MacOS, and I can also connect to this peripheral from any phone. Just not my Windows.
```
Warning (from warnings module):
File "C:/Users/14102/AppData/Local/Programs/Python/Python310/Scripts/discoverdev.py", line 19
loop = asyncio.get_event_loop()
DeprecationWarning: There is no current event loop
Warning (from warnings module):
File "C:/Users/14102/AppData/Local/Programs/Python/Python310/Scripts/discoverdev.py", line 6
dev = await discover()
FutureWarning: The discover function will removed in a future version, use BleakScanner.discover instead.
[0]45:AE:9E:90:DD:20: Apple, Inc. (b'\x10\x05\x01\x98\xa4\xfa\xda')
[1]56:D3:04:8F:AE:23: Apple, Inc. (b'\x10\x06\x08\x19\xacB\x84h')
[2]2A:55:D8:0A:A7:91: Microsoft (b'\x01\t \x02\x9c\xa9\x18\x01o\xd9\x9b\x98y;|]I\x9c=\xaaNC\x87\xff\x9c\xec\x84')
[3]5A:CD:C8:AE:F0:51: Apple, Inc. (b'\x10\x05\t\x98\xf1hV')
[4]7D:92:61:D6:BF:11: Apple, Inc. (b'\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
[5]25:B2:34:CA:43:0E: Microsoft (b'\x01\t \x02\x82\xc28d\xa1~\xe3P\xed\xd1\xb5}\xaa\xa5\x08\t@\xf8\xc1\xd0\xb6C\xbd')
[6]7F:73:F0:96:66:72: Apple, Inc. (b'\x10\x07;\x1fiH\xaf\xac\x80')
[7]B2:5C:DA:64:FE:26: Unknown
[8]17:03:D5:B2:BB:82: Apple, Inc. (b'\t\x06\x03\x0e\n\xcb\xd4\xb3')
[9]15:10:77:58:4B:6E: Microsoft (b'\x01\t \x02\xd8 \x83L\x13F\x9e\xc9\xce\x13B\x9d\xae3\x97V\xafo\xdcp:\x8d\x97')
[10]1A:F3:B0:94:E8:1E: Microsoft (b'\x01\t \x02\x01\xc6\x10l\xa5\xe4+\xc3\xd48\x9a\xccL\xf9=\x95:\x8dh\xc5#\x02\x85')
[11]25:D7:51:27:26:22: Unknown
[12]7B:62:14:42:8B:64: Apple, Inc. (b'\x0c\x0e\x00\xd8\x8d\x94Nh<\x1e\x93\xbe\xc93\xb8\x08\x10\x06$\x19\xef\xd9!h')
[13]36:1F:39:45:F8:9D: This value has special meaning depending on the context in which it used. Link Manager Protocol (LMP): This value may be used in the internal and interoperability tests before a Company ID has been assigned. This value shall not be used in shipping end products. Device ID Profile: This value is reserved as the default vendor ID when no Device ID service record is present in a remote device. (b'\x02\x15&\x86\xf3\x9c\xba\xdaFX\x85J\xa6.~^\x8b\x8d\x00\x01\x00\x00\xc9')
[14]FC:E8:EC:59:9A:35: Apple, Inc. (b'\x12\x024\x03')
[15]59:47:20:34:A0:CE: Apple, Inc. (b'\x10\x07;\x1fD\xfb(\x068')
[16]EC:6D:03:D9:0F:15: Apple, Inc. (b'\x12\x02\x00\x01')
[17]51:74:A7:DA:D5:D8: Apple, Inc. (b'\x10\x05 \x1c\xed\xea\xe0')
[18]DE:D3:4D:CA:60:FD: Unknown
[19]55:02:6F:89:96:1D: Apple, Inc. (b'\x10\x075\x1fmJ\x87\xf1\x18')
[20]7D:1A:97:1D:4A:33: Apple, Inc. (b'\x0c\x0e\x00\x02D\x8bQ\x0bi\xd5p\xb95\xd3\n\x17\x10\x05\x01\x18\xc7e.')
[21]26:12:BC:30:02:79: Microsoft (b'\x01\t \x02\xceJ=e\xfe<\xd5\x8cr\xaf\xcce\x8c\xaa\xcf\x85r\xe6\x02\xea!ao')
[22]2E:AC:9B:D4:73:89: Unknown
[23]75:16:F5:69:6D:32: Apple, Inc. (b'\x10\x07;\x1f\xa7f\x8a\x05\x18')
[24]4E:AA:0D:7A:4B:37: Apple, Inc. (b'\x07\x19\x01\x0e \xf8\x8f\x01\x00\x04\x8ahR0+5\x84d\xf2\x92z\x07jxt;')
[25]01:3A:DD:60:A0:5F: Microsoft (b'\x01\t \x02HI\xef\xa3\x18\x16\xd4\xe8J\xaa\xd7C\x08}\xaa\xaf\xb8^\xc8^\x11\x84\xee')
[26]3D:4F:3E:AE:E6:26: Microsoft (b'\x01\t \x02\x11\xc1\xc6\xc5\x83\xa9\xfb~\xe4BV\x90e\xe8\xee\xd0\x1dn\x842\x8c:\x88')
[27]21:59:DA:18:6C:30: Apple, Inc. (b'\t\x06\x03\x86\n\x10&\xa1')
[28]7D:D3:92:D1:96:5F: Microsoft (b'\x01\t \x02\x8cj\x88{lW\r\xfdu\xf0\x12m\x1bD\x81\xa0*\xfd\x07\xf2\xd8\t%')
[29]57:C3:A6:7C:3F:8C: Apple, Inc. (b'\x07\x19\x01\x0f +\x99\x8f\x01\x00\x05\xd6\xfb\xce\xca\x0c\xe5\x02\xbf\xfdi\x7f\xad\xed\xccxR')
[30]17:8D:B3:B3:8D:86: Unknown
[31]4A:5B:6E:94:56:A0: Apple, Inc. (b'\x10\x063\x1eZ\xed\x03p')
please select device from 0 to 32:18
Traceback (most recent call last):
File "C:/Users/14102/AppData/Local/Programs/Python/Python310/Scripts/discoverdev.py", line 14, in connect
async with BleakClient(address, loop=loop) as client:
File "C:\Users\14102\AppData\Local\Programs\Python\Python310\lib\site-packages\bleak\__init__.py", line 354, in __aenter__
await self.connect()
File "C:\Users\14102\AppData\Local\Programs\Python\Python310\lib\site-packages\bleak\__init__.py", line 392, in connect
return await self._backend.connect(**kwargs)
File "C:\Users\14102\AppData\Local\Programs\Python\Python310\lib\site-packages\bleak\backends\winrt\client.py", line 339, in connect
await self.get_services()
File "C:\Users\14102\AppData\Local\Programs\Python\Python310\lib\site-packages\bleak\backends\winrt\client.py", line 531, in get_services
await self._requester.get_gatt_services_async(*args),
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/14102/AppData/Local/Programs/Python/Python310/Scripts/discoverdev.py", line 23, in <module>
loop.run_until_complete(connect(devices[index].address, loop))
File "C:\Users\14102\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
return future.result()
asyncio.exceptions.CancelledError
``` | closed | 2022-10-06T18:14:20Z | 2022-10-06T18:15:52Z | https://github.com/hbldh/bleak/issues/1071 | [] | celiafb | 0 |
sinaptik-ai/pandas-ai | data-science | 1,363 | Feature generation | ### 🚀 The feature
Hello, it says at https://docs.pandas-ai.com/intro:
> Feature generation: Enhance data quality through feature generation.
Where can I find out more about this feature?
Thank you 🙏
### Motivation, pitch
I'm interested in understanding the extent to which Pandas AI can work with unstructured data.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-09-10T17:10:50Z | 2024-12-17T16:08:31Z | https://github.com/sinaptik-ai/pandas-ai/issues/1363 | [
"documentation"
] | abrichr | 1 |
sinaptik-ai/pandas-ai | data-visualization | 1,351 | Issue in importing pandsai | ### System Info
Python 3.11.7
### 🐛 Describe the bug
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[45], [line 1](vscode-notebook-cell:?execution_count=45&line=1)
----> [1](vscode-notebook-cell:?execution_count=45&line=1) from pandasai import PandasAI
ImportError: cannot import name 'PandasAI' from 'pandasai' (c:\Users\USERID\AppData\Local\anaconda3\Lib\site-packages\pandasai\__init__.py) | closed | 2024-09-02T00:34:52Z | 2025-03-05T09:50:35Z | https://github.com/sinaptik-ai/pandas-ai/issues/1351 | [
"bug"
] | anilmadishetty2498 | 7 |
redis/redis-om-python | pydantic | 263 | Best Practices for Multi-Language Schema Source of Truth with Redis OM? | Hi,
Love the idea of a First Class ORM around Redis. You guys are doing exciting things!
#### Goal
Find a best-practice mechanism to support Redis OM schema with a single source of truth between languages(Say Python and Node)
#### Question
In Python if we define an Embedded JSON Model object, how would we consume/interact in Node without a secondary schema definition to maintain in JS/TS?
#### Example
Consider an Embedded JSON Model structure in python like:
```python
from redis_om import EmbeddedJsonModel, Field, JsonModel, Migrator
class Address(EmbeddedJsonModel):
address1: str
city: str = Field(index=True)
state: str = Field(index=True)
postal_code: str = Field(index=True)
class Ticker(JsonModel):
ticker: str = Field(index=True)
name: str = Field(index=True)
market: str = Field(index=True)
locale: str = Field(index=True)
primary_exchange: str = Field(index=True)
type: str = Field(index=True)
active: bool = Field(index=True)
currency_name: str = Field(index=True)
cik: str = Field(index=True)
composite_figi: str
share_class_figi: str
market_cap: float = Field(index=True)
phone_number: str
address: Address
```
It would be ideal to perform CRUD operations using [Redis Om Node](https://github.com/redis/redis-om-node) in addition to python without maintaining a separate schema definition.
The concern is without a single schema definition shared across languages there will be nasty days around migrations; especially if/when these are in a distributed consumer architecture with multiple languages accessing the objects.
#### Solutions?
It feels like mapping a TS file to other languages could answer the bell? Maybe there is a TS -> Python(.NET, Rust, etc) schema mapping utility for Redis OM already or in the roadmap?
Greatly appreciate any pointers if there is a best practice, pattern or tutorial that covers how to do this well.
Thank you in Advance,
Ryan | closed | 2022-05-25T15:34:59Z | 2022-06-07T19:49:51Z | https://github.com/redis/redis-om-python/issues/263 | [] | ryanrussell | 2 |
huggingface/datasets | tensorflow | 6,676 | Can't Read List of JSON Files Properly | ### Describe the bug
Trying to read a bunch of JSON files into Dataset class but default approach doesn't work. I don't get why it works when I read it one by one but not when I pass as a list :man_shrugging:
The code fails with
```
ArrowInvalid: JSON parse error: Invalid value. in row 0
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
This doesn't work
```
from datasets import Dataset
# dir contains 100 json files.
Dataset.from_json("/PUT SOME PATH HERE/*")
```
This works:
```
from datasets import concatenate_datasets
ls_ds = []
for file in list_of_json_files:
ls_ds.append(Dataset.from_json(file))
ds = concatenate_datasets(ls_ds)
```
### Expected behavior
I expect this to read json files properly as error is not clear
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| open | 2024-02-17T22:58:15Z | 2024-03-02T20:47:22Z | https://github.com/huggingface/datasets/issues/6676 | [] | lordsoffallen | 3 |
lux-org/lux | jupyter | 66 | Better support for Pandas.Series | We should create a LuxSeries object to take on the sliced version of the LuxDataframe, following [guidelines](https://pandas.pydata.org/pandas-docs/stable/development/extending.html#override-constructor-properties) for subclassing DataFrames. We need to pass the `_metadata` from LuxDataFrame to LuxSeries so that it is preserved across operations (and therefore doesn't need to be recomputed), related to #65. Currently, this code is commented since LuxSeries is causing issues compared to the original Pd.Series.
```python
class LuxDataFrame(pd.DataFrame):
....
@property
def _constructor(self):
return LuxDataFrame
@property
def _constructor_sliced(self):
def f(*args, **kwargs):
# adapted from https://github.com/pandas-dev/pandas/issues/13208#issuecomment-326556232
return LuxSeries(*args, **kwargs).__finalize__(self, method='inherit')
return f
```
```python
class LuxSeries(pd.Series):
# _metadata = ['name','_intent','data_type_lookup','data_type',
# 'data_model_lookup','data_model','unique_values','cardinality',
# 'min_max','plot_config', '_current_vis','_widget', '_recommendation']
def __init__(self,*args, **kw):
super(LuxSeries, self).__init__(*args, **kw)
@property
def _constructor(self):
return LuxSeries
@property
def _constructor_expanddim(self):
from lux.core.frame import LuxDataFrame
# def f(*args, **kwargs):
# # adapted from https://github.com/pandas-dev/pandas/issues/13208#issuecomment-326556232
# return LuxDataFrame(*args, **kwargs).__finalize__(self, method='inherit')
# return f
return LuxDataFrame
```
In particular the original `name` property of the Lux Series is lost when we implement LuxSeries, see `test_pandas.py:test_df_to_series` for more details.
Example:
```python
df = pd.read_csv("lux/data/car.csv")
df._repr_html_()
series = df["Weight"]
series.name # returns None (BUG!)
series.cardinality # preserved
```
We should also add a __repr__ to print out the basic histogram for Series objects. | closed | 2020-08-18T08:06:41Z | 2020-11-23T02:26:29Z | https://github.com/lux-org/lux/issues/66 | [
"bug",
"priority",
"hard"
] | dorisjlee | 1 |
pyeve/eve | flask | 870 | substring search with SQLAlchemy | I cannot find in docs : is it possible to make request that filter by substiring?
| closed | 2016-06-02T11:40:26Z | 2016-06-03T12:15:02Z | https://github.com/pyeve/eve/issues/870 | [] | Rurik19 | 1 |
mwaskom/seaborn | pandas | 3,706 | UNITS of colorbar in 2D KDE plot? | Can someone please tell me what would be the unit of the colorbar on the 2D KDE plot? The document says that it is the normalized density. Normalized by what? And what would be the unit in that case. | closed | 2024-06-03T14:07:53Z | 2024-06-06T07:55:06Z | https://github.com/mwaskom/seaborn/issues/3706 | [] | ven1996 | 4 |
docarray/docarray | pydantic | 1,068 | Allow stacked DocumentArrays to be constructed from list of Documents | **Is your feature request related to a problem? Please describe.**
The pytorch dataloader's pin_memory implementation requires the data to have this signature
```python
type(data)([pin_memory(sample, device) for sample in data])
```
This signature does not exist for Stacked DAs, and thus, they cannot be used as the return type of a dataset.
**Describe the solution you'd like**
An implementation of this constructor for Stacked DAs
**Describe alternatives you've considered**
None
**Additional context**
This might not resolve the issues with pin_memory but it would be a step in the right direction.
| closed | 2023-02-01T09:04:32Z | 2023-02-08T10:47:17Z | https://github.com/docarray/docarray/issues/1068 | [] | Jackmin801 | 5 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 197 | how to create materialized view coperating with package sqlalchemy_utils | **Describe the bug**
from sqlalchemy_utils import create_view with this method, i can create clickhouse normal view
but with from sqlalchemy_utils importcreate_materialized_view, create clickhouse meterialized view failed with error as below
Orig exception: Code: 62. DB::Exception: Syntax error: failed at position 19 ('MATERIALIZED') (line 1, col 19): MATERIALIZED VIEW xxx AS SELECT yyy Expected one of: TABLE, VIEW, DICTIONARY, FUNCTION. (SYNTAX_ERROR) (version 21.12.3.32 (official build))
**To Reproduce**
normal view
```python
from sqlalchemy import select
from sqlalchemy_utils import create_view
from clickhouse_sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
stmtLWI = select(selectable).where(wherecondition)
cvLWI = create_view('viewname', stmtLWI, Base.metadata)
class ViewLWI(Base):
__tablename__ = 'viewname'
__table__ = cvLWI
```
this is ok that can create corresponding view successfully, but with meterialized view unsuccessfully
```python
from sqlalchemy import select
from sqlalchemy_utils import create_materialized_view
from clickhouse_sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
stmtLWI = select(selectable).where(wherecondition)
cvLWI = create_materialized_view('viewname', stmtLWI, Base.metadata)
class ViewLWI(Base):
__tablename__ = 'viewname'
__table__ = cvLWI
__table_args__ = (
engines.MergeTree(partition_by='field1',
order_by='field2',
primary_key='field2'),
)
```
**Expected behavior**
create meterialized view successfully same as normal view
**Versions**
clickhouse-driver==0.2.3
clickhouse-sqlalchemy==0.2.2
SQLAlchemy==1.4.8
SQLAlchemy-Utils==0.37.6
sqlalchemy-views==0.3.1
python==3.8.5
| closed | 2022-08-31T03:05:47Z | 2022-11-29T21:07:00Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/197 | [] | flyly0755 | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.