repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
tiangolo/uvicorn-gunicorn-fastapi-docker | pydantic | 25 | Support "Let’s Encrypt"? | https://letsencrypt.org/pt-br/ | closed | 2020-01-31T20:20:24Z | 2020-04-19T15:18:00Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/25 | [] | jgmartinss | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 59 | Can a model quantized to int4 be fine-tuned further? | I'm curious because I saw someone mention that this can be done. I'd like to ask whether such work is meaningful, or whether it is recommended. If it is possible, are there any methods I could refer to? Thanks. | closed | 2023-04-04T12:27:45Z | 2023-04-24T03:51:58Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/59 | [] | CrazyBoyM | 0 |
pytorch/vision | computer-vision | 8,784 | `train` parameter should be explained before the `download`, `transform` and `target_transform` parameters | ### 📚 The doc issue
In [the doc](https://pytorch.org/vision/stable/generated/torchvision.datasets.QMNIST.html) of `QMNIST()`, the `train` parameter is located before `**kwargs`, which covers the `download`, `transform` and `target_transform` parameters, as shown below:
> class torchvision.datasets.QMNIST(root: [Union](https://docs.python.org/3/library/typing.html#typing.Union)[[str](https://docs.python.org/3/library/stdtypes.html#str), [Path](https://docs.python.org/3/library/pathlib.html#pathlib.Path)], what: [Optional](https://docs.python.org/3/library/typing.html#typing.Optional)[[str](https://docs.python.org/3/library/stdtypes.html#str)] = None, compat: [bool](https://docs.python.org/3/library/functions.html#bool) = True, train: [bool](https://docs.python.org/3/library/functions.html#bool) = True, **kwargs: [Any](https://docs.python.org/3/library/typing.html#typing.Any))
But the `train` parameter is explained after the `download`, `transform` and `target_transform` parameters, as shown below:
> Parameters:
> ...
> - compat ([bool](https://docs.python.org/3/library/functions.html#bool),optional) – A boolean that says whether the target for each example is class number (for compatibility with the MNIST dataloader) or a torch vector containing the full qmnist information. Default=True.
> - download ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – If True, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.
> - transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g, transforms.RandomCrop
> - target_transform (callable, optional) – A function/transform that takes in the target and transforms it.
> - train ([bool](https://docs.python.org/3/library/functions.html#bool),optional,compatibility) – When argument ‘what’ is not specified, this boolean decides whether to load the training set or the testing set. Default: True.
### Suggest a potential alternative/fix
So, the `train` parameter should be explained before the `download`, `transform` and `target_transform` parameters, as shown below:
> Parameters:
> ...
> - compat ([bool](https://docs.python.org/3/library/functions.html#bool),optional) – A boolean that says whether the target for each example is class number (for compatibility with the MNIST dataloader) or a torch vector containing the full qmnist information. Default=True.
> - train ([bool](https://docs.python.org/3/library/functions.html#bool),optional,compatibility) – When argument ‘what’ is not specified, this boolean decides whether to load the training set or the testing set. Default: True.
> - download ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – If True, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.
> - transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g, transforms.RandomCrop
> - target_transform (callable, optional) – A function/transform that takes in the target and transforms it. | closed | 2024-12-06T03:05:52Z | 2025-02-19T16:10:56Z | https://github.com/pytorch/vision/issues/8784 | [] | hyperkai | 1 |
miguelgrinberg/python-socketio | asyncio | 914 | [Client] Reconnection attempts won't stop even after a connection has been successfully established. | ### Summary
When a connection can't be made, the client tries to reconnect multiple times at the same time. When a connection is finally established, the other reconnection threads don't stop.
### My Code
**wss.py**
```python
import socketio
import threading


class WSSocket(threading.Thread):
    def __init__(self, wss, debug=False):
        super(WSSocket, self).__init__()
        self.sio = socketio.Client() if not debug else socketio.Client(engineio_logger=True, logger=True, reconnection_delay=3)
        self._wss = wss
        self._debug = debug

    def __callbacks(self):
        @self.sio.event
        def connect():
            self.conn_event.set()
            print("### connect ###")

        @self.sio.event
        def disconnect():
            print("### disconnect ###")

        @self.sio.event
        def message(data):
            print(f"[MSG] {data}")

    def loop(self):
        self.sio.wait()

    def setup(self):
        self.__callbacks()
        self.sio.connect(self._wss)

    def run(self):
        self.conn_event = threading.Event()
        self.setup()
        self.loop()
```
**main.py**
```python
from wss import WSSocket
import threading

if __name__ == "__main__":
    sck = WSSocket("wss://[REDACTED]/socket.io", debug=True)
    sck.start()

    if not sck.conn_event.wait(timeout=20):
        raise Exception("SERVER", "Can't connect")
```
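For debugging, a hedged variant that rules out the client's built-in retry loop entirely, using the documented `reconnection` flag (this assumes you then handle retries yourself):
```python
import socketio

# With reconnection disabled, only explicit connect() calls create sessions,
# so overlapping automatic reconnection loops cannot accumulate.
sio = socketio.Client(reconnection=False, engineio_logger=True, logger=True)
```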
### Output
```code
$ python main.py
Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4
Polling connection accepted with {'sid': 'vQaHiton2LzcySvCACa_', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4
WebSocket upgrade was successful
WebSocket connection was closed, aborting
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
Connection failed, new attempt in 2.65 seconds
Exiting read loop task
Exception in thread Thread-1:
Traceback (most recent call last):
File "/data/data/com.termux/files/usr/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
self.run()
File "/data/data/com.termux/files/home/wss.py", line 40, in run
self.setup()
File "/data/data/com.termux/files/home/wss.py", line 36, in setup
self.sio.connect(self._wss)
File "/data/data/com.termux/files/home/env/lib/python3.10/site-packages/socketio/client.py", line 347, in connect
raise exceptions.ConnectionError(
socketio.exceptions.ConnectionError: One or more namespaces failed to connect
Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4
Polling connection accepted with {'sid': 'E7Wv0W99sMf8VtaqACbp', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4
WebSocket upgrade was successful
WebSocket connection was closed, aborting
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
Connection failed, new attempt in 3.45 seconds
Exiting read loop task
Connection failed, new attempt in 4.91 seconds
Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4
Polling connection accepted with {'sid': '6Z2FO1Vc_-sfzHXkACb7', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4
WebSocket upgrade was successful
WebSocket connection was closed, aborting
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
Connection failed, new attempt in 3.48 seconds
Exiting read loop task
Connection failed, new attempt in 4.97 seconds
Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4
Polling connection accepted with {'sid': 'Bk3YO2X-VuhBf-nOACcR', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4
WebSocket upgrade was successful
WebSocket connection was closed, aborting
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
Connection failed, new attempt in 3.44 seconds
Exiting read loop task
Connection failed, new attempt in 5.11 seconds
Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4
Polling connection accepted with {'sid': 'D35HRBnfAn-I6EebACcq', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4
WebSocket upgrade was successful
WebSocket connection was closed, aborting
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
Connection failed, new attempt in 3.05 seconds
Exiting read loop task
Connection failed, new attempt in 5.46 seconds
Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4
Attempting polling connection to https://[REDACTED]/socket.io/?transport=polling&EIO=4
Polling connection accepted with {'sid': 'RYiGKpFL2hG_3ltlACc3', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4
Polling connection accepted with {'sid': 'Cly4nTqpc-Ldt-f4ACc8', 'upgrades': ['websocket'], 'pingInterval': 25000, 'pingTimeout': 20000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://[REDACTED]/socket.io/?transport=websocket&EIO=4
WebSocket upgrade was successful
Received packet MESSAGE data 0{"sid":"ZlH_lUWcSY3mLy3hACc9"}
Namespace / is connected
### connect ###
Bird Bot room code: Received packet MESSAGE data 0{"sid":"-sLjQelqoHWkyXyxACc-"}
Reconnection successful
WebSocket upgrade was successful
Connection failed, new attempt in 4.63 seconds
Reconnection successful
Connection failed, new attempt in 5.22 seconds
^C
Sending packet CLOSE data None
Engine.IO connection dropped
### disconnect ###
Exiting write loop task
Exiting write loop task
Connection failed, new attempt in 5.13 seconds
Unexpected error decoding packet: "string index out of range", aborting
Waiting for write loop task to end
Exiting read loop task
``` | closed | 2022-04-28T08:54:05Z | 2022-04-28T14:30:28Z | https://github.com/miguelgrinberg/python-socketio/issues/914 | [] | hugocornago | 2 |
streamlit/streamlit | deep-learning | 10,851 | HTTPS Production Environment Support | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
The Streamlit [docs](https://docs.streamlit.io/develop/api-reference/configuration/config.toml) provide HTTPS support but advise against using it in a production environment due to a lack of testing: "DO NOT USE THIS OPTION IN A PRODUCTION ENVIRONMENT. It has not gone through security audits or performance tests." I would like to see these tests happen. The alternative is to use a reverse proxy like nginx, but for an SSL connection much of the (widely varying) documentation online recommends disabling CORS and XSRF protection in the config.toml file: `--server.enableCORS=false --server.enableXsrfProtection=false`. That seems like a greater security risk than just using the in-house HTTPS support provided [here](https://docs.streamlit.io/develop/concepts/configuration/https-support), and I would like to see this feature fully ready for an enterprise-grade production application.
### Why?
I've been greatly frustrated by the documentation for using a reverse proxy like nginx, since people configure their .conf files in wildly different ways, much of the Ubuntu-oriented documentation doesn't translate well to RHEL, and the examples I've seen rarely explain why something should be set up the way it is. The need to disable CORS and XSRF protection for SSL to work also makes me wonder whether it would be more secure to ignore the documentation's warning and just use Streamlit's in-house server SSL features. Security should be of utmost importance, and I wish Streamlit would put more eggs in the HTTPS basket.
### How?
In the config.toml file of the .streamlit directory, I would like to see the sslCertFile & sslKeyFile options supported as production-grade.
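For reference, these are the in-house options in question (the config keys come from the docs linked above; the paths are placeholders):
```toml
# .streamlit/config.toml: built-in TLS termination, currently flagged by the
# docs as not audited or performance-tested for production
[server]
sslCertFile = "/etc/ssl/certs/app.pem"   # placeholder path
sslKeyFile = "/etc/ssl/private/app.key"  # placeholder path
```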
### Additional Context
Raising issue here as instructed from my discussion post https://discuss.streamlit.io/t/updates-on-streamlits-in-house-https-hosting/94710?u=oaklius | open | 2025-03-19T16:46:54Z | 2025-03-20T15:25:46Z | https://github.com/streamlit/streamlit/issues/10851 | [
"type:enhancement",
"area:security",
"area:server"
] | OwenRicker | 1 |
MaartenGr/BERTopic | nlp | 1,737 | Setting UMAP random seed seems to break the model results | I have a dataset of scientific abstracts. When I run the following code, I get ~65 topics:
```python
sentence_model = SentenceTransformer('allenai/scibert_scivocab_cased')
topic_model = BERTopic(embedding_model=sentence_model)
topics, probs = topic_model.fit_transform(docs)
```
However, if I try defining a random seed in the UMAP model, I get only 2 topics and the outliers, no matter how I set the random seed:
```python
umap_model = UMAP(random_state=1234)
sentence_model = SentenceTransformer('allenai/scibert_scivocab_cased')
topic_model = BERTopic(embedding_model=sentence_model, umap_model=umap_model)
topics, probs = topic_model.fit_transform(docs)
```
This seems very weird to me; I wouldn't expect the random seed to have such a large effect on the model; but even if it does, I would expect different results when I change the value of the random seed, rather than the difference being between no random seed, and any random seed.
Is this expected or is something weird going on?
Thanks! | closed | 2024-01-11T19:57:39Z | 2024-01-12T18:25:16Z | https://github.com/MaartenGr/BERTopic/issues/1737 | [] | serenalotreck | 2 |
marshmallow-code/flask-smorest | rest-api | 280 | Allow alt_response on MethodView | It would be a nice enhancement to allow the usage of the `alt_response`-Decorator on `MethodView`-classes.
It could act as a shortcut to decorating every endpoint of the view with `alt_response`.
An example use case would be a custom converter that rejects any pet_id not present in the pet database.
Since it's declared in the `route` decorator, every endpoint will raise 404 if the pet was not found:
```python3
@blp.route('/<object_id(must_exist=True):pet_id>')
@blp.alt_response(404, ErrorSchema)
class PetsById(MethodView):
    @blp.response(200, PetSchema)
    def get(self, pet_id):
        """Get pet by ID"""
        return Pet.get_by_id(pet_id)

    @blp.response(204)
    def delete(self, pet_id):
        """Delete pet"""
        Pet.delete(pet_id)
``` | open | 2021-09-22T10:40:35Z | 2023-08-16T10:14:26Z | https://github.com/marshmallow-code/flask-smorest/issues/280 | [
"enhancement"
] | der-joel | 5 |
apache/airflow | automation | 48,034 | @dag imported from airflow.sdk fails with `AttributeError: 'DAG' object has no attribute 'get_run_data_interval'` | ### Apache Airflow version
from main. 3.0.0b4
### What happened?
(This bug does not exist in beta3, only got it in the nightly from last night, can't get current main to run so could not test there)
Tried to add this dag:
```python
from airflow.decorators import task
from pendulum import datetime
from airflow.sdk import dag


@dag(
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
)
def test_dag():
    @task
    def test_task():
        pass

    test_task()


test_dag()
```
Getting the import error:
```
[2025-03-20T19:44:44.482+0000] {dag.py:1866} INFO - Sync 1 DAGs
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 10, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/site-packages/airflow/__main__.py", line 58, in main
args.func(args)
File "/usr/local/lib/python3.12/site-packages/airflow/cli/cli_config.py", line 49, in command
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/airflow/utils/cli.py", line 111, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/airflow/utils/providers_configuration_loader.py", line 55, in wrapped_function
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/airflow/utils/session.py", line 101, in wrapper
return func(*args, session=session, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/airflow/cli/commands/remote_commands/dag_command.py", line 711, in dag_reserialize
dag_bag.sync_to_db(bundle.name, bundle_version=bundle.get_current_version(), session=session)
File "/usr/local/lib/python3.12/site-packages/airflow/utils/session.py", line 98, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/airflow/models/dagbag.py", line 649, in sync_to_db
update_dag_parsing_results_in_db(
File "/usr/local/lib/python3.12/site-packages/airflow/dag_processing/collection.py", line 326, in update_dag_parsing_results_in_db
for attempt in run_with_db_retries(logger=log):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/tenacity/__init__.py", line 443, in __iter__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/tenacity/__init__.py", line 376, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/tenacity/__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/local/lib/python3.12/site-packages/airflow/dag_processing/collection.py", line 336, in update_dag_parsing_results_in_db
DAG.bulk_write_to_db(bundle_name, bundle_version, dags, session=session)
File "/usr/local/lib/python3.12/site-packages/airflow/utils/session.py", line 98, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/airflow/models/dag.py", line 1872, in bulk_write_to_db
dag_op.update_dags(orm_dags, session=session)
File "/usr/local/lib/python3.12/site-packages/airflow/dag_processing/collection.py", line 471, in update_dags
last_automated_data_interval = dag.get_run_data_interval(last_automated_run)
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'DAG' object has no attribute 'get_run_data_interval'
```
### What you think should happen instead?
The same dag works when using `from airflow.decorators import dag`.
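For comparison, the only difference in the working variant is the decorator import:
```python
from airflow.decorators import dag, task  # this import works
# from airflow.sdk import dag             # this one triggers the AttributeError
```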
### How to reproduce
1. Add the dag above
2. run airflow dags reserialize
3. see the error
### Operating System
Mac M1 Pro 15.3.1 (24D70)
### Versions of Apache Airflow Providers
None
### Deployment
Other
### Deployment details
Astro CLI
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-20T20:17:29Z | 2025-03-21T08:11:39Z | https://github.com/apache/airflow/issues/48034 | [
"kind:bug",
"area:core",
"needs-triage",
"affected_version:3.0.0beta"
] | TJaniF | 4 |
jazzband/django-oauth-toolkit | django | 1,260 | Application is_usable not checked when using bearer token in validate_bearer_token | **Describe the bug**
We extended the provider application model and added `enabled` and `allowed_ips` fields to it.
```
class MyApplication(oauth2_provider_models.AbstractApplication):
    enabled = models.BooleanField(default=True, help_text='False means that user on record has frozen account')
    allowed_ips = models.TextField('Allowed Ips', blank=True, null=True)

    def is_usable(self, request):
        """
        Determines whether the application can be used - in this case we check if the record is enabled.

        :param request: The HTTP request being processed.
        """
        ip = request.META.get('REMOTE_ADDR')
        return self.enabled and ip in self.allowed_ips.split(" ")
```
When the application is disabled and made unusable, you can still use its access token.
**To Reproduce**
Create an access token. Extend the application model and override `is_usable`. You can still use the access token.
You can even hardcode `is_usable` to return `False`.
`validate_bearer_token` validates that the token itself is valid, but not that the token's application is usable:
https://github.com/jazzband/django-oauth-toolkit/blob/master/oauth2_provider/oauth2_validators.py#L405
The `is_usable` method that can be overridden using `OAUTH2_PROVIDER_APPLICATION_MODEL`:
https://github.com/jazzband/django-oauth-toolkit/blob/master/oauth2_provider/models.py#L209
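For context, a minimal sketch of a possible user-side guard (an assumption, not current library behavior; the subclass name is hypothetical, and it would be wired in via the `OAUTH2_VALIDATOR_CLASS` setting):
```python
from oauth2_provider.oauth2_validators import OAuth2Validator

class UsableAppOAuth2Validator(OAuth2Validator):  # hypothetical subclass
    def validate_bearer_token(self, token, scopes, request):
        valid = super().validate_bearer_token(token, scopes, request)
        if valid and getattr(request, "access_token", None) is not None:
            # Also consult the application's is_usable() hook. Note that this
            # request is oauthlib's request object, not Django's, so an
            # is_usable() that reads request.META must be adapted.
            return request.access_token.application.is_usable(request)
        return valid
```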
**Expected behavior**
If an application is not usable, its tokens should not work.
**Version**
2.2.0
<!-- Have you tested with the latest version and/or master branch? -->
<!-- Replace '[ ]' with '[x]' to indicate that. -->
- [X] I have tested with the latest published release and it's still a problem.
- [X] I have tested with the master branch and it's still a problem.
| open | 2023-03-28T07:37:05Z | 2025-02-08T21:12:23Z | https://github.com/jazzband/django-oauth-toolkit/issues/1260 | [
"bug"
] | matejsp | 2 |
learning-at-home/hivemind | asyncio | 149 | p2pd test nat traversal | 
Create a simple setup with 3 nodes where
- node S1 starts first; it is available publicly; it is a full DHT node
- nodes A and B are _bootstrap_-ed from S1, and
- all nodes use QUIC with ~secio~ tls/noise
TODO:
- [x] check that nodes A and B can communicate directly if they are **not** behind NAT (localhost-only)
- [x] check that nodes A and B can communicate if they **are** behind NAT (ping me if you need access to a VM for S1)
| closed | 2021-02-24T13:34:06Z | 2021-03-17T18:19:18Z | https://github.com/learning-at-home/hivemind/issues/149 | [
"enhancement",
"help wanted"
] | justheuristic | 4 |
waditu/tushare | pandas | 1,528 | 002481 financial report data is inaccurate | Data source: balance sheet
Stock code  Code  Announce date  Actual announce date  End date  Report type  Company type  Total current assets  Total assets ... Update flag
002481.SZ 2481 2020-04-23 2020-04-23 2018-12-31 4 1 1284383703 3793607747 1193784595 2584767205 2571863115 0
002481.SZ 2481 2020-04-23 2020-04-23 2018-12-31 4 1 1285444521 3796302401 1193784595 2587461859 2574557769 1
002481.SZ 2481 2019-04-16 2019-04-16 2018-12-31 1 1 1285444521 3796302401 1193784595 2587461859 2574557769 1
For report type 4, the row with update flag 0 contains the adjusted values, while update flag 1 contains the pre-adjustment values, identical to report type 1. This has already been cross-checked in the Choice terminal. | open | 2021-03-25T02:28:51Z | 2021-03-25T02:28:51Z | https://github.com/waditu/tushare/issues/1528 | [] | steinsxc126 | 0 |
thtrieu/darkflow | tensorflow | 795 | Can anyone explain how the bbox is calculated? | I have read the YOLO paper, and I just cannot understand how this net can calculate bounding boxes for a new image! | open | 2018-06-07T07:36:23Z | 2018-06-07T07:36:23Z | https://github.com/thtrieu/darkflow/issues/795 | [] | NineBall9 | 0 |
keras-team/keras | pytorch | 20,251 | Allow to pass **kwargs to optimizers.get | https://github.com/keras-team/keras/blob/f6c4ac55692c132cd16211f4877fac6dbeead749/keras/src/optimizers/__init__.py#L72-L97
When dynamically getting an optimizer by using tf.keras.optimizers.get(<OPT_NAME>), it would be extremely useful if one could also pass extra arguments to the function, so that the optimizer gets initialized properly. See below a test example of the behavior I would like to see:
```python
optimizer_name = 'adam'
opt_params = {'learning_rate': 3e-3, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': True}
import tensorflow as tf
opt = tf.keras.optimizers.get(optimizer_name, **opt_params)
assert(opt.learning_rate == opt_params['learning_rate']), "Opt learning rate not being correctly initialized"
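# Note (possible workaround with today's API, worth verifying: `get` also
# accepts a dict identifier of the form {'class_name': ..., 'config': ...}
# and does initialize the optimizer with that config):
# opt = tf.keras.optimizers.get({'class_name': optimizer_name, 'config': opt_params})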
``` | closed | 2024-09-11T20:21:18Z | 2024-09-11T22:31:30Z | https://github.com/keras-team/keras/issues/20251 | [
"type:feature",
"keras-team-review-pending"
] | manuelblancovalentin | 1 |
noirbizarre/flask-restplus | api | 429 | Auto-generate API integration tests based on Swagger's API doc… | It seems like this should be the next logical step from auto-generating the doc with Swagger: auto-generate basic integration tests that check for input-output according to Swagger's schema (potentially with hooks/config to do more sophisticated testing).
Is there any official example of such a thing? Maybe using Swagger Codegen or Dredd? | open | 2018-05-04T13:06:19Z | 2018-05-06T15:56:43Z | https://github.com/noirbizarre/flask-restplus/issues/429 | [] | zedrdave | 1 |
sunscrapers/djoser | rest-api | 624 | Users keep getting repeated Activation Emails even though they are already active | Hi,
I am using Djoser with the SimpleJWT plugin and for some reason my users get constant user Activation emails from my server even after they have activated in the past. If they click on the activation link again it just says they have already activated.
Does it have something to do with updating a `User` that triggers this email again? I would think the only time an activation email should be sent out is when a user is created and is not active. Any ideas on why this would be happening? Thanks!
Djoser settings:
```python
DJOSER = {
    'PASSWORD_RESET_CONFIRM_URL': 'users/reset-password/{uid}/{token}',
    'USERNAME_RESET_CONFIRM_URL': 'users/reset-username/confirm/{uid}/{token}',
    'ACTIVATION_URL': 'users/activate/{uid}/{token}',
    'SEND_ACTIVATION_EMAIL': True,
    'PASSWORD_CHANGED_EMAIL_CONFIRMATION': True,
    'USER_CREATE_PASSWORD_RETYPE': True,
    'SET_PASSWORD_RETYPE': True,
    'TOKEN_MODEL': None,
    'SERIALIZERS': {
        'user': 'restapi.serializers.CustomUserSerializer',
        'current_user': 'restapi.serializers.CustomUserSerializer',
    },
}
``` | open | 2021-07-16T20:30:56Z | 2022-01-28T06:20:21Z | https://github.com/sunscrapers/djoser/issues/624 | [] | rob4226 | 8 |
gradio-app/gradio | python | 10,458 | Lite: Plotly doesn't work when installed along with altair | ### Describe the bug
In the `outbreak_forecast` demo running on Lite,
Plotly throws the following error.
`plotly==6.0.0` was released and it depends on `narwhals>=1.15.0` (https://github.com/plotly/plotly.py/blob/v6.0.0/packages/python/plotly/recipe/meta.yaml#L28).
However, installing `altair` leads to installing `narwhals==1.10.0` **even after `narwhals>=1.15.0` is already installed, and the older version of `narwhals` overrides the newer one.** (Pyodide provides `narwhals==1.10.0` [as a native package](https://pyodide.org/en/stable/usage/packages-in-pyodide.html), but `micropip.install("plotly")` installs `narwhals` from PyPI).
The error then shows Plotly calling an API that does not exist in that older version of `narwhals`.
This poor dependency resolution is a known bug of micropip, but it looks like a fix is not easy to introduce,
so we should add some workaround on our end.
(Ref: https://github.com/pyodide/micropip/issues/103 )
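One possible stopgap on our side (a sketch under assumptions: it relies on micropip's `deps` flag and on the last install winning; not a confirmed fix):
```python
import micropip

# Install altair first (it drags in the Pyodide-bundled narwhals 1.10.0),
# then overwrite it with a plotly-compatible narwhals, and finally install
# plotly with deps=False so micropip cannot downgrade narwhals again.
await micropip.install("altair")
await micropip.install("narwhals>=1.15.0")
await micropip.install("plotly==6.0.0", deps=False)
```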
```
webworker.js:368 Python error: Traceback (most recent call last):
File "/lib/python3.12/site-packages/gradio/queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/gradio/blocks.py", line 2044, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/gradio/blocks.py", line 1591, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<exec>", line 3, in mocked_anyio_to_thread_run_sync
File "/lib/python3.12/site-packages/gradio/utils.py", line 883, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "app.py", line 33, in outbreak
fig = px.line(df, x="day", y=countries)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/express/_chart_types.py", line 270, in line
return make_figure(args=locals(), constructor=go.Scatter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/express/_core.py", line 2477, in make_figure
args = build_dataframe(args, constructor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/express/_core.py", line 1727, in build_dataframe
df_output, wide_id_vars = process_args_into_dataframe(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/express/_core.py", line 1343, in process_args_into_dataframe
df_output[col_name] = to_named_series(
^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/express/_core.py", line 1175, in to_named_series
x = nw.from_native(x, series_only=True, pass_through=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: from_native() got an unexpected keyword argument 'pass_through'
```
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
Run the `outbreak_forecast` demo on Lite.
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Lite
```
### Severity
I can work around it | closed | 2025-01-29T07:53:43Z | 2025-01-30T07:20:21Z | https://github.com/gradio-app/gradio/issues/10458 | [
"bug",
"gradio-lite"
] | whitphx | 2 |
anselal/antminer-monitor | dash | 169 | TypeError: Object of type timedelta is not JSON serializable | In newer versions of Python, the following line produces an error.
To fix it, convert the value to a string.
https://github.com/anselal/antminer-monitor/blob/6f9803e891296c0c2807125128532f4d52024c0f/antminermonitor/blueprints/asicminer/asic_antminer.py#L115 | closed | 2020-03-16T12:30:55Z | 2020-03-16T12:35:06Z | https://github.com/anselal/antminer-monitor/issues/169 | [
":bug: bug"
] | anselal | 0 |
ghtmtt/DataPlotly | plotly | 224 | Geopackage Views seem not to work, or is it the DateTime column? | Not sure if I'm doing something wrong or if I hit an issue:
When I try to create a scatter/line plot from a Geopackage view (Time-Value) where Time is a DateTime, nothing shows up in DataPlotly:

While the underlying data-table does this fine:

**To Reproduce**
Steps to reproduce the behavior:
1. From this zipped geopackage: [cloud3.zip](https://github.com/ghtmtt/DataPlotly/files/4599597/cloud3.zip) load both the data table and the view4
2. View4 is actually a view/join from Data and Grid
3. Try to create a scatterplot from Data: works fine; as you see, Value holds actually VERY small values (from 1E-10 up to 10 or so...)
4. Then try to do this with the view4 layer: not sure what goes wrong, but the Y-axis is going from 0-4 but nothing is shown
5. Note that I actually wanted (as this is time-related data) to see a plot of one cell (this is air dispersion modelling). Also probably of interest to @ghtmtt: https://github.com/qgis/QGIS/issues/36291 and https://github.com/qgis/QGIS/issues/26804; I created the view with 'OGC_FID' and then selection works \o/
6. Note that it also seems the DateTime (Time) column is not recognized for the view.
**Desktop (please complete the following information):**
- OS: Debian Testing
- QGIS master
- DataPlotly current
| closed | 2020-05-08T14:43:33Z | 2020-05-08T16:26:13Z | https://github.com/ghtmtt/DataPlotly/issues/224 | [
"bug"
] | rduivenvoorde | 3 |
huggingface/diffusers | deep-learning | 10,550 | [LoRA] loading LoRA into a quantized base model | Similar issues:
1. https://github.com/huggingface/diffusers/issues/10512
2. https://github.com/huggingface/diffusers/issues/10496
<details>
<summary>Reproduction</summary>
```py
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, FluxTransformer2DModel, FluxPipeline
from huggingface_hub import hf_hub_download

transformer_8bit = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=DiffusersBitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer_8bit,
    torch_dtype=torch.bfloat16,
).to("cuda")

pipe.load_lora_weights(
    hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"),
    adapter_name="hyper-sd"
)
pipe.set_adapters("hyper-sd", adapter_weights=0.125)

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."

image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    max_sequence_length=512,
    num_inference_steps=8,
    guidance_scale=50,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("out.jpg")  # `image` is already a single PIL image here
```
</details>
Happens on `main` as well as on the `v0.31.0-release` branch.
<details>
<summary>Error</summary>
```bash
Traceback (most recent call last):
File "/home/sayak/diffusers/load_loras_flux.py", line 18, in <module>
pipe.load_lora_weights(
File "/home/sayak/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1846, in load_lora_weights
self.load_lora_into_transformer(
File "/home/sayak/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1948, in load_lora_into_transformer
inject_adapter_in_model(lora_config, transformer, adapter_name=adapter_name, **peft_kwargs)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/mapping.py", line 260, in inject_adapter_in_model
peft_model = tuner_cls(model, peft_config, adapter_name=adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 141, in __init__
super().__init__(model, config, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 184, in __init__
self.inject_adapter(self.model, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 501, in inject_adapter
self._create_and_replace(peft_config, adapter_name, target, target_name, parent, current_key=key)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 239, in _create_and_replace
self._replace_module(parent, target_name, new_module, target)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 263, in _replace_module
new_module.to(child.weight.device)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1340, in to
return self._apply(convert)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 900, in _apply
module._apply(fn)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 900, in _apply
module._apply(fn)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 927, in _apply
param_applied = fn(param)
File "/home/sayak/.pyenv/versions/3.10.12/envs/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1333, in convert
raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
```
</details>
@BenjaminBossan any suggestions here? | closed | 2025-01-13T06:03:49Z | 2025-01-16T02:52:53Z | https://github.com/huggingface/diffusers/issues/10550 | [
"lora"
] | sayakpaul | 18 |
FactoryBoy/factory_boy | django | 629 | Easy way to build a nested dict from factory? | #### The problem
My situation is similar to this one raised https://github.com/FactoryBoy/factory_boy/issues/68#issuecomment-363268477
I have nested factories that use `SubFactory`.
When I want to use `factory.build` to create a dict, the nested factory comes out as an object rather than as a dict.
#### Proposed solution
Is there a way to improve this with a `build_nested_dict` function, or is there a workaround?
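A rough sketch of what such a helper might look like (the name `build_nested_dict` and the `vars()`-based conversion are illustrative assumptions, not factory_boy API):
```python
def build_nested_dict(factory_class, **kwargs):
    """Build an instance, then recursively convert nested objects to dicts."""
    def to_dict(value):
        if isinstance(value, dict):
            return {k: to_dict(v) for k, v in value.items()}
        if hasattr(value, "__dict__"):  # e.g. an object built by a SubFactory
            return {k: to_dict(v) for k, v in vars(value).items()}
        return value
    return to_dict(factory_class.build(**kwargs))
```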
| open | 2019-06-24T09:52:36Z | 2020-10-09T11:39:41Z | https://github.com/FactoryBoy/factory_boy/issues/629 | [] | simkimsia | 3 |
yvann-ba/Robby-chatbot | streamlit | 69 | Always responds "Unfortunately, I was not able to answer your question. Please try again. If the problem persists, try rephrasing your question." | I uploaded the csv file and asked "How many rows", and it always answers
"Unfortunately, I was not able to answer your question. Please try again. If the problem persists, try rephrasing your question"
[national-gdp-constant-usd-wb.csv](https://github.com/yvann-hub/Robby-chatbot/files/13066989/national-gdp-constant-usd-wb.csv)
| open | 2023-10-23T06:53:09Z | 2023-10-23T06:55:00Z | https://github.com/yvann-ba/Robby-chatbot/issues/69 | [] | sainisanjay | 0 |
bigscience-workshop/petals | nlp | 429 | PingAggregator returns inf for all servers that use relays | It seems that something is wrong with calling `rpc_ping()` for such servers (since https://health.petals.dev is able to reach such servers in 5 sec successfully). | closed | 2023-08-02T23:58:12Z | 2023-08-03T00:09:13Z | https://github.com/bigscience-workshop/petals/issues/429 | [] | borzunov | 1 |
Textualize/rich | python | 2,988 | [BUG] print does not work well with DefaultDict | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
`rich.print` inspects the `__rich__` and `aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf` attributes of an object during printing.
However, it does not check whether the object's `__getattr__` will always return something.
`ipython` avoids this behaviour by checking `_ipython_canary_method_should_not_exist_` before inspecting the object.
```python
from chanfig import Config
from rich import print as pprint

if __name__ == '__main__':
    config = Config(**{'hello': 'world'})
    print('print', config)
    pprint('rich.print', config)
    print(config.__rich__)
    print(config.keys())
```
```bash
print Config(<class 'chanfig.config.Config'>,
('hello'): 'world'
)
rich.print Config(<class 'chanfig.config.Config'>,
('hello'): 'world'
('__rich__'): Config(<class 'chanfig.config.Config'>, )
('aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf'): Config(<class 'chanfig.config.Config'>, )
)
Config(<class 'chanfig.config.Config'>, )
dict_keys(['hello', '__rich__', 'aihwerij235234ljsdnp34ksodfipwoe234234jlskjdf'])
```
ref: https://github.com/ZhiyuanChen/CHANfiG/issues/6
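For reference, a minimal sketch of a side-effect-free `__getattr__` (a simplified stand-in, not the real chanfig code; chanfig's auto-creation of nested configs is what makes the probes stick as keys):
```python
class SafeConfig(dict):  # illustrative stand-in for chanfig.Config
    def __getattr__(self, name):
        # Look the key up without creating it, so inspection probes such as
        # `__rich__` or rich's random canary attribute leave no trace.
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name) from None
```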
| closed | 2023-06-03T15:50:30Z | 2023-07-29T11:33:48Z | https://github.com/Textualize/rich/issues/2988 | [
"Can't reproduce"
] | ZhiyuanChen | 6 |
aio-libs/aiomysql | sqlalchemy | 684 | Docs should use RTD theme for local dev builds | This should help avoid issues like #683, where the default theme does not show these issues.
https://sphinx-rtd-theme.readthedocs.io/en/stable/installing.html | open | 2022-01-22T15:56:13Z | 2022-01-30T19:02:28Z | https://github.com/aio-libs/aiomysql/issues/684 | [
"docs"
] | Nothing4You | 2 |
strawberry-graphql/strawberry | graphql | 2,874 | Codegen should be able to know the `__typename` for an object |
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (**backward compatible enhancement**/optimization) of existing feature(s)
- [ ] New behavior
## Description
The code generator doesn't (currently) know very much about the GraphQL types. e.g. a query like:
```
query OperationName {
  __typename
  union {
    ... on Animal {
      age
    }
    ... on Person {
      name
    }
  }
}
```
Will generate classes like:
```py
class OperationNameResultUnionAnimal:
    # typename: Animal
    age: int

class OperationNameResultUnionPerson:
    # typename: Person
    name: str

OperationNameResultUnion = Union[OperationNameResultUnionAnimal, OperationNameResultUnionPerson]

class OperationNameResult:
    union: OperationNameResultUnion
```
In principle, the `__typename` field of the response should tell us whether we are going to get an `Animal` or a `Person`, but it would be very handy for client generators to be able to easily construct the following mapping:
```py
{"Animal": OperationNameResultUnionAnimal, "Person": OperationNameResultUnionPerson}
```
Currently, this is very hard to do with the information exposed to the client generator. Let's plumb that information through so that clients can start to take advantage of it. | closed | 2023-06-21T13:33:27Z | 2025-03-20T15:56:15Z | https://github.com/strawberry-graphql/strawberry/issues/2874 | [] | mgilson | 0 |
sigmavirus24/github3.py | rest-api | 656 | Add recursive option to `repository.tree(sha, recursive=False)` | # Overview
The GitHub Tree API allows getting a tree recursively - https://developer.github.com/v3/git/trees/#get-a-tree-recursively
# Ideas
It should be pretty simple; for now it even works like this (a hack):
```python
repository.tree('sha?recursive=1')
```
| closed | 2016-12-08T08:10:34Z | 2018-07-20T13:07:00Z | https://github.com/sigmavirus24/github3.py/issues/656 | [
"Mentored/Pair available"
] | roll | 6 |
NullArray/AutoSploit | automation | 576 | Error: no arguments have been passed | Hi, why do I get this error after I enter the API interface?

| closed | 2019-03-18T17:47:26Z | 2019-04-03T12:48:58Z | https://github.com/NullArray/AutoSploit/issues/576 | [] | CXXHK | 4 |
JaidedAI/EasyOCR | deep-learning | 1,016 | Very strange behavior of two-number ocr | Hi, I'm trying to apply OCR to a captcha image and I'm getting some strange results to share.
When I use `reader = easyocr.Reader(['en'])`, the original captcha image is not recognized at all, but the `255-image` is recognized just fine. Intuitively, this is very strange behavior.
 : original image
 : inverted image
`reader.readtext(image, detail=0, allowlist="0123456789")` returns `['27']`, which is accurate for the inverted image, but returns `[]` (nothing detected) for the original image.
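For completeness, the inversion that makes recognition work can be applied as a preprocessing step (a sketch, assuming a uint8 grayscale array and a placeholder filename):
```python
import cv2
import easyocr

reader = easyocr.Reader(['en'])
image = cv2.imread("captcha.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
inverted = 255 - image  # dark-on-light, the variant EasyOCR reads correctly
print(reader.readtext(inverted, detail=0, allowlist="0123456789"))  # ['27']
```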
Is there any guess as to why this strange phenomenon occurs? | open | 2023-05-12T12:39:12Z | 2023-05-21T07:23:00Z | https://github.com/JaidedAI/EasyOCR/issues/1016 | [] | MilkClouds | 1 |
plotly/dash | flask | 2,253 | Bridge from R plotly visualization to Python Dash | I'm currently using ggplot2 and plotly in R for a project since a specific package that I am using is only available in that language.
However, when attempting to create my app in dashR, I'm finding it extremely difficult to do so, specifically because the documentation for dashR seems to be outdated.
So I'm trying to find a way to take the plotly graphs from R and use them in a Python environment using Dash. Either that or if I can take some dashR components and combine them with Python Dash components. However, I'm unsure if this is even an option for Dash and would like to confirm whether that is the case!
Thank you! | closed | 2022-09-29T22:37:37Z | 2022-10-03T16:57:19Z | https://github.com/plotly/dash/issues/2253 | [] | HyprValent | 6 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 729 | Having an error when executing vocoder_preprocess.py | I'm trying to train the vocoder after training the synthesizer, but I get this error when executing vocoder_preprocess.py.

So I checked tacotron.py and I realized that the model returns 4 outputs.

but "model" in run_synthesis.py is supposed to return 3 outputs(look at the first picture).
I guess that the error is resulted from that part.
How can I solve this problem?
| closed | 2021-04-08T02:50:23Z | 2021-04-13T02:32:10Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/729 | [
"bug"
] | JH-lee95 | 2 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 421 | Python Crashing after starting demo_toolbox.py or demo_cli.py | _(Onlyone) C:\Users\Tomas\Documents\Ai For Work\Voice\Real Time Clone>python demo_cli.py
C:\Users\Tomas\anaconda3\envs\Onlyone\lib\site-packages\h5py\__init__.py:40: UserWarning: h5py is running against HDF5 1.10.5 when it was built against 1.10.4, this may cause problems
'{0}.{1}.{2}'.format(*version.hdf5_built_version_tuple)
Warning! ***HDF5 library version mismatched error***
The HDF5 header files used to compile this application do not match
the version used by the HDF5 library to which this application is linked.
Data corruption or segmentation faults may occur if the application continues.
This can happen when an application was compiled by one version of HDF5 but
linked with a different version of static or shared HDF5 library.
You should recompile the application or check your shared library related
settings such as 'LD_LIBRARY_PATH'.
You can, at your own risk, disable this warning by setting the environment
variable 'HDF5_DISABLE_VERSION_CHECK' to a value of '1'.
Setting it to 2 or higher will suppress the warning messages totally.
Headers are 1.10.4, library is 1.10.5
SUMMARY OF THE HDF5 CONFIGURATION
=================================
General Information:
-------------------
HDF5 Version: 1.10.5
Configured on: 2019-03-04
Configured by: Visual Studio 15 2017 Win64
Host system: Windows-10.0.17763
Uname information: Windows
Byte sex: little-endian
Installation point: C:/Program Files/HDF5
Compiling Options:
------------------
Build Mode:
Debugging Symbols:
Asserts:
Profiling:
Optimization Level:
Linking Options:
----------------
Libraries:
Statically Linked Executables: OFF
LDFLAGS: /machine:x64
H5_LDFLAGS:
AM_LDFLAGS:
Extra libraries:
Archiver:
Ranlib:
Languages:
----------
C: yes
C Compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe 19.16.27027.1
CPPFLAGS:
H5_CPPFLAGS:
AM_CPPFLAGS:
CFLAGS: /DWIN32 /D_WINDOWS /W3
H5_CFLAGS:
AM_CFLAGS:
Shared C Library: YES
Static C Library: YES
Fortran: OFF
Fortran Compiler:
Fortran Flags:
H5 Fortran Flags:
AM Fortran Flags:
Shared Fortran Library: YES
Static Fortran Library: YES
C++: ON
C++ Compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x64/cl.exe 19.16.27027.1
C++ Flags: /DWIN32 /D_WINDOWS /W3 /GR /EHsc
H5 C++ Flags:
AM C++ Flags:
Shared C++ Library: YES
Static C++ Library: YES
JAVA: OFF
JAVA Compiler:
Features:
---------
Parallel HDF5: OFF
Parallel Filtered Dataset Writes:
Large Parallel I/O:
High-level library: ON
Threadsafety: OFF
Default API mapping: v110
With deprecated public symbols: ON
I/O filters (external): DEFLATE DECODE ENCODE
MPE:
Direct VFD:
dmalloc:
Packages w/ extra debug output:
API Tracing: OFF
Using memory checker: OFF
Memory allocation sanity checks: OFF
Function Stack Tracing: OFF
Strict File Format Checks: OFF
Optimization Instrumentation:
Bye..._
Is there a way to fix it? | closed | 2020-07-12T05:21:45Z | 2020-07-12T06:58:48Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/421 | [] | OuterEnd07 | 5 |
idealo/imagededup | computer-vision | 172 | Failed to install imagededup on linux using pip install imagededup | Hi,
I got the following errors:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
daal4py 2021.3.0 requires daal==2021.2.3, which is not installed.
scikit-image 0.18.3 requires PyWavelets>=1.1.1, but you have pywavelets 1.0.3 which is incompatible.
pyerfa 2.0.0 requires numpy>=1.17, but you have numpy 1.16.6 which is incompatible.
pandas 1.3.4 requires numpy>=1.17.3, but you have numpy 1.16.6 which is incompatible.
numba 0.54.1 requires numpy<1.21,>=1.17, but you have numpy 1.16.6 which is incompatible.
bokeh 2.4.1 requires pillow>=7.1.0, but you have pillow 6.2.2 which is incompatible.
astropy 4.3.1 requires numpy>=1.17, but you have numpy 1.16.6 which is incompatible.
Any idea, help?
Thanks | closed | 2022-10-05T14:36:54Z | 2022-10-22T11:17:55Z | https://github.com/idealo/imagededup/issues/172 | [] | abdullahfathi | 3 |
tiangolo/uwsgi-nginx-flask-docker | flask | 28 | Unable to change nginx.conf in the image | This may seem a little strange, but I am not able to change the nginx.conf file at /etc/nginx/conf.d/nginx.conf
Here is what I did:
## Method1: Change in Dockerfile
My Dockerfile looks like this:
```
FROM tiangolo/uwsgi-nginx-flask:flask
COPY ./app /app
COPY ./changes/nginx.conf /etc/nginx/conf.d/nginx.conf
COPY ./changes/nginx.conf /app/
```
./changes/nginx.conf looks like this:
```
server {
    location /app1/ {
        try_files $uri @app;
    }

    location @app {
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }

    location /static {
        alias /app/static;
    }
}
```
**Note the change in location in above server block from `location /` to `location /app1/`**
After the image is built and I run the docker container, I exec into the running container
`sudo docker exec -ti CONTAINER_ID /bin/bash`
`cat /app/nginx.conf` shows the presence of the updated nginx.conf file (location changed from `/` to `/app1/`)
BUT `cat /etc/nginx/conf.d/nginx.conf` still shows the old conf file (location is still `/`)
I thought maybe the second COPY line is not getting executed successfully and docker isn't throwing error on console (sudo?). So, I changed the conf file manually and did a docker commit - the second approach mentioned below.
## Method2: Docker commit
After the docker container was up and running, I used exec to login into the container using
`[vagrant@localhost]$ sudo docker exec -ti CONTAINER_ID /bin/bash`
`[root@CONTAINER_ID]# vi /etc/nginx/conf.d/nginx.conf`
Changing the file to reflect below:
```
server {
    location /app1/ {
        try_files $uri @app;
    }

    location @app {
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }

    location /static {
        alias /app/static;
    }
}
```
Saved the file `wq!` and exit the container.
After that I did `sudo docker commit CONTAINER_ID my_new_image`
Starting a new container and re-logging into container running on my_new_image still gives below nginx.conf file inside /etc/nginx/conf.d/nginx.conf:
```
server {
    location / {
        try_files $uri @app;
    }

    location @app {
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }

    location /static {
        alias /app/static;
    }
}
```
I can tell that the my_new_image has some changes because it is larger in size than tiangolo/uwsgi-nginx-flask-docker because I had installed vim to edit the file. But somehow file changes are not persisting inside /etc/nginx/conf.d/nginx.conf.
Am I doing something wrong or is it some bug?
| closed | 2017-11-01T22:26:10Z | 2018-01-15T10:30:53Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/28 | [] | VimanyuAgg | 6 |
ageitgey/face_recognition | machine-learning | 768 | Optimum Image size | * face_recognition version:1.0
* Python version:3.5.3
* Operating System:Windows
### Description
Wanted to check what the optimum size of the image is. Should this depend on the CNN's initial input nodes?
Also, how should the number_of_times_to_upsample parameter be tuned? The default is 1, but should one use 2, 3, 4, ...?
Also, while using dlib.shape_predictor("./shape_predictor_68_face_landmarks.dat") I can detect faces, but when passing the same box to face_locations no boxes were detected.
| open | 2019-03-08T17:48:30Z | 2019-03-08T17:48:30Z | https://github.com/ageitgey/face_recognition/issues/768 | [] | kmeligy | 0 |
SYSTRAN/faster-whisper | deep-learning | 1,253 | audio file as input parameter for model.transcribe works well but ndarray-typed parameter captured with sounddevice does not work | `model.transcribe` works well when I use an audio file as the input parameter. But when I use sounddevice to record a period of speech, save the result as an ndarray, and send it directly to `model.transcribe`, it cannot recognize the speech.
But if I save the speech recorded by sounddevice as an audio file and then use this file as the input parameter for `model.transcribe`, the speech is recognized. What is the problem? Is there any specific format requirement for the ndarray parameter?
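For reference, a hedged sketch of a capture that typically works (assuming the model expects 16 kHz, mono, float32 samples in [-1, 1], i.e. the same data a decoded WAV would yield; `model` is the WhisperModel from the report):
```python
import sounddevice as sd

duration, sr = 5, 16000  # seconds, and the sample rate the model expects
audio = sd.rec(int(duration * sr), samplerate=sr, channels=1, dtype="float32")
sd.wait()
audio = audio.flatten()  # transcribe() wants a 1-D float32 array
segments, info = model.transcribe(audio)
```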
| open | 2025-02-21T01:20:24Z | 2025-02-25T11:05:52Z | https://github.com/SYSTRAN/faster-whisper/issues/1253 | [] | cyflhn | 4 |
gradio-app/gradio | machine-learning | 10,042 | ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'> | ### Describe the bug
Tried gr.load("models/black-forest-labs/FLUX.1-schnell").launch() and sometimes it throws this error. It is not consistent: sometimes it generates the image and sometimes it throws this error. I tried it both in a local Docker container and in a Hugging Face private space.
gradio==5.6.0
gradio_client==1.4.3
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2028, in process_api
data = await self.postprocess_data(block_fn, result["prediction"], state)
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1834, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/usr/local/lib/python3.10/site-packages/gradio/components/image.py", line 279, in postprocess
saved = image_utils.save_image(value, self.GRADIO_CACHE, self.format)
File "/usr/local/lib/python3.10/site-packages/gradio/image_utils.py", line 76, in save_image
raise ValueError(
ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'>
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
gr.load("models/black-forest-labs/FLUX.1-schnell").launch()
```
### Screenshot
<img width="786" alt="image" src="https://github.com/user-attachments/assets/b59938a0-b666-4975-a1bc-599134a79617">
### Logs
```shell
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2028, in process_api
data = await self.postprocess_data(block_fn, result["prediction"], state)
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1834, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/usr/local/lib/python3.10/site-packages/gradio/components/image.py", line 279, in postprocess
saved = image_utils.save_image(value, self.GRADIO_CACHE, self.format)
File "/usr/local/lib/python3.10/site-packages/gradio/image_utils.py", line 76, in save_image
raise ValueError(
ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'>
```
### System Info
```shell
gradio==5.6.0
gradio_client==1.4.3
```
### Severity
Blocking usage of gradio | open | 2024-11-26T13:32:43Z | 2025-02-28T17:55:41Z | https://github.com/gradio-app/gradio/issues/10042 | [
"bug"
] | jmjuanico | 1 |
iperov/DeepFaceLab | machine-learning | 5,356 | ... | ... | closed | 2021-07-01T17:13:49Z | 2021-07-09T07:32:39Z | https://github.com/iperov/DeepFaceLab/issues/5356 | [] | ghost | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,283 | SSL configuration in multisite istance | **Describe the bug**
The first subsite configured in a multi-site setup works OK.
For the second site, after manual configuration of the SSL certs, the subsite returns error 400 (bad request).
**To Reproduce**
Sites created in MODE "Default" (MODE WHISTLEBLOWINGPA throw Internal server error (Unexpected))
After creation:
[Sites management] -> Select 2nd subsite, [Network settings] -> Insert Hostname (save) -> Manual configuration, loading ssl private key, loading public CRT, press [Enable] -> redirection to site -> http 400 error
**version**
OS DEBIAN 11
GL version 4.10.10
**Additional info**
We tried deleting all the sites and also recreating them in a different order.
Only the first site inserted after installing a fresh instance of GL always works, even if that same site was previously inserted second or third.
The others throw error 400 after SSL configuration.
There are no sub-site errors in the logs.
thx | closed | 2022-09-20T16:43:39Z | 2022-09-23T15:04:53Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3283 | [] | zazzati | 4 |
ultralytics/yolov5 | deep-learning | 12,642 | Wrong Confusion Matrix Results when Training Yolov5 with a custom Dataset | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello
I trained yolov5s on a custom dataset for 50 epochs with a batch size of 16. After training, I evaluated the model's performance on the test set and noticed that the mAP was 94%, which seemed a bit odd to me. So I checked the confusion matrix and noticed that the results were wrong.
Here are my training specifications:
1. **dataset:** The dataset was mainly published for pose estimation, but I did some preprocessing on Roboflow to make it suitable for object detection (I am not sure whether this could be an issue or not). The data contains a single class and is divided into 25,000 images for training, 4,600 images for validation and 1,198 images for testing.
2. **Model Configuration:** I used the yolov5s model configuration.
3. **Training Parameters:** I used the default training parameters except for the number of classes, which I set to 1, and the activation function. Concerning the weights, I started training from the pretrained Ultralytics weights (yolov5s.pt).
4. **Activation Function:** I changed the activation function from SiLU() to LeakyReLU(0.1015625, inplace=True); see the sketch after this list. The reason I did this is that my model will later be deployed on an FPGA board, and the SiLU activation function is not supported there.
5. **Training Platform:** I trained the model on my laptop with RTX4060 GPU with size 8GB
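For reference, this is roughly how the activation swap was made (a sketch; the exact file layout may differ between YOLOv5 releases):

```python
# models/common.py: the default activation is defined on the Conv block
import torch.nn as nn

class Conv(nn.Module):
    # original: default_act = nn.SiLU()
    default_act = nn.LeakyReLU(0.1015625, inplace=True)  # FPGA-friendly swap
```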
I hope you can help me fix this issue, as I am unsure what to try to resolve the problem. Thank you very much in advance for your guidance and support.

### Additional
_No response_ | closed | 2024-01-17T14:42:24Z | 2024-10-20T19:37:37Z | https://github.com/ultralytics/yolov5/issues/12642 | [
"question",
"Stale"
] | IkrameBeggar | 7 |
JoeanAmier/TikTokDownloader | api | 396 | Question: how to download videos from multiple authors directly into one shared folder | Asking the author and the experts here: how can I download the mp4 video files of several authors into the same folder, without separating them into per-author subfolders? For example, with
"root": "C:\\Users\\observer\\Desktop\\project2\\download",
videos should no longer be saved to their own per-author paths, and the path should no longer include the UID. Currently they go to
C:\Users\observer\Desktop\project2\download\UID1799271955046775_rv1_发布作品
I'd like to drop the "UID1799271955046775_xxx_发布作品" segment from the path and save directly to
C:\Users\observer\Desktop\project2\download\rv1
This would make it convenient to process a batch of videos together. Thanks for any pointers. | closed | 2025-01-27T12:19:20Z | 2025-01-27T12:49:32Z | https://github.com/JoeanAmier/TikTokDownloader/issues/396 | [] | 9ihbd2DZSMjtsf7vecXjz | 2 |
uriyyo/fastapi-pagination | fastapi | 1,042 | CursorPage returns total always null | > from fastapi_pagination.cursor import CursorPage
It always returns total as null. Is it that it can't return a cursor at all?
| closed | 2024-02-22T10:40:52Z | 2024-02-22T11:16:54Z | https://github.com/uriyyo/fastapi-pagination/issues/1042 | [
"question"
] | SnoozeFreddo | 2 |
modelscope/modelscope | nlp | 961 | TypeError: modelscope.msdatasets.utils.hf_datasets_util.load_dataset_with_ctx() got multiple values for keyword argument 'trust_remote_code' | In version 1.17 (the current latest on pip), the trust_remote_code=True argument of MsDataset.load() raises this error.
This is probably caused by another issue, #962: datasets is not listed in the modelscope package's requirements, so it has to be installed manually via pip install datasets; that manually installed version is whatever is latest on pip, which is not necessarily compatible with the current modelscope.
Environment: Win10, Python 3.9-3.10
Thanks for your error report and we appreciate it a lot.
**Checklist**
* I have searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs)
* I have searched related issues but cannot get the expected help.
* The bug has not been fixed in the latest version.
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
* What command or script did you run?
> A placeholder for the command.
* Did you make any modifications on the code or config? Did you understand what you have modified?
* What dataset did you use?
**Your Environments (__required__)**
* OS: `uname -a`
* CPU: `lscpu`
* Commit id (e.g. `a3ffc7d8`)
* You may add addition that may be helpful for locating the problem, such as
* How you installed PyTorch [e.g., pip, conda, source]
* Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)
Please @ corresponding people according to your problem:
Model related: @wenmengzhou @tastelikefeet
Model hub related: @liuyhwangyh
Dataset releated: @wangxingjun778
Finetune related: @tastelikefeet @Jintao-Huang
Pipeline related: @Firmament-cyou @wenmengzhou
Contribute your model: @zzclynn
| closed | 2024-08-28T01:56:30Z | 2024-10-01T05:32:03Z | https://github.com/modelscope/modelscope/issues/961 | [] | monetjoe | 2 |
gradio-app/gradio | data-science | 10,379 | [ Gradio Client ] Handling a local audio file. ReadTimeout: The read operation timed out. | Hello,
I'm following the tutorial to run the whisper example from the documentation with a local file. However, I'm hitting a couple of errors.
I'm using gradio_client `1.5.4`, but I also tried `1.5.0` and `1.4.3`; the same behaviour persists. This probably isn't a bug, but rather me using the API badly. Sorry if so.
**A.** When following the example published [here](https://www.gradio.app/docs/python-client/introduction), it works when the file is hosted. However, when I try the same with a local file, I don't receive the correct output.
```python
from gradio_client import Client, handle_file
client = Client("abidlabs/whisper")
results = client.predict(
#audio=handle_file('https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav')
audio=handle_file('test.wav')
)
results
```
Output with local file
```
Loaded as API: https://abidlabs-whisper.hf.space/ ✔
('/tmp/gradio/45665c644e65fc9edcad2a47be86e9a8c33c813652125c7cb039d4461a3c1168/test.wav',)
```
Output with github hosted file
```
Loaded as API: https://abidlabs-whisper.hf.space/ ✔
you
```
**B.** I created a more complex radio app I'm sharing from my PC. When I use it via the GUI on gradio.live, there's no problem. However, if I attempt to use it via `gradio_client`, I obtain the following ReadTimeout error.
```
---------------------------------------------------------------------------
ReadTimeout Traceback (most recent call last)
[/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py](https://localhost:8080/#) in map_httpcore_exceptions()
100 try:
--> 101 yield
102 except Exception as exc:
29 frames
[/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py](https://localhost:8080/#) in handle_request(self, request)
249 with map_httpcore_exceptions():
--> 250 resp = self._pool.handle_request(req)
251
[/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection_pool.py](https://localhost:8080/#) in handle_request(self, request)
255 self._close_connections(closing)
--> 256 raise exc from None
257
[/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection_pool.py](https://localhost:8080/#) in handle_request(self, request)
235 # Send the request on the assigned connection.
--> 236 response = connection.handle_request(
237 pool_request.request
[/usr/local/lib/python3.11/dist-packages/httpcore/_sync/connection.py](https://localhost:8080/#) in handle_request(self, request)
102
--> 103 return self._connection.handle_request(request)
104
[/usr/local/lib/python3.11/dist-packages/httpcore/_sync/http11.py](https://localhost:8080/#) in handle_request(self, request)
135 self._response_closed()
--> 136 raise exc
137
[/usr/local/lib/python3.11/dist-packages/httpcore/_sync/http11.py](https://localhost:8080/#) in handle_request(self, request)
105 trailing_data,
--> 106 ) = self._receive_response_headers(**kwargs)
107 trace.return_value = (
[/usr/local/lib/python3.11/dist-packages/httpcore/_sync/http11.py](https://localhost:8080/#) in _receive_response_headers(self, request)
176 while True:
--> 177 event = self._receive_event(timeout=timeout)
178 if isinstance(event, h11.Response):
[/usr/local/lib/python3.11/dist-packages/httpcore/_sync/http11.py](https://localhost:8080/#) in _receive_event(self, timeout)
216 if event is h11.NEED_DATA:
--> 217 data = self._network_stream.read(
218 self.READ_NUM_BYTES, timeout=timeout
[/usr/local/lib/python3.11/dist-packages/httpcore/_backends/sync.py](https://localhost:8080/#) in read(self, max_bytes, timeout)
125 exc_map: ExceptionMapping = {socket.timeout: ReadTimeout, OSError: ReadError}
--> 126 with map_exceptions(exc_map):
127 self._sock.settimeout(timeout)
[/usr/lib/python3.11/contextlib.py](https://localhost:8080/#) in __exit__(self, typ, value, traceback)
157 try:
--> 158 self.gen.throw(typ, value, traceback)
159 except StopIteration as exc:
[/usr/local/lib/python3.11/dist-packages/httpcore/_exceptions.py](https://localhost:8080/#) in map_exceptions(map)
13 if isinstance(exc, from_exc):
---> 14 raise to_exc(exc) from exc
15 raise # pragma: nocover
ReadTimeout: The read operation timed out
The above exception was the direct cause of the following exception:
ReadTimeout Traceback (most recent call last)
[<ipython-input-8-97f4f20faa6a>](https://localhost:8080/#) in <cell line: 0>()
----> 1 job.result()
[/usr/local/lib/python3.11/dist-packages/gradio_client/client.py](https://localhost:8080/#) in result(self, timeout)
1512 >> 9
1513 """
-> 1514 return super().result(timeout=timeout)
1515
1516 def outputs(self) -> list[tuple | Any]:
[/usr/lib/python3.11/concurrent/futures/_base.py](https://localhost:8080/#) in result(self, timeout)
447 raise CancelledError()
448 elif self._state == FINISHED:
--> 449 return self.__get_result()
450
451 self._condition.wait(timeout)
[/usr/lib/python3.11/concurrent/futures/_base.py](https://localhost:8080/#) in __get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
[/usr/lib/python3.11/concurrent/futures/thread.py](https://localhost:8080/#) in run(self)
56
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
[/usr/local/lib/python3.11/dist-packages/gradio_client/client.py](https://localhost:8080/#) in _inner(*data)
1130
1131 data = self.insert_empty_state(*data)
-> 1132 data = self.process_input_files(*data)
1133 predictions = _predict(*data)
1134 predictions = self.process_predictions(*predictions)
[/usr/local/lib/python3.11/dist-packages/gradio_client/client.py](https://localhost:8080/#) in process_input_files(self, *data)
1285 data_ = []
1286 for i, d in enumerate(data):
-> 1287 d = utils.traverse(
1288 d,
1289 partial(self._upload_file, data_index=i),
[/usr/local/lib/python3.11/dist-packages/gradio_client/utils.py](https://localhost:8080/#) in traverse(json_obj, func, is_root)
998 """
999 if is_root(json_obj):
-> 1000 return func(json_obj)
1001 elif isinstance(json_obj, dict):
1002 new_obj = {}
[/usr/local/lib/python3.11/dist-packages/gradio_client/client.py](https://localhost:8080/#) in _upload_file(self, f, data_index)
1357 with open(file_path, "rb") as f:
1358 files = [("files", (orig_name.name, f))]
-> 1359 r = httpx.post(
1360 self.client.upload_url,
1361 headers=self.client.headers,
[/usr/local/lib/python3.11/dist-packages/httpx/_api.py](https://localhost:8080/#) in post(url, content, data, files, json, params, headers, cookies, auth, proxy, follow_redirects, verify, timeout, trust_env)
302 **Parameters**: See `httpx.request`.
303 """
--> 304 return request(
305 "POST",
306 url,
[/usr/local/lib/python3.11/dist-packages/httpx/_api.py](https://localhost:8080/#) in request(method, url, params, content, data, files, json, headers, cookies, auth, proxy, timeout, follow_redirects, verify, trust_env)
107 trust_env=trust_env,
108 ) as client:
--> 109 return client.request(
110 method=method,
111 url=url,
[/usr/local/lib/python3.11/dist-packages/httpx/_client.py](https://localhost:8080/#) in request(self, method, url, content, data, files, json, params, headers, cookies, auth, follow_redirects, timeout, extensions)
823 extensions=extensions,
824 )
--> 825 return self.send(request, auth=auth, follow_redirects=follow_redirects)
826
827 @contextmanager
[/usr/local/lib/python3.11/dist-packages/httpx/_client.py](https://localhost:8080/#) in send(self, request, stream, auth, follow_redirects)
912 auth = self._build_request_auth(request, auth)
913
--> 914 response = self._send_handling_auth(
915 request,
916 auth=auth,
[/usr/local/lib/python3.11/dist-packages/httpx/_client.py](https://localhost:8080/#) in _send_handling_auth(self, request, auth, follow_redirects, history)
940
941 while True:
--> 942 response = self._send_handling_redirects(
943 request,
944 follow_redirects=follow_redirects,
[/usr/local/lib/python3.11/dist-packages/httpx/_client.py](https://localhost:8080/#) in _send_handling_redirects(self, request, follow_redirects, history)
977 hook(request)
978
--> 979 response = self._send_single_request(request)
980 try:
981 for hook in self._event_hooks["response"]:
[/usr/local/lib/python3.11/dist-packages/httpx/_client.py](https://localhost:8080/#) in _send_single_request(self, request)
1012
1013 with request_context(request=request):
-> 1014 response = transport.handle_request(request)
1015
1016 assert isinstance(response.stream, SyncByteStream)
[/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py](https://localhost:8080/#) in handle_request(self, request)
247 extensions=request.extensions,
248 )
--> 249 with map_httpcore_exceptions():
250 resp = self._pool.handle_request(req)
251
[/usr/lib/python3.11/contextlib.py](https://localhost:8080/#) in __exit__(self, typ, value, traceback)
156 value = typ()
157 try:
--> 158 self.gen.throw(typ, value, traceback)
159 except StopIteration as exc:
160 # Suppress StopIteration *unless* it's the same exception that
[/usr/local/lib/python3.11/dist-packages/httpx/_transports/default.py](https://localhost:8080/#) in map_httpcore_exceptions()
116
117 message = str(exc)
--> 118 raise mapped_exc(message) from exc
119
120
ReadTimeout: The read operation timed out
```
Thanks in advance.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
This is the code for the first attempt with whisper:
```python
from gradio_client import Client, handle_file
client = Client("abidlabs/whisper")
results = client.predict(
audio=handle_file('test.wav')
)
results
```
This is the code for my gradio local app. I'll keep it open for some time.
```
from gradio_client import Client, handle_file
client = Client("https://bc3c5d74ec992db205.gradio.live")
audio_file = handle_file("/content/test.wav")
result = client.predict(
audio_file=audio_file, # Local file named "audio.wav"
model="base", # Whisper model: "tiny", "base", "small", "medium", "large", or "large-v2"
task="transcribe", # Task: "transcribe" or "translate"
language="auto", # Source language: "auto", "en", "es", etc.
api_name="/process_audio" # The Gradio endpoint name
)
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
It says `gradio_client` is not installed, but it actually is.
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.12.0
gradio_client version: 1.5.4
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.4 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.14
packaging: 24.2
pandas: 2.2.2
pillow: 11.1.0
pydantic: 2.10.5
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.2
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.28.1
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.1
```
### Severity
Blocking usage of gradio | open | 2025-01-17T08:27:24Z | 2025-01-17T17:46:27Z | https://github.com/gradio-app/gradio/issues/10379 | [
"bug"
] | caviri | 2 |
graphql-python/flask-graphql | graphql | 90 | python 3.10+ MutableMapping ImportError | After Python 3.9, collections no longer re-exports MutableMapping (it lives in collections.abc), which causes an import error in this library.
I am currently running 3.11 and get the following error message:

When I roll my python version back to 3.9 this issue goes away. | closed | 2022-08-16T02:21:47Z | 2022-08-16T02:28:22Z | https://github.com/graphql-python/flask-graphql/issues/90 | [] | wes-public-apps | 1 |
frol/flask-restplus-server-example | rest-api | 51 | problem install with PostgreSQL database | Hello
I'm trying to use a PostgreSQL database to make your example work. I'm getting errors in the logs like this:
> Traceback (most recent call last):
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
psycopg2.ProgrammingError: column "password" cannot be cast automatically to type bytea
HINT: You might need to specify "USING password::bytea".
>The above exception was the direct cause of the following exception:
> Traceback (most recent call last):
File "/home/me/.virtualenvs/flask-noota-api/bin/invoke", line 11, in <module>
sys.exit(program.run())
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/program.py", line 293, in run
self.execute()
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/program.py", line 408, in execute
executor.execute(*self.tasks)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/executor.py", line 114, in execute
result = call.task(*args, **call.kwargs)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/tasks.py", line 114, in __call__
result = self.body(*args, **kwargs)
File "/home/me/CODER/python/flask-noota-api/tasks/app/_utils.py", line 61, in wrapper
return func(*args, **kwargs)
File "/home/me/CODER/python/flask-noota-api/tasks/app/db.py", line 277, in init_development_data
context.invoke_execute(context, 'app.db.upgrade')
File "/home/me/CODER/python/flask-noota-api/tasks/__init__.py", line 73, in invoke_execute
results = Executor(namespace, config=context.config).execute((command_name, kwargs))
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/executor.py", line 114, in execute
result = call.task(*args, **call.kwargs)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/invoke/tasks.py", line 114, in __call__
result = self.body(*args, **kwargs)
File "/home/me/CODER/python/flask-noota-api/tasks/app/_utils.py", line 61, in wrapper
return func(*args, **kwargs)
File "/home/me/CODER/python/flask-noota-api/tasks/app/db.py", line 163, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/command.py", line 174, in upgrade
script.run_env()
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/script/base.py", line 416, in run_env
util.load_python_file(self.dir, 'env.py')
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/util/compat.py", line 68, in load_module_py
module_id, path).load_module(module_id)
File "<frozen importlib._bootstrap_external>", line 388, in _check_name_wrapper
File "<frozen importlib._bootstrap_external>", line 809, in load_module
File "<frozen importlib._bootstrap_external>", line 668, in load_module
File "<frozen importlib._bootstrap>", line 268, in _load_module_shim
File "<frozen importlib._bootstrap>", line 693, in _load
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "migrations/env.py", line 93, in <module>
run_migrations_online()
File "migrations/env.py", line 86, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/runtime/environment.py", line 807, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/runtime/migration.py", line 321, in run_migrations
step.migration_fn(**kw)
File "/home/me/CODER/python/flask-noota-api/migrations/versions/36954739c63_.py", line 28, in upgrade
existing_nullable=False)
File "/usr/lib/python3.5/contextlib.py", line 66, in __exit__
next(self.gen)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/operations/base.py", line 299, in batch_alter_table
impl.flush()
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/operations/batch.py", line 57, in flush
fn(*arg, **kw)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/ddl/postgresql.py", line 91, in alter_column
existing_nullable=existing_nullable,
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/alembic/ddl/impl.py", line 118, in _exec
return conn.execute(construct, *multiparams, **params)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1002, in _execute_ddl
compiled
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1393, in _handle_dbapi_exception
exc_info
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/home/me/.virtualenvs/flask-noota-api/lib/python3.5/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) column "password" cannot be cast automatically to type bytea
HINT: You might need to specify "USING password::bytea".
[SQL: 'ALTER TABLE "user" ALTER COLUMN password TYPE BYTEA ']
-------------------
tested with
- Python 3.5.2
- (PostgreSQL) 9.4.10
installed dependencies
> alembic==0.8.10
aniso8601==1.2.0
apispec==0.19.0
appdirs==1.4.2
arrow==0.8.0
bcrypt==3.1.3
cffi==1.9.1
click==6.7
colorlog==2.10.0
Flask==0.12
Flask-Cors==3.0.2
Flask-Login==0.4.0
flask-marshmallow==0.7.0
Flask-OAuthlib==0.9.3
flask-restplus==0.10.1
Flask-SQLAlchemy==2.2
invoke==0.15.0
itsdangerous==0.24
Jinja2==2.9.5
jsonschema==2.6.0
lockfile==0.12.2
Mako==1.0.6
MarkupSafe==0.23
marshmallow==2.13.1
marshmallow-sqlalchemy==0.12.0
oauthlib==2.0.1
packaging==16.8
passlib==1.7.1
permission==0.4.1
psycopg2==2.7
pycparser==2.17
pyparsing==2.1.10
python-dateutil==2.6.0
python-editor==1.0.3
pytz==2016.10
PyYAML==3.12
requests==2.13.0
requests-oauthlib==0.8.0
six==1.10.0
SQLAlchemy==1.1.5
SQLAlchemy-Utils==0.32.12
webargs==1.5.3
Werkzeug==0.11.15
Any idea?
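In case it helps, here is a sketch of the workaround suggested by the hint in the error, passing the explicit cast through to Alembic (assuming the migration targets the `user.password` column):

```python
# inside the generated migration's upgrade()
import sqlalchemy as sa
from alembic import op

op.alter_column(
    "user",
    "password",
    type_=sa.LargeBinary(),
    existing_nullable=False,
    postgresql_using="password::bytea",  # the USING cast PostgreSQL asks for
)
```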
| closed | 2017-03-05T18:03:21Z | 2017-03-07T15:17:33Z | https://github.com/frol/flask-restplus-server-example/issues/51 | [] | repodevs | 7 |
home-assistant/core | asyncio | 140,900 | Local Calendar - Repeating Events on 1st Saturday of Month | ### The problem
When creating a new event in the Local Calendar integration, using Repeat Event - 1st Saturday (or any day) doesn't create the event on the first Saturday; it creates it on the 1st day of each month.
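For reference, the iCalendar recurrence rule that "first Saturday of the month" should produce (per RFC 5545) is shown below; my guess is that the integration is instead emitting a day-of-month rule, but that is speculation on my part.

```
RRULE:FREQ=MONTHLY;BYDAY=+1SA
```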
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
NA
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
local_calendar
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/local_calendar
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
It's not clear if this is a bug or by design. Looking for guidance. Will submit a feature request if this is working as designed. | open | 2025-03-19T00:45:40Z | 2025-03-19T00:59:08Z | https://github.com/home-assistant/core/issues/140900 | [
"integration: local_calendar"
] | livetoautomate | 4 |
tfranzel/drf-spectacular | rest-api | 1,189 | How versioning works is not clear! | First of all, I would like to thank the authors for this wonderful module!
**Describe the bug**
I don't quite understand the behavior of this module specifically in my case.
I write my API with the following architecture:

The endpoints are available at the following addresses:
- path('api/v1/', include('api.urls_v1'))
- path('api/v2/', include('api.urls_v2'))
**Main urls file**
```
urlpatterns = [
path('api/v1/', include('api.urls_v1')),
path('api/v2/', include('api.urls_v2')),
path('api/schema/', SpectacularAPIView.as_view(api_version='v1'), name='schema_v1'),
path('api/doc/', SpectacularSwaggerView.as_view(url_name='schema_v1'), name='swagger'),
path('admin/', admin.site.urls),
path('api-auth/', include('rest_framework.urls')),
path('token/', TokenObtainPairView.as_view(), name='token_obtain_pair'),
path('token/refresh/', TokenRefreshView.as_view(), name='token_refresh'),
]
```
**My settings file**
```
REST_FRAMEWORK = {
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.DjangoModelPermissionsOrAnonReadOnly',
),
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework_simplejwt.authentication.JWTAuthentication',
),
'DEFAULT_SCHEMA_CLASS': 'drf_spectacular.openapi.AutoSchema',
'DEFAULT_VERSIONING_CLASS': 'rest_framework.versioning.URLPathVersioning',
}
```
And generated swagger scheme.

**Expected behavior**
I expect to get endpoints that are grouped by application. Right now they are all mixed together in a single block named api (apparently this is the project name).
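A sketch of one thing I could try, assuming tags are derived from the first path segment after a configurable prefix (the setting name is from the drf-spectacular docs; the regex is my guess for this URL layout):

```python
SPECTACULAR_SETTINGS = {
    # strip the version prefix so /api/v1/users/... is tagged "users",
    # not "api"
    "SCHEMA_PATH_PREFIX": r"/api/v[0-9]+",
}
```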
| closed | 2024-02-29T07:41:17Z | 2024-02-29T10:08:26Z | https://github.com/tfranzel/drf-spectacular/issues/1189 | [] | ElveeBolt | 5 |
HumanSignal/labelImg | deep-learning | 134 | cannot set 3 different labels on the same image | I'm trying to set 3 different labels on the same image, but all of them appear to be right.
<img width="932" alt="screen shot 2017-07-30 at 18 33 51" src="https://user-images.githubusercontent.com/985808/28754821-05bf570c-7556-11e7-94e2-f470104af3a6.png">
| closed | 2017-07-30T15:36:46Z | 2017-08-01T08:21:50Z | https://github.com/HumanSignal/labelImg/issues/134 | [] | ofersa | 1 |
FujiwaraChoki/MoneyPrinter | automation | 211 | [BUG] An error occurred during TTS: Incorrect padding | **Describe the bug**
[-] An error occurred during TTS: Incorrect padding
MoviePy then fails to find the mp3 file. | closed | 2024-02-12T16:29:52Z | 2024-02-15T21:35:32Z | https://github.com/FujiwaraChoki/MoneyPrinter/issues/211 | [] | Syycorax | 3 |
vaexio/vaex | data-science | 2,252 | Slow groupby after adding column from array | I have an original file with 100M lines. I create a dfv by importing it from .csv via vaex.from_csv. I filter some of the data frame according to certain conditions to create dfv_filtered. I run groupby and aggregate via sum on one of the columns. This runs fine in about 10 sec.
I now take dfv_filtered and cast one of its columns to an array via dfv_filtered.x.values. I turn this into a numpy array, manipulate it to my liking, and then add it back to dfv_filtered via dfv_filtered['new_column'] = name_of_np_array. I then create yet another column by multiplying dfv_filtered['new_column'] * dfv_filtered['existing_column']. Now when I run groupby it takes several minutes, and I don't understand why. The dtypes are all the same and the dataframe still seems virtual, so why would it take much longer?
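For concreteness, a sketch of the workflow just described (file, column and key names are made up):

```python
import numpy as np
import vaex

dfv = vaex.from_csv("data.csv", convert=True)  # 100M rows
dfv_filtered = dfv[dfv.x > 0]  # filter on some condition

arr = dfv_filtered.x.values  # materialize the column into memory
new_col = np.log1p(np.asarray(arr))  # arbitrary manipulation
dfv_filtered["new_column"] = new_col  # added as an in-memory column
dfv_filtered["product"] = dfv_filtered.new_column * dfv_filtered.existing_column

# this groupby is the one that now takes several minutes
dfv_filtered.groupby(by="key", agg={"total": vaex.agg.sum("product")})
```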
If I simply take dfv_filtered and copy one of its existing columns over and over and add it as a new column each time, and then run groupby, it still runs ~10 sec.
Which step of my process is the one making it slower? | open | 2022-10-27T18:49:22Z | 2022-10-28T13:59:45Z | https://github.com/vaexio/vaex/issues/2252 | [] | statsrunner | 1 |
JaidedAI/EasyOCR | pytorch | 443 | Text block detection | Hi Team..
Is there a way to detect text blocks using this tool?

In the example above, I need the entire text content to be detected as one block (which is the address) rather than as separate text fragments.
Any help will be appreciated!!
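In case it's relevant, `readtext` appears to have a `paragraph` option that merges nearby detections into blocks; a minimal sketch (note that paragraph mode drops the per-box confidence):

```python
import easyocr

reader = easyocr.Reader(['en'])
# paragraph=True merges adjacent boxes into text blocks
results = reader.readtext('address.png', paragraph=True)
for bbox, text in results:
    print(text)
```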
| closed | 2021-05-31T14:01:14Z | 2021-06-02T09:09:36Z | https://github.com/JaidedAI/EasyOCR/issues/443 | [] | ghareesh | 1 |
sangaline/wayback-machine-scraper | web-scraping | 19 | Error 429 + Scraper gives up | Many moons ago, Internet Archive added some rate limiting that seems to also affect Wayback Machine.
( See discussion on similar project here https://github.com/buren/wayback_archiver/issues/32 )
The scraper scrapes too fast, and gets IP banned for 5 minutes by Wayback Machine.
As a result, all the remaining URLs in the pipeline fail repeatedly, Scrapy gives up on all of them and says "we're done!"
```
...
2023-11-09 22:09:57 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://web.archive.org/cdx/search/cdx?url=www.example.com/blog/stuff&output=json&fl=timestamp,original,statuscode,digest> (failed 3 times): 429 Unknown Status
2023-11-09 22:09:57 [scrapy.core.engine] INFO: Closing spider (finished)
```
I see two issues here:
1. Add a global rate limit (I don't think the concurrency flag covers this?)
1.b. If we get a 429, increase the delay? (Ideally should not occur, as the limit appears to be constant? Although this page https://web.archive.org/429.html suggests that the error can occur randomly if Wayback is getting a lot of traffic from other people.)
Also, if we get a 429, that seems to mean the IP has been banned for 5 minutes, so we should just pause the scraper for that time? (Making any requests during this time may possibly extend the block?)
2. (Unnecessary if previous points handled?) Increase the retry limit from 3 to something much higher? Again, this matters less if we approach scraping with a "backoff" strategy.
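A rough sketch of what the pause-on-429 idea from 1.b might look like as a Scrapy downloader middleware (untested; blocking the reactor with `time.sleep` is crude but is a common pattern for small crawls):

```python
import time

from scrapy.downloadermiddlewares.retry import RetryMiddleware

class TooManyRequestsRetryMiddleware(RetryMiddleware):
    """Pause the whole crawl when Wayback answers 429, then retry."""

    def process_response(self, request, response, spider):
        if response.status == 429:
            spider.crawler.engine.pause()
            time.sleep(300)  # the ban appears to last ~5 minutes
            spider.crawler.engine.unpause()
            return self._retry(request, response.status, spider) or response
        return super().process_response(request, response, spider)
```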
---
TODO:
1. Find out exactly what the rate limit is: May be 5 per minute, or may be 15 per minute? (12 or 4s delay respectively.)
They seem to have changed it several times. Not sure if there are official numbers.
https://archive.org/details/toomanyrequests_20191110
This page says it's 15. It only mentions _submitting_ URLs, but it appears to cover retrievals too.
2. Find out if this project already does rate limiting. Edit: Sorta, but not entirely sufficient for this use case? (e.g. no 5-minute backoff on 429, autothrottle does not guarantee <X/minute, etc.)
Seems to be using Scrapy's autothrottle, so the fix may be as simple as updating the start delay and default concurrency:
`__main__.py`
```
'AUTOTHROTTLE_START_DELAY': 4, # aiming for 15 per minute
```
and
```
parser.add_argument('-c', '--concurrency', default=1.0, help=(
```
This doesn't seem to be sufficient to limit to 15/minute though, as I am getting mostly >15/min with these settings (and as high as 29 sometimes). But Wayback did not complain, so it seems the limit is higher than that.
More work needed. May report back later.
Edit: AutoThrottle docs say `AUTOTHROTTLE_TARGET_CONCURRENCY` represents the **average,** not the maximum. Which means if Wayback has a hard limit of X req/sec, setting X as the target would lead by definition to exceeding that limit 50% of the time. | open | 2023-11-10T00:18:33Z | 2023-11-23T12:10:04Z | https://github.com/sangaline/wayback-machine-scraper/issues/19 | [] | avelican | 2 |
ultralytics/ultralytics | python | 19,061 | How to replace the optimizer, can you give specific steps? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
How to replace the optimizer, can you give specific steps?
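From what I can tell, the documented route is the `optimizer` training argument; a minimal sketch (model and dataset names are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# optimizer accepts e.g. "SGD", "Adam", "AdamW", "RMSProp", or "auto" (default)
model.train(data="coco8.yaml", epochs=100, optimizer="AdamW", lr0=1e-3)
```

Is that the intended way, or are there more steps involved?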
### Additional
_No response_ | open | 2025-02-04T09:55:24Z | 2025-02-05T04:40:42Z | https://github.com/ultralytics/ultralytics/issues/19061 | [
"question"
] | yangershuai627 | 4 |
ultralytics/ultralytics | machine-learning | 19,335 | YOLO + OpenCV: Stream Decoding Issues | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I am attempting to use YOLO to perform real-time object detection on an RTSP stream from my Raspberry Pi connected to a camera. When I process the stream in real-time (a direct stream input), there are no artifacts, and it runs fine. However, when I process the stream frame by frame, I get many artifacts and the error 'h264 error while decoding MB'. Could this be related to the rate at which frames are being processed? I am running on a powerful machine, so I can rule out hardware limitations. Is there a way I can process the stream frame by frame without experiencing these artifacts?

### Additional
_No response_ | open | 2025-02-20T14:48:41Z | 2025-02-20T18:03:32Z | https://github.com/ultralytics/ultralytics/issues/19335 | [
"question",
"detect",
"embedded"
] | caisamuels | 2 |
huggingface/datasets | computer-vision | 7,085 | [Regression] IterableDataset is broken on 2.20.0 | ### Describe the bug
In the latest version of datasets there is a major regression, after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times.
The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't.
### Steps to reproduce the bug
Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`)
```
#!/bin/bash
# List of dataset versions to test
versions=("2.17.0" "2.20.0")
# Loop through each version
for version in "${versions[@]}"; do
# Install the specific version of the datasets library
pip3 install -q datasets=="$version" 2>/dev/null
# Run the Python script
python3 - <<EOF
from datasets import IterableDataset
from datasets.features.features import Features, Value
def test_gen():
yield from [{"foo": i} for i in range(10)]
features = Features([("foo", Value("int64"))])
d = IterableDataset.from_generator(test_gen, features=features)
mapped = d.map(lambda row: {"foo": row["foo"] * 2})
column = mapped.select_columns(["foo"])
print("Version $version - Iterate Once:", list(column))
print("Version $version - Iterate Twice:", list(column))
EOF
done
```
The output looks like this:
```
Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Twice: []
```
### Expected behavior
The expected behavior is that version 2.20.0 should behave the same as 2.17.0.
### Environment info
`datasets==2.20.0` on any platform. | closed | 2024-07-31T13:01:59Z | 2024-08-22T14:49:37Z | https://github.com/huggingface/datasets/issues/7085 | [] | AjayP13 | 3 |
yeongpin/cursor-free-vip | automation | 8 | Already starred | Already starred.
This machine's ID: 408D5C42CC08
_Originally posted by @lookoupai in https://github.com/yeongpin/cursor-free-vip/issues/4#issuecomment-2585373086_
| closed | 2025-01-13T02:23:31Z | 2025-01-13T02:27:55Z | https://github.com/yeongpin/cursor-free-vip/issues/8 | [] | llkj0001 | 0 |
keras-team/keras | tensorflow | 20,860 | TorchModuleWrapper serialization issue | I would like to open this issue to revive a discussion started in a previous issue [(#19226)](https://github.com/keras-team/keras/issues/19226). While the previous issue seems to be inactive, the potential bug seems to still be present. I hope this is fine.
The problem arises when trying to save a model containing a `TorchModuleWrapper` layer (therefore using PyTorch as the backend).
I referenced the original issue and in particular my latest comment below for more details:
> This bug is currently still present. The following is a minimal snippet that can reproduce it:
> ```python
> import os
> os.environ["KERAS_BACKEND"] = "torch"
> import torch
> import keras
>
> torch_module = torch.nn.Linear(4,4)
> keras_layer = keras.layers.TorchModuleWrapper(torch_module)
>
> inputs = keras.Input(shape=(4,))
> outputs = keras_layer(inputs)
> model = keras.Model(inputs=inputs, outputs=outputs)
>
> model.save('./serialized.keras')
> ```
>
> The error is:
>
> ```
> UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
> ```
> generated in [keras.src.saving.serialization_lib.serialize_keras_object](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/saving/serialization_lib.py#L150)
>
> It is worth noting that manually using [`get_config`](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/utils/torch_utils.py#L141) and [`from_config`](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/utils/torch_utils.py#L151) to serialize and deserialize (in memory) produce the correct result:
>
> ```python
> torch_linear = torch.nn.Linear(4,4) # xA^T+b with initialized weights
> wrapped_torch = TorchModuleWrapper(torch_linear) # Wrap it
>
> # get its config, and rebuild it
> torch_linear_from_config = keras.layers.TorchModuleWrapper.from_config(wrapped_torch.get_config()).module
>
> # assert all parameters are the same
> assert (torch_linear.weight == torch_linear_from_config.weight).all()
> assert (torch_linear.bias == torch_linear_from_config.bias).all()
> ```
>
> What `get_config()` does is map `module` (a torch object) to its serialized string (coming from `torch.save(self.module, buffer)`). I believe it is wrong to use the utf-8 in [serialize_keras_object(obj)](https://github.com/keras-team/keras/blob/fbf0af76130beecae2273a513242255826b42c04/keras/src/saving/serialization_lib.py#L154), since that encoding is specifically meant for text and not arbitrary bytes.
>
> Does anybody have an idea about it?
> Thank you for any help on this!
>
> I got this error with both:
> - python 3.10, keras 3.7.0, torch 2.5.1+cu124
> - python 3.11, keras 3.8.0, torch 2.5.1+cu124
>
_Originally posted by @MicheleCattaneo in [#19226](https://github.com/keras-team/keras/issues/19226#issuecomment-2607726028)_
As I am highly interested in using Keras3 with PyTorch modules, I am willing to contribute to a potential solution to this issue. I would however appreciate some guidance, as I am not very familiar with the Keras code base.
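For what it's worth, one direction consistent with the analysis above would be to base64-encode the raw `torch.save` bytes before they reach the JSON serializer (an untested sketch, not the actual Keras fix):

```python
import base64
import io

import torch

module = torch.nn.Linear(4, 4)  # stand-in for the wrapped module

# in get_config(): encode the module bytes as ASCII-safe text
buffer = io.BytesIO()
torch.save(module, buffer)
config = {"module": base64.b64encode(buffer.getvalue()).decode("ascii")}

# in from_config(): decode back to bytes before torch.load
raw = base64.b64decode(config["module"])
restored = torch.load(io.BytesIO(raw))
```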
Thank you for any help! | open | 2025-02-05T13:10:48Z | 2025-02-05T14:25:02Z | https://github.com/keras-team/keras/issues/20860 | [
"type:Bug",
"backend:torch"
] | MicheleCattaneo | 0 |
StructuredLabs/preswald | data-visualization | 175 | [FEATURE] Create a New Topbar Component | #### **Overview**
The **default top bar** in every Preswald app is currently **hardcoded** inside `preswald/frontend/layout/`. This should be refactored into a **separate Preswald component**.
### Adding a new component
https://docs.preswald.com/addnewcomponent
#### **Changes Required**
1. **Move the existing top bar code** from the default layout into its own widget, similar to other components such as selectbox.
2. **Expose `topbar` as a Preswald component** that users can explicitly include:
```python
from preswald import topbar
topbar()
```
3. **Remove the sidebar toggle button** from the top bar (since it is now in the sidebar).
4. **Ensure default behavior remains unchanged** for apps that do not include `topbar()` explicitly.
#### **Testing**
- Create a sample preswald app using `preswald init`, test it both with and without `topbar()`, and make sure everything works.
#### **Update Documentation**
- Add `topbar` documentation to `docs/sdk/topbar.md`, including examples and screenshots.
- Update `preswald/tutorial` with an example of how to use the `topbar` component.
- Run `preswald tutorial` and verify that the top bar is included only when explicitly added.
| open | 2025-03-11T06:50:10Z | 2025-03-17T03:23:41Z | https://github.com/StructuredLabs/preswald/issues/175 | [
"enhancement"
] | amrutha97 | 4 |
jupyter-book/jupyter-book | jupyter | 1,673 | Dynamic configuration / environment variables / etc with book builds | ### Describe the problem/need and solution
Currently we use a static configuration file (`_config.yml`) for all of the book's configuration. However, there are some cases where you want to dynamically choose configuration at build time. For example, "set a configuration value based on an environment variable."
This isn't currently possible with static configuration, but it *is* possible in Sphinx. We could find some way to allow a user to dynamically update their configuration (or run arbitrary Python code) at build time.
### Guide for implementation
**Current build process**
Here's where we invoke Sphinx:
https://github.com/executablebooks/jupyter-book/blob/aedee257645ee41906c4d64f66f71b7f0dc7acfa/jupyter_book/cli/main.py#L307-L321
In that case, we explicitly set `noconfig=True`, which means that Sphinx does not expect any `conf.py` file to exist.
We then generate a dictionary of Sphinx config, and pass it to the Sphinx build command as "overrides":
https://github.com/executablebooks/jupyter-book/blob/aedee257645ee41906c4d64f66f71b7f0dc7acfa/jupyter_book/sphinx.py#L114-L129
We also already have the ability to generate a `conf.py` file from a `_config.yml` file:
https://github.com/executablebooks/jupyter-book/blob/aedee257645ee41906c4d64f66f71b7f0dc7acfa/jupyter_book/cli/main.py#L458
### Three ideas for implementation
There a few ways we could add this functionality:
1. **Support `conf.py`**. We could allow users to add a `conf.py` (maybe we'd call it `_config.py`?) that we'd point to during the Sphinx build. This would behave no differently from how Sphinx currently handles it.
2. **Generate a `conf.py` at build time, and add a `extraConfig` block**. Instead of using configuration over-rides, we could generate a **temporary `conf.py` file** that was created via the function above. We could then support a configuration block that would contain arbitrary Python code to be run, and that could over-ride / set configuration values (by being added to the end of the `conf.py` file. This is similar to [how JupyterHub uses `extraConfig`](https://zero-to-jupyterhub.readthedocs.io/en/latest/resources/reference.html#hub-extraconfig).
3. **Pre-process config.yml with jinja**. We could also add a pre-processing step before we parse the `config.yml` file. This would let users do something like [ansible style variable injection](https://github.com/executablebooks/jupyter-book/issues/1673#issuecomment-1085388535).
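For illustration, a hypothetical `_config.yml` under option 3 (this syntax is invented here, not an existing feature):

```yaml
# _config.yml, pre-processed with jinja before parsing (hypothetical)
title: "{{ env.BOOK_TITLE | default('My Book') }}"
repository:
  url: "{{ env.REPO_URL }}"
```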
### Suggestion
After some discussion below, it seems like path 3 above has the most support for adding this functionality. Especially if we followed patterns that were already common in other frameworks, it would be a way to provide some dynamic configuration without supporting the total flexibility of a `conf.py` file.
### Tasks and updates
_No response_ | open | 2022-03-22T17:47:53Z | 2024-06-24T18:34:50Z | https://github.com/jupyter-book/jupyter-book/issues/1673 | [
"enhancement"
] | choldgraf | 17 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 310 | Why is there no option to set node and edge non-null on the connection field? | I've been trying to set node and edge to non-null on a connection field,
because a frontend engineer told me that he has a lot of cases to handle if node and edge are nullable.
Is there a specific reason that node and edge are set to nullable?
"enhancement",
"waiting"
] | mz-ericlee | 1 |
ivy-llc/ivy | numpy | 27,868 | Fix Frontend Failing Test: paddle - tensor.torch.Tensor.__gt__ | closed | 2024-01-07T23:37:49Z | 2024-01-07T23:48:41Z | https://github.com/ivy-llc/ivy/issues/27868 | [
"Sub Task"
] | NripeshN | 0 |
|
roboflow/supervision | computer-vision | 1,070 | Where does the processed video get saved? | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Apologies for asking such a newbie question. Where does the processed file get saved after running the inference_file_example.py script? I am using VS Code to run this locally on my CPU. I could see the video as it was processing, but as soon as it finished I couldn't find it anywhere.
I am using time_in_zone
Thanks
### Additional
_No response_ | closed | 2024-03-28T20:27:44Z | 2024-03-29T12:24:02Z | https://github.com/roboflow/supervision/issues/1070 | [
"question"
] | Ryugon07 | 1 |
hyperspy/hyperspy | data-visualization | 2,939 | EELS remove background doesn't work | Hi.
I want to remove EELS SI background using s.remove_background with fixed energy window.
A "interactive" option works well. (Setting energy window manually)
```highlossalign.remove_background(background_type = "Power law", zero_fill=True, fast=True)```
But when I set the energy window, It cannot remove background
```highlossalign.remove_background(signal_range=(825., 849.), background_type = "Power law", zero_fill=True, fast=True)```
(energy size(channel): 2048, offset 800eV)
How can I solve removing background without setting manually? | closed | 2022-05-15T06:46:01Z | 2022-05-29T14:07:14Z | https://github.com/hyperspy/hyperspy/issues/2939 | [] | Ruiky94 | 3 |
mars-project/mars | numpy | 3,045 | [BUG] ModuleNotFoundError: No module named 'mars.lib.sparse.coo' | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
It seems that the `SparseNDArray` type does not support `COONDArray` any more. `SparseNDArray` is a special data type implemented by Mars; like tensor or dataframe, it may be returned as the result of an operand.
For example, a `SparseNDArray` may be returned to the user:
``` python
raw = sps.random(10, 5, density=0.2)
arr = tensor(raw, chunk_size=3)
arr2 = arr.astype("i8")
res = arr2.execute().fetch() # {mars.lib.sparse.matrix.SparseMatrix: (10, 5)}
```
So, we should make it not only serializable by Mars itself but also pickleable.
The `ModuleNotFoundError: No module named 'mars.lib.sparse.coo'` is raised when unpickling a `SparseMatrix`: the type `SparseMatrix` calls `__new__` first, and `SparseMatrix.__new__` calls the super `SparseNDArray.__new__`. But `SparseNDArray.__new__` is a special method; it constructs different types according to the input params.
When unpickling, the input params of `SparseNDArray.__new__` are empty, so it falls through to the stale branch:
```python
def __new__(cls, *args, **kwargs):
shape = kwargs.get("shape", None)
if shape is not None and len(shape) == 1:
from .vector import SparseVector
return object.__new__(SparseVector)
if len(args) == 1 and issparse(args[0]) and args[0].ndim == 2:
from .matrix import SparseMatrix
return object.__new__(SparseMatrix)
else:
# When unpickling, it goes here.
from .coo import COONDArray
return object.__new__(COONDArray)
```
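One possible direction (untested, just a sketch) is to skip the dispatch when a concrete subclass is being constructed, which is exactly what pickle does when it calls `__new__` with no arguments:

```python
def __new__(cls, *args, **kwargs):
    if cls is not SparseNDArray:
        # SparseMatrix.__new__ (e.g. during unpickling) arrives here with
        # no args; construct the requested subclass directly.
        return object.__new__(cls)
    ...  # existing dispatch on shape/args
```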
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version 3.7.7
2. The version of Mars you use Latest master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| closed | 2022-05-18T03:18:37Z | 2022-05-18T08:48:19Z | https://github.com/mars-project/mars/issues/3045 | [] | fyrestone | 0 |
rthalley/dnspython | asyncio | 1,029 | DoH3 or HTTP3 | **Motivation**
We can utilize HTTP/3 in the DoH implementation. Cloudflare, Google and NextDNS servers already support this!
<img width="1125" alt="Screenshot 2024-01-03 at 01 23 10" src="https://github.com/rthalley/dnspython/assets/125150101/a760e0c8-8553-4f65-a303-faa393aeef97">
*NextDNS log*
**Describe the solution you'd like.**
Enable HTTP/3 in DoH by default; if it is not available, fall back to HTTP/2.
| closed | 2024-01-02T17:24:53Z | 2024-02-24T13:35:04Z | https://github.com/rthalley/dnspython/issues/1029 | [
"Enhancement Request",
"Future"
] | UjuiUjuMandan | 3 |
ivy-llc/ivy | numpy | 28,069 | Fix Ivy Failing Test: paddle - elementwise.divide | closed | 2024-01-27T11:59:18Z | 2024-02-05T13:49:15Z | https://github.com/ivy-llc/ivy/issues/28069 | [
"Sub Task"
] | MuhammadNizamani | 0 |
|
chatanywhere/GPT_API_free | api | 325 | An error is shown when connecting to the server | 

| open | 2024-11-20T16:41:17Z | 2024-11-21T14:42:59Z | https://github.com/chatanywhere/GPT_API_free/issues/325 | [] | Chelseabalabala | 1 |
QingdaoU/OnlineJudge | django | 122 | C++ compilation fails with inexplicable errors | 

| closed | 2018-02-18T14:43:01Z | 2018-02-19T13:47:16Z | https://github.com/QingdaoU/OnlineJudge/issues/122 | [] | liz0ng | 2 |
grillazz/fastapi-sqlalchemy-asyncpg | pydantic | 12 | add JSON field example | expose key, value as property and test response with pydantic | open | 2021-12-21T20:23:13Z | 2021-12-21T20:23:13Z | https://github.com/grillazz/fastapi-sqlalchemy-asyncpg/issues/12 | [] | grillazz | 0 |
ansible/ansible | python | 84,502 | Fact caching with smart gathering can miss facts when plays use different gather_subset sets | ### Summary
The lack of ability to set a global `gather_subset` means that when using fact caching with `gather_facts: smart`, the facts collected are determined by the `gather_subset` of the first play that runs. Subsequent plays that request different fact subsets via their own `gather_subset` configuration will not receive those additional facts because:
1. The first play/block/task caches its collected facts based on its `gather_subset`
2. Later plays/blocks/tasks see the facts are cached (due to smart gathering)
3. No new fact gathering occurs until the cache times out even though different subsets are requested
4. This leads to missing facts that were explicitly requested by later plays
This creates a potential issue where plays/blocks/tasks using the same cache location must maintain identical `gather_subset` configurations to ensure all required facts are available when using fact caching with smart gathering. It seems like there should either be a way to specify a global `gather_subset` or smart gathering should be able to determine if some new facts need to be added to the facts cache due to the subset being expanded on later plays.
### Issue Type
Bug Report
### Component Name
lib/ansible/module_utils/facts/collector.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.18.1]
config file = None
configured module search path = ['/home/raddessi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/raddessi/.conda/envs/ansible-3.11/lib/python3.11/site-packages/ansible
ansible collection location = /home/raddessi/.ansible/collections:/usr/share/ansible/collections
executable location = /home/raddessi/.conda/envs/ansible-3.11/bin/ansible
python version = 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (/home/raddessi/.conda/envs/ansible-3.11/bin/python3.11)
jinja version = 3.1.5
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = code
PAGER(env: PAGER) = less
```
### OS / Environment
not relevant but confirmed on fedora 41 and debian 10
### Steps to Reproduce
I've set up an integration test to document the failure that you can see [at this branch](https://github.com/ansible/ansible/compare/devel...raddessi:ansible:devel.gather_subset_caching?expand=1), here is a high level summary:
env settings
```bash
ANSIBLE_GATHERING=smart
ANSIBLE_CACHE_PLUGIN=jsonfile
ANSIBLE_CACHE_PLUGIN_CONNECTION=./cache
```
playbook1
```yaml
# First play, facts cached here will be minimal
- hosts: testhost
module_defaults:
ansible.builtin.gather_facts:
gather_subset: ["!all"] # can be changed to ["!all", "hardware"] to resolve the issue
tasks:
- name: ensure facts are gathered
assert:
that:
- ansible_facts is defined and 'fqdn' in ansible_facts
```
```yaml
# Second play, hardware facts not available despite being requested
- hosts: testhost
module_defaults:
ansible.builtin.gather_facts:
gather_subset: ["hardware"]
tasks:
- name: ensure the hardware facts are present
assert:
that:
- ansible_facts is defined and 'processor_cores' in ansible_facts
```
### Expected Results
I expected to be able to use facts that were specified but since the cache already exists it was returned as-is even though it only contains a subset of the facts that were requested.
### Actual Results
```console
TASK [ensure the hardware facts are present] ***********************************
fatal: [testhost]: FAILED! => {
"assertion": "ansible_facts is defined and 'processor_cores' in ansible_facts",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
PLAY RECAP *********************************************************************
testhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
NOTICE: To resume at this test target, use the option: --start-at gather_subset_caching
FATAL: Command "./runme.sh" returned exit status 2.
FATAL: Command "podman exec ansible-test-controller-uYaUDvjx /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.13 /root/ansible/bin/ansible-test integration --allow-destructive --containers '{}' --truncate 187 --color yes --host-path test/results/.tmp/host-a2osvri4 --metadata test/results/.tmp/metadata-yyow8ew_.json -- gather_subset_caching" returned exit status 1.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | open | 2024-12-31T22:55:42Z | 2025-01-21T15:54:46Z | https://github.com/ansible/ansible/issues/84502 | [
"bug",
"affects_2.18"
] | raddessi | 4 |
JaidedAI/EasyOCR | machine-learning | 1,077 | AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS' | When I try to use easyocr on any image, I get this error:
AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'
According to (https://stackoverflow.com/questions/76616042/attributeerror-module-pil-image-has-no-attribute-antialias), new version of PIL (10.0.0) has no ANTIALIAS, as it's deprecated.
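Until EasyOCR is updated, a common interim workaround (a sketch, assuming Pillow >= 10) is to restore the removed alias before importing easyocr; `Image.LANCZOS` is the same filter:
```python
import PIL.Image

# Pillow 10 removed Image.ANTIALIAS; LANCZOS is the equivalent resampling filter
if not hasattr(PIL.Image, "ANTIALIAS"):
    PIL.Image.ANTIALIAS = PIL.Image.LANCZOS

import easyocr
reader = easyocr.Reader(["en"])  # now gets past the cv2.resize call
```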
Full error:
File "...", line 8, in convert_img_to_text
result = reader.readtext(img_path)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "...\venv\Lib\site-packages\easyocr\easyocr.py", line 464, in readtext
result = self.recognize(img_cv_grey, horizontal_list, free_list,\
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...\venv\Lib\site-packages\easyocr\easyocr.py", line 383, in recognize
image_list, max_width = get_image_list(h_list, f_list, img_cv_grey, model_height = imgH)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...\venv\Lib\site-packages\easyocr\utils.py", line 613, in get_image_list
crop_img,ratio = compute_ratio_and_resize(crop_img,width,height,model_height)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...\venv\Lib\site-packages\easyocr\utils.py", line 576, in compute_ratio_and_resize
img = cv2.resize(img,(int(model_height*ratio),model_height),interpolation=Image.ANTIALIAS)
``` | open | 2023-07-09T13:39:22Z | 2025-02-04T18:03:22Z | https://github.com/JaidedAI/EasyOCR/issues/1077 | [] | ArtDoctor | 27
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,151 | Привет | closed | 2022-12-30T08:02:28Z | 2023-01-08T08:55:17Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1151 | [] | Vieolik | 0 |
|
allenai/allennlp | pytorch | 5,439 | Load elmo-constituency-parser from archive failed | ## Checklist
- [ ] I have verified that the issue exists against the `main` branch of AllenNLP.
- [ ] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [ ] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [ ] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [ ] I have included in the "Environment" section below the output of `pip freeze`.
- [ ] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
```python
from allennlp.models.archival import load_archive
from allennlp.predictors import Predictor

archive = load_archive("elmo-constituency-parser-2018.03.14.tar.gz")
predictor = Predictor.from_archive(archive, "constituency-parser")
predictor.predict_json({"sentence": "This is a sentence to be predicted!"})
```
<details>
<summary><b>Python traceback:</b></summary>
<p>
```
Traceback (most recent call last):
File "E:\Chuan\Documents\GitHub\allennlp\allennlp\models\archival.py", line 232, in load_archive
dataset_reader, validation_dataset_reader = _load_dataset_readers(
File "E:\Chuan\Documents\GitHub\allennlp\allennlp\models\archival.py", line 268, in _load_dataset_readers
dataset_reader = DatasetReader.from_params(
File "E:\Chuan\Documents\GitHub\allennlp\allennlp\common\from_params.py", line 638, in from_params
subclass, constructor_name = as_registrable.resolve_class_name(choice)
File "E:\Chuan\Documents\GitHub\allennlp\allennlp\common\registrable.py", line 207, in resolve_class_name
raise ConfigurationError(
allennlp.common.checks.ConfigurationError: 'ptb_trees' is not a registered name for 'DatasetReader'. If your registered class comes from custom code, you'll need to import the corresponding modules. If you're using AllenNLP from the command-line, this is done by using the '--include-package' flag, or by specifying your imports in a '.allennlp_plugins' file. Alternatively, you can specify your choices using fully-qualified paths, e.g. {"model": "my_module.models.MyModel"} in which case they will be automatically imported correctly.
python-BaseException
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
OS: Windows 10
Python version: 3.9.5
allennlp 2.4.0
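For reference, a likely fix sketch (assuming the `allennlp-models` package is installed): the `ptb_trees` reader is registered by that package, so importing it before loading the archive should resolve the name.
```python
# importing this module registers 'ptb_trees' with AllenNLP's registry
import allennlp_models.structured_prediction  # noqa: F401

from allennlp.models.archival import load_archive
from allennlp.predictors import Predictor

archive = load_archive("elmo-constituency-parser-2018.03.14.tar.gz")
predictor = Predictor.from_archive(archive, "constituency-parser")
```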
| closed | 2021-10-19T21:14:30Z | 2021-10-26T02:20:40Z | https://github.com/allenai/allennlp/issues/5439 | [
"bug"
] | hoperiver | 0 |
pywinauto/pywinauto | automation | 1,033 | the program does not open, but there are no errors | Hello,
I try to run the program, but it doesn't start, and there are no errors; the IDE returns exit code 0. System applications such as Notepad and Calculator launch fine.
I thought it was a permissions issue, so I put all the necessary programs and tools in Program Files and created a system environment variable for the program I want to run, but that didn't help. The process of the program I am trying to start never appears; it is not in the Task Manager.
If I try to run the program through cmd, it just loads and moves to the next line. If I start Notepad or Calculator, everything opens. I always run my PyCharm session as Admin.
The code:
```python
from pywinauto.application import Application

app = Application(backend="uia").start("C:\\Program Files\\kinderi\\Sintech.Arm.exe")
```
The program does not open. Returns exit code 0. No errors.
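In case it helps, a variant worth trying (a sketch; the parameter values are guesses, though `work_dir` and `wait_for_idle` are real `start()` options): some GUI apps need their install directory as the working directory, or never report an idle state, which stalls `start()`.
```python
from pywinauto.application import Application

app = Application(backend="uia").start(
    r"C:\Program Files\kinderi\Sintech.Arm.exe",
    work_dir=r"C:\Program Files\kinderi",  # run from the install directory
    wait_for_idle=False,  # skip the idle wait that can stall some apps
)
```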
Thanks for any help.
| open | 2021-01-13T12:23:35Z | 2021-01-13T12:23:35Z | https://github.com/pywinauto/pywinauto/issues/1033 | [] | fazruslan | 0 |
tartiflette/tartiflette | graphql | 483 | OSError: cannot load library '/var/task/tartiflette/language/parsers/libgraphqlparser/cffi/libgraphqlparser.dylib': /var/task/tartiflette/language/parsers/libgraphqlparser/cffi/libgraphqlparser.dylib: cannot open shared object file: No such file or directory. Additionally, ctypes.util.find_library() did not manage to locate a library called | I'm trying to load tartiflette in an aws lambda. This is what is happening.
`tartiflette = "^1.3.1"`
`python 3.8`
<details>
<summary>Click to expand</summary>
```
[ERROR] OSError: cannot load library '/var/task/tartiflette/language/parsers/libgraphqlparser/cffi/libgraphqlparser.dylib': /var/task/tartiflette/language/parsers/libgraphqlparser/cffi/libgraphqlparser.dylib: cannot open shared object file: No such file or directory. Additionally, ctypes.util.find_library() did not manage to locate a library called '/var/task/tartiflette/language/parsers/libgraphqlparser/cffi/libgraphqlparser.dylib'
Traceback (most recent call last):
File "/var/lang/lib/python3.8/imp.py", line 234, in load_module
return load_source(name, filename, file)
File "/var/lang/lib/python3.8/imp.py", line 171, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 702, in _load
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/var/task/data_request_form_api/lambda_handler.py", line 8, in <module>
from .service import setup_graphql
File "/var/task/data_request_form_api/service.py", line 1, in <module>
from tartiflette import Resolver, create_engine, Engine
File "/var/task/tartiflette/__init__.py", line 5, in <module>
from tartiflette.engine import Engine
File "/var/task/tartiflette/engine.py", line 19, in <module>
from tartiflette.execution.collect import parse_and_validate_query
File "/var/task/tartiflette/execution/collect.py", line 11, in <module>
from tartiflette.language.parsers.libgraphqlparser import parse_to_document
File "/var/task/tartiflette/language/parsers/libgraphqlparser/__init__.py", line 1, in <module>
from .parser import parse_to_document
File "/var/task/tartiflette/language/parsers/libgraphqlparser/parser.py", line 35, in <module>
_LIB = _FFI.dlopen(f"{_LIBGRAPHQLPARSER_DIR}/libgraphqlparser.dylib")
File "/var/task/cffi/api.py", line 150, in dlopen
lib, function_cache = _make_ffi_library(self, name, flags)
File "/var/task/cffi/api.py", line 832, in _make_ffi_library
backendlib = _load_backend_lib(backend, libname, flags)
File "/var/task/cffi/api.py", line 827, in _load_backend_lib
raise OSError(msg)
```
</details>
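Not in the original report, but a plausible workaround sketch for anyone hitting this: the traceback shows the macOS `.dylib` was bundled, which suggests the package was installed on macOS; building the Lambda bundle inside a Linux container should compile the `.so` instead (the image name below is one reasonable choice):
```bash
# build dependencies on Amazon Linux so pip produces Linux binaries
docker run --rm -v "$PWD":/var/task \
  public.ecr.aws/sam/build-python3.8 \
  pip install tartiflette -t /var/task
```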
| closed | 2021-04-08T09:30:53Z | 2022-04-09T00:43:31Z | https://github.com/tartiflette/tartiflette/issues/483 | [] | Lilja | 12 |
huggingface/datasets | nlp | 6,775 | IndexError: Invalid key: 0 is out of bounds for size 0 | ### Describe the bug
I am trying to fine-tune llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
When I use the dataset given in the example, the training completes successfully (the example dataset can be found [here](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)).
However, when I use my own dataset, which is in the same format as the example dataset, I get the error below (my dataset can be found [here](https://huggingface.co/datasets/kk2491/finetune_dataset_002)).

I see the files are being read correctly from the logs:

### Steps to reproduce the bug
1. Clone the [vertex-ai-samples](https://github.com/GoogleCloudPlatform/vertex-ai-samples) repository.
2. Run the [llama2-7b peft fine-tuning](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
3. Change the dataset `kk2491/finetune_dataset_002`
### Expected behavior
The training should complete successfully, and model gets deployed to an endpoint.
### Environment info
Python version : Python 3.10.12
Dataset : https://huggingface.co/datasets/kk2491/finetune_dataset_002
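A minimal check sketch (not from the original notebook): `Invalid key: 0 is out of bounds for size 0` usually means the train split has zero rows by the time the sampler runs, so verifying the loaded size first would narrow this down.
```python
from datasets import load_dataset

ds = load_dataset("kk2491/finetune_dataset_002", split="train")
print(len(ds), ds.column_names)  # a length of 0 here would explain the IndexError
```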
| open | 2024-04-03T17:06:30Z | 2024-04-08T01:24:35Z | https://github.com/huggingface/datasets/issues/6775 | [] | kk2491 | 7 |
mwaskom/seaborn | data-science | 3,511 | lineplot of empty dataframe with hue in seaborn 0.13.0 | MWE
```python
import pandas as pd
import seaborn as sns

df1 = pd.DataFrame({}, columns=["aa", "bb", "cc"])  # empty dataframe
# df1 = pd.DataFrame([(1, 2, 3), (2, 1, 3)], columns=["aa", "bb", "cc"])  # with this, it works
sns.lineplot(df1, x="aa", y="bb")  # works
sns.lineplot(df1, x="aa", y="bb", hue="cc")  # does not work
```
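As a stopgap until this is fixed, a trivial guard avoids the crash (sketch using the MWE's names), since the failure only occurs when `hue` is passed for an empty frame:
```python
hue = "cc" if len(df1) else None
sns.lineplot(df1, x="aa", y="bb", hue=hue)  # works for both empty and non-empty frames
```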
Error happens with seaborn 0.13.0, but not with 0.12.2:
Error:
```
File .../python3.10/site-packages/seaborn/relational.py:507, in lineplot(data, x, y, hue, size, style, units, palette, hue_order, hue_norm, sizes, size_order, size_norm, dashes, markers, style_order, estimator, errorbar, n_boot, seed, orient, sort, err_style, err_kws, legend, ci, ax, **kwargs)
504 color = kwargs.pop("color", kwargs.pop("c", None))
505 kwargs["color"] = _default_color(ax.plot, hue, color, kwargs)
--> 507 p.plot(ax, kwargs)
508 return ax
File .../python3.10/site-packages/seaborn/relational.py:274, in _LinePlotter.plot(self, ax, kws)
266 # TODO How to handle NA? We don't want NA to propagate through to the
267 # estimate/CI when some values are present, but we would also like
268 # matplotlib to show "gaps" in the line when all values are missing.
(...)
271
272 # Loop over the semantic subsets and add to the plot
273 grouping_vars = "hue", "size", "style"
--> 274 for sub_vars, sub_data in self.iter_data(grouping_vars, from_comp_data=True):
276 if self.sort:
277 sort_vars = ["units", orient, other]
File .../python3.10/site-packages/seaborn/_base.py:938, in VectorPlotter.iter_data(self, grouping_vars, reverse, from_comp_data, by_facet, allow_empty, dropna)
935 for var in grouping_vars:
936 grouping_keys.append(levels.get(var, []))
--> 938 iter_keys = itertools.product(*grouping_keys)
939 if reverse:
940 iter_keys = reversed(list(iter_keys))
TypeError: 'NoneType' object is not iterable
``` | closed | 2023-10-02T07:10:40Z | 2023-11-28T00:30:38Z | https://github.com/mwaskom/seaborn/issues/3511 | [
"bug",
"mod:relational"
] | maximilianmordig | 2 |
Sanster/IOPaint | pytorch | 590 | Where is the one-click update? | I can't see the new update available on the one-click installer website.
The IOPaint there has been outdated since v1.

| closed | 2024-11-02T09:41:29Z | 2024-11-05T12:25:19Z | https://github.com/Sanster/IOPaint/issues/590 | [] | Tobe2d | 5 |
pallets-eco/flask-sqlalchemy | flask | 833 | flask-sqlalchemy session close doesn't seem to work | I have a question about flask-sqlalchemy, or more precisely about sqlalchemy.
When executing one function, progress is recorded in the database at each stage.
After each write to the db, I added db.session.close() to return the session to the pool,
but while the function is executing, I cannot connect to the database. Why is this happening?
```python
def func(self):
    # stage 1
    self.sub_func1()  # update progress in db
    # stage 2
    self.sub_func2()  # update progress in db
    # stage 3
    self.sub_func3()  # update progress in db
    # stage 4
    self.sub_func4()  # update progress in db
    return result
``` | closed | 2020-06-04T02:39:37Z | 2020-12-05T19:58:25Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/833 | [] | merry-swjung | 1
onnx/onnx | tensorflow | 5926 | Add TopK node to a pretrained Brevitas model | We are working with FINN-ONNX, and we want the pretrained models from Brevitas that classify the MNIST images to output the index (class) instead of a probabilities tensor of dim 1x10. To our knowledge, the node responsible for this is the TopK.
Where do we have to add this layer, and what function can we add so that 'export_qonnx' would understand it as a TopK node?
The desired block is in the following image:

| open | 2024-02-09T17:21:55Z | 2024-02-13T10:04:09Z | https://github.com/onnx/onnx/issues/5926 | [
"question"
] | abedbaltaji | 1 |
pydantic/pydantic | pydantic | 11,553 | PydanticOmit failing with duplicated union field | ### Initial Checks
- [x] I confirm that I'm using Pydantic V2
### Description
When using a custom type that omits its JSON schema (by raising `PydanticOmit` in its `__get_pydantic_json_schema__` method), the schema generation behaves inconsistently. In a model with a single field of a union type, the JSON schema is generated successfully (omitting the custom type as intended). However, when the same custom type is used in multiple fields within one model, generating the JSON schema fails with a `PydanticOmit` exception.
### Example Code
```Python
from pydantic_core import PydanticOmit
from pydantic import BaseModel
class CustomSerializedType(BaseModel):
@classmethod
def __get_pydantic_json_schema__(
cls, core_schema, handler,
):
raise PydanticOmit
class SingleField(BaseModel):
first_field: list[float | CustomSerializedType]
class DuplicatedField(BaseModel):
first_field: list[float | CustomSerializedType]
second_field: list[float | CustomSerializedType]
# This is fine
SingleField.model_json_schema()
"""
{'properties': {'first_field': {'items': {'type': 'number'},
'title': 'First Field',
'type': 'array'}},
'required': ['first_field'],
'title': 'SingleField',
'type': 'object'}
"""
# This raises an error
DuplicatedField.model_json_schema()
"""
...
handler_func(schema_or_field, current_handler, js_modify_function)
535 def new_handler_func(
536 schema_or_field: CoreSchemaOrField,
537 current_handler: GetJsonSchemaHandler = current_handler,
538 js_modify_function: GetJsonSchemaFunction = js_modify_function,
539 ) -> JsonSchemaValue:
--> 540 json_schema = js_modify_function(schema_or_field, current_handler)
541 if _core_utils.is_core_schema(schema_or_field):
542 json_schema = populate_defs(schema_or_field, json_schema)
Cell In[20], line 9, in CustomSerializedType.__get_pydantic_json_schema__(cls, core_schema, handler)
5 @classmethod
6 def __get_pydantic_json_schema__(
7 cls, core_schema, handler,
8 ) -> JsonSchemaValue:
----> 9 raise PydanticOmit
PydanticOmit: PydanticOmit()
"""
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.6
pydantic-core version: 2.27.2
pydantic-core build: profile=release pgo=false
install path: .venv/lib/python3.12/site-packages/pydantic
python version: 3.12.7 (main, Oct 16 2024, 07:12:08) [Clang 18.1.8 ]
platform: macOS-15.2-arm64-arm-64bit
related packages: fastapi-0.115.6 mypy-1.15.0 pydantic-settings-2.6.1 typing_extensions-4.12.2
commit: unknown
``` | open | 2025-03-14T11:33:33Z | 2025-03-20T11:04:09Z | https://github.com/pydantic/pydantic/issues/11553 | [
"bug V2"
] | ericliuche | 6 |
Lightning-AI/pytorch-lightning | pytorch | 20,548 | pytorch_lightning.utilities(module) and lightning_utilities (package) | ### Outline & Motivation
In a future release, is it possible to recommend which one to use when both contain similar functions? E.g.,
usage of lightning-utilities 0.11.9 with strict linting/LSP support working
```python
from lightning_utilities.core.rank_zero import rank_zero_only
```
usage of utilities in pytorch-lightning 2.5.0, not having linting/LSP support
```python
pytorch_lightning.utilities.rank_zero_only  # "utilities" is not a known attribute of module "pytorch_lightning"
```
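For reference, a sketch that keeps type checkers happy today (assuming the re-export stays stable): import from the submodule path instead of using attribute access on the top-level package.
```python
from pytorch_lightning.utilities import rank_zero_only

@rank_zero_only
def log_once(msg: str) -> None:
    print(msg)
```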
### Pitch
_No response_
### Additional context
_No response_
cc @lantiga @justusschock | closed | 2025-01-14T23:52:18Z | 2025-03-17T23:13:49Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20548 | [
"refactor",
"needs triage"
] | holyshipt | 1 |
dpgaspar/Flask-AppBuilder | rest-api | 1799 | On mac, flask fab create-app fails until you deactivate / reactivate the venv |
### Environment
Flask-Appbuilder version: 3.4.4
pip freeze output:
apispec==3.3.2
attrs==21.4.0
Babel==2.9.1
click==7.1.2
colorama==0.4.4
defusedxml==0.7.1
dnspython==2.2.0
email-validator==1.1.3
Flask==1.1.4
Flask-AppBuilder==3.4.4
Flask-Babel==2.0.0
Flask-JWT-Extended==3.25.1
Flask-Login==0.4.1
Flask-OpenID==1.3.0
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.14.3
greenlet==1.1.2
idna==3.3
itsdangerous==1.1.0
Jinja2==2.11.3
jsonschema==4.4.0
MarkupSafe==2.0.1
marshmallow==3.14.1
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.26.1
prison==0.2.1
PyJWT==1.7.1
pyrsistent==0.18.1
python-dateutil==2.8.2
python3-openid==3.2.0
pytz==2021.3
PyYAML==6.0
six==1.16.0
SQLAlchemy==1.4.31
SQLAlchemy-Utils==0.38.2
Werkzeug==1.0.1
WTForms==2.3.3
### Describe the expected results
```console
flask fab create-app
```
and expected to be prompted for the app name, etc.
### Describe the actual results
```console
"No such command: fab"
```
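A guess at the cause, with a sketch: the shell caches executable lookups, so the freshly installed `flask` entry point may not be found until the lookup table is refreshed, which re-activating the venv happens to do. Rehashing should have the same effect:
```console
$ hash -r        # bash; use `rehash` in zsh
$ flask fab create-app
```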
### Steps to reproduce
Do a clean install on mac using pip; activate the venv and try `flask fab create-app`.
It fails.
Then deactivate the venv and reactivate.
Now it works | open | 2022-02-08T02:11:21Z | 2022-02-08T02:11:21Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1799 | [] | valhuber | 0 |
pyppeteer/pyppeteer | automation | 160 | Change logs missing. | Really appreciate the work being doing by the contributors.
### Issue:
The version [0.2.2](https://pypi.org/project/pyppeteer/#history) in pypi misses change logs. How different is it from the code of 0.0.25 is something which we need to find out by doing a diff.
### Desired Result.
A brief description regarding what all changes were made to the API would suffice. Details like `Addition`, `Fixes`, `Depreciation` following the https://keepachangelog.com/en/1.0.0/ will do a great benefit to the community here. | open | 2020-08-20T09:28:23Z | 2020-08-25T04:28:01Z | https://github.com/pyppeteer/pyppeteer/issues/160 | [] | ja8zyjits | 1 |
samuelcolvin/watchfiles | asyncio | 84 | test_awatch_log is flaky | The `test_awatch_log` is flaky and fails on slow systems and/or systems under heavy load. I can reproduce it by running two games (Krunker and SuperTuxKart) while simultaneously running the test on my laptop. What happens is that the number of messages containing "DEBUG" goes below 4 and the test thus fails.
You might wonder if this really is a problem - after all, you don't usually run multiple games while testing your code. The problem is that while packaging watchgod for Alpine Linux I experienced this test randomly failing on their continuous integration (CI) on certain arches (armhf, aarch64, s390x), presumably as a result of me not being the only person who uses these CI runners and the systems thus being under heavy load.
I don't have a proposed way to fix this, and I understand if it's something you don't want to fix, but I thought I would report it nonetheless. | closed | 2021-07-10T18:05:40Z | 2022-03-23T10:24:35Z | https://github.com/samuelcolvin/watchfiles/issues/84 | [] | Newbytee | 1 |
Lightning-AI/pytorch-lightning | data-science | 20,206 | Training crash when using XLA profiler on XLA accelerator and manual optimization | ### Bug description
training loop crash when running on XLA profiler + manual optimization.
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
Training on XLAProfile + Manual Optimization on XLA Machine
```
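A minimal repro sketch (module and layer names are illustrative; assumes a TPU host with torch_xla installed):
```python
import torch
import pytorch_lightning as pl
from pytorch_lightning.profilers import XLAProfiler

class Model(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # manual optimization
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        loss = self.layer(batch).sum()
        opt.zero_grad()
        self.manual_backward(loss)
        opt.step()  # raises the scope error below under the XLA profiler

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

trainer = pl.Trainer(accelerator="tpu", profiler=XLAProfiler(), max_epochs=1)
# trainer.fit(Model(), train_dataloaders=...)  # crashes in training_step
```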
### Error messages and logs
```
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/process.py", line 246, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/process.py", line 205, in _process_chunk
return [fn(*args) for args in chunk]
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/process.py", line 205, in <listcomp>
return [fn(*args) for args in chunk]
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/runtime.py", line 95, in wrapper
return fn(*args, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 78, in _run_thread_per_device
replica_results = list(
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
yield _result_or_cancel(fs.pop())
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
return fut.result(timeout)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 71, in _thread_fn
return fn()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 187, in __call__
self.fn(runtime.global_ordinal(), *self.args, **self.kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/xla.py", line 141, in _wrapping_function
results = function(*args, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 579, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _run
results = self._run_stage()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1030, in _run_stage
self.fit_loop.run()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 205, in run
self.advance()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 363, in advance
self.epoch_loop.run(self._data_fetcher)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 140, in run
self.advance(data_fetcher)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 252, in advance
batch_output = self.manual_optimization.run(kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/manual.py", line 94, in run
self.advance(kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/manual.py", line 114, in advance
training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 311, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 390, in training_step
return self.lightning_module.training_step(*args, **kwargs)
File "/mnt/disks/persist/ldm/ldm/models/autoencoder.py", line 438, in training_step
opt1.step()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/core/optimizer.py", line 153, in step
step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/strategies/ddp.py", line 270, in optimizer_step
optimizer_output = super().optimizer_step(optimizer, closure, model, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 238, in optimizer_step
return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/xla.py", line 75, in optimizer_step
xm.mark_step()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/core/xla_model.py", line 1056, in mark_step
torch_xla._XLAC._xla_step_marker(
RuntimeError: Expecting scope to be empty but it is [Strategy]XLAStrategy.training_step.1
Exception raised from ResetScopeContext at ../torch/csrc/lazy/core/ir_metadata.cpp:77 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f812737a897 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f812732ab25 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: torch::lazy::ScopePusher::ResetScopes() + 0xa5 (0x7f81136f7c55 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: torch_xla::XLAGraphExecutor::MarkStep(torch::lazy::BackendDevice const&) + 0x57 (0x7f7fc6920a87 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #4: <unknown function> + 0x4aeb60a (0x7f7fc66eb60a in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #5: <unknown function> + 0x4aebab6 (0x7f7fc66ebab6 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #6: <unknown function> + 0x4abd006 (0x7f7fc66bd006 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #7: python() [0x4fdc87]
<omitting python frames>
frame #12: python() [0x5099ce]
frame #15: python() [0x509b26]
frame #17: python() [0x509b26]
frame #19: python() [0x5099ce]
frame #21: python() [0x509b26]
frame #23: python() [0x509b26]
frame #41: python() [0x5099ce]
frame #43: python() [0x509b26]
frame #45: python() [0x509b26]
frame #49: python() [0x5cf883]
frame #51: python() [0x5c87f7]
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/disks/persist/ldm/main.py", line 753, in <module>
trainer.fit(model, data, ckpt_path=opt.resume_from_checkpoint if "resume_from_checkpoint" in opt else None)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 543, in fit
call._call_and_handle_interrupt(
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/xla.py", line 98, in launch
process_context = xmp.spawn(
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/runtime.py", line 95, in wrapper
return fn(*args, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 38, in spawn
return pjrt.spawn(fn, nprocs, start_method, args)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 211, in spawn
run_multiprocess(spawn_fn, start_method=start_method)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/runtime.py", line 95, in wrapper
return fn(*args, **kwargs)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 171, in run_multiprocess
replica_results = list(
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 172, in <genexpr>
itertools.chain.from_iterable(
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/process.py", line 575, in _chain_from_iterable_of_lists
for element in iterable:
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
yield _result_or_cancel(fs.pop())
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
return fut.result(timeout)
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/~/miniconda3/envs/ldm-tp23/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
RuntimeError: Expecting scope to be empty but it is [Strategy]XLAStrategy.training_step.1
Exception raised from ResetScopeContext at ../torch/csrc/lazy/core/ir_metadata.cpp:77 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x7f812737a897 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x7f812732ab25 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: torch::lazy::ScopePusher::ResetScopes() + 0xa5 (0x7f81136f7c55 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: torch_xla::XLAGraphExecutor::MarkStep(torch::lazy::BackendDevice const&) + 0x57 (0x7f7fc6920a87 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #4: <unknown function> + 0x4aeb60a (0x7f7fc66eb60a in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #5: <unknown function> + 0x4aebab6 (0x7f7fc66ebab6 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #6: <unknown function> + 0x4abd006 (0x7f7fc66bd006 in /~/miniconda3/envs/ldm-tp23/lib/python3.10/site-packages/_XLAC.cpython-310-x86_64-linux-gnu.so)
frame #7: python() [0x4fdc87]
<omitting python frames>
frame #12: python() [0x5099ce]
frame #15: python() [0x509b26]
frame #17: python() [0x509b26]
frame #19: python() [0x5099ce]
frame #21: python() [0x509b26]
frame #23: python() [0x509b26]
frame #41: python() [0x5099ce]
frame #43: python() [0x509b26]
frame #45: python() [0x509b26]
frame #49: python() [0x5cf883]
frame #51: python() [0x5c87f7]
```
### Environment
<details>
<summary>Current environment</summary>
```
- PyTorch Lightning Version (2.4.0):
- PyTorch XLA Version (2.4.0):
- PyTorch Version (2.4):
- Python version (3.10):
```
</details>
### More info
_No response_ | open | 2024-08-16T11:31:10Z | 2024-08-16T11:31:26Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20206 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | sdsuster | 0 |
mars-project/mars | pandas | 2,787 | [Proposal] Expand slot management to resource management | ## Motivation
Currently Mars uses slots for resource management and band allocation, which only considers cpu/gpu and not memory. Mars always allocates one slot, representing one cpu core or gpu, for each subtask. This works well most of the time, but there are some shortcomings:
* Subtasks that need less cpu get assigned more, which results in low cpu utilization and long execution times
* Subtasks that need more memory and less cpu can lead to node OOM
So we could develop more granular resource management and allocation to increase resource utilization, improve scheduling efficiency, and avoid OOM.
## Design
We propose a more general resource management scheme that includes not only cpu/gpu but also memory, and even the estimated execution time of a subtask.
By default a Mars subtask needs one slot and no other resources; we could add more resource types to the management layer.
Obviously we can introduce memory first, as follows:
```python
from dataclasses import dataclass

@dataclass
class Resource:
    num_cpus: float = 0.0
    num_gpus: float = 0.0
    num_mem_bytes: float = 0.0
```
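With such a type, a band-allocation check becomes a vector comparison instead of a slot count. A sketch (the function name is illustrative):
```python
def can_allocate(available: Resource, required: Resource) -> bool:
    return (
        available.num_cpus >= required.num_cpus
        and available.num_gpus >= required.num_gpus
        and available.num_mem_bytes >= required.num_mem_bytes
    )
```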
With this we can expand slot management into resource management, and band allocation needs to consider both cpu/gpu and memory.
So we should evolve resource management from a single resource (cpu/gpu) to multiple resources.
In addition, we can easily implement HBO if we have an external system that can recommend resources for subtasks based on historical information.
Without an external system, we can set the memory resource to 0, which degenerates to the original slot scheduler, or set a value through configuration to avoid OOM.
Later we can also estimate the execution time of subtasks if the external HBO system can recommend it.
## Plan
In order to implement this proposal, we plan to:
* Add physical resource management, done in #2731
* Add a logic id for subtasks that uniquely identifies a subtask, so the same subtask keeps the same logic id across different submits; done in #2575
* Add a logic key for the tileable graph, which works just like the subtask logic key and serves HBO, in #2961
* Introduce resource management and band allocation in #2846
| closed | 2022-03-04T07:42:11Z | 2022-04-25T11:35:03Z | https://github.com/mars-project/mars/issues/2787 | [
"proposal"
] | zhongchun | 2 |
Miserlou/Zappa | flask | 1,261 | How do you start a project from scratch? | Are there docs for using `zappa init` without an existing python project? | open | 2017-11-26T13:29:49Z | 2018-02-26T03:10:49Z | https://github.com/Miserlou/Zappa/issues/1261 | [
"question",
"documentation",
"non-bug"
] | carlhunterroach | 8 |
xuebinqin/U-2-Net | computer-vision | 145 | Question about the loss function | Hello author, may I ask why you didn't adopt the hybrid loss function from BASNet in U-2-Net? Would using the hybrid bce+ssim+iou loss give better results? | closed | 2021-01-14T06:04:48Z | 2021-01-20T21:03:54Z | https://github.com/xuebinqin/U-2-Net/issues/145 | [] | harrywellington9588 | 2
huggingface/diffusers | deep-learning | 10,690 | SDXL InPainting: Mask blur option is negated by forced binarization. | The SDXL InPainting pipeline's documentation suggests using `pipeline.mask_processor.blur()` for creating soft masks, but this functionality is effectively broken due to the implementation order. Please let me know if I'm missing something here. Based on my testing, whether I use a blurred mask or blur them with the built in method, they still show a solid seam as if there was no blur applied.
The mask processor is initialized with forced binarization:
```python
self.mask_processor = VaeImageProcessor(
    vae_scale_factor=self.vae_scale_factor,
    do_normalize=False,
    do_binarize=True,  # Forces binarization
    do_convert_grayscale=True,
)
```
When processing masks, any blur effect is applied before binarization, which then converts all values back to pure black and white, completely negating the blur effect (if I'm not mistaken). The VaeImageProcessor defaults binarize to false, but when masks are initialized it sets binarize to true.
Relevant files:
diffusers/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py[326]
diffusers/image_processor.py[88][276][523]
### Reproduction
Run the inpainting pipeline with either your own pre-blurred mask or a mask blurred with the built-in method per the documentation. It's fairly self-explanatory, and I don't have a minimal script to share.
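That said, a rough sketch of the kind of script that shows the behavior (the file name and checkpoint are placeholders, not my exact setup):
```python
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1"
)
mask = load_image("mask.png")                          # binary inpaint mask
blurred = pipe.mask_processor.blur(mask, blur_factor=33)
processed = pipe.mask_processor.preprocess(blurred)    # do_binarize=True runs here
print(processed.unique())                              # only 0. and 1. -> blur discarded
```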
### System Info
diffusers==0.32.2
### Who can help?
@yiyixuxu @DN6 | closed | 2025-01-30T15:59:46Z | 2025-02-22T04:10:12Z | https://github.com/huggingface/diffusers/issues/10690 | [
"bug"
] | zacheryvaughn | 3 |
Josh-XT/AGiXT | automation | 964 | Local ChatGPT server connection randomly timing out | ### Description
When I try to access my local OpenAI-compatible API using my local IP and port, the connection sometimes stops working and I have to restart the AGiXT backend to make it work again.
The API I'm using is RWKV Runner (the RWKV World model didn't work with oobabooga), which I noticed does not list the embedder in the models endpoint but does support embeddings.
The API never sees the HTTP request; the backend just reports that the connection timed out.
### Steps to Reproduce the Bug
1. Locally run an OpenAI compatible API (Preferably RWKV Runner)
2. Setup the agent
3. Create a conversation
4. Click "Send" in chat mode about 3-5 times and wait for response
### Expected Behavior
After clicking "Send", the connection should not repeatedly time out and the backend should connect to the API.
### Operating System
- [ ] Linux
- [X] Microsoft Windows
- [ ] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [x] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [X] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [X] Using docker compose
- [ ] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | closed | 2023-09-02T09:10:39Z | 2023-09-03T06:35:45Z | https://github.com/Josh-XT/AGiXT/issues/964 | [
"type | report | bug",
"needs triage"
] | DadamaldaDad | 3 |
huggingface/text-generation-inference | nlp | 2569 | Question: What is the preferred way to cite TGI/the repo? I didn't see a citation file. | open | 2024-09-26T02:07:42Z | 2024-09-26T02:07:42Z | https://github.com/huggingface/text-generation-inference/issues/2569 | [] | elegantmoose | 0
|
albumentations-team/albumentations | machine-learning | 1,591 | [Documentation] Add to Contributor's guide that instead of `np.random` we use functions from `random_utils` | closed | 2024-03-18T17:25:34Z | 2024-03-21T01:02:09Z | https://github.com/albumentations-team/albumentations/issues/1591 | [
"good first issue",
"documentation"
] | ternaus | 1 |
|
ultralytics/yolov5 | pytorch | 13469 | How to compute loss using eval mode in the val.py file for YOLOv5 | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Due to research requirements, I need to calculate the loss function value for each input image in the `eval` mode of the YOLOv5 model.
I modified the `run` function in the `val.py` file,
specifying the `compute_loss` variable in it as `ComputeLoss(model)`:
```python
# Configure
model.eval()
compute_loss = ComputeLoss(model)
cuda = device.type != "cpu"
is_coco = isinstance(data.get("val"), str) and data["val"].endswith(f"coco{os.sep}val2017.txt") # COCO dataset
nc = 1 if single_cls else int(data["nc"]) # number of classes
iouv = torch.linspace(0.5, 0.95, 10, device=device) # iou vector for mAP@0.5:0.95
niou = iouv.numel()
```
The following error will occur:
```python
Traceback (most recent call last):
File "val.py", line 626, in <module>
main(opt)
File "val.py", line 597, in main
run(**vars(opt))
File "xxxxxxxxx/.local/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "val.py", line 299, in run
compute_loss = ComputeLoss(model)
^^^^^^^^^^^^^^^^^^
File "utils/loss.py", line 115, in __init__
h = model.hyp # hyperparameters
^^^^^^^^^
File "xxxxxxxxxx/.local/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
raise AttributeError(
AttributeError: 'DetectMultiBackend' object has no attribute 'hyp'
```
When I imitate the training mode by adding the 'hyp' attribute to the YOLOv5 model using 'data/hyps/hyp.scratch-low.yaml', I get the following error:
```python
Traceback (most recent call last):
File "val.py", line 626, in <module>
main(opt)
File "val.py", line 597, in main
run(**vars(opt))
File "xxxxxxx/.local/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "val.py", line 299, in run
compute_loss = ComputeLoss(model)
^^^^^^^^^^^^^^^^^^
File "utils/loss.py", line 129, in __init__
m = de_parallel(model).model[-1] # Detect() module
~~~~~~~~~~~~~~~~~~~~~~~~^^^^
TypeError: 'DetectionModel' object is not subscriptable
```
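For what it's worth, a sketch of what I believe the fix needs to look like (untested; attribute names are taken from the tracebacks above): `ComputeLoss` wants the raw `DetectionModel` with a `hyp` attribute, not the `DetectMultiBackend` wrapper.
```python
import yaml
from utils.loss import ComputeLoss

inner = model.model  # unwrap DetectMultiBackend -> DetectionModel (pt backend)
with open("data/hyps/hyp.scratch-low.yaml") as f:
    inner.hyp = yaml.safe_load(f)
compute_loss = ComputeLoss(inner)  # Detect() is now reachable via inner.model[-1]
```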
I really need the loss value. I look forward to your reply and would be extremely grateful.
If it's not possible to directly modify `val.py` to achieve the goal, I'd also welcome advice on whether approaches such as
```python
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolo.pt")
```
or
```python
from ultralytics import YOLO
model = YOLO("yolo5.pt")
```
can achieve it. I would greatly appreciate any pointers.
### Additional
_No response_ | open | 2024-12-22T06:04:25Z | 2024-12-22T19:19:42Z | https://github.com/ultralytics/yolov5/issues/13469 | [
"question",
"research"
] | BIT-QiuYu | 2 |
wkentaro/labelme | computer-vision | 657 | [BUG] shift_auto_shape_color config for semantic segmentation on Windows 10 not working | - OS: Windows 10
- Labelme Version: 4.2.10
The "shift_auto_shape_color" config doesn't seem to be working. I'm on Windows 10 and am running this command (based on the semantic segmentation example):
```
labelme data_annotated --labels labels.txt --nodata --validatelabel exact --config
'{shift_auto_shape_color: -2}'
```
I end up getting this error:
```
usage: labelme [-h] [--version] [--reset-config]
[--logger-level {debug,info,warning,fatal,error}]
[--output OUTPUT] [--config CONFIG] [--nodata] [--autosave]
[--nosortlabels] [--flags FLAGS] [--labelflags LABEL_FLAGS]
[--labels LABELS] [--validatelabel {exact,instance}]
[--keep-prev] [--epsilon EPSILON]
[filename]
labelme: error: unrecognized arguments: -2}'
```
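Possibly relevant (a guess): on Windows `cmd`, single quotes are not quoting characters, so the JSON argument splits at the space; double quotes should keep it intact:
```
labelme data_annotated --labels labels.txt --nodata --validatelabel exact --config "{shift_auto_shape_color: -2}"
```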
Is this a bug or something wrong with how I'm using it? Thanks. | closed | 2020-05-14T18:44:01Z | 2020-12-15T13:36:48Z | https://github.com/wkentaro/labelme/issues/657 | [] | coded5282 | 4 |
ErdemOzgen/Python-developer-roadmap | rest-api | 1 | Add FastAPI | open | 2022-08-30T12:39:36Z | 2022-08-30T12:39:36Z | https://github.com/ErdemOzgen/Python-developer-roadmap/issues/1 | [] | ErdemOzgen | 0 |
|
matplotlib/matplotlib | data-visualization | 29,672 | [Bug]: Matplotlib and Herbie | ### Bug summary
I have been having an issue with Matplotlib when using Herbie lately; it started within the past few weeks.
### Code for reproduction
```Python
!pip install xarray matplotlib pygrib numpy pandas basemap cartopy metpy Herbie-data eccodes==2.38.3
from herbie import Herbie
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
import cartopy
import math
import metpy
from herbie.toolbox import EasyMap, pc, ccrs
from herbie import paint
import metpy.calc as mpcalc
```
### Actual outcome
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-96bd97320032>](https://localhost:8080/#) in <cell line: 0>()
6 import math
7 import metpy
----> 8 from herbie.toolbox import EasyMap, pc, ccrs
9 from herbie import paint
10 import metpy.calc as mpcalc
3 frames
[/usr/local/lib/python3.11/dist-packages/mpl_toolkits/axes_grid1/inset_locator.py](https://localhost:8080/#) in InsetPosition()
16 @_api.deprecated("3.8", alternative="Axes.inset_axes")
17 class InsetPosition:
---> 18 @_docstring.dedent_interpd
19 def __init__(self, parent, lbwh):
20 """
AttributeError: module 'matplotlib._docstring' has no attribute 'dedent_interpd'
```
### Expected outcome
The expected outcome is that the imports run smoothly.
### Additional information
It had worked with 3.8.4 before, I believe.
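The traceback above mixes a 3.8-era `inset_locator.py` with a newer `matplotlib._docstring`, which suggests two matplotlib versions overlapping in site-packages; a forced reinstall is a plausible fix (sketch):
```console
pip uninstall -y matplotlib
pip install --force-reinstall matplotlib
```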
### Operating system
_No response_
### Matplotlib Version
3.8.4
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
pip | closed | 2025-02-24T17:31:22Z | 2025-02-24T18:22:00Z | https://github.com/matplotlib/matplotlib/issues/29672 | [] | jp2nyy | 4 |
dnouri/nolearn | scikit-learn | 316 | Bug when using Lasagne `mask_input` parameter | When initializing layers, the `incoming` and `incomings` arguments are resolved when they happen to be strings. However, those are not the only ones that may reference other layers. The `mask_input` parameter from recurrent layers also references another layer. Therefore, `initialize_layers` should resolve that too, else the string will simply be passed on, causing a Lasagne error.
There may be other cases in Lasagne, I'm not sure. | closed | 2016-11-23T10:42:31Z | 2016-11-24T08:37:59Z | https://github.com/dnouri/nolearn/issues/316 | [] | BenjaminBossan | 0 |
piccolo-orm/piccolo | fastapi | 646 | Declarative partitioning support | Is there a way to specify a partition key in the model? | closed | 2022-10-19T17:41:51Z | 2022-10-20T07:31:26Z | https://github.com/piccolo-orm/piccolo/issues/646 | [] | devsarvesh92 | 2
ultralytics/ultralytics | machine-learning | 19112 | I want to implement a multi-task network for segmentation and keypoints. What do I need to do? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I want to implement a multi-task network for segmentation and keypoints. What do I need to do?
### Additional
_No response_ | open | 2025-02-07T01:27:25Z | 2025-02-14T12:36:10Z | https://github.com/ultralytics/ultralytics/issues/19112 | [
"question",
"segment",
"pose"
] | duyanfang123 | 6 |