repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---|
pydata/pandas-datareader | pandas | 265 | Unable to load special German index over Google | Hello,
it seems to be impossible to load the values of the MDAX, which is a German index. The Error message is:
pandas_datareader._utils.RemoteDataError: Unable to read URL: http://www.google.com/finance/historical
The values are definitely available in Google, as you can see here:
https://www.google.com/finance/historical?q=INDEXDB%3AMDAX&ei=2fJLWNCdJdessQGxnZqIAw
My code is as follows:
```python
import datetime
import pandas_datareader.data as web

start = datetime.datetime(2015, 1, 1)
today = datetime.date.today()  # - timedelta(1)
print('Start date is {0} and end date is {1}\n'.format(start, today))
df_downloadMDAX = web.DataReader("INDEXDB:MDAX", 'google', start, today)
```
Is there any reason why this does not work? All other indexes work.
BTW: Thanks for your great work here!
| closed | 2016-12-10T12:42:53Z | 2018-09-12T08:06:50Z | https://github.com/pydata/pandas-datareader/issues/265 | [
"google-finance"
] | GitHubUser2014 | 5 |
pytorch/vision | machine-learning | 8,394 | Run all torchvision models in one script. | ### 🚀 The feature
Is there a test script that can run all the models?
### Motivation, pitch
Hi, I am testing a model migration script from CUDA to SYCL and I would like to test it on the torchvision model set. I would like to know: do we have a test script that can run all models in torchvision, like run.py [code](https://github.com/pytorch/benchmark/blob/main/run.py) in torchbenchmark? Thanks.
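For illustration, this is roughly the loop I was hoping already exists somewhere (a minimal sketch of my own, assuming torchvision's `list_models()` / `get_model()` registry helpers; the 224x224 dummy input is an assumption and won't suit every model family):

```python
import torch
from torchvision.models import list_models, get_model

# Smoke-test every registered torchvision model with one dummy forward pass.
for name in list_models():
    model = get_model(name, weights=None)  # random weights are enough for a run/shape check
    model.eval()
    dummy = torch.randn(1, 3, 224, 224)  # assumption: 224x224 RGB; detection/video models need other inputs
    try:
        with torch.no_grad():
            model(dummy)
        print(f"{name}: OK")
    except Exception as exc:
        print(f"{name}: skipped ({exc})")
```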
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-04-24T01:39:23Z | 2024-04-29T10:18:17Z | https://github.com/pytorch/vision/issues/8394 | [] | leizhenyuan | 1 |
PokeAPI/pokeapi | api | 866 | Paldea Pokedex endpoint is 404 | <!--
Thanks for contributing to the PokéAPI project. To make sure we're effective, please check the following:
- Make sure your issue hasn't already been submitted on the issues tab. (It has search functionality!)
- If your issue is one of outdated API data, please note that we get our data from [veekun](https://github.com/veekun/pokedex/). If they are not up to date either, please look for or create an issue there. Otherwise, feel free to create an issue here.
- Provide a clear description of the issue.
- Provide a clear description of the steps to reproduce.
- Provide a clear description of the expected behavior.
Thank you!
-->
Steps to Reproduce:
1. Go to https://pokeapi.co/api/v2/pokedex/paldea — get a 404
2. Go to https://pokeapi.co/api/v2/pokedex — see Paldea is missing from the list
| closed | 2023-04-05T02:23:28Z | 2024-11-14T13:48:49Z | https://github.com/PokeAPI/pokeapi/issues/866 | [] | elliebartling | 2 |
paperless-ngx/paperless-ngx | django | 8,973 | [BUG] Error 500 in INBOX after pdf upload | ### Description
After a month or two of not using Paperless, I get stuck while checking my INBOX documents.
I believe a signed document has been imported automatically and gets Paperless stuck while trying to list it, throwing a 500 error only in the INBOX (if it is excluded from the filters everything works, but not being able to use the INBOX prevents handling any new documents).
All I can add is that this issue happened a year ago, when I tried to import a PDF with a signature and it failed the same way; I had to restore the system and avoid uploading it. I can't do that anymore since I'm not sure which document is at fault, and a lot of documents seem to have been uploaded recently.

### Steps to reproduce
I can't share the PDF at fault at this time since it's a contract with personal information, but uploading a PDF with a signature and then trying to list documents that include it seems to throw the error.
### Webserver logs
```bash
server-1 | raise ex.with_traceback(None)
server-1 | django.db.utils.InternalError: missing chunk number 0 for toast value 28198 in pg_toast_16691
db-1 | 2025-01-31 17:45:06.947 UTC [235] ERROR: unexpected chunk number 1 (expected 0) for toast value 28012 in pg_toast_16691
db-1 | 2025-01-31 17:45:06.947 UTC [235] STATEMENT: SELECT COUNT(*) FROM (SELECT DISTINCT "documents_document"."id" AS "col1", "documents_document"."deleted_at" AS "col2", "documents_document"."restored_at" AS "col3", "documents_document"."transaction_id" AS "col4", "documents_document"."owner_id" AS "col5", "documents_document"."correspondent_id" AS "col6", "documents_document"."storage_path_id" AS "col7", "documents_document"."title" AS "col8", "documents_document"."document_type_id" AS "col9", "documents_document"."content" AS "col10", "documents_document"."mime_type" AS "col11", "documents_document"."checksum" AS "col12", "documents_document"."archive_checksum" AS "col13", "documents_document"."page_count" AS "col14", "documents_document"."created" AS "col15", "documents_document"."modified" AS "col16", "documents_document"."storage_type" AS "col17", "documents_document"."added" AS "col18", "documents_document"."filename" AS "col19", "documents_document"."archive_filename" AS "col20", "documents_document"."original_filename" AS "col21", "documents_document"."archive_serial_number" AS "col22", COUNT("documents_note"."id") AS "num_notes" FROM "documents_document" LEFT OUTER JOIN "documents_note" ON ("documents_document"."id" = "documents_note"."document_id") INNER JOIN "documents_document_tags" ON ("documents_document"."id" = "documents_document_tags"."document_id") WHERE (("documents_document"."deleted_at" IS NULL AND "documents_document_tags"."tag_id" = 1) OR ("documents_document"."deleted_at" IS NULL AND "documents_document_tags"."tag_id" = 1 AND "documents_document"."owner_id" = 3) OR ("documents_document"."deleted_at" IS NULL AND "documents_document_tags"."tag_id" = 1 AND "documents_document"."owner_id" IS NULL)) GROUP BY 1) subquery
server-1 | [2025-01-31 18:45:06,947] [ERROR] [django.request] Internal Server Error: /api/documents/
server-1 | Traceback (most recent call last):
server-1 | File "/usr/local/lib/python3.12/site-packages/django/db/backends/utils.py", line 105, in _execute
server-1 | return self.cursor.execute(sql, params)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/psycopg/cursor.py", line 97, in execute
server-1 | raise ex.with_traceback(None)
server-1 | psycopg.errors.DataCorrupted: unexpected chunk number 1 (expected 0) for toast value 28012 in pg_toast_16691
server-1 |
server-1 | The above exception was the direct cause of the following exception:
server-1 |
server-1 | Traceback (most recent call last):
server-1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 518, in thread_handler
server-1 | raise exc_info[1]
server-1 | File "/usr/local/lib/python3.12/site-packages/django/core/handlers/exception.py", line 42, in inner
server-1 | response = await get_response(request)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 518, in thread_handler
server-1 | raise exc_info[1]
server-1 | File "/usr/local/lib/python3.12/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
server-1 | response = await wrapped_callback(
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 468, in __call__
server-1 | ret = await asyncio.shield(exec_coro)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/asgiref/current_thread_executor.py", line 40, in run
server-1 | result = self.fn(*self.args, **self.kwargs)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 522, in thread_handler
server-1 | return func(*args, **kwargs)
server-1 | ^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/views/decorators/csrf.py", line 65, in _view_wrapper
server-1 | return view_func(request, *args, **kwargs)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/viewsets.py", line 124, in view
server-1 | return self.dispatch(request, *args, **kwargs)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 509, in dispatch
server-1 | response = self.handle_exception(exc)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 469, in handle_exception
server-1 | self.raise_uncaught_exception(exc)
server-1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
server-1 | raise exc
server-1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 506, in dispatch
server-1 | response = handler(request, *args, **kwargs)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/src/paperless/src/documents/views.py", line 907, in list
server-1 | return super().list(request)
server-1 | ^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/mixins.py", line 40, in list
server-1 | page = self.paginate_queryset(queryset)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/generics.py", line 175, in paginate_queryset
server-1 | return self.paginator.paginate_queryset(queryset, self.request, view=self)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/pagination.py", line 211, in paginate_queryset
server-1 | self.page = paginator.page(page_number)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/core/paginator.py", line 89, in page
server-1 | number = self.validate_number(number)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/core/paginator.py", line 70, in validate_number
server-1 | if number > self.num_pages:
server-1 | ^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/utils/functional.py", line 47, in __get__
server-1 | res = instance.__dict__[self.name] = self.func(instance)
server-1 | ^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/core/paginator.py", line 116, in num_pages
server-1 | if self.count == 0 and not self.allow_empty_first_page:
server-1 | ^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/utils/functional.py", line 47, in __get__
server-1 | res = instance.__dict__[self.name] = self.func(instance)
server-1 | ^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/core/paginator.py", line 110, in count
server-1 | return c()
server-1 | ^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/db/models/query.py", line 620, in count
server-1 | return self.query.get_count(using=self.db)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/db/models/sql/query.py", line 630, in get_count
server-1 | return obj.get_aggregation(using, {"__count": Count("*")})["__count"]
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/db/models/sql/query.py", line 616, in get_aggregation
server-1 | result = compiler.execute_sql(SINGLE)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/db/models/sql/compiler.py", line 1574, in execute_sql
server-1 | cursor.execute(sql, params)
server-1 | File "/usr/local/lib/python3.12/site-packages/django/db/backends/utils.py", line 79, in execute
server-1 | return self._execute_with_wrappers(
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/db/backends/utils.py", line 92, in _execute_with_wrappers
server-1 | return executor(sql, params, many, context)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/db/backends/utils.py", line 100, in _execute
server-1 | with self.db.wrap_database_errors:
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/django/db/utils.py", line 91, in __exit__
server-1 | raise dj_exc_value.with_traceback(traceback) from exc_value
server-1 | File "/usr/local/lib/python3.12/site-packages/django/db/backends/utils.py", line 105, in _execute
server-1 | return self.cursor.execute(sql, params)
server-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.12/site-packages/psycopg/cursor.py", line 97, in execute
server-1 | raise ex.with_traceback(None)
server-1 | django.db.utils.InternalError: unexpected chunk number 1 (expected 0) for toast value 28012 in pg_toast_16691
redis-1 | 1:M 31 Jan 2025 17:45:09.000 * 100 changes in 300 seconds. Saving...
redis-1 | 1:M 31 Jan 2025 17:45:09.000 * Background saving started by pid 20
redis-1 | 20:C 31 Jan 2025 17:45:09.006 * DB saved on disk
redis-1 | 20:C 31 Jan 2025 17:45:09.006 * RDB: 0 MB of memory used by copy-on-write
redis-1 | 1:M 31 Jan 2025 17:45:09.101 * Background saving terminated with success
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.7
### Host OS
Debian GNU/Linux 12 (bookworm)
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.14.7",
"server_os": "Linux-6.1.0-30-amd64-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 949798285312,
"available": 626140348416
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "mfa.0003_authenticator_type_uniq",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://redis:None",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2025-01-31T01:00:05.823389+01:00",
"index_error": null,
"classifier_status": "WARNING",
"classifier_last_trained": null,
"classifier_error": "Classifier file does not exist (yet). Re-training may be pending."
}
}
```
### Browser
Google Chrome
### Configuration changes
docker-compose.yaml
```
version: "3"
networks:
paperless: null
nginx:
external: true
volumes:
redis: null
services:
server:
image: ghcr.io/paperless-ngx/paperless-ngx:latest
networks:
- nginx
- paperless
#ports:
# - 8547:8000
volumes:
- ./consume:/usr/src/paperless/consume:rw
- ./data:/usr/src/paperless/data:rw
- ./media:/usr/src/paperless/media:rw
- ./export:/usr/src/paperless/export:rw
- /etc/localtime:/etc/localtime:ro
environment:
- PAPERLESS_TASK_WORKERS=8
- PAPERLESS_THREADS_PER_WORKER=2
- PAPERLESS_URL=[REDACTED]
- PAPERLESS_REDIS=redis://redis
- PAPERLESS_TIKA_ENABLED=1
- PAPERLESS_TIKA_GOTENBERG_ENDPOINT=http://gotenberg:3000
- PAPERLESS_TIKA_ENDPOINT=http://tika:9998
- PAPERLESS_EMAIL_TASK_CRON=* */5 * * *
- 'PAPERLESS_OCR_USER_ARGS={"invalidate_digital_signatures": true}'
- PAPERLESS_DBHOST=db
- VIRTUAL_HOST=[REDACTED]
- LETSENCRYPT_HOST=[REDACTED]
- LETSENCRYPT_EMAIL=[REDACTED]
- VIRTUAL_PORT=8000
- PUID=1000
- GUID=1000
depends_on:
- redis
- gotenberg
- tika
- db
healthcheck:
test: curl localhost:8000 || exit 1
interval: 60s
timeout: 30s
retries: 3
restart: unless-stopped
redis:
image: redis:6.2-alpine
networks:
- paperless
volumes:
- redis:/data
restart: unless-stopped
gotenberg:
image: docker.io/gotenberg/gotenberg:8
restart: unless-stopped
networks:
- paperless
# The gotenberg chromium route is used to convert .eml files. We do not
# want to allow external content like tracking pixels or even javascript.
command:
- gotenberg
- --chromium-disable-javascript=true
- --chromium-allow-list=file:///tmp/.*
tika:
image: ghcr.io/paperless-ngx/tika:latest
networks:
- paperless
restart: unless-stopped
db:
image: docker.io/library/postgres:16
restart: unless-stopped
networks:
- paperless
volumes:
- ./db:/var/lib/postgresql/data
environment:
POSTGRES_DB: [REDACTED]
POSTGRES_USER: [REDACTED]
POSTGRES_PASSWORD: [REDACTED]
```
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-01-31T17:52:16Z | 2025-03-03T03:12:00Z | https://github.com/paperless-ngx/paperless-ngx/issues/8973 | [
"not a bug"
] | maxoux | 2 |
giotto-ai/giotto-tda | scikit-learn | 136 | I can not install giotto. | I am using Ubuntu, and on Python 3.5 and Python 3.7 I cannot install giotto. How can I install it simply?
`Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-xsyfhd6s/giotto-learn/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-blrgq5ld/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-xsyfhd6s/giotto-learn/`
| closed | 2019-12-26T16:41:03Z | 2020-01-21T22:32:34Z | https://github.com/giotto-ai/giotto-tda/issues/136 | [
"enhancement",
"good first issue"
] | MINIMALaq | 14 |
JaidedAI/EasyOCR | machine-learning | 1251 | Error on Loading "Tamil" language | When I load the Tamil language with the following code,
import os
import easyocr
reader = easyocr.Reader(['ta'])
it does not load.
The following errors appear:
WARNING:easyocr.easyocr:Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-10-66640fe52398>](https://localhost:8080/#) in <cell line: 3>()
1 import os
2 import easyocr
----> 3 reader = easyocr.Reader(['ta'])
2 frames
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in load_state_dict(self, state_dict, strict, assign)
2151
2152 if len(error_msgs) > 0:
-> 2153 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
2154 self.__class__.__name__, "\n\t".join(error_msgs)))
2155 return _IncompatibleKeys(missing_keys, unexpected_keys)
RuntimeError: Error(s) in loading state_dict for Model:
size mismatch for Prediction.weight: copying a param with shape torch.Size([143, 512]) from checkpoint, the shape in current model is torch.Size([127, 512]).
size mismatch for Prediction.bias: copying a param with shape torch.Size([143]) from checkpoint, the shape in current model is torch.Size([127]). | open | 2024-05-09T11:48:06Z | 2024-05-20T09:25:51Z | https://github.com/JaidedAI/EasyOCR/issues/1251 | [] | kokul93 | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 471 | Focal Loss run time error | Hi all,
I am new to using semantic segmentation models,
so I used the smp models for semantic segmentation and I built the model and loss like the following:
model = smp.Unet(encoder_name='resnet34',
                 encoder_depth=5,
                 encoder_weights='imagenet',
                 decoder_use_batchnorm=True,
                 decoder_channels=(256, 128, 64, 32, 16),
                 decoder_attention_type=None,
                 in_channels=3,
                 classes=5,
                 activation=None,
                 aux_params=None)
loss = smp.losses.FocalLoss(mode='multiclass', gamma=2.0)
loss.name = 'FocalLoss'
The target mask size is 8x512x512 (each pixel contains an index representing the class value),
while the image size is 8x3x512x512.
After running the following code to train the model:
train_epoch = smp.utils.train.TrainEpoch(
model=model,
loss=loss,
metrics= metrics,
optimizer=optimizer,
device=device,
verbose=True,)
train_logs = train_epoch.run(traindataloader)
i got this error:
…/ Deeplearning\lib\site-packages\segmentation_models_pytorch\utils\functional.py", line 34, in iou
intersection = torch.sum(gt * pr)
RuntimeError: The size of tensor a (8) must match the size of tensor b (5) at non-singleton dimension 1
Why do gt and pr mismatch?
How can I overcome this error?
When I remove the metrics list the error goes away, but then I can't show any metrics during the run.
The error comes from: intersection = torch.sum(gt * pr)
The size of gt is 8x512x512.
The size of pr is 8x5x512x512,
where 8 is the batch size and 5 is the number of classes.
How can I solve this issue?
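The only workaround I can think of so far is to one-hot encode the target before the metric so gt and pr have the same shape (just a sketch under my own assumptions — 5 classes, gt holding class indices — I don't know if this is the intended fix):

```python
import torch
import torch.nn.functional as F

num_classes = 5
gt = torch.randint(0, num_classes, (8, 512, 512))   # stand-in for my target masks (class indices)
pr = torch.rand(8, num_classes, 512, 512)           # stand-in for the model output

# (8, 512, 512) -> (8, 512, 512, 5) -> (8, 5, 512, 512)
gt_onehot = F.one_hot(gt.long(), num_classes=num_classes).permute(0, 3, 1, 2).float()

intersection = torch.sum(gt_onehot * pr)            # shapes now match
```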
| closed | 2021-08-18T12:06:06Z | 2022-03-12T01:52:14Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/471 | [
"Stale"
] | Sarmadfismael | 3 |
iperov/DeepFaceLab | machine-learning | 5695 | Docker's issue | I configured a Docker image of DeepFaceLab. I found that the interval between loading the model in the image and starting training was much longer than when running it directly locally. I tried to modify the virtual memory of the Docker container, but it did not improve. | open | 2023-07-03T03:30:22Z | 2023-08-16T20:57:15Z | https://github.com/iperov/DeepFaceLab/issues/5695 | [] | DUODUODUODUOBOOM | 1 |
pytest-dev/pytest-django | pytest | 291 | Can't use custom django TestRunner | Hi,
We use Django in our project and we are eager to upgrade our tests, originally written with unittest or Django's TestCase, to py.test for access to its numerous features.
During my implementation of pytest-django, I encountered some issues with code which was written with another unit-testing framework (ApiTestCase, which is an override of DjangoTestCase): it tries to create a database, which is unwanted behaviour.
In our test process we only use the database for read-only purposes, with a custom TEST_RUNNER set in our Django settings.
I tried to add DJANGO_SETTINGS_MODULE = project.settings.dev in pytest.ini, with no success.
I studied the source code of pytest-django and found the module compat.py, which imports the class DiscoverRunner to launch tests.
What I want to know is: is there a way to change the TEST_RUNNER chosen by pytest-django?
If so, how can I do it?
Otherwise, I have made some modifications to use the TEST_RUNNER defined in the Django settings instead of importing DiscoverRunner directly. I can create a pull request to do so.
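Roughly, the change I have in mind looks like this (only a sketch, using Django's `get_runner` helper so the class comes from settings.TEST_RUNNER instead of a hard-coded DiscoverRunner):

```python
import django
from django.conf import settings
from django.test.utils import get_runner

if not settings.configured:  # minimal standalone setup just for this sketch
    settings.configure(DATABASES={}, INSTALLED_APPS=[])
    django.setup()

# Resolve the runner class configured in settings.TEST_RUNNER
# (falls back to django.test.runner.DiscoverRunner when unset).
Runner = get_runner(settings)
runner = Runner(interactive=False)
print(Runner.__name__)
```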
Cheers.
| closed | 2015-11-13T13:59:45Z | 2015-12-05T09:06:43Z | https://github.com/pytest-dev/pytest-django/issues/291 | [] | Moerin | 1 |
jupyter-book/jupyter-book | jupyter | 2,125 | docutils/transforms/misc.py error | ### Describe the bug
I am trying to build my JupyterBook with the latest version of the package (and I hadn't upgraded for quite a while), and the docutils/transforms/misc.py script is throwing an error.
Does anyone know what I can change to avoid this error? It's hard for me to identify where this error is coming from, and I wondered if others who have upgraded after a while have run into something similar.
```console
cat /var/folders/06/y6vmvyfj0wg08vb3rszcfy080000gn/T/sphinx-err-txilrytd.log
# Platform: darwin; (macOS-14.3-arm64-arm-64bit)
# Sphinx version: 7.2.6
# Python version: 3.9.18 (CPython)
# Docutils version: 0.20.1
# Jinja2 version: 3.1.3
# Pygments version: 2.17.2
# Last messages:
# 05-Text-Analysis/10-Topic-Modeling-CSV
#
# reading sources... [ 60%]
# 05-Text-Analysis/11-Topic-Modeling-Time-Series
#
# reading sources... [ 61%]
# 05-Text-Analysis/12-Named-Entity-Recognition
#
# Loaded extensions:
# sphinx.ext.mathjax (7.2.6)
# alabaster (0.7.16)
# sphinxcontrib.applehelp (1.0.8)
# sphinxcontrib.devhelp (1.0.6)
# sphinxcontrib.htmlhelp (2.0.5)
# sphinxcontrib.serializinghtml (1.1.10)
# sphinxcontrib.qthelp (1.0.7)
# sphinx_togglebutton (0.3.2)
# sphinx_copybutton (0.5.2)
# myst_nb (1.0.0)
# jupyter_book (1.0.0)
# sphinx_thebe (0.3.1)
# sphinx_comments (0.0.3)
# sphinx_external_toc (1.0.1)
# sphinx.ext.intersphinx (7.2.6)
# sphinx_design (0.5.0)
# sphinx_book_theme (unknown version)
# notfound.extension (1.0.0)
# sphinxext.rediraffe (unknown version)
# sphinx_jupyterbook_latex (unknown version)
# sphinx_multitoc_numbering (unknown version)
# pydata_sphinx_theme (unknown version)
# Traceback:
Traceback (most recent call last):
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/jupyter_book/sphinx.py", line 167, in build_sphinx
app.build(force_all, filenames)
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/sphinx/application.py", line 355, in build
self.builder.build_update()
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/sphinx/builders/__init__.py", line 293, in build_update
self.build(to_build,
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/sphinx/builders/__init__.py", line 313, in build
updated_docnames = set(self.read())
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/sphinx/builders/__init__.py", line 420, in read
self._read_serial(docnames)
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/sphinx/builders/__init__.py", line 441, in _read_serial
self.read_doc(docname)
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/sphinx/builders/__init__.py", line 498, in read_doc
publisher.publish()
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/docutils/core.py", line 236, in publish
self.apply_transforms()
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/docutils/core.py", line 216, in apply_transforms
self.document.transformer.apply_transforms()
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/sphinx/transforms/__init__.py", line 83, in apply_transforms
super().apply_transforms()
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/docutils/transforms/__init__.py", line 182, in apply_transforms
transform.apply(**kwargs)
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/docutils/transforms/misc.py", line 98, in apply
self.visit_transition(node)
File "/Users/melwalsh/anaconda3/envs/py39/lib/python3.9/site-packages/docutils/transforms/misc.py", line 108, in visit_transition
assert (isinstance(node.parent, nodes.document)
AssertionError
```
**problem**
This is a problem for people like me because I can't build my book with the newest version of jupyter-book.
### Reproduce the bug
You can reproduce the error if you clone my repo https://github.com/melaniewalsh/Intro-Cultural-Analytics/ and try to build the book. However, I also had to re-install a couple of custom extensions that I am using before getting to this error, so you might have to do that too if you are trying to reproduce
```
pip install sphinxext-rediraffe
pip install sphinx-notfound-page
```
### List your environment
```
Jupyter Book : 1.0.0
External ToC : 1.0.1
MyST-Parser : 2.0.0
MyST-NB : 1.0.0
Sphinx Book Theme : 1.1.2
Jupyter-Cache : 1.0.0
NbClient : 0.9.0
``` | open | 2024-02-26T01:43:23Z | 2025-02-26T08:03:23Z | https://github.com/jupyter-book/jupyter-book/issues/2125 | [
"bug"
] | melaniewalsh | 2 |
yeongpin/cursor-free-vip | automation | 186 | BUG | ❌ register.config_setup_error
Traceback (most recent call last):
File "main.py", line 344, in <module>
File "main.py", line 291, in main
File "new_signup.py", line 232, in setup_config
File "configparser.py", line 735, in read
File "configparser.py", line 1050, in _read
File "configparser.py", line 1058, in _read_inner
File "encodings\cp1254.py", line 23, in decode
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9e in position 639: character maps to <undefined>
[PYI-19516:ERROR] Failed to execute script 'main' due to unhandled exception!
[process exited with code 1 (0x00000001)] | closed | 2025-03-11T00:55:18Z | 2025-03-11T01:11:52Z | https://github.com/yeongpin/cursor-free-vip/issues/186 | [
"bug"
] | kkatree | 0 |
chezou/tabula-py | pandas | 311 | Password Protected Files | Not able to read a password-protected file using this.
I passed the password argument, giving the password in string format. Every time, this error comes up:
CalledProcessError
Command '['java', '-Dfile.encoding=UTF8', '-jar', '/home/bizhues/workspace/AccPro/env/lib/python3.10/site-packages/tabula/tabula-1.0.5-jar-with-dependencies.jar', '--pages', 'all', '--guess', '--format', 'JSON', '/tmp/d50695bf-8f62-49dd-844b-7ce13aa70caa.pdf']' returned non-zero exit status 1. | closed | 2022-08-06T08:19:00Z | 2022-08-06T08:19:13Z | https://github.com/chezou/tabula-py/issues/311 | [] | aditya-bizhues | 1 |
mckinsey/vizro | data-visualization | 286 | Should vizro support polars (or other dataframes besides pandas)? | > Ty Petar, please consider supporting polars, I think it is necessary, given that the whole point of vizro is working with a dataframe in memory. Currently vizro cannot determine polars column names (detects them as [0,1,2,3,4...])
_Originally posted by @vmisusu in https://github.com/mckinsey/vizro/issues/191#issuecomment-1845368168_
---
I'm opening this issue to see whether other people have the same question so we can figure out what priority it should be. Just hit 👍 if it's something you'd like to see in vizro and feel free to leave and comments.
The current situation (25 January 2024) is:
* vizro currently only supports pandas DataFrames, but supporting others like polars is a great idea and something we did consider before. The main blocker previously was that plotly didn't support polars, but as of [5.15](https://github.com/plotly/plotly.py/releases/tag/v5.15.0) it supports not just polars but actually any dataframe with a `to_pandas` method, and as of [5.16](https://github.com/plotly/plotly.py/releases/tag/v5.16.0) it supports dataframes that follow the [dataframe interchange protocol](https://data-apis.org/dataframe-protocol/latest/index.html) ([which is now `pip install`able](https://github.com/data-apis/dataframe-api/issues/73))
* on vizro we could follow a similar sort of pattern to plotly's development[^1]. Ideally supporting the dataframe interchange protocol is the "right" way to do this, but we should work out exactly how much performance improvement polars users would actually get in practice to see what the value of this would be over a simple `to_pandas` call. The biggest changes we'd need to make would be to actions code like filtering functionality (FYI @petar-qb). I don't think it would be too hard, but it's certainly not a small task either
See also [How Polars Can Help You Build Fast Dash Apps for Large Datasets](https://plotly.com/blog/polars-to-build-fast-dash-apps-for-large-datasets/?_gl=1*1f3y92r*_gcl_au*NTYwOTc2MDQ0LjE3MTcxNjY2OTAuNzU5NDIxOTIyLjE3MTg3MjAwNzMuMTcxODcyMDA3NQ..*_ga*MTg4NDIxNDI5NS4xNzE3MTY2Njkx*_ga_6G7EE0JNSC*MTcxOTkxNDM5NS42MC4xLjE3MTk5MTQ3NjguNDUuMC4w)
From @Coding-with-Adam:
> Chad had a nice app that he built to compare between pandas and polars and show the difference when using Dash. https://dash-polars-pandas-docker.onrender.com/ (free tier)
I also made a video with him: https://youtu.be/_iebrqafOuM
And here’s the article he wrote: [Dash: Polars vs Pandas. An interactive battle between the… | by Chad Bell | Medium](https://medium.com/@chadbell045/dash-polars-vs-pandas-a02ac9fcc484)
FYI @astrojuanlu
[^1]: https://github.com/plotly/plotly.py/pull/4244 https://github.com/plotly/plotly.py/pull/4272/files https://github.com/plotly/plotly.py/pull/3901 https://github.com/plotly/plotly.py/issues/3637 | open | 2024-01-25T09:51:04Z | 2024-12-04T20:37:37Z | https://github.com/mckinsey/vizro/issues/286 | [] | antonymilne | 12 |
ultralytics/ultralytics | pytorch | 18,731 | predict result | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
I'm using YOLO11 with PyCharm. If I run self.model.predict(source=pil_image, conf=0.55)
" 0: 640x640 1 CAT, 21.0ms Speed: 5.9ms preprocess, 21.0ms inference, 2.0ms postprocess per image at shape (1, 3, 640, 640)"
the result will be output like this. Is there a parameter that can disable this message?
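For reference, this is the kind of switch I am hoping for (the `verbose` flag here is a guess on my part; I have not confirmed it applies to `predict`):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
# Guess: verbose=False to silence the per-image summary line ("image.jpg" is a placeholder path)
results = model.predict(source="image.jpg", conf=0.55, verbose=False)
```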
### Additional
_No response_ | open | 2025-01-17T10:28:06Z | 2025-01-17T10:45:17Z | https://github.com/ultralytics/ultralytics/issues/18731 | [
"question"
] | kunsungwoo | 2 |
ipyflow/ipyflow | jupyter | 41 | incorporate refs to global variables inside functions during liveness checking (at function callpoints) | closed | 2020-08-06T22:18:27Z | 2020-08-25T20:48:50Z | https://github.com/ipyflow/ipyflow/issues/41 | [] | smacke | 0 |
|
serengil/deepface | machine-learning | 733 | DeepFace.find, force return unknown faces | Hello. First of all, excellent project. Congratulations!
Is there a way to force "find" to return faces with score 0?
I'm having to run DeepFace.represent along with DeepFace.find, when I could just run DeepFace.find if it would also return the faces that couldn't be identified. | closed | 2023-04-26T16:25:10Z | 2023-04-26T16:28:35Z | https://github.com/serengil/deepface/issues/733 | [
"enhancement"
] | CostanzoPablo | 1 |
scikit-learn-contrib/metric-learn | scikit-learn | 161 | improve bounds argument and bounds_ attribute in ITML | See comment: https://github.com/metric-learn/metric-learn/pull/159#discussion_r250234204
We should:
- Allow tuples, lists, numpy arrays (any array-like; TODO: see if there are other array-likes than these 3 types) as arguments for `bounds` for ITML
- I think a single return type for `bounds_` would be better, say a numpy array
- Do tests to ensure that this works as expected
- Do the right advertising in the docstring | closed | 2019-01-24T09:53:35Z | 2019-05-24T17:49:04Z | https://github.com/metric-learn/metric-learn/issues/161 | [] | wdevazelhes | 0 |
microsoft/qlib | deep-learning | 1,704 | ModuleNotFoundError: No module named 'pyqlib' after install qyqlib package | ## 🐛 Bug Description
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. (base) C:\Users\a>cd pyqlib
2. (base) C:\Users\a\pyqlib>python -m build
3. (base) C:\Users\a\pyqlib>touch setup.py
(base) C:\Users\a\pyqlib>python -m build
* Creating venv isolated environment...
* Installing packages in isolated environment... (setuptools >= 40.8.0, wheel)
* Getting build dependencies for sdist...
running egg_info
creating pyqlib.egg-info
writing pyqlib.egg-info\PKG-INFO
writing dependency_links to pyqlib.egg-info\dependency_links.txt
writing top-level names to pyqlib.egg-info\top_level.txt
writing manifest file 'pyqlib.egg-info\SOURCES.txt'
reading manifest file 'pyqlib.egg-info\SOURCES.txt'
writing manifest file 'pyqlib.egg-info\SOURCES.txt'
* Building sdist...
running sdist
running egg_info
writing pyqlib.egg-info\PKG-INFO
writing dependency_links to pyqlib.egg-info\dependency_links.txt
writing top-level names to pyqlib.egg-info\top_level.txt
reading manifest file 'pyqlib.egg-info\SOURCES.txt'
writing manifest file 'pyqlib.egg-info\SOURCES.txt'
warning: sdist: standard file not found: should have one of README, README.rst, README.txt, README.md
running check
creating pyqlib-0.9.3
creating pyqlib-0.9.3\pyqlib.egg-info
copying files to pyqlib-0.9.3...
copying setup.py -> pyqlib-0.9.3
copying pyqlib.egg-info\PKG-INFO -> pyqlib-0.9.3\pyqlib.egg-info
copying pyqlib.egg-info\SOURCES.txt -> pyqlib-0.9.3\pyqlib.egg-info
copying pyqlib.egg-info\dependency_links.txt -> pyqlib-0.9.3\pyqlib.egg-info
copying pyqlib.egg-info\top_level.txt -> pyqlib-0.9.3\pyqlib.egg-info
copying pyqlib.egg-info\SOURCES.txt -> pyqlib-0.9.3\pyqlib.egg-info
Writing pyqlib-0.9.3\setup.cfg
Creating tar archive
removing 'pyqlib-0.9.3' (and everything under it)
* Building wheel from sdist
* Creating venv isolated environment...
* Installing packages in isolated environment... (setuptools >= 40.8.0, wheel)
* Getting build dependencies for wheel...
running egg_info
writing pyqlib.egg-info\PKG-INFO
writing dependency_links to pyqlib.egg-info\dependency_links.txt
writing top-level names to pyqlib.egg-info\top_level.txt
reading manifest file 'pyqlib.egg-info\SOURCES.txt'
writing manifest file 'pyqlib.egg-info\SOURCES.txt'
* Installing packages in isolated environment... (wheel)
* Building wheel...
running bdist_wheel
running build
installing to build\bdist.win-amd64\wheel
running install
running install_egg_info
running egg_info
writing pyqlib.egg-info\PKG-INFO
writing dependency_links to pyqlib.egg-info\dependency_links.txt
writing top-level names to pyqlib.egg-info\top_level.txt
reading manifest file 'pyqlib.egg-info\SOURCES.txt'
writing manifest file 'pyqlib.egg-info\SOURCES.txt'
Copying pyqlib.egg-info to build\bdist.win-amd64\wheel\.\pyqlib-0.9.3-py3.10.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\pyqlib-0.9.3.dist-info\WHEEL
creating 'C:\Users\a\pyqlib\dist\.tmp-evrubh2o\pyqlib-0.9.3-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'pyqlib-0.9.3.dist-info/METADATA'
adding 'pyqlib-0.9.3.dist-info/WHEEL'
adding 'pyqlib-0.9.3.dist-info/top_level.txt'
adding 'pyqlib-0.9.3.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Successfully built pyqlib-0.9.3.tar.gz and pyqlib-0.9.3-py3-none-any.whl
(base) C:\Users\a\pyqlib>pip install dist/pyqlib-0.9.3-py3-none-any.whl
Processing c:\users\a\pyqlib\dist\pyqlib-0.9.3-py3-none-any.whl
Installing collected packages: pyqlib
Successfully installed pyqlib-0.9.3
(base) C:\Users\a\pyqlib>pip install dist/pyqlib-0.9.3-py3-none-any.whl
Processing c:\users\a\pyqlib\dist\pyqlib-0.9.3-py3-none-any.whl
Installing collected packages: pyqlib
Successfully installed pyqlib-0.9.3
(base) C:\Users\a\pyqlib>python
Python 3.10.9 | packaged by Anaconda, Inc. | (main, Mar 1 2023, 18:18:15) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import qlib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'qlib'
>>> import pyqlib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pyqlib'
>>> print(pyqlib.__version__)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'pyqlib' is not defined
>>> import pqlib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pqlib'
>>> import pyqlib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pyqlib'
>>> import pyqlib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pyqlib'
>>> pip show pyqlib
File "<stdin>", line 1
pip show pyqlib
^^^^
SyntaxError: invalid syntax
>>> pip show pyqlib
File "<stdin>", line 1
pip show pyqlib
^^^^
SyntaxError: invalid syntax
>>> pip show qlib
File "<stdin>", line 1
pip show qlib
^^^^
SyntaxError: invalid syntax
>>> pip show "pyqlib"
File "<stdin>", line 1
pip show "pyqlib"
^^^^
SyntaxError: invalid syntax
>>> import sys
>>> print(sys.path)
['', 'C:\\Users\\a\\anaconda3\\python310.zip', 'C:\\Users\\a\\anaconda3\\DLLs', 'C:\\Users\\a\\anaconda3\\lib', 'C:\\Users\\a\\anaconda3', 'C:\\Users\\a\\AppData\\Roaming\\Python\\Python310\\site-packages', 'C:\\Users\\a\\anaconda3\\lib\\site-packages', 'C:\\Users\\a\\anaconda3\\lib\\site-packages\\win32', 'C:\\Users\\a\\anaconda3\\lib\\site-packages\\win32\\lib', 'C:\\Users\\a\\anaconda3\\lib\\site-packages\\Pythonwin']
>>> exit()
(base) C:\Users\a\pyqlib>python -m pip show pyqlib
Name: pyqlib
Version: 0.9.3
Summary:
Home-page:
Author:
Author-email:
License:
Location: c:\users\a\anaconda3\lib\site-packages
Requires:
Required-by:
(base) C:\Users\a\pyqlib>conda create -n test python=3.10
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 23.1.0
latest version: 23.10.0
Please update conda by running
$ conda update -n base -c defaults conda
Or to minimize the number of packages updated during conda update use
conda install conda=23.10.0
## Package Plan ##
environment location: C:\Users\a\anaconda3\envs\test
added / updated specs:
- python=3.10
The following packages will be downloaded:
package | build
---------------------------|-----------------
ca-certificates-2023.08.22 | haa95532_0 123 KB
pip-23.3.1 | py310haa95532_0 2.9 MB
setuptools-68.0.0 | py310haa95532_0 934 KB
wheel-0.41.2 | py310haa95532_0 127 KB
------------------------------------------------------------
Total: 4.0 MB
The following NEW packages will be INSTALLED:
bzip2 pkgs/main/win-64::bzip2-1.0.8-he774522_0
ca-certificates pkgs/main/win-64::ca-certificates-2023.08.22-haa95532_0
libffi pkgs/main/win-64::libffi-3.4.4-hd77b12b_0
openssl pkgs/main/win-64::openssl-3.0.12-h2bbff1b_0
pip pkgs/main/win-64::pip-23.3.1-py310haa95532_0
python pkgs/main/win-64::python-3.10.13-he1021f5_0
setuptools pkgs/main/win-64::setuptools-68.0.0-py310haa95532_0
sqlite pkgs/main/win-64::sqlite-3.41.2-h2bbff1b_0
tk pkgs/main/win-64::tk-8.6.12-h2bbff1b_0
tzdata pkgs/main/noarch::tzdata-2023c-h04d1e81_0
vc pkgs/main/win-64::vc-14.2-h21ff451_1
vs2015_runtime pkgs/main/win-64::vs2015_runtime-14.27.29016-h5e58377_2
wheel pkgs/main/win-64::wheel-0.41.2-py310haa95532_0
xz pkgs/main/win-64::xz-5.4.2-h8cc25b3_0
zlib pkgs/main/win-64::zlib-1.2.13-h8cc25b3_0
(test) C:\Users\a\pyqlib>pip install dist/pyqlib-0.9.3-py3-none-any.whl
Processing c:\users\a\pyqlib\dist\pyqlib-0.9.3-py3-none-any.whl
Installing collected packages: pyqlib
Successfully installed pyqlib-0.9.3
(test) C:\Users\a\pyqlib>python -c "import pyqlib"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'pyqlib'
(base) C:\Users\a\pyqlib>activate test
(test) C:\Users\a\pyqlib>pip show pyqlib
WARNING: Package(s) not found: pyqlib
(test) C:\Users\a\pyqlib>pip install pyqlib
ERROR: Could not find a version that satisfies the requirement pyqlib (from versions: none)
ERROR: No matching distribution found for pyqlib
(test) C:\Users\a\pyqlib>pip install https://files.pythonhosted.org/packages/b3/f9/321b8ccaacdfe250e27c168bd95f80b1ab90d3d695d033642971130aea90/pyqlib-0.9.3-cp310-cp310-win_amd64.whl
Collecting pyqlib==0.9.3
ERROR: HTTP error 404 while getting https://files.pythonhosted.org/packages/b3/f9/321b8ccaacdfe250e27c168bd95f80b1ab90d3d695d033642971130aea90/pyqlib-0.9.3-cp310-cp310-win_amd64.whl
ERROR: Could not install requirement pyqlib==0.9.3 from https://files.pythonhosted.org/packages/b3/f9/321b8ccaacdfe250e27c168bd95f80b1ab90d3d695d033642971130aea90/pyqlib-0.9.3-cp310-cp310-win_amd64.whl because of HTTP error 404 Client Error: Not Found for url: https://files.pythonhosted.org/packages/b3/f9/321b8ccaacdfe250e27c168bd95f80b1ab90d3d695d033642971130aea90/pyqlib-0.9.3-cp310-cp310-win_amd64.whl for URL https://files.pythonhosted.org/packages/b3/f9/321b8ccaacdfe250e27c168bd95f80b1ab90d3d695d033642971130aea90/pyqlib-0.9.3-cp310-cp310-win_amd64.whl
## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
## Screenshot
<!-- A screenshot of the error message or anything shouldn't appear-->
## Environment
**Note**: User could run `cd scripts && python collect_info.py all` under project directory to get system information
and paste them here directly.
- Qlib version:pyqlib==0.9.3
- Python version: python pkgs/main/win-64::python-3.10.13-he1021f5_0
- OS (`Windows`, `Linux`, `MacOS`): Windows
- Commit number (optional, please provide it if you are using the dev version):
## Additional Notes
<!-- Add any other information about the problem here. -->
| open | 2023-12-02T15:08:03Z | 2024-01-04T12:26:25Z | https://github.com/microsoft/qlib/issues/1704 | [
"bug"
] | david2588e | 1 |
xuebinqin/U-2-Net | computer-vision | 332 | Any advices on image restoration | Hi @xuebinqin, thanks for your awesome work. I want to re-use U-2-Net for image restoration tasks, because I suppose that a good architecture like U2Net learns good representations, whether for segmentation or other tasks. Could you please give some advice on image restoration tasks? Thank you. | open | 2022-09-13T01:00:35Z | 2022-09-13T01:00:35Z | https://github.com/xuebinqin/U-2-Net/issues/332 | [] | anguoyang | 0 |
tortoise/tortoise-orm | asyncio | 1,899 | tortoise.exceptions.OperationalError: syntax error at or near "order" in 0.24.1 | ```
# tortoise-orm==0.24.0
from tortoise.contrib.postgres.functions import Random
await Table.annotate(order=Random()).order_by('order')
...
DEBUG:tortoise.db_client:SELECT "name","id",RANDOM() "order" FROM "table" ORDER BY RANDOM() ASC: []
# works
```
```
# tortoise-orm==0.24.1
from tortoise.contrib.postgres.functions import Random
await Table.annotate(order=Random()).order_by('order')
...
DEBUG:tortoise.db_client:SELECT "name","id",RANDOM() "order" FROM "table" ORDER BY order ASC: []
# tortoise.exceptions.OperationalError: syntax error at or near "order"
```
The generated SQL needs ORDER BY "order" (quoted) rather than ORDER BY order.
| closed | 2025-02-26T07:17:17Z | 2025-02-26T12:18:44Z | https://github.com/tortoise/tortoise-orm/issues/1899 | [
"bug"
] | rSedoy | 1 |
nolar/kopf | asyncio | 935 | Additional filtering options for node status updates | ### Problem
In our operator, we are interested in node status changes to identify node ready/not ready scenarios and take some actions based on the status. Our handler is like
@kopf.on.update('node', field='status.conditions')
async def node_status(body, status, logger, **kwargs):
...
...
It works as expected; however, the node status updates are too frequent, every ~10 sec, even when there is no real change, due to updates of the field 'lastHeartbeatTime' in status.conditions. Is there a way to ignore changes in the above-mentioned field, or is there a way to specify additional filtering options?
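For context, the closest I can get today is a `when=` callback that strips `lastHeartbeatTime` before comparing (sketch below; the comparison helper is my own, and I am assuming `old`/`new` are passed to the callback for a field-scoped handler):

```python
import kopf

def conditions_really_changed(old, new, **_):
    """Ignore lastHeartbeatTime when deciding whether node conditions changed."""
    def strip(conds):
        return [{k: v for k, v in c.items() if k != 'lastHeartbeatTime'} for c in (conds or [])]
    return strip(old) != strip(new)

@kopf.on.update('node', field='status.conditions', when=conditions_really_changed)
async def node_status(body, status, logger, **kwargs):
    ...
```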
### Proposal
_No response_
### Code
_No response_
### Additional information
_No response_ | closed | 2022-07-12T23:17:28Z | 2022-07-13T21:50:09Z | https://github.com/nolar/kopf/issues/935 | [
"enhancement"
] | sribee | 1 |
akfamily/akshare | data-science | 5634 | AKShare API Issue Report | New Interface Request | Problem description:
In stock_board_industry_ths.py, I suggest adding an interface that can fetch the current day's price changes (rise/fall) for all companies in a given industry.
For example, the electric motor industry: https://q.10jqka.com.cn/thshy/detail/code/881277/
 | closed | 2025-02-16T16:10:31Z | 2025-02-17T07:27:50Z | https://github.com/akfamily/akshare/issues/5634 | [
"bug"
] | tianhaisuifeng | 0 |
vitalik/django-ninja | django | 1,216 | Pagination does not work with code response - [BUG] | **Describe the bug**
If you add the @paginate decorator, you can't return a tuple where the first element is the response code, because it throws a ValidationError.
Both return paths in the following code throw a ValidationError:
```
@router.get("/{post_id}/replies/", response={200: List[ReplyResponse], 400: Error, 404: Error})
@paginate
def list_replies_for_post(request, post_id: str):
try:
post = get_post_by_id(post_id)
if post is None:
raise Post.DoesNotExist(f"Post with id {post_id} does not exist")
replies = Post.approved.filter(parent=post)
return 200, replies
except Post.DoesNotExist as e:
return 404, {"message": str(e)}
```
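The only workaround I have found so far is to raise an error for the non-200 path instead of returning a status tuple, so the paginated return stays a plain queryset (a sketch reusing the names from my snippet above; it changes the error-handling style, so it is not a real fix):

```python
from typing import List
from ninja.errors import HttpError

@router.get("/{post_id}/replies/", response=List[ReplyResponse])
@paginate
def list_replies_for_post(request, post_id: str):
    post = get_post_by_id(post_id)
    if post is None:
        # Raising instead of returning (404, {...}) keeps the paginator away from the tuple
        raise HttpError(404, f"Post with id {post_id} does not exist")
    return Post.approved.filter(parent=post)
```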
**Versions (please complete the following information):**
- Python version: 3.12
- Django version: 5.0.6
- Django-Ninja version: 1.1.0
- Pydantic version: 2.7.4
Note you can quickly get this by running this line in `./manage.py shell`:
```
import django; import pydantic; import ninja; django.__version__; ninja.__version__; pydantic.__version__
```
| closed | 2024-07-03T00:18:16Z | 2024-07-05T00:14:30Z | https://github.com/vitalik/django-ninja/issues/1216 | [] | pertile | 1 |
docarray/docarray | fastapi | 1,497 | Support dealing with DocList/DocDict that does not fit in memory | # Context
the objective is to support dealing with DocArray that doesn't fit in memory.
How it will work is not well defined but we would like to not mix up the concept of in memory and on disk like we did with DocArray v1.
This would also be different from DocIndex, which is used for vector retrieval only, and DocStore, which is used to store data.
## Possible interface
This is just a draft, no promise that it will look like this.
```python
from docarray import BaseDoc
from docarray.typing import NdArray

class MyDoc(BaseDoc):
    embedding: NdArray
    data: str  # could be whatever

super_docs = DocSomething(sqlite_file='')  ## name of the concept that will save documents on disk

for docs in super_docs.batch(batch_size=64):
    assert isinstance(docs, DocList)  # in memory / the same will work with DocDict
    docs.embedding = embed(docs.data)
    super_docs.save(docs)
```
Potential improvement: we could have a context manager to do this
```python
for docs in super_docs.batch(batch_size=64):
    with docs.sync(super_docs):
        assert isinstance(docs, DocList)  # in memory / the same will work with DocDict
        docs.embedding = embed(docs.data)
```
| open | 2023-05-05T09:44:03Z | 2023-05-05T09:54:55Z | https://github.com/docarray/docarray/issues/1497 | [] | samsja | 0 |
alteryx/featuretools | data-science | 2,457 | `NumWords` returns wrong answer when text with multiple spaces is passed in | ```
NumWords().get_function()(pd.Series(["hello   world"]))
```
Returns 4. Adding another space would return 5.
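A minimal comparison outside the library shows the difference (my own sketch: `split()` with no argument collapses runs of whitespace, while `split(' ')` does not):

```python
text = "hello   world"        # three spaces between the words
print(len(text.split(' ')))   # 4 -> the empty strings between spaces are counted
print(len(text.split()))      # 2 -> whitespace runs collapsed, the expected word count
```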
The issue is with how the number of words is counted. Consecutive spaces should be collapsed into one. | closed | 2023-01-19T22:02:47Z | 2023-02-09T18:30:43Z | https://github.com/alteryx/featuretools/issues/2457 | [
"bug"
] | sbadithe | 0 |
feature-engine/feature_engine | scikit-learn | 404 | multivariate imputation | In multivariate imputation, we estimate the values of missing data using regression or classification models based on the other variables in the data.
The IterativeImputer only allows us to use either regression or classification. But often we have binary, discrete and continuous variables in our datasets, so we would like to use a suitable model for each variable to carry out the imputation.
Can we design a transformer that does exactly that?
It would either recognise binary, multiclass and continuous variables or ask the user to enter them, and then train suitable models to predict the values of the missing data, for each variable type. A toy sketch of the idea follows.
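For discussion, a very rough toy illustration of the behaviour I mean (my own sketch, not a proposed API; it assumes an all-numeric dataframe and uses a crude cardinality threshold to decide between a classifier and a regressor):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def impute_by_type(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    for col in df.columns[df.isna().any()]:
        known = df[col].notna()
        X = df.drop(columns=col)
        X = X.fillna(X.median())                      # crude fill so the predictors are complete
        if df[col].dropna().nunique() <= 10:          # treat low-cardinality columns as discrete
            model = RandomForestClassifier(n_estimators=50, random_state=0)
        else:
            model = RandomForestRegressor(n_estimators=50, random_state=0)
        model.fit(X[known], df.loc[known, col])
        df.loc[~known, col] = model.predict(X[~known])
    return df
```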
| open | 2022-03-30T15:56:27Z | 2022-08-20T16:35:08Z | https://github.com/feature-engine/feature_engine/issues/404 | [] | solegalli | 4 |
ClimbsRocks/auto_ml | scikit-learn | 267 | warm start for lightgbm | open | 2017-07-04T06:56:05Z | 2017-07-04T06:56:05Z | https://github.com/ClimbsRocks/auto_ml/issues/267 | [] | ClimbsRocks | 0 |
|
sunscrapers/djoser | rest-api | 573 | User updated signal | I have my users (optionally) signed up to an external mailing list for marketing.
It would be really useful if this library could send out a user_updated signal that I could connect to in order to update the user's details on that external service when they change any of their details.
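For illustration, this is the kind of hook I mean (the `user_updated` signal below is hypothetical — it is exactly what I am asking for — and the mailing-list call is a placeholder):

```python
from django.dispatch import receiver
import djoser.signals

@receiver(djoser.signals.user_updated)  # hypothetical signal, does not exist today
def sync_user_to_mailing_list(sender, user, request, **kwargs):
    # Push the changed details to the external marketing/mailing-list service
    update_external_mailing_list(user.email)  # placeholder for the real API call
```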
Looking at things as they are, I think I'm right in saying that I'd have to override the view code to do that at the moment. | open | 2021-01-05T16:18:47Z | 2021-01-05T16:18:47Z | https://github.com/sunscrapers/djoser/issues/573 | [] | bodgerbarnett | 0 |
erdewit/ib_insync | asyncio | 605 | Incorrect Contract "lastTradeDateOrContractMonth" | Version: TWS 10.19.2a, ib_insync 0.9.85
During TWS login, choose timezone=`Asia/Shanghai` UTC+8 (`US/Eastern` is UTC-4 or UTC-5). Below is a minimal code to reproduce the issue:
```
import ib_insync as ibi
ib = ibi.IB()
ib.connect('127.0.0.1', 7496, 1234)
opt = ibi.Contract(conId=610457880) # AAPL monthly option expired on 2025-12-19 US/Eastern
ib.qualifyContracts(opt)
print(opt.lastTradeDateOrContractMonth) # '20251219'
opt = ibi.Contract(conId=629286394) # AAPL weekly option expired on 2023-06-23 US/Eastern
ib.qualifyContracts(opt)
print(opt.lastTradeDateOrContractMonth) # '20230624', but expect '20230623'
# ^ ^
ib.disconnect()
```
In case the weekly option above has expired, any soon-to-expire weekly option contract would reproduce the same issue.
I believe the issue originates from `ib_insync/decoder.py` line 327, during handling of the `lastTimes`. For the monthly option, `lastTimes` reads `'20251219'`; while for the weekly option, it reads `'20230624 04:00:00 Asia/Shanghai'`. | closed | 2023-06-20T15:12:50Z | 2023-07-02T11:10:31Z | https://github.com/erdewit/ib_insync/issues/605 | [] | phcchan | 1 |
tensorflow/tensor2tensor | deep-learning | 1,742 | registry.list_models() returns un empty list | ### Description
Tensor2Tensor library can't find any of its predefined models! I wanted to use the Transformer but finally, I found that it contains absolutely no single model, because `registry.list_models()` returned an empty list.
### Environment information
```
OS: Manjaro Linux 64-bit with kernel 4.19.81 and KDE Plasma 5.17.2
$ pip freeze | grep tensor
mesh-tensorflow==0.1.4
tensor2tensor==1.14.1
tensorboard==1.15.0
tensorflow==1.15.0
tensorflow-datasets==1.3.0
tensorflow-estimator==1.15.1
tensorflow-gan==2.0.0
tensorflow-hub==0.7.0
tensorflow-metadata==0.15.0
tensorflow-probability==0.7.0
$ python -V
Python 3.7.4
```
### For bugs: reproduction and error logs
```
>>> from tensor2tensor.utils import registry
>>> registry.list_models()
[]
```
# Steps to reproduce:
Import the `registry` module then `list_models()`:
```
>>> from tensor2tensor.utils import registry
>>> registry.list_models()
[]
```
| closed | 2019-11-13T03:55:23Z | 2019-11-13T11:28:18Z | https://github.com/tensorflow/tensor2tensor/issues/1742 | [] | Hamza5 | 1 |
psf/black | python | 3,929 | Line wrapping on generic functions using the type parameter syntax | **Describe the style change**
The suggested style change regards the type parameter syntax introduced in Python 3.12.
When formatting lines that are too long, I think line-wrapping should occur in the arguments' parentheses rather than in the type parameters' brackets.
**Examples in the current _Black_ style**
This would work with any generic function using the new type parameter syntax with a long enough signature (longer than the `line-length`).
```python
type SupportedDataTypes = float
type Tensor = list
def sum_backward[
T: SupportedDataTypes
](a: Tensor[T], grad: Tensor[T]) -> dict[Tensor[T], Tensor[T]]:
...
```
**Desired style**
I think this kind of line-wrapping, keeping the type parameters list on a single line when possible, should be preferred over what currently happens as it is more readable.
```python
type SupportedDataTypes = float
type Tensor = list
def sum_backward[T: SupportedDataTypes](
a: Tensor[T],
grad: Tensor[T]
) -> dict[Tensor[T], Tensor[T]]:
...
```
| closed | 2023-10-06T23:33:42Z | 2024-12-18T03:09:17Z | https://github.com/psf/black/issues/3929 | [
"T: style",
"S: accepted"
] | PaulWassermann | 4 |
moshi4/pyCirclize | matplotlib | 56 | Massive size on save to svg - vectors of sector borders and some other lines have tons of nodes | 
Hello! I would like to save pycirclize plots to svg to submit for publication in vector format (required for the journal).
I found that upon saving to svg, the output files are massive and unwieldy in any editor. The .svg is >1 MB.
Opening the .svg file in inkscape, I found that the black borders around the sector are being rendered as hundreds of tiny points, rather than as a continuous curve in the vector.
Is there any way the code can be adjusted so that it can save to vector in a more efficient way? Arrows and links seem to work fine, it is just those border boxes so far as I can tell.
To reproduce, you can use the Example 1. Circos Plot and change the save command to:
`circos.savefig("example01.svg")`
This also occurs in phylogenetic trees, where the lines consist of more nodes than should be necessary (although I am unsure about how vector rendering works):

Thank you so much for this program which renders really compelling images. | closed | 2024-02-17T13:21:23Z | 2024-05-03T02:32:05Z | https://github.com/moshi4/pyCirclize/issues/56 | [
"enhancement"
] | dnjst | 1 |
comfyanonymous/ComfyUI | pytorch | 7,072 | ComfyUI Interface frozen. | ### Expected Behavior
I have the problem that as soon as I open the ComfyUI launcher it freezes. Now I know that the problem is
ComfyUI-AnimateDiff-Evolved: as soon as I disable it, I can move again. The problem is that I need it for my project: https://civitai.com/models/559596?modelVersionId=713139
I had this problem after I updated ComfyUI today.
Thank you guys in advance.
### Actual Behavior
<img width="1472" alt="Image" src="https://github.com/user-attachments/assets/a2090635-d684-4034-a4af-3233caf60622" />
### Steps to Reproduce
.
### Debug Logs
```powershell
.
```
### Other
_No response_ | open | 2025-03-04T14:06:38Z | 2025-03-14T00:26:35Z | https://github.com/comfyanonymous/ComfyUI/issues/7072 | [
"Custom Nodes Bug"
] | FilipeFilipeFilipe | 6 |
ultralytics/yolov5 | deep-learning | 12,860 | Extracting Features from Specific Layers of the YOLOv5x6 Model | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I want to understand how to extract features from a specific layer of the YOLOv5x6 model (that is, input an image and output a fixed-dimensional feature vector, regardless of how many objects are detected).
I've seen a few existing issues, most of which are quite old, and [the most recent one](https://github.com/ultralytics/yolov5/issues/12214#issuecomment-1755464776) mentions a models/yolov5l.py file, but I couldn't find this file in the v7.0 version. Can you provide the method for extracting features in the v7.0 version? It would be even better if you could provide a simple example code.
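For reference, the generic PyTorch way to do this is a forward hook on whichever intermediate module you want to tap; a rough sketch follows. The layer index and the attribute nesting into the hub-loaded model are assumptions (they depend on the AutoShape/DetectMultiBackend wrappers and the model yaml), not official v7.0 guidance.
```python
import torch
import torch.nn.functional as F

# Assumes the hub repo is reachable or cached locally.
model = torch.hub.load("ultralytics/yolov5", "yolov5x6")

features = {}

def save_features(module, inputs, output):
    # Global-average-pool the feature map into one fixed-length vector per image.
    features["vec"] = F.adaptive_avg_pool2d(output, 1).flatten(1)

# Hypothetical path: AutoShape -> DetectMultiBackend -> DetectionModel -> nn.Sequential.
backbone = model.model.model.model
backbone[9].register_forward_hook(save_features)  # index 9 is only an example

model("https://ultralytics.com/images/zidane.jpg")
print(features["vec"].shape)  # (1, C): a fixed-dimensional embedding for the image
```
This sidesteps the missing `models/yolov5l.py` entirely, since hooks work on any loaded model.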
### Additional
_No response_ | closed | 2024-03-28T06:14:41Z | 2024-05-09T00:21:01Z | https://github.com/ultralytics/yolov5/issues/12860 | [
"question",
"Stale"
] | Bycqg | 4 |
pytorch/vision | computer-vision | 8,565 | Torchvision Normalize zero checks and tensor creation in constructor | ### 🚀 The feature
Move checks for zeros and tensor creation to __init__ of torchvision transform normalize
https://pytorch.org/vision/main/_modules/torchvision/transforms/transforms.html#Normalize
### Motivation, pitch
The mean and std attributes normally don't change, so validating them and creating tensors from the arguments once in the constructor, and then doing the calculation on those tensors directly instead of calling F.normalize on every call, is faster. Marking the attributes as private should be enough to caution against changing their values.
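For illustration, a rough sketch of the idea (this is not the actual torchvision implementation; shape and dtype handling are simplified):
```python
import torch

class FastNormalize(torch.nn.Module):
    """Sketch: validate mean/std and build the tensors once, in the constructor."""

    def __init__(self, mean, std, inplace=False):
        super().__init__()
        self._mean = torch.as_tensor(mean, dtype=torch.float32).view(-1, 1, 1)
        std_t = torch.as_tensor(std, dtype=torch.float32)
        if (std_t == 0).any():
            raise ValueError("std evaluated to zero, leading to division by zero")
        self._std = std_t.view(-1, 1, 1)
        self.inplace = inplace

    def forward(self, tensor):
        # No per-call validation or tensor construction, just the arithmetic.
        if not self.inplace:
            tensor = tensor.clone()
        return tensor.sub_(self._mean).div_(self._std)
```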
### Alternatives
Leave as is
### Additional context
_No response_ | closed | 2024-07-31T17:40:03Z | 2024-08-09T13:53:29Z | https://github.com/pytorch/vision/issues/8565 | [] | heth27 | 3 |
ageitgey/face_recognition | machine-learning | 1,210 | How to get all face images' encoding embedding rather than some face images' are None? | * face_recognition version:1.2.3
* Python version:3.7
* Operating System:win10
### Description
I need to compute encoding embeddings for 1,000 face images, but for some of the images the embedding I get from face_recognition is None (empty). Does the model need to be trained again? What should I do to successfully encode all 1,000 face images in my own database? In other words, is there a limit on which faces can be recognized?
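For context, the empty result usually comes from no face being detected in that particular image: `face_encodings()` returns an empty list in that case. A minimal loop that shows (and partially works around) the situation might look like this, with file names made up:
```python
import face_recognition

# Hypothetical file name; any image from the database would do.
image = face_recognition.load_image_file("person_001.jpg")

encodings = face_recognition.face_encodings(image)
if not encodings:
    # No face was found by the default HOG detector; retrying with the slower
    # CNN detector sometimes recovers faces the HOG model misses.
    locations = face_recognition.face_locations(image, model="cnn")
    encodings = face_recognition.face_encodings(image, known_face_locations=locations)

if encodings:
    embedding = encodings[0]  # 128-dimensional vector
else:
    print("Still no face detected in this image")
```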
### What I Did
```
```
| open | 2020-08-30T02:15:59Z | 2020-11-20T15:48:01Z | https://github.com/ageitgey/face_recognition/issues/1210 | [] | Piaoyyt | 1 |
simple-login/app | flask | 1,476 | [enhancement] Enable file upload to non-AWS object storage | It would be great if it was possible to upload files to an S3 API compatible object storage service different from AWS S3, such as Google Cloud Platform, Oracle Cloud, self-hosted Minio and many others.
This could be achieved by adding support for the following variables:
- `AWS_URL`: URL of the object storage endpoint (mandatory)
- `AWS_CA_BUNDLE`: The path to a custom certificate bundle to use when establishing SSL/TLS connections (optional)
- `AWS_USE_SSL`: Whether or not to use SSL (default `true`)
- `AWS_VERIFY_SSL`: Whether or not to verify SSL certificates (default `true`)
Such variables could then be used in `boto3.client()` or `boto3.resource()` to connect to alternative object storage services (using `AWS_URL`) and potentially to self-hosted ones (with their own CA, with self-signed certificates, or even without SSL) using the other variables.
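For illustration, a minimal sketch of how the proposed variables might be passed to boto3 (the bucket/file names are made up, and the exact wiring into SimpleLogin's config module is an assumption):
```python
import os
import boto3

# Hypothetical: read the proposed settings from the environment.
verify = os.environ.get("AWS_CA_BUNDLE") or (os.environ.get("AWS_VERIFY_SSL", "true") == "true")

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("AWS_URL"),  # e.g. a MinIO, OCI or other S3-compatible endpoint
    use_ssl=os.environ.get("AWS_USE_SSL", "true") == "true",
    verify=verify,  # either a path to a CA bundle or a boolean
    aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY"),
)
s3.upload_file("attachment.eml", "simplelogin-uploads", "attachment.eml")
```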
Examples of boto3 usage with non-AWS S3 API compatible object storage services:
https://medium.com/oracledevs/use-aws-python-sdk-to-upload-files-to-oracle-cloud-infrastructure-object-storage-b623e5681412
https://docs.outscale.com/en/userguide/Boto3.html | open | 2022-12-08T22:05:29Z | 2024-06-20T05:27:58Z | https://github.com/simple-login/app/issues/1476 | [] | buxm | 1 |
huggingface/transformers | pytorch | 36,541 | Wrong dependency: `"tensorflow-text<2.16"` | ### System Info
- `transformers` version: 4.50.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.29.1
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
### Who can help?
@stevhliu @Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to install the packages needed for creating a PR to test my changes. Running `pip install -e ".[dev]"` from this [documentation](https://huggingface.co/docs/transformers/contributing#create-a-pull-request) results in the following error:
```markdown
ERROR: Cannot install transformers and transformers[dev]==4.50.0.dev0 because these package versions have conflicting dependencies.
The conflict is caused by:
transformers[dev] 4.50.0.dev0 depends on tensorflow<2.16 and >2.9; extra == "dev"
tensorflow-text 2.8.2 depends on tensorflow<2.9 and >=2.8.0; platform_machine != "arm64" or platform_system != "Darwin"
transformers[dev] 4.50.0.dev0 depends on tensorflow<2.16 and >2.9; extra == "dev"
tensorflow-text 2.8.1 depends on tensorflow<2.9 and >=2.8.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip to attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
```
This happens because of the specification of `tensorflow-text<2.16` here:
https://github.com/huggingface/transformers/blob/c0c5acff077ac7c8fe68a0fdbad24306dbd9d4e3/setup.py#L179
### Expected behavior
`transformers[dev]` requires `tensorflow` above version 2.9, while `tensorflow-text` explicitly restricts TensorFlow to versions below 2.9.
Also, there is no 2.16 version of either `tensorflow` or `tensorflow-text`:
https://pypi.org/project/tensorflow/#history
https://pypi.org/project/tensorflow-text/#history
```markdown
INFO: pip is looking at multiple versions of transformers[dev] to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement tensorflow<2.16,>2.9; extra == "dev" (from transformers[dev]) (from versions: 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0)
ERROR: No matching distribution found for tensorflow<2.16,>2.9; extra == "dev"
```
What is the correct `tensorflow-text` version? | open | 2025-03-04T15:23:19Z | 2025-03-07T00:14:03Z | https://github.com/huggingface/transformers/issues/36541 | [
"bug"
] | d-kleine | 6 |
gradio-app/gradio | deep-learning | 9,896 | Dropdown's preprocessing error when using a dropdown in a function without refreshing the frontend after starting the backend | ### Describe the bug
An error occurs when attempting to use a dropdown in a function. This issue specifically arises if the backend is restarted without refreshing the frontend.
Here is the error:
`\Lib\site-packages\gradio\components\dropdown.py", line 202, in preprocess
raise Error(
gradio.exceptions.Error: 'Value: Value 1 is not in the list of choices: []'
Keyboard interruption in main thread... closing server.`
This did not happen in Gradio 4.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
I've created a small Gradio app to reproduce the dropdown issue when the backend is restarted without refreshing the frontend.
```python
import gradio as gr
def set_interface():
dropdown_choices = ["Option 1", "Option 2", "Option 3"]
dropdown = gr.Dropdown(choices=dropdown_choices, label="Select an option", visible=True, allow_custom_value=True)
textbox = gr.Textbox(label="Selected option", visible= True)
button = gr.Button("Submit", visible=True)
return (dropdown, textbox, button)
```
```python
import gradio as gr
from set_front import set_interface
def update_textbox(choice):
return f"You selected: {choice}"
with gr.Blocks() as demo:
dropdown = gr.Dropdown(visible=False)
textbox = gr.Textbox(visible=False)
button=gr.Button(visible=False)
button.click(
fn=lambda : gr.update(choices=["Value 1", "Value 2", "Value 3"]),
outputs=dropdown,
)
dropdown.change(fn=update_textbox, inputs=dropdown, outputs=textbox)
demo.load(
fn=set_interface,
outputs=[dropdown, textbox, button]
)
gr.close_all()
demo.launch()
```
### Screenshot
Initially :

Then if I click on my button it changes the dropdown.

Until there everything is going well.
But now if I restart my backend and I choose an input it raises an error:

Steps to Reproduce:
Start the application and observe the initial state of the dropdown.
Click the button to change the dropdown.
Restart the backend.
Attempt to select an input from the dropdown.
Expected Behavior: The dropdown should function correctly without errors after the backend is restarted.
Actual Behavior: An error is raised when selecting an input from the dropdown after restarting the backend.
### Logs
```shell
`\Lib\site-packages\gradio\components\dropdown.py", line 202, in preprocess
raise Error(
gradio.exceptions.Error: 'Value: Value 1 is not in the list of choices: []'
Keyboard interruption in main thread... closing server.`
```
### System Info
```shell
gradio version: 5.4.0
gradio_client version: 1.4.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.4.2 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.1
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.7.2
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.12.5
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
Blocking usage of gradio | closed | 2024-11-04T14:28:11Z | 2024-11-05T05:10:55Z | https://github.com/gradio-app/gradio/issues/9896 | [
"bug"
] | GuizMeuh | 1 |
mithi/hexapod-robot-simulator | dash | 81 | Ideas to improve code quality | The variable trio isn't being used at all. Should we even return it, or should we just return the n axis and the height?
https://github.com/mithi/hexapod-robot-simulator/blob/531fbb34d44246de3a0116def7e3d365de25b9f6/hexapod/ground_contact_solver/ground_contact_solver.py#L39 | closed | 2020-04-21T21:49:08Z | 2020-04-23T14:18:53Z | https://github.com/mithi/hexapod-robot-simulator/issues/81 | [
"code quality"
] | mithi | 2 |
OpenBB-finance/OpenBB | python | 6,770 | [🕹️] Write a Article Comparing OpenBB and Other Financial Tools | ### What side quest or challenge are you solving?
Write an Article Comparing OpenBB and Other Financial Tools
### Points
300
### Description
Wrote an article on [Medium](https://medium.com) comparing OpenBB and other financial tools
### Provide proof that you've completed the task
» 14-October-2024 by Ayan Mondal » Link to Article : https://medium.com/@ayanmondal1805/openbb-vs-other-financial-tools-a-simple-comparison-for-investors-72d7ceb7d414 | closed | 2024-10-14T09:16:16Z | 2024-10-19T23:37:20Z | https://github.com/OpenBB-finance/OpenBB/issues/6770 | [] | trinetra110 | 8 |
coqui-ai/TTS | python | 2,386 | [Feature request] Add a hint on how to solve "not enough sample" | <!-- Welcome to the 🐸TTS project!
We are excited to see your interest, and appreciate your support! --->
**🚀 Feature Description**
Hi,
In section "SPeaker Adaptation" of [YourTTS paper](https://arxiv.org/pdf/2112.02418.pdf) it is mentioned that the experiment is conducted with eg 15 samples. I am trying to reproduce this with the YourTTS recipe.
However, trying to reproduce this experiment with such a small number of samples leads to the `You do not have enough samples for the evaluation set. You can work around this setting the 'eval_split_size' parameter to a minimum of` error message, since at least 100 samples are needed for the default `eval_split_size` value (0.01, i.e. 1%) to yield a non-zero integer.
But this error happens in the "compute_embeddings" section of the recipe, which cannot change the eval split size (correct me if I am wrong).
**Solution**
One way of solving this is to provide the `meta_file_val` parameter in the dataset config and manually list in that `metadata_val.csv` the sample(s) to use for validation (even though it won't be used by the compute-embeddings script).
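For illustration, a dataset config along these lines is what I mean (paths and file names are made up, and exact field names can differ slightly between TTS versions):
```python
from TTS.tts.configs.shared_configs import BaseDatasetConfig

dataset_config = BaseDatasetConfig(
    formatter="ljspeech",
    dataset_name="my_speaker",
    path="/data/my_speaker",
    meta_file_train="metadata_train.csv",  # e.g. 14 adaptation samples
    meta_file_val="metadata_val.csv",      # e.g. 1 manually chosen validation sample
    language="en",
)
```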
Consequently, for better user-friendliness, "or provide a meta_file_val for this dataset" could be appended to the previous error message.
If you agree I can write this change.
**Alternative Solutions**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| closed | 2023-03-06T04:26:10Z | 2023-04-19T14:12:37Z | https://github.com/coqui-ai/TTS/issues/2386 | [
"help wanted",
"wontfix",
"feature request"
] | Ca-ressemble-a-du-fake | 3 |
Python3WebSpider/ProxyPool | flask | 57 | Wrong URL in the code causes retrying.RetryError: RetryError[Attempts: 3, Value: None] | **Error from the retrying module caused by a wrong URL**
* proxypool.crawlers.daili66.BASE_URL: http://www.664ip.cn/{page}.html , the domain in this URL appears to be mistyped; changing it to www.66ip.cn makes it run normally | closed | 2020-02-21T07:19:56Z | 2020-02-21T14:05:11Z | https://github.com/Python3WebSpider/ProxyPool/issues/57 | [
"bug"
] | zyrsss | 2 |
voxel51/fiftyone | data-science | 5,427 | [DOCS] Is there a documentation for converting video dataset into frames or images and vice versa? | ### URL(s) with the issue
https://docs.voxel51.com/recipes/convert_datasets.html
### Description of proposal (what needs changing)
Title. As the link I refer, there are some things I want to talk. I think this documentation only covers the dataset conversion for the same media type, for example, image to image. With that, I want to ask:
1. Is there a documentation or at least a tutorial to convert the video to image dataset and vice versa?
2. At that link, looks like the notebook heavily uses CLI commands. Is there a python friendly tutorial that reduces CLI usage?
That is all I can say for now. I will update later.
### Willingness to contribute
The FiftyOne Community encourages documentation contributions. Would you or another member of your organization be willing to contribute a fix for this documentation issue to the FiftyOne codebase?
- [ ] Yes. I can contribute a documentation fix independently
- [x] Yes. I would be willing to contribute a documentation fix with guidance from the FiftyOne community
- [ ] No. I cannot contribute a documentation fix at this time
| open | 2025-01-23T15:18:33Z | 2025-02-09T18:41:04Z | https://github.com/voxel51/fiftyone/issues/5427 | [
"documentation"
] | DWSuryo | 4 |
coqui-ai/TTS | deep-learning | 3,074 | [Bug] json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) | ### Describe the bug
Can't generate with Coqui Studio voices
### To Reproduce
When running
tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False)
tts.tts_to_file(text="This is a test.", file_path="out.wav")
I get that error json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
### Expected behavior
Voice generated
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce GTX 1650"
],
"available": true,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.0+cu121",
"TTS": "0.17.8",
"numpy": "1.22.0"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "Intel64 Family 6 Model 165 Stepping 5, GenuineIntel",
"python": "3.9.16",
"version": "10.0.19045"
}
}
```
### Additional context
Also I find that voices generated with xTTS local are far from what is generated in Coqui Studio, any explanation to that? | closed | 2023-10-17T14:41:55Z | 2023-11-03T19:54:14Z | https://github.com/coqui-ai/TTS/issues/3074 | [
"bug"
] | darkzbaron | 11 |
facebookresearch/fairseq | pytorch | 4,938 | How to choose which value for --arch???? | ## What is your question?
While finetuning a model I noticed that I get an architecture mismatch error when I change the value for `--arch`. In this case the value for `--arch` was specified in a working example, but in future cases where there is no value for `--arch` specified, how do I know which one to choose considering how large the list of possible values are?

| open | 2023-01-09T20:53:16Z | 2023-01-09T20:54:34Z | https://github.com/facebookresearch/fairseq/issues/4938 | [
"question",
"needs triage"
] | FayZ676 | 0 |
keras-team/keras | data-science | 20,463 | BackupAndRestore callback sometimes can't load checkpoint | When training is interrupted, the model sometimes cannot restore its weights with the BackupAndRestore callback.
```python
Traceback (most recent call last):
File "/home/alex/jupyter/lab/model_fba.py", line 150, in <module>
model.fit(train_dataset, callbacks=callbacks, epochs=NUM_EPOCHS, steps_per_epoch=STEPS_PER_EPOCH, verbose=2)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 113, in error_handler
return fn(*args, **kwargs)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 311, in fit
callbacks.on_train_begin()
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/callbacks/callback_list.py", line 218, in on_train_begin
callback.on_train_begin(logs)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/callbacks/backup_and_restore.py", line 116, in on_train_begin
self.model.load_weights(self._weights_path)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 113, in error_handler
return fn(*args, **kwargs)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/models/model.py", line 353, in load_weights
saving_api.load_weights(
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/saving/saving_api.py", line 251, in load_weights
saving_lib.load_weights_only(
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/saving/saving_lib.py", line 550, in load_weights_only
weights_store = H5IOStore(filepath, mode="r")
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/saving/saving_lib.py", line 931, in __init__
self.h5_file = h5py.File(root_path, mode=self.mode)
File "/home/alex/.local/lib/python3.10/site-packages/h5py/_hl/files.py", line 561, in __init__
fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
File "/home/alex/.local/lib/python3.10/site-packages/h5py/_hl/files.py", line 235, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 102, in h5py.h5f.open
OSError: Unable to synchronously open file (bad object header version number)
``` | closed | 2024-11-07T05:57:29Z | 2024-11-11T16:49:36Z | https://github.com/keras-team/keras/issues/20463 | [
"type:Bug"
] | shkarupa-alex | 1 |
jina-ai/serve | machine-learning | 5,974 | Which version is the stable version??? | jina docarray | closed | 2023-07-19T03:16:04Z | 2023-07-19T08:01:51Z | https://github.com/jina-ai/serve/issues/5974 | [] | yuanjie-ai | 2 |
gradio-app/gradio | python | 10,217 | IndexError when creating a pure API endpoint | ### Describe the bug
For unit testing purposes, I wanted to create a pure API endpoint, i.e. a function that could be called from `gradio_client` but wouldn't show in the UI. So I tried:
```python
gr.on ([], fn = test, show_api = False)
```
but this throws an `IndexError` in `set_event_trigger`.
Note: this is similar to #6730 but slightly different: in #6730, passing `None` is expected to trigger on all changes. Here I'm attempting to never trigger from the UI and only be able to call the function with an explicit REST request.
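For reference, the only workaround I can think of is to hang the function off a hidden component's event so the trigger list is non-empty (sketch only, and it still registers a UI component):
```python
with gr.Blocks() as demo:
    hidden = gr.Button(visible=False)
    hidden.click(fn=test, api_name="test")

# client.predict(api_name="/test") then works as expected.
```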
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import gradio_client as gc
def test():
print ("Test!")
return []
with gr.Blocks() as demo:
gr.on ([], fn = test, show_api = False)
demo.launch (prevent_thread_lock = True, server_port = 7860)
client = gc.Client ("http://127.0.0.1:7860")
client.predict (api_name = "/test")
```
### Screenshot
_No response_
### Logs
```shell
Traceback (most recent call last):
File "test_gradio.py", line 9, in <module>
gr.on ([], fn = test, show_api = False)
File ".venv/lib/python3.11/site-packages/gradio/events.py", line 820, in on
dep, dep_index = root_block.set_event_trigger(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/gradio/blocks.py", line 784, in set_event_trigger
if _targets[0][1] in ["change", "key_up"] and trigger_mode is None:
~~~~~~~~^^^
IndexError: list index out of range
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.9.1
gradio_client version: 1.5.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.4.0
gradio-client==1.5.2 is not installed.
httpx: 0.28.0
huggingface-hub: 0.26.3
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.12
packaging: 24.2
pandas: 2.2.3
pillow: 10.4.0
pydantic: 2.10.3
pydub: 0.25.1
python-multipart: 0.0.19
pyyaml: 6.0.2
ruff: 0.8.1
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.12.0
typer: 0.15.0
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.28.0
huggingface-hub: 0.26.3
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | closed | 2024-12-17T13:57:48Z | 2025-01-10T19:46:58Z | https://github.com/gradio-app/gradio/issues/10217 | [
"bug",
"needs designing"
] | jeberger | 2 |
getsentry/sentry | python | 87,537 | Real-time mode including issues not matching project filter | ### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
[JAVASCRIPT-2YM5](https://sentry.sentry.io/issues/feedback/?feedbackSlug=javascript%3A6424080044&project=11276)
This screenshot demonstrates the problem:

### Expected Result
Project filtering works correctly.
### Actual Result
Project filtering did not work correctly.
### Product Area
Issues
### Link
_No response_
### DSN
_No response_
### Version
_No response_ | open | 2025-03-20T20:18:06Z | 2025-03-20T20:19:26Z | https://github.com/getsentry/sentry/issues/87537 | [] | mrduncan | 0 |
facebookresearch/fairseq | pytorch | 5,065 | omegaconf.errors.ValidationError: Cannot convert 'DictConfig' to string | Describe the bug
When I used fairseq to load a pretrained model, the following error occurred:
```
omegaconf.errors.ValidationError: Cannot convert 'DictConfig' to string: '{'_name': 'gpt2', 'gpt2_encoder_json': 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json', 'gpt2_vocab_bpe': 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'}'
full_key: bpe object_type=DenoisingConfig
```
To Reproduce
from fairseq.models.bart import BARTModel
d1_bart = BARTModel.from_pretrained(
d1_path,
checkpoint_file=args.d1_model_path.split("/")[-1],
data_name_or_path=d1_data_bin,
) | open | 2023-04-08T11:35:58Z | 2024-04-27T14:11:10Z | https://github.com/facebookresearch/fairseq/issues/5065 | [
"bug",
"needs triage"
] | FayeXXX | 2 |
Kinto/kinto | api | 3,157 | Performance for plural endpoints is suboptimal | Some background: we use Kinto as a synced per-user data store, using a bucket per app deployment/environment, and a collection per user. Users can often have thousands of records (most fairly small, a handful per collection fairly large).
We're using `postgresql` for the `storage` and `permission` backends (and `memcached` for the `cache` backend).
The performance of plural endpoints (e.g. `/v1/buckets/my-app-staging/collections/my-user/records` to get all records in a collection) in the current server implementation is a bit disappointing (ignoring caching).
I've profiled the Kinto server using Sentry, by adding `traces_sample_rate=1.0, _experiments={"profiles_sample_rate": 1.0}` to the `sentry_sdk.init()` call. While the SQL queries themselves take a bit of time, it's also spending a considerable amount of time in library functions.
## JSON deserialisation
Swapping out [`json.loads`](https://docs.python.org/3.8/library/json.html#json.loads) for [`msgspec.json.decode`](https://jcristharif.com/msgspec/api.html#msgspec.msgpack.decode) for SQLAlchemy's JSON deserialisation gives a substantial improvement:
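The swap itself is tiny; something along these lines (assuming SQLAlchemy's `json_deserializer` hook, which is where the decoder is pluggable; the actual Kinto storage backend builds its engine elsewhere):
```python
import msgspec
from sqlalchemy import create_engine

# Hypothetical wiring: override the per-engine JSON decoder.
engine = create_engine(
    "postgresql://user:pass@localhost/kinto",
    json_deserializer=msgspec.json.decode,
)
```
With that change in place, the benchmarks below compare the two decoders: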
```
Benchmark (json.loads): curl http://localhost:8888/v1/buckets/my-app-development/collections/my-user/records
Time (mean ± σ): 1.490 s ± 0.114 s [User: 0.006 s, System: 0.008 s]
Range (min … max): 1.327 s … 1.879 s 100 runs
```
```
Benchmark (msgspec.json.decode): curl http://localhost:8888/v1/buckets/my-app-development/collections/my-user/records
Time (mean ± σ): 1.267 s ± 0.052 s [User: 0.006 s, System: 0.007 s]
Range (min … max): 1.150 s … 1.428 s 100 runs
```
This improved the performance by ~18% for this collection (~3000 records). | open | 2023-03-19T12:48:52Z | 2024-07-23T20:04:51Z | https://github.com/Kinto/kinto/issues/3157 | [
"stale"
] | lesderid | 0 |
jacobgil/pytorch-grad-cam | computer-vision | 91 | hello,grad-cam for c++? | closed | 2021-05-19T05:18:50Z | 2021-05-19T07:23:37Z | https://github.com/jacobgil/pytorch-grad-cam/issues/91 | [] | mx1mx2 | 1 |
|
pydata/xarray | pandas | 9,935 | Use DatasetGroupBy.quantile for DatasetGroupBy.median for multiple groups when using dask arrays | ### Is your feature request related to a problem?
I am grouping data in a Dataset and computing statistics. I wanted to take the median over (two) groups, but I got the following message:
```python
>>> ds.groupby(['x', 'y']).median()
# NotImplementedError: The da.nanmedian function only works along an axis or a subset of axes. The full algorithm is difficult to do in parallel
```
while `ds.groupby(['x']).median()` works without any problem.
I noticed that this issue is because the DataArrays are dask arrays: if they are numpy arrays, there is no problem. In addition, if `.median()` is replaced by `.quantile(0.5)`, there is no problem either. See below:
```python
import dask.array as da
import numpy as np
import xarray as xr
rng = da.random.default_rng(0)
ds = xr.Dataset(
{'a': (('x', 'y'), rng.random((10, 10)))},
coords={'x': np.arange(5).repeat(2), 'y': np.arange(5).repeat(2)}
)
# Raises:
# NotImplementedError: The da.nanmedian function only works along an axis or a subset of axes. The full algorithm is difficult to do in parallel
try:
ds.groupby(['x', 'y']).median()
except NotImplementedError as e:
print(e)
# No problems with the following:
ds.groupby(['x']).median()
ds.groupby(['x', 'y']).quantile(0.5)
ds.compute().groupby(['x', 'y']).median() # Implicit conversion to numpy array
```
### Describe the solution you'd like
A straightforward solution seems to be to use `DatasetGroupBy.quantile(0.5)` for `DatasetGroupBy.median()` if the median is to be computed over multiple groups.
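In the meantime, a small user-side fallback along those lines (illustrative only, not the proposed xarray internals):
```python
def grouped_median(ds, by, **kwargs):
    grouped = ds.groupby(by)
    try:
        return grouped.median(**kwargs)
    except NotImplementedError:
        # dask's nanmedian refuses to reduce over the combined grouped axes,
        # but quantile(0.5) computes the same statistic.
        return grouped.quantile(0.5, **kwargs).drop_vars("quantile")

grouped_median(ds, ["x", "y"])
```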
### Describe alternatives you've considered
_No response_
### Additional context
My `xr.show_versions()`:
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.5 | packaged by conda-forge | (main, Jun 14 2022, 07:06:46) [GCC 10.3.0]
python-bits: 64
OS: Linux
OS-release: 6.8.0-49-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.9.3-development
xarray: 2024.10.0
pandas: 2.2.3
numpy: 1.26.4
scipy: 1.14.1
netCDF4: 1.6.5
pydap: None
h5netcdf: 1.4.1
h5py: 3.12.1
zarr: 2.18.3
cftime: 1.6.4.post1
nc_time_axis: None
iris: None
bottleneck: 1.4.2
dask: 2024.11.2
distributed: None
matplotlib: 3.9.2
cartopy: 0.24.0
seaborn: 0.13.2
numbagg: None
fsspec: 2024.10.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 75.5.0
pip: 24.3.1
conda: None
pytest: None
mypy: None
IPython: 8.29.0
sphinx: 7.4.7
</details> | open | 2025-01-09T14:28:41Z | 2025-01-09T19:36:17Z | https://github.com/pydata/xarray/issues/9935 | [
"upstream issue"
] | adriaat | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 1,564 | Error when Processing (Vocal Only) | Tried to process a voice isolation with 'BS-Roformer-Viperx-1297' and kept getting an error.
Here is the error text:
Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
ZeroDivisionError: "float division by zero"
Traceback Error: "
File "UVR.py", line 6667, in process_start
File "separate.py", line 662, in seperate
File "separate.py", line 798, in demix
File "separate.py", line 326, in running_inference_progress_bar
"
Error Time Stamp [2024-09-22 17:29:48]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: BS-Roformer-Viperx-1297
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: True
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: False
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: MP3
wav_type_set: PCM_16
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: Vocals | open | 2024-09-22T16:31:25Z | 2024-09-22T16:31:25Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1564 | [] | DylanWGaming | 0 |
ansible/awx | automation | 15,062 | AWX Community Meeting Agenda - April 2024 | # AWX Office Hours
## Proposed agenda based on topics
@TheRealHaoLiu fall out of the postgres 15 upgrade
- [Data lost after restarting postgres-15 pod after upgrading to awx-operator 2.13.0 ](https://github.com/ansible/awx-operator/issues/1768)
- [Postgres 15 pod: cannot create directory '/var/lib/pgsql/data/userdata': Permission](https://github.com/ansible/awx-operator/issues/1770)
- [postgres_data_path not respected by postgres-15 causing data lost](https://github.com/ansible/awx-operator/issues/1790)
- [awx migration pod issue with custom ca ](https://github.com/ansible/awx-operator/issues/1782)
@TheRealHaoLiu ARM64 AWX image release coming in future release https://github.com/ansible/awx/pull/15053
This afternoon's release will also contain fixes for the `postgres_data_path` parameter issue as well as the `/var/lib/pgsql/data/userdata` permissions issue, both of which are related to the sclorg postgresql image change. Please refer to the following docs for more information on that:
* https://ansible.readthedocs.io/projects/awx-operator/en/latest/user-guide/database-configuration.html?h=databas#note-about-overriding-the-postgres-image
## What
After a successful Contributor Summit in October 2023, one of the bits of feedback we got was to host a regular time for the Automation Controller (AWX) Team to be available for your folks in the AWX Community, so we are happy to announce a new regular video meeting.
This kind of feedback loop is vital to the success of AWX and the AWX team wants to make it as easy as possible for you - our community - to get involved.
## Where & When
Our next meeting will be held on Tuesday, April 9th, 2024 at [1500 UTC](https://dateful.com/time-zone-converter?t=15:00&tz=UTC)
* [Google Meet](https://meet.google.com/vyk-dfow-cfi)
* Via Phone PIN: 842522378 [Guide](https://support.google.com/meet/answer/9518557)
This meeting is held once a month, on the second Tuesday of the month, at [1500 UTC](https://dateful.com/time-zone-converter?t=15:00&tz=UTC)
## How
Add one topic per comment in this GitHub issue
If you don't have a GitHub account, jump on [#awx:ansible.com](https://matrix.to/#/#awx:ansible.com) on Matrix and we can add the topic for you
## Talk with us
As well as the fortnightly video meeting you can join the Community (inc development team) on Matrix Chat.
* Matrix: [#awx:ansible.com](https://matrix.to/#/#awx:ansible.com) (recomended)
* libera.chat IRC: `#ansible-awx` (If you are already setup on IRC)
The Matrix & IRC channels are bridged, you'll just have a better experience on Matrix
## Links
[AWX YouTube Chanel](https://www.youtube.com/@ansible-awx)
[Previous Meeting](https://github.com/ansible/awx/issues/14969)
Meeting recording
Next Meeting
See you soon!
| closed | 2024-04-03T17:41:51Z | 2024-04-10T19:20:47Z | https://github.com/ansible/awx/issues/15062 | [
"needs_triage"
] | TheRealHaoLiu | 1 |
marshmallow-code/flask-smorest | rest-api | 263 | Marshmallow 'missing' and 'default' attributes of fields are deprecated | # Proposed solution
- Replace missing/default field parameters with load_default/dump_default [marshmallow#1742](https://github.com/marshmallow-code/marshmallow/pull/1742) (see the sketch after this list)
- Upgrade minimal version of marshmallow (probably marshmallow>=3.13.0,<4)
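For illustration, the field-level change in the first item looks roughly like this (schema and field names are made up):
```python
from marshmallow import Schema, fields

class ArgsSchema(Schema):
    # Before (deprecated since marshmallow 3.13):
    #   page = fields.Int(missing=1)
    #   sort = fields.Str(default="asc")
    # After:
    page = fields.Int(load_default=1)
    sort = fields.Str(dump_default="asc")
```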
# How to reproduce?
If `requirements.txt`:
```txt
marshmallow>=3.10.0,<4
```
then: `...site-packages/marshmallow/fields.py:456: RemovedInMarshmallow4Warning: The 'missing' attribute of fields is deprecated. Use 'load_default' instead.`
If `requirements.txt`:
```txt
marshmallow~=3.10.0
```
No error is produced.
# Notes & comments
Note `load_default` and `dump_default` are only available on version 3.13.0 [Marshmallow Changelog](https://marshmallow.readthedocs.io/en/stable/changelog.html?highlight=releases)
| closed | 2021-07-27T08:35:04Z | 2021-07-29T19:36:56Z | https://github.com/marshmallow-code/flask-smorest/issues/263 | [] | BorjaEst | 1 |
proplot-dev/proplot | matplotlib | 257 | Allow nan values in indicate_error | ### Description
It would be nice if `indicate_error` had an option to skip NaN values in the input data. Currently, means and stds are not drawn if there is a NaN value.
### Steps to reproduce
```python
import numpy as np
import proplot as plot
x = np.arange(10)
rng = np.random.default_rng()
y = rng.standard_normal((100, 10))
y[5, 5] = np.nan
fig, ax = plot.subplots()
ax.plot(x, y, means=True)
```
**Actual behavior**: [What actually happened]

### Proplot version
0.6.4 | closed | 2021-07-08T01:07:28Z | 2021-08-19T16:54:21Z | https://github.com/proplot-dev/proplot/issues/257 | [
"enhancement"
] | kinyatoride | 3 |
AirtestProject/Airtest | automation | 1,161 | The command-line version and the pip-installed version do not match | 


| closed | 2023-09-25T02:52:23Z | 2023-10-16T09:30:36Z | https://github.com/AirtestProject/Airtest/issues/1161 | [] | aogg | 2 |
chatanywhere/GPT_API_free | api | 219 | Does the free GPT-3.5 tier support configuring multiple keys when connecting to Openwebui? | Does the free GPT-3.5 tier support configuring multiple keys when connecting to Openwebui? Recently I ran into an error saying that a single key cannot exceed 100 free uses per day.
Also, does the paid tier have limits such as a per-window context limit or a daily chat-count limit? | closed | 2024-04-23T03:25:49Z | 2024-05-23T07:36:10Z | https://github.com/chatanywhere/GPT_API_free/issues/219 | [] | linux-y | 1 |
PaddlePaddle/ERNIE | nlp | 442 | C++ Call Stacks (More useful to developers):0 std::string paddle::platfo | 
I donot know why this happened in some of my kernel on AI studio,while the others not when I am trying to use Baidu AI studio gpu. | closed | 2020-05-18T07:51:24Z | 2020-07-24T09:41:01Z | https://github.com/PaddlePaddle/ERNIE/issues/442 | [
"wontfix"
] | qmzorvim123 | 1 |
microsoft/nni | machine-learning | 5,669 | Model doesn't train after pruning | Hi, I am trying to apply the soft pruning algorithm to a face detector using the FPGMPruner from NNI. I am using the following code:
```
from nni.algorithms.compression.v2.pytorch.pruning import FPGMPruner
config_list = [{
'sparsity_per_layer' : compress_rates_total[0],
'op_types' : ['Conv2d'],
}, {
'exclude' : True,
'op_names' : ['loc.0', 'loc.1', 'loc.2', 'loc.3', 'loc.4', 'loc.5',
'conf.0', 'conf.1', 'conf.2', 'conf.3', 'conf.4', 'conf.5'
]
}]
#prune the model
pruner = FPGMPruner(net, config_list)
pruner.compress()
pruner._unwrap_model()
# Main loop
start_time = time.time()
for epoch in range(args.start_epoch, args.epochs):
#current_learning_rate = adjust_learning_rate(optimizer, epoch, args.epochs, args.learning_rate, 20, 0.0001) #learning rate scheduler
current_learning_rate = adjust_learning_rate(optimizer, epoch, args.gammas, args.schedule)
losses = 0
train(train_loader, net, criterion, optimizer, epoch, losses, current_learning_rate, compress_rates_total)
calc(net)
config_list = [{
'sparsity_per_layer' : compress_rates_total[epoch], #new config list for each epoch for pruning
'op_types' : ['Conv2d'],
}, {
'exclude' : True,
'op_names' : ['loc.0', 'loc.1', 'loc.2', 'loc.3', 'loc.4', 'loc.5',
'conf.0', 'conf.1', 'conf.2', 'conf.3', 'conf.4', 'conf.5'
]
}]
if epoch % args.epoch_prune == 0 or epoch == args.epochs - 1: #here it prunes
pruner = FPGMPruner(net, config_list)
pruner.compress()
pruner._unwrap_model()
print('Model pruned')
torch.save(net.state_dict(), './weights/epoch{}.pth'.format(epoch))
```
After inspecting every model that's been produced at each epoch I realized that the train() function is not updating none of the weights of the model except those on the layers I excluded from pruning ('loc.0', 'loc.1' , etc). Does anybody know why is this happening and how I can fix it ? | closed | 2023-08-22T13:19:42Z | 2023-09-08T12:27:57Z | https://github.com/microsoft/nni/issues/5669 | [] | gkrisp98 | 0 |
Lightning-AI/pytorch-lightning | pytorch | 20,185 | Checkpoint callback run before validation step - stale or none monitor values considered for validation metrics | ### Bug description
I am doing iterative training with `check_val_every_n_epoch=None` and (as an example value) `val_check_interval=10` on my Trainer, together with the matching argument `every_n_train_steps=10` on ModelCheckpoint.
e.g.
```python
checkpoint_callback = ModelCheckpoint(
dirpath=experiment_dir.joinpath("checkpoints"),
filename="checkpoint-{epoch}-{step:06d}-{train_loss:.2f}-{val_loss:.2f}",
save_top_k=checkpoint_top_k,
every_n_train_steps=checkpoint_n_step,
monitor="val_loss",
)
```
It is a [documented](https://lightning.ai/docs/pytorch/stable/common/checkpointing_intermediate.html) usage to make the monitor metric `val_loss`.
The problem is that these values might not exist, giving the [warning](https://github.com/Lightning-AI/pytorch-lightning/blob/1551a16b94f5234a4a78801098f64d0732ef5cb5/src/lightning/pytorch/callbacks/model_checkpoint.py#L378) or they are stale - because val_step is run after the checkpoint has been processed, new val metrics are not considered.
### What version are you seeing the problem on?
v2.3, v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_ | open | 2024-08-10T01:23:28Z | 2024-08-11T02:05:46Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20185 | [
"bug",
"needs triage",
"ver: 2.4.x",
"ver: 2.3.x"
] | PheelaV | 2 |
deepinsight/insightface | pytorch | 1,883 | inference.py | How do I use my own image for face recognition? | open | 2022-01-11T14:11:29Z | 2022-01-11T14:11:29Z | https://github.com/deepinsight/insightface/issues/1883 | [] | Zmmccc | 0 |
recommenders-team/recommenders | data-science | 1,954 | [BUG] SAR needs to be modified due to a breaking change in scipy | ### Description
<!--- Describe your issue/bug/request in detail -->
With scipy 1.10.1, the item similarity matrix is a dense matrix
```
print(type(model.item_similarity))
print(type(model.user_affinity))
print(type(model.item_similarity) == np.ndarray)
print(type(model.item_similarity) == scipy.sparse._csr.csr_matrix)
print(model.item_similarity.shape)
print(model.item_similarity)
<class 'numpy.ndarray'>
<class 'scipy.sparse._csr.csr_matrix'>
True
False
(1646, 1646)
[[1. 0.10650888 0.03076923 ... 0. 0. 0. ]
[0.10650888 1. 0.15104167 ... 0. 0.00729927 0.00729927]
[0.03076923 0.15104167 1. ... 0. 0. 0.01190476]
...
[0. 0. 0. ... 1. 0. 0. ]
[0. 0.00729927 0. ... 0. 1. 0. ]
[0. 0.00729927 0.01190476 ... 0. 0. 1. ]]
```
but with scipy 1.11.1 the item similarity matrix is sparse
```
print(type(model.item_similarity))
print(type(model.user_affinity))
type(model.item_similarity) == np.ndarray
type(model.item_similarity) == scipy.sparse._csr.csr_matrix
print(model.item_similarity.shape)
<class 'numpy.ndarray'>
<class 'scipy.sparse._csr.csr_matrix'>
()
```
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
Related to https://github.com/microsoft/recommenders/issues/1951
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
We found that the issue was that during a division in Jaccard, scipy change the type. We talked to the authors of scipy and they told us that they did a breaking change in 1.11.0 https://github.com/scipy/scipy/issues/18796#issuecomment-1619125257
| closed | 2023-07-03T16:41:19Z | 2024-04-30T04:52:05Z | https://github.com/recommenders-team/recommenders/issues/1954 | [
"bug"
] | miguelgfierro | 9 |
Yorko/mlcourse.ai | scikit-learn | 3 | A short guide to working with the course materials using GitHub | Good afternoon!
First of all, I would like to say that you are doing a great job with this course! The thoroughness with which you launched it is impressive!
Secondly, I would like to suggest a small addition, namely a short guide on how to work with the course materials using Git/GitHub. I don't think all of your students know how to work with Git/GitHub. Judging by last year's course by Yury at HSE, where he also published the materials on GitHub, it was not at all obvious how to update them correctly without losing one's own notes in the lectures.
My guess is that you most likely need to Fork / Clone, probably create your own branch(?), and then do [something like this](http://stackoverflow.com/questions/7200614/how-to-merge-remote-master-to-local-branch), but I would like to know for sure. This might be useful to others as well.
Best regards,
Andrey
| closed | 2017-03-04T09:28:45Z | 2017-03-13T19:54:14Z | https://github.com/Yorko/mlcourse.ai/issues/3 | [
"help wanted"
] | adyomin | 2 |
ray-project/ray | python | 51,393 | Remote Ray Dataloader | ### Description
Related to this issue: https://github.com/ray-project/ray/issues/51392. The remote dataloader keeps the open interface of the PyTorch DataLoader; here it is temporarily named TorchBasedRayDataloader. The implementation principle of the demo is shown below. This way the DataLoader's allocation logic does not need to be intruded upon: Ray is simply treated as a worker for torch. Besides that advantage, it also removes the limitation that torch can only use the resources of the current node. Note that the following example is only a demonstration; issues such as ordering, the GIL, and resume handling still need to be addressed. I wonder whether the community is interested in implementing a remote version of the torch DataLoader. If so, I can contribute it here.
```
import queue
import threading

import ray
from torch.utils.data import DataLoader, IterableDataset

DATA_LOAD_ACTOR_NUM_CPUS = 1  # placeholder for the constant referenced below

@ray.remote(num_cpus=1)
class RemoteIterDatasetActor(IterableDataset):
"""Actually get data from remote loader server
extend : get plan data from Server.
"""
def __init__(
self,
torch_dataloader: DataLoader
):
self.torch_dataloader = torch_dataloader
def __iter__(self):
return iter(self.torch_dataloader)
class DataloaderBasedRay():
def __init__(self, torch_dataloader) -> None:
super().__init__()
self.dataset = RemoteRayIterDataset(torch_dataloader.dataset)
self.queue = queue.Queue()
ray.init()
        self.remote_dataset_actor = RemoteIterDatasetActor \
            .options(num_cpus=DATA_LOAD_ACTOR_NUM_CPUS, scheduling_strategy="SPREAD") \
            .remote(torch_dataloader)
        prefetch_thread = threading.Thread(
            target=self.__prefetch, daemon=True)
        prefetch_thread.start()

    def __prefetch(self):
        # Background thread: ask the remote actor for results and buffer them locally.
        result_ref = self.remote_dataset_actor.__iter__.remote()
        self.queue.put(ray.get(result_ref))

    def __iter__(self):
        # queue.Queue has no pop(); hand the buffered result to the caller with get().
        yield self.queue.get()
```
_No response_
### Use case
_No response_ | open | 2025-03-15T02:33:04Z | 2025-03-17T21:47:50Z | https://github.com/ray-project/ray/issues/51393 | [
"enhancement"
] | Jay-ju | 1 |
ray-project/ray | machine-learning | 50,938 | Release test long_running_many_tasks_serialized_ids.aws failed | Release test **long_running_many_tasks_serialized_ids.aws** failed. See https://buildkite.com/ray-project/release/builds/34295#01954650-a064-46ce-b8d2-cf86fedefe0f for more details.
Managed by OSS Test Policy | closed | 2025-02-27T08:08:07Z | 2025-02-28T06:09:27Z | https://github.com/ray-project/ray/issues/50938 | [
"bug",
"P0",
"triage",
"core",
"release-test",
"ray-test-bot",
"weekly-release-blocker",
"stability"
] | can-anyscale | 1 |
NVIDIA/pix2pixHD | computer-vision | 24 | Do you train global and local generator separately or together? | Hey,
While in the paper you mention that the global and local generator are trained separately before fine tuning them together, in the default settings of the code I see both of them trained together.
Can you clarify a bit on that ? For eg. if I launch train_512.sh, would they be trained together from the start ?
Thanks in advance
| closed | 2018-03-06T15:52:45Z | 2018-06-28T21:25:20Z | https://github.com/NVIDIA/pix2pixHD/issues/24 | [] | PraveerSINGH | 3 |
geopandas/geopandas | pandas | 2,502 | BUG: OutOfBoundsDatetime on read of faulty datetimes | - [x ] I have checked that this issue has not already been reported.
- [ x] I have confirmed this bug exists on the latest version of geopandas.
- [x ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
#### Code Sample, a copy-pastable example
Expected behaviour converting 'fully corrupt' datetime to NaT.
```python
import geopandas
from shapely.geometry import Point
import tempfile
# Fully corrupt datetime (row 4)
data = [[1, '1990-02-0300:00:00'],[2, '2002-08-2118:00:00'],[3, '2019-10-3019:00:05'],
[4, '2008-02-2906:30:45'],[5, '9999-99-9900:00:00']]
geom = [Point(1,1),Point(2,2),Point(3,3),Point(4,4),Point(5,5)]
gdf = geopandas.GeoDataFrame(data = data, columns = ['uid', 'date'], crs = None, geometry=geom)
myschema = {'geometry': 'Point', 'properties': {'uid': 'int', 'date':'datetime'}}
with tempfile.TemporaryDirectory() as td:
gdf.to_file(f"{td}/temp.gpkg", driver='GPKG', schema=myschema)
print(geopandas.read_file(f"{td}/temp.gpkg"))
```
Unexpected(?) behaviour erroring 'partially corrupt' datetime:
```python
import geopandas
from shapely.geometry import Point
import tempfile
# Semi corrupt datetime (row 4)
data = [[1, '1990-02-0300:00:00'],[2, '2002-08-2118:00:00'],[3, '2019-10-3019:00:05'],
[4, '2008-02-2906:30:45'],[5, '9999-12-3100:00:00']]
geom = [Point(1,1),Point(2,2),Point(3,3),Point(4,4),Point(5,5)]
gdf = geopandas.GeoDataFrame(data = data, columns = ['uid', 'date'], crs = None, geometry=geom)
myschema = {'geometry': 'Point', 'properties': {'uid': 'int', 'date':'datetime'}}
with tempfile.TemporaryDirectory() as td:
gdf.to_file(f"{td}/temp.gpkg", driver='GPKG', schema=myschema)
print(geopandas.read_file(f"{td}/temp.gpkg"))
```
This second example returns `OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 9999-12-31 00:00:00`.
#### Problem description
I have a geopackage with a datetime column in which some datetimes are arbitrarily broken, e.g. strings such as '9999-12-3100:00:00'. The schema has these set as 'datetime', which means that on read geopandas.io.file applies `pandas.to_datetime()` to turn these strings into datetime objects. This throws an error when they are out of bounds, but curiously only when they are partially out of bounds, as shown in the difference between the two examples above.
There would appear to be a solution here: `pandas.to_datetime()` has an `errors` argument that defaults to `raise`; if this were instead set to `coerce` or `ignore`, the file could be read.
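For what it's worth, the coercing behaviour I have in mind is the standard pandas one:
```python
import pandas as pd

s = pd.Series(["2019-10-3019:00:05", "9999-12-3100:00:00"])
print(pd.to_datetime(s, errors="coerce"))  # unparseable/out-of-bounds values become NaT instead of raising
```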
I suspect if this were desirable it would have to be as a `geopandas.read_file` argument, the field would then have to be subject to further inspection by the user to resolve any datetime issues, or else it would need to be clearly documented as not guaranteeing datetime objects on read.
An alternative might be to allow user-defined schemas for read, but that is a more substantial change.
I also accept this might be considered such an edge case that the current behaviour is acceptable.
#### Expected Output
File is read with corrupt datetimes either set to NaT, or ignored and retained as strings.
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.8.5 (default, Mar 2 2021, 09:39:24) [GCC 9.3.0]
executable : /home/user/.cache/pypoetry/virtualenvs/project-YfbsymT8-py3.8/bin/python
machine : Linux-5.15.0-1015-aws-x86_64-with-glibc2.29
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.10.2
GEOS lib : /usr/lib/x86_64-linux-gnu/libgeos_c.so
GDAL : 3.4.1
GDAL data dir: /home/user/.cache/pypoetry/virtualenvs/project-YfbsymT8-py3.8/lib/python3.8/site-packages/fiona/gdal_data
PROJ : 8.2.0
PROJ data dir: /home/user/.cache/pypoetry/virtualenvs/project-YfbsymT8-py3.8/lib/python3.8/site-packages/pyproj/proj_dir/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.11.0
pandas : 1.4.1
fiona : 1.8.21
numpy : 1.23.1
shapely : 1.8.1.post1
rtree : 0.9.7
pyproj : 3.3.0
matplotlib : 3.5.1
mapclassify: 2.4.3
geopy : None
psycopg2 : None
geoalchemy2: None
pyarrow : 4.0.1
pygeos : 0.12.0
</details>
| closed | 2022-07-20T16:14:33Z | 2022-07-24T10:44:26Z | https://github.com/geopandas/geopandas/issues/2502 | [
"bug"
] | danlewis85 | 1 |
skypilot-org/skypilot | data-science | 4,817 | [PythonAPI] SDK returns a dict with string keys for the job id for `sky.job_status` | closed | 2025-02-25T21:47:31Z | 2025-02-28T17:00:47Z | https://github.com/skypilot-org/skypilot/issues/4817 | [
"PythonAPI"
] | romilbhardwaj | 0 |
|
microsoft/MMdnn | tensorflow | 542 | Error when converting PyTorch model to TF model | Platform (like ubuntu 16.04/win10): Ubuntu16.04
Python version: 3.6.7
Source framework with version (like Tensorflow 1.4.1 with GPU): PyTorch 0.4.0
Destination framework with version (like CNTK 2.3 with GPU): TensorFlow 1.12
Pre-trained model path (webpath or webdisk path): Customed ResNet-50
Running scripts:
```
mmconvert -sf pytorch -in ../../model/model_full_1_8592_8663.pkl -df tensorflow --inputShape=3,320,256 -om sppe_tf
```
----
I made some modifications to ResNet-50 and encountered the following error.
```
Traceback (most recent call last):
File "/home/solomon/.conda/envs/pytorch040/lib/python3.6/site-packages/mmdnn/conversion/pytorch/pytorch_parser.py", line 74, in __init__
model = torch.load(model_file_name)
File "/home/solomon/.conda/envs/pytorch040/lib/python3.6/site-packages/torch/serialization.py", line 303, in load
return _load(f, map_location, pickle_module)
File "/home/solomon/.conda/envs/pytorch040/lib/python3.6/site-packages/torch/serialization.py", line 469, in _load
result = unpickler.load()
AttributeError: Can't get attribute 'FastPose_SE' on <module '__main__' from '/home/solomon/.conda/envs/pytorch040/bin/mmconvert'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/solomon/.conda/envs/pytorch040/bin/mmconvert", line 11, in <module>
sys.exit(_main())
File "/home/solomon/.conda/envs/pytorch040/lib/python3.6/site-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
ret = convertToIR._convert(ir_args)
File "/home/solomon/.conda/envs/pytorch040/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 92, in _convert
parser = PytorchParser(args.network, inputshape[0])
File "/home/solomon/.conda/envs/pytorch040/lib/python3.6/site-packages/mmdnn/conversion/pytorch/pytorch_parser.py", line 76, in __init__
model = torch.load(model_file_name, map_location='cpu')
File "/home/solomon/.conda/envs/pytorch040/lib/python3.6/site-packages/torch/serialization.py", line 303, in load
return _load(f, map_location, pickle_module)
File "/home/solomon/.conda/envs/pytorch040/lib/python3.6/site-packages/torch/serialization.py", line 469, in _load
result = unpickler.load()
AttributeError: Can't get attribute 'FastPose_SE' on <module '__main__' from '/home/solomon/.conda/envs/pytorch040/bin/mmconvert'>
``` | closed | 2019-01-03T01:24:16Z | 2019-01-03T12:25:36Z | https://github.com/microsoft/MMdnn/issues/542 | [] | ShuaiHuang | 2 |
pallets-eco/flask-wtf | flask | 360 | Broken link in documentation for warning against easy_install | I was reading through the [documentation's instructions about installation this morning](https://flask-wtf.readthedocs.io/en/stable/install.html) and the link about easy_install, [shouldn't do that,](https://python-packaging-user-guide.readthedocs.io/en/latest/pip_easy_install/) is broken.
I can see that the broken link has already been addressed in a past [PR](https://github.com/lepture/flask-wtf/pull/122) and [issue](https://github.com/lepture/flask-wtf/issues/350) however, it remains up on the documentation. Is there an estimate of when the link will be fixed in the documentation? Thank you! | closed | 2019-02-26T16:49:28Z | 2021-05-26T00:55:03Z | https://github.com/pallets-eco/flask-wtf/issues/360 | [] | McEileen | 1 |
PrefectHQ/prefect | automation | 16,724 | Prefect UI not working with Server API Auth (username:password) | I'm trying to implement Authentication for Prefect Open Source while hosting the server locally.
My docker-compose for Prefect server looks like this:
```
services:
db:
image: postgres:latest
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=yourTopSecretPassword
- POSTGRES_DB=prefect
ports:
- "5432:5432"
volumes:
- db:/var/lib/postgresql/data
prefect:
image: prefecthq/prefect:3-latest
command: prefect server start --host 0.0.0.0
environment:
- PREFECT_LOGGING_LEVEL=DEBUG
- PREFECT_API_DATABASE_CONNECTION_URL=postgresql+asyncpg://postgres:yourTopSecretPassword@db:5432/prefect
- PREFECT_SERVER_API_AUTH_STRING="admin:admin"
ports:
- "4200:4200"
depends_on:
- db
```
This brings up the Prefect UI credential page, but as soon as the page loads I get a red toast message saying `Authentication failed`. It only asks for a password, and no matter what I put in the password field, it logs in but still shows the toast message.
Please let me know if I'm doing anything wrong, or point me to any documentation for the above setup.
Attaching screenshot here:
<img width="1106" alt="image" src="https://github.com/user-attachments/assets/bc09718d-a466-43bd-b69a-e0775bd16e6c" />
Ref: https://github.com/PrefectHQ/prefect/issues/2238#issuecomment-2581153637_
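One thing that may be worth ruling out (a guess, not a confirmed diagnosis): with the list form of `environment:` in docker-compose, the surrounding double quotes in `PREFECT_SERVER_API_AUTH_STRING="admin:admin"` are typically kept as literal characters, so the server would be comparing against `"admin:admin"` including the quotes. A quick check from inside the running container:
```python
# Minimal sanity check, run inside the prefect container (e.g. via `docker compose exec`).
# If the output includes the quote characters, drop them in the compose file or switch to
# the mapping form of `environment:`.
import os

print(repr(os.environ.get("PREFECT_SERVER_API_AUTH_STRING")))
```
If that prints `'"admin:admin"'`, removing the quotes from the compose entry would be the first thing to try before digging further.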
| closed | 2025-01-15T06:38:37Z | 2025-02-05T17:30:32Z | https://github.com/PrefectHQ/prefect/issues/16724 | [
"bug",
"ui"
] | shah1dk | 8 |
pallets/flask | flask | 5,155 | FLASK_APP is unavailable when FlaskGroup is not used | When a CLI group is created with [flask.cli.AppGroup](https://github.com/pallets/flask/blob/main/src/flask/cli.py#L362) and used as the entry point instead of the default [flask.cli.FlaskGroup](https://github.com/pallets/flask/blob/main/src/flask/cli.py#L481), [flask.cli.ScriptInfo](https://github.com/pallets/flask/blob/main/src/flask/cli.py#L266) fails to locate the application via the `FLASK_APP` environment variable and reports this error message:
```
Error: Could not locate a Flask application. Use the 'flask --app' option, 'FLASK_APP' environment variable, or a 'wsgi.py' or 'app.py' file in the current directory.
```
Here is a simple example of executable file `main.py`:
```python
#!/usr/bin/env python
from os import environ
import click
from flask import Flask
from flask.cli import AppGroup
cli = AppGroup("main")
@cli.command()
def test():
click.echo("Test command")
def create_app() -> Flask:
app = Flask(__name__)
app.cli.add_command(cli)
return app
if __name__ == "__main__":
environ.setdefault("FLASK_APP", "main:create_app")
cli()
```
If I run this file with flask<2.2.0:
```
$ ./main.py test
Test command
```
And with flask>=2.2.0:
```
$ ./main.py test
Usage: main.py test [OPTIONS]
Try 'main.py test --help' for help.
Error: Could not locate a Flask application. Use the 'flask --app' option, 'FLASK_APP' environment variable, or a 'wsgi.py' or 'app.py' file in the current directory.
```
I think this error is caused by [this](https://github.com/pallets/flask/commit/99fa3c36abc03cd5b3407df34dce74e879ea377a) commit. Before 2.2.0, ScriptInfo located `app_import_path` automatically from the `FLASK_APP` environment variable; after 2.2.0 it expects that value to be provided by the `--app` option of FlaskGroup, which is what reads `FLASK_APP`.
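For anyone hitting the same thing, a minimal workaround sketch that keeps a single-file executable working on Flask >= 2.2 is to make `FlaskGroup` the entry point and hand it the factory directly, so `ScriptInfo` never has to consult `FLASK_APP` (an illustration adapted from the example above, not a proposed change to Flask):
```python
#!/usr/bin/env python
import click
from flask import Flask
from flask.cli import AppGroup, FlaskGroup

commands = AppGroup("main")

@commands.command()
def test():
    click.echo("Test command")

def create_app() -> Flask:
    app = Flask(__name__)
    app.cli.add_command(commands)
    return app

# FlaskGroup receives the factory directly, so the app is located without FLASK_APP.
cli = FlaskGroup(create_app=create_app)

if __name__ == "__main__":
    cli()
```
With this layout the command is invoked as `./main.py main test`, since the `AppGroup` is registered as a nested group on the application's CLI.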
Environment:
- Python version: 3.11
- Flask version: >=2.2.0
| closed | 2023-06-07T15:12:51Z | 2023-06-07T16:11:51Z | https://github.com/pallets/flask/issues/5155 | [] | TitaniumHocker | 4 |
d2l-ai/d2l-en | data-science | 2,308 | Add chapter on diffusion models | GANs are the most advanced topic currently discussed in d2l.ai. However, diffusion models have taken the image generation mantle from GANs. Holding up GANs as the final chapter of the book and not discussing diffusion models at all feels like a clear indication of dated content. Additionally, fast.ai just announced that they will be adding diffusion models to their intro curriculum, as I imagine will many other intro courses. Want to make sure there's at least a discussion about this here if the content isn't already in progress or roadmapped. | open | 2022-09-16T21:41:12Z | 2023-05-15T13:48:32Z | https://github.com/d2l-ai/d2l-en/issues/2308 | [
"feature request"
] | dmarx | 4 |
jschneier/django-storages | django | 1,098 | How to connect using DefaultAzureCredential from azure.identity? | Hello,
I am STRUGGLING with this. I have a backend set up, but we're trying to use `credential = DefaultAzureCredential()` from `azure.identity` to authenticate ourselves.
However, we cannot seem to get our Django app to connect to the container.
Is it even possible to connect to blob storage with `DefaultAzureCredential()`?
This is our backend.py...
```from django.conf import settings
from storages.backends.azure_storage import AzureStorage
class AzureMediaStorage(AzureStorage):
account_name = settings.AZURE_ACCOUNT_NAME
account_key = settings.AZURE_STORAGE_KEY
azure_container = settings.AZURE_MEDIA_CONTAINER
expiration_secs = None
overwrite_files = True
class AzureStaticStorage(AzureStorage):
account_name = settings.AZURE_ACCOUNT_NAME
account_key = settings.AZURE_STORAGE_KEY
azure_container = settings.AZURE_STATIC_CONTAINER
expiration_secs = None
```
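From the django-storages docs it looks like `AzureStorage` can be given a `token_credential` instead of an account key, so I was planning to try something along these lines (untested sketch on our side, same class layout as above):
```python
# Untested sketch: authenticate the storage backend with DefaultAzureCredential
# instead of an account key.
from azure.identity import DefaultAzureCredential
from django.conf import settings
from storages.backends.azure_storage import AzureStorage

class AzureMediaStorage(AzureStorage):
    account_name = settings.AZURE_ACCOUNT_NAME
    token_credential = DefaultAzureCredential()  # replaces account_key
    azure_container = settings.AZURE_MEDIA_CONTAINER
    expiration_secs = None
    overwrite_files = True
```
Is that the intended way to wire it up, or does the backend still require `account_key`?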
I'm also wondering if we have the wrong settings in our settings.py.
```
AZURE_CUSTOM_DOMAIN = f'{AZURE_ACCOUNT_NAME}.blob.core.windows.net'
blobClient = BlobServiceClient(AZURE_CUSTOM_DOMAIN, credential=credential)
staticClient = ContainerClient(AZURE_CUSTOM_DOMAIN, container_name='static', credential=credential).url
mediaClient = ContainerClient(AZURE_CUSTOM_DOMAIN, container_name='media', credential=credential).url
DEFAULT_FILE_STORAGE = blobClient.get_container_client('media')
STATICFILES_STORAGE = blobClient.get_container_client('static')
# TODO --> If this change does not work, change the default_file_storage and the staticfiles_storage to url
# STATIC_ROOT = os.path.join(BASE_DIR, 'staticroot')
# STATIC_URL = '/static/'
STATIC_URL = staticClient
MEDIA_URL = mediaClient
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
# any static paths you want to publish
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'biqboards', 'static')
]
```
I fully believe this is an us problem, but I'm stuck and not sure where to go from here. | closed | 2021-12-13T16:58:07Z | 2023-09-05T00:13:04Z | https://github.com/jschneier/django-storages/issues/1098 | [] | TimothyMalahy | 1 |
autokey/autokey | automation | 848 | Shortcuts not working in GTK4 + LibAdwaita apps | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Bug
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [X] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [X] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
GTK4, LibAdwaita
### Which Linux distribution did you use?
Pop!_OS 22.04 (Ubuntu-based)
### Which AutoKey GUI did you use?
GTK
### Which AutoKey version did you use?
0.95.10-2
### How did you install AutoKey?
Distribution's repository
### Can you briefly describe the issue?
Shortcuts are not working inside GTK4 + LibAdwaita apps. I noticed that the text cursor blinks if I press the shortcuts, but nothing else seems to happen.
No messages appeared in the terminal when running one of the apps I tested through it.
Not sure if it's a matter of AutoKey needing to support GTK4 or a GTK4 issue, so I'm reporting it here first.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
(assuming you already have AutoKey installed and configured with shortcuts)
1. Open a GTK4 + LibAdwaita app (such as Gnome Text Editor, Flatseal v2.0 etc.)
2. Press a shortcut you have configured in AutoKey
3. See how it doesn't work
### What should have happened?
The shortcut should have worked.
### What actually happened?
Nothing happened, except that pressing the shortcut multiple times caused the text cursor in text fields to blink.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
The only apparently unusual message I saw was the following, but it happened when focused on other apps as well, not only the ones where the issue being reported here happened.
```bash
2023-05-02 15:31:46,648 ERROR - service - Ignored locking error in handle_keypress
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/autokey/service.py", line 207, in __tryReleaseLock
self.configManager.lock.release()
```
### Anything else?
Tested with both Flatpaks and native packages (.deb). | open | 2023-05-02T18:38:12Z | 2023-07-14T23:30:42Z | https://github.com/autokey/autokey/issues/848 | [
"installation/configuration"
] | hyuri | 32 |
mckinsey/vizro | plotly | 99 | Have a component like Plotly Textarea to get text input from the user. | ### What's the problem this feature will solve?
Enabling users to input SQL queries for data retrieval can significantly enhance the utility of data connectors. This feature would allow for the generation of dynamic dashboards that can be customized according to user-defined queries as texts.
### Describe the solution you'd like
The following functionality from https://dash.plotly.com/dash-core-components/textarea#textarea-properties will suit text-based input.
```python
from dash import Dash, dcc, html, Input, Output, callback
app = Dash(__name__)
app.layout = html.Div([
dcc.Textarea(
id='textarea-example',
value='Textarea content initialized\nwith multiple lines of text',
style={'width': '100%', 'height': 300},
),
html.Div(id='textarea-example-output', style={'whiteSpace': 'pre-line'})
])
@callback(
Output('textarea-example-output', 'children'),
Input('textarea-example', 'value')
)
def update_output(value):
    return 'You have entered: \n{}'.format(value)
```
### Alternative Solutions
A different approach would be to have dropdown menus where the user could select the list of tables and filters and we generate the query in the backend.
### Additional context
I was thinking of implementing a component like the following. I haven't tested it yet, but will work on such a solution.
```python
from typing import List, Optional
from dash import ClientsideFunction, Input, Output, State, clientside_callback, dcc, html
from pydantic import Field, validator
from vizro.models import Action, VizroBaseModel
from vizro.models._action._actions_chain import _action_validator_factory
from vizro.models._models_utils import _log_call
class Textarea(VizroBaseModel):
"""Textarea component for Vizro.
Can be provided to [`Filter`][vizro.models.Filter] or
[`Parameter`][vizro.models.Parameter]. Based on the underlying
[`dcc.Textarea`](https://dash.plotly.com/dash-core-components/textarea).
Args:
value (Optional[str]): Default value for textarea. Defaults to `None`.
title (Optional[str]): Title to be displayed. Defaults to `None`.
actions (List[Action]): See [`Action`][vizro.models.Action]. Defaults to `[]`.
"""
value: Optional[str] = Field(None, description="Default value for textarea.")
title: Optional[str] = Field(None, description="Title to be displayed.")
actions: List[Action] = []
# Validator for actions, if needed
_set_actions = _action_validator_factory("value")
@_log_call
def build(self):
output = [Output(f"{self.id}_output", "children")]
inputs = [Input(f"{self.id}_textarea", "value")]
clientside_callback(
ClientsideFunction(namespace="clientside", function_name="update_textarea_output"),
output=output,
inputs=inputs,
)
return html.Div(
[
html.P(self.title) if self.title else None,
dcc.Textarea(
id=f"{self.id}_textarea",
value=self.value,
style={'width': '100%', 'height': 300},
),
html.Div(
id=f"{self.id}_output",
style={'whiteSpace': 'pre-line'}
),
],
className="textarea_container",
id=f"{self.id}_outer",
        )
```
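I imagine it would be used roughly like this (again untested, and assuming the usual custom-component registration pattern):
```python
# Hypothetical usage sketch for the Textarea class above; names are illustrative only.
import vizro.models as vm
from vizro import Vizro

vm.Page.add_type("components", Textarea)

page = vm.Page(
    title="SQL input",
    components=[Textarea(title="Enter a SQL query", value="SELECT 1")],
)

Vizro().build(vm.Dashboard(pages=[page])).run()
```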
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2023-10-06T05:16:26Z | 2024-07-09T15:09:00Z | https://github.com/mckinsey/vizro/issues/99 | [
"Custom Components :rocket:"
] | farshidbalan | 1 |
MaartenGr/BERTopic | nlp | 1,401 | Determining optimal range of threshold values in some techniques like outlier reduction or zero shot CLF | There are many good techniques available in BERTopic which is helpful. However, it would be really great if we can get some conceptual idea about optimizing values of these very crucial parameter values.
For example, we have `threshold` in [outlier reduction](https://maartengr.github.io/BERTopic/api/bertopic.html#bertopic._bertopic.BERTopic.reduce_outliers). I am using BERTopic for various datasets and they are quite different in number of documents and size of each document. I have one dataset with 34K documents, another has about 1K. I have observed that for first dataset I have to set `threshold=0.03` and for another it is `threshold=0.30`.
How to go about finding approximately good value or range for such parameters?
I have the same question for the `threshold` used in zero-shot classification. By default, it's 0.8, but I had to set it to 0.4 to actually see some classification happening.
https://maartengr.github.io/BERTopic/api/representation/zeroshot.html
I am aware that if `calculate_probabilities=True` is set then we get topic-doc probability matrix but not sure how it can be useful here. Is there any median probability or something like that which we can be used as a reference point for approximate range of `threshold` value? | open | 2023-07-11T18:43:56Z | 2023-07-12T05:24:25Z | https://github.com/MaartenGr/BERTopic/issues/1401 | [] | Pratik--Patel | 1 |
OthersideAI/self-operating-computer | automation | 104 | Does GPT4 vision have this ability? | Here is a response from GPT4 when I try to prompt it through the chat interface. It doesn't seem to actually be parsing the grid on the image or attempting to interpolate where to place the cursor. It seems to be guessing based on training data or some form of documentation about browser layout.
_Given the standard layout of a web browser and the typical position of the address bar at the top, to get to the address bar in Google Chrome, one would usually click near the top of the browser window. In this case, with the grid overlay indicating position percentages, the address bar would typically be around the "50%" mark on the 'x' axis and somewhere between "0%" to "10%" on the 'y' axis.
However, since I cannot directly interpret the image to provide precise coordinates due to the limitations of my current capabilities, the description provided will be an estimate. If you need the exact coordinates, you would measure from the top of the browser window to the address bar and then calculate the percentage that this represents of the overall screen height.
Assuming the address bar is at the standard position, the response would be:
CLICK {{ "x": "50%", "y": "5%", "description": "Click: Chrome Address Bar", "reason": "To enter a URL and navigate to a website" }}
This is based on a common browser layout and not the specific image, as I cannot process the image to give precise coordinates._ | closed | 2023-12-12T23:52:05Z | 2023-12-21T00:47:29Z | https://github.com/OthersideAI/self-operating-computer/issues/104 | [] | ahsin-s | 1 |
kevlened/pytest-parallel | pytest | 26 | session-scope fixtures aren't honored | Similar to what happens with `pytest-xdist`: https://github.com/pytest-dev/pytest-xdist/issues/271 | open | 2018-11-29T06:45:18Z | 2022-06-16T08:06:15Z | https://github.com/kevlened/pytest-parallel/issues/26 | [
"bug"
] | hectorv | 5 |
PaddlePaddle/PaddleHub | nlp | 1,467 | Error when using the super-resolution model: ModuleNotFoundError: No module named 'realsr.module'; 'realsr' is not a package | Error when using the super-resolution model: ModuleNotFoundError: No module named 'realsr.module'; 'realsr' is not a package
I also tried manually deleting the cache directory and updating the paddlehub version, but it still does not work.
- Code
```
'''
realsr.py
'''
import paddle
import os
import argparse
import cv2
import paddlehub as hub
def main(args):
image_path = 'input.png'
image = cv2.imread(image_path)
module = hub.Module(name="realsr")
res = module.run_image(image)
return
if __name__ == '__main__':
parser = argparse.ArgumentParser()
args = parser.parse_args()
main(args)
print('exit')
```
- System environment: macOS
- Python version: Python 3.8.10
- paddlehub 2.1.0 /Users/max/Documents/Project/Demo/python/pd/PaddleHub
- paddlepaddle 2.0.0
- Detailed log
```
(pd) MaxdeMac-mini:pd max$ python realsr.py
/Users/max/Documents/tools/miniconda3/envs/pd/lib/python3.8/site-packages/paddle/fluid/layers/utils.py:26: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
def convert_to_list(value, n, name, dtype=np.int):
/Users/max/Documents/tools/miniconda3/envs/pd/lib/python3.8/site-packages/paddle2onnx/onnx_helper/mapping.py:42: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
int(TensorProto.STRING): np.dtype(np.object)
/Users/max/Documents/tools/miniconda3/envs/pd/lib/python3.8/site-packages/paddle2onnx/constant/dtypes.py:43: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
np.bool: core.VarDesc.VarType.BOOL,
/Users/max/Documents/tools/miniconda3/envs/pd/lib/python3.8/site-packages/paddle2onnx/constant/dtypes.py:44: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
core.VarDesc.VarType.FP32: np.float,
/Users/max/Documents/tools/miniconda3/envs/pd/lib/python3.8/site-packages/paddle2onnx/constant/dtypes.py:49: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
core.VarDesc.VarType.BOOL: np.bool
[2021-06-08 17:23:52,521] [ WARNING] - An error was encountered while loading realsr. Detailed error information can be found in the /Users/max/.paddlehub/log/20210608.log.
[2021-06-08 17:23:52,525] [ WARNING] - An error was encountered while loading realsr. Detailed error information can be found in the /Users/max/.paddlehub/log/20210608.log.
Download https://bj.bcebos.com/paddlehub/paddlehub_dev/realsr.tar.gz
[##################################################] 100.00%
Decompress /Users/max/.paddlehub/tmp/tmpq_d8ji2h/realsr.tar.gz
[##################################################] 100.00%
[2021-06-08 17:25:35,841] [ INFO] - Successfully uninstalled realsr
Traceback (most recent call last):
File "realsr.py", line 32, in <module>
main(args)
File "realsr.py", line 25, in main
module = hub.Module(name="realsr")
File "/Users/max/Documents/Project/Demo/python/pd/PaddleHub/paddlehub/module/module.py", line 388, in __new__
module = cls.init_with_name(
File "/Users/max/Documents/Project/Demo/python/pd/PaddleHub/paddlehub/module/module.py", line 487, in init_with_name
user_module_cls = manager.install(
File "/Users/max/Documents/Project/Demo/python/pd/PaddleHub/paddlehub/module/manager.py", line 190, in install
return self._install_from_name(name, version, ignore_env_mismatch)
File "/Users/max/Documents/Project/Demo/python/pd/PaddleHub/paddlehub/module/manager.py", line 265, in _install_from_name
return self._install_from_url(item['url'])
File "/Users/max/Documents/Project/Demo/python/pd/PaddleHub/paddlehub/module/manager.py", line 258, in _install_from_url
return self._install_from_archive(file)
File "/Users/max/Documents/Project/Demo/python/pd/PaddleHub/paddlehub/module/manager.py", line 380, in _install_from_archive
return self._install_from_directory(directory)
File "/Users/max/Documents/Project/Demo/python/pd/PaddleHub/paddlehub/module/manager.py", line 364, in _install_from_directory
hub_module_cls = HubModule.load(self._get_normalized_path(module_info.name))
File "/Users/max/Documents/Project/Demo/python/pd/PaddleHub/paddlehub/module/module.py", line 418, in load
py_module = utils.load_py_module(dirname, '{}.module'.format(basename))
File "/Users/max/Documents/Project/Demo/python/pd/PaddleHub/paddlehub/utils/utils.py", line 250, in load_py_module
py_module = importlib.import_module(py_module_name)
File "/Users/max/Documents/tools/miniconda3/envs/pd/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 970, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'realsr.module'; 'realsr' is not a package
``` | closed | 2021-06-08T09:41:52Z | 2021-06-09T03:29:06Z | https://github.com/PaddlePaddle/PaddleHub/issues/1467 | [
"cv"
] | KingsleyYau | 2 |
microsoft/nlp-recipes | nlp | 495 | [ASK] [tc_multi_languages_transformers.ipynb] temporary data directory and cache directory are not deleted after the notebook run | ### Description
The temporary directories should be deleted after the notebook run is finished.
### Other Comments
consider https://security.openstack.org/guidelines/dg_using-temporary-files-securely.html | open | 2019-11-25T21:51:23Z | 2019-12-05T17:01:25Z | https://github.com/microsoft/nlp-recipes/issues/495 | [
"bug"
] | daden-ms | 9 |
strawberry-graphql/strawberry | graphql | 3,655 | multipart upload struggle | I am trying to make the file upload work and have had no luck yet.
I went back to the example at https://strawberry.rocks/docs/guides/file-upload#sending-file-upload-requests
but just copy-pasting the multi-file request from Postman returns "Unsupported content type".
## System Information
- Operating system: macOS sequoia
- Strawberry version (if applicable):
- latest
## Additional Context
Python code:
```python
@strawberry.mutation
def read_files(self, files: List[Upload]) -> List[str]:
print(f"Received read_files mutation. Number of files: {len(files)}")
contents = []
for file in files:
content = file.read().decode("utf-8")
contents.append(content)
        return contents
```
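One thing that might be worth checking (a guess based on recent Strawberry releases): multipart uploads now have to be explicitly enabled on the view/router, otherwise multipart POSTs are rejected as an unsupported content type. A minimal ASGI/FastAPI sketch with made-up wiring, since the report does not say which integration is in use:
```python
# Sketch only: assumes a FastAPI/ASGI setup; adapt to whichever integration you use.
from typing import List

import strawberry
from fastapi import FastAPI
from strawberry.fastapi import GraphQLRouter
from strawberry.file_uploads import Upload


@strawberry.type
class Query:
    @strawberry.field
    def ping(self) -> str:
        return "pong"


@strawberry.type
class Mutation:
    @strawberry.mutation
    async def read_files(self, files: List[Upload]) -> List[str]:
        # On ASGI integrations the upload object exposes an async read().
        return [(await f.read()).decode("utf-8") for f in files]


schema = strawberry.Schema(query=Query, mutation=Mutation)

app = FastAPI()
# multipart_uploads_enabled defaults to False in recent Strawberry versions.
app.include_router(GraphQLRouter(schema, multipart_uploads_enabled=True), prefix="/graphql")
```
The backslash-escaped `\$` and `\!` inside the single-quoted `--form` arguments of the curl command below are also passed through literally by the shell, which would break the JSON in `operations` once the request gets past the content-type check.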
```bash
curl --location 'localhost:7675/graphql' \
--form 'operations="{ \"query\": \"mutation(\$files: [Upload\!]\!) { readFiles(files: \$files) }\", \"variables\": { \"files\": [null, null] } }"' \
--form 'map="{\"file1\": [\"variables.files.0\"], \"file2\": [\"variables.files.1\"]}"' \
--form 'file1=@"/Users/its/Documents/roll.csv"' \
--form 'file2=@"/Users/its/Documents/dump.csv"'
```
Request Body
operations: "{ "query": "mutation($files: [Upload!]!) { readFiles(files: $files) }", "variables": { "files": [null, null] } }"
map: "{"file1": ["variables.files.0"], "file2": ["variables.files.1"]}"
file1: undefined
file2: undefined
Response Headers
date: Tue, 01 Oct 2024 15:10:58 GMT
server: uvicorn
content-length: 24
content-type: text/plain; charset=utf-8
Response Body
Unsupported content type
response
Unsupported content type | closed | 2024-10-01T15:59:02Z | 2025-03-20T15:56:53Z | https://github.com/strawberry-graphql/strawberry/issues/3655 | [
"bug"
] | itsklimov | 5 |
sinaptik-ai/pandas-ai | pandas | 1,593 | Plot chart failing on windows | Env: Windows 10 Pro
Pandas AI version: 3.0.0b8
Python version : 3.11.9
```python
import pandasai as pai
from pandasai_openai import AzureOpenAI
from pandasai import Agent
#removed key, end point
llm = AzureOpenAI(
api_token="",
azure_endpoint = "",
deployment_name="gpt-4o",
api_version="2024-02-15-preview") # The name of your deployed model
pai.config.set({"llm": llm})
df = pai.read_csv("data/heart.csv")
agent = Agent(df,config={
"llm": llm,
"verbose": True,
"save_charts": True,
"open_charts": False,
"save_charts_path": "/exports/charts/",
})
response = agent.chat("Plot age frequency.")
print(response)
```
When I load one CSV in pandasai and ask it to plot the age frequency, it throws the error below.
It seems to be a path issue on Windows. Please check.
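A possible interim workaround (only a guess: the path in the traceback below ends up containing a literal tab character, which points at backslash-joined path strings) is to pass an absolute, forward-slash chart directory instead of "/exports/charts/":
```python
# Hypothetical workaround sketch; reuses the df and llm objects from the snippet above.
from pathlib import Path

charts_dir = Path("exports/charts").resolve()
charts_dir.mkdir(parents=True, exist_ok=True)

agent = Agent(df, config={
    "llm": llm,
    "verbose": True,
    "save_charts": True,
    "open_charts": False,
    "save_charts_path": charts_dir.as_posix(),  # e.g. 'C:/Users/.../exports/charts'
})
```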
**Error stack trace :**
plot_filename = plot_age_frequency()
result = {'type': 'plot', 'value': plot_filename}
2025-02-06 16:39:10 [INFO] Retrying execution (1/3)...
2025-02-06 16:39:10 [INFO] Execution failed with error: Traceback (most recent call last):
File "C:\Users\pulkitme\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pandasai\core\code_execution\code_executor.py", line 29, in execute
exec(code, self._environment)
File "<string>", line 20, in <module>
File "<string>", line 15, in plot_age_frequency
File "C:\Users\pulkitme\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\matplotlib\pyplot.py", line 1023, in savefig
res = fig.savefig(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pulkitme\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\matplotlib\figure.py", line 3378, in savefig
self.canvas.print_figure(fname, **kwargs)
File "C:\Users\pulkitme\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\matplotlib\backend_bases.py", line 2366, in print_figure
result = print_method(
^^^^^^^^^^^^^
File "C:\Users\pulkitme\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\matplotlib\backend_bases.py", line 2232, in <lambda>
print_method = functools.wraps(meth)(lambda *args, **kwargs: meth(
^^^^^
File "C:\Users\pulkitme\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\matplotlib\backends\backend_agg.py", line 509, in print_png
self._print_pil(filename_or_obj, "png", pil_kwargs, metadata)
File "C:\Users\pulkitme\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\matplotlib\backends\backend_agg.py", line 458, in _print_pil
mpl.image.imsave(
File "C:\Users\pulkitme\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\matplotlib\image.py", line 1689, in imsave
image.save(fname, **pil_kwargs)
File "C:\Users\pulkitme\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\PIL\Image.py", line 2563, in save
fp = builtins.open(filename, "w+b")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 22] Invalid argument: 'C:\\pandasai\\exports\\charts\temp_chart.png' | closed | 2025-02-06T11:58:40Z | 2025-02-28T10:45:06Z | https://github.com/sinaptik-ai/pandas-ai/issues/1593 | [] | pulkitmehtawork | 8 |
gee-community/geemap | jupyter | 1,175 | ee_export_image_to_drive | Hi Dr. Wu,
I am not sure if I did it correctly, but the ee_export_image_to_drive() function doesn't seem to work for me.
I ran this code:
```
geemap.ee_export_image_to_drive(img,description='Billa_Satellite_Predictors',folder='export',
region=roi, scale=10, crs=crs)
```
but nothing shows up in my Drive folder, there is no task in the EE task manager, and there is no output from the command.
Thanks,
Daniel | closed | 2022-08-04T03:41:02Z | 2022-08-08T01:47:59Z | https://github.com/gee-community/geemap/issues/1175 | [
"bug"
] | Daniel-Trung-Nguyen | 2 |
polakowo/vectorbt | data-visualization | 548 | vector cannot be installed on my pc . | Hi ,
vectorbt cannot be installed on my PC; my Python version is 3.11.0, and I think that is the issue here.
Are you thinking of updating vectorbt to run on newer versions of Python?

| closed | 2022-12-26T17:40:45Z | 2024-03-16T10:41:51Z | https://github.com/polakowo/vectorbt/issues/548 | [] | ajit1310 | 4 |
ARM-DOE/pyart | data-visualization | 908 | Py-ART bug? | We are using Py-ART for our
project and recently updated to python3. In the nexrad_level2.py, there are the lines
> grep get_msg29 nexrad_level2.py
new_pos = _get_msg29_from_buf(buf, pos, dic)
def _get_msg29_from_buf(pos, dic):
The call passes three arguments but the function accepts only two. All the other get_msg* functions take three arguments, so we modified the last line, and it works fine.
We installed Py-ART using anaconda. We have arm_pyart-1.11.0-py37hc1659b7_0
Sim | closed | 2020-02-19T20:18:50Z | 2020-03-18T19:56:14Z | https://github.com/ARM-DOE/pyart/issues/908 | [] | simaberson | 3 |
OWASP/Nettacker | automation | 121 | Bring ML to OWASP Nettacker | Hello,
As you know, OWASP Nettacker logs every event in the database, so after a while you will have enough data to use for machine learning to find risks more easily. I am not an expert in AI/ML so I cannot say for sure, but I have a few ideas. For instance, for a company with an internal network of 10,000 IPs, it's easy to set a cronjob and analyze the network hourly, but it's a little bit hard to monitor the events! That's where I would like to use ML to create diagrams/charts to monitor the events in the WebUI. To be sure this is possible and the theory is not wrong, I will ask Dr. Vahid Behzadan (@behzadanksu) to guide us in this case and will let you know the updates.
I know we can do more with the data, so I would be glad if you sent me your ideas to work on.
Best Regards. | closed | 2018-04-23T20:38:20Z | 2021-02-02T20:20:32Z | https://github.com/OWASP/Nettacker/issues/121 | [
"enhancement",
"priority"
] | Ali-Razmjoo | 7 |
glumpy/glumpy | numpy | 130 | possible ways to accelerate the rendering | Hi,
I am trying to accelerate the rendering procedure. I render about 10-20 images at a time, with one object in each image, and found it takes about 0.05 sec/image.
Do you have any suggestions to shorten the time? For example, can I use multiple GPUs to render them in parallel, or can I downsample the model?
Do you have any ideas?
Thanks and looking for your response! | open | 2018-01-11T20:51:40Z | 2018-01-13T06:55:17Z | https://github.com/glumpy/glumpy/issues/130 | [] | liyi14 | 7 |
vitalik/django-ninja | django | 760 | More comprehensive documentation for CSRF protection | Hello,
I was trying to switch one of my codebases from my custom auth (that didn't need to use the default csrf protection as I was using custom tokens in headers) to the default Django sessions (now using CSRF protection then). The Ninja CSRF documentation felt a bit too short, as in my opinion:
- A link to the CSRF documentation for Django could be included,
- It could be even better if it included short advice for CORS, as unlike Django, Ninja is primarily made to build APIs that may be fetched from other domains. A link to `django-cors-headers` as the recommended option should be enough.
I would just like your opinion on it first @vitalik : I can make a PR with those additions if it's ok for you, but if you want to keep the existing documentation as-is as implementing CORS is outside the scope of Ninja and already covered by Django, you can just close my issue.
Thanks in advance
| closed | 2023-05-08T17:09:06Z | 2023-11-19T12:47:51Z | https://github.com/vitalik/django-ninja/issues/760 | [] | c4ffein | 3 |
giotto-ai/giotto-tda | scikit-learn | 539 | [CI] Update maynlinux2010 version | **Is your feature request related to a problem? Please describe.**
I'm always frustrated when the CI starts to fail x)
Last week, the CI jobs that build wheels with `manylinux2010` started to fail because CentOS 6 has reached EOL [see](https://github.com/pypa/manylinux/issues/836).
After an issue was raised on the main repository and a fix was merged, a new issue was encountered. This one is now fixed as well, but the problem is that the image becomes less and less stable.
**Describe the solution you'd like**
If it's possible, an easy solution would be to move from `manylinux2010` to `manylinux2014`.
**Describe alternatives you've considered**
Looking for a different solution, I see that `scikit-learn` currently uses [cibuildwheel](https://github.com/joerick/cibuildwheel) to build wheels.
In my opinion we should move to a standard solution if it fits our needs in the CI.
| closed | 2020-12-07T08:49:05Z | 2022-08-18T08:12:41Z | https://github.com/giotto-ai/giotto-tda/issues/539 | [
"enhancement",
"discussion",
"CI"
] | MonkeyBreaker | 1 |
deezer/spleeter | tensorflow | 792 | [Feature] speaker/singer/vocalist separation | ## Description
speaker separation, so when we hear different speakers or singers or vocalists,
every different voice gets a separated audio voice track
## Additional information
| closed | 2022-09-27T02:45:38Z | 2022-10-07T10:33:54Z | https://github.com/deezer/spleeter/issues/792 | [
"enhancement",
"feature"
] | bartman081523 | 2 |
NullArray/AutoSploit | automation | 905 | Divided by zero exception281 | Error: Attempted to divide by zero.281 | closed | 2019-04-19T16:03:11Z | 2019-04-19T16:37:01Z | https://github.com/NullArray/AutoSploit/issues/905 | [] | AutosploitReporter | 0 |
pytorch/pytorch | python | 148,883 | Pytorch2.7+ROCm6.3 is 34.55% slower than Pytorch2.6+ROCm6.2.4 | The same hardware and software environment, only the versions of PyTorch+ROCm are different.
Use ComfyUI to run Hunyuan text to video:
ComfyUI:v0.3.24
ComfyUI plugin: teacache
49frames
480x960
20steps
CPU:i5-7500
GPU:AMD 7900XT 20GB
RAM:32GB
PyTorch2.6+ROCm6.2.4 Time taken: 348 seconds 14.7s/it
The VAE Decode Tiled node (parameters: 128 64 32 8) takes: 55 seconds
PyTorch2.7+ROCm6.3 Time taken: 387 seconds 15.66s/it**(11.21% slower)**
The VAE Decode Tiled node (parameters: 128 64 32 8) takes: 74 seconds**(34.55% slower)**
In addition, if the VAE node parameters are set to 256 64 64 8 (the default parameters for NVIDIA graphics cards), it will take a very long time and seem to be stuck, but the program will not crash. The same situation occurs in both PyTorch 2.6 and 2.7.
I'm sorry I don't know what error message to submit for this discrepancy, but I can cooperate with the test and upload the specified information.
Thank you.
[ComfyUI_running_.json](https://github.com/user-attachments/files/19162936/ComfyUI_running_.json)
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | open | 2025-03-10T12:56:34Z | 2025-03-19T14:14:52Z | https://github.com/pytorch/pytorch/issues/148883 | [
"module: rocm",
"triaged"
] | testbug5577 | 6 |
huggingface/datasets | nlp | 7,289 | Dataset viewer displays wrong statists | ### Describe the bug
In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2` with 94 different classes in total, but the viewer says there are only 83 values. This issue only arises in the `train` split. The total number of values is also 94 in the `test` and `dev` splits, and the viewer reports the correct number for them.
<img width="177" alt="image" src="https://github.com/user-attachments/assets/78d76ef2-fe0e-4fa3-85e0-fb2552813d1c">
### Steps to reproduce the bug
```python3
from datasets import load_dataset
ds = load_dataset('speedcell4/opus-unigram2').unique('lang2')
for key, lang2 in ds.items():
print(key, len(lang2))
```
This script returns the following and shows that the `train` split has 94 unique values in the `lang2` column.
```
train 94
dev 94
test 94
zero 5
```
### Expected behavior
94 in the viewer.
### Environment info
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 8.2.2004 (Core) (x86_64)
GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5)
Clang version: Could not collect
CMake version: version 3.11.4
Libc version: glibc-2.28
Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.85.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7542 32-Core Processor
Stepping: 0
CPU MHz: 3389.114
BogoMIPS: 5789.40
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.1+cu121
[pip3] torchaudio==2.4.1+cu121
[pip3] torchdevice==0.1.1
[pip3] torchglyph==0.3.2
[pip3] torchmetrics==1.5.0
[pip3] torchrua==0.5.1
[pip3] torchvision==0.19.1+cu121
[pip3] triton==3.0.0
[pip3] datasets==3.0.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.4.1+cu121 pypi_0 pypi
[conda] torchaudio 2.4.1+cu121 pypi_0 pypi
[conda] torchdevice 0.1.1 pypi_0 pypi
[conda] torchglyph 0.3.2 pypi_0 pypi
[conda] torchmetrics 1.5.0 pypi_0 pypi
[conda] torchrua 0.5.1 pypi_0 pypi
[conda] torchvision 0.19.1+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | closed | 2024-11-11T03:29:27Z | 2024-11-13T13:02:25Z | https://github.com/huggingface/datasets/issues/7289 | [] | speedcell4 | 1 |
proplot-dev/proplot | data-visualization | 216 | Cannot change barb linewidths |
Hi, Luke:
`barbs` linewidth does not work when the data is a 1d array.

| closed | 2020-08-11T06:14:12Z | 2021-07-03T20:21:30Z | https://github.com/proplot-dev/proplot/issues/216 | [
"support"
] | wangrenz | 3 |
kubeflow/katib | scikit-learn | 1,598 | Can the different Katib trials run from the same volume/model? | /kind question
Question:
We are currently running the pytorch Katib pods by mounting the data and model from a shared NFS volume. Does that cause any issues? Should each trial have its own volume for the model? Or perhaps every worker as well? | closed | 2021-07-30T11:16:08Z | 2022-03-02T11:58:43Z | https://github.com/kubeflow/katib/issues/1598 | [
"kind/question",
"lifecycle/stale"
] | PatrickGhosn | 3 |
pytest-dev/pytest-mock | pytest | 128 | Mocker fixture fails to patch class | I am trying to patch a class within one of my application submodules using the `mocker` fixture from `pytest-mock`. The patch is not successfully applied: when I try `assert MyMock.call_count == 1`, it says that MyMock was called 0 times, even though my function under test invokes the class constructor.
I added a different test using `unittest.mock` instead of the `mocker` fixture and it works as expected.
I put together a small example that replicates the issue; it is located here: https://github.com/richin13/pytest-mock-behavior
I am wondering if there's anything I'm missing when trying to patch a class using the `mocker` fixture, or if this is a bug.
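For reference, here is a self-contained sketch (made-up names, not the code from the linked repo) of the patch-where-it-is-looked-up pattern that `mocker.patch`, like `unittest.mock.patch`, expects:
```python
# test_patch_location.py, runs with pytest and pytest-mock installed.
class ApiClient:
    def get(self, path):
        return {"path": path}


def fetch_data():
    client = ApiClient()  # constructor is invoked inside the function under test
    return client.get("/data")


def test_fetch_data(mocker):
    # Patch the name in the module where it is looked up at call time (here: this module),
    # not the module where the class happens to be defined.
    MyMock = mocker.patch(f"{__name__}.ApiClient")
    fetch_data()
    assert MyMock.call_count == 1
```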
I would appreciate any help. I'm using python 3.7 | closed | 2018-10-31T18:41:46Z | 2018-11-01T14:17:43Z | https://github.com/pytest-dev/pytest-mock/issues/128 | [] | richin13 | 2 |