repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
chaoss/augur | data-visualization | 2,994 | Error running release 0.81.0 in Docker | The latest release seems to build but does not run when built for Docker. The image crashes with this error:
```
File "/opt/venv/lib/python3.9/site-packages/augur/tasks/github/util/github_data_access.py", line 6, in <module>
from keyman.KeyClient import KeyClient
ModuleNotFoundError: No module named 'keyman'
```
I can see a `keyman` dir in the source code that seems new, but it isn't copied into the image - there are a bunch of copies (https://github.com/chaoss/augur/blob/main/docker/backend/Dockerfile#L76-L82) but `keyman` is outside of `/augur`.
Not sure if it needs to be pip-installed or copied in, or both. | closed | 2025-02-13T16:59:07Z | 2025-02-14T19:32:44Z | https://github.com/chaoss/augur/issues/2994 | [] | GregSutcliffe | 4 |
axnsan12/drf-yasg | django | 607 | How to get filter/ordering params for custom action. | Hi,
Is there something I can put in the @swagger_auto_schema decorator so my custom action has the Viewset's filter/ordering fields documented in Swagger, similar to how it is generated automatically for the list endpoint from the ListModelMixin? Right now I'm having to pass them all through manual_parameters.
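For reference, a minimal sketch of what that manual_parameters workaround looks like (the parameter name and description below are illustrative assumptions, not taken from the actual project):
```python
from drf_yasg import openapi
from drf_yasg.utils import swagger_auto_schema
from rest_framework import mixins, viewsets
from rest_framework.decorators import action

# Illustrative query parameter mirroring one of the viewset's ordering/filter fields
ordering_param = openapi.Parameter(
    'ordering', openapi.IN_QUERY, type=openapi.TYPE_STRING,
    description='Which field to use when ordering the results.')

class MyView(viewsets.GenericViewSet, mixins.ListModelMixin):
    @swagger_auto_schema(manual_parameters=[ordering_param])
    @action(methods=['get'], detail=True)
    def my_custom_action(self, request, pk=None):
        ...
```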
ex:
```python
class MyView(viewsets.GenericViewSet, mixins.ListModelMixin):
    queryset = [some queryset]
    serializer_class = [some serializer]
    filter_backends = [DjangoFilterBackend]
    filterset_class = MyFilterSetClass
    ordering_fields = ['fields....']

    @swagger_auto_schema(operation_description='Some custom action',
                         responses={status.HTTP_200_OK: 'Ok',
                                    status.HTTP_404_NOT_FOUND: 'Not found.'})
    @action(methods=['get'], detail=True)
    def my_custom_action(self, request, pk=None):
        queryset = self.filter_queryset(self.get_queryset())
```
| open | 2020-06-29T16:44:49Z | 2025-03-07T12:14:03Z | https://github.com/axnsan12/drf-yasg/issues/607 | [
"triage"
] | adl-asi | 1 |
zappa/Zappa | django | 506 | [Migrated] Failed to generate or install certificate! | Originally from: https://github.com/Miserlou/Zappa/issues/1325 by [brianfrombellevue](https://github.com/brianfrombellevue)
When doing the zappa certify command for the first time I received the following error:
Error registering: 400 {
"type": "urn:acme:error:malformed",
"detail": "Provided agreement URL [https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf] does not match current agreement URL [https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf]",
"status": 400
}
Failed to generate or install certificate! :(
I ran pip install zappa --upgrade and after that the error still occurred. Manually changing the URL myself in the lets_encrypt.py file fixed the issue.
## Your Environment
* Zappa version used: 0.45.1 | closed | 2021-02-20T09:43:38Z | 2024-07-13T08:17:52Z | https://github.com/zappa/Zappa/issues/506 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
lexiforest/curl_cffi | web-scraping | 460 | Differences observed in hash fingerprints for Safari 18 on Mac and iOS | **The question**
I was comparing the curl-cffi impersonate fingerprint with the actual fingerprint of Safari on a Mac and iPhone using https://tls.browserleaks.com/json and observed some discrepancies.
For `safari18` impersonate, the `akamai`, `ja4` and `ja4_o` hash were observed to be different.
For 'safari18_ios' impersonate, the `ja4`, `ja4_o` hash were observed to be different.
Whereas, for `chrome131` and `chrome131_android`, I observe that only the `ja4_o` hash is different.
Is this behavior expected?
**Versions**
If it's related to a specific environment, paste your env info here.
- OS: [MacOS, iOS]
- curl_cffi version [0.8.0b7] | closed | 2024-12-16T06:35:15Z | 2025-02-14T01:48:30Z | https://github.com/lexiforest/curl_cffi/issues/460 | [
"needs more info",
"question"
] | charliedelta02 | 4 |
TencentARC/GFPGAN | pytorch | 12 | Retraining on my own dataset | Hello, if I want to train on my own dataset, do I only need to modify the facial landmarks of the dataset?
Another question: with everything else unchanged, if I increase the number of GPUs to 8 during training, does the learning rate need to be adjusted accordingly? | closed | 2021-07-05T11:31:25Z | 2021-07-06T06:34:48Z | https://github.com/TencentARC/GFPGAN/issues/12 | [] | SimKarras | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 788 | expected_minimum() and Categorical dimensions | It seems like `expected_minumum()` functions is designed only for numeric dimensions. If I use categorical dimension (3 members), I see this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-17-2a2539f0633a> in <module>
1 from skopt import expected_minimum
----> 2 expected_minimum(res_gp)
3 #res_gp.space.bounds
~/miniconda3/envs/work/lib/python3.7/site-packages/skopt/utils.py in expected_minimum(res, n_random_starts, random_state)
238
239 for x0 in xs:
--> 240 r = sp_minimize(func, x0=x0, bounds=res.space.bounds)
241
242 if r.fun < best_fun:
~/miniconda3/envs/work/lib/python3.7/site-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
599 elif meth == 'l-bfgs-b':
600 return _minimize_lbfgsb(fun, x0, args, jac, bounds,
--> 601 callback=callback, **options)
602 elif meth == 'tnc':
603 return _minimize_tnc(fun, x0, args, jac, bounds, callback=callback,
~/miniconda3/envs/work/lib/python3.7/site-packages/scipy/optimize/lbfgsb.py in _minimize_lbfgsb(fun, x0, args, jac, bounds, disp, maxcor, ftol, gtol, eps, maxfun, maxiter, iprint, callback, maxls, **unknown_options)
267 raise ValueError('length of x0 != length of bounds')
268 # unbounded variables must use None, not +-inf, for optimizer to work properly
--> 269 bounds = [(None if l == -np.inf else l, None if u == np.inf else u) for l, u in bounds]
270
271 if disp is not None:
~/miniconda3/envs/work/lib/python3.7/site-packages/scipy/optimize/lbfgsb.py in <listcomp>(.0)
267 raise ValueError('length of x0 != length of bounds')
268 # unbounded variables must use None, not +-inf, for optimizer to work properly
--> 269 bounds = [(None if l == -np.inf else l, None if u == np.inf else u) for l, u in bounds]
270
271 if disp is not None:
ValueError: too many values to unpack (expected 2)
```
This is content of my `res_gp.space.bounds`:
```
[(4, 16),
('none', 'log', 'logdiff'),
(0, 1),
(0, 1),
(0, 1)]
```
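One workaround sketch (assuming the last fitted surrogate in `res_gp.models` can be queried directly, which may not hold in every setup) is to approximate the expected minimum by sampling random points from the space instead of running L-BFGS-B:
```python
import numpy as np

def approx_expected_minimum(res, n_samples=10000, random_state=None):
    # Draw random points from the (possibly categorical) search space
    points = res.space.rvs(n_samples=n_samples, random_state=random_state)
    # Evaluate the last surrogate model in the transformed (numeric) space
    preds = res.models[-1].predict(res.space.transform(points))
    best = int(np.argmin(preds))
    return points[best], preds[best]

x_best, fun_best = approx_expected_minimum(res_gp)
```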
How can I get expected minimum for a space with categorical dimensions? | closed | 2019-09-21T13:39:24Z | 2020-02-11T16:14:55Z | https://github.com/scikit-optimize/scikit-optimize/issues/788 | [] | Arturus | 1 |
pytorch/vision | computer-vision | 8,906 | Setting a complex value to `num_output_channels` argument of `Grayscale()` works | ### 🐛 Describe the bug
[The doc](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.Grayscale.html) of `Grayscale()` says `num_output_channels` parameter is `int` as shown below:
> Parameters:
> num_output_channels ([int](https://docs.python.org/3/library/functions.html#int)) – (1 or 3) number of channels desired for output image
But setting a complex value to `num_output_channels` argument works as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import Grayscale
my_data1= OxfordIIITPet(
root="data",
transform=Grayscale(num_output_channels=1.+0.j)
)
my_data2 = OxfordIIITPet(
root="data",
transform=Grayscale(num_output_channels=3.+0.j)
)
my_data1[0][0]
my_data2[0][0]
```

### Versions
```python
import torchvision
torchvision.__version__ # '0.20.1'
``` | closed | 2025-02-14T23:46:24Z | 2025-02-19T13:41:31Z | https://github.com/pytorch/vision/issues/8906 | [] | hyperkai | 1 |
trevismd/statannotations | seaborn | 146 | Feature request: Lineplot support | It would be great to add support for annotating [Seaborn Lineplots](https://seaborn.pydata.org/generated/seaborn.lineplot.html). Are there any major obstacles for implementing this feature? | open | 2024-03-16T10:50:01Z | 2024-03-16T10:50:01Z | https://github.com/trevismd/statannotations/issues/146 | [] | janikscherer | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,221 | Fix simple typo: addresee -> addressee | # Issue Type
[x] Bug (Typo)
# Steps to Replicate
1. Examine flask_socketio/__init__.py.
2. Search for `addresee`.
# Expected Behaviour
1. Should read `addressee`.
| closed | 2020-03-28T03:05:40Z | 2020-03-28T10:32:50Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1221 | [] | timgates42 | 0 |
pydata/pandas-datareader | pandas | 226 | EDGAR - test issue - stalled build | Hello,
there is an issue with CI because of EDGAR tests
see for example https://travis-ci.org/pydata/pandas-datareader/builds/155920545
```
No output has been received in the last 10 minutes, this potentially indicates a stalled build or something wrong with the build itself.
The build has been terminated
```
Pinging @jtkiley https://github.com/pydata/pandas-datareader/issues/147
Any idea?
Kind regards
| closed | 2016-09-03T17:38:30Z | 2018-01-18T16:28:37Z | https://github.com/pydata/pandas-datareader/issues/226 | [] | femtotrader | 10 |
hpcaitech/ColossalAI | deep-learning | 5,795 | I have searched the existing issues | closed | 2024-06-11T08:54:10Z | 2024-06-11T11:15:05Z | https://github.com/hpcaitech/ColossalAI/issues/5795 | [] | duanjunwen | 0 |
|
gevent/gevent | asyncio | 1,797 | Python Bigquery to_dataframe function is blocked when gunicorn is run with worker class gevent | When I run my flask app with worker-class=gevent on gunicorn, the server blocks.
- gunicorn command
gunicorn app:app --workers=5 --worker-class=gevent --threads=5 --timeout=1800 --log-level=DEBUG
- Source code
```
query = '...'
query_job = bigquery_client.query(query)\
query_job.to_dataframe()# to_dataframe function where block occurs
```
- Source code where a block in the bigquery library occurs (lines 678 to 680 of the google/cloud/bigquery/_pandas_helpers.py file)
```
try:
frame = worker_queue.get(timeout=_PROGRESS_INTERVAL) # this point
yield frame
```
- Python library version
- python=3.7.10
- gunicorn=20.1.0
- gevnet=21.1.2
- eventlet=0.30.2
- google-cloud-bigquery=2.20.0
- google-cloud-bigquery-storage=2.4.0
- google-cloud-core=1.6.0
- pyarrow=4.0.0
The same happens when worker-class is an eventlet. It does not occur when worker-class is gthread or sync.
The block in _pandas_helpers.py is executed inside the following construct; is it a thread-related problem?
```python
    with concurrent.futures.ThreadPoolExecutor(max_workers=total_streams) as pool:
```
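One thing worth checking (a diagnostic sketch only, not a fix; it assumes the check can be added inside the running worker) is whether the gevent worker has monkey-patched the threading/queue modules, since gunicorn's gevent worker class calls `monkey.patch_all()` at startup:
```python
from gevent import monkey

# If these print True, ThreadPoolExecutor threads and worker_queue.get()
# are running on gevent primitives rather than native OS threads.
print(monkey.is_module_patched('threading'))
print(monkey.is_module_patched('queue'))
```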
Why do blocks happen? | open | 2021-06-11T08:20:43Z | 2022-11-19T11:53:18Z | https://github.com/gevent/gevent/issues/1797 | [
"Type: Question"
] | gsroot | 3 |
ageitgey/face_recognition | python | 871 | Maybe you have some problems with the path to the models: open api.py in the folder where the face_recognition module is installed, and check the first lines. | open | 2019-06-30T11:30:36Z | 2019-06-30T11:31:06Z | https://github.com/ageitgey/face_recognition/issues/871 | [] | harshitaagrwl | 0 |
|
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 881 | Regarding Training in Colour spaces | Can anybody please help me with how to train the models in the YIQ, LAB, or HSV colour spaces? I guess that simply reading the files and using BGR2HSV won't suffice, but I am not able to work out what else is needed. What changes should be made, as I am a beginner with this package? | open | 2019-12-17T16:23:44Z | 2019-12-18T21:40:35Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/881 | [] | kanlions | 3 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 96 | Optimize Login Time Delay | The current implementation puts a delay of 35 seconds (why??).
`time.sleep(35) #TODO fix better`
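A minimal sketch of one alternative, assuming the login flow uses Selenium and that some element (the selector below is a made-up placeholder) reliably appears once login has completed:
```python
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# `driver` is assumed to be the active WebDriver instance.
# Waits up to 35 s but returns as soon as the (hypothetical) post-login element appears.
WebDriverWait(driver, 35).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "nav.global-nav"))
)
```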
| closed | 2024-08-28T03:24:33Z | 2024-08-28T16:14:52Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/96 | [] | sanjeethboddi | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 387 | cant run demo_toolbox.py | <h1>i tryed to run demo_toolbox.py (in a termanal and i just get this)<h1>
<p>File "demo_cli.py", line 66, in <module>
encoder.load_model(args.enc_model_fpath)
File "/content/Real-Time-Voice-Cloning-master/encoder/inference.py", line 33, in load_model
checkpoint = torch.load(weights_fpath, _device)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 584, in load
with _open_file_like(f, 'rb') as opened_file:
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 234, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 215, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'encoder/saved_models/pretrained.pt'
</p> | closed | 2020-06-28T21:46:50Z | 2020-06-30T01:21:04Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/387 | [] | shrek231 | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 345 | UNetPlusPlus CenterBlock is not use in the current code | I have checked your code for UNetPlusPlus and it seems that you do not use the centerBlock that you defined.
I could not see any use in any ModuleDict or ModuleList or SequentialModel and neither in the forward method of the UNetPlusPlus decoder.
```python
if center:
    self.center: nn.Module = CenterBlock(head_channels, head_channels, use_batchnorm=use_batchnorm, activation_name=activation_name)
else:
    self.center = nn.Identity()
```
Is this center block still valid? It seems that center=True is used with VGG networks, so it might currently do something wrong with VGG encoders.
"Stale"
] | Eric2Hamel | 2 |
ageitgey/face_recognition | machine-learning | 1,417 | Face | J | closed | 2022-04-10T07:44:29Z | 2022-04-10T07:44:57Z | https://github.com/ageitgey/face_recognition/issues/1417 | [] | Pite4r | 0 |
trevorstephens/gplearn | scikit-learn | 256 | Consider releasing 0.4.2 to PyPi due to test incompatibility with current sklearn | <!-- Thanks for contributing to gplearn!
Please ensure you have taken a look at the contribution guidelines:
https://gplearn.readthedocs.io/en/stable/contributing.html -->
**Describe the bug**
Current PyPi release refers to `sklearn.utils.testing` in the tests, which was moved. **The current `master` seems to have already adressed the issue**, but it is not on PyPi. This leads to a packaging issue for OS maintainers because tests can not be run on build with current `sklearn`.
**Expected behavior**
PyPi should have a minor release with up-to-date dependencies.
**Actual behavior**
` ModuleNotFoundError: No module named 'sklearn.utils.testing'`
@trevorstephens: sorry to ping you directly, but you're the only PyPi maintainer =) | closed | 2022-04-21T09:40:25Z | 2022-05-03T11:21:13Z | https://github.com/trevorstephens/gplearn/issues/256 | [
"dependencies",
"tests / CI"
] | evilmav | 2 |
cvat-ai/cvat | computer-vision | 8,684 | Cannot create a project from a backup | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
_No response_
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
Hello everyone,
I used CVAT on another virtual machine and its version is:
```
Server version: 2.19.0
Core version: 15.2.0
Canvas version: 2.20.9
UI version: 1.66.0
```
After a while I had to move to another virtual machine, so I exported a backup of my whole project and created a project from this backup on my second virtual machine, but I got the following error:

CVAT version on my second machine is:
Server version: 2.22.0
Core version: 15.2.1
Canvas version: 2.20.10
UI version: 1.66.4
Please help me in this problem, so many thanks in advance.
### Environment
_No response_ | closed | 2024-11-12T09:18:47Z | 2024-11-20T07:19:02Z | https://github.com/cvat-ai/cvat/issues/8684 | [
"bug",
"need info"
] | azadehashouri | 6 |
3b1b/manim | python | 1,901 | closed | closed | closed | 2022-11-15T19:57:50Z | 2022-11-16T06:22:47Z | https://github.com/3b1b/manim/issues/1901 | [
"bug"
] | barakasamsara | 0 |
plotly/dash | plotly | 2,907 | Differences in dcc.Store storage_type performance with dash 2.17.1 | Background callback running correctly (loading animations appearing, print statements appearing too, etc) buut when it finishes it errors out (no output is returned) with this error in the console:
```
Failed to execute 'setItem' on 'Storage': Setting the value of 'flag_storage' exceeded the quota.
```
It was resolved by changing the `storage_type` to `'memory'` as per: https://community.plotly.com/t/error-the-quota-has-been-exceeded/26944
**Description by the user:**
> When tested with `storage_type = 'memory'` instead of `'session'`, we don’t get the issue, so I tried to understand more why the issue happened only in the past weeks while the storage as `session` was used for one year on server (Dash Enterprise) and still works in our local machine.
> The only difference is that on the server we recently switched from the 2.16.1 version (that we still use on local machine) to the 2.17.1; if we specify `dash==2.16.1` even with `storage_type='session'` we get no issue, but with 2.17.1 we have it.
I don't have additional information and haven't had the opportunity to try to replicate this. | open | 2024-06-28T08:29:16Z | 2024-08-13T14:19:36Z | https://github.com/plotly/dash/issues/2907 | [
"feature",
"P3"
] | celia-lm | 0 |
pyeve/eve | flask | 1,071 | Is it possible to stop Eve from sending the item query as a string? | I've tried so many things but I can't figure out how to use integer _id fields. I've done what is shown in the Custom ID tutorial and nothing I do will keep Eve from querying MongoDB with the value as a string like:
```
{
    "find" : "m_destination",
    "filter" : {
        "_id" : "2218917881"
    },
    "limit" : 1,
    "singleBatch" : true
}
```
If I could just get it to send
"filter" : {
"_id" : 2218917881
},
i.e. without the quotes it would work, but I'm not sure it is possible and starting to wonder if it is a bug.
Python 3.6.3
Eve 0.8.dev0
```python
class IntegerEncoder(BaseJSONEncoder):
    """
    Encoder for integer _id
    """
    def default(self, obj):  # pylint: disable=E0202
        if isinstance(obj, int):
            return obj
        else:
            return super(IntegerEncoder, self).default(obj)

DOMAIN = {
    'destination': {
        'datasource': {'source': 'm_destination'},
        'item_url': 'regex("[0-9]{1,10}")',
        'schema': {'_id': {'type': 'integer', 'min': 1, 'max': 4294967295}}
    },
```
| closed | 2017-10-10T16:28:57Z | 2018-05-18T16:19:46Z | https://github.com/pyeve/eve/issues/1071 | [
"stale"
] | vpzed | 1 |
kizniche/Mycodo | automation | 887 | No Internet Connection - error when attempting an upgrade | ### Attempted the upgrade and the "no internet connection" displays
I am remotely logged into the system. After checking various items, decided to do the upgrade from 8.8.7 8.8.8.
After clicking on the "Upgrade" menu option the following error displays:
"No internet connection detected. To upgrade Mycodo automatically, you will need an internet connection. Refresh the page when one is connected."
This is interesting as I'm logged in remotely.
### Versions:
- Mycodo Version:8.8.7
- Raspberry Pi Version: 4
Mycodo Version: 8.8.7
Python Version: 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0]
Database Version: 66e27f22b15a
Daemon Status: Running
Daemon Process ID: 795
Daemon RAM Usage: 59.484 MB
Daemon Virtualenv: Yes
Frontend Process ID: 480
Frontend RAM Usage: 110.452 MB
Frontend Virtualenv: Yes
Kernel Information: uname -a
Linux Hydroponics 5.4.51-v7l+ #1333 SMP Mon Aug 10 16:51:40 BST 2020 armv7l GNU/Linux
### Reproducibility
Please list specific setup details that are involved and the steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
### Expected behavior
A clear and concise description of what you expected to happen.
### Screenshots
If applicable, add screenshots to help explain your problem.
### Additional context
Is there anything that should be added to make it easier to address this issue?
| closed | 2020-11-19T20:28:20Z | 2020-11-21T06:55:47Z | https://github.com/kizniche/Mycodo/issues/887 | [] | kpslc | 2 |
miguelgrinberg/Flask-SocketIO | flask | 781 | example of sending binary data using socketio | Can we send any kind of data (image, PDF, audio, video) of any size?
Please give one generic example that explains how to send a file over the socket. | closed | 2018-09-05T21:48:46Z | 2018-09-15T12:05:13Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/781 | [
"question"
] | rupesh2017 | 5 |
dmlc/gluon-cv | computer-vision | 1,325 | About fine-tuning a pretrained faster_rcnn model | Hello,
Thanks again for your example about finetuning an SSD model for training Pikachus. I've followed this tutorial and it works like a charm:
https://gluon-cv.mxnet.io/build/examples_detection/finetune_detection.html
I would like to know if there is any guide or tutorial for doing the same fine-tuning process with a faster_rcnn model, because fine-tuning will be much more efficient than training a model from scratch.
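A minimal sketch of what this might look like, assuming `reset_class` works for the Faster R-CNN models the same way it does for SSD in the linked tutorial (the class list below is just a placeholder):
```python
from gluoncv import model_zoo

classes = ['pikachu']  # placeholder class list
net = model_zoo.get_model('faster_rcnn_resnet50_v1b_voc', pretrained=True)
net.reset_class(classes)
# The training loop from the SSD fine-tuning tutorial would then be adapted
# to Faster R-CNN's training utilities.
```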
Any reply will be appreciated. Best regards. | closed | 2020-05-28T15:09:03Z | 2021-05-22T06:40:47Z | https://github.com/dmlc/gluon-cv/issues/1325 | [
"Stale"
] | mehmetgur | 3 |
maxhumber/gif | matplotlib | 12 | Duration parameter has no effect | Different `gif.save()` parameters seem to have no effect.
Code to reproduce (`gif.save()` call on line 118): https://gist.github.com/ddejohn/fa67039541bdb7e387c403fe32a65eb9
## Different parameters, same gif produced
Here's an [album of gifs](https://imgur.com/a/sAWWpKc) produced with four different sets of parameters:
```python
duration=100, unit="ms", between="frames"
duration=5, unit="ms", between="frames"
duration=1, unit="s", between="startend"
duration=10, unit="s", between="startend"
```
The only apparent difference is file size:
 | closed | 2021-10-16T21:16:16Z | 2023-01-23T14:20:14Z | https://github.com/maxhumber/gif/issues/12 | [] | ddejohn | 3 |
ultralytics/ultralytics | pytorch | 19,196 | Full-resolution sized ram caching (not linked to training size) | ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
From memory, yolov5 used to have a "cache the image on disk/ram" in full resolution.
Here, if the training image size is for example 640px but we use augmentations like zoom/distortion, the lowered resolution (compared to an original of, say, 2048px) will suffer from quality degradation and pixelization after zooming.
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-12T06:57:54Z | 2025-02-12T06:58:23Z | https://github.com/ultralytics/ultralytics/issues/19196 | [
"enhancement"
] | ExtReMLapin | 1 |
coqui-ai/TTS | deep-learning | 3,988 | [Bug] loss is NaN when fine-tuning XTTS_v2! | ### Describe the bug
When fine-tuning XTTS_v2 on LJSpeech following the script (recipes/ljspeech/xtts_v2/train_gpt_xtts.py), the loss is always NaN! So frustrating.
I tried reducing the learning rate, changing the batch size, and changing the DDP config, but in the end none of it worked!
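One minimal way to narrow down where the NaN first appears (a debugging sketch only; it assumes this can be enabled before the trainer starts in the recipe script) is PyTorch's built-in anomaly detection:
```python
import torch

# Raises an error at the first backward op that produces NaN/Inf,
# with a traceback pointing at the offending forward operation.
torch.autograd.set_detect_anomaly(True)
```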
<img width="528" alt="image" src="https://github.com/user-attachments/assets/1a80d3d1-e8fc-40bf-a5f0-78e0cd01b2fe">
### To Reproduce
<img width="801" alt="image" src="https://github.com/user-attachments/assets/73d9fa55-c045-4040-8f89-70b3d27984da">
### Expected behavior
The loss is expected to be normal, but now it is NaN.
### Logs
```shell
> Training Environment:
| > Backend: Accelerate
| > Mixed precision: False
| > Precision: float32
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 28
| > Num. of Torch Threads: 1
| > Torch seed: 1
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
| > Torch TF32 MatMul: False
> Start Tensorboard: tensorboard --logdir=/home/xj_data/liuwenrui/model/TTS/examples/checkpoints/xtts_ljspeech/GPT_XTTS_LJSpeech_FT-September-05-2024_07+56AM-0000000
> Model has 518442047 parameters
> EPOCH: 0/1000
--> /home/xj_data/liuwenrui/model/TTS/examples/checkpoints/xtts_ljspeech/GPT_XTTS_LJSpeech_FT-September-05-2024_07+56AM-0000000
> Filtering invalid eval samples!!
> Total eval samples after filtering: 131
> EVALUATION
| > Synthesizing test sentences.
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
--> EVAL PERFORMANCE
| > avg_loader_time: 0.1370311975479126 (+0)
| > avg_loss_text_ce: nan (+0)
| > avg_loss_mel_ce: nan (+0)
| > avg_loss: nan (+0)
> EPOCH: 1/1000
--> /home/xj_data/liuwenrui/model/TTS/examples/checkpoints/xtts_ljspeech/GPT_XTTS_LJSpeech_FT-September-05-2024_07+56AM-0000000
> Sampling by language: dict_keys(['en'])
> TRAINING (2024-09-05 07:57:04)
--> TIME: 2024-09-05 07:57:06 -- STEP: 0/406 -- GLOBAL_STEP: 0
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.6947 (0.6947054862976074)
| > loader_time: 0.9726 (0.9726102352142334)
--> TIME: 2024-09-05 07:57:34 -- STEP: 50/406 -- GLOBAL_STEP: 50
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5593 (0.5495537853240967)
| > loader_time: 0.0208 (0.025657353401184083)
--> TIME: 2024-09-05 07:58:03 -- STEP: 100/406 -- GLOBAL_STEP: 100
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5421 (0.5507096624374387)
| > loader_time: 0.027 (0.024191355705261236)
--> TIME: 2024-09-05 07:58:32 -- STEP: 150/406 -- GLOBAL_STEP: 150
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5438 (0.5505051151911414)
| > loader_time: 0.0248 (0.023627797762552902)
--> TIME: 2024-09-05 07:59:01 -- STEP: 200/406 -- GLOBAL_STEP: 200
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5429 (0.5503179728984828)
| > loader_time: 0.0192 (0.023366824388504036)
--> TIME: 2024-09-05 07:59:29 -- STEP: 250/406 -- GLOBAL_STEP: 250
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5364 (0.5505284156799314)
| > loader_time: 0.024 (0.023049613952636726)
--> TIME: 2024-09-05 07:59:58 -- STEP: 300/406 -- GLOBAL_STEP: 300
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5382 (0.5508304643630979)
| > loader_time: 0.0228 (0.022726009686787927)
--> TIME: 2024-09-05 08:00:27 -- STEP: 350/406 -- GLOBAL_STEP: 350
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5357 (0.5509397418158392)
| > loader_time: 0.0224 (0.022749708720615953)
--> TIME: 2024-09-05 08:00:56 -- STEP: 400/406 -- GLOBAL_STEP: 400
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5422 (0.5508994698524474)
| > loader_time: 0.0199 (0.022739327549934404)
> EVALUATION
| > Synthesizing test sentences.
--> EVAL PERFORMANCE
| > avg_loader_time: 0.15962785482406616 (+0.022596657276153564)
| > avg_loss_text_ce: nan (+nan)
| > avg_loss_mel_ce: nan (+nan)
| > avg_loss: nan (+nan)
> EPOCH: 2/1000
--> /home/xj_data/liuwenrui/model/TTS/examples/checkpoints/xtts_ljspeech/GPT_XTTS_LJSpeech_FT-September-05-2024_07+56AM-0000000
> TRAINING (2024-09-05 08:01:04)
--> TIME: 2024-09-05 08:01:31 -- STEP: 44/406 -- GLOBAL_STEP: 450
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5423 (0.5685172839598219)
| > loader_time: 0.0218 (0.02611354806206443)
--> TIME: 2024-09-05 08:02:00 -- STEP: 94/406 -- GLOBAL_STEP: 500
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5477 (0.5601014101758914)
| > loader_time: 0.0206 (0.024027073636968085)
--> TIME: 2024-09-05 08:02:29 -- STEP: 144/406 -- GLOBAL_STEP: 550
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5499 (0.5568753977616625)
| > loader_time: 0.0221 (0.023584759897655915)
--> TIME: 2024-09-05 08:02:58 -- STEP: 194/406 -- GLOBAL_STEP: 600
| > loss_text_ce: nan (nan)
| > loss_mel_ce: nan (nan)
| > loss: nan (nan)
| > current_lr: 5e-06
| > step_time: 0.5477 (0.5556149875994809)
| > loader_time: 0.0204 (0.023164422241682858)
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA A100-SXM4-80GB",
"NVIDIA A100-SXM4-80GB"
],
"available": true,
"version": "11.8"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.2+cu118",
"TTS": "0.22.0",
"numpy": "1.26.3"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.14",
"version": "#1 SMP Mon Jul 22 15:34:17 CST 2024"
}
}
```
### Additional context
Such a frustrating experience. I have been trying to train it for 3 days, and the loss is always NaN. | closed | 2024-09-05T00:04:40Z | 2025-01-03T08:48:43Z | https://github.com/coqui-ai/TTS/issues/3988 | [
"bug",
"wontfix"
] | r666ay | 1 |
deezer/spleeter | deep-learning | 683 | bug | ValueError: Can't load save_path when it is None. | open | 2021-11-22T12:39:05Z | 2022-02-18T09:50:03Z | https://github.com/deezer/spleeter/issues/683 | [] | bell-xiong | 2 |
cvat-ai/cvat | tensorflow | 8,319 | Job application | I have applied for a data annotator job and did the practical test, but I still have not received any feedback. Can you give me feedback about my performance in the test? | closed | 2024-08-19T11:20:15Z | 2024-08-19T11:24:11Z | https://github.com/cvat-ai/cvat/issues/8319 | [] | fatmard947 | 0 |
BayesWitnesses/m2cgen | scikit-learn | 74 | memcpy instead of assign_array | For the generated C code, you could use `memcpy` instead of your `assign_array` function. | closed | 2019-03-09T09:32:38Z | 2019-03-13T16:21:47Z | https://github.com/BayesWitnesses/m2cgen/issues/74 | [] | beojan | 2 |
SALib/SALib | numpy | 271 | conda release > 1.1.2 | It seems that the latest versions of SALib are only available on PyPI. Our workflow requires conda so I had to package it on our 'personal' channel. I'll be happy to contribute and deploy it on conda-forge for you instead. | closed | 2019-11-05T07:19:42Z | 2020-10-27T06:55:39Z | https://github.com/SALib/SALib/issues/271 | [
"priority",
"need_help"
] | revesansparole | 8 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 257 | Alpaca 7B Plus seems to perform worse than Alpaca 7B (based only on my own small number of tests) |
### Detailed problem description
Alpaca 7B Plus seems to perform worse than Alpaca 7B (based only on my own small number of tests).
### Run screenshots or logs
Plus model output
```
(llama) sun.weiheng@swh:~/llmodels$ python ./inference_hf.py \
> --base_model merged_alpaca_7b_plus \
> --with_prompt --interactive
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████| 2/2 [00:06<00:00, 3.15s/it]
Vocab of the base model: 49954
Vocab of the tokenizer: 49954
Start inference with interactive mode.
Input:你好
Response: 您好!有什么可以帮助您的吗?
Input:白象方便面麻辣藤椒味250g。这个产品的品牌是什么?
Response: 很抱歉,我无法回答这个问题,因为您提供的信息不足以确定产品的具体名称和生产商。请提供更多详细的信息以便我能够帮助您解决问题。
Input:上面提到的产品是什么?
Response: 抱歉,我无法回答您的问题。您需要提供更多上下文信息或详细描述产品才能得到准确的回答。
Input:介绍一下方便面这种食品
Response: 方便面是一种常见的快餐,由面粉、水和调料制成。它可以快速制作成美味的饭菜或小吃,非常适合忙碌的人们食用。
Input:白象方便面麻辣藤椒味250g。这个产品的规格是多少?
Response: 抱歉,我无法回答您的问题。因为您提供的信息不足以确定产品的具体规格和型号。请提供更多详细的信息或图片以便我能够更好地帮助您解决问
题。
```
7B model output
```
(llama) sun.weiheng@swh:~/llmodels$ python ./inference_hf.py --base_model merged_alpaca_7b --with_prompt --interactive
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████| 2/2 [00:24<00:00, 12.35s/it]
Vocab of the base model: 49954
Vocab of the tokenizer: 49954
Start inference with interactive mode.
Input:你好
Response: 您好!很高兴为您服务。有什么可以帮您的吗?
Input:白象方便面麻辣藤椒味250g。这个产品的品牌是什么?
Response: 该产品是“白象”的,它是一家知名的中国食品公司之一。
Input:上面提到的产品是什么?
Response: 上文中没有提及具体的产品。
Input:介绍一下方便面这种食品
Response: 方便面是一种非常受欢迎的中国快餐,它由米饭、肉类和蔬菜等材料制成。通常是用机器制作而成的,口感松软且有嚼劲。
Input:白象方便面麻辣藤椒味250g。这个产品的规格是多少?
Response: 该产品重量为250克,口味是麻辣藤椒味。
Input:白象方便面麻辣藤椒味250g。这个产品的规格是多少?请用原文里的词回答,不要输出其他内容
Response: 此产品重量为250克
```
### Required checklist
- [x] Which model the issue is about: Alpaca
- [x] Issue type:
- Output quality issue
- [x] Since the related dependencies are updated frequently, please make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [x] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues, and did not find a similar problem or solution
- [x] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
| closed | 2023-05-06T06:50:32Z | 2023-05-06T08:28:23Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/257 | [] | Oliver-whsun | 3 |
QuivrHQ/quivr | api | 3,497 | EDA Question | * EDA Question
* Sample question from data -> LLM to generate intent
* Build intent classifier
* 8 type of question / use cases of rag
* Question correlation in chat
* EDA Chat:
* Chat length stats
* Stats on number of sources returned | closed | 2024-11-25T10:20:13Z | 2024-12-09T14:03:14Z | https://github.com/QuivrHQ/quivr/issues/3497 | [] | linear[bot] | 1 |
babysor/MockingBird | deep-learning | 298 | The loss shown in the plots does not decrease steadily; how do I get the model with the best loss? | Why is it set up so that the model always saves the latest checkpoint instead of saving the one with a lower loss? | closed | 2021-12-28T10:03:17Z | 2021-12-29T03:27:32Z | https://github.com/babysor/MockingBird/issues/298 | [] | hanggun | 1 |
tensorly/tensorly | numpy | 36 | Handling big datasets for Robust PCA | I tried to run Robust PCA on a torch array. The dimensions are around 500000*375. The array can perfectly fit on my GPU, as I ran robust matrix decomposition without any issues. I am not sure why Robust PCA can't fit or would even require 2310 GB of memory. Also, is the memory GPU or CPU memory?
My system has 512 GB of CPU memory and 32 GB of GPU memory
`$ Torch: not enough memory: you tried to allocate 2310GB. Buy new RAM! at /pytorch/torch/lib/TH/THGeneral.c:246` | closed | 2018-02-02T20:59:27Z | 2018-08-29T17:35:22Z | https://github.com/tensorly/tensorly/issues/36 | [] | rtmlp | 13 |
Ehco1996/django-sspanel | django | 184 | Clicking many modules returns a 500; with debug enabled it reports a server error directly | 
Clicking the buttons inside the red box leads to a 500 error page.
Please help me. | closed | 2018-11-07T17:54:51Z | 2018-11-10T00:39:26Z | https://github.com/Ehco1996/django-sspanel/issues/184 | [] | L-X-J | 1 |
serengil/deepface | machine-learning | 639 | running at least one task for all available models in unit tests | [Unit tests](https://github.com/serengil/deepface/blob/master/tests/unit_tests.py) must perform at least one function of verify and represent for all available models. In that way, we can catch bugs early before a release. | closed | 2023-01-27T11:57:48Z | 2023-02-01T19:03:34Z | https://github.com/serengil/deepface/issues/639 | [
"enhancement"
] | serengil | 1 |
serengil/deepface | machine-learning | 579 | Getting weights dir already existing error | Getting error while making docker container up
```
worker-1 | File "/usr/local/lib/python3.8/site-packages/deepface/commons/functions.py", line 56, in initialize_folder
worker-1 | os.makedirs(home+"/.deepface/weights")
worker-1 | File "/usr/local/lib/python3.8/os.py", line 223, in makedirs
worker-1 | mkdir(name, mode)
-worker-1 | FileExistsError: [Errno 17] File exists: '/root/.deepface/weights'
```
So I have to manually remove the directory while running the above script.
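A minimal sketch of the usual guard for this (an assumption about where the fix would go, since it means patching the installed package locally) is to let `os.makedirs` tolerate an existing directory:
```python
import os

home = os.path.expanduser("~")
# exist_ok=True makes this a no-op when the weights directory already exists
os.makedirs(home + "/.deepface/weights", exist_ok=True)
```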
Let me know if anyone knows the solution to this. | closed | 2022-10-22T11:57:15Z | 2023-12-06T18:20:19Z | https://github.com/serengil/deepface/issues/579 | [
"enhancement"
] | KaushikSathvara | 5 |
SCIR-HI/Huatuo-Llama-Med-Chinese | nlp | 6 | The LoRA provided by the project does not reach the reported results, even on the few inference examples given | 
| closed | 2023-04-19T09:09:07Z | 2024-12-18T03:40:22Z | https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese/issues/6 | [] | Zombiessss | 6 |
xinntao/Real-ESRGAN | pytorch | 497 | PIL module ended | The generate_multiscale_DF2K.py script doesn't work anymore because the PIL module is not developed anymore.
| open | 2022-11-12T16:24:56Z | 2022-11-12T16:24:56Z | https://github.com/xinntao/Real-ESRGAN/issues/497 | [] | MedCy1 | 0 |
aiogram/aiogram | asyncio | 1,258 | proper way to stop the bot ( in webhooking) | closed | 2023-08-10T18:10:45Z | 2023-08-10T18:21:52Z | https://github.com/aiogram/aiogram/issues/1258 | [
"bug"
] | pdisk | 0 |
|
quokkaproject/quokka | flask | 5 | 'media' folder doesn't exist | Media folder is referenced, but does not exist.
| closed | 2013-07-19T20:17:18Z | 2015-07-16T02:56:57Z | https://github.com/quokkaproject/quokka/issues/5 | [
"bug"
] | kevinbowrin | 2 |
Miksus/rocketry | pydantic | 54 | It has an app.run() but no app.stop() | **Is your feature request related to a problem? Please describe.**
Sometimes, you would want the scheduler to stop running once a certain condition occurs. For instance, I want to check periodically a certain status returned by some REST API. If that status has met my condition, I would want to stop the scheduler.
**Describe the solution you'd like**
An app.stop() which is the logical opposite of app.run()
**Describe alternatives you've considered**
I could just exit() the code once the condition is met but that won't work for all cases.
| closed | 2022-07-14T02:48:36Z | 2022-07-17T05:01:26Z | https://github.com/Miksus/rocketry/issues/54 | [
"enhancement"
] | emantos | 3 |
apache/airflow | automation | 47,632 | Don't show task duration if we are missing start date | ### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
I noticed that I had a negative duration shown for a task when there isn't a start_date set. We should just show nothing in that instance.

### What you think should happen instead?
_No response_
### How to reproduce
You need a broken instance, so it's probably easier to just null out a start_date manually in the db to replicate the situation.
### Operating System
macos
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-11T15:44:42Z | 2025-03-15T13:01:45Z | https://github.com/apache/airflow/issues/47632 | [
"kind:bug",
"area:core",
"area:UI",
"needs-triage",
"affected_version:3.0.0beta"
] | jedcunningham | 1 |
mljar/mljar-supervised | scikit-learn | 691 | ImportError: cannot import name 'interp' from 'scipy' | On January 20th, 2024 scipy released version 1.12.0 which broke its compatibility with the old scikit-plot version 0.37, used by mljar-supervised. Since then, when importing AutoML (```from supervised.automl import AutoML```) you get the following error:
```
ImportError: cannot import name 'interp' from 'scipy' (/Users/someuser/temp/venv/lib/python3.9/site-packages/scipy/__init__.py)
```
It also happens on ```Python 3.11.7```.
WORKAROUND:
Add to requirements.txt ```scipy==1.11.4```, which installs the previous version of scipy.
@pplonski, long-term I guess replacing ```scikit-plot``` should take place, as it stopped developing on 2018. As a a quick fix for now, I would add the following to requirements.txt
```
scipy>=1.6.1,<=1.11.4
```
instead of ```scipy>=1.6.1```
Thanks!
| closed | 2024-01-21T12:56:49Z | 2024-11-21T11:48:11Z | https://github.com/mljar/mljar-supervised/issues/691 | [] | haim-cohen-moonactive | 17 |
keras-rl/keras-rl | tensorflow | 326 | history not have mean_q | i use duel_dqn_cartpole ,find a problem。
his=dqn.fit(env, nb_steps=1000, visualize=False, verbose=2)
Using his.history, I only find episode_reward, nb_steps and nb_episode_steps.
his.history
{'episode_reward': [-51.53412198149204, 3.981158740171498], 'nb_steps': [487, 974], 'nb_episode_steps': [487, 487]}
but the log has more info:
487/1000: episode: 1, duration: 377.151s, episode steps: 487, steps per second: 1, episode reward: 32.765, mean reward: 0.067 [-3.768, 7.021], mean action: 0.955 [0.000, 2.000], mean observation: 0.012 [-1.623, 5.089], loss: 1.848188, mean_absolute_error: 22.578606, mean_q: 33.592277
How can I find info about mean_q in his.history? Or is there another way?
Thank you!
- [ ] Check that you are up-to-date with the master branch of Keras-RL. You can update with:
`pip install git+git://github.com/keras-rl/keras-rl.git --upgrade --no-deps`
- [ ] Check that you are up-to-date with the master branch of Keras. You can update with:
`pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps`
- [ ] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short). If you report an error, please include the error message and the backtrace.
| closed | 2019-08-07T02:30:12Z | 2019-11-12T04:11:30Z | https://github.com/keras-rl/keras-rl/issues/326 | [
"wontfix"
] | hardy110 | 1 |
pandas-dev/pandas | python | 61,081 | BUG: pd.api.types.infer_dtype on scalar input | ### Context
I was trying to identify data types in columns with mixed data types:
```python
df.map(pd.api.types.infer_dtype)
```
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
pd.api.types.infer_dtype(1)
pd.api.types.infer_dtype(1.0)
pd.api.types.infer_dtype(True)
```
### Issue Description
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[8], line 1
----> 1 pd.api.types.infer_dtype(1)
File lib.pyx:1605, in pandas._libs.lib.infer_dtype()
TypeError: 'int' object is not iterable
```
### Expected Behavior
According to the documentation, pd.api.types.infer_dtype() should accept scalar input.
### Installed Versions
<details>
```
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.5
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 154 Stepping 4, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.1.0
pytz : 2024.1
dateutil : 2.9.0
pip : 24.2
Cython : None
sphinx : None
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : 3.9.2
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : 2.0.37
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2024.1
qtpy : None
pyqt5 : None
```
</details>
| closed | 2025-03-07T18:50:10Z | 2025-03-13T15:38:18Z | https://github.com/pandas-dev/pandas/issues/61081 | [
"Bug",
"Docs"
] | gnotisauton | 9 |
NVIDIA/pix2pixHD | computer-vision | 197 | File "C:\Users\virkt\Anaconda3\envs\pix2pix\lib\site-packages\torch\utils\data\dataloader.py", line 774, in _try_get_data raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) RuntimeError: DataLoader worker (pid(s) 24476, 14368) exited unexpectedly | Can anyone solve this error? | open | 2020-06-05T10:37:50Z | 2020-06-24T06:00:03Z | https://github.com/NVIDIA/pix2pixHD/issues/197 | [] | manvirvirk | 1 |
quantumlib/Cirq | api | 6,327 | Improve `__pow__` for `SingleQubitCliffordGate` and `CliffordGate` class | **Is your feature request related to a use case or problem? Please describe.**
The `__pow__` operator for `CliffordGate` is implemented only for integer powers and has complexity $\mathcal{O}(n)$ where $n$ is the exponent.
https://github.com/quantumlib/Cirq/blob/ec84a057614396bf89459cd141a5f77b4d01ed48/cirq-core/cirq/ops/clifford_gate.py#L399-L411
For `SingleQubitCliffordGate` it's implemented for only integer powers where it falls to `CliffordGate.__pow__` and for $\pm \sqrt{}$.
https://github.com/quantumlib/Cirq/blob/ec84a057614396bf89459cd141a5f77b4d01ed48/cirq-core/cirq/ops/clifford_gate.py#L718-L728
**Describe the solution you'd like**
For `CliffordGate.__pow__`, exponentiation should be done using [binary exponentiation](https://cp-algorithms.com/algebra/binary-exp.html) to reduce the complexity to $\mathcal{O}(\log{n})$. Support for non-integer exponents is hard in the general case.
For `SingleQubitCliffordGate.__pow__`: the single qubit Clifford gates are a group of size 24, see https://github.com/quantumlib/Cirq/blob/ec84a057614396bf89459cd141a5f77b4d01ed48/cirq-core/cirq/ops/clifford_gate.py#L149
Support for integer powers can be done in $\mathcal{O}(1)$ if we either fall back to the optimized `CliffordGate.__pow__` but with `exponent%24` instead of `exponent`, or cache the results in a table and access `group_powers[self][exponent%24]`. For rational exponents, when the Clifford operation has a square root, the operation becomes well defined for exponents of the form $\frac{k}{2}$ where $k \in \mathbb{Z}$. For example $X^\frac{5}{2}$ is the same as $SqrtX^5$, and $X^\frac{-5}{2}$ is the same as $(SqrtX^\dagger)^5$.
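A minimal sketch of the binary-exponentiation part, written as generic Python over any group element (the function names, identity and compose arguments are placeholders for illustration, not Cirq's actual internals):
```python
def group_power(element, exponent, identity, compose):
    """Compute element**exponent in O(log n) group compositions."""
    if exponent < 0:
        raise ValueError("inverse handling omitted in this sketch")
    result = identity
    base = element
    while exponent:
        if exponent & 1:
            result = compose(result, base)   # multiply in the current bit
        base = compose(base, base)           # square
        exponent >>= 1
    return result

# Example with plain integer multiplication standing in for gate composition:
print(group_power(3, 13, 1, lambda a, b: a * b))  # 1594323 == 3**13
```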
**What is the urgency from your perspective for this issue? Is it blocking important work?**
<!-- Please choose one and remove the others -->
P3 - I'm not really blocked by it, it is an idea I'd like to discuss / suggestion based on principle | open | 2023-10-24T21:41:53Z | 2024-04-30T20:52:08Z | https://github.com/quantumlib/Cirq/issues/6327 | [
"good first issue",
"kind/feature-request",
"triage/accepted",
"good for learning"
] | NoureldinYosri | 6 |
X-PLUG/MobileAgent | automation | 60 | Asking for help: how do I solve "AssertionError: Torch not compiled with CUDA enabled"? | Current environment: NVIDIA-SMI 475.14 Driver Version: 475.14 CUDA Version: 11.4
Python version: 3.10.11

| closed | 2024-09-13T01:58:05Z | 2024-10-09T08:34:52Z | https://github.com/X-PLUG/MobileAgent/issues/60 | [] | shenyugub | 1 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 275 | Run Ollama/Local Models on the Google Colab | **Is your feature request related to a problem? Please describe.**
As the OpenAI & Google API keys cost money, it would be better if there were a way to run models directly on Google Colab.
**Describe the solution you'd like**
I'd like to be able to use Google Colab without having to use an API key; any local models like in the examples of this GitHub repository would be good | closed | 2024-05-20T20:41:23Z | 2024-05-29T17:29:39Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/275 | [
"enhancement"
] | Nick088Official | 3 |
allenai/allennlp | pytorch | 5,346 | 'optional' multitask scheduler is not optional | The docstring for the `MultiTaskDataLoader` indicates the `scheduler` argument is optional
https://github.com/allenai/allennlp/blob/311f1104bf4762b7b5c1172eb874276343d562c9/allennlp/data/data_loaders/multitask_data_loader.py#L54
However, it's declared as a required positional argument
https://github.com/allenai/allennlp/blob/311f1104bf4762b7b5c1172eb874276343d562c9/allennlp/data/data_loaders/multitask_data_loader.py#L106
And I don't see any indication that if not supplied it's set to `HomogeneousRoundRobinScheduler` | closed | 2021-08-09T00:38:59Z | 2021-08-12T09:06:29Z | https://github.com/allenai/allennlp/issues/5346 | [
"bug"
] | david-waterworth | 4 |
deezer/spleeter | tensorflow | 219 | [Bug] spleeter-gpu is unusable out of the box | ## Description
I tried, once again, to get spleeter running on the GPU, but I just can't do it.
I never directly worked with TensorFlow or any other ML software package, so I can't even begin to debug this.
## Step to reproduce
1. Install CUDA 10.2 on a fresh, fully up-to-date Windows 10 installation
2. Install Miniconda
3. Install spleeter-gpu using `conda install -c conda-forge spleeter-gpu` in the Anaconda Prompt
4. Run `spleeter separate -o Spleeter -m -p spleeter:2stems-16kHz -i "audiofile.m4a"`
## Output
```bash
Traceback (most recent call last):
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call
return fn(*args)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d_7/Conv2D}}]]
[[strided_slice_25/_309]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d_7/Conv2D}}]]
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\user\miniconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\user\miniconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\user\Miniconda3\Scripts\spleeter.exe\__main__.py", line 7, in <module>
File "c:\users\user\miniconda3\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "c:\users\user\miniconda3\lib\site-packages\spleeter\__main__.py", line 46, in main
entrypoint(arguments, params)
File "c:\users\user\miniconda3\lib\site-packages\spleeter\commands\separate.py", line 43, in entrypoint
synchronous=False
File "c:\users\user\miniconda3\lib\site-packages\spleeter\separator.py", line 123, in separate_to_file
sources = self.separate(waveform)
File "c:\users\user\miniconda3\lib\site-packages\spleeter\separator.py", line 89, in separate
'audio_id': ''})
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\contrib\predictor\predictor.py", line 77, in __call__
return self._session.run(fetches=self.fetch_tensors, feed_dict=feed_dict)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\client\session.py", line 950, in run
run_metadata_ptr)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\client\session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _do_run
run_metadata)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\client\session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d_7/Conv2D (defined at c:\users\user\miniconda3\lib\site-packages\spleeter\utils\estimator.py:71) ]]
[[strided_slice_25/_309]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d_7/Conv2D (defined at c:\users\user\miniconda3\lib\site-packages\spleeter\utils\estimator.py:71) ]]
0 successful operations.
0 derived errors ignored.
Original stack trace for 'conv2d_7/Conv2D':
File "c:\users\user\miniconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\user\miniconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\user\Miniconda3\Scripts\spleeter.exe\__main__.py", line 7, in <module>
sys.exit(entrypoint())
File "c:\users\user\miniconda3\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "c:\users\user\miniconda3\lib\site-packages\spleeter\__main__.py", line 46, in main
entrypoint(arguments, params)
File "c:\users\user\miniconda3\lib\site-packages\spleeter\commands\separate.py", line 43, in entrypoint
synchronous=False
File "c:\users\user\miniconda3\lib\site-packages\spleeter\separator.py", line 123, in separate_to_file
sources = self.separate(waveform)
File "c:\users\user\miniconda3\lib\site-packages\spleeter\separator.py", line 86, in separate
predictor = self._get_predictor()
File "c:\users\user\miniconda3\lib\site-packages\spleeter\separator.py", line 58, in _get_predictor
self._predictor = to_predictor(estimator)
File "c:\users\user\miniconda3\lib\site-packages\spleeter\utils\estimator.py", line 71, in to_predictor
return predictor.from_saved_model(latest)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\contrib\predictor\predictor_factories.py", line 153, in from_saved_model
config=config)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py", line 153, in __init__
loader.load(self._session, tags.split(','), export_dir)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 269, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 422, in load
**saver_kwargs)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 352, in load_graph
meta_graph_def, import_scope=import_scope, **saver_kwargs)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\training\saver.py", line 1473, in _import_meta_graph_with_return_elements
**kwargs))
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 857, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\framework\importer.py", line 443, in import_graph_def
_ProcessNewOps(graph)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\framework\importer.py", line 236, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3751, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3751, in <listcomp>
for c_op in c_api_util.new_tf_operations(self)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3641, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "c:\users\user\miniconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 |
| Installation type | Conda |
| RAM available | 32 GB |
| Hardware spec | Ryzen 9 3900x / RTX 2070 Super |
| closed | 2020-01-02T07:07:19Z | 2020-05-09T01:19:04Z | https://github.com/deezer/spleeter/issues/219 | [
"bug",
"invalid"
] | SamusAranX | 19 |
Integuru-AI/Integuru | api | 9 | Proposal to Add Unit Tests and CI Workflow Using GitHub Actions | I propose adding unit tests to improve code reliability and establishing a CI workflow using GitHub Actions to run these tests automatically on each pull request and commit. This setup will help maintain code quality and streamline development by catching potential issues early and providing contributors with immediate feedback.
@richardyhz @alanalanlu Do let me know how this sounds, I will probably work on this over the weekend. Thanks.
| closed | 2024-11-01T04:55:14Z | 2024-11-21T03:13:42Z | https://github.com/Integuru-AI/Integuru/issues/9 | [] | legendkartik45 | 0 |
ultralytics/ultralytics | python | 19,214 | Problems caused by categories missing from training data? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I am training a model on a domain (Biology) with an a priori large number of categories (species). The metadata for these categories is easy to obtain from databases such as GBIF. However, my image dataset contains only a relatively small number of these categories.
I would like to design my model architecture in such a way as to include all of the possible categories. This will allow me to do things like online training - when new images come in, even if they include an organism that was not in the previous training data, the existing model architecture can handle this "new" category. However, I am concerned that having a large number of categories that have no representation in the training data will cause unforeseen problems.
Will it likely be problematic to build a YOLO detection model with many categories that have no training data associated with them?
### Additional
_No response_ | open | 2025-02-12T17:14:15Z | 2025-02-12T17:15:11Z | https://github.com/ultralytics/ultralytics/issues/19214 | [
"question",
"detect"
] | csbrown | 1 |
healthchecks/healthchecks | django | 181 | Check settings goes over display | Hi! I'm just starting to use this amazing tool. Thank you for developing it :)
I am seeing a minor UI issue: the check settings run past the edge of the display:

I have 1366x768 display. Google Chrome 67.0.3396.99 Windows 10
Hope it will be resolved. Notify me, if I can help with anything about this issue | closed | 2018-07-20T13:54:03Z | 2018-07-24T09:20:27Z | https://github.com/healthchecks/healthchecks/issues/181 | [] | dimadk24 | 1 |
graphql-python/graphene | graphql | 866 | Cannot add integral value to Timestamp without freq. | I have a date column as `dtype('<M8[ns]')` which was converted from string type.
I want to add some number of days to this TIMESTAMP type.
I am doing something like this:
`data['dATE'].min() + 6`, but I am getting the error
`Cannot add integral value to Timestamp without freq.`
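(For reference, and this is an assumption about the intent rather than something stated in the report: pandas raises this error when a bare integer is added to a `Timestamp`; adding an explicit `Timedelta` usually resolves it.)

```python
import pandas as pd

# Small stand-in frame; the 'dATE' column name mirrors the snippet above.
data = pd.DataFrame({'dATE': pd.to_datetime(['2018-11-01', '2018-11-21'])})

# Adding a bare integer raises "Cannot add integral value to Timestamp without freq";
# adding an explicit time offset works.
shifted = data['dATE'].min() + pd.Timedelta(days=6)
print(shifted)  # 2018-11-07 00:00:00
```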
how can i use arithmetic operation on date? | closed | 2018-11-21T19:27:02Z | 2019-03-16T13:54:13Z | https://github.com/graphql-python/graphene/issues/866 | [] | F-Chaudhry | 1 |
PaddlePaddle/models | computer-vision | 4,771 | Can this be used to prune or optimize TTS speech models? | Can this be used to prune or optimize TTS speech models, for example the PyTorch version of Tacotron 2? | closed | 2020-07-27T06:00:08Z | 2020-08-10T01:30:27Z | https://github.com/PaddlePaddle/models/issues/4771 | [] | chwbin | 1 |
pytest-dev/pytest-html | pytest | 786 | pytest_html_results_table_row hook is called twice when running tests | When running tests, I get the following error, same as in issue #782
```
INTERNALERROR> if "sortable" in self._report.table_header[index]:
INTERNALERROR> ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
INTERNALERROR> IndexError: list index out of range
```
The sample code is taken verbatim from the [documentation](https://pytest-html.readthedocs.io/en/latest/user_guide.html#modifying-the-results-table)
When investigating the error and adding logging to the hook, I can see that it is called twice during the same test.
When inspecting the values of `cells` and `self._report.table_header` in basereport.py in `_hydrate_data`, I see that the length of `cells` is 8 while `self._report.table_header` has 6 entries. As the code iterates over `cells`, this triggers an index error.
Has this ever happened to anyone else?
here is the sample code
```python
def pytest_html_results_table_header(cells):
cells.insert(2, "<th>Description</th>")
cells.insert(1, '<th class="sortable time" data-column-type="time">Time</th>')
def pytest_html_results_table_row(report, cells):
description = report.description.replace(" ", " ").replace("\n", "<br />")
cells.insert(2, f"<td>{description}</td>")
cells.insert(1, f'<td class="col-time">{datetime.utcnow()}</td>')
print(cells)
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
outcome = yield
report = outcome.get_result()
report.description = str(item.function.__doc__)
```
Here is the result of the `print(cells)` statement
```
['<td class="col-result">Passed</td>', '<td class="col-testId">test_suite_setUp::setup</td>', '<td class="col-duration">0 ms</td>', '<td class="col-links"></td>', '<td>None</td>', '<td class="col-time">2023-12-19 17:19:07.301758</td>']
['<td class="col-result">Passed</td>', '<td class="col-testId">test_suite_setUp::setup</td>', '<td class="col-duration">0 ms</td>', '<td class="col-links"></td>', '<td>None</td>', '<td class="col-time">2023-12-19 17:19:07.301758</td>', '<td>None</td>', '<td class="col-time">2023-12-19 17:19:07.302751</td>']
```
printing the content of `report.call` when filtering on report.when == "call" also gives 'setup' twice and then 'call' twice
| closed | 2023-12-19T17:36:38Z | 2024-12-07T10:14:23Z | https://github.com/pytest-dev/pytest-html/issues/786 | [] | gregtwice | 1 |
tflearn/tflearn | tensorflow | 913 | Appending to existing vocabulary. | Since my system runs out of memory when preprocessing data, I am experimenting with batch training. How do I append to the existing vocabulary? The way I've done it so far is below. Please let me know if there is a better or a correct way of doing it. Thank you.
```
vp = tflearn.data_utils.VocabularyProcessor(max_input_length, min_frequency=min_frequency)
try:
vp = vp.restore (model_dump_path+'vp_words_dictionary')
print("restored vp_words_dictionary")
except:
pass
vp = vp.fit(inputs)
inputs_parsed = vp.transform(inputs)
vp.save(model_dump_path+'vp_words_dictionary')
``` | open | 2017-09-24T01:47:25Z | 2017-09-24T01:47:25Z | https://github.com/tflearn/tflearn/issues/913 | [] | igorvishnevskiy | 0 |
jmcnamara/XlsxWriter | pandas | 774 | Feature request: set_range() similar to set_row() and set_column() | Hi,
I am currently trying to format a specific range in an Excel file and, after a thorough search on Stack Overflow and within the API, I realize that it is not possible. Nor could I find any method to change the format of only one cell. The .write() methods all overwrite the current cell contents.
Use case:
- multiple tables in the same Excel sheet that need separate formatting of their headers, for example
The arguments should likely work very similar to .merge_range(), without the need to provide a value.
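For illustration, a sketch of how the proposed call could look; the `set_range()` name and signature below are hypothetical (mirroring `merge_range()` without a value), and the `conditional_format()` trick is only a possible present-day workaround, not an official recommendation:

```python
import xlsxwriter

workbook = xlsxwriter.Workbook('tables.xlsx')
worksheet = workbook.add_worksheet()
header_fmt = workbook.add_format({'bold': True, 'bg_color': '#DDEBF7'})

# Hypothetical API as proposed above (does not exist in XlsxWriter today):
# worksheet.set_range('A1:D1', header_fmt)

# Possible workaround with the existing API: a conditional format that always
# matches, so the format is applied to the range without overwriting contents.
worksheet.conditional_format('A1:D1', {'type': 'no_errors', 'format': header_fmt})

workbook.close()
```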
Thanks!
| closed | 2021-01-19T15:12:45Z | 2021-01-19T20:37:10Z | https://github.com/jmcnamara/XlsxWriter/issues/774 | [
"wont_fix",
"feature request"
] | Db-pckr | 1 |
plotly/dash-table | dash | 413 | table's height becomes too small when filtering returns 0 rows and virtualized=true | 
```python
import dash
from dash.dependencies import Input, Output
import dash_core_components as dcc
import dash_html_components as html
import dash_table
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminderDataFiveYear.csv')
# Assign row IDs
df['id'] = df.index
app = dash.Dash(__name__)
app.layout = html.Div([
html.H3('Row IDs'),
dash_table.DataTable(
id='table',
columns=[{
'id': c,
'label': c
} for c in df.columns if c != 'id'],
data=df.to_dict('records'),
virtualization=True,
filtering='fe',
sorting=True,
pagination_mode=False,
row_selectable='multi',
derived_virtual_selected_row_ids=[]
),
dcc.Graph(id='graph')
])
@app.callback(
Output('graph', 'figure'),
[Input('table', 'derived_virtual_row_ids'),
Input('table', 'derived_virtual_selected_row_ids')])
def display_graph(row_ids, selected_row_ids):
if row_ids is not None:
filtered_df = df.loc[row_ids, :].copy()
else:
filtered_df = df.copy()
filtered_df['color'] = '#0074D9'
filtered_df.loc[selected_row_ids, 'color'] = '#85144b'
return {
'data': [
{
'x': filtered_df['country'],
'y': filtered_df['gdpPercap'],
'type': 'bar',
'marker': {
'color': filtered_df['color']
}
}
],
'layout': {
'uirevision': 'constant',
'yaxis': {
'title': 'gdpPercap'
}
}
}
if __name__ == '__main__':
app.run_server(debug=True)
``` | open | 2019-04-23T18:12:20Z | 2019-04-25T18:56:25Z | https://github.com/plotly/dash-table/issues/413 | [
"dash-type-enhancement"
] | chriddyp | 1 |
pyeve/eve | flask | 1,309 | No "_links" element in an embeddable field. | When querying a resource with an embeddable field, the resulting embedded document doesn't contain the "_links" element.
This would make it easy to browse to the embedded resource.
### Expected Behavior
**Query /people/5d72524e396a29fb502a2cf1?embedded={"parent": 1}**
It contains a **parent._links** element.
```json
{
"_id": "5d72524e396a29fb502a2cf1",
"firstname": "George",
"lastname": "Ten Son",
"parent": {
"_id": "5d72522b396a29fb502a2cf0",
"firstname": "George",
"lastname": "Ten Father",
"_updated": "Fri, 06 Sep 2019 12:33:47 GMT",
"_created": "Fri, 06 Sep 2019 12:33:47 GMT",
"_etag": "17513197535d410eededa5b4f5acc7e3380605c1",
"_links": {
"self": {
"title": "person",
"href": "people/5d72522b396a29fb502a2cf0"
}
}
},
"_updated": "Fri, 06 Sep 2019 12:34:22 GMT",
"_created": "Fri, 06 Sep 2019 12:34:22 GMT",
"_etag": "245f6b8dafb288ff06bad235337608dd2c00763a",
"_links": {
"parent": {
"title": "home",
"href": "/"
},
"self": {
"title": "person",
"href": "people/5d72524e396a29fb502a2cf1"
},
"collection": {
"title": "people",
"href": "people"
}
}
```
### Actual Behavior
There is no **parent._links** element.
```json
{
"_id": "5d72524e396a29fb502a2cf1",
"firstname": "George",
"lastname": "Ten Son",
"parent": {
"_id": "5d72522b396a29fb502a2cf0",
"firstname": "George",
"lastname": "Ten Father",
"_updated": "Fri, 06 Sep 2019 12:33:47 GMT",
"_created": "Fri, 06 Sep 2019 12:33:47 GMT",
"_etag": "17513197535d410eededa5b4f5acc7e3380605c1"
},
"_updated": "Fri, 06 Sep 2019 12:34:22 GMT",
"_created": "Fri, 06 Sep 2019 12:34:22 GMT",
"_etag": "245f6b8dafb288ff06bad235337608dd2c00763a",
"_links": {
"parent": {
"title": "home",
"href": "/"
},
"self": {
"title": "person",
"href": "people/5d72524e396a29fb502a2cf1"
},
"collection": {
"title": "people",
"href": "people"
}
}
``` | closed | 2019-09-06T13:00:26Z | 2020-03-11T15:25:54Z | https://github.com/pyeve/eve/issues/1309 | [
"stale"
] | jordeu | 1 |
chainer/chainer | numpy | 8,579 | How about matching weight indices of LSTM in the docstring to implementation? | https://docs.chainer.org/en/stable/reference/generated/chainer.functions.n_step_lstm.html
https://docs.chainer.org/en/stable/reference/generated/chainer.functions.n_step_bilstm.html
In the docstring of `n_step_lstm` and `n_step_bilstm` ,
- `W0` and `W4` are weights for input gates
- `W1` and `W5` are weights for forget gates
- `W2` and `W6` are weights for output gates
- `W3` and `W7` are weights for cell gates
But referring to implementation of [_extract_gates](https://github.com/chainer/chainer/blob/v7.7.0/chainer/functions/rnn/lstm.py#L91) and [_lstm](https://github.com/chainer/chainer/blob/v7.7.0/chainer/functions/rnn/n_step_lstm.py#L540-L547) ,
- `W2` and `W6` are weights for cell gates (not output gates)
- `W3` and `W7` are weights for output gates (not cell gates)
So how about fixing the docstring of `n_step_lstm` and `n_step_bilstm` and matching the index to implementation?
If this is a valid proposal, I will create a pull request. | closed | 2020-08-04T08:41:58Z | 2021-06-26T03:11:21Z | https://github.com/chainer/chainer/issues/8579 | [
"cat:document",
"stale",
"prio:medium"
] | ysk24ok | 2 |
ageitgey/face_recognition | machine-learning | 1,252 | ImportError: /usr/local/cuda/lib64/libcudnn.so.7: version `libcudnn.so.7' not found | * face_recognition version: 7
* Python version: 3.7
* Operating System: ubuntu 18.04
Keep getting this error:
ImportError: /usr/local/cuda/lib64/libcudnn.so.7: version `libcudnn.so.7' not found (required by /home/ubuntu/anaconda3/lib/python3.7/site-packages/_dlib_pybind11.cpython-37m-x86_64-linux-gnu.so)
The command I was running was: face_recognition known unknown
Full output:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/bin/face_recognition", line 5, in <module>
from face_recognition.face_recognition_cli import main
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/face_recognition/__init__.py", line 7, in <module>
from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/face_recognition/api.py", line 4, in <module>
import dlib
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/dlib/__init__.py", line 19, in <module>
from _dlib_pybind11 import *
ImportError: /usr/local/cuda/lib64/libcudnn.so.7: version `libcudnn.so.7' not found (required by /home/ubuntu/anaconda3/lib/python3.7/site-packages/_dlib_pybind11.cpython-37m-x86_64-linux-gnu.so) | open | 2020-12-10T02:26:31Z | 2020-12-10T02:26:31Z | https://github.com/ageitgey/face_recognition/issues/1252 | [] | augustfr | 0 |
babysor/MockingBird | deep-learning | 410 | How can I convert a .py file back into a .ui file? So I can add my own features and modify the interface UI again | Asking the author and everyone for help! | closed | 2022-02-28T12:37:35Z | 2022-03-07T04:24:57Z | https://github.com/babysor/MockingBird/issues/410 | [] | flysmart | 3 |
ibis-project/ibis | pandas | 10,403 | bug: DatabaseError: ORA-00923 when attempting to fetch table schema using Ibis Oracle backend | ### What happened?
Here's an English description of the issue suitable for a GitHub submission:
Title: DatabaseError: ORA-00923 when attempting to fetch table schema using Ibis Oracle backend
Description:
I'm encountering an error when trying to connect to an Oracle database using the Ibis Oracle backend. The error occurs specifically when attempting to fetch the schema for a table.
Steps to reproduce:
Establish a connection to the Oracle database:
<PYTHON>
con = ibis.oracle.connect(
user='user',
password='password',
host='88888',
port=1521,
service_name='**'
)
List databases and tables:
<PYTHON>
dbs = con.list_databases()
tables = con.list_tables()
Attempt to fetch a specific table:
<PYTHON>
t = con.table("CONFIG_YMXX")
### What version of ibis are you using?
9.5.0
### What backend(s) are you using, if any?
Oracle
### Relevant log output
```sh
DatabaseError: ORA-00923: FROM keyword not found where expected
Help: https://docs.oracle.com/error-help/db/ora-00923/
```
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | closed | 2024-10-31T03:51:23Z | 2024-11-02T07:50:41Z | https://github.com/ibis-project/ibis/issues/10403 | [
"bug"
] | xuefliang | 2 |
replicate/cog | tensorflow | 2,079 | HTTPX verifytypes error | During cog push in our CI, we get the following error:
```
Validating model schema...
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/usr/local/lib/python3.12/site-packages/cog/command/openapi_schema.py", line 12, in <module>
from ..server.http import create_app
File "/usr/local/lib/python3.12/site-packages/cog/server/http.py", line 47, in <module>
from .runner import (
File "/usr/local/lib/python3.12/site-packages/cog/server/runner.py", line 21, in <module>
from .clients import SKIP_START_EVENT, ClientManager
File "/usr/local/lib/python3.12/site-packages/cog/server/clients.py", line 27, in <module>
from .retry_transport import RetryTransport
File "/usr/local/lib/python3.12/site-packages/cog/server/retry_transport.py", line 11, in <module>
class RetryTransport(httpx.AsyncBaseTransport):
File "/usr/local/lib/python3.12/site-packages/cog/server/retry_transport.py", line 35, in RetryTransport
verify: httpx._types.VerifyTypes = True,
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'httpx._types' has no attribute 'VerifyTypes'. Did you mean: 'ProxyTypes'?
ⅹ Failed to get type signature: exit status 1
```
Cog setup is:
```
- name: Setup Cog
uses: replicate/setup-cog@v2
with:
cog-version: "v0.10.0-alpha21"
token: ${{ secrets.REPLICATE_API_TOKEN }}
```
| closed | 2024-12-06T18:56:23Z | 2024-12-09T11:03:36Z | https://github.com/replicate/cog/issues/2079 | [] | christopher5106 | 1 |
albumentations-team/albumentations | machine-learning | 2,474 | SomeOf not respecting `p` of child augments | ## Describe the bug
SomeOf (and RandomOrder) does not seem to respect child augment probabilities.
### To Reproduce
```
pip install albumentations==2.0.5
```
Example:
```
A.SomeOf([A.Erasing(p=0.1, scale=(0.2, 0.7), fill='random_uniform')], n=100)
```
### Expected behavior
Erasing should only be applied 1 out of 10 times.
### Actual behavior
Erasing is currently applied every time.
### Screenshots
N/A
### Additional context
```python
class SomeOf(BaseCompose):
# ... SKIPPED ...
def __call__(self, *arg: Any, force_apply: bool = False, **data: Any) -> dict[str, Any]:
if self.replay_mode:
for t in self.transforms:
data = t(**data)
data = self.check_data_post_transform(data)
return data
if self.transforms_ps and (force_apply or self.py_random.random() < self.p):
for i in self._get_idx():
t = self.transforms[i]
data = t(force_apply=True, **data) # <<PROBLEMATIC ENTRY>>
self._track_transform_params(t, data)
data = self.check_data_post_transform(data)
return data
```
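A possible user-side workaround while this is open (an assumption about intent, not a confirmed fix): keep probabilistic children outside of `SomeOf`, where their own `p` is honored, e.g.:

```python
import numpy as np
import albumentations as A

image = np.zeros((100, 100, 3), dtype=np.uint8)

# The child's p=0.1 is respected here, so Erasing fires roughly 1 time in 10.
pipeline = A.Compose([A.Erasing(p=0.1, scale=(0.2, 0.7), fill='random_uniform')])
out = pipeline(image=image)
```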
| open | 2025-03-16T01:05:05Z | 2025-03-18T16:51:42Z | https://github.com/albumentations-team/albumentations/issues/2474 | [
"bug"
] | nmichlo | 1 |
Neoteroi/BlackSheep | asyncio | 247 | error 500 custom template | Hi there! Can I use custom html template for `Internal Server Error` 500? If yes then how to do it?
| closed | 2022-04-13T17:44:34Z | 2023-07-15T16:17:16Z | https://github.com/Neoteroi/BlackSheep/issues/247 | [
"enhancement",
"fixed in branch",
"needs docs"
] | advenn | 1 |
freqtrade/freqtrade | python | 11,504 | implement user access in freqtrade telegram bot | <!--
Note: this section will not show up in the issue.
Have you search for this feature before requesting it? It's highly likely that a similar request was already filed.
-->
## Describe your environment
(if applicable)
* Operating system: linux
* Python Version: _____ (`python -V`)
* CCXT version: _____ (`pip freeze | grep ccxt`)
* Freqtrade Version: 2025.2
Implement user access control in the freqtrade Telegram bot, similar to ALLOWED_USERS in the telegram library: the bot should read messages only from specific Telegram user IDs, if desired.
Add a new parameter in config.json with the Telegram user IDs of allowed users; if the list is empty or contains a string like "disabled", the bot will read messages from all users:
```
"telegram": {
"enabled": true,
"token": "xxxxxxxx",
# allowed users are only 123456789 and 987654321
"allowed_users": ["123456789", "987654321"],
............
# allowed for everyone
"allowed_users": [],
"allowed_users": ["disabled"],
``` | closed | 2025-03-13T19:15:37Z | 2025-03-16T14:20:05Z | https://github.com/freqtrade/freqtrade/issues/11504 | [
"Question"
] | dobremha | 9 |
thtrieu/darkflow | tensorflow | 484 | Video is not displaying. | I am trying to run the camera/video demo. When using this command it executes correctly, but there is no video output:
`./flow --model cfg/tiny-yolo-voc.cfg --load bin/tiny-yolo-voc.weights --demo demo.mp4`
But when I use --saveVideo, the output video works fine:
./flow --model cfg/tiny-yolo-voc.cfg --load bin/tiny-yolo-voc.weights --demo demo.mp4 --saveVideo | open | 2017-12-22T05:13:52Z | 2018-01-10T04:37:12Z | https://github.com/thtrieu/darkflow/issues/484 | [
"bug"
] | MuhammadFaizanKhan | 1 |
MorvanZhou/tutorials | tensorflow | 73 | Is there any code for reading in your own image training set? | open | 2018-12-10T05:17:13Z | 2018-12-10T05:17:13Z | https://github.com/MorvanZhou/tutorials/issues/73 | [] | xiatutu | 0 |
|
AirtestProject/Airtest | automation | 948 | Image not matched, yet it passed the wait() check | **Describe the bug**
The image was not matched, yet it still passed the wait() check. Is there a bug somewhere?

**Steps to reproduce**
1. Add a wait() call
2. Run the script
3. The target image has not appeared yet
4. It reports the image as found, the script continues on, and the subsequent flow is broken
**Expected behavior**
Wait until the image appears or the timeout is reached; it should not pass immediately
**Python version:** `Python 3.6.5`
**Airtest version:** `1.2.10`
**Phone device:**
- OPPO R11 Plus, Android 8.1.0
**Other environment info**
AirtestIDE for macOS
| closed | 2021-07-30T03:53:37Z | 2021-10-08T02:17:34Z | https://github.com/AirtestProject/Airtest/issues/948 | [] | AnewG | 3 |
lundberg/respx | pytest | 226 | Add a catch-all `.route()` example in docs | Proposed in https://github.com/lundberg/respx/issues/177#issuecomment-1400554951 | closed | 2023-01-27T15:21:33Z | 2024-03-19T16:12:09Z | https://github.com/lundberg/respx/issues/226 | [
"documentation"
] | lundberg | 0 |
seleniumbase/SeleniumBase | pytest | 2,976 | After Dockerizing the Selenium Web App not opening the webpage that needs to be crawled. | Dear Micheal,
I have built a full web app that scrapes a site and gathers some information. Locally everything runs perfectly; however, after dockerizing the application, an exception that had never occurred before was raised:
```
INFO: 127.0.0.1:44352 - "POST /crawl_menu HTTP/1.1" 200 OK
Exception in Getir Crawler: Message:
Element {button[aria-label='Tümünü Reddet']} was not present after 7 seconds!
```
I have never gotten that before. I think that in Docker the container runs the crawler with headless=True and tries to reach the site within 6 seconds but cannot do it. What should I do to work around that? I will provide my crawler.py and Dockerfile.
crawler.py:
```
def g_crawler(url, is_area):
menu_items = []
if not is_area:
with SB(uc=True, headless=True) as sb:
sb.driver.uc_open_with_reconnect(url, 6)
try:
sb.uc_gui_handle_cf()
sb.sleep(3)
sb.click("button[aria-label='Tümünü Reddet']")
sb.sleep(3)
all_items = sb.find_elements("div[class='sc-be09943-2 gagwGV']")
for item in all_items:
product_name = item.find_element("css selector", "h4[class='style__Title4-sc-__sc-1nwjacj-5 jrcmhy sc-be09943-0 bpfNyi']").text
sb.sleep(2)
try:
product_description = item.find_element("css selector", "p[contenteditable='false']").text
except:
product_description = "No description for this product."
sb.sleep(2)
product_price = item.find_element("css selector", "span[class='style__Text-sc-__sc-1nwjacj-0 jbOUDC sc-be09943-5 kA-DgzG']").text
sb.sleep(2)
menu_item = {
"Menu Item": product_name,
"Menu Ingredients": product_description,
"Price": product_price
}
if product_name == "Poşet":
continue
menu_items.append(menu_item)
menu_items_json = json.dumps(menu_items, ensure_ascii=False, indent=4)
menu_items_list = json.loads(menu_items_json)
df = pd.DataFrame(menu_items_list)
# title = sb.get_title()
# excel_file = f'{title}_getir_menu.xlsx'
# df.to_excel(excel_file, index=False)
return df.to_json(orient='split')
except Exception as e:
print(f"Exception in Getir Crawler: {e}")
```
My Dockerfile:
```
# Use a smaller base image
FROM python:3.10-slim
# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV TZ=Europe/Istanbul
ENV LC_ALL=tr_TR.UTF-8
ENV LANG=tr_TR.UTF-8
# Set the working directory in the container
WORKDIR /app
# Install dependencies and Chrome in one layer to keep image size smaller
RUN apt-get update && apt-get install -y \
wget \
gnupg \
unzip \
curl \
ca-certificates \
fonts-liberation \
libappindicator3-1 \
libasound2 \
libatk-bridge2.0-0 \
libatk1.0-0 \
libcups2 \
libdbus-1-3 \
libgdk-pixbuf2.0-0 \
libnspr4 \
libnss3 \
libx11-xcb1 \
libxcomposite1 \
libxdamage1 \
libxrandr2 \
xdg-utils \
locales \
--no-install-recommends \
&& ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone \
&& wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb \
&& apt-get install ./google-chrome-stable_current_amd64.deb --yes \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Configure locale settings for Türkiye
RUN echo "LC_ALL=tr_TR.UTF-8" >> /etc/environment \
&& echo "LANG=tr_TR.UTF-8" >> /etc/environment \
&& locale-gen tr_TR.UTF-8
# Copy and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application
COPY . .
# Expose the ports for FastAPI and Streamlit
EXPOSE 8000 8501
# Command to run FastAPI and Streamlit
CMD ["sh", "-c", "uvicorn menu_crawler:app --host 0.0.0.0 --port 8000 & streamlit run Hotel_Analyst.py"]
``` | closed | 2024-07-30T15:02:35Z | 2024-07-31T06:30:04Z | https://github.com/seleniumbase/SeleniumBase/issues/2976 | [
"invalid usage",
"UC Mode / CDP Mode"
] | Orbiszeus | 9 |
wger-project/wger | django | 1,506 | Improve OFF import | At the moment, when importing a current OFF dump with a threshold of 0.7 completeness, around 230 000 products can't be imported because the `extract_info_from_off` function can't extract the necessary information.
Most products fail because some of these fields are missing
* energy-kcal_100g
* proteins_100g
* carbohydrates_100g
* sugars_100g
* fat_100g
* saturated-fat_100g
## Possible solutions
~~Convert the energy, when the energy is only available in kj per 100g and not in kcal (`energy-kj_100g`), the conversion is easy~~
~~Ignore less important ones (`sugars_100g` and `saturated-fat_100g`) and just set their values to 0~~
Calculate the rest if the infos are available, read `serving_quantity` and e.g. `carbohydrates_serving` and convert this to a per-100g value. This probably has some tricky edge cases and we should take a look at what other fields / flags we have access to in the dump
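A rough sketch of that per-100g fallback (field names follow the OFF keys listed above; the guards and error handling are assumptions, not settled wger logic):

```python
def per_100g_from_serving(product: dict, nutrient: str):
    """Derive a per-100g value from *_serving fields when *_100g is missing."""
    value = product.get(f"{nutrient}_100g")
    if value is not None:
        return value

    serving_size = product.get("serving_quantity")    # grams per serving
    per_serving = product.get(f"{nutrient}_serving")  # nutrient amount per serving
    if not serving_size or per_serving is None:
        return None                                   # still not enough data
    return per_serving * 100 / float(serving_size)


# e.g. per_100g_from_serving(entry, "carbohydrates")
```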
| open | 2023-11-26T13:43:01Z | 2023-11-27T19:26:41Z | https://github.com/wger-project/wger/issues/1506 | [] | rolandgeider | 0 |
tflearn/tflearn | data-science | 796 | any plan to add siamese network? | I have seen that TensorFlow has added a siamese network to detect text similarity (https://github.com/dhwajraj/deep-siamese-text-similarity/),
but i don't see any example about siamese network of tflearn, is there any plan to add siamese network of tflearn? | open | 2017-06-15T14:40:48Z | 2022-09-10T18:06:29Z | https://github.com/tflearn/tflearn/issues/796 | [
"contributions welcome"
] | willduan | 2 |
KaiyangZhou/deep-person-reid | computer-vision | 582 | Validating on my CAL model | Hi, I trained my CAL model for clothes changing using the backbone OSNet, but got very low map and top1 when validating the model on market1501 dataset using our main.py code, what is the reason of it? | open | 2024-08-14T09:03:34Z | 2024-08-14T09:03:34Z | https://github.com/KaiyangZhou/deep-person-reid/issues/582 | [] | QiqLiang | 0 |
numba/numba | numpy | 9,707 | Assertion error in LoopVectorize.cpp on AWS EC2 c7g.8xlarge machines | <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [ ] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
Sorry for not including code to reproduce it! I hope someone else will be able to find a way. Our code is big, complex, and proprietary, and I have no idea which part of it triggers the issue. It has a few calls to other Numba-decorated functions and does plenty of math over Numpy arrays of float64 and uint64 values.
I've tested it with many versions of Python 3, Numpy 1.x and 2.0.1, Numba 0.60.0, and llvmlite 0.43.0. (But also some older versions of Numba.) The OS the the default Ubuntu 24.04 AWS image.
The problem only happens on our c7g.8xlarge AWS instance (maybe any c7g instance?), so for example not on c6g.8xlarge, which is our current workaround. (Both are aarch64/arm64.)
A call to a big, complex Numba-decorated function causes an immediate crash (no Traceback) with the message:
```
python: /root/miniconda3/envs/buildenv/conda-bld/llvmdev_1680642098205/work/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp:10136: virtual void llvm::VPReplicateRecipe::execute(llvm::VPTransformState&): Assertion `(!State.VF.isScalable() || IsUniform) && "Can't scalarize a scalable vector"' failed.
```
I hope that's at all helpful! | closed | 2024-08-16T11:16:25Z | 2024-11-28T02:03:40Z | https://github.com/numba/numba/issues/9707 | [
"llvm",
"more info needed",
"bug - segfault",
"stale"
] | Telofy | 11 |
Kanaries/pygwalker | matplotlib | 125 | Cannot load more than | When I try to embed pygwalker in `streamlit`, I get the following error:
```
Dataframe is too large for ipynb files. Only 14862 sample items are printed to the file.
```
Is it a known issue that pygwalker cannot handle large datasets?
Thanks a lot for the work, the project looks super cool 😄
Best,
Adrien | closed | 2023-06-05T08:41:15Z | 2023-07-06T02:02:19Z | https://github.com/Kanaries/pygwalker/issues/125 | [
"fixed but needs feedback",
"P1"
] | ruaultadrien | 2 |
pallets-eco/flask-wtf | flask | 3 | Use built-in json module within widgets.py | widgets.py currently requires simplejson to work. It should try importing the built-in json library before attempting to use simplejson.
Patch attached.
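The attached patch itself is not reproduced here, but a minimal sketch of the intended fallback would be something like:

```python
# Prefer the standard-library json module; fall back to simplejson if unavailable.
try:
    import json
except ImportError:
    import simplejson as json
```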
---
- Bitbucket: https://bitbucket.org/danjac/flask-wtf/issue/3
- Originally Reported By: [ ](http://bitbucket.org/sirn)
- Originally Created At: 2010-07-20 22:05:49
| closed | 2012-02-29T16:44:46Z | 2021-05-30T01:24:53Z | https://github.com/pallets-eco/flask-wtf/issues/3 | [
"bug",
"import"
] | rduplain | 0 |
polyaxon/traceml | plotly | 8 | Show missing columns only | Thanks for the awesome plugin !
1. Would it be possible to add colors to point out missing values? A light shade of red if missing is > 0.
2. Would it be possible to display only missing columns? Sometimes a dataframe has a lot of columns and the user is mostly interested in the missing information.
I am new to python, if you guide me where to look, I can create a pull request. Thank you. | open | 2017-04-24T18:50:35Z | 2017-04-24T18:50:35Z | https://github.com/polyaxon/traceml/issues/8 | [] | upkarlidder | 0 |
jupyter-incubator/sparkmagic | jupyter | 651 | Automated runs via Papermill | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
I'm trying to determine whether notebooks using livy/sparkmagic can be run via papermill. It would seem that the answer is yes (papermill is mentioned in your docs), but I see no examples and am unclear how spark sessions can be bootstrapped non-interactively. Automated session creation in general would be a great feature to add/document.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
An example which shows how to create spark sessions automatically.
| closed | 2020-06-20T17:44:13Z | 2020-07-06T14:41:59Z | https://github.com/jupyter-incubator/sparkmagic/issues/651 | [] | dylanwilder | 2 |
TencentARC/GFPGAN | deep-learning | 60 | How to reduce the beautification effect | Personally I feel the enhanced faces look a bit over-beautified: too smooth and lacking detail. If I retrain the model myself, can I reduce the beautification effect, and which parts would be best to modify? | closed | 2021-09-08T00:38:33Z | 2021-09-24T07:57:10Z | https://github.com/TencentARC/GFPGAN/issues/60 | [] | jorjiang | 3 |
google-deepmind/sonnet | tensorflow | 153 | Request for Conv1D to support [WNC] input format | in pyTorch the convention places the time dimension first. This is convenient for RL and is extensively used in code bases.
Currently the Conv1D in sonnet (v1) only supports (batch, sequence_length, element_size) in the "NWC" format. Can we add support to 'WNC' format to the Conv1D?
https://pytorch.org/docs/stable/nn.html#gru
*Update*: it seems the Conv1D in Pytorch also uses [NWC] convention, for it is awkward otherwise for conv modules.
https://pytorch.org/docs/stable/nn.html#conv1d | closed | 2019-11-20T12:35:30Z | 2019-11-22T10:00:03Z | https://github.com/google-deepmind/sonnet/issues/153 | [] | geyang | 1 |
apache/airflow | machine-learning | 47,808 | Scheduler crashes on dag with retries | ### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
Following stack trace is seen on trying to execute the attached dag.
```
[2025-03-15 15:30:21 +0530] [27911] [INFO] Worker exiting (pid: 27911)
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/providers_configuration_loader.py", line 55, in wrapped_function
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/cli/commands/local_commands/scheduler_command.py", line 52, in scheduler
run_command_with_daemon_option(
File "/home/karthikeyan/stuff/python/airflow/airflow/cli/commands/local_commands/daemon_utils.py", line 86, in run_command_with_daemon_option
callback()
File "/home/karthikeyan/stuff/python/airflow/airflow/cli/commands/local_commands/scheduler_command.py", line 55, in <lambda>
callback=lambda: _run_scheduler_job(args),
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/cli/commands/local_commands/scheduler_command.py", line 43, in _run_scheduler_job
run_job(job=job_runner.job, execute_callable=job_runner._execute)
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/session.py", line 101, in wrapper
return func(*args, session=session, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/jobs/job.py", line 342, in run_job
return execute_job(job, execute_callable=execute_callable)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/jobs/job.py", line 371, in execute_job
ret = execute_callable()
^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/jobs/scheduler_job_runner.py", line 937, in _execute
self._run_scheduler_loop()
File "/home/karthikeyan/stuff/python/airflow/airflow/jobs/scheduler_job_runner.py", line 1063, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/jobs/scheduler_job_runner.py", line 1163, in _do_scheduling
callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/retries.py", line 93, in wrapped_function
for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
File "/home/karthikeyan/stuff/python/airflow/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 443, in __iter__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 376, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/retries.py", line 102, in wrapped_function
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/jobs/scheduler_job_runner.py", line 1569, in _schedule_all_dag_runs
callback_tuples = [(run, self._schedule_dag_run(run, session=session)) for run in dag_runs]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/jobs/scheduler_job_runner.py", line 1569, in <listcomp>
callback_tuples = [(run, self._schedule_dag_run(run, session=session)) for run in dag_runs]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/jobs/scheduler_job_runner.py", line 1667, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/session.py", line 98, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/models/dagrun.py", line 943, in update_state
info = self.task_instance_scheduling_decisions(session)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/session.py", line 98, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/models/dagrun.py", line 1123, in task_instance_scheduling_decisions
schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/models/dagrun.py", line 1222, in _get_ready_tis
if not schedulable.are_dependencies_met(session=session, dep_context=dep_context):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/session.py", line 98, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/karthikeyan/stuff/python/airflow/airflow/models/taskinstance.py", line 2335, in are_dependencies_met
for dep_status in self.get_failed_dep_statuses(dep_context=dep_context, session=session):
File "/home/karthikeyan/stuff/python/airflow/airflow/models/taskinstance.py", line 2359, in get_failed_dep_statuses
for dep_status in dep.get_dep_statuses(self, session, dep_context):
File "/home/karthikeyan/stuff/python/airflow/airflow/ti_deps/deps/base_ti_dep.py", line 116, in get_dep_statuses
yield from self._get_dep_statuses(ti, session, cxt)
File "/home/karthikeyan/stuff/python/airflow/airflow/ti_deps/deps/not_previously_skipped_dep.py", line 56, in _get_dep_statuses
if parent.inherits_from_skipmixin:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'SerializedBaseOperator' object has no attribute 'inherits_from_skipmixin'
[2025-03-15 15:30:21 +0530] [27910] [INFO] Shutting down: Master
```
### What you think should happen instead?
_No response_
### How to reproduce
Following dag file crashes the scheduler
```python
DOC = """# Docs for the dag
* 1
* 2
*bold* _italics_
"""
from datetime import datetime, timedelta
from airflow import DAG
from airflow.decorators import task
from datetime import timedelta
with DAG(
dag_id="retry_issue_dag",
start_date=datetime(2023, 10, 10),
catchup=False,
schedule="@once",
doc_md=DOC,
) as dag:
@task(retries=8, retry_delay=timedelta(seconds=1))
def retry_less_than_10():
raise Exception("fail")
@task(retries=40, retry_delay=timedelta(seconds=1))
def retry_more_than_10():
raise Exception("fail")
retry_less_than_10()
retry_more_than_10()
```
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-15T10:04:45Z | 2025-03-15T14:37:52Z | https://github.com/apache/airflow/issues/47808 | [
"kind:bug",
"area:Scheduler",
"area:core",
"needs-triage"
] | tirkarthi | 5 |
python-gino/gino | sqlalchemy | 738 | 'Config' object has no attribute 'DB_DSN' | * GINO version: 1.0.1
* Python version: 3.8
* asyncpg version:
* aiocontextvars version:
* PostgreSQL version: 12
### Description
When i use documentation of Gino Webserver Fastapi, i try reproduce db = Gino(....) , so all parameters on .env file pass, so
AttributeError: 'Config' object has no attribute 'DB_DSN'
my DB_DSN is DB_DSN=postgres+psycopg2://postgres@localhost:5432/fund_api , no have password
i tried with many variables options like:
'postgresql+psycopg2://postgres@localhost:5432/fund_api'
"postgresql+psycopg2://postgres@localhost:5432/fund_api"
"postgresql://postgres:@localhost:5432/fund_api"
"postgresql://postgres:@localhost:5432/fund_api"
### What I Did
```
File "./main.py", line 2, in <module>
from src.models import db
File "/home/scrimfx/Projetos/FundAPI/fundenor-api/src/models/__init__.py", line 6, in <module>
dsn=config.DB_DSN,
AttributeError: 'Config' object has no attribute 'DB_DSN'
```
| closed | 2020-11-26T12:25:57Z | 2020-11-26T16:11:56Z | https://github.com/python-gino/gino/issues/738 | [
"question"
] | ScrimForever | 2 |
google-research/bert | tensorflow | 492 | BERT accuracy reduced after providing custom training..The answer is also not correct | I have trained Google BERT with a custom training.
I have included the exact question and answer along with the context from the input document in the training file and trained BERT.
With new generated checkpoints (ckpt) I am still getting the same wrong answer as obtained before training. However it is observed the probability returned is reduced this time, in nbest_predictions.json. | open | 2019-03-11T12:53:15Z | 2019-03-11T12:53:15Z | https://github.com/google-research/bert/issues/492 | [] | shuvadibp | 0 |
dynaconf/dynaconf | flask | 997 | Add Python API documentation / Module reference to the documentation/website | **Is your feature request related to a problem? Please describe.**
As explained in #996
dynaconf 2.2.3 has a [Module reference](https://dynaconf.readthedocs.io/en/docs_223/reference/dynaconf.html#) where I can read see all the methods of dynaconf.LazySettings together with a description.
I don't see anything similar in https://dynaconf.com/ , @rochacbruno, confirmed that
> Hi, since we migrated to mkdocs there is **no API reference documented**, there is a mkdocs plugin that can generate that we just need to setup it.
**Describe the solution you'd like**
@rochacbruno and @pedro-psb mentioned that there is a mkdocs plugin called [mkdocstring](https://mkdocstrings.github.io/)
that can be used to generate that kind of api documentation .
So I guess this mkdocs plugin should be configured/setup/included in the pipeline that generates the website
| closed | 2023-09-05T13:55:15Z | 2024-02-27T10:16:11Z | https://github.com/dynaconf/dynaconf/issues/997 | [
"Not a Bug",
"RFC",
"Docs",
"good first issue"
] | ecerulm | 3 |
ivy-llc/ivy | pytorch | 28,292 | Fix Ivy Failing Test: numpy - shape.shape__rmul__ | closed | 2024-02-15T14:21:18Z | 2024-02-21T06:42:17Z | https://github.com/ivy-llc/ivy/issues/28292 | [
"Sub Task"
] | fnhirwa | 0 |
|
rthalley/dnspython | asyncio | 588 | any way to get it run for opennic ? | I am trying to do a query on alternate root named opennic.org.
Here is a simple code snippet I am trying:
```
import dns.resolver
my_resolver = dns.resolver.Resolver(configure=False)
my_resolver.nameservers = ['91.217.137.37', '176.126.70.119', '172.104.136.243']
r = dns.resolver.resolve('grep.geek', 'A')
print(r.response)
```
but I am getting :
```
Traceback (most recent call last):
File "main.py", line 5, in <module>
r = dns.resolver.resolve('grep.geek', 'A')
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/dns/resolver.py", line 1205, in resolve
return get_default_resolver().resolve(qname, rdtype, rdclass, tcp, source,
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/dns/resolver.py", line 1030, in resolve
(request, answer) = resolution.next_request()
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/dns/resolver.py", line 584, in next_request
raise NXDOMAIN(qnames=self.qnames_to_try,
dns.resolver.NXDOMAIN: The DNS query name does not exist: grep.geek.
```
I am not sure what I am doing wrong. If I add the given dns name servers in my system and visit the website `grep.geek` I am able to navigate it which means it is resolving via browser. But if I try doing same with dns.resolver I am getting above error.
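One thing that stands out in the snippet (offered as a guess, not an authoritative answer): `dns.resolver.resolve()` goes through the default system resolver, so the custom `my_resolver` with the OpenNIC nameservers is never actually used. Querying through the configured resolver would look like this:

```python
import dns.resolver

my_resolver = dns.resolver.Resolver(configure=False)
my_resolver.nameservers = ['91.217.137.37', '176.126.70.119', '172.104.136.243']

# Query via the custom resolver, not the module-level default resolver.
r = my_resolver.resolve('grep.geek', 'A')
print(r.response)
```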
Are alternate roots not supported at the moment?
Quick Note: Just to ensure that it is running on an environment where there are no specific dns are set I am trying this via repl.it | closed | 2020-09-28T18:52:30Z | 2020-09-28T19:34:26Z | https://github.com/rthalley/dnspython/issues/588 | [] | saurabhnemade | 2 |
scikit-multilearn/scikit-multilearn | scikit-learn | 73 | Separate directory structure for unit tests | I think we should proceed on changing our test structure into a flat/separate one. This means that all tests should be in a single directory (`./tests`) and (if I may add) mirrors the package directory that we have. __I will open up a Pull Request that resolves this issue.__
I'd rather have sibling directories inside the tests directory so that everything is organized categorically.
This separation is also [how scikit-learn structures their tests](https://github.com/scikit-learn/scikit-learn/tree/master/sklearn/tests). I would modify this a little bit and move it a level-higher, with the actual package as its sibling. This is the one recommended in various blog posts regarding the subject matter:
- [Structuring your project, Hitchhiker's Guide to Python](http://docs.python-guide.org/en/latest/writing/structure/#sample-repository)
- [Unit test tutorial](https://cgoldberg.github.io/python-unittest-tutorial/)
- [StackOverflow popular answer in structuring Python projects](https://stackoverflow.com/questions/193161/what-is-the-best-project-structure-for-a-python-application) | open | 2017-09-28T11:13:01Z | 2023-03-14T16:59:57Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/73 | [
"enhancement",
"help wanted"
] | ljvmiranda921 | 2 |
netbox-community/netbox | django | 17,741 | Tags logic in Config Context | ### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v4.1.3
### Python Version
3.10
### Steps to Reproduce
- Assign tags "foo" to Virtual Machine 1
- Assign tags "foo" and "bar" to Virtual Machine 2
- Create Config Context with tags "foo" and "bar"
### Expected Behavior
Config Context should be applied only to Virtual Machine 2
### Observed Behavior
Config Context is applied to Virtual Machine 1 and Virtual Machine 2.
Other attributes have "AND" logic, while tags have "OR" logic, tested on "self-hosted" and "demo" version. | closed | 2024-10-11T18:39:37Z | 2025-01-31T03:02:17Z | https://github.com/netbox-community/netbox/issues/17741 | [
"type: bug",
"netbox"
] | xcdr | 1 |
onnx/onnx | tensorflow | 6,352 | [Feature request] Better support for large models (>2GB) in extract_model | ### System information
1.16.2
### What is the problem that this feature solves?
Allows for extracting sub-models from a large model (>2GB). When using this function (both with the loaded model and the model path), we are forced to do 2 things:
* `infer_shapes` with the loaded model (in `Extractor` init). This function does not work with models > 2GB and thus will return an empty graph.
* in `extract_model`, we are forced to extract the sub-models **with** the weights/external data. This could potentially lead to very large extracted submodels (an `.onnx` file > 2GB), which will lead to failure when loading the submodels.
### Alternatives considered
If one seeks to use `extract_model`, there is no other solution besides editing the library code itself.
### Describe the feature
Pass in a parameter which allows to `load_external_data` in `extract_model`. Also alter how we init in the `Extractor` class.
```python
def extract_model(
input_path: str | os.PathLike,
output_path: str | os.PathLike,
input_names: list[str],
output_names: list[str],
check_model: bool = True,
load_external_data=False
) -> None:
e = Extractor(model, load_external_data)
```
```python
from onnx.shape_inference import infer_shapes_path, infer_shapes
class Extractor:
def __init__(self, model_path: str, load_external_data) -> None:
if load_external_data: # infer shape first, as loaded model + external data could be large
infer_shapes_path(model_path)
self.model = onnx.load(model_path, load_external_data)
else:
model = onnx.load(model_path, load_external_data)
self.model = shape_inference.infer_shapes(model)
self.graph = self.model.graph
self.wmap = self._build_name2obj_dict(self.graph.initializer)
self.vimap = self._build_name2obj_dict(self.graph.value_info)
```
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
shape_inference, model usage
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_ | open | 2024-09-09T08:58:13Z | 2024-10-23T03:36:51Z | https://github.com/onnx/onnx/issues/6352 | [
"topic: enhancement"
] | highly0 | 3 |
healthchecks/healthchecks | django | 692 | Forced notification endpoint | Would it be possible to add an endpoint called `/notify`, similar to `/log`, that also sends out a notification with the logged data? | closed | 2022-08-09T20:12:10Z | 2022-12-19T08:20:14Z | https://github.com/healthchecks/healthchecks/issues/692 | [] | facorazza | 1 |
huggingface/datasets | computer-vision | 7,139 | Use load_dataset to load imagenet-1K But find a empty dataset | ### Describe the bug
```python
def get_dataset(data_path, train_folder="train", val_folder="val"):
traindir = os.path.join(data_path, train_folder)
valdir = os.path.join(data_path, val_folder)
def transform_val_examples(examples):
transform = Compose([
Resize(256),
CenterCrop(224),
ToTensor(),
])
examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
return examples
def transform_train_examples(examples):
transform = Compose([
RandomResizedCrop(224),
RandomHorizontalFlip(),
ToTensor(),
])
examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
return examples
# @fengsicheng: This way is very slow for big dataset like ImageNet-1K (but can pass the network problem using local dataset)
# train_set = load_dataset("imagefolder", data_dir=traindir, num_proc=4)
# test_set = load_dataset("imagefolder", data_dir=valdir, num_proc=4)
train_set = load_dataset("imagenet-1K", split="train", trust_remote_code=True)
test_set = load_dataset("imagenet-1K", split="test", trust_remote_code=True)
print(train_set["label"])
train_set.set_transform(transform_train_examples)
test_set.set_transform(transform_val_examples)
return train_set, test_set
```
Above is the code, but the output of the print is a list of None:
<img width="952" alt="image" src="https://github.com/user-attachments/assets/c4e2fdd8-3b8f-481e-8f86-9bbeb49d79fb">
### Steps to reproduce the bug
1. just ran the code
2. see the print
### Expected behavior
I do not know how to fix this. Can anyone provide help or something? It is urgent for me.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | open | 2024-09-05T15:12:22Z | 2024-10-09T04:02:41Z | https://github.com/huggingface/datasets/issues/7139 | [] | fscdc | 2 |
huggingface/transformers | deep-learning | 36,667 | The parameter 'text' may be None as the comments says, there is a confuse. | The comments of method "\_\_call\_\_" say:
Main method to prepare for the model one or several sequences(s) and audio(s). This method forwards the `text`
and `kwargs` arguments to Qwen2TokenizerFast's [`~Qwen2TokenizerFast.__call__`] if `text` is not `None` to encode
the text. To prepare the audio(s), this method forwards the `audios` and `kwrags` arguments to
WhisperFeatureExtractor's [`~WhisperFeatureExtractor.__call__`] if `audios` is not `None`. Please refer to the doctsring
of the above two methods for more information."
but here "text" must be not None:
https://github.com/huggingface/transformers/blob/7652804d237fb8768f0f0b8129a05e4f0576114b/src/transformers/models/qwen2_audio/processing_qwen2_audio.py#L106 | open | 2025-03-12T12:43:41Z | 2025-03-17T17:15:50Z | https://github.com/huggingface/transformers/issues/36667 | [] | ralgond | 3 |