repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
kornia/kornia | computer-vision | 2,371 | parameter alpha in focal loss is not the same as that of paper | ### Describe the bug
The parameter alpha in focal loss is a scalar, so it cannot balance the different classes. I think alpha should be a tensor with n_classes elements, where every value is the weight of its class.
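For illustration, a per-class alpha could be applied roughly like the minimal sketch below (plain PyTorch, not kornia's actual API; `alpha` is a hypothetical `(n_classes,)` weight tensor):
```python
import torch
import torch.nn.functional as F

def focal_loss_per_class(logits, target, alpha, gamma=2.0):
    # logits: (N, C), target: (N,) int64, alpha: (n_classes,) per-class weights
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, target.unsqueeze(1)).squeeze(1)  # log p_t per sample
    pt = log_pt.exp()
    alpha_t = alpha.to(logits.device)[target]  # weight of each sample's true class
    return (-alpha_t * (1.0 - pt) ** gamma * log_pt).mean()
```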
### Reproduction steps
```bash
Please refer to the description
```
### Expected behavior
Please refer to the description
### Environment
```shell
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
```
### Additional context
_No response_ | closed | 2023-05-09T12:30:37Z | 2023-05-09T20:48:09Z | https://github.com/kornia/kornia/issues/2371 | [
"help wanted"
] | Xinchengzelin | 1 |
proplot-dev/proplot | data-visualization | 269 | How to rotate the latlabels/lonlabels? | <!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
[Description of the bug or feature.]
### Steps to reproduce
A "[Minimal, Complete and Verifiable Example](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports)" will make it much easier for maintainers to help you.
```python
# your code here
# we should be able to copy-paste this into python and exactly reproduce your bug
```
**Expected behavior**: [What you expected to happen]
**Actual behavior**: [What actually happened]
### Equivalent steps in matplotlib
Please make sure this bug is related to a specific proplot feature. If you're not sure, try to replicate it with the [native matplotlib API](https://matplotlib.org/3.1.1/api/index.html). Matplotlib bugs belong on the [matplotlib github page](https://github.com/matplotlib/matplotlib).
```python
# your code here, if applicable
```
### Proplot version
Paste the results of `import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)` here.
| closed | 2021-08-21T05:09:22Z | 2021-08-21T20:16:29Z | https://github.com/proplot-dev/proplot/issues/269 | [
"support"
] | TreeYu123 | 1 |
apify/crawlee-python | automation | 874 | Reconsider HttpClient interface | As of now, the Python `BaseHttpClient` looks like this:
https://github.com/apify/crawlee-python/blob/beac9fa0eb415caafc04cdaef2888e77fad915e0/src/crawlee/http_clients/_base.py#L55
It has two methods, `send_request` and `crawl`. This is the first iteration of decoupled HTTP clients.
Later on, we refactored the JS version to use this one:
https://github.com/apify/crawlee/blob/f912b8b06da2bc4f3f3db508cc39c936a5c87f23/packages/core/src/http_clients/base-http-client.ts#L179
It also has two methods, `sendRequest` and `stream`. Unlike the python version, the signatures of those methods match quite well. It is worth noting that the two serious attempts at implementing this interface (so far) both couldn't manage to implement `stream` correctly. Although we could probably live without it in the most common case, streaming is paramount for downloading files (potentially large ones), which is a use case that we want to support.
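For the sake of discussion, a unified shape could look something like this minimal sketch (type names are placeholders, not a concrete proposal):
```python
from abc import ABC, abstractmethod
from typing import AsyncIterator

class BaseHttpClient(ABC):
    """Sketch: one method for buffered responses, one for streamed bodies,
    with matching signatures in both the Python and JS versions."""

    @abstractmethod
    async def send_request(self, request: "HttpRequest") -> "HttpResponse":
        """Perform the request and buffer the whole response body."""

    @abstractmethod
    def stream(self, request: "HttpRequest") -> AsyncIterator[bytes]:
        """Perform the request and yield body chunks as they arrive,
        e.g. for downloading potentially large files."""
```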
We should simplify this interface and make it look the same in both versions. Any thoughts on how to achieve that? @vdusek @B4nan @barjin... and whoever else wants to chat :slightly_smiling_face: | open | 2025-01-06T15:50:52Z | 2025-01-16T14:00:08Z | https://github.com/apify/crawlee-python/issues/874 | [
"t-tooling",
"debt",
"solutioning"
] | janbuchar | 3 |
Nemo2011/bilibili-api | api | 249 | Can't a BV ID be used to scrape video comments? | **Python version:** 3.x.y
**Module version:** x.y.z
**Runtime environment:** Windows / Linux / MacOS
---
Can't a BV ID be used to scrape video comments?
| closed | 2023-03-29T15:31:03Z | 2023-04-28T12:33:45Z | https://github.com/Nemo2011/bilibili-api/issues/249 | [
"question"
] | Fluchw | 3 |
babysor/MockingBird | pytorch | 211 | 训练时出错:RuntimeError: Error(s) in loading state_dict for Tacotron: | Arguments:
run_id: mandarin
syn_dir: k:/mockingbird/datame/SV2TTS/synthesizer
models_dir: synthesizer/saved_models/
save_every: 1000
backup_every: 25000
log_every: 200
force_restart: False
hparams:
Checkpoint path: synthesizer\saved_models\mandarin\mandarin.pt
Loading training data from: k:\mockingbird\datame\SV2TTS\synthesizer\train.txt
Using model: Tacotron
Using device: cpu
Initialising Tacotron Model...
Trainable Parameters: 32.866M
Loading weights at synthesizer\saved_models\mandarin\mandarin.pt
Traceback (most recent call last):
File "synthesizer_train.py", line 37, in <module>
train(**vars(args))
File "K:\MockingBird\synthesizer\train.py", line 114, in train
model.load(weights_fpath, optimizer)
File "K:\MockingBird\synthesizer\models\tacotron.py", line 536, in load
self.load_state_dict(checkpoint["model_state"], strict=False)
File "f:\anaconda3\envs\mockingbird\lib\site-packages\torch\nn\modules\module.py", line 1482, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Tacotron:
size mismatch for encoder_proj.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([128, 1024]).
size mismatch for decoder.attn_rnn.weight_ih: copying a param with shape torch.Size([384, 768]) from checkpoint, the shape in current model is torch.Size([384, 1280]).
size mismatch for decoder.rnn_input.weight: copying a param with shape torch.Size([1024, 640]) from checkpoint, the shape in current model is torch.Size([1024, 1152]).
size mismatch for decoder.stop_proj.weight: copying a param with shape torch.Size([1, 1536]) from checkpoint, the shape in current model is torch.Size([1, 2048]).
I have already changed that line of characters in symbols to the old version, but I still get this error. I am using my own data here, laid out to mimic the aishell3 structure, and I have already run the pre.py preprocessing; the error appears right at the start of training. | open | 2021-11-11T17:14:11Z | 2021-11-12T02:54:05Z | https://github.com/babysor/MockingBird/issues/211 | [] | dsyrock | 2 |
SciTools/cartopy | matplotlib | 1,532 | Cannot install cartopy in Scientific Linux | ### Description
<!-- Please provide a general introduction to the issue/proposal. -->
Cartopy cannot be installed on Scientific Linux because the newest available version of `proj` is 4.8.0 and cartopy requires 4.9.0. I'm not sure why `setup.py` checks the system-level versions of `proj` and `geos` instead of the versions in the current virtualenv though...
<!--
If you are reporting a bug, attach the *entire* traceback from Python.
If you are proposing an enhancement/new feature, provide links to related articles, reference examples, etc.
If you are asking a question, please ask on StackOverflow and use the cartopy tag. All cartopy
questions on StackOverflow can be found at https://stackoverflow.com/questions/tagged/cartopy
-->
#### Code to reproduce
```
pip install cartopy
```
#### Traceback
```
Collecting cartopy
Using cached Cartopy-0.17.0.tar.gz (8.9 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
ERROR: Command errored out with exit status 1:
command: /home/jba/venv/gdal3/bin/python3 /home/jba/venv/gdal3/lib64/python3.6/site-packages/pip/_vendor/pep517/_in_process.py get_requires_for_build_wheel /tmp/tmp312hzzaa
cwd: /tmp/pip-install-u9agv4jo/cartopy
Complete output (1 lines):
Proj version 4.8.0 is installed, but cartopy requires at least version 4.9.0.
----------------------------------------
ERROR: Command errored out with exit status 1: /home/jba/venv/gdal3/bin/python3 /home/jba/venv/gdal3/lib64/python3.6/site-packages/pip/_vendor/pep517/_in_process.py get_requires_for_build_wheel /tmp/tmp312hzzaa Check the logs for full command output.
```
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
$ cat /proc/version
Linux version 3.10.0-1062.18.1.el7.x86_64 (mockbuild@sl7-uefisign.fnal.gov) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 SMP Tue Mar 17 10:44:42 CDT 2020
$ cat /etc/*-release
NAME="Scientific Linux"
VERSION="7.4 (Nitrogen)"
ID="scientific"
ID_LIKE="rhel centos fedora"
VERSION_ID="7.4"
PRETTY_NAME="Scientific Linux 7.4 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.4:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-devel@listserv.fnal.gov"
REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.4
REDHAT_SUPPORT_PRODUCT="Scientific Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.4"
Scientific Linux release 7.4 (Nitrogen)
Scientific Linux release 7.4 (Nitrogen)
Scientific Linux release 7.4 (Nitrogen)
### Cartopy version
0.17.0
### conda list
```
n/a
```
### pip list
```
Package Version Location
--------------------- ------- ----------------------
arrow 0.15.5
attrs 19.3.0
backcall 0.1.0
chardet 3.0.4
click 7.1.1
cloudpickle 1.3.0
colorama 0.4.3
coverage 5.0.3
cycler 0.10.0
Cython 0.29.15
dask 2.12.0
decorator 4.4.2
entrypoints 0.3
farmlib 1.5.9 /home/jba/work/farmlib
flake8 3.7.9
flake8-builtins 1.4.2
flake8-commas 2.0.0
flake8-comprehensions 3.2.2
flake8-docstrings 1.5.0
flake8-logging-format 0.6.0
flake8-polyfill 1.0.2
flake8-print 3.1.4
flake8-string-format 0.3.0
future 0.18.2
fuzzywuzzy 0.17.0
GDAL 3.0.4
importlib-metadata 1.5.0
ipython 7.1.1
ipython-genutils 0.2.0
jbafarm 1.7.3 /home/jba/work/farmmap
jedi 0.16.0
jsonschema 3.2.0
kiwisolver 1.1.0
litmus 1.3.4
matplotlib 3.2.0
mccabe 0.6.1
mock 4.0.2
networkx 2.4
nose 1.3.7
nosexcover 1.0.11
numpy 1.18.1
packaging 20.3
pandas 0.25.0
parso 0.6.2
pep8-naming 0.9.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 7.0.0
pip 20.0.2
proj 0.1.0
prompt-toolkit 2.0.10
psycopg2-binary 2.8.4
ptyprocess 0.6.0
pycodestyle 2.5.0
pydocstyle 3.0.0
pyflakes 2.1.1
Pygments 2.6.1
pyparsing 2.4.6
pyproj 2.1.3
pyrsistent 0.15.7
python-dateutil 2.8.1
python-Levenshtein 0.12.0
pytz 2019.3
PyWavelets 1.0.3
Rtree 0.9.4
scikit-image 0.14.5
scipy 1.2.1
setuptools 39.2.0
Shapely 1.7.0
simplejson 3.17.0
six 1.14.0
snowballstemmer 2.0.0
toolz 0.10.0
tqdm 4.43.0
traitlets 4.3.3
wcwidth 0.1.8
wheel 0.34.2
zipp 3.1.0
```
</details>
| closed | 2020-04-23T10:40:13Z | 2020-04-30T22:47:49Z | https://github.com/SciTools/cartopy/issues/1532 | [] | jontwo | 9 |
keras-team/keras | machine-learning | 20,081 | Loading up Json_files built and trained in Keras 2 for Keras 3 | Using Keras 3, I am trying to load a model that was built and trained with the Keras 2 API and is stored in .json with weights stored in .h5. The model file is the following: [cnn_model.json](https://github.com/user-attachments/files/16462021/cnn_model.json). Since model_from_json does not exist in Keras 3, I rewrote the function from the Keras 2 API so that I can load the .json file. With Keras 3 (using the torch backend), I am trying to load the model and the weights with the following code:
```
import os
import json

# NOTE: the backend must be set *before* keras is imported for it to take effect.
os.environ["KERAS_BACKEND"] = "torch"

import keras
from keras.saving import deserialize_keras_object


def model_from_json(json_string, custom_objects=None):
    """Parses a JSON model configuration string and returns a model instance.

    Args:
        json_string: JSON string encoding a model configuration.
        custom_objects: Optional dictionary mapping names
            (strings) to custom classes or functions to be
            considered during deserialization.

    Returns:
        A Keras model instance (uncompiled).
    """
    model_config = json.loads(json_string)
    return deserialize_keras_object(model_config, custom_objects=custom_objects)


def model_torch():
    model_name = 'cnn_model'  # model file name
    model_file = model_name + '.json'
    with open(model_file, 'r') as json_file:
        print('USING MODEL:' + model_file)
        loaded_model_json = json_file.read()
        loaded_model = model_from_json(loaded_model_json)
        loaded_model.load_weights(model_name + '.h5')
        loaded_model.compile('sgd', 'mse')


if __name__ == "__main__":
    model_torch()
```
However, when I run this code, I obtain the error shown below. With this, I have the following three questions:
1. How does one possibly fix this error given that the model I want to load (in Keras 3) was built and trained in tensorflow-keras 2?
2. Is it better to rebuild the model in Keras using the load_model() function in Keras 3, and if so, how can you translate the weights from the .h5 file that was created in tensorflow-keras 2 to keras 3?
3. To rebuild it, how should one translate the JSON dictionary into actual code?
Error I obtain:
`
TypeError: Could not locate class 'Sequential'. Make sure custom classes are decorated with `@keras.saving.register_keras_serializable()`. Full object config: {'class_name': 'Sequential', 'config': {'name': 'sequential', 'layers': [{'class_name': 'Conv2D', 'config': {'name': 'conv2d_20', 'trainable': True, 'batch_input_shape': [None, 50, 50, 1], 'dtype': 'float32', 'filters': 32, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_13', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d_21', 'trainable': True, 'dtype': 'float32', 'filters': 32, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_14', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_10', 'trainable': True, 'dtype': 'float32', 'pool_size': [2, 2], 'padding': 'valid', 'strides': [2, 2], 'data_format': 'channels_last'}}, {'class_name': 'Dropout', 'config': {'name': 'dropout_17', 'trainable': True, 'dtype': 'float32', 'rate': 0.25, 'noise_shape': None, 'seed': None}}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d_22', 'trainable': True, 'dtype': 'float32', 'filters': 64, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'same', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_15', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d_23', 'trainable': True, 'dtype': 'float32', 'filters': 64, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 
'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_16', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_11', 'trainable': True, 'dtype': 'float32', 'pool_size': [2, 2], 'padding': 'valid', 'strides': [2, 2], 'data_format': 'channels_last'}}, {'class_name': 'Dropout', 'config': {'name': 'dropout_18', 'trainable': True, 'dtype': 'float32', 'rate': 0.25, 'noise_shape': None, 'seed': None}}, {'class_name': 'Flatten', 'config': {'name': 'flatten_8', 'trainable': True, 'dtype': 'float32', 'data_format': 'channels_last'}}, {'class_name': 'Dense', 'config': {'name': 'dense_15', 'trainable': True, 'dtype': 'float32', 'units': 512, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_17', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'Dropout', 'config': {'name': 'dropout_19', 'trainable': True, 'dtype': 'float32', 'rate': 0.5, 'noise_shape': None, 'seed': None}}, {'class_name': 'Dense', 'config': {'name': 'dense_16', 'trainable': True, 'dtype': 'float32', 'units': 2, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_18', 'trainable': True, 'dtype': 'float32', 'activation': 'softmax'}}]}, 'keras_version': '2.2.4-tf', 'backend': 'tensorflow'}
`
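For question 3, my own hand-translation attempt of the JSON config into Keras 3 code looks roughly like the sketch below (the layer stack is read off the config in the error above; whether Keras 3's `load_weights` will accept the old `.h5` file for this rebuilt model is exactly the part I'm unsure about):
```python
import keras
from keras import layers

# Hand-translated from the legacy JSON config above (sketch, unverified).
model = keras.Sequential([
    keras.Input(shape=(50, 50, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.Activation("relu"),
    layers.Conv2D(32, (3, 3)),
    layers.Activation("relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), padding="same"),
    layers.Activation("relu"),
    layers.Conv2D(64, (3, 3)),
    layers.Activation("relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(512),
    layers.Activation("relu"),
    layers.Dropout(0.5),
    layers.Dense(2),
    layers.Activation("softmax"),
])
model.load_weights("cnn_model.h5")  # may fail across major Keras versions
```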
| closed | 2024-08-01T21:36:25Z | 2024-09-07T19:33:52Z | https://github.com/keras-team/keras/issues/20081 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | manuelpaeza | 7 |
matterport/Mask_RCNN | tensorflow | 2,334 | Instance Segmentation vs Object Detection | I would like to find out whether it is better to use instance segmentation or object detection to classify vehicles and count them, in the case of traffic congestion.
From my experience, traffic congestion has too much occlusion for bounding boxes to be accurate; the model may classify a car as a truck, and a truck as a car. I have a relatively large dataset, approximately 7000 - 10000 images, so it may be better to just use object detection, as it will be easier to manage as the dataset gets larger.
Image example:

If anyone can give some input, that would be greatly appreciated.
Thanks | open | 2020-08-24T04:01:36Z | 2020-09-17T14:03:20Z | https://github.com/matterport/Mask_RCNN/issues/2334 | [] | OAT7963 | 1 |
python-restx/flask-restx | flask | 94 | Router cannot find endpoint with id parameter | ### Issue
When trying to hit an endpoint with an integer variable in the URL, flask-restx responds with
```json
{
  "message": "The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again. You have requested this URI [/xman/statements/1] but did you mean /xman/statements/<int:statement_id> or /xman/statements ?"
}
```
### **Code**
In my code, there is a namespace defined as such:
`api = Namespace("xman", description="External Managers related operations", path="/xman")`
and the resource is decorated as such:
```python
@api.route("/statements/<int:statement_id>")
class StatementDetailsEndpoint(Resource):
    @inject.autoparams()
    def __init__(self, logging_service: LoggingService, data_service: EMDataService):
        super().__init__()
        self.logging_service = logging_service
        self.data_service = data_service

    def get(self, statement_id: int):
        ...  # get logic
```
When **not** passing in a parameter into the route, everything works correctly. For example:
```python
@api.route("/statements")
class StatementListEndpoint(Resource):
    @inject.autoparams()
    def __init__(self, logging_service: LoggingService, data_service: EMDataService):
        super().__init__()
        self.logging_service = logging_service
        self.data_service = data_service
        self.parser = reqparse.RequestParser()

    def get(self):
        ...  # code
```
### **Expected Behavior**
Expect a JSON response from endpoint
### **Actual Behavior**
Endpoint is not found
### **Environment**
- Python 3.6.7
- Flask 1.0.2
- Flask-RESTX 0.1.1
- Other installed Flask extensions
### **Additional Context**
The endpoint does appear in the Swagger documentation when the application is running and the same issue occurs when inputting a valid integer into the parameter box

| open | 2020-03-19T22:36:25Z | 2020-05-12T20:51:39Z | https://github.com/python-restx/flask-restx/issues/94 | [
"bug"
] | viralogic | 2 |
lukasmasuch/streamlit-pydantic | streamlit | 38 | Code for simple form throws PydanticImportError | <!--
Thanks for reporting a bug 🙌 ❤️
Before opening a new issue, please make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. Also, be sure to check our documentation first.
-->
**Code for Simple Form in README.md cannot be run**
I was starting out with `streamlit-pydantic` and tried to run the code for the simple form given in the `README.md`, but I encountered an import error while running the simple example.
```python
2023-07-21 20:33:40.546 Uncaught app exception
Traceback (most recent call last):
File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/.venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/simple_form.py", line 4, in <module>
import streamlit_pydantic as sp
File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/.venv/lib/python3.10/site-packages/streamlit_pydantic/__init__.py", line 9, in <module>
from .settings import StreamlitSettings
File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/.venv/lib/python3.10/site-packages/streamlit_pydantic/settings.py", line 4, in <module>
from pydantic import BaseSettings
File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/.venv/lib/python3.10/site-packages/pydantic/__init__.py", line 207, in __getattr__
return _getattr_migration(attr_name)
File "/Users/ayangangopadhyay/Documents/Projects/Expenditure_Dashboard/.venv/lib/python3.10/site-packages/pydantic/_migration.py", line 288, in wrapper
raise PydanticImportError(
pydantic.errors.PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.0.3/migration/#basesettings-has-moved-to-pydantic-settings for more details.
For further information visit https://errors.pydantic.dev/2.0.3/u/import-error
```
The code I am trying to run is exactly the one given for the simple form -
```python
import streamlit as st
from pydantic import BaseModel
import streamlit_pydantic as sp
class ExampleModel(BaseModel):
    some_text: str
    some_number: int
    some_boolean: bool


data = sp.pydantic_form(key="my_form", model=ExampleModel)
if data:
    st.json(data.json())
```
**Expected behaviour:**
Rendering a simple form with `streamlit-pydantic`.
**Steps to reproduce the issue:**
1. Create a `.venv`
2. Install required dependencies
3. Create the `simple_form.py` file using the above code or copying it from the `README.md`
4. `streamlit run simple_form.py`
**Technical details:**
- Host Machine OS (Windows/Linux/Mac): Mac
- Browser (Chrome/Firefox/Safari): Arc/Mozilla
- Python Version: 3.10.8
- streamlit-pydantic Version: 0.6.0
- streamlit: 1.24.1
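If it helps: my reading of the traceback is that `streamlit_pydantic/settings.py` still does `from pydantic import BaseSettings`, which pydantic 2 removed, so I would guess (untested) that pinning pydantic below 2 in the venv (`pip install "pydantic<2"`) sidesteps the crash until the library migrates to `pydantic-settings`.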
Please let me know if any further details are required, and I will be happy to provide them. Thanks! | open | 2023-07-21T15:15:43Z | 2024-11-13T16:49:26Z | https://github.com/lukasmasuch/streamlit-pydantic/issues/38 | [
"type:bug"
] | SnoozingSimian | 6 |
ultralytics/ultralytics | python | 18,871 | Does tracking mode support NMS threshold? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I'm currently using YOLOv10 to track some objects, and there are a lot of cases where two bounding boxes (of the same class) have a high IoU. I tried setting the NMS threshold (the "iou" parameter) of the tracker very low, but it doesn't change anything... I also tried setting a high NMS threshold (expecting a lot of overlapping BBs), but no matter what value I set, the predictions/tracking look the same.
I searched for the parameters of the YOLOv10 tracker in the Ultralytics Docs and on the Ultralytics GitHub but couldn't find anything about an NMS threshold for the tracker. Is it implemented? Is the parameter named "iou", as in predict mode?
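For reference, the kind of call I'm describing looks roughly like this (a sketch; I'm assuming the predict-style `iou`/`conf` arguments are meant to pass through to `track`, and the weights/video names are placeholders):
```python
from ultralytics import YOLO

model = YOLO("yolov10n.pt")  # assumed weights file
# Lowering `iou` should make NMS merge overlapping boxes more aggressively,
# but the tracked output looks the same for any value I try.
results = model.track("traffic.mp4", iou=0.3, conf=0.25, tracker="bytetrack.yaml")
```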
Can someone help me in this regard? Thanks!
### Additional
_No response_ | closed | 2025-01-24T20:48:06Z | 2025-01-26T19:08:07Z | https://github.com/ultralytics/ultralytics/issues/18871 | [
"question",
"track"
] | argo-gabriel | 5 |
holoviz/panel | plotly | 7,805 | Card layouts break and overlap when in a container of a constrained size and expanded | <details>
<summary>Software Version Info</summary>
```plaintext
panel == 1.6.1
```
</details>
#### Description of expected behavior and the observed behavior
I expect the cards to respect the overflow property of the container they are in and not overlap when expanded.
**Example 1 overflow: auto in column container**
https://github.com/user-attachments/assets/78a749ae-f82b-4836-9b04-85464a60210a
**Example 2 no overflow specified**
https://github.com/user-attachments/assets/d9d82ab0-a5c3-43f8-9925-07e996223b30
#### Complete, minimal, self-contained example code that reproduces the issue
**Example 1 overflow: auto in column container**
```python
import panel as pn
card1 = pn.layout.Card(pn.pane.Markdown("""
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
"""), title="Card 1")
card2 = pn.layout.Card(pn.pane.Markdown("""
In a world where technology and nature coexist,
the balance between innovation and preservation becomes crucial.
As we advance into the future, we must remember the lessons of the past,
embracing sustainable practices that honor our planet.
Together, we can forge a path that respects both progress and the environment,
ensuring a brighter tomorrow for generations to come.
"""), title="Card 2")
pn.Column(card1, card2, height=200, styles={'overflow': 'auto'}).servable()
```
**Example 2 no overflow specified**
```python
import panel as pn
card1 = pn.layout.Card(pn.pane.Markdown("""
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
"""), title="Card 1")
card2 = pn.layout.Card(pn.pane.Markdown("""
In a world where technology and nature coexist,
the balance between innovation and preservation becomes crucial.
As we advance into the future, we must remember the lessons of the past,
embracing sustainable practices that honor our planet.
Together, we can forge a path that respects both progress and the environment,
ensuring a brighter tomorrow for generations to come.
"""), title="Card 2")
pn.Column(card1, card2, height=200).servable()
```
I think this stems from the recalculation of this style, but I'm not quite sure how to get around it:
 | open | 2025-03-24T19:46:16Z | 2025-03-24T19:57:43Z | https://github.com/holoviz/panel/issues/7805 | [] | DmitriyLeybel | 2 |
miLibris/flask-rest-jsonapi | sqlalchemy | 104 | bug: "include" returns deleted relationships also | I have a one to many relationship between Track and Session i.e. a Track can have multiple associated sessions but a session has only one associated Track.
**TrackSchema**
```Python
sessions = Relationship(attribute='sessions',
                        self_view='v1.track_sessions',
                        self_view_kwargs={'id': '<id>'},
                        related_view='v1.session_list',
                        related_view_kwargs={'track_id': '<id>'},
                        schema='SessionSchema',
                        many=True,
                        type_='session')
```
**Track model**
```python
sessions = db.relationship('Session', backref='track')
```
**SessionSchema**
```python
track = Relationship(attribute='track',
                     self_view='v1.session_track',
                     self_view_kwargs={'id': '<id>'},
                     related_view='v1.track_detail',
                     related_view_kwargs={'session_id': '<id>'},
                     schema='TrackSchema',
                     type_='track')
```
**Session Model**
```python
track_id = db.Column(db.Integer, db.ForeignKey('tracks.id', ondelete='CASCADE'))
```
When I try to include the sessions under a track using ```tracks/{{track_id}}?include=sessions```, it also returns the sessions which were deleted. This seems to be a bug in the library. | closed | 2018-06-11T02:17:07Z | 2018-06-11T03:06:56Z | https://github.com/miLibris/flask-rest-jsonapi/issues/104 | [] | dr0pdb | 2 |
huggingface/datasets | computer-vision | 7,425 | load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") TypeError: 'NoneType' object is not callable | ### Describe the bug
from datasets import load_dataset
lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2")
or
configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True)
both error:
Traceback (most recent call last):
File "", line 1, in
File "/workspace/miniconda/envs/grpo/lib/python3.10/site-packages/datasets/load.py", line 2131, in load_dataset
builder_instance = load_dataset_builder(
File "/workspace/miniconda/envs/grpo/lib/python3.10/site-packages/datasets/load.py", line 1888, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
TypeError: 'NoneType' object is not callable
### Steps to reproduce the bug
from datasets import get_dataset_config_names
configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True)
OR
lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2")
### Expected behavior
load datasets livecodebench/code_generation_lite
### Environment info
import datasets
version '3.3.2' | open | 2025-02-27T07:36:02Z | 2025-03-24T05:57:06Z | https://github.com/huggingface/datasets/issues/7425 | [] | dshwei | 9 |
serengil/deepface | machine-learning | 663 | 'deepface.commons.functions' has no attribute 'preprocess_face' | I'm trying to call DeepFace.stream() (library 0.0.78 installed from pip)
but get an error message
```
AttributeError: module 'deepface.commons.functions' has no attribute 'preprocess_face'
``` | closed | 2023-02-07T15:29:39Z | 2023-02-07T15:31:00Z | https://github.com/serengil/deepface/issues/663 | [
"bug"
] | noonv | 1 |
joerick/pyinstrument | django | 352 | Support multithreaded profiling | Thanks for pyintrument - it's incredibly useful
I needed to trace a multithreaded Python app and examine the relationship between threads. Obviously, multithreading in Python can be a little interesting in some cases, but in this particular case it works well.
I have extended a fork of pyinstrument to support showing all child threads spawned from the one where profiling starts. I get nice results with all threads separated, which has been hugely helpful.
I'll file a PR and reference this issue. It's still a bit of a WIP, but i'd be curious to see if it looks reasonable to you. | open | 2024-12-04T23:07:21Z | 2025-03-15T15:43:54Z | https://github.com/joerick/pyinstrument/issues/352 | [] | georgeharker | 5 |
jmcnamara/XlsxWriter | pandas | 614 | MIME Type of the generated xlsx file | Title: Issue with MIME Type of the generated xlsx file
Hello,
I am using XlsxWriter to generate excel files (obviously), no problem on the generation part but when I check the MIME type of the generated file, it's not what it should be.
I am using Python version 3.7 and XlsxWriter 1.1.5 and Excel version 16.23 (Excel for Mac).
Here is the code to demonstrates the problem:
```python
from xlsxwriter import Workbook
wb = Workbook("created_with_xlsxwriter.xlsx")
ws = wb.add_worksheet()
ws.write(0, 0, "Hello")
wb.close()
```
And when I check the MIME type of the file with the following command:
```
> file --mime-type -b created_with_xlsxwriter.xlsx
application/zip
```
If I do the same command on a file created with excel:
```
> file --mime-type -b created_with_excel.xlsx
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
```
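(For context — my understanding, which may be off: an .xlsx file is just a zip container, so `file` falls back to application/zip when its magic heuristics don't recognize the OOXML layout inside. The members are visible with a plain zip listing:)
```python
# Peek at the container: both files should list OOXML members like
# [Content_Types].xml, even though `file` classifies them differently.
import zipfile
print(zipfile.ZipFile("created_with_xlsxwriter.xlsx").namelist()[:3])
```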
The strangest thing is that if I open the generated file in Excel, press ```command + s``` on my Mac, and run the above command again, the MIME type becomes the correct one.
The same happens if I open the file with a PHP xlsx library and save it again. | closed | 2019-03-29T01:30:36Z | 2019-04-07T12:18:35Z | https://github.com/jmcnamara/XlsxWriter/issues/614 | [
"bug",
"ready to close",
"short term"
] | alexandra-picot | 4 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,752 | How patch chromedriver.exe with your package? | Hello, could you please guide me on how I can patch the chromedriver.exe file using your package and use that Chrome driver file in a project that is not in Python? Thank you. | open | 2024-02-17T22:11:51Z | 2024-02-24T04:40:41Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1752 | [] | arshambh | 1 |
recommenders-team/recommenders | machine-learning | 1,848 | [BUG] xdeepfm error in AzureML test | ### Description
<!--- Describe your issue/bug/request in detail -->
```
@pytest.mark.gpu
@pytest.mark.notebooks
@pytest.mark.integration
@pytest.mark.parametrize(
    "syn_epochs, criteo_epochs, expected_values, seed",
    [
        (
            15,
            10,
            {
                "res_syn": {"auc": 0.9716, "logloss": 0.699},
                "res_real": {"auc": 0.749, "logloss": 0.4926},
            },
            42,
        )
    ],
)
def test_xdeepfm_integration(
    notebooks,
    output_notebook,
    kernel_name,
    syn_epochs,
    criteo_epochs,
    expected_values,
    seed,
):
    notebook_path = notebooks["xdeepfm_quickstart"]
    pm.execute_notebook(
        notebook_path,
        output_notebook,
        kernel_name=kernel_name,
        parameters=dict(
            EPOCHS_FOR_SYNTHETIC_RUN=syn_epochs,
            EPOCHS_FOR_CRITEO_RUN=criteo_epochs,
            BATCH_SIZE_SYNTHETIC=1024,
            BATCH_SIZE_CRITEO=1024,
            RANDOM_SEED=seed,
        ),
    )
    results = sb.read_notebook(output_notebook).scraps.dataframe.set_index("name")[
        "data"
    ]
    for key, value in expected_values.items():
>       assert results[key]["auc"] == pytest.approx(value["auc"], rel=TOL, abs=ABS_TOL)
E       assert 0.5131 == 0.9716 ± 9.7e-02
E       comparison failed
E       Obtained: 0.5131
E       Expected: 0.9716 ± 9.7e-02
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
See https://github.com/microsoft/recommenders/actions/runs/3459763061/jobs/5775521889
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
| closed | 2022-11-16T09:42:24Z | 2022-11-16T12:07:53Z | https://github.com/recommenders-team/recommenders/issues/1848 | [
"bug"
] | miguelgfierro | 1 |
ploomber/ploomber | jupyter | 1,057 | Populate Slack Welcome message | Context:
Currently, we will send the user a welcome message when user joins the Ploomber Slack Workspace.
We may consider to provide the list of our current links.
Example - Ideal After:
<img width="768" alt="Screenshot 2023-01-02 at 9 26 32 PM" src="https://user-images.githubusercontent.com/9766828/210292932-b20a7a26-8591-4dd0-b1ce-95ff5fa4efd8.png">
Example - Currently Implementation:
<img width="901" alt="Screenshot 2023-01-02 at 9 30 49 PM" src="https://user-images.githubusercontent.com/9766828/210293134-f2b0e0d1-1948-4e2d-b494-fdca8440642a.png">
Action Item:
- [ ] Discuss if providing those links will help users to find the resource easier
- What to include in the list? Currently we can have Ploomber.io, Github link, Doc link, Blog link
- [ ] Modify the bot message with the newly added section | closed | 2023-01-03T02:31:02Z | 2023-05-23T18:44:04Z | https://github.com/ploomber/ploomber/issues/1057 | [] | jinniw43805 | 1 |
scrapy/scrapy | web-scraping | 6,012 | Unable to install: requirements.txt missing | ### Description
Unable to install on Windows 10
### Steps to Reproduce
pip install Scrappy
**Expected behavior:** [What you expect to happen]
No errors
**Actual behavior:** [What actually happens]
Collecting Scrappy
Using cached Scrappy-0.3.0.alpha.4.tar.gz (17 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting guessit (from Scrappy)
Using cached guessit-3.7.1-py3-none-any.whl (170 kB)
Collecting tvdb-api (from Scrappy)
Using cached tvdb_api-3.1.0.tar.gz (23 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
INFO: pip is looking at multiple versions of scrappy to determine which version is compatible with other requirements. This could take a while.
Collecting Scrappy
Using cached Scrappy-0.3.0.alpha.3.tar.gz (16 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Using cached Scrappy-0.3.0.alpha.2.tar.gz (16 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Using cached Scrappy-0.3.0.alpha.tar.gz (16 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Using cached Scrappy-0.2.10.beta.14.tar.gz (16 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Using cached Scrappy-0.2.10.beta.13.tar.gz (15 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Using cached Scrappy-0.2.10.beta.12.tar.gz (15 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Using cached Scrappy-0.2.10.beta.11.tar.gz (15 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [17 lines of output]
Traceback (most recent call last):
File "C:\Users\test\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\test\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\test\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\test\AppData\Local\Temp\pip-build-env-8hm_g8ev\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\test\AppData\Local\Temp\pip-build-env-8hm_g8ev\overlay\Lib\site-packages\setuptools\build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "C:\Users\test\AppData\Local\Temp\pip-build-env-8hm_g8ev\overlay\Lib\site-packages\setuptools\build_meta.py", line 487, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\test\AppData\Local\Temp\pip-build-env-8hm_g8ev\overlay\Lib\site-packages\setuptools\build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 4, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
**Reproduces how often:** [What percentage of the time does it reproduce?]
| closed | 2023-08-13T03:42:54Z | 2023-08-13T06:22:03Z | https://github.com/scrapy/scrapy/issues/6012 | [] | stccorp | 2 |
axnsan12/drf-yasg | rest-api | 471 | Incorrect model schema generation | Hi, I made a dynamic model serializer, `UserCouponSerializer`, the fields of which vary according to the `fields` argument passed when it is initialized. If `fields` is None, it includes all fields of its model. The code is below.
```
class DynamicFieldsModelSerializer(serializers.ModelSerializer):
    def __init__(self, *args, **kwargs):
        fields = kwargs.pop('fields', None)
        super(DynamicFieldsModelSerializer, self).__init__(*args, **kwargs)
        if fields is not None:
            allowed = set(fields)
            existing = set(self.fields)
            for field_name in existing - allowed:
                self.fields.pop(field_name)
```
```
class UserCouponSerializer(DynamicFieldsModelSerializer):
    id = serializers.CharField(max_length=20, read_only=True)

    class Meta:
        model = UserCoupon
        fields = '__all__'
```
After that, I made another serializer, `UserCouponListSerializer`, which uses `UserCouponSerializer` for its own fields (active_coupon_list, inactive_coupon_list). Those two fields only need a subset of the fields from `UserCouponSerializer`, so I specified the `fields` argument as seen below.
```
class UserCouponListSerializer(serializers.Serializer):
    active_coupon_list = serializers.ListField(
        child=UserCouponSerializer(
            fields=['affiliate', 'goods_name', 'period_end', 'status', 'id', 'pay_id']
        )
    )
    inactive_coupon_list = serializers.ListField(
        child=UserCouponSerializer(
            fields=['affiliate', 'goods_name', 'period_end', 'status', 'id', 'pay_id']
        )
    )
```
However, the problem is that when I render the API documentation, everything generated from `UserCouponSerializer` includes only the fields specified in `UserCouponListSerializer`, that is, 'affiliate', 'goods_name', 'period_end', 'status', 'id', and 'pay_id'.

I expected all fields of `UserCoupon` to be rendered, as I defined in `UserCouponSerializer`.
Can I know the cause and get some help with this?
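If it helps: my guess (unverified) is that drf-yasg generates one named schema definition per serializer and reuses it, so the first dynamic instance it sees wins. Forcing the schema to be inlined might sidestep that, e.g.:
```python
class UserCouponSerializer(DynamicFieldsModelSerializer):
    id = serializers.CharField(max_length=20, read_only=True)

    class Meta:
        model = UserCoupon
        fields = '__all__'
        ref_name = None  # ask drf-yasg to inline the schema instead of reusing a named definition
```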
| open | 2019-10-08T07:30:45Z | 2025-03-07T12:16:29Z | https://github.com/axnsan12/drf-yasg/issues/471 | [
"triage"
] | zzinny | 0 |
laughingman7743/PyAthena | sqlalchemy | 329 | Redesign SQLAlchemy dialect layout | https://github.com/sqlalchemy/sqlalchemy/blob/main/README.dialects.rst | closed | 2022-06-06T13:28:01Z | 2023-05-04T14:06:43Z | https://github.com/laughingman7743/PyAthena/issues/329 | [] | laughingman7743 | 0 |
amidaware/tacticalrmm | django | 1,524 | Django Admin not loading css | **Server Info (please complete the following information):**
- OS: Debian 11
- Browser: Chrome
- RMM Version (as shown in top left of web UI): v0.15.12
**Installation Method:**
- [X] Standard
- [ ] Docker
**Describe the bug**
The Django Admin page doesn't load assets (css and js)
**To Reproduce**
Steps to reproduce the behavior:
1. Enable Django Admin Interface
2. Restart rmm
3. Go to Django Admin page
**Expected behavior**
The Django Admin Interface loading with the css and js.
**Screenshots**

| closed | 2023-05-31T18:06:32Z | 2023-06-02T23:24:51Z | https://github.com/amidaware/tacticalrmm/issues/1524 | [] | myde2001 | 2 |
gradio-app/gradio | machine-learning | 10,587 | Gradio Block() can't detect imported library like numpy in jupyter notebook | ### Describe the bug
The exact issue described here is still valid, but I cannot reopen that ticket: https://github.com/gradio-app/gradio/issues/3625
Gradio fails to pull imports on reload
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
%%blocks
# Anything under gr.NO_RELOAD won't be re-run when the block is reloaded (afaik)
if gr.NO_RELOAD:
    import numpy as np
    from transformers import pipeline

    transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")

def transcribe(stream, new_chunk):
    sr, y = new_chunk

    # Convert to mono if stereo
    if y.ndim > 1:
        y = y.mean(axis=1)

    y = y.astype(np.float32)
    y /= np.max(np.abs(y))

    if stream is not None:
        stream = np.concatenate([stream, y])
    else:
        stream = y
    return stream, transcriber({"sampling_rate": sr, "raw": stream})["text"]

waveform_options = gr.WaveformOptions(
    waveform_color="#01C6FF",
    waveform_progress_color="#0066B4",
    skip_length=2,
    show_controls=False,
)

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            state = gr.State()
            audio = gr.Audio(
                sources=["microphone", "upload"],
                show_download_button=True,
                waveform_options=waveform_options,
                streaming=True,
            )
            with gr.Row():
                clear_btn = gr.ClearButton()
                submit_btn = gr.Button("Submit", variant="primary")
        output = gr.Textbox(label="Output")

    submit_btn.click(
        fn=transcribe,
        inputs=[state, audio],
        outputs=[state, output],
        api_name="transcribe")

    gr.on(
        triggers=[audio.stream],
        fn=transcribe,
        inputs=[state, audio],
        outputs=[state, output],
    )
```
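For what it's worth, I'd expect (untested) that moving the cheap import out of the guard avoids the `NameError`, since only the guarded block is skipped on re-runs:
```python
import numpy as np  # re-executed on every re-run, so `np` stays defined

if gr.NO_RELOAD:
    # keep only the expensive model load guarded
    from transformers import pipeline
    transcriber = pipeline("automatic-speech-recognition", model="openai/whisper-base.en")
```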
### Screenshot
_No response_
### Logs
```shell
Traceback (most recent call last):
File "/opt/conda/lib/python3.12/site-packages/gradio/queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/gradio/blocks.py", line 2098, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/gradio/blocks.py", line 1645, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 962, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/gradio/utils.py", line 883, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "<string>", line 16, in transcribe
NameError: name 'np' is not defined
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.16.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.8
ffmpy: 0.5.0
gradio-client==1.7.0 is not installed.
httpx: 0.28.1
huggingface-hub: 0.28.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.9.0
httpx: 0.28.1
huggingface-hub: 0.28.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
Blocking usage of gradio | open | 2025-02-13T19:09:40Z | 2025-02-14T23:07:43Z | https://github.com/gradio-app/gradio/issues/10587 | [
"bug",
"docs/website"
] | kelvinhammond | 4 |
BeanieODM/beanie | pydantic | 892 | [BUG] Beanie projection and Pydantic Schema do not play well together | **Describe the bug**
Beanie projections expect an "_id" field, but Pydantic schemas expect "id". This makes it impossible to use the same schema and forces duplicated code (unless I'm missing the proper method to do it).
**To Reproduce**
```python
from motor.motor_asyncio import AsyncIOMotorClient
from pydantic import BaseModel, Field
from beanie import Document, init_beanie, PydanticObjectId
class Author(Document, BaseModel):
    name: str


class AuthorRead(BaseModel):
    id: PydanticObjectId = Field(alias="id")
    name: str


class AuthorProjection(BaseModel):
    # note the underscore
    id: PydanticObjectId = Field(alias="_id")
    name: str


async def example():
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    await init_beanie(database=client.db_name, document_models=[Author])

    data = {"name": "Joe"}  # renamed from `dict` to avoid shadowing the builtin
    joe = Author(**data)
    await joe.insert()

    # created object contains "id"
    print(AuthorRead(**joe.dict()))

    # Beanie get() also gives us an "id" field, so AuthorRead expects "id" too
    # (the get() method does not have a project() method)
    result = await Author.get(joe.id)
    print(AuthorRead(**result.dict()))

    # projection is expecting "_id", not "id"
    # we cannot use the same Schema!
    result = await Author.find_all().project(AuthorProjection).to_list()
    print(result)


await example()
```
**Expected behavior**
A way to use the same Schema for projections, like mapping _id to id during projection
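For example, a single shared schema along these lines would be ideal (a sketch using pydantic v2's `populate_by_name`; with pydantic v1 the equivalent would be `allow_population_by_field_name = True` in `Config`):
```python
from beanie import PydanticObjectId
from pydantic import BaseModel, ConfigDict, Field

class AuthorShared(BaseModel):
    # accepts both "_id" (raw Mongo / Beanie projections) and "id" (Beanie get())
    model_config = ConfigDict(populate_by_name=True)

    id: PydanticObjectId = Field(alias="_id")
    name: str
```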
| open | 2024-03-05T03:56:41Z | 2024-04-16T23:37:37Z | https://github.com/BeanieODM/beanie/issues/892 | [
"bug"
] | sheoak | 6 |
assafelovic/gpt-researcher | automation | 1,260 | Empty report when running GPTR on Docker with Windows | **Describe the bug**
I'm running gpt-researcher with Docker. When I try to use Deep Researcher, it runs with no problems, but the output that it produces is empty. So, it provides a doc file that is empty.
**To Reproduce**
I have not made any changes, just used the default settings.
**Expected behavior**
I would expect the provided document to contain text that answers the raised questions.
**Desktop (please complete the following information):**
- OS: Windows
- Browser: Chrome
- Version: 10
| open | 2025-03-14T15:57:12Z | 2025-03-20T09:25:38Z | https://github.com/assafelovic/gpt-researcher/issues/1260 | [] | OX304 | 10 |
lucidrains/vit-pytorch | computer-vision | 306 | Non-deterministic results based on group_max_seq_len in NaViT | I'm having trouble understanding what the various parameters do, even after reading the source code.
Specifically, I'm wondering what group_max_seq_len does, and why it gives non-deterministic results. For example:
```
v = NaViT(patch_size=60, **vit_args) # these are extremely large images
v(image_list, group_images = True, group_max_seq_len=1315)
tensor([[ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039],
[ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039],
[ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039],
...,
[ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039],
[ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039],
[ 0.5456, -0.4548, 0.3367, ..., 0.5904, 0.5517, 0.6039]])
v(image_list, group_images = True, group_max_seq_len=229)
tensor([[ 0.2724, -0.8302, 0.4734, ..., 0.7219, 0.6409, 0.4224],
[ 0.5486, -0.4530, 0.3360, ..., 0.5885, 0.5462, 0.6067],
[ 0.2724, -0.8302, 0.4734, ..., 0.7219, 0.6409, 0.4224],
...,
[ 0.4754, -0.4497, 0.3625, ..., 0.6052, 0.6225, 0.5106],
[ 0.2724, -0.8302, 0.4734, ..., 0.7219, 0.6409, 0.4224],
[ 0.4645, -0.4711, 0.3736, ..., 0.6147, 0.6285, 0.5033]])
```
For larger maximum sequence lengths, all the images have identical outputs. My problem is, I want deterministic results, therefore I want a constant max sequence length regardless of how the images are batched (kind of the whole reason I want to use NaViT). However, if I pick the maximum of the whole dataset, then you have the above (1315) result where every single image has identical logits.
If you can clarify how I decide on this parameter I would really appreciate it. | closed | 2024-04-18T15:38:08Z | 2024-04-18T17:17:13Z | https://github.com/lucidrains/vit-pytorch/issues/306 | [] | dempsey-ryan | 3 |
explosion/spaCy | machine-learning | 13,026 | Random 'Segmentation fault (core dumped)' error when training for long spancat | Hi,
I am getting 'Segmentation fault (core dumped)' when trying to train a model for long spans with SpanCat. I know this error could be related to OOM issues, but that does not seem to be the case here. I tried reducing [nlp] batch_size and [training.batcher.size] as shown in the attached config file, and used a VM with a very large amount of RAM to make sure we are not running out of memory.
During training, the VM memory usage never goes above 40%, and even when reducing the [components.spancat.suggester] min_size and max_size the memory usage does not exceed 20%, yet the training exits with the error 'Segmentation fault (core dumped)'.
Note: when training with low [components.spancat.suggester] values the training completes but with all zeroes for F, P and R.
This is the command I am using for training:
python -m spacy train config_spn.cfg --output ./output_v3_lg_1.3 --paths.train ./spacy_models_v3/train_data.spacy --paths.dev ./spacy_models_v3/test_data.spacy --code functions.py -V
This is the training output:
[2023-09-28 09:25:08,461] [DEBUG] Config overrides from CLI: ['paths.train', 'paths.dev']
ℹ Saving to output directory: output_v3_lg_1.3
ℹ Using CPU
=========================== Initializing pipeline ===========================
[2023-09-28 09:25:08,610] [INFO] Set up nlp object from config
[2023-09-28 09:25:08,618] [DEBUG] Loading corpus from path: spacy_models_v3/test_data.spacy
[2023-09-28 09:25:08,618] [DEBUG] Loading corpus from path: spacy_models_v3/train_data.spacy
[2023-09-28 09:25:08,619] [INFO] Pipeline: ['tok2vec', 'spancat']
[2023-09-28 09:25:08,621] [INFO] Created vocabulary
[2023-09-28 09:25:09,450] [INFO] Added vectors: en_core_web_lg
[2023-09-28 09:25:09,450] [INFO] Finished initializing nlp object
[2023-09-28 09:25:16,150] [INFO] Initialized pipeline components: ['tok2vec', 'spancat']
✔ Initialized pipeline
============================= Training pipeline =============================
[2023-09-28 09:25:16,158] [DEBUG] Loading corpus from path: spacy_models_v3/test_data.spacy
[2023-09-28 09:25:16,159] [DEBUG] Loading corpus from path: spacy_models_v3/train_data.spacy
ℹ Pipeline: ['tok2vec', 'spancat']
ℹ Initial learn rate: 0.001
E # LOSS TOK2VEC LOSS SPANCAT SPANS_SC_F SPANS_SC_P SPANS_SC_R SCORE
--- ------ ------------ ------------ ---------- ---------- ---------- ------
0 0 98109.47 19535.08 0.00 0.00 4.58 0.00
0 200 528.73 781.51 0.00 0.00 3.75 0.00
Segmentation fault (core dumped)
Environment:
Operating System: Ubuntu 20.04.6 LTS
Python Version Used: 3.8.10
spaCy Version Used: 3.6.0
[config_spn.cfg.txt](https://github.com/explosion/spaCy/files/12748569/config_spn.cfg.txt)
Thanks in advance!
| open | 2023-09-28T10:55:49Z | 2023-10-30T13:21:20Z | https://github.com/explosion/spaCy/issues/13026 | [
"bug",
"feat / spancat"
] | belalsalih | 7 |
twopirllc/pandas-ta | pandas | 484 | Please install TA-Lib to use 2crows. (pip install TA-Lib) message ... | **Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
0.3.14b0
**Do you have _TA Lib_ also installed in your environment?**
```sh
$ pip list
```
yes

**Did you upgrade? Did the upgrade resolve the issue?**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
Yes, I installed the updated version as mentioned in the README, and TA-Lib is installed.
**Describe the bug**
[A clear and concise description of what the bug is.]
I am simply trying to import ticker data from yfinance and use df.ta.strategy(ta.AllStrategy) to build all the indicators. However, I am getting messages saying: **Please install TA-Lib to use 2crows. (pip install TA-Lib)**
The first 7 lines of the output of df.ta.strategy(ta.AllStrategy):
**0it [00:00, ?it/s][X] Please install TA-Lib to use 2crows. (pip install TA-Lib)
[X] Please install TA-Lib to use 3blackcrows. (pip install TA-Lib)
[X] Please install TA-Lib to use 3inside. (pip install TA-Lib)
[X] Please install TA-Lib to use 3linestrike. (pip install TA-Lib)
[X] Please install TA-Lib to use 3outside. (pip install TA-Lib)
[X] Please install TA-Lib to use 3starsinsouth. (pip install TA-Lib)
[X] Please install TA-Lib to use 3whitesoldiers. (pip install TA-Lib)**
**To Reproduce**
Provide sample code.
Code:
(https://colab.research.google.com/drive/1eGviU_45HrZLDj_hSHLvXkTvw2gln-vd?usp=sharing)
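For anyone trying to reproduce without the Colab link, this is the shape of what I am running (the ticker and period are arbitrary placeholders):
```python
# Minimal repro sketch; ticker/period are placeholders, not from my notebook
import yfinance as yf
import pandas_ta as ta

df = yf.download("SPY", period="6mo")  # any OHLCV frame works
df.ta.strategy(ta.AllStrategy)         # triggers the "Please install TA-Lib" messages
```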
**Expected behavior**
My expectation is what is written in the README: that it "Runs and appends all indicators to the current DataFrame by default".
**Screenshots**
If applicable, add screenshots to help explain your problem.


**Additional context**
Add any other context about the problem here.
I want to generate all the indicators in one go. Also, I see that many indicators contain null values; I am not sure whether that is caused by the installation issue mentioned previously.

Thanks for using Pandas TA!
| closed | 2022-02-06T07:57:37Z | 2022-03-11T21:51:29Z | https://github.com/twopirllc/pandas-ta/issues/484 | [
"wontfix",
"info"
] | tbeesofar | 8 |
robotframework/robotframework | automation | 4,563 | Terminal execution breaks up visible=True | closed | 2022-12-14T14:17:03Z | 2022-12-14T14:18:30Z | https://github.com/robotframework/robotframework/issues/4563 | [] | MichaelSeeburger | 0 |
|
MagicStack/asyncpg | asyncio | 792 | CockroachDB + SQLAlchemy trouble |
* **asyncpg version**: 0.23.0
* **PostgreSQL version**: cockroachdb/cockroach:latest
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: N/A
* **Python version**: 3.8.10
* **Platform**: Arch Linux
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
* **If you built asyncpg locally, which version of Cython did you use?**: N/A
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: Yes
Disclaimer: I'm not entirely sure whether the actual bug is in this repository or in SQLAlchemy. That said, when trying to use `sqlalchemy` with the `asyncpg` driver to connect to CockroachDB, [this function](https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/dialects/postgresql/asyncpg.py#L1015-L1039) causes some problems. The [`json` block](https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/dialects/postgresql/asyncpg.py#L1026-L1032) causes `asyncpg` to emit a `ValueError: unknown type: pg_catalog.json`, and, when that is commented out, the [`jsonb` block](https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/dialects/postgresql/asyncpg.py#L1033-L1039) emits `asyncpg.exceptions._base.InterfaceError: cannot use custom codec on non-scalar type pg_catalog.jsonb`. I wrote [this patch](https://github.com/SoftwareSheriff/sqlalchemy/commit/c9a386c0a6b401838a87b678cf97543b8b0e263c) for SQLAlchemy which successfully works around these two problems, but I feel like it probably breaks something else, and that there's just no way this is the right way to fix it. | closed | 2021-07-28T18:34:36Z | 2021-07-28T20:51:19Z | https://github.com/MagicStack/asyncpg/issues/792 | [] | CobaltCause | 2 |
harry0703/MoneyPrinterTurbo | automation | 247 | TypeError: cannot unpack non-iterable NoneType object | ## generating video: 1 => ./storage/tasks/a240439d-b011-484d-8aa8-a223665691ee/final-1.mp4
```
2024-04-12 16:01:33 | INFO | "./app/services/video.py:183": generate_video - start, video size: 1080 x 1920
2024-04-12 16:01:33 | INFO | "./app/services/video.py:184": generate_video - ① video: ./storage/tasks/a240439d-b011-484d-8aa8-a223665691ee/combined-1.mp4
2024-04-12 16:01:33 | INFO | "./app/services/video.py:185": generate_video - ② audio: ./storage/tasks/a240439d-b011-484d-8aa8-a223665691ee/audio.mp3
2024-04-12 16:01:33 | INFO | "./app/services/video.py:186": generate_video - ③ subtitle: ./storage/tasks/a240439d-b011-484d-8aa8-a223665691ee/subtitle.srt
2024-04-12 16:01:33 | INFO | "./app/services/video.py:187": generate_video - ④ output: ./storage/tasks/a240439d-b011-484d-8aa8-a223665691ee/final-1.mp4
2024-04-12 16:01:33 | INFO | "./app/services/video.py:202": generate_video - using font: ./resource/fonts/STHeitiLight.ttc
2024-04-12 16:01:34.052 Uncaught app exception
Traceback (most recent call last):
File "/opt/anaconda3/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "/Users/xunuoqing/Downloads/MoneyPrinterTurbo-main/webui/Main.py", line 432, in <module>
result = tm.start(task_id=task_id, params=params)
File "/Users/xunuoqing/Downloads/MoneyPrinterTurbo-main/app/services/task.py", line 155, in start
video.generate_video(video_path=combined_video_path,
File "/Users/xunuoqing/Downloads/MoneyPrinterTurbo-main/app/services/video.py", line 238, in generate_video
sub = SubtitlesClip(subtitles=subtitle_path, encoding='utf-8')
File "/opt/anaconda3/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/video/tools/subtitles.py", line 69, in __init__
self.duration = max([tb for ((ta, tb), txt) in self.subtitles])
File "/opt/anaconda3/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/video/tools/subtitles.py", line 69, in <listcomp>
self.duration = max([tb for ((ta, tb), txt) in self.subtitles])
TypeError: cannot unpack non-iterable NoneType object
```
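A guess at a workaround (mine, not part of MoneyPrinterTurbo): the traceback suggests some cues in subtitle.srt parse with a `None` end time, so filtering them out before building the clip may avoid the crash. `file_to_subtitles` is MoviePy's own SRT parser; whether its `encoding` argument exists depends on the installed MoviePy version:
```python
# Guess at a guard, not verified against this project's code
from moviepy.video.tools.subtitles import SubtitlesClip, file_to_subtitles

cues = file_to_subtitles(subtitle_path, encoding="utf-8")
cues = [c for c in cues if c and c[0] and c[0][1] is not None]  # drop cues whose end time failed to parse
sub = SubtitlesClip(subtitles=cues)
```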
[audio.mp3.json](https://github.com/harry0703/MoneyPrinterTurbo/files/14956044/audio.mp3.json)
[script.json](https://github.com/harry0703/MoneyPrinterTurbo/files/14956045/script.json)
[subtitle.srt.json](https://github.com/harry0703/MoneyPrinterTurbo/files/14956046/subtitle.srt.json)
| closed | 2024-04-12T08:16:47Z | 2024-04-13T13:55:58Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/247 | [
"bug"
] | xunuoqing | 2 |
StackStorm/st2 | automation | 6,197 | Pack rule operators exists / nexists should not make the criteria pattern mandatory | ## SUMMARY
When using the exists / nexists rule operators, we are required to provide a criteria pattern, which is not correct: for these operators the pattern value should be null or absent entirely.
### STACKSTORM VERSION
st2 3.7.0, on Python 3.6.8
##### OS, environment, install method
Running on Docker on a MacBook Pro (x86)
## Steps to reproduce the problem
Use the exists / nexists rule operator in the criteria section. Omit the criteria pattern (as it is not needed for this operator).
```yaml
criteria:
  trigger.org_id:
    type: exists
```
## Expected Results
The pack should get installed correctly as all information is available.
## Actual Results
Pack installation fails and we need to add a pattern field unnecessarily.
```yaml
criteria:
  trigger.org_id:
    type: exists
    pattern: "placeholder"
```
Thanks!
| open | 2024-05-11T06:03:46Z | 2024-05-11T06:03:46Z | https://github.com/StackStorm/st2/issues/6197 | [] | mishra-abhishek-salesforce | 0 |
seleniumbase/SeleniumBase | web-scraping | 2,805 | Continuous Driver download problem | Every time I run it, it downloads the driver (as shown in the image). Does it have to do this every time?
<img width="1017" alt="image" src="https://github.com/seleniumbase/SeleniumBase/assets/88004617/99ee1baa-440f-41be-a930-fe71a8f2d1c8">
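(A possible answer sketch based on SeleniumBase's CLI; worth verifying for your installed version: the driver can be pre-downloaded once so later runs reuse it.)
```bash
# Downloads chromedriver into SeleniumBase's drivers folder once;
# subsequent runs should pick it up instead of re-downloading
sbase get chromedriver
```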
| closed | 2024-05-25T02:36:17Z | 2024-05-25T20:58:19Z | https://github.com/seleniumbase/SeleniumBase/issues/2805 | [
"duplicate"
] | mvx20 | 2 |
ExpDev07/coronavirus-tracker-api | fastapi | 323 | Access to application | **Describe the bug**
Requesting a location by country code in the path (`/v2/locations/IT`) returns a 422 Unprocessable Entity instead of the location data (timestamps and status code are in the request below).
**To Reproduce**
`httpie/curl` request to reproduce the behavior:
1. Getting Italy data at `v2/locations/IT` gives a 422.
2. Expected the same data as `/v2/locations?country_code=IT`.
3. See the httpie request & response below.
**Expected behavior**
Requesting `/v2/locations/IT` should return the same data as `/v2/locations?country_code=IT`.
**Screenshots or Requests**
```sh
$ http GET https://coronavirus-tracker-api.herokuapp.com/v2/locations/IT -v
GET /v2/locations/IT HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: coronavirus-tracker-api.herokuapp.com
User-Agent: HTTPie/2.0.0
HTTP/1.1 422 Unprocessable Entity
Connection: keep-alive
Content-Length: 99
Content-Type: application/json
Date: Sat, 18 Apr 2020 12:50:29 GMT
Server: uvicorn
Via: 1.1 vegur
{
    "detail": [
        {
            "loc": [
                "path",
                "id"
            ],
            "msg": "value is not a valid integer",
            "type": "type_error.integer"
        }
    ]
}
```
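For comparison, the query-parameter form from step 2 is the one that works:
```sh
$ http GET "https://coronavirus-tracker-api.herokuapp.com/v2/locations?country_code=IT"
```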
**Additional context**
Does the other instance at https://covid-tracker-us.herokuapp.com/ produce the same result?
| closed | 2020-07-25T13:08:52Z | 2023-09-10T11:23:49Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/323 | [
"bug",
"down"
] | PRRH | 5 |
lepture/authlib | flask | 51 | Configure OAuthClient with OpenID Discovery configuration | https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata
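A sketch of the intended usage (the provider name, URLs, and credentials are placeholders):
```python
# Sketch only; registration name, URLs, and credentials are placeholders
from authlib.integrations.flask_client import OAuth

oauth = OAuth(app)
oauth.register(
    "google",
    client_id="...",
    client_secret="...",
    server_metadata_url="https://accounts.google.com/.well-known/openid-configuration",
    client_kwargs={"scope": "openid email profile"},
)
```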
| closed | 2018-04-26T06:54:10Z | 2020-08-11T05:37:06Z | https://github.com/lepture/authlib/issues/51 | [] | lepture | 6 |
ludwig-ai/ludwig | data-science | 3,247 | `torchvision` installing 0.15.0 in the CI instead of 0.14.1 | Similar issue as https://github.com/ludwig-ai/ludwig/issues/3245 | closed | 2023-03-15T00:25:28Z | 2023-03-15T16:07:13Z | https://github.com/ludwig-ai/ludwig/issues/3247 | [] | geoffreyangus | 0 |
horovod/horovod | deep-learning | 3,465 | On the clang-format check of the project | I noticed that horovod use [clang-format](https://clang.llvm.org/docs/ClangFormat.html) to format C++ code.
But when I check the project with clang-format-12, I still got many errors. The cmd I used as below:
```bash
#!/usr/bin/env bash
for src in $(find ./horovod -name "*.h" -or -name "*.cc")
do
    clang-format-12 -style=file ${src}
done
```
The output is attached: [hvd_cf12.txt](https://github.com/horovod/horovod/files/8220236/hvd_cf12.txt).
May I know whether I used the wrong clang-format version, or whether the command is not right?
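For what it's worth, clang-format 10 and later have a check mode, so a variant of the same loop (a sketch) reports violations instead of printing the formatted source:
```bash
#!/usr/bin/env bash
# --dry-run prints diagnostics without rewriting files; -Werror turns them into errors
for src in $(find ./horovod -name "*.h" -or -name "*.cc")
do
    clang-format-12 -style=file --dry-run -Werror "${src}"
done
```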
| closed | 2022-03-10T04:08:52Z | 2022-04-26T18:13:52Z | https://github.com/horovod/horovod/issues/3465 | [
"bug"
] | GHGmc2 | 1 |
ijl/orjson | numpy | 146 | possible to escape already jsonified string ? | I am working with pandas objects that have a very fast to_json() method to serialise pandas.Series.
I need to serialize a dict with some elements which are pandas objects, e.g.:
`{ "data": pandas.Series(...) }`
I would like to be able to transform the dict as
`{ "data": SerializedJSON("[1,3,4, ...]") }`
with
```
class SerializedJSON(str):
"""An object of type SerializedJSON is a string that will be injected as such in the finale JSON string"""
pass
```
so that orjson would not escape this string again.
Does this make sense? Would it be useful to add this SerializedJSON?
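For reference, newer orjson releases appear to cover exactly this with `orjson.Fragment` (worth double-checking the minimum version; I believe it arrived in the 3.7 series):
```python
import orjson
import pandas as pd

s = pd.Series([1, 3, 4])
# Fragment injects pre-serialized JSON verbatim instead of escaping it
payload = orjson.dumps({"data": orjson.Fragment(s.to_json(orient="values"))})
print(payload)  # b'{"data":[1,3,4]}'
```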
| closed | 2020-11-28T07:37:53Z | 2023-06-01T14:40:40Z | https://github.com/ijl/orjson/issues/146 | [] | sdementen | 4 |
vitalik/django-ninja | pydantic | 1,404 | How to change a response timezone based on a query parameter? | Hi, I want to know how to change the response timezone based on a query parameter, like https://example.com?timezone=utc, https://example.com?timezone=Asia/Tokyo, or https://example.com?timezone=Asia/Kolkata.
I think the answer to https://github.com/vitalik/django-ninja/issues/786 is related to this problem, but I can't understand how to change the timezone dynamically.
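One mechanism I am imagining (hypothetical on my side, not an official django-ninja API) is to stash the requested timezone in a ContextVar that the encoder reads per request:
```python
# Hypothetical sketch: the ContextVar plumbing and the "timezone" query-param name are assumptions
import datetime
from contextvars import ContextVar
from zoneinfo import ZoneInfo

current_tz: ContextVar[str] = ContextVar("current_tz", default="UTC")

def remember_timezone(request):
    # called from an operation or middleware before rendering
    current_tz.set(request.GET.get("timezone", "UTC"))

def to_local_iso(dt: datetime.datetime) -> str:
    return dt.astimezone(ZoneInfo(current_tz.get())).isoformat()
```
My current (incomplete) attempt is below: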
```python
import datetime

from ninja import NinjaAPI
from ninja.renderers import JSONRenderer
from ninja.responses import NinjaJSONEncoder


class MyJsonEncoder(NinjaJSONEncoder):
    def default(self, o):
        if isinstance(o, datetime.datetime):
            # how do I set the timezone dynamically here, per request?
            return o.astimezone().isoformat()
        return super().default(o)


class MyJsonRenderer(JSONRenderer):
    encoder_class = MyJsonEncoder  # must point at the custom encoder


api = NinjaAPI(renderer=MyJsonRenderer())
``` | closed | 2025-02-05T02:12:41Z | 2025-02-05T02:26:20Z | https://github.com/vitalik/django-ninja/issues/1404 | [] | owari-taro | 1 |
apify/crawlee-python | automation | 145 | Resolve incorrect handling of configuration overrides in pydantic | https://github.com/pydantic/pydantic-settings/issues/180 - we probably want to make the sources in [`_settings_build_values`](https://github.com/pydantic/pydantic-settings/blob/8c5a45e43cca4e88a6d65fcb280529499fc6200a/pydantic_settings/main.py#L146) convert the keys to field names from aliases in case `populate_by_name` is set.
Note that we'll need to wait for upstream to release this. | open | 2024-05-09T13:15:05Z | 2024-05-09T13:15:05Z | https://github.com/apify/crawlee-python/issues/145 | [
"t-tooling"
] | janbuchar | 0 |
MagicStack/asyncpg | asyncio | 606 | Feature request: Support client side keepalives | <!-- Enter your issue details below this comment. -->
I have been using version 0.20.1. I was able to configure server-side TCP keepalives via the server_settings dict, but I can't seem to find where I can set client-side keepalives from either a connection pool or a connection object. Does it make sense for an async library like asyncpg to support something like this?
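In the meantime I am considering a workaround along these lines; it leans on asyncpg's private `_transport` attribute, so it is fragile by design:
```python
# Fragile sketch: reaches into private internals to flip SO_KEEPALIVE on
import socket

import asyncpg

async def connect_with_keepalive(dsn: str) -> asyncpg.Connection:
    conn = await asyncpg.connect(dsn)
    sock = conn._transport.get_extra_info("socket")  # private API, may change
    if sock is not None:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    return conn
```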
Thanks and Regards,
Keith
| open | 2020-08-11T07:17:34Z | 2023-03-10T17:29:26Z | https://github.com/MagicStack/asyncpg/issues/606 | [] | keithks | 3 |
pytorch/pytorch | deep-learning | 149,425 | python custom ops tutorial stopped working in PyTorch 2.7 RC1 | Get PyTorch 2.7 RC1. Repro in next comment.
Error looks like:
```py
Traceback (most recent call last):
File "/home/rzou/dev/2.7/pco.py", line 124, in <module>
cropped_img = f(img)
^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/pco.py", line 120, in f
@torch.compile(fullgraph=True)
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line
328, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in cal
l_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line
689, in inner_fn
outs = compiled_fn(args)
^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line
495, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_rzou/oy/coy5shd4xlyzvhkrwtaiad5zxz7jhd654636vqhwxsyeux5q27d7.py", line 42, in call
assert_size_stride(buf1, (3, 40, 40), (1600, 40, 1))
AssertionError: expected size 3==3, stride 1==1600 at dim=0; expected size 40==40, stride 120==40 at dim=1; expected s
ize 40==40, stride 3==1 at dim=2
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
```
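Following the error message's own suggestion, validation would look roughly like this (`mylib.crop` and the arguments are placeholders, not the tutorial's actual op):
```python
# Hypothetical opcheck call; op name and args are placeholders
import torch

img = torch.randn(3, 64, 64)
torch.library.opcheck(torch.ops.mylib.crop.default, (img, 0, 0, 40, 40))
```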
cc @ezyang @gchanan @kadeng @msaroufim @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @chauhang @aakhundov | closed | 2025-03-18T19:57:03Z | 2025-03-19T15:08:34Z | https://github.com/pytorch/pytorch/issues/149425 | [
"high priority",
"triage review",
"oncall: pt2",
"module: inductor"
] | zou3519 | 4 |
comfyanonymous/ComfyUI | pytorch | 7,084 | Error | ### Expected Behavior
An error is reported.
### Actual Behavior
Trimap did not contain background values
### Steps to Reproduce
Trimap did not contain background values
### Debug Logs
```powershell
Trimap did not contain background values
```
### Other
_No response_ | open | 2025-03-05T13:19:10Z | 2025-03-06T15:43:32Z | https://github.com/comfyanonymous/ComfyUI/issues/7084 | [
"Potential Bug"
] | uber58 | 1 |
pallets-eco/flask-wtf | flask | 433 | TypeError raised from hidden_tag() on Jinja 3.0.0rc1 | # Requirements
```
click==8.0.0rc1
Flask==2.0.0rc1
Flask-WTF==0.14.3
itsdangerous==2.0.0rc2
Jinja2==3.0.0rc1
MarkupSafe==2.0.0rc2
Werkzeug==2.0.0rc4
WTForms==2.3.3
```
# Example
```python
import os

from flask import Flask, render_template_string
from flask_wtf import FlaskForm
from wtforms import StringField
from wtforms.validators import DataRequired
from flask_wtf.csrf import CSRFProtect


class MyForm(FlaskForm):
    name = StringField('name', validators=[DataRequired()])


app = Flask(__name__)
app.config['SECRET_KEY'] = os.urandom(24).hex()
csrf = CSRFProtect(app)


@app.route('/')
def hello_world():
    form = MyForm()
    return render_template_string('''
<form method="POST" action="/">
    {{ form.hidden_tag() }}
    {{ form.name.label }} {{ form.name(size=20) }}
    <input type="submit" value="Go">
</form>
''', form=form)


if __name__ == '__main__':
    app.run()
```
# Traceback
```
Traceback (most recent call last):
File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1971, in __call__
return self.wsgi_app(environ, start_response)
File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1956, in wsgi_app
response = self.handle_exception(e)
File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1953, in wsgi_app
response = self.full_dispatch_request()
File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1454, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1452, in full_dispatch_request
rv = self.dispatch_request()
File "/.../pre-venv/lib/python3.9/site-packages/flask/app.py", line 1438, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/.../app.py", line 21, in hello_world
return render_template_string('''
File "/.../pre-venv/lib/python3.9/site-packages/flask/templating.py", line 145, in render_template_string
return _render(ctx.app.jinja_env.from_string(source), context, ctx.app)
File "/.../pre-venv/lib/python3.9/site-packages/flask/templating.py", line 110, in _render
rv = template.render(context)
File "/.../pre-venv/lib/python3.9/site-packages/jinja2/environment.py", line 1127, in render
self.environment.handle_exception()
File "/.../pre-venv/lib/python3.9/site-packages/jinja2/environment.py", line 814, in handle_exception
raise rewrite_traceback_stack(source=source)
File "<template>", line 3, in top-level template code
File "/.../pre-venv/lib/python3.9/site-packages/flask_wtf/form.py", line 133, in hidden_tag
return Markup(
File "/.../pre-venv/lib/python3.9/site-packages/jinja2/utils.py", line 843, in __init__
super().__init__(*args, **kwargs)
TypeError: object.__init__() takes exactly one argument (the instance to initialize)
```
# Notes
Using `form.csrf_token` in the place of `hidden_tag()` in the template raises no exception.
Also, my assumption is that `hidden_tag()` does not require additional hidden form elements to exist, as I believe this works as expected on Flask 1.1.2.
Thank you for the consideration in preparation for Flask 2.0 | closed | 2021-04-25T15:05:01Z | 2021-05-26T00:54:49Z | https://github.com/pallets-eco/flask-wtf/issues/433 | [] | jtrip | 2 |
huggingface/diffusers | pytorch | 10,452 | pipe.disable_model_cpu_offload | **Is your feature request related to a problem? Please describe.**
If I enable the following in a Gradio interface:
`sana_pipe.enable_model_cpu_offload()`
and during the next generation I want to disable CPU offload, how do I do it? I mention Gradio specifically because command-line inference will not have this problem unless, after initializing the pipe, you generate multiple times with and without CPU offload.
I already searched but found nothing:
https://github.com/search?q=repo%3Ahuggingface%2Fdiffusers%20disable_model_cpu_offload&type=code
**Describe the solution you'd like.**
Add methods to disable:
1. enable_model_cpu_offload()
2. enable_sequential_cpu_offload()
**Describe alternatives you've considered.**
I will have to delete the pipe completely and load it again for each inference in the Gradio UI.
Kindly suggest an alternative solution if one exists.
```
import torch
from diffusers import SanaPipeline
pipe = SanaPipeline.from_pretrained(
"Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", torch_dtype=torch.float32
)
pipe.to("cuda")
pipe.text_encoder.to(torch.bfloat16)
pipe.transformer = pipe.transformer.to(torch.bfloat16)
pipe.enable_model_cpu_offload()
image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana"')[0]
image[0].save("output.png")
pipe.disable_model_cpu_offload()
image = pipe(prompt='a cyberpunk cat with a neon sign that says "Sana 1"')[0]
image[0].save("output1.png")
```
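One candidate I have not verified end to end: recent diffusers pipelines expose `remove_all_hooks()`, which is documented to detach the hooks installed by the offload helpers, so disabling might look like this sketch:
```python
# Unverified sketch: detach the offload hooks, then move everything back to GPU
pipe.remove_all_hooks()
pipe.to("cuda")
```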
P.S. How do I delete a pipe completely, so that all models are removed and GPU memory is freed?
I checked the documentation but was unable to find anything relevant in the files linked below.
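For the deletion part specifically, the usual PyTorch pattern (not diffusers-specific) is:
```python
import gc

import torch

del pipe                   # drop the last reference to the pipeline
gc.collect()               # reclaim Python-side objects
torch.cuda.empty_cache()   # release cached CUDA blocks back to the driver
```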
https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py
https://github.com/huggingface/diffusers/blob/4e44534845d35248436abf87688906f52e71b868/src/diffusers/pipelines/pipeline_utils.py
| closed | 2025-01-04T16:39:01Z | 2025-01-07T08:29:32Z | https://github.com/huggingface/diffusers/issues/10452 | [] | nitinmukesh | 3 |
fastapi-admin/fastapi-admin | fastapi | 110 | Postgresql Problem | Hello, I am trying to connect PostgreSQL to my fastapi-admin project.
Is this possible? I'm getting a timeout error:
File "/usr/local/lib/python3.9/asyncio/tasks.py", line 492, in wait_for
raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError
On the other hand, I would like to connect the project to a custom PostgreSQL DB. Is there any way to do that using my own tables?
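For reference, my connection attempt looks roughly like this (a sketch assuming fastapi-admin's Tortoise ORM backend; the DSN is a placeholder):
```python
# Sketch; the postgres:// scheme requires asyncpg to be installed
from tortoise.contrib.fastapi import register_tortoise

register_tortoise(
    app,
    db_url="postgres://user:password@localhost:5432/mydb",  # placeholder DSN
    modules={"models": ["models"]},
    generate_schemas=False,
)
```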

| closed | 2022-07-13T10:26:16Z | 2022-07-13T11:36:35Z | https://github.com/fastapi-admin/fastapi-admin/issues/110 | [] | carlosnutual | 2 |
KaiyangZhou/deep-person-reid | computer-vision | 13 | Missing Validation Set | Is there a validation set used for choosing the best model before testing the accuracy on the test set?
From what I see in the code, the model with the best Rank-1 is chosen based on the test-set result. Won't this mean that the best Rank-1 result is overfitting to the test set?
LAION-AI/Open-Assistant | machine-learning | 3,315 | sample inference code for llama sft 7 model | Hello,
Can you send me a sample Python inference code snippet for the llama sft 7 model that shows how to do inference with it?
I know from the code that the `<assistant>` and `<prompter>` special tokens are used so the model learns the difference between user and assistant prompts. But I want to know, once the user enters a prompt/message, what the inference pipeline is: the EOS tokens used, the stop sequences used, and where and how to use the special tokens to frame user and assistant responses.
I was looking through the inference code, but it is quite complex for a new user to understand.
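The kind of snippet I am after would look something like this sketch; the model path, the `<|prompter|>`/`<|assistant|>` layout, and the `</s>` separator are all my assumptions from reading around, so please correct them:
```python
# Sketch only; token layout and checkpoint path are assumptions, not confirmed
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/oasst-sft-7-llama"  # placeholder for the local weights
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "<|prompter|>What is a transformer?</s><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,  # generation stops at the EOS token
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```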
| closed | 2023-06-06T21:11:37Z | 2023-06-09T11:35:54Z | https://github.com/LAION-AI/Open-Assistant/issues/3315 | [] | premanand09 | 2 |
tflearn/tflearn | tensorflow | 679 | model.load(..., weights_only=True) does not load batch normalization weights | When I'm using model.load(..., weights_only=True), the weights corresponding to BN layers are not loaded. That's why, when I use model.predict, the output of the classifier is completely different and incorrect. How can I load the weights of the conv layers as well as the BN layers, but not the input size and other optimization parameters?
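A workaround I am considering (a TF1-era sketch using tflearn's exposed session; the variable-name filtering is a heuristic, not an official API):
```python
# Heuristic sketch: restore everything except optimizer slot variables,
# so conv weights AND batch-norm statistics come back
import tensorflow as tf

skip = ("Momentum", "Adam", "RMSProp")
restore_vars = [v for v in tf.global_variables()
                if not any(s in v.name for s in skip)]
saver = tf.train.Saver(var_list=restore_vars)
saver.restore(model.session, "model.ckpt")  # model is the tflearn DNN
```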
Thanks | open | 2017-03-22T22:40:58Z | 2017-03-22T22:40:58Z | https://github.com/tflearn/tflearn/issues/679 | [] | SadeghMSalehi | 0 |
plotly/plotly.py | plotly | 4,229 | Allow applying patterns to single values | Context: plotly.express.timeline
Looking at (and neighboring lines which perform analogous checks) https://github.com/plotly/plotly.py/blob/216fca2a7ed14d2e58f5423557b6e731e2a0f575/packages/python/plotly/plotly/express/_core.py#L973, there is no way to set `pattern_shape_sequence` to `['']` and set patterns for individual values within `pattern_shape_map`. All I want is to make one specific thing stand out without changing its color.
…unless there is another way to achieve that which I've missed.
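(For what it's worth, one post-hoc workaround sketch: updating the trace after the figure is built. The dataframe columns and the trace name are placeholders.)
```python
# Workaround sketch: set the pattern on a single trace after building the figure
import plotly.express as px

fig = px.timeline(df, x_start="Start", x_end="Finish", y="Task", color="Task")
fig.update_traces(marker_pattern_shape="/", selector={"name": "the-special-task"})
```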
| closed | 2023-06-06T08:40:31Z | 2024-07-11T14:31:20Z | https://github.com/plotly/plotly.py/issues/4229 | [] | ernestask | 2 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 202 | Support for LocalAI | Hey :wave: LocalAI (https://github.com/mudler/LocalAI) author here - nice project!
**Is your feature request related to a problem? Please describe.**
I'd like to run this locally with LocalAI - only Ollama seems to be supported.
**Describe the solution you'd like**
LocalAI provides a drop-in API compatible with OpenAI; the only requirement is being able to specify a base API URL for the client to hit. If Scrapegraph let the user specify a base URL for the OpenAI client, that would be enough to make it compatible with LocalAI.
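Concretely, the pattern is the standard OpenAI-compatible client override (a sketch; the port and model name depend on the LocalAI deployment):
```python
# Sketch of the base-URL override LocalAI relies on; endpoint and model are placeholders
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="ggml-gpt4all-j",  # whatever model LocalAI serves
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```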
**Describe alternatives you've considered**
N/A
**Additional context**
n/a
| closed | 2024-05-10T07:34:10Z | 2024-05-10T12:11:08Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/202 | [
"feature request"
] | mudler | 5 |
thtrieu/darkflow | tensorflow | 687 | Darkflow - YOLOv1 - Loss function | Hi, Guys
Can someone explain how the loss function works? From what I can tell: cli.py builds the model, creates a framework, and does a single pass to build the network. Then train is called, which calls the loss function, and the training begins. Is this the correct chain of events?
Also, flow.py contains the following; please see the comment below:
```python
def train(self):
    loss_ph = self.framework.placeholders
    loss_mva = None; profile = list()
    batches = self.framework.shuffle()
    # Function pointer to the loss function
    loss_op = self.framework.loss
    print("LOSS FUNCTION POINTER: ", loss_op)
    for i, (x_batch, datum) in enumerate(batches):
        if not i: self.say(train_stats.format(
            self.FLAGS.lr, self.FLAGS.batch,
            self.FLAGS.epoch, self.FLAGS.save
        ))
        feed_dict = {
            loss_ph[key]: datum[key]
            for key in loss_ph}
        feed_dict[self.inp] = x_batch
        feed_dict.update(self.feed)
        fetches = [self.train_op, loss_op]
        if self.FLAGS.summary:
            fetches.append(self.summary_op)
        fetched = self.sess.run(fetches, feed_dict)
        print(fetched)
        loss = fetched[1]
        print(loss)
        # WHAT DOES THIS DO BELOW??????????
        if loss_mva is None: loss_mva = loss
        loss_mva = .9 * loss_mva + .1 * loss
        step_now = self.FLAGS.load + i + 1
        if self.FLAGS.summary:
            self.writer.add_summary(fetched[2], step_now)
        form = 'step {} - loss {} - moving ave loss {}'
        self.say(form.format(step_now, loss, loss_mva))
        profile += [(loss, loss_mva)]
        ckpt = (i+1) % (self.FLAGS.save // self.FLAGS.batch)
        args = [step_now, profile]
        if not ckpt: _save_ckpt(self, *args)
    if ckpt: _save_ckpt(self, *args)
```
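Regarding the "WHAT DOES THIS DO" comment: those two lines compute an exponentially weighted moving average of the loss, equivalent to this standalone sketch:
```python
# Standalone restatement of the flagged lines: an exponential moving average
def update_moving_average(loss_mva, loss, beta=0.9):
    if loss_mva is None:               # first batch: seed with the raw loss
        loss_mva = loss
    return beta * loss_mva + (1 - beta) * loss
```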
| open | 2018-04-01T14:27:12Z | 2018-04-06T20:14:33Z | https://github.com/thtrieu/darkflow/issues/687 | [] | rij12 | 1 |
man-group/arctic | pandas | 893 | Fix failing builds: https://travis-ci.org/github/man-group/arctic/jobs/762598165 | On a quick glance, some failing tests seem to be returning None, both VersionStore and ChunkStore. Need to take a proper look at this, as it's blocking quite a few PRs | closed | 2021-03-15T10:28:43Z | 2021-04-08T16:06:44Z | https://github.com/man-group/arctic/issues/893 | [] | shashank88 | 3 |
jacobgil/pytorch-grad-cam | computer-vision | 135 | A question about loss backward | Thank you for sharing the well-organized code. I learned a lot from it. Now I would like to ask you a question about loss.backward. For a classification problem, the final output of the model is a classification vector, such as a 1 x M vector; your code also targets the classification task, where the classification score corresponding to a chosen category can be propagated backward. My question is: if the output of the model is not a 1 x M vector but an N x M matrix, and I want to take one of the N rows and propagate backwards, how do I do that?
Your code: 1 x M -> value
I want: N x M -> a vector
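In PyTorch terms, I imagine it is something like passing a gradient mask (a sketch; `model`, `x`, and the row index `i` are placeholders):
```python
# Sketch: backpropagate from row i of an N x M output via a vector-Jacobian product
import torch

out = model(x)                 # shape (N, M)
grad = torch.zeros_like(out)
grad[i] = 1.0                  # select row i; zeros elsewhere contribute nothing
out.backward(gradient=grad)
```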
Looking forward to your reply. | closed | 2021-09-17T01:17:53Z | 2021-09-17T16:22:38Z | https://github.com/jacobgil/pytorch-grad-cam/issues/135 | [] | QWTforGithub | 6 |
newpanjing/simpleui | django | 79 | djcelery error. |
| Request Method: | GET |
| --- | --- |
| Request URL: | https://csds.nkhdkj.com/admin/djcelery/periodictask/ |
| Django Version: | 2.1.8 |
| Exception Type: | TypeError |
| Exception Value: | Object of type '__proxy__' is not JSON serializable |
Object of type '__proxy__' is not JSON serializable
| closed | 2019-06-11T11:08:04Z | 2019-06-18T07:16:41Z | https://github.com/newpanjing/simpleui/issues/79 | [
"bug"
] | ghost | 5 |
pyqtgraph/pyqtgraph | numpy | 2,213 | Visual glitches with thick line(>1) on version 0.12.4 |
### Short description
When setting a line width thicker than 1, zooming in causes some lines to fill up a large area.
https://user-images.githubusercontent.com/66480156/156883464-b96a4b43-101e-46aa-afed-2ff300aae082.mp4
### Code to reproduce
That light green line was created this way:
```python
import numpy as np
import pyqtgraph

plot_item = pyqtgraph.PlotItem()
line = plot_item.plot(
    np.linspace(0, 1, 100),                       # placeholder data so the snippet runs
    pen=pyqtgraph.mkPen("#E2F200", width=2.5),    # width > 1 triggers the glitch
    connect="finite",
)
```
### Expected behavior
Zooming in normally
### Real behavior
The light green line fills up a large space. No error occurred.
### Tested environment(s)
* PyQtGraph version: 0.12.4
* Qt Python binding: Pyside6 6.2.3 Qt 6.2.3
* Python version: 3.10.2
* NumPy version: 1.22.2
* Operating system: Windows 11
* Installation method: pip
### Additional context
None
| closed | 2022-03-05T12:42:21Z | 2022-07-14T05:35:37Z | https://github.com/pyqtgraph/pyqtgraph/issues/2213 | [] | temeddix | 7 |
deepspeedai/DeepSpeed | pytorch | 5,646 | [BUG] File not found in autotuner cache in multi-node setting on SLURM | **Describe the bug**
I am training an LLM using DeepSpeed on 12 nodes with 8 V100s per node. My training is generally working well (thanks DeepSpeed), but when I run multiple training runs in parallel, I run into trouble.
I am getting these kinds of errors:
```
Traceback (most recent call last):
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 473, in matmul_ext_update_autotune_table
fp16_matmul._update_autotune_table()
fp16_matmul._update_autotune_table()
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 450, in _update_autotune_table
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 450, in _update_autotune_table
TritonMatmul._update_autotune_table(__class__.__name__ + "_2d_kernel", __class__._2d_kernel)
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 179, in _update_autotune_table
TritonMatmul._update_autotune_table(__class__.__name__ + "_2d_kernel", __class__._2d_kernel)
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 179, in _update_autotune_table
cache_manager.put(autotune_table)
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 98, in put
cache_manager.put(autotune_table)
os.rename(self.file_path + ".tmp", self.file_path)
FileNotFoundError: [Errno 2] No such file or directory: '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle.tmp' -> '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle'
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 98, in put
os.rename(self.file_path + ".tmp", self.file_path)
FileNotFoundError: [Errno 2] No such file or directory: '/gpfs/u/home/ANFM/ANFMbchl/scratch/.cache/Fp16Matmul_2d_kernel.pickle.tmp' -> '/gpfs/u/home/ANFM/ANFMbchl/scratch/.cache/Fp16Matmul_2d_kernel.pickle'
```
I thought that this might be because the directories are shared between the multiple runs, which can create race conditions.
My `TMPDIR`, `TRITON_CACHE_DIR`, and `TORCH_EXTENSIONS_DIR` are set as follows
```
export TMPDIR=$HOME/scratch/.cache
export TRITON_CACHE_DIR=$HOME/scratch/.cache
export TORCH_EXTENSIONS_DIR=$HOME/scratch/.cache/torch-extensions
```
To fix this, I tried to allocate one cache folder per run, like so:
```
export TMPDIR=$HOME/scratch/.cache
export TRITON_CACHE_DIR=$HOME/scratch/$SLURM_JOBID/.cache
export TORCH_EXTENSIONS_DIR=$HOME/scratch/$SLURM_JOBID/.cache/torch-extensions
mkdir -p $TRITON_CACHE_DIR
mkdir -p $TORCH_EXTENSIONS_DIR
```
but that also didn't work. Now I am getting this error:
```
Traceback (most recent call last):
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 473, in matmul_ext_update_autotune_table
fp16_matmul._update_autotune_table()
fp16_matmul._update_autotune_table()
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 450, in _update_autotune_table
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 450, in _update_autotune_table
TritonMatmul._update_autotune_table(__class__.__name__ + "_2d_kernel", __class__._2d_kernel)
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 179, in _update_autotune_table
TritonMatmul._update_autotune_table(__class__.__name__ + "_2d_kernel", __class__._2d_kernel)
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 179, in _update_autotune_table
cache_manager.put(autotune_table)
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 98, in put
cache_manager.put(autotune_table)
os.rename(self.file_path + ".tmp", self.file_path)
FileNotFoundError: [Errno 2] No such file or directory: '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle.tmp' -> '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle'
File "/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 98, in put
os.rename(self.file_path + ".tmp", self.file_path)
FileNotFoundError: [Errno 2] No such file or directory: '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle.tmp' -> '/gpfs/u/home/ANFM/ANFMbchl/scratch/1167439/.cache/Fp16Matmul_2d_kernel.pickle'
```
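(An observation and sketch: the traceback shows `put()` writing a shared `*.tmp` file and renaming it, so two concurrent writers can race. A unique temporary name plus an atomic `os.replace` would avoid the clash; this is my sketch, not DeepSpeed's code.)
```python
# Sketch of a race-free cache write; mirrors the put() shown in the traceback
import os
import pickle
import uuid

def put_atomic(table, file_path):
    tmp_path = f"{file_path}.{uuid.uuid4().hex}.tmp"  # unique per writer
    with open(tmp_path, "wb") as f:
        pickle.dump(table, f)
    os.replace(tmp_path, file_path)  # atomic on POSIX, last writer wins
```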
**ds_report output**
```
[2024-06-12 03:08:15,154] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-06-12 03:08:15,765] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-devel package with yum
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
No ROCm runtime is found, using ROCM_HOME='/opt/rocm-4.3.0'
[WARNING] NVIDIA Inference is only supported on Ampere and newer architectures
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-devel package with yum
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Ampere and newer architectures
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/gpfs/u/home/ANFM/ANFMbchl/scratch/miniconda3/envs/torch-nightly/lib/python3.10/site-packages/torch']
torch version .................... 2.3.0+cu121
deepspeed install path ........... ['/gpfs/u/scratch/ANFM/ANFMbchl/DeepSpeed/deepspeed']
deepspeed info ................... 0.14.3+488a823, 488a823, master
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.4, cuda 12.1
shared memory (/dev/shm) size .... 377.69 GB
```
| open | 2024-06-12T07:08:50Z | 2024-11-07T18:21:22Z | https://github.com/deepspeedai/DeepSpeed/issues/5646 | [
"bug",
"training"
] | jubueche | 5 |
coqui-ai/TTS | pytorch | 4,154 | [Feature request] using pre-extracted SE files(se.pt) | Can I use XTTS with pre-extracted SE files (se.pt)? I want to apply one to do a voice clone. | closed | 2025-02-17T08:18:02Z | 2025-03-12T03:24:41Z | https://github.com/coqui-ai/TTS/issues/4154 | [
"feature request"
] | pes0427 | 6 |
unit8co/darts | data-science | 2,334 | [Question] How could I hide the "Finding best initial lr" message from pytorch_lightning when using Darts' Torch Forecasting Models? | I am unable to hide the `Finding best initial lr` message when calling the [`lr_find` method](https://github.com/unit8co/darts/blob/c3a611236690f0704ced6078982adf20b0a33886/darts/models/forecasting/torch_forecasting_model.py#L1111) associated with Darts' Torch `Forecasting Models`, such as `BlockRNNModel`:

Based on my understanding, this message is generated by `pytorch-lightning`. In particular, by the [`on_train_batch_start` method](https://github.com/Lightning-AI/pytorch-lightning/blob/58ad56afece3ea7faec2f1b7f68d90195f316d78/src/lightning/pytorch/tuner/lr_finder.py#L384) from the [`_LRCallback` class](https://github.com/Lightning-AI/pytorch-lightning/blob/58ad56afece3ea7faec2f1b7f68d90195f316d78/src/lightning/pytorch/tuner/lr_finder.py#L349). At least in this specific case :thinking:
I have tried the following:
1. Including `verbose=False` when calling the `lr_find` method.
2. Passing a `TQDMProgressBar(refresh_rate=0)` instance through the `callbacks` list in the `pl_trainer_kwargs` dict passed to the `BlockRNNModel` constructor.
3. Including an `"enable_progress_bar": False` in the `pl_trainer_kwargs` dict passed to the `BlockRNNModel` constructor.
So far, no luck :disappointed:
I don't know if I have misunderstood something or I am missing some critical bit of information :grimacing:
Could you help me solve this issue? :pray:
Any feedback would be much appreciated :relaxed:
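One more blunt workaround I may try (a sketch; it assumes the bar writes to stderr via tqdm, which I have not confirmed):
```python
# Sketch: swallow stderr for the duration of the lr search
import contextlib
import io

with contextlib.redirect_stderr(io.StringIO()):
    results = model.lr_find(series=train_series, verbose=False)
```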
***
I am using:
- python 3.11.0
- darts 0.28.0
- pytorch_lightning 1.9.5
Let me know if there are any missing but relevant and clarifying details I should mention. | closed | 2024-04-16T17:52:50Z | 2024-06-18T06:06:27Z | https://github.com/unit8co/darts/issues/2334 | [
"question"
] | fmerinocasallo | 1 |
TencentARC/GFPGAN | pytorch | 417 | Fails on i5-3470 / GTX 750: Hardware unsupported | open | 2023-07-16T15:52:17Z | 2024-04-08T14:17:56Z | https://github.com/TencentARC/GFPGAN/issues/417 | [] | sfc23 | 0 |
|
influxdata/influxdb-client-python | jupyter | 92 | The DataFrame serialisation is slower than in v1 | Using Python pandas. With version 1 I used this:
```python
import math

import numpy as np
from influxdb import DataFrameClient  # v1 client

def dbpop_influx(data, dbname, measurement, columns):
    ## constants:
    dbclient = DataFrameClient(host='localhost', port=8086, username='root', password='root', database=dbname)
    n_import_chunks = math.ceil(len(data) / 10000)
    data_chunks = np.array_split(data, n_import_chunks)
    for d in data_chunks:
        dbclient.write_points(d, measurement, tag_columns=columns, protocol='line')
```
This takes 29 seconds (I was looking to improve that speed with multiprocessing).
With version 2 I used this:
```python
import time

from influxdb_client import InfluxDBClient, WriteOptions

_client = InfluxDBClient(url="http://localhost:9999", token=token, org="org")  # token defined elsewhere
_write_client = _client.write_api(write_options=WriteOptions(batch_size=10000,
                                                             flush_interval=10_000,
                                                             jitter_interval=0,
                                                             retry_interval=5_000))

start = time.time()
_write_client.write('data', record=imp_dat[0], data_frame_measurement_name='coinmarketcap_ohlcv',
                    data_frame_tag_columns=['quote_asset', 'base_asset'])
print(time.time() - start)
```
This takes 118 seconds...
The data looks like:

@bednar | closed | 2020-05-11T09:44:10Z | 2020-06-02T06:31:35Z | https://github.com/influxdata/influxdb-client-python/issues/92 | [
"enhancement"
] | benb92 | 2 |
huggingface/diffusers | pytorch | 11,127 | Civit AI flux model razor-8step-rapid-real not working with diffusers single file | ### Describe the bug
We have this civit AI model: https://civitai.com/models/849864/razor-8step-rapid-real which we want to run using `from_single_file`, but it errors out
### Reproduction
1) First create your CivitAI API key by logging into civit ai and navigating to https://civitai.com/user/account
Then go to "API Keys" section in the bottom and create your key.
2) Run the following command on terminal: `wget --show-progress -O model.safetensors "https://api.civitai.com/download/models/950841?token=YOUR_TOKEN"`
3) Try the code:
```
import torch
from diffusers import FluxPipeline
#wget --show-progress -O model.safetensors "https://api.civitai.com/download/models/950841?token="
pipe = FluxPipeline.from_single_file(
"model.safetensors",
torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cuda").manual_seed(0)
).images[0]
image.save("flux.png")
```
### Logs
```shell
(3.7) user@c6dbd33b-904f-4d4e-bc4e-f68f78a80315:~/runware/Ali/sd-base-api$ python ali.py
Fetching 16 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 24745.16it/s]
Loading pipeline components...: 57%|█████████████████████████████████████████████████████████▏ | 4/7 [00:16<00:12, 4.16s/it]You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
Loading pipeline components...: 71%|███████████████████████████████████████████████████████████████████████▍ | 5/7 [00:16<00:06, 3.37s/it]
Traceback (most recent call last):
File "/home/user/runware/Ali/sd-base-api/ali.py", line 7, in <module>
pipe = FluxPipeline.from_single_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/diffusers/loaders/single_file.py", line 509, in from_single_file
loaded_sub_model = load_single_file_sub_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/diffusers/loaders/single_file.py", line 104, in load_single_file_sub_model
loaded_sub_model = load_method(
^^^^^^^^^^^^
File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/diffusers/loaders/single_file_model.py", line 343, in from_single_file
diffusers_format_checkpoint = checkpoint_mapping_fn(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/diffusers/loaders/single_file_utils.py", line 2255, in convert_flux_transformer_checkpoint_to_diffusers
q, k, v, mlp = torch.split(checkpoint.pop(f"single_blocks.{i}.linear1.weight"), split_size, dim=0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/torch/functional.py", line 207, in split
return tensor.split(split_size_or_sections, dim)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/torch/_tensor.py", line 983, in split
return torch._VF.split_with_sizes(self, split_size, dim)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: split_with_sizes expects split_sizes to sum exactly to 33030144 (input tensor's size at dimension 0), but got split_sizes=[3072, 3072, 3072, 12288]
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.12.9
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.27.1
- Transformers version: 4.47.1
- Accelerate version: 1.2.1
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.5.0
- xFormers version: not installed
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sayakpaul | open | 2025-03-20T17:11:49Z | 2025-03-21T13:42:43Z | https://github.com/huggingface/diffusers/issues/11127 | [
"bug"
] | ali-afridi26 | 4 |
miguelgrinberg/flasky | flask | 509 | Current version of chromedriver breaks pinned version of selenium | Running the tests on my Ubuntu 20.04 server against tag `5d` resulted in
```
test_admin_home_page (test_selenium.SeleniumTestCase) ... skipped 'Web browser not available'
```
So I hacked `tests/test_selenium.py` to add code to re-raise the exception caught in `SeleniumTestCase.setUpClass()` which got me
```
ERROR: setUpClass (test_selenium.SeleniumTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/bkline/repos/flasky/tests/test_selenium.py", line 19, in setUpClass
cls.client = webdriver.Chrome(chrome_options=options)
File "/home/bkline/repos/flasky/venv/lib/python3.8/site-packages/selenium/webdriver/chrome/webdriver.py", line 65, in __init__
RemoteWebDriver.__init__(
File "/home/bkline/repos/flasky/venv/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 98, in __init__
self.start_session(desired_capabilities, browser_profile)
File "/home/bkline/repos/flasky/venv/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 188, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/bkline/repos/flasky/venv/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 256, in execute
self.error_handler.check_response(response)
File "/home/bkline/repos/flasky/venv/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 194, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: invalid argument: unrecognized capability: chromeOptions
```
A little more digging found some [advice](https://stackoverflow.com/questions/57995521/selenium-common-exceptions-webdriverexception-message-invalid-argument-unreco) which suggested upgrading `selenium` to accommodate changes to the behavior of the launcher in `chromedriver`. Sure enough, running `pip install -U selenium` got the selenium test back in the game. So it would seem that the requirements documents need to be updated to bump the version of `selenium`. I'm at 3.141.0 which is what PyPI is currently serving up. I haven't done the testing to determine the minimum version needed to avoid the breakage I ran into, but I'm not seeing any adverse side effects from this version.
```bash
(venv) $ grep selenium requirements/dev.txt
selenium==3.4.3
(venv) $ pip freeze | grep selenium
selenium==3.141.0
(venv) $ dpkg -l | grep chromium
ii chromium-browser 1:85.0.4183.83-0ubuntu0.20.04.2 amd64 Transitional package - chromium-browser -> chromium snap
ii chromium-chromedriver 1:85.0.4183.83-0ubuntu0.20.04.2 amd64 Transitional package - chromium-chromedriver -> chromium snap
``` | closed | 2021-04-13T19:00:23Z | 2021-04-29T23:12:52Z | https://github.com/miguelgrinberg/flasky/issues/509 | [
"bug"
] | bkline | 2 |
recommenders-team/recommenders | data-science | 1,801 | [BUG] For an nrms model, run_fast_eval does not return the correct prediction scores | ### Description
I believe that for the nrms model, the output of `run_fast_eval` is incorrect. (See code [here](https://github.com/microsoft/recommenders/blob/98d661edc6a9965c7f42b76dc5317af3ae74d5e0/recommenders/models/newsrec/models/base_model.py#L399).)
### In which platform does it happen?
Jupyter Lab running in Ubuntu 20.04.4 LTS (Focal Fossa). Using Python version 3.8.10 with tensorflow version 2.8.0.
### How do we replicate the issue?
Run the following code and verify that the outputs are different (I believe they should be the same):
```
group_impr_indexes, group_labels, group_preds = model.run_slow_eval(valid_news_file, valid_behaviors_file)
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
```
Also, notice that the group_preds output of `run_slow_eval` is in the range [0,1] (as expected) while the output group_preds of `run_fast_eval` is not.
### Expected behavior (i.e. solution)
As stated in the [NRMS paper](https://wuch15.github.io/paper/EMNLP2019-NRMS.pdf) (equation 11), the output should be click probabilities (in the range [0,1]). I think this can be fixed by adding a sigmoid function after the computation of the dot product on [line 416](https://github.com/microsoft/recommenders/blob/98d661edc6a9965c7f42b76dc5317af3ae74d5e0/recommenders/models/newsrec/models/base_model.py#L416).
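The fix would be a one-liner along these lines (a sketch mirroring the issue's suggestion; the variable names are placeholders for whatever the dot product uses on that line):
```python
# Sketch: wrap the dot-product score in a sigmoid to get click probabilities
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

pred = sigmoid(np.dot(news_vector, user_vector))  # placeholder names
```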
### Other Comments
| closed | 2022-07-25T20:30:23Z | 2024-07-16T18:03:07Z | https://github.com/recommenders-team/recommenders/issues/1801 | [
"bug"
] | AmandaRP | 1 |
tqdm/tqdm | jupyter | 1,505 | Troubles regarding removing the progress bar after loop | Hi,
I want to use tqdm in a nested loop, and it would help if the progress bar for the inner loop could be removed after the inner loop finishes. I tried passing the parameter "leave=False" as the docs describe. It does remove the progress bar at the end, but it leaves a blank line, which means the next progress bar starts on a new line.

The mini example is listed as below.
```python
from tqdm.auto import tqdm, trange
import time

for i in trange(4, leave=True):
    for j in trange(5, total=5, leave=False):
        time.sleep(0.5)
```
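One workaround I am considering (a sketch using tqdm's `reset()`, which avoids recreating the inner bar each time):
```python
# Sketch: reuse a single inner bar and reset() it on every outer iteration
from tqdm.auto import tqdm, trange
import time

inner = tqdm(total=5, leave=False)
for i in trange(4):
    inner.reset()
    for j in range(5):
        time.sleep(0.5)
        inner.update(1)
inner.close()
```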
I wonder whether it may relate to my working environment (two setups):
1. JupyterLab 3.4.4, RHEL 8.8 (linux), jupyterlab 3.4.4, python 3.10.11, tqdm 4.65.0, ipywidgets 7.6.5
2. Pycharm using jupyter notebook 6.5.2, python 3.10.12, tqdm 4.65.0, ipywidgets 8.0.4
| open | 2023-09-01T08:05:58Z | 2023-10-04T20:17:49Z | https://github.com/tqdm/tqdm/issues/1505 | [] | lensory | 1 |
jupyter-book/jupyter-book | jupyter | 1,389 | Generate lists of pages based on tag metadata | It would be useful if users could attach **tags** to certain pages, and then use these tags to generate lists of pages that fall under that tag. This is a common thing to do in blogging platforms, and may be useful here as well.
[ABlog kind-of supports this](https://ablog.readthedocs.io/en/latest/), but it uses a directive rather than page-level metadata.
Ideally, people would be able to include metadata at the top of their pages like:
```
tags: tag1, tag2
```
and then include a list of pages with those tags via something like:
````
# To generate a list of pages that have `tag1` in their metadata
```{tag-list} tag1
```
````
| open | 2021-07-08T14:42:25Z | 2021-07-08T14:54:01Z | https://github.com/jupyter-book/jupyter-book/issues/1389 | [
"enhancement"
] | choldgraf | 2 |
docarray/docarray | pydantic | 1,557 | Not able to index when using List[str] in custom document class | System: `macOS 13.3.1`
Python version: `3.11.0`
IPython version: `8.10.0`
In the latest docarray version, when building hnsw index from the following simple document class:
```python
from docarray import BaseDoc
from docarray.index import HnswDocumentIndex
class MyDoc(BaseDoc):
test: List[str]
index = HnswDocumentIndex[MyDoc]()
```
The following error will pop up:
```
File ~/miniconda3/envs/test/lib/python3.11/site-packages/docarray/index/backends/hnswlib.py:76, in HnswDocumentIndex.__init__(self, db_config, **kwargs)
73 if db_config is not None and getattr(db_config, 'index_name'):
74 db_config.work_dir = db_config.index_name.replace("__", "/")
---> 76 super().__init__(db_config=db_config, **kwargs)
77 self._db_config = cast(HnswDocumentIndex.DBConfig, self._db_config)
78 self._work_dir = self._db_config.work_dir
File ~/miniconda3/envs/test/lib/python3.11/site-packages/docarray/index/abstract.py:111, in BaseDocIndex.__init__(self, db_config, subindex, **kwargs)
109 self._runtime_config = self.RuntimeConfig()
110 self._logger.info('Runtime config created')
--> 111 self._column_infos: Dict[str, _ColumnInfo] = self._create_column_infos(
112 self._schema
113 )
114 self._is_subindex = subindex
115 self._subindices: Dict[str, BaseDocIndex] = {}
File ~/miniconda3/envs/test/lib/python3.11/site-packages/docarray/index/abstract.py:880, in BaseDocIndex._create_column_infos(self, schema)
873 """Collects information about every column that is implied by a given schema.
874
875 :param schema: The schema (subclass of BaseDoc) to analyze and parse
876 columns from
877 :returns: A dictionary mapping from column names to column information.
878 """
879 column_infos: Dict[str, _ColumnInfo] = dict()
--> 880 for field_name, type_, field_ in self._flatten_schema(schema):
881 # Union types are handle in _flatten_schema
882 if issubclass(type_, AnyDocArray):
883 column_infos[field_name] = _ColumnInfo(
884 docarray_type=type_, db_type=None, config=dict(), n_dim=None
885 )
File ~/miniconda3/envs/test/lib/python3.11/site-packages/docarray/index/abstract.py:860, in BaseDocIndex._flatten_schema(cls, schema, name_prefix)
856 else:
857 raise ValueError(
858 f'Union type {t_} is not supported. Only Union of subclasses of AbstractTensor or Union[type, None] are supported.'
859 )
--> 860 elif issubclass(t_, BaseDoc):
861 names_types_fields.extend(
862 cls._flatten_schema(t_, name_prefix=inner_prefix)
863 )
864 elif issubclass(t_, AbstractTensor):
File <frozen abc>:123, in __subclasscheck__(cls, subclass)
TypeError: issubclass() arg 1 must be a class
```
This also happens with other typing constructs such as `Sequence`, `Iterable`, and `Tuple`. | closed | 2023-05-21T21:23:55Z | 2023-05-26T09:20:48Z | https://github.com/docarray/docarray/issues/1557 | [] | lazyhope | 5 |
ultralytics/yolov5 | deep-learning | 13,173 | Saving Early Stopping Patience Value in last.pt Checkpoint | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello,
I have a question regarding the checkpointing mechanism in YOLOv5, specifically related to saving and resuming the training process.
When training a YOLOv5 model, the last.pt checkpoint saves the model's weights and optimizer state. However, it appears that training process parameters, such as the early stopping patience value, are not included in this checkpoint.
**If my training is interrupted and I restart from the `last.pt` checkpoint, does the patience value reset to zero, or does it continue from the previously recorded value?**
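While waiting for an answer, here is a self-contained sketch of how the patience bookkeeping *could* be persisted alongside a checkpoint; the class and keys below are illustrative stand-ins, not yolov5's actual checkpoint schema:

```python
import torch

class EarlyStopping:
    """Minimal stand-in for yolov5-style patience tracking (illustrative only)."""
    def __init__(self, patience=30):
        self.patience = patience
        self.best_fitness = 0.0
        self.best_epoch = 0

    def step(self, epoch, fitness):
        if fitness >= self.best_fitness:
            self.best_fitness, self.best_epoch = fitness, epoch
        return (epoch - self.best_epoch) >= self.patience  # True -> stop training

stopper = EarlyStopping(patience=5)
# Saving the counters next to the weights would let a resumed run continue
# from the recorded value instead of restarting the patience window:
torch.save({"best_fitness": stopper.best_fitness, "best_epoch": stopper.best_epoch}, "stopper_state.pt")
state = torch.load("stopper_state.pt")
stopper.best_fitness, stopper.best_epoch = state["best_fitness"], state["best_epoch"]
```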
### Additional
_No response_ | open | 2024-07-07T13:31:17Z | 2024-08-08T00:23:26Z | https://github.com/ultralytics/yolov5/issues/13173 | [
"question",
"Stale"
] | mabubakarsaleem | 2 |
nalepae/pandarallel | pandas | 155 | How to use pandarallel_apply with multiple arguments? | I have code like below:
```
def similarity(txt1, txt2):
return xxxxxxx
vSimilarity = np.vectorize(similarity)
vSimilarity(df['var1'], df['var2'])
```
How can I convert this to use `parallel_apply` with multiple columns? A row-wise sketch that might work is below; the tuple-style call after it is what I tried, and it does not work.
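For reference, pandarallel parallelizes `DataFrame.apply`, so a hedged workaround is a row-wise `parallel_apply` with `axis=1` (untested sketch; the `initialize()` arguments, similarity body, and sample data are assumptions):

```python
import pandas as pd
from pandarallel import pandarallel

pandarallel.initialize()

def similarity(txt1, txt2):
    return abs(len(str(txt1)) - len(str(txt2)))  # stand-in for the real metric

df = pd.DataFrame({"var1": ["a", "bb"], "var2": ["ccc", "d"]})
# Applying over rows lets similarity receive both columns at once:
result = df.parallel_apply(lambda row: similarity(row["var1"], row["var2"]), axis=1)
print(result)
```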
`(df['var1'], df['var2']).parallel_apply(similarity)` | closed | 2021-09-15T00:12:37Z | 2022-09-05T10:05:18Z | https://github.com/nalepae/pandarallel/issues/155 | [] | Ribo-Py | 1 |
tiangolo/uwsgi-nginx-flask-docker | flask | 21 | Alpine Linux | Have you considered using alpine linux? | closed | 2017-09-22T11:32:52Z | 2017-09-22T23:41:46Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/21 | [] | claygorman | 3 |
huggingface/datasets | numpy | 6,882 | Connection Error When Using By-pass Proxies | ### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel. After exporting HTTP_PROXY and HTTPS_PROXY to the port that Clash provides🤔, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))"
I have already read the documentation provided on Hugging Face, but I didn't see detailed instructions on how to set up proxies for this library.
### Steps to reproduce the bug
1. Turn on any proxy software like Clash / ShadowsocksR etc.
2. Export the system variables to the port provided by your proxy software in WSL (it's OK for other applications to use the proxy, except the datasets library)
3. Load any dataset from Hugging Face online
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
Cell In[33], [line 3](vscode-notebook-cell:?execution_count=33&line=3)
[1](vscode-notebook-cell:?execution_count=33&line=1) from datasets import load_metric
----> [3](vscode-notebook-cell:?execution_count=33&line=3) metric = load_metric("seqeval")
File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
[44](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:44) warnings.warn(warning_msg, category=FutureWarning, stacklevel=2)
[45](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:45) _emitted_deprecation_warnings.add(func_hash)
---> [46](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46) return deprecated_function(*args, **kwargs)
File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs)
[2101](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2101) warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning)
[2103](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2103) download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS)
-> [2104](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2104) metric_module = metric_module_factory(
[2105](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2105) path,
[2106](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2106) revision=revision,
[2107](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2107) download_config=download_config,
[2108](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2108) download_mode=download_mode,
[2109](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2109) trust_remote_code=trust_remote_code,
[2110](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2110) ).module_path
[2111](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2111) metric_cls = import_main_class(metric_module, dataset=False)
[2112](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2112) metric = metric_cls(
[2113](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2113) config_name=config_name,
[2114](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2114) process_id=process_id,
...
--> [633](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:633) raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
[634](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:634) elif response is not None:
[635](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:635) raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))")))
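For reference, one way to hand the proxy to the library explicitly instead of relying on environment variables is via `DownloadConfig` (hedged sketch; I believe its `proxies` dict is forwarded to requests, and port 7890 is only an assumed Clash default):

```python
from datasets import load_dataset, DownloadConfig

proxies = {"http": "http://127.0.0.1:7890", "https": "http://127.0.0.1:7890"}
cfg = DownloadConfig(proxies=proxies)
ds = load_dataset("imdb", download_config=cfg)  # any small dataset works for a smoke test
```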
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0 | open | 2024-05-08T06:40:14Z | 2024-05-17T06:38:30Z | https://github.com/huggingface/datasets/issues/6882 | [] | MRNOBODY-ZST | 1 |
keras-team/keras | tensorflow | 20,830 | AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'LocallyConnected1D' | AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'LocallyConnected1D'

| closed | 2025-01-31T06:13:11Z | 2025-03-01T02:07:46Z | https://github.com/keras-team/keras/issues/20830 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | dongloong | 3 |
art049/odmantic | pydantic | 181 | After resolving python circular import error, docs not working. | Using odmantic 0.3.5 with this model:
person.py
```
from __future__ import annotations
from typing import Optional
from odmantic import Field, Model
# from pydantic import Field, BaseModel as Model  # <-- docs work fine but odmantic raises an error

class Person(Model):
    name: Optional[str] = Field(None, <-- some extra options --> )
    rooms: "Optional[Room]" = Field(None, <-- some extra options --> )

from ..models.room import Room  # nopep8
Person.update_forward_refs()
```
room.py
```
from __future__ import annotations
from typing import Optional
from odmantic import Field, Model
# from pydantic import Field, BaseModel as Model  # <-- docs work fine but odmantic raises an error

class Room(Model):
    name: Optional[str] = Field(None, <-- some extra options --> )
    persons: "Optional[Person]" = Field(None, <-- some extra options --> )

from ..models.person import Person  # nopep8
Room.update_forward_refs()
```
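For context, a possible workaround is a single-module layout that avoids forward references entirely (hedged sketch: the extra Field options are elided, and the id-based references are my assumption about what the schema could look like instead of mutual nesting):

```python
from typing import List, Optional
from odmantic import Field, Model

class Room(Model):
    name: Optional[str] = Field(None)
    person_ids: List[str] = Field(default_factory=list)  # reference Person docs by id

class Person(Model):
    name: Optional[str] = Field(None)
    room_ids: List[str] = Field(default_factory=list)  # reference Room docs by id
```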
### Current Behavior
this [project](https://github.com/SSaeedHoseini/odmantic-crash-docs-example) reproduce the bug, you can execute ./run.sh and open the http://127.0.0.1:8000/docs#/
### Expected behavior
show docs.

### Environment
- ODMantic version: 0.3.5
- MongoDB version: 4.4.8
- Pydantic infos (output of `python -c "import pydantic.utils; print(pydantic.utils.version_info())`):
```
pydantic version: 1.8.2
pydantic compiled: True
install path: /xxx/lib/python3.9/site-packages/pydantic
python version: 3.9.5 (default, May 11 2021, 08:20:37) [GCC 10.3.0]
platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.33
optional deps. installed: ['dotenv', 'typing-extensions']
```
**Additional context**
error raised from when open the docs:
```
Traceback (most recent call last):
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 375, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/fastapi/applications.py", line 208, in __call__
await super().__call__(scope, receive, send)
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/routing.py", line 580, in __call__
await route.handle(scope, receive, send)
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/routing.py", line 241, in handle
await self.app(scope, receive, send)
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/starlette/routing.py", line 52, in app
response = await func(request)
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/fastapi/applications.py", line 161, in openapi
return JSONResponse(self.openapi())
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/fastapi/applications.py", line 136, in openapi
self.openapi_schema = get_openapi(
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/fastapi/openapi/utils.py", line 387, in get_openapi
definitions = get_model_definitions(
File "/xxx/odmantic-crash-docs-example/.venv/lib/python3.9/site-packages/fastapi/utils.py", line 24, in get_model_definitions
m_schema, m_definitions, m_nested_models = model_process_schema(
File "pydantic/schema.py", line 548, in pydantic.schema.model_process_schema
File "pydantic/schema.py", line 589, in pydantic.schema.model_type_schema
File "pydantic/schema.py", line 241, in pydantic.schema.field_schema
File "pydantic/schema.py", line 495, in pydantic.schema.field_type_schema
File "pydantic/schema.py", line 839, in pydantic.schema.field_singleton_schema
File "/usr/lib/python3.9/abc.py", line 102, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
TypeError: issubclass() arg 1 must be a class
```
| closed | 2021-09-12T06:42:55Z | 2022-08-23T15:15:34Z | https://github.com/art049/odmantic/issues/181 | [
"bug"
] | SSaeedHoseini | 1 |
seleniumbase/SeleniumBase | pytest | 2,871 | HTTP Request Interception | I'm trying the code below and it keeps giving me the following error: `No module named 'blinker._saferef'`
How can I fix it? Also, it would be great if I could change the request's parameters/headers before the request continues, like in Playwright/Puppeteer (e.g. changing a certain header). A possible pattern for that is sketched below, and my failing repro follows it.
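For the header-modification part, drivers built on selenium-wire typically also expose a `request_interceptor` (hedged sketch; I'm assuming SeleniumBase forwards selenium-wire's interceptor API unchanged):

```python
from seleniumbase import Driver

def intercept_request(request):
    request.headers["X-Custom-Header"] = "demo"  # add a header before the request is sent

driver = Driver(wire=True)
try:
    driver.request_interceptor = intercept_request
    driver.get("https://wikipedia.org")
finally:
    driver.quit()
```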
```
from seleniumbase import Driver

def intercept_response(request, response):
    print(request.headers)

driver = Driver(wire=True)
try:
    driver.response_interceptor = intercept_response
    driver.get("https://wikipedia.org")
finally:
    driver.quit()
```
 | closed | 2024-06-25T11:05:29Z | 2024-06-25T13:25:11Z | https://github.com/seleniumbase/SeleniumBase/issues/2871 | [
"question"
] | namename-123 | 1 |
httpie/cli | rest-api | 1,420 | Documentation / help: where is the config stored | Running http --help should state where the config file is. The man page should also state that. It should also state the format of the config file.
| closed | 2022-07-07T16:03:40Z | 2022-07-12T15:37:05Z | https://github.com/httpie/cli/issues/1420 | [
"enhancement",
"new"
] | hholst80 | 1 |
unit8co/darts | data-science | 2,241 | Custom metric for LinearRegression.gridsearch | Hi!
I would like to pass this custom metric to gridsearch:
```
import numpy as np

def asymmetric_custom_metric(y_true, y_pred, penalization_factor=5):
    """
    Custom loss function that penalizes predictions below the true value more than predictions above the value.

    Parameters:
        y_true (ndarray): Array of true values.
        y_pred (ndarray): Array of predicted values.

    Returns:
        float: Custom loss value.
    """
    # Calculate the difference between true and predicted values
    diff = (y_pred - y_true).astype(float)
    # Calculate the loss
    loss = np.where(diff < 0, np.square(y_true - y_pred) * (1 + penalization_factor), np.square(y_true - y_pred))
    # Calculate the average loss
    avg_loss = np.mean(loss)
    return avg_loss
```
But gridsearch only accepts metrics coming directly from darts.metrics.
Is there a wrapper to transform sklearn or custom metrics into a darts metric? (A possible adapter is sketched below.)
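For what it's worth, darts metrics take two `TimeSeries` arguments, so a thin adapter with that signature might already be accepted by gridsearch (hedged, untested sketch; the signature convention is my reading of darts' own metrics):

```python
import numpy as np
from darts import TimeSeries

def asymmetric_darts_metric(actual_series: TimeSeries, pred_series: TimeSeries) -> float:
    # Unwrap to numpy and reuse the array-based logic above
    y_true = actual_series.values().astype(float).ravel()
    y_pred = pred_series.values().astype(float).ravel()
    diff = y_pred - y_true
    loss = np.where(diff < 0, np.square(diff) * 6, np.square(diff))  # penalization_factor=5 -> x6
    return float(np.mean(loss))
```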
Thanks!
Brian.
| closed | 2024-02-21T14:29:23Z | 2024-03-01T08:32:23Z | https://github.com/unit8co/darts/issues/2241 | [
"question"
] | brianreinke95 | 1 |
numba/numba | numpy | 9,951 | KeyError: 'LLVMPY_AddSymbol' | Hello, I ran into a problem when using the numba library. When running any code using this library, the following error occurs:
`File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\ffi.py", line 141, in __getattr__
return self._fntab[name]
~~~~~~~~~~~^^^^^^
KeyError: 'LLVMPY_AddSymbol'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\ffi.py", line 122, in _load_lib
self._lib_handle = ctypes.CDLL(str(lib_path))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\ctypes\__init__.py", line 379, in __init__
self._handle = _dlopen(self._name, mode)
^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: Could not find module 'C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\llvmlite.dll' (or one of its dependencies). Try using the full path with constructor syntax.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\workp\test\main.py", line 3, in <module>
from numba.core import types
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\numba\__init__.py", line 73, in <module>
from numba.core import config
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\numba\core\config.py", line 17, in <module>
import llvmlite.binding as ll
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\__init__.py", line 4, in <module>
from .dylib import *
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\dylib.py", line 36, in <module>
ffi.lib.LLVMPY_AddSymbol.argtypes = [
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\ffi.py", line 144, in __getattr__
cfn = getattr(self._lib, name)
^^^^^^^^^
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\ffi.py", line 136, in _lib
self._load_lib()
File "C:\Users\admin\AppData\Local\Programs\Python\Python312\Lib\site-packages\llvmlite\binding\ffi.py", line 130, in _load_lib
raise OSError("Could not find/load shared object file") from e
OSError: Could not find/load shared object file
```
I run the code through PyCharm 2024.2.1 on Windows 11 Pro. I installed this library via pip versions 25.0.1, 23.1, and 23.2. I tried downloading the entire package, and the three packages separately, checking the documentation:
https://numba.readthedocs.io/en/stable/user/installing.html#numba-support-info
I ran several versions of these packages, from the latest versions down to versions that are supported on Python 3.9. I tried to solve the problem on my own by reading various forums, checking the integrity and availability of files, and running the code in different ways, but none of this helped. I really hope for your help.
## Reporting a bug
- [x] I have tried using the latest released version of Numba (most recent is visible in the release notes: https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [ ] I have included a self contained code sample to reproduce the problem, i.e. it's possible to run as 'python bug.py'.
| closed | 2025-03-01T23:50:48Z | 2025-03-02T12:40:01Z | https://github.com/numba/numba/issues/9951 | [] | gumen674 | 1 |
igorbenav/fastcrud | pydantic | 141 | Duplicate Values when use relationship type one-to-many in a JoinConfig with get_multi_joined | **Describe the bug or question**
I'm trying to get data from tables where a user (table 1) can be part of a company (table 2) and have multiple posts (table 3) on the company's forum. Using get_multi_joined and joins_config, I want to get each user's company info and a list (won't be more than 5 items) of their posts on the forum, but the posts are returning duplicate values.
**To Reproduce**
```python
# Your code here
crud_user.get_multi_joined(
    db=db,
    is_deleted=False,
    schema_to_select=UserModelCompPostsRead,
    nest_joins=True,
    joins_config=[
        JoinConfig(
            model=CompanyInfo,
            join_on=User.company_id == CompanyInfo.id,
            join_prefix="company",
            schema_to_select=CompanyInfoRead,
        ),
        JoinConfig(
            model=ForumPosts,
            join_on=User.id == ForumPosts.user_id,
            join_prefix="forum_posts",
            schema_to_select=ForumPosts,
            relationship_type="one-to-many",
        ),
    ],
)
```
**Description**
I'm receiving duplicates in the list of posts when the expected results should be an item in the list once it meets the join_on requirement.
expectation:
```
"data": [
{
"name": "John Kin",
"id": 2,
"company_id": 1,
"company": {
"name": "Company",
"created_at": "2024-07-11T18:58:41.460483Z"
},
"forum_posts": [
{
"title": "Money Talks",
"user_id": 2,
"id": 41,
},
{
"title": "Green Goblin vs Spiderman",
"user_id": 2,
"id": 42,
},
]
},
...
# other users
]
```
actual output:
```
"data": [
{
"name": "John Kin",
"id": 2,
"company_id": 1,
"company": {
"name": "Company",
"created_at": "2024-07-11T18:58:41.460483Z"
},
"forum_posts": [
{
"title": "Money Talks",
"user_id": 2,
"id": 41,
},
{
"title": "Green Goblin vs Spiderman",
"user_id": 2,
"id": 42,
},
# values above are returned below too (duplicates)
{
"title": "Money Talks",
"user_id": 2,
"id": 41,
},
{
"title": "Green Goblin vs Spiderman",
"user_id": 2,
"id": 42,
},
]
},
...
# other users
]
```
**Additional context**
fastcrud = "^0.13.1"
SQLAlchemy = "^2.0.25"
fastapi = "^0.109.1" | closed | 2024-07-29T08:47:56Z | 2024-12-23T03:38:01Z | https://github.com/igorbenav/fastcrud/issues/141 | [] | gr1nch3 | 5 |
huggingface/transformers | deep-learning | 35,994 | model.parameters() return [Parameter containing: tensor([], device='cuda:0', dtype=torch.bfloat16, requires_grad=True)] when using zero3 | ### System Info
transformers 4.44.2
accelerate 1.2.1
deepspeed 0.12.2
torch 2.2.2
torchaudio 2.2.2
torchvision 0.17.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Try to print **model.parameters()** inside the transformers `Trainer`, but it returns **Parameter containing: tensor([], device='cuda:0', dtype=torch.bfloat16, requires_grad=True)** for all layers.
In fact, I am trying to return the correct **model.parameters()** in DeepSpeed Zero-3 mode and use the EMA model. Could you suggest any ways to solve the above issue, or any other methods to use the EMA model under Zero-3?
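For reference, a hedged sketch of how I understand full parameters can be materialized under ZeRO-3 for an EMA update (`deepspeed.zero.GatheredParameters` is the documented context manager; the EMA math and model objects here are placeholders):

```python
import deepspeed  # assumes a ZeRO-3-initialized engine wraps `model`

def ema_update(model, ema_model, decay=0.999):
    params = list(model.parameters())
    # GatheredParameters temporarily reassembles the ZeRO-3 shards so .data is
    # non-empty; modifier_rank=None means the gather is read-only on every rank.
    with deepspeed.zero.GatheredParameters(params, modifier_rank=None):
        for p, ema_p in zip(params, ema_model.parameters()):
            ema_p.data.mul_(decay).add_(p.data, alpha=1 - decay)
```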
### Expected behavior
expect to see the gathered parameters | closed | 2025-01-31T16:42:47Z | 2025-03-11T08:03:44Z | https://github.com/huggingface/transformers/issues/35994 | [
"bug"
] | fanfanffff1 | 2 |
django-cms/django-cms | django | 7,310 | [BUG] | On editing source on "text" plugin of cms loses some of the data and changes it to something else.
## Steps to reproduce
1. Create a cms page
2. Click on the add plugin button and add `Text` plugin.
3. Click on the source button and put the source below
```
<ul style="color: white;">
  <li>
    <h4>
      <a href="https://www.google.com" target="_blank">Google</a>
      <a class="padding-left-5px" href="https://mail.google.com/" title="Gmail"> <i class="fa fa-envelope" aria-hidden="true"></i></a>
    </h4>
  </li>
</ul>
```
## Expected behaviour

## Actual behaviour

This works fine and as expected
Now if we do the above steps and
1. Click on the `Text` plugin again and try to edit it. It removes the Font Awesome code that was written earlier.
```
<ul style="color: white;">
  <li>
    <h4><a href="https://www.google.com" target="_blank">Google</a> <a class="padding-left-5px" href="https://mail.google.com/" title="Gmail"> </a></h4>
  </li>
</ul>
```
It looks like this after the edit.
Now, since the Font Awesome icon is removed, if I save this again it becomes something like the following:

So my question is: why is the Font Awesome code getting removed?
"status: marked for rejection",
"status: non-issue"
] | snehlata08 | 3 |
autogluon/autogluon | computer-vision | 4,369 | [BUG] Time series tabular model always uses fallback method (SeasonalNaive) for time series of length 1 | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
The tabular time series models identify time series of length 1 as too short for inference, even when differencing is set to 0. These time series are thus predicted using the fallback method, SeasonalNaive. Specifically, [this line of code](https://github.com/autogluon/autogluon/blob/ea2e8ff7082454565fbae31ebd5653851d5b4601/timeseries/src/autogluon/timeseries/models/autogluon_tabular/mlforecast.py#L368) is causing the issue.
**Expected behavior**
When differencing isn't applied, I'd expect the tabular time series models to produce predictions for time series of length 1, rather than using a fallback method.
**To Reproduce**
The following simplified example reproduces the issue. Two time series (item_ids ['4123__23', '7510__21']) are of length 1 and are predicted using the fallback method (SeasonalNaive), although sufficient data is available for a tabular prediction.
```python
import pandas
from autogluon.timeseries import TimeSeriesPredictor
from pandas import Timestamp
data = [
{'item_id': '2527__18', 'timestamp': Timestamp('2008-01-01 00:00:00'), 'quantity_sold': 18.0},
{'item_id': '2527__18', 'timestamp': Timestamp('2009-01-01 00:00:00'), 'quantity_sold': 682.0},
{'item_id': '2572__16', 'timestamp': Timestamp('2006-01-01 00:00:00'), 'quantity_sold': 6.0},
{'item_id': '2572__16', 'timestamp': Timestamp('2007-01-01 00:00:00'), 'quantity_sold': 18.0},
{'item_id': '2572__16', 'timestamp': Timestamp('2008-01-01 00:00:00'), 'quantity_sold': 22.0},
{'item_id': '2572__16', 'timestamp': Timestamp('2009-01-01 00:00:00'), 'quantity_sold': 74.0},
{'item_id': '4123__23', 'timestamp': Timestamp('2009-01-01 00:00:00'), 'quantity_sold': 138.0},
{'item_id': '695__24', 'timestamp': Timestamp('2001-01-01 00:00:00'), 'quantity_sold': 4.0},
{'item_id': '695__24', 'timestamp': Timestamp('2002-01-01 00:00:00'), 'quantity_sold': 92.0},
{'item_id': '695__24', 'timestamp': Timestamp('2003-01-01 00:00:00'), 'quantity_sold': 40.0},
{'item_id': '695__24', 'timestamp': Timestamp('2004-01-01 00:00:00'), 'quantity_sold': 116.0},
{'item_id': '695__24', 'timestamp': Timestamp('2005-01-01 00:00:00'), 'quantity_sold': 48.0},
{'item_id': '695__24', 'timestamp': Timestamp('2006-01-01 00:00:00'), 'quantity_sold': 132.0},
{'item_id': '695__24', 'timestamp': Timestamp('2007-01-01 00:00:00'), 'quantity_sold': 6.0},
{'item_id': '695__24', 'timestamp': Timestamp('2008-01-01 00:00:00'), 'quantity_sold': 26.0},
{'item_id': '695__24', 'timestamp': Timestamp('2009-01-01 00:00:00'), 'quantity_sold': 6.0},
{'item_id': '7510__21', 'timestamp': Timestamp('2009-01-01 00:00:00'), 'quantity_sold': 56.0}
]
df = pandas.DataFrame.from_dict(data)
predictor = TimeSeriesPredictor(
target='quantity_sold',
prediction_length=2,
freq='YS'
)
predictor.fit(
train_data=df,
hyperparameters={
'DirectTabular': {},
}
)
predictor.predict(df)
```
**Screenshots / Logs**
<!-- If applicable, add screenshots or logs to help explain your problem. -->

**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
INSTALLED VERSIONS
------------------
date : 2024-08-07
time : 09:35:12.422961
python : 3.11.9.final.0
OS : Darwin
OS-release : 23.6.0
Version : Darwin Kernel Version 23.6.0: Fri Jul 5 17:53:24 PDT 2024; root:xnu-10063.141.1~2/RELEASE_ARM64_T6020
machine : arm64
processor : arm
num_cores : 12
cpu_ram_mb : 32768.0
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 871868
accelerate : 0.21.0
autogluon : 1.1.1
autogluon.common : 1.1.1
autogluon.core : 1.1.1
autogluon.features : 1.1.1
autogluon.multimodal : 1.1.1
autogluon.tabular : 1.1.1
autogluon.timeseries : 1.1.1
boto3 : 1.34.154
catboost : None
defusedxml : 0.7.1
evaluate : 0.4.2
fastai : 2.7.16
gluonts : 0.15.1
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.4
joblib : 1.4.2
jsonschema : 4.21.1
lightgbm : 4.3.0
lightning : 2.3.3
matplotlib : 3.9.1
mlforecast : 0.10.0
networkx : 3.3
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.26.4
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
optimum : 1.17.1
optimum-intel : None
orjson : 3.10.6
pandas : 2.2.2
pdf2image : 1.17.0
Pillow : 10.4.0
psutil : 5.9.8
pytesseract : 0.3.10
pytorch-lightning : 2.3.3
pytorch-metric-learning: 2.3.0
ray : 2.10.0
requests : 2.32.3
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : None
scipy : 1.12.0
seqeval : 1.2.2
setuptools : 72.1.0
skl2onnx : None
statsforecast : 1.4.0
tabpfn : None
tensorboard : 2.17.0
text-unidecode : 1.3
timm : 0.9.16
torch : 2.3.1
torchmetrics : 1.2.1
torchvision : 0.18.1
tqdm : 4.66.5
transformers : 4.40.2
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
```
</details>
| closed | 2024-08-07T07:43:39Z | 2024-08-19T07:23:39Z | https://github.com/autogluon/autogluon/issues/4369 | [
"bug",
"module: timeseries"
] | JanusAsmussen | 1 |
jumpserver/jumpserver | django | 14,876 | [Feature] Use the microphone in the Windows client | ### Product Version
v4.1
### Version Type
- [ ] Community Edition
- [ ] Enterprise Edition
- [x] Enterprise Trial Edition
### Installation Method
- [ ] Online installation (one-click command installation)
- [x] Offline package installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source code installation
### ⭐️ Feature Description
We have a remote-work requirement: we want to use Telegram or WhatsApp on the company computer for calls, with the microphone audio carried to the company computer and then relayed through it to the other party (Windows remote desktop client).
### Proposed Solution
sorry
### Additional Information
_No response_ | closed | 2025-02-12T10:13:32Z | 2025-02-14T07:41:33Z | https://github.com/jumpserver/jumpserver/issues/14876 | [
"⭐️ Feature Request"
] | armored-glitch | 2 |
httpie/cli | api | 1,480 | I would like the option to disable the DNS Cache and do name resolution on every request | ## Checklist
- [ ] I've searched for similar feature requests.
---
## Enhancement request
…
---
## Problem it solves
E.g. “I'm always frustrated when […]”, “I’m trying to do […] so that […]”.
---
## Additional information, screenshots, or code examples
…
| open | 2023-02-16T00:34:13Z | 2023-02-16T00:34:13Z | https://github.com/httpie/cli/issues/1480 | [
"enhancement",
"new"
] | kahirokunn | 0 |
coleifer/sqlite-web | flask | 175 | image Docker.io | Message in English:
Hello Charles,
I noticed that the official Docker image for sqlite-web on Docker Hub is from 2020. Because of this, I encountered issues with missing Insert and Update buttons, despite these features being present in the latest version on GitHub. After building a new Docker image from the most recent GitHub code, everything worked perfectly. Could you please update the official Docker Hub image so that users can easily access the newest version of sqlite-web?
Thank you and best regards,
J.Scheuner | closed | 2025-03-02T07:07:37Z | 2025-03-02T14:30:40Z | https://github.com/coleifer/sqlite-web/issues/175 | [] | jscheuner | 1 |
NVlabs/neuralangelo | computer-vision | 200 | How to get the environment mesh | This is a great project. I use the NeRF garden dataset for reconstruction, but I only get the table in the mesh. I want the whole environment in the mesh. How can I get that, and is there anything I can do to achieve it? | open | 2024-04-24T03:01:39Z | 2024-04-24T03:01:39Z | https://github.com/NVlabs/neuralangelo/issues/200 | [] | yang1hu | 0 |
MycroftAI/mycroft-core | nlp | 2,226 | Messagebus send isn't working | ## Software:
* Picroft
* 19.2.13
## Problem
This used to work before, but I've seen some work with refactoring the messagebus code, and that probably broke it.
This is basically my code:
```python
from mycroft.messagebus.send import send
send("skill.communications.device.new", {"message": "10.0.1.7"})
```
Which sends the message `skill.communications.device.new` from the code in my (communications) skill which handles new devices, to the Mycroft skill, to be registered.
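As an aside, a pattern I'm considering as a workaround is emitting through `MessageBusClient` directly, or `self.bus.emit(...)` from inside a skill (hedged sketch; API names assumed from mycroft-core around 19.x, so verify against your tree):

```python
from mycroft.messagebus.client import MessageBusClient
from mycroft.messagebus.message import Message

client = MessageBusClient()
client.run_in_thread()  # connect in the background
client.emit(Message("skill.communications.device.new", {"message": "10.0.1.7"}))
```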
My original `send()` call, however, is now failing with the error:
```python-traceback
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/home/pi/mycroft-core/.venv/lib/python3.5/site-packages/zeroconf.py", line 1423, in run
handler(self.zc)
File "/home/pi/mycroft-core/.venv/lib/python3.5/site-packages/zeroconf.py", line 1363, in <lambda>
zeroconf=zeroconf, service_type=self.type, name=name, state_change=state_change
File "/home/pi/mycroft-core/.venv/lib/python3.5/site-packages/zeroconf.py", line 1250, in fire
h(**kwargs)
File "/home/pi/mycroft-core/.venv/lib/python3.5/site-packages/zeroconf.py", line 1335, in on_change
listener.add_service(*args)
File "/opt/mycroft/skills/communications-skill.linuss1/shippingHandling.py", line 107, in add_service
send_communication_to_messagebus("device", ip)
File "/opt/mycroft/skills/communications-skill.linuss1/shippingHandling.py", line 26, in send_communication_to_messagebus
send("skill.communications.{}.new".format(msg_type), {"message": "{}".format(str(msg))})
File "/home/pi/mycroft-core/mycroft/messagebus/send.py", line 70, in send
url = MessageBusClient.build_url(
AttributeError: type object 'MessageBusClient' has no attribute 'build_url'
``` | closed | 2019-07-23T17:49:26Z | 2019-07-24T07:51:31Z | https://github.com/MycroftAI/mycroft-core/issues/2226 | [] | LinusSkucas | 2 |
apache/airflow | data-science | 47,488 | OpenLineage can silently lose Snowflake query_ids and can't support multiple query_ids | ### Apache Airflow Provider(s)
openlineage
### Versions of Apache Airflow Providers
latest
### Apache Airflow version
2.X
### Operating System
macos
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When using `SqlExecuteQueryOperator` with Snowflake and running a query with multiple statements in it, OpenLineage will only include the first `query_id` in `ExternalQueryRunFacet`.
This is problematic, as users don't have full control over how the statements are executed (when the query consists of multiple statements and `split_statements=False`, the operator throws the error `snowflake.connector.errors.ProgrammingError: 000008 (0A000): 01bad84f-0000-4392-0000-3d95000110ce: Actual statement count 3 did not match the desired statement count 1.`). The only way for users to retrieve all query_ids in OL events is to set `split_statements=False` and make sure each task runs a single statement, which is rarely the case.
In BQ, similar problem is solved by ["parent_query_job"](https://github.com/apache/airflow/blob/ab3a1869c57def3ee74a925709cece4c7e07b891/providers/google/src/airflow/providers/google/cloud/openlineage/mixins.py#L109) executing each statement within a "child_query_job" with a link to the parent job, so that it's easy to access all ids later on. I couldn't find a similar mechanism in Snowflake.
### What you think should happen instead
Ideally, from within a single task (SqlExecuteQueryOperator) we would emit a separate OL event for each statement run, containing a parentRunFacet pointing to the Airflow task. This may however take some time to implement properly and may (or may not) need some adjustments from the consumers.
As a partial solution, we could extend `ExternalQueryRunFacet` with a new property that accepts multiple `externalQueryIds`. This requires some discussion in the OL community as to how it fits the spec.
Another small note: right now we are already sending the entire SQL query (with all the statements) in `SQLJobFacet`, regardless of whether they execute as separate "queries" or not, so it would probably need adjustment as well.
### How to reproduce
Run a sample query like:
```
USE WAREHOUSE COMPUTE_WH;
CREATE OR REPLACE TABLE test.public.result AS SELECT * FROM snowflake_sample_data.tpch_sf1.customer;
```
You can see in Snowflake that this resulted in two queries being run, with two separate query_ids and only first one is included in OpenLineage event.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-07T09:43:27Z | 2025-03-07T09:51:04Z | https://github.com/apache/airflow/issues/47488 | [
"kind:bug",
"area:providers",
"needs-triage",
"provider:openlineage"
] | kacpermuda | 0 |
axnsan12/drf-yasg | django | 74 | Validation error on serializer used only for responses | Note: This may be what https://github.com/axnsan12/drf-yasg/issues/51 was trying to get at...
Trying to do this, but getting a validation error ("... 'ssv': "Unresolvable JSON pointer: 'definitions/Detail'")
```
class DetailSerializer(serializers.Serializer):
    detail = serializers.CharField()
```
```
class ManufacturerViewSet(viewsets.ModelViewSet):
    serializer_class = ManufacturerSerializer
    model = Manufacturer
    queryset = Manufacturer.objects.all()

    @swagger_auto_schema(responses={404: openapi.Response(
        "Not found or Not accessible", DetailSerializer,
        examples={
            'Not found': DetailSerializer({'detail': 'Not found'}).data,
            'Not accessible': DetailSerializer({'detail': 'Not accessible'}).data,
        },
    )})
    def retrieve(self, request, *args, **kwargs):
        return super().retrieve(self, request, *args, **kwargs)
```
However, if I attach the serializer to a recognized model, it does work, e.g.:
```
class ManufacturerSerializer(serializers.ModelSerializer):
    status = DetailSerializer(many=False)

    class Meta:
        model = Manufacturer
        fields = '__all__'
```
Full text of validation error...
```
{'flex': "'paths':\n"
" - '/manufacturers/{id}/':\n"
" - 'get':\n"
" - 'responses':\n"
" - '404':\n"
" - 'referenceObject':\n"
" - 'additionalProperties':\n"
' - "When `additionalProperties` is False, '
'no unspecified properties are allowed. The following unspecified '
"properties were found:\\n\\t`{'headers', 'description', 'examples', "
'\'schema\'}`"\n'
" - 'required':\n"
" - '$ref':\n"
" - 'This value is required'\n"
" - 'responseObject':\n"
" - 'schema':\n"
" - '$ref':\n"
" - 'The $ref `#/definitions/Detail` "
"was not found in the schema'",
'ssv': "Unresolvable JSON pointer: 'definitions/Detail'"}
``` | closed | 2018-03-04T19:44:11Z | 2018-03-05T09:51:52Z | https://github.com/axnsan12/drf-yasg/issues/74 | [
"bug"
] | rmorison | 1 |
python-gino/gino | asyncio | 39 | Accessing unselected columns should raise an error rather that return None | Hello!
When one selects columns with `Table.select()`, the returned object will have only some of the attributes. Currently, gino (0.4.1) returns `None` when accessing attributes that weren't selected. However, accessing unselected attributes usually means that there is a bug somewhere, so it would be better to throw an appropriate error.
"feature request"
] | AmatanHead | 2 |
oegedijk/explainerdashboard | plotly | 294 | Logodds: difference between contributions plot and prediction box | From the code

I built an explainer dashboard, which includes the following outputs:
<img width="497" alt="ExplainerDashboard_PredictionBox" src="https://github.com/oegedijk/explainerdashboard/assets/145360667/60cf5af5-0c9d-4f1f-acbc-54d9d5bbe82a">
<img width="491" alt="ExplainerDashboard_ContributionsPlot" src="https://github.com/oegedijk/explainerdashboard/assets/145360667/310decbc-dea6-4869-9a1b-bb99661f8e3a">
The underlying problem is one of multiclassification, with classes 0, 1 and 2. The model used was LGBM.
In the previous images, I chose class 2 as the positive class. My question is why the logodd predicted in the Contributions Plot is different from the logodd of class 2 in the Prediction box and how is the first logodd calculated?
| open | 2024-01-30T10:24:05Z | 2024-01-30T10:24:05Z | https://github.com/oegedijk/explainerdashboard/issues/294 | [] | rita-a-oliveira-alb | 0 |
dmlc/gluon-nlp | numpy | 1,114 | Visualization for model interpretation | I took at look at AllenNLP Interpret https://arxiv.org/pdf/1909.09251.pdf, which implements the saliency map for important tokens, and adversarial attacks with input reduction or word hotflip. These methods seem to be quite useful in helping users understand what the model learns and when it fails. | open | 2020-01-15T21:27:06Z | 2020-01-15T21:27:17Z | https://github.com/dmlc/gluon-nlp/issues/1114 | [
"enhancement",
"help wanted"
] | eric-haibin-lin | 0 |
piskvorky/gensim | machine-learning | 2,988 | `word2vec.doesnt_match` numpy vstack deprecation warning | #### Problem description
I followed [this instruction](https://radimrehurek.com/gensim/scripts/glove2word2vec.html) to load a GloVe model. When I run `model.doesnt_match("breakfast cereal dinner lunch".split())` from the [tutorial](https://rare-technologies.com/word2vec-tutorial/), it produces a FutureWarning from the `vstack` function. It seems that [I am not the first person to encounter this warning](https://stackoverflow.com/questions/56593904/word2vec-doesnt-match-function-throws-numpy-warning). It might also be similar to [Issue 2432](https://github.com/RaRe-Technologies/gensim/issues/2432). The warning reads:
> C:\Path_to_gensim\keyedvectors.py:877: FutureWarning: arrays to stack must be passed as a "sequence" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the future.
> vectors = vstack(self.word_vec(word, use_norm=True) for word in used_words).astype(REAL)
#### Steps/code/corpus to reproduce
```python
from gensim.test.utils import datapath, get_tmpfile
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
glove_file = datapath('test_glove.txt')
tmp_file = get_tmpfile("test_word2vec.txt")
_ = glove2word2vec(glove_file, tmp_file)
model = KeyedVectors.load_word2vec_format(tmp_file)
model.doesnt_match("breakfast cereal dinner lunch".split())
```
#### Versions
```python
Windows-10-10.0.17763-SP0
python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)]
Bits 64
NumPy 1.19.0
SciPy 1.5.2
gensim 3.8.3
FAST_VERSION 0
``` | closed | 2020-10-22T20:07:33Z | 2020-10-23T03:00:48Z | https://github.com/piskvorky/gensim/issues/2988 | [] | atunanggara | 1 |
aimhubio/aim | tensorflow | 3,063 | Having problems using with fairseq | ## ❓Question
The library [fairseq](https://github.com/facebookresearch/fairseq/) has built in support for aim, but I am struggling to get it working. I'm not sure if it's something I'm doing wrong or if maybe the fairseq support is out of date, but the fairseq repo is fairly inactive so I thought I would ask here.
I am working locally and run `aim server`, and see: "Server is mounted on 0.0.0.0:53800".
I then run my fairseq experiment, adding to my config.yaml file:
```
common:
aim_repo: aim://0.0.0.0:53800
```
then run my experiment. It seems to be working initially - aim detects the experiment and the log starts with:
```
[2023-11-15 14:31:07,453][fairseq.logging.progress_bar][INFO] - Storing logs at Aim repo: aim://0.0.0.0:53800
[2023-11-15 14:31:07,480][aim.sdk.reporter][INFO] - creating RunStatusReporter for f6f19ecf0e2147b19e24d52f
[2023-11-15 14:31:07,482][aim.sdk.reporter][INFO] - starting from: {}
[2023-11-15 14:31:07,482][aim.sdk.reporter][INFO] - starting writer thread for <aim.sdk.reporter.RunStatusReporter object at 0x7f57117363e0>
[2023-11-15 14:31:08,471][fairseq.trainer][INFO] - begin training epoch 1
[2023-11-15 14:31:08,471][fairseq_cli.train][INFO] - Start iterating over samples
[2023-11-15 14:31:10,821][fairseq.trainer][INFO] - NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 64.0
[2023-11-15 14:31:12,261][fairseq.trainer][INFO] - NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 32.0
[2023-11-15 14:31:12,261][fairseq_cli.train][INFO] - begin validation on "valid" subset
[2023-11-15 14:31:12,266][fairseq.logging.progress_bar][INFO] - Storing logs at Aim repo: aim://0.0.0.0:53800
[2023-11-15 14:31:12,283][fairseq.logging.progress_bar][INFO] - Appending to run: f6f19ecf0e2147b19e24d52f
```
but then I get an error:
```
...
File "/lib/python3.10/site-packages/fairseq/logging/progress_bar.py", line 64, in progress_bar
bar = AimProgressBarWrapper(
File "/lib/python3.10/site-packages/fairseq/logging/progress_bar.py", line 365, in __init__
self.run = get_aim_run(aim_repo, aim_run_hash)
File "/lib/python3.10/site-packages/fairseq/logging/progress_bar.py", line 333, in get_aim_run
return Run(run_hash=run_hash, repo=repo)
File "/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 70, in wrapper
_SafeModeConfig.exception_callback(e, func)
File "/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 47, in reraise_exception
raise e
File "/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 68, in wrapper
return func(*args, **kwargs)
File "/lib/python3.10/site-packages/aim/sdk/run.py", line 828, in __init__
super().__init__(run_hash, repo=repo, read_only=read_only, experiment=experiment, force_resume=force_resume)
File "/lib/python3.10/site-packages/aim/sdk/run.py", line 276, in __init__
super().__init__(run_hash, repo=repo, read_only=read_only, force_resume=force_resume)
File "/lib/python3.10/site-packages/aim/sdk/base_run.py", line 50, in __init__
self._lock.lock(force=force_resume)
File "/lib/python3.10/site-packages/aim/storage/lock_proxy.py", line 38, in lock
return self._rpc_client.run_instruction(self._hash, self._handler, 'lock', (force,))
File "/lib/python3.10/site-packages/aim/ext/transport/client.py", line 260, in run_instruction
return self._run_read_instructions(queue_id, resource, method, args)
File "/lib/python3.10/site-packages/aim/ext/transport/client.py", line 285, in _run_read_instructions
raise_exception(status_msg.header.exception)
File lib/python3.10/site-packages/aim/ext/transport/message_utils.py", line 76, in raise_exception
raise exception(*args) if args else exception()
TypeError: Timeout.__init__() missing 1 required positional argument: 'lock_file'
Exception in thread Thread-13 (worker):
Traceback (most recent call last):
File "lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/lib/python3.10/site-packages/aim/ext/transport/rpc_queue.py", line 55, in worker
if self._try_exec_task(task_f, *args):
File "/lib/python3.10/site-packages/aim/ext/transport/rpc_queue.py", line 81, in _try_exec_task
task_f(*args)
File "/lib/python3.10/site-packages/aim/ext/transport/client.py", line 301, in _run_write_instructions
raise_exception(response.exception)
File "/python3.10/site-packages/aim/ext/transport/message_utils.py", line 76, in raise_exception
raise exception(*args) if args else exception()
aim.ext.transport.message_utils.UnauthorizedRequestError: 3310c526-aa51-47ef-ba87-fbf75f80f610
```
Does anyone have any idea what might be causing this/if there's something wrong with the approach I'm taking? I've tried with a variety of different aim versions (going back to the versions when fairseq was more actively being developed) and I still get errors.
| open | 2023-11-15T14:47:34Z | 2024-01-09T07:40:58Z | https://github.com/aimhubio/aim/issues/3063 | [
"type / question"
] | henrycharlesworth | 4 |
dask/dask | numpy | 11,641 | order: combining different xarray variables followed by a reduction orders very inefficiently | Lets look at the following example:
```
import xarray as xr
import dask.array as da

size = 50

ds = xr.Dataset(
    dict(
        u=(
            ["time", "j", "i"],
            da.random.random((size, 20, 20), chunks=(10, -1, -1)),
        ),
        v=(
            ["time", "j", "i"],
            da.random.random((size, 20, 20), chunks=(10, -1, -1)),
        ),
        w=(
            ["time", "j", "i"],
            da.random.random((size, 20, 20), chunks=(10, -1, -1)),
        ),
    )
)

ds["uv"] = ds.u * ds.v
ds["vw"] = ds.v * ds.w
ds = ds.fillna(199)
```
We are combining u and v and then v and w. Not having a reduction after that step generally works fine:
<img width="1321" alt="Screenshot 2025-01-07 at 16 21 32" src="https://github.com/user-attachments/assets/d72431d8-44bf-4bb1-b335-99480128c453" />
The individual chunks in one array are independent of all other chunks, so we can process chunk by chunk for all data arrays.
Adding a reduction after these cross dependencies makes things go sideways:
Add:
```
ds = ds.count()
```
The ordering algorithm eagerly processes a complete tree reduction for the first variable ``uv`` before touching anything from ``vw``. This means that the data array ``v`` is loaded completely into memory by the time the first tree reduction finishes, before we start on ``vw``, and thus we can't release any chunk of ``v``.
<img width="1511" alt="Screenshot 2025-01-07 at 16 21 12" src="https://github.com/user-attachments/assets/977a78f1-a11c-4ecd-b468-392a2c7f9c98" />
I am not sure what a good solution here would look like. Ideally, the ordering algorithm would know that the ``v`` chunks are a lot larger than the reduced chunks of the ``uv`` combination and thus prefer processing ``v`` before starting with a new chunk of ``uv``.
Alternatively, we could load ``v`` twice, i.e. drop the v chunks after they are added to ``uv``.
This is the pattern that kills https://github.com/coiled/benchmarks/blob/main/tests/geospatial/workloads/atmospheric_circulation.py
task graph:
```
from dask.base import collections_to_dsk
dsk = collections_to_dsk([ds.uv.data, ds.vw.data], optimize_graph=True)
```
cc @fjetter | open | 2025-01-07T15:20:19Z | 2025-01-29T19:46:26Z | https://github.com/dask/dask/issues/11641 | [
"array",
"dask-order"
] | phofl | 4 |
vaexio/vaex | data-science | 2,421 | Exporting out of memory dataframe to parquet error | I am trying to export an out of memory dataframe to parquet as in the following code but i keep getting the following error.
Code:
```
import numpy as np
from matplotlib import pyplot as plt
import vaex as vd

def custom_shift(df, column):
    # Extract the values of the column
    values = vd.from_arrays(column=df[f'{column}'].values)
    # Create a new shifted column with None as the first value
    d = vd.from_arrays(column=[None])
    shifted_values = vd.concat([d, values[:-1]])
    shifted_values = vd.from_arrays(ClosePrice_shifted=shifted_values).ClosePrice_shifted.values
    # Add the shifted values as a new column
    df.add_column(f'{column}_shifted', shifted_values)
    return df

def get_threshold(daily_returns, lookback=40):
    ewm_std = np.abs(daily_returns.rolling(window=lookback).std())
    threshold = np.exp(ewm_std)
    return threshold.mean() * 0.1

ddf = custom_shift(ddf, 'ClosePrice')
ddf = ddf.dropna()
ddf['daily_returns'] = ddf['ClosePrice'] / ddf['ClosePrice_shifted'] - 1
ddf['threshold'] = ddf['daily_returns'].apply(get_threshold)
ddf.export_parquet('dollar_bars_threshold.parquet', engine='pyarrow')
```
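(Aside: if I understand vaex correctly, `Expression.apply` maps the function element-wise over chunks rather than handing it a whole pandas-like column, so `get_threshold`, which calls `.rolling`, may never receive what it expects. A tiny illustration of what `apply` actually sees, under that assumption:)

```python
import vaex

df = vaex.from_arrays(x=[1.0, 2.0, 3.0])
# the lambda receives scalars, one element at a time, not the full column:
df["y"] = df["x"].apply(lambda v: v * 2)
print(df)
```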
Error:
Traceback (most recent call last):
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\scopes.py", line 113, in evaluate
result = self[expression]
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\scopes.py", line 198, in __getitem__
raise KeyError("Unknown variables or column: %r" % (variable,))
KeyError: "Unknown variables or column: '((ClosePrice / ClosePrice_shifted) - 1)'"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 2273, in data_type
data = self.evaluate(expression, 0, 1, filtered=False, array_type=array_type, parallel=False)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 3095, in evaluate
return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size, progress=progress)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 6562, in _evaluate_implementation
value = block_scope.evaluate(expression)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\scopes.py", line 113, in evaluate
result = self[expression]
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\scopes.py", line 188, in __getitem__
values = self.evaluate(expression)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\scopes.py", line 119, in evaluate
result = eval(expression, expression_namespace, self)
File "<string>", line 1, in <module>
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\arrow\numpy_dispatch.py", line 74, in operator
result_data = a.add_missing(result_data)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\arrow\numpy_dispatch.py", line 27, in add_missing
ar = vaex.array_types.to_arrow(ar)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\array_types.py", line 184, in to_arrow
return pa.array(x)
File "pyarrow\array.pxi", line 340, in pyarrow.lib.array
File "pyarrow\array.pxi", line 86, in pyarrow.lib._ndarray_to_array
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: only handle 1-dimensional arrays
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Intijir\PycharmProjects\quantStrategy\Data\Labeling\cumulative_sum.py", line 89, in <module>
ddf.export_parquet('dollar_bars_threshold.parquet', engine='pyarrow')
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 6823, in export_parquet
schema = self.schema_arrow(reduce_large=True)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 2335, in schema_arrow
return pa.schema({name: reduce(dtype.arrow) for name, dtype in self.schema().items()})
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 2323, in schema
return {column_name:self.data_type(column_name) for column_name in self.get_column_names()}
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 2323, in <dictcomp>
return {column_name:self.data_type(column_name) for column_name in self.get_column_names()}
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 2275, in data_type
data = self.evaluate(expression, 0, 1, filtered=True, array_type=array_type, parallel=False)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 3095, in evaluate
return self._evaluate_implementation(expression, i1=i1, i2=i2, out=out, selection=selection, filtered=filtered, array_type=array_type, parallel=parallel, chunk_size=chunk_size, progress=progress)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 6402, in _evaluate_implementation
max_stop = (len(self) if (self.filtered and filtered) else self.length_unfiltered())
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 4326, in __len__
self._cached_filtered_length = int(self.count())
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 967, in count
return self._compute_agg('count', expression, binby, limits, shape, selection, delay, edges, progress, array_type=array_type)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 941, in _compute_agg
return self._delay(delay, progressbar.exit_on(var))
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 1780, in _delay
self.execute()
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataframe.py", line 421, in execute
self.executor.execute()
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\execution.py", line 308, in execute
for _ in self.execute_generator():
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\execution.py", line 432, in execute_generator
yield from self.thread_pool.map(self.process_part, dataset.chunk_iterator(run.dataset_deps, chunk_size),
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\multithreading.py", line 100, in map
iterator = super(ThreadPoolIndex, self).map(wrapped, cancellable_iter())
File "C:\Users\Intijir\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 598, in map
fs = [self.submit(fn, *args) for args in zip(*iterables)]
File "C:\Users\Intijir\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 598, in <listcomp>
fs = [self.submit(fn, *args) for args in zip(*iterables)]
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\multithreading.py", line 86, in cancellable_iter
for value in chunk_iterator:
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\dataset.py", line 1257, in chunk_iterator
for (i1, i2, ichunks), (j1, j2, jchunks) in zip(
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\arrow\dataset.py", line 182, in chunk_iterator
chunks = chunks_future.result()
File "C:\Users\Intijir\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 446, in result
return self.__get_result()
File "C:\Users\Intijir\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 391, in __get_result
raise self._exception
File "C:\Users\Intijir\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\Intijir\PycharmProjects\quantStrategy\venv\lib\site-packages\vaex\arrow\dataset.py", line 114, in reader
table = fragment.to_table(columns=list(columns_physical), use_threads=False)
File "pyarrow\_dataset.pyx", line 1613, in pyarrow._dataset.Fragment.to_table
File "pyarrow\_dataset.pyx", line 3713, in pyarrow._dataset.Scanner.to_table
File "pyarrow\error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowMemoryError: malloc of size 8388608 failed
'''
Thanks! | open | 2024-04-20T13:35:34Z | 2024-04-20T14:52:50Z | https://github.com/vaexio/vaex/issues/2421 | [] | Intijir | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 730 | Looking for performance metrics for CycleGAN | Hi, we often apply CycleGAN to unpaired data, so some of the usual performance metrics do not apply:
- SSIM
- PSNR
For my dataset, I would like to use CycleGAN to map images from a winter session to a spring session, and there is no paired image between the two sessions. Could you tell me how I can evaluate CycleGAN's performance (i.e., how to know whether the output is close to a realistic image)?
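One distribution-level metric that works without paired ground truth is FID, which compares Inception feature statistics of a set of real target-domain images against a set of generated ones. A minimal sketch of the idea, assuming the `torchmetrics` package is available (the tensors below are placeholders for real spring photos and translated winter-to-spring outputs; in practice you would feed hundreds of images per side):

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# Placeholder batches of (N, 3, H, W) uint8 images in [0, 255].
real_spring = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
fake_spring = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
fid.update(real_spring, real=True)   # real target-domain images
fid.update(fake_spring, real=False)  # generator outputs
print(fid.compute())  # lower score = generated distribution closer to the real one
```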
| closed | 2019-08-14T21:55:25Z | 2020-04-25T18:18:55Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/730 | [] | John1231983 | 6 |
tensorpack/tensorpack | tensorflow | 599 | Question about the backward of the quantize function | Usage Questions Only:
I have a question about the quantize function in the DoReFa paper.

I am confused about the process of finding the derivative ∂r_o/∂r_i in the backward pass.
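For context, DoReFa defines quantize_k(r_i) = round((2^k − 1) · r_i) / (2^k − 1), whose true derivative is zero almost everywhere, so the paper substitutes the straight-through estimator ∂r_o/∂r_i ≈ 1 in the backward pass. A minimal PyTorch sketch of that convention (an illustration, not tensorpack's implementation):

```python
import torch

def quantize_k(r, k):
    """k-bit quantization of r in [0, 1] with a straight-through gradient."""
    n = float(2 ** k - 1)
    r_q = torch.round(r * n) / n
    # detach() drops round() from the autograd graph, so the forward pass
    # returns r_q while the backward pass sees d(output)/d(r) = 1.
    return r + (r_q - r).detach()

r = torch.rand(4, requires_grad=True)
quantize_k(r, k=2).sum().backward()
print(r.grad)  # tensor([1., 1., 1., 1.]) -- gradient passes straight through
```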
| closed | 2018-01-18T08:04:23Z | 2018-05-30T20:59:33Z | https://github.com/tensorpack/tensorpack/issues/599 | ["unrelated"] | hitlgy | 2 |
codertimo/BERT-pytorch | nlp | 75 | If I want to use /u as a placeholder instead of /t, what do I need to do | open | 2020-03-10T07:30:55Z | 2020-03-10T07:30:55Z | https://github.com/codertimo/BERT-pytorch/issues/75 | [] | 0GSC0-0 | 0 |