repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
zappa/Zappa | django | 938 | [Migrated] Zappa is not invoking the right Virtual env correctly while deploy | Originally from: https://github.com/Miserlou/Zappa/issues/2206 by [veris-neerajdhiman](https://github.com/veris-neerajdhiman)
<!--- Provide a general summary of the issue in the Title above -->
I am trying to deploy a Django app with Zappa. I have created the virtualenv using pyenv.
## Context
The following commands confirm the correct virtualenv:
```
▶ pyenv which zappa
/Users/****/.pyenv/versions/zappa/bin/zappa
▶ pyenv which python
/Users/****/.pyenv/versions/zappa/bin/python
```
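For reference, a quick way to confirm which site-packages directory the active interpreter actually resolves to (a hypothetical check on my part, not part of the original report):
```python
# Hypothetical sanity check: print the environment prefix and site-packages
# paths that the currently active Python would use.
import site
import sys

print(sys.prefix)              # should point at the pyenv virtualenv
print(site.getsitepackages())  # the site-packages directories in use
```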
But when I try to deploy the application using `zappa deploy dev`, the following error is thrown:
```
▶ zappa deploy dev
(pip 18.1 (/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages), Requirement.parse('pip>=20.1'), {'pip-tools'})
Calling deploy for stage dev..
Oh no! An error occurred! :(
==============
Traceback (most recent call last):
File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/cli.py", line 2778, in handle
sys.exit(cli.handle())
File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/cli.py", line 512, in handle
self.dispatch_command(self.command, stage)
File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/cli.py", line 549, in dispatch_command
self.deploy(self.vargs['zip'])
File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/cli.py", line 723, in deploy
self.create_package()
File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/cli.py", line 2264, in create_package
disable_progress=self.disable_progress
File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/core.py", line 627, in create_lambda_zip
copytree(site_packages, temp_package_path, metadata=False, symlinks=False, ignore=shutil.ignore_patterns(*excludes))
File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/utilities.py", line 54, in copytree
lst = os.listdir(src)
FileNotFoundError: [Errno 2] No such file or directory: '/Users/****/mydir/zappa/env/lib/python3.6/site-packages'
==============
```
## Expected Behavior
- Zappa should fetch packages from the pyenv virtualenv
<!--- Tell us what should happen -->
## Actual Behavior
- You can see that the path in the error is different from where the virtualenv is installed. I don't know why `zappa deploy` is looking for site-packages there.
<!--- Tell us what happens instead -->
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Create a virtualenv with pyenv (Python 3.6.9)
2. Install zappa and Django
3. Deploy the Zappa application
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.52.0
* Operating System and Python version: Mac and python 3.6.9 (installed from pyenv)
* The output of `pip freeze`:
```
argcomplete==1.12.2
asgiref==3.3.1
boto3==1.17.0
botocore==1.20.0
certifi==2020.12.5
cfn-flip==1.2.3
chardet==4.0.0
click==7.1.2
Django==3.1.6
durationpy==0.5
future==0.18.2
hjson==3.0.2
idna==2.10
importlib-metadata==3.4.0
jmespath==0.10.0
kappa==0.6.0
pip-tools==5.5.0
placebo==0.9.0
python-dateutil==2.8.1
python-slugify==4.0.1
pytz==2021.1
PyYAML==5.4.1
requests==2.25.1
s3transfer==0.3.4
six==1.15.0
sqlparse==0.4.1
text-unidecode==1.3
toml==0.10.2
tqdm==4.56.0
troposphere==2.6.3
typing-extensions==3.7.4.3
urllib3==1.26.3
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.52.0
zipp==3.4.0
```
* Link to your project (optional):
* Your `zappa_settings.json`:
```
{
"dev": {
"django_settings": "myproject.settings",
"profile_name": "zappa",
"project_name": "zappa",
"runtime": "python3.6",
"s3_bucket": "nd-zappa",
"aws_region": "ap-south-1"
}
}
```
| closed | 2021-02-20T13:24:46Z | 2022-11-19T04:17:23Z | https://github.com/zappa/Zappa/issues/938 | [] | jneves | 2 |
ets-labs/python-dependency-injector | flask | 448 | Question: Create an instance of providers.Singleton at the same time as fastapi starts. | Hello.
As the title suggests, how should I define a providers.Singleton so that it is instantiated at the same time as FastAPI starts?
I want to create the following "manager" when FastAPI starts.
(Dependency injection of application_service in manager may be implemented incorrectly.)
```python
class Container(containers.DeclarativeContainer):
application_service = providers.Factory(
ApplicationService,
)
manager = providers.Singleton(
Manager,
application_service=application_service(), # Inject the above application_service dependency.
)
```
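For what it's worth, a minimal sketch of one way to force the singleton to be built at startup (my assumption, not from the question): resolve the provider once in a FastAPI startup handler, and pass the provider itself (uncalled) into the Singleton so it is injected on resolution.
```python
# Hypothetical sketch: eagerly build the Manager singleton when FastAPI starts.
from fastapi import FastAPI

container = Container()  # Container as defined above, with the Singleton taking
                         # application_service=application_service (the provider, not a call)
app = FastAPI()

@app.on_event("startup")
def init_manager() -> None:
    container.manager()  # first resolution creates and caches the Manager instance
```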
Do you have any good ideas? | open | 2021-04-20T06:14:14Z | 2021-04-26T02:49:52Z | https://github.com/ets-labs/python-dependency-injector/issues/448 | [
"question"
] | satodaiki | 0 |
graphistry/pygraphistry | jupyter | 629 | [BUG] g2.search(..., fuzzy=True, cols=['title','text'] ), AssertionError: ydf must be provided to transform data | **Describe the bug**
It seems that exact match (fuzzy=False) works well, but fuzzy=True fails with the following error:
AssertionError: ydf must be provided to transform data
**To Reproduce**
Code, including data, that can be run without editing (from Ask-HackerNews-Demo.ipynb):
```
g2 = g.umap(X=['title','text']..... )
g2.search(..., fuzzy=True, cols=['title','text'] )
```
**Expected behavior**
What should have happened
**Actual behavior**
What did happen
AssertionError: ydf must be provided to transform data
**Screenshots**
This problem could be fixed; see the attached picture.
**Browser environment (please complete the following information):**
- OS: [e.g. MacOS]
- Browser [Firefox]
- Version [e.g. 22]
**Graphistry GPU server environment**
- Where run [Hub]
- If self-hosting, Graphistry Version [e.g. 0.14.0, see bottom of a viz or login dashboard]
- If self-hosting, any OS/GPU/driver versions
**PyGraphistry API client environment**
- Where run [e.g., Graphistry 2.35.9 Jupyter]
- Version [e.g. 0.14.0, print via `graphistry.__version__`]
- 0.35.4+18.g60177c52.dirty(dev/dev-skrub branch)
- Python Version [e.g. Python 3.7.7]
**Additional context**
Add any other context about the problem here.

| open | 2025-01-04T17:04:02Z | 2025-01-10T09:51:35Z | https://github.com/graphistry/pygraphistry/issues/629 | [
"bug",
"help wanted",
"p2",
"good-first-issue"
] | maksim-mihtech | 2 |
custom-components/pyscript | jupyter | 520 | Logging Documentation for Apps vs Scripts | In following https://hacs-pyscript.readthedocs.io/en/latest/reference.html#logging, it is not readily apparent that the steps for setting logging on a script by script basis like illustrated here:
```
logger:
default: info
logs:
custom_components.pyscript.file: info
custom_components.pyscript.file.my_script.my_function: debug
```
would change if the script in question is really an app which appears to change these directions to:
```
logger:
default: info
logs:
custom_components.pyscript.apps: info
custom_components.pyscript.apps.my_script.my_function: debug
```
I was able to finally piece this together based on the output of the debugging log when I got this working using `logger.set_level(**{"custom_components.pyscript": "debug"})` at the top of my script/app just to see how the debugging log would come through.
| open | 2023-08-28T02:28:15Z | 2023-08-28T02:33:07Z | https://github.com/custom-components/pyscript/issues/520 | [] | marshalltech81 | 0 |
albumentations-team/albumentations | deep-learning | 1,694 | GridDistortion Reference | The blog [http://pythology.blogspot.sg/2014/03/interpolation-on-regular-distorted-grid.html](http://pythology.blogspot.sg/2014/03/interpolation-on-regular-distorted-grid.html) has been removed. Any additional reference for grid_distortion please? Thanks.
grid_distortion is defined here [https://github.com/albumentations-team/albumentations/blob/d47389cd8c40f7f7b5a0ee777a204e251484ed11/albumentations/augmentations/geometric/functional.py#L1256](https://github.com/albumentations-team/albumentations/blob/d47389cd8c40f7f7b5a0ee777a204e251484ed11/albumentations/augmentations/geometric/functional.py#L1256).
| closed | 2024-04-29T14:33:55Z | 2024-05-01T03:50:47Z | https://github.com/albumentations-team/albumentations/issues/1694 | [
"documentation"
] | danielmao2019 | 4 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 764 | How should the parameters be set when the fine-tuning instruction data is very long? | ### Required checks before submitting
- [X] Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, make sure you followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues, and found no similar issue or solution
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
- [X] Model correctness check: be sure to verify the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with a wrong model, correct results and normal operation cannot be guaranteed
### Issue type
Model training and fine-tuning
### Base model
LLaMA-Plus-13B
### Operating system
Linux
### Detailed description of the problem
Hello authors, we would like to instruction-tune Alpaca-13B-Plus on our own data, but we find that our data is generally very long; a single long instruction can reach 2k-4k. Can this model still be used in this situation, and how large should the maximum length parameter for instruction fine-tuning be set?
### Dependency information (required for code-related issues)
_No response_
### Run logs or screenshots
_No response_ | closed | 2023-07-19T02:57:35Z | 2023-07-30T22:02:08Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/764 | [
"stale"
] | TheLolita | 2 |
iperov/DeepFaceLab | deep-learning | 570 | DFL 2.0 GPU Memory not used.. | Hi, me again.
DFL 2.0 doesn't use the GPU memory correctly.
In DFL 1.0 I was able to run resolution 240 with batch size 8 on my Titan RTX (24 GB).
If I now try resolution 240 with batch size 4 (everything else on defaults), it consumes 8 GB of GPU memory and then quits with errors.
Please tell me if you need any additional info :)
plotly/dash-core-components | dash | 198 | ability to preserve user interaction state between redraws | discussion taken from here: https://community.plot.ly/t/preserve-trace-visibility-in-callback/5118/5
> More generally, I wonder if we should just have some kind of flag that tells the graph whether it should reset user interactions or not (like zooming, panning, clicking on legends, clicking on mode bar items). In some cases, like if you were to switch the chart type or display completely different data, you’d want to reset the user interactions. In other cases, you wouldn’t necessarily want to. | closed | 2018-05-08T17:56:43Z | 2019-10-30T22:20:54Z | https://github.com/plotly/dash-core-components/issues/198 | [
"Status: Discussion Needed"
] | chriddyp | 18 |
youfou/wxpy | api | 113 | In group chats, search by region or sex returns an empty array | Queries such as .search(sex=MALE) on a single Group's members always return an empty array, but searching by name works.
Statistics methods such as stats_text() and stats also return empty results. | closed | 2017-07-08T13:36:01Z | 2017-12-09T08:40:59Z | https://github.com/youfou/wxpy/issues/113 | [] | honwenle | 5 |
mlfoundations/open_clip | computer-vision | 19 | suggestions for a multilingual version CLIP | Hi,
thanks for this great work! I want to make a multilingual version of CLIP. There are existing works that use the English CLIP indirectly (https://github.com/FreddeFrallan/Multilingual-CLIP). But do you have suggestions on making the code support a multilingual version?
Thank you! | closed | 2021-09-30T07:27:42Z | 2021-10-05T23:52:30Z | https://github.com/mlfoundations/open_clip/issues/19 | [] | niatzt | 0 |
Sanster/IOPaint | pytorch | 394 | [BUG] | **Model**
Which model are you using?
**Describe the bug**
A clear and concise description of what the bug is.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**System Info**
Software version used
- lama-cleaner:
- pytorch:
- CUDA:
| closed | 2023-10-10T14:56:28Z | 2025-03-14T02:02:23Z | https://github.com/Sanster/IOPaint/issues/394 | [
"stale"
] | openainext | 2 |
PaddlePaddle/PaddleNLP | nlp | 9,710 | [Bug]: the compute_metrics method of zero_shot_text_classification is wrong | ### Software environment
```Markdown
- paddlepaddle:
- paddlepaddle-gpu: 2.4.2.post116
- paddlenlp: 2.5.2.post0
```
### Duplicate issues
- [X] I have searched the existing issues
### Bug description
```Markdown
For a text classification task, I trained one multi-class model with zero_shot_text_classification and one with multi_label under text_classification, and found a huge gap between their accuracies. Looking at the code, the compute_metrics method used by zero_shot_text_classification flattens the multi-class one-hot labels into one dimension, which inflates the macro_f1 value considerably, whereas the accuracy computation in multi_label is correct.
```
### Stable reproduction steps & code
File: applications/zero_shot_text_classification/run_eval.py
Change the following code:
preds = preds[labels != -100].numpy()
labels = labels[labels != -100].numpy()
to:
preds = preds.numpy()
labels = labels.numpy()
and it works correctly. | closed | 2024-12-27T03:28:57Z | 2025-03-12T00:21:37Z | https://github.com/PaddlePaddle/PaddleNLP/issues/9710 | [
"bug",
"stale"
] | phpdancer | 2 |
miLibris/flask-rest-jsonapi | sqlalchemy | 82 | With the include parameter, do not pick the ones that do not have date in the included | I do not know if it is a bad configuration of mine or a Json specification. My problem is that I have a table of ads and would like to bring the photos and categories of the ads together in the ad endpoint. When placing the include parameter of photos and categories the result only brings those that have photos and categories and ignores the ones that do not have. Is it a JsonApi standard or can it be a bad setup in my API?
Sorry for grammatical errors, English is not my native language. | closed | 2017-12-11T19:41:11Z | 2017-12-12T13:13:43Z | https://github.com/miLibris/flask-rest-jsonapi/issues/82 | [] | AndreNalevaiko | 2 |
huggingface/datasets | pandas | 6,853 | Support soft links for load_datasets imagefolder | ### Feature request
Load_dataset from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during methods development where image folders are being curated.
### Motivation
Images are coming from a complex variety of sources and we'd like to be able to soft link directly from the originating folders instead of copying. Keeping a separate copy of each file risks image versioning issues and doubles the required disk space.
### Your contribution
N/A | open | 2024-04-30T22:14:29Z | 2024-04-30T22:14:29Z | https://github.com/huggingface/datasets/issues/6853 | [
"enhancement"
] | billytcl | 0 |
ScottfreeLLC/AlphaPy | scikit-learn | 17 | Cryptocurrency Prices | You can get historical cryptocurrency pricing from [Quandl](https://www.quandl.com/collections/markets/bitcoin-data), but the MarketFlow pipeline needs to be modified to read directly from the _data_ directory if no feeds are available.
Here are some sources of historical daily and intraday cryptocurrency data:
- Daily : https://www.kaggle.com/sudalairajkumar/cryptocurrencypricehistory
- Intraday (1-minute) : https://www.kaggle.com/smitad/bitcoin-trading-strategy-simulation/data | open | 2017-12-19T01:50:36Z | 2020-08-23T22:07:04Z | https://github.com/ScottfreeLLC/AlphaPy/issues/17 | [
"enhancement"
] | mrconway | 3 |
ludwig-ai/ludwig | data-science | 3,182 | Errors in Ludwig Docker Images | **Describe the bug**
When using the Docker image `ludwigai/ludwig-gpu:master` to start a Ludwig container, an error is thrown. It says that PyTorch and TorchAudio were compiled with different CUDA versions.
When using the Docker image `ludwigai/ludwig:master` or `ludwigai/ludwig-ray:master`, another error is thrown; this one says no module named 'mlflow' was found.
**To Reproduce**
Follow the Steps from the Docker Section in the Getting Started Guide on ludwig.ai Website ([Link](https://ludwig.ai/latest/getting_started/docker/))
I used a slightly modified version of the Command from "[Run Ludwig CLI Section](https://ludwig.ai/latest/getting_started/docker/#run-ludwig-cli)" as you can see in the logs below. I mostly changed the paths to the ones of my system. I tried it with `train` and `experiment` command as entrypoint. I tried different versions of the paths too.
I used the data and config from the Rotten Tomatoes Example found in the Guide.
Data: [Link](https://ludwig.ai/latest/getting_started/prepare_data/)
Config: [Link](https://ludwig.ai/latest/getting_started/train/)
**Expected behavior**
Ludwig should train itself.
**Screenshots** (In this case its logs)
Here is the full Error from ludwig-gpu image:
```
docker run -v /mnt/z/Development/MachineLearning/Ludwig/Rotten_Tomatoes/data:/data \
-v /mnt/z/Development/MachineLearning/Ludwig/Rotten_Tomatoes/src:/src \
ludwigai/ludwig-gpu:master \
train --config /src/config.yaml \
--dataset /data/rotten_tomatoes.csv \
--output_directory /src/results
Traceback (most recent call last):
File "/opt/conda/bin/ludwig", line 8, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.10/site-packages/ludwig/cli.py", line 172, in main
CLI()
File "/opt/conda/lib/python3.10/site-packages/ludwig/cli.py", line 67, in __init__
getattr(self, args.command)()
File "/opt/conda/lib/python3.10/site-packages/ludwig/cli.py", line 70, in train
from ludwig import train
File "/opt/conda/lib/python3.10/site-packages/ludwig/train.py", line 23, in <module>
from ludwig.api import LudwigModel
File "/opt/conda/lib/python3.10/site-packages/ludwig/api.py", line 40, in <module>
from ludwig.backend import Backend, initialize_backend, provision_preprocessing_workers
File "/opt/conda/lib/python3.10/site-packages/ludwig/backend/__init__.py", line 22, in <module>
from ludwig.backend.base import Backend, LocalBackend
File "/opt/conda/lib/python3.10/site-packages/ludwig/backend/base.py", line 29, in <module>
from ludwig.data.cache.manager import CacheManager
File "/opt/conda/lib/python3.10/site-packages/ludwig/data/cache/manager.py", line 8, in <module>
from ludwig.data.dataset.base import DatasetManager
File "/opt/conda/lib/python3.10/site-packages/ludwig/data/dataset/base.py", line 21, in <module>
from ludwig.utils.defaults import default_random_seed
File "/opt/conda/lib/python3.10/site-packages/ludwig/utils/defaults.py", line 24, in <module>
from ludwig.features.feature_registries import get_input_type_registry
File "/opt/conda/lib/python3.10/site-packages/ludwig/features/feature_registries.py", line 33, in <module>
from ludwig.features.audio_feature import AudioFeatureMixin, AudioInputFeature
File "/opt/conda/lib/python3.10/site-packages/ludwig/features/audio_feature.py", line 22, in <module>
import torchaudio
File "/opt/conda/lib/python3.10/site-packages/torchaudio/__init__.py", line 1, in <module>
from torchaudio import ( # noqa: F401
File "/opt/conda/lib/python3.10/site-packages/torchaudio/_extension.py", line 136, in <module>
_check_cuda_version()
File "/opt/conda/lib/python3.10/site-packages/torchaudio/_extension.py", line 128, in _check_cuda_version
raise RuntimeError(
RuntimeError: Detected that PyTorch and TorchAudio were compiled with different CUDA versions. PyTorch has CUDA version 11.6 whereas TorchAudio has CUDA version 11.7. Please install the TorchAudio version that matches your PyTorch version.
```
Here is the full error from `ludwig` image:
```
docker run -v /mnt/z/Development/MachineLearning/Ludwig/Rotten_Tomatoes/data:/data \
-v /mnt/z/Development/MachineLearning/Ludwig/Rotten_Tomatoes/src:/src \
ludwigai/ludwig:master \
train --config /src/config.yaml \
--dataset /data/rotten_tomatoes.csv \
--output_directory /src/results
Traceback (most recent call last):
File "/usr/local/bin/ludwig", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.8/site-packages/ludwig/cli.py", line 172, in main
CLI()
File "/usr/local/lib/python3.8/site-packages/ludwig/cli.py", line 67, in __init__
getattr(self, args.command)()
File "/usr/local/lib/python3.8/site-packages/ludwig/cli.py", line 72, in train
train.cli(sys.argv[2:])
File "/usr/local/lib/python3.8/site-packages/ludwig/train.py", line 379, in cli
add_contrib_callback_args(parser)
File "/usr/local/lib/python3.8/site-packages/ludwig/contrib.py", line 27, in add_contrib_callback_args
const=contrib_cls(),
File "/usr/local/lib/python3.8/site-packages/ludwig/contribs/mlflow/__init__.py", line 42, in __init__
self.tracking_uri = mlflow.get_tracking_uri()
File "/usr/local/lib/python3.8/site-packages/ludwig/utils/package_utils.py", line 34, in __getattr__
module = self._load()
File "/usr/local/lib/python3.8/site-packages/ludwig/utils/package_utils.py", line 23, in _load
module = importlib.import_module(self.__name__)
File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'mlflow'
```
**Environment (please complete the following information):**
- OS: Windows
- Version 11
- Python 3.10.2 (on Windows, so not applicable here i guess?)
- Ludwig 0.7.1
I use Docker Desktop with WSL2 Engine as my Docker environment. It runs the latest version (Docker Desktop 4.16.3 (96739)).
I executed the bash commands in my WSL2 shell. I run Ubuntu 20.04.5 LTS there.
When using the Windows cmd for the `docker run` command it did not work either.
| closed | 2023-03-02T20:31:40Z | 2023-03-05T20:05:54Z | https://github.com/ludwig-ai/ludwig/issues/3182 | [
"bug"
] | Velyn-N | 6 |
scikit-optimize/scikit-optimize | scikit-learn | 274 | Remove option `acq_optimizer="auto"` | In order to simplify slightly the API, I would vote for removing the `"auto"` value of `acq_optimizer`.
I am not convinced we should default to random sampling as soon as one of the dimensions is not `Real`.
I would rather default to `"lbfgs"` and have, maybe, a warning if one the dimensions isnt `Real`. (Which by the way does not mean that the optimization will fail.) | closed | 2016-11-30T09:12:47Z | 2017-01-04T10:53:47Z | https://github.com/scikit-optimize/scikit-optimize/issues/274 | [
"API",
"Easy"
] | glouppe | 4 |
litestar-org/litestar | api | 3,423 | Enhancement: Support RSGI Specification (granian) | ### Summary
Here is the [specification](https://github.com/emmett-framework/granian/blob/master/docs/spec/RSGI.md)
### Basic Example
_No response_
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | closed | 2024-04-23T20:57:49Z | 2025-03-20T15:54:37Z | https://github.com/litestar-org/litestar/issues/3423 | [
"Enhancement"
] | gam-phon | 4 |
ray-project/ray | python | 51,503 | CI test windows://python/ray/tests:test_client_builder is consistently_failing | CI test **windows://python/ray/tests:test_client_builder** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1
DataCaseName-windows://python/ray/tests:test_client_builder-END
Managed by OSS Test Policy | closed | 2025-03-19T00:06:56Z | 2025-03-19T21:52:47Z | https://github.com/ray-project/ray/issues/51503 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 3 |
gunthercox/ChatterBot | machine-learning | 1,997 | How to pass extra parameters to Chatterbot's Statement object and get those parameters with get_response() method or get_response.serialize() | I am using Chatterbot's logical adapters to generate different responses including calls to external APIs such as weather and news. My problem is simple, I would like to pass an extra parameter to the Statement object when I am returning a response from a logical adapter. This extra parameter would be something like 'message_type' explaining the data returned as in message_type='weather_data'. To illustrate that, here is some code:
what I am doing now:
```python
class WeatherAPIAdapter(LogicAdapter):
def __init__(self, chatbot, **kwargs):
super().__init__(chatbot, **kwargs)
# varibales
def can_process(self, statement):
"""
Return true if the input statement contains
'what' and 'is' and 'temperature'.
"""
# verification code here
return True
def process(self, input_statement, additional_response_selection_parameters):
from chatterbot.conversation import Statement
weather_data = call_to_api()
response_statement = Statement(
text=weather_data
)
response_statement.confidence = 1.0
return response_statement
```
what I need:
```python
class WeatherAPIAdapter(LogicAdapter):
def __init__(self, chatbot, **kwargs):
super().__init__(chatbot, **kwargs)
# varibales
def can_process(self, statement):
"""
Return true if the input statement contains
'what' and 'is' and 'temperature'.
"""
# verification code here
return True
def process(self, input_statement, additional_response_selection_parameters):
from chatterbot.conversation import Statement
weather_data = call_to_api()
message_type = "weather_data"
response_statement = Statement(
text=weather_data,
message_type=message_type
)
response_statement.confidence = 1.0
return response_statement
```
and then be able to get the message_type using like the following:
```python
message_type = bot.get_response(input_data).message_type
response = bot.get_response(input_data).text
```
or with the serialize() method:
```python
data = bot.get_response(input_data).serialize()
```
Thank you very much for your help. | open | 2020-06-19T19:44:15Z | 2020-06-28T18:24:43Z | https://github.com/gunthercox/ChatterBot/issues/1997 | [] | MurphyAdam | 1 |
jina-ai/serve | machine-learning | 5,697 | Executor Web Page Not Found | **Describe the bug**
The Executor Doc Web Page cannot be found.
https://docs.jina.ai/fundamentals/executor/?utm_source=learning-portal

| closed | 2023-02-18T12:53:23Z | 2023-02-20T15:17:58Z | https://github.com/jina-ai/serve/issues/5697 | [] | devmike123 | 1 |
snarfed/granary | rest-api | 158 | use silo URL as first author URL, not web site(s) in profile | ...to limit confusion and keep URLs-as-identifiers consistent over time. requested by @aaronpk since some people change the links in the profiles often and he wants to be able to identify them over time. [IRC discussion.](https://chat.indieweb.org/dev/2018-12-04#t1543947792918900)
conclusion is to put both silo and profile URLs into `p-author`, but always put silo URL first.
(also bridgy seems inconsistent? it might prefer profile links for replies/comments but silo URL for likes? need to investigate.) | closed | 2018-12-04T18:34:33Z | 2018-12-07T22:46:37Z | https://github.com/snarfed/granary/issues/158 | [
"now"
] | snarfed | 0 |
simple-login/app | flask | 1,659 | Adding Support for Blocking Email Addresses via Regex and Selected Ones | This feature request is aimed at adding new functionality to the existing system, which will allow users to block specific email addresses that satisfy a regex condition. Additionally, this feature will provide support to block all addresses except the ones selected by the user.
This functionality will be highly beneficial for users who want to control the emails they receive and ensure that only relevant messages reach their inbox.
# Expected Benefits
The benefits of adding this feature include:
* Improved email management: Users will have greater control over the emails they receive, reducing clutter and improving efficiency.
* Customizable blocking: Users will be able to block email addresses based on their specific needs and preferences.
* Reduced spam: This feature will help to reduce spam and unwanted email, leading to a cleaner inbox. | open | 2023-03-23T11:55:42Z | 2023-03-23T11:55:42Z | https://github.com/simple-login/app/issues/1659 | [] | zlianon | 0 |
AutoGPTQ/AutoGPTQ | nlp | 506 | [BUG]ValueError: Tokenizer class BaichuanTokenizer does not exist or is not currently imported. | **Describe the bug**
I am quantizing baichuan2-13b-chat. Previously, when quantizing bloomz-7b1, the same error occurred, which was resolved by adding `--fast_tokenizer`. However, this time the issue with baichuan has not been resolved.
**Hardware details**
A800
**Software version**
```
absl-py 2.0.0
accelerate 0.25.0
aiohttp 3.9.1
aioprometheus 23.12.0
aiosignal 1.3.1
anyio 4.2.0
async-timeout 4.0.3
attributedict 0.3.0
attrs 23.1.0
auto-gptq 0.6.0
autoawq 0.1.8+cu118
blessings 1.7
cachetools 5.3.2
certifi 2023.11.17
chardet 5.2.0
charset-normalizer 3.3.2
click 8.1.7
codecov 2.1.13
colorama 0.4.6
coloredlogs 15.0.1
colour-runner 0.1.1
coverage 7.4.0
DataProperty 1.0.1
datasets 2.16.0
deepdiff 6.7.1
dill 0.3.7
distlib 0.3.8
distro 1.9.0
evaluate 0.4.1
exceptiongroup 1.2.0
fastapi 0.108.0
filelock 3.13.1
frozenlist 1.4.1
fsspec 2023.10.0
gekko 1.0.6
h11 0.14.0
httpcore 1.0.2
httptools 0.6.1
httpx 0.26.0
huggingface-hub 0.20.2
humanfriendly 10.0
idna 3.6
inspecta 0.1.3
Jinja2 3.1.2
joblib 1.3.2
jsonlines 4.0.0
jsonschema 4.20.0
jsonschema-specifications 2023.12.1
lm_eval 0.4.0
lxml 4.9.4
MarkupSafe 2.1.3
mbstrdecoder 1.1.3
mpmath 1.3.0
msgpack 1.0.7
multidict 6.0.4
multiprocess 0.70.15
networkx 3.2.1
ninja 1.11.1.1
nltk 3.8.1
numexpr 2.8.8
numpy 1.26.2
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.18.1
nvidia-nvjitlink-cu12 12.3.101
nvidia-nvtx-cu12 12.1.105
openai 1.6.1
optimum 1.16.1
ordered-set 4.1.0
orjson 3.9.10
packaging 23.2
pandas 2.1.4
pathvalidate 3.2.0
peft 0.7.1
Pillow 10.1.0
pip 23.3.1
platformdirs 4.1.0
pluggy 1.3.0
portalocker 2.8.2
protobuf 4.25.1
psutil 5.9.7
pyarrow 14.0.2
pyarrow-hotfix 0.6
pybind11 2.11.1
pydantic 1.10.13
Pygments 2.17.2
pyproject-api 1.6.1
pytablewriter 1.2.0
python-dateutil 2.8.2
python-dotenv 1.0.0
pytz 2023.3.post1
PyYAML 6.0.1
quantile-python 1.1
ray 2.9.0
referencing 0.32.0
regex 2023.12.25
requests 2.31.0
responses 0.18.0
rootpath 0.1.1
rouge 1.0.1
rouge-score 0.1.2
rpds-py 0.15.2
sacrebleu 2.4.0
safetensors 0.4.1
scikit-learn 1.3.2
scipy 1.11.4
sentencepiece 0.1.99
setuptools 68.2.2
six 1.16.0
sniffio 1.3.0
sqlitedict 2.1.0
starlette 0.32.0.post1
sympy 1.12
tabledata 1.3.3
tabulate 0.9.0
tcolorpy 0.1.4
termcolor 2.4.0
texttable 1.7.0
threadpoolctl 3.2.0
tokenizers 0.15.0
toml 0.10.2
tomli 2.0.1
torch 2.1.2+cu118
torchvision 0.16.2
tox 4.11.4
tqdm 4.66.1
tqdm-multiprocess 0.0.11
transformers 4.36.2
triton 2.1.0
typepy 1.3.2
typing_extensions 4.9.0
tzdata 2023.3
urllib3 2.1.0
uvicorn 0.25.0
uvloop 0.19.0
virtualenv 20.25.0
vllm 0.2.6+cu118
watchfiles 0.21.0
websockets 12.0
wheel 0.41.2
xformers 0.0.23.post1+cu118
xxhash 3.4.1
yarl 1.9.4
zstandard 0.22.0
```
**To Reproduce**
```
CUDA_VISIBLE_DEVICES=0,1 python examples/quantization/quant_with_alpaca_default_template.py \
--pretrained_model_dir /data/Baichuan2-13B-Chat \
--quantized_model_dir /data/Baichuan2-13B-Chat_4bit_1000 \
--num_samples 1000 \
--fast_tokenizer \
--save_and_reload \
--data_path /data/LLaMA-Factory/data/bi_frpt_ins_all_en_1.json \
--per_gpu_max_memory 10
```
**Expected behavior**
solve it
**Screenshots**

**Additional context**
I noticed that @TheBloke has already quantized baichuan2-13b. I wonder if you could help me with the issue.
I'm very grateful for any suggestions anyone might have.
Thank you very much
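As an aside, a hypothetical workaround (my assumption, not from the original report): `BaichuanTokenizer` is a custom tokenizer class shipped inside the model repository, so Transformers can usually only import it when the tokenizer is loaded with `trust_remote_code=True`, for example:
```python
# Hypothetical check: load the custom Baichuan tokenizer with remote code enabled.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "/data/Baichuan2-13B-Chat",  # pretrained_model_dir from the command above
    trust_remote_code=True,      # allows BaichuanTokenizer to be imported from the repo
)
```
If the quantization script does not expose such an option, that would be a separate change.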
| open | 2024-01-08T07:54:31Z | 2024-01-08T07:54:31Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/506 | [
"bug"
] | oreojason | 0 |
ashnkumar/sketch-code | tensorflow | 6 | No module named inference.Sampler | This line in ``convert_single_image.py`` is causing problems.
```python
from classes.inference.Sampler import *
```
```
Traceback (most recent call last):
File "convert_single_image.py", line 7, in <module>
from classes.inference.Sampler import *
ImportError: No module named inference.Sampler
``` | closed | 2018-04-20T15:05:38Z | 2020-07-29T22:57:47Z | https://github.com/ashnkumar/sketch-code/issues/6 | [] | bibhas2 | 6 |
unit8co/darts | data-science | 2,325 | [BUG] FutureWarning: Series.fillna with 'method' is deprecated | **Describe the bug**
Running the hyperparameter optimization example https://unit8co.github.io/darts/examples/17-hyperparameter-optimization.html
In the Data Preparation section there is this code fragment:
```python
all_series = ElectricityDataset(multivariate=False).load()
NR_DAYS = 80
DAY_DURATION = 24 * 4 # 15 minutes frequency
all_series_fp32 = [
s[-(NR_DAYS * DAY_DURATION) :].astype(np.float32) for s in tqdm(all_series)
]
```
It generates a large number of these warnings:
```
/Users/florin.andrei/Library/Python/3.11/lib/python/site-packages/darts/datasets/__init__.py:550: FutureWarning: Series.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
start_date = min(srs.fillna(method="ffill").dropna().index)
/Users/florin.andrei/Library/Python/3.11/lib/python/site-packages/darts/datasets/__init__.py:551: FutureWarning: Series.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
end_date = max(srs.fillna(method="bfill").dropna().index)
```
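For reference, a minimal sketch of the change the pandas deprecation message points to (my reading of the warning, not a confirmed darts patch) for the two quoted lines in `darts/datasets/__init__.py`:
```python
# Hypothetical fix sketch: replace the deprecated fillna(method=...) calls
# with the ffill()/bfill() equivalents suggested by the warning.
start_date = min(srs.ffill().dropna().index)
end_date = max(srs.bfill().dropna().index)
```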
**To Reproduce**
Just run the example.
**Expected behavior**
No warnings should be thrown.
**System (please complete the following information):**
- Python version: 3.11.9
- darts version: 0.28.0
**Additional context**
- MacBook Pro
- Apple M3 Pro
- macOS Sonoma 14.4.1
- pandas 2.2.2
- tqdm 4.66.2 | closed | 2024-04-12T20:58:06Z | 2024-07-03T11:20:25Z | https://github.com/unit8co/darts/issues/2325 | [
"devops"
] | FlorinAndrei | 3 |
littlecodersh/ItChat | api | 359 | Non-blocking wechat requests | Hi guys,
Thanks for making this WeChat Python repo. I am wondering whether there is a way to separate workers, i.e.:
1. One to receive messages, save them to the database, and run other callbacks.
2. One to send messages.
Appreciate your help!
| closed | 2017-05-11T17:10:10Z | 2017-05-29T01:46:40Z | https://github.com/littlecodersh/ItChat/issues/359 | [
"question"
] | afeezaziz | 3 |
keras-team/keras | data-science | 20,809 | Tensorboard not working with Trainer Pattern | I'm using the Keras Trainer pattern as illustrated [here](https://keras.io/examples/keras_recipes/trainer_pattern/). The issue with this pattern is that when you use TensorBoard, only the top-level weights are recorded.
The reason for this is that `Tensorboard` records the weights for all the layers in `self.model.layers` [here](https://github.com/keras-team/keras/blob/v3.8.0/keras/src/callbacks/tensorboard.py#L558-L576). But this is equal to `[<Sequential name=sequential, built=True>]` ~~and the weights for that Sequential object is []~~
I tried several things:
1. Passing a CallbackList when calling fit, using model_a instead of trainer_a, but this fails because model_a has no optimizer
2. I tried to overwrite the `layers` method in the Trainer object to have `recursive=True` but the weights were still not showing in TensorBoard suggesting that something else is going on
I'm open to any suggestions here.
full example
```
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
import keras
from keras.callbacks import TensorBoard
# Load MNIST dataset and standardize the data
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
class MyTrainer(keras.Model):
def __init__(self, model):
super().__init__()
self.model = model
# Create loss and metrics here.
self.loss_fn = keras.losses.SparseCategoricalCrossentropy()
self.accuracy_metric = keras.metrics.SparseCategoricalAccuracy()
@property
def metrics(self):
# List metrics here.
return [self.accuracy_metric]
def train_step(self, data):
x, y = data
with tf.GradientTape() as tape:
y_pred = self.model(x, training=True) # Forward pass
# Compute loss value
loss = self.loss_fn(y, y_pred)
# Compute gradients
trainable_vars = self.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Update metrics
for metric in self.metrics:
metric.update_state(y, y_pred)
# Return a dict mapping metric names to current value.
return {m.name: m.result() for m in self.metrics}
def test_step(self, data):
x, y = data
# Inference step
y_pred = self.model(x, training=False)
# Update metrics
for metric in self.metrics:
metric.update_state(y, y_pred)
return {m.name: m.result() for m in self.metrics}
def call(self, x):
# Equivalent to `call()` of the wrapped keras.Model
x = self.model(x)
return x
model_a = keras.models.Sequential(
[
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(256, activation="relu"),
keras.layers.Dropout(0.2),
keras.layers.Dense(10, activation="softmax"),
]
)
callbacks = [TensorBoard(histogram_freq=1)]
trainer_1 = MyTrainer(model_a)
trainer_1.compile(optimizer=keras.optimizers.SGD())
trainer_1.fit(
x_train, y_train, epochs=5, batch_size=64, validation_data=(x_test, y_test), callbacks=callbacks,
)
``` | open | 2025-01-24T09:58:16Z | 2025-02-27T17:18:14Z | https://github.com/keras-team/keras/issues/20809 | [
"stat:awaiting keras-eng",
"type:Bug"
] | GeraudK | 7 |
docarray/docarray | fastapi | 1,223 | Link to join discord server has expired | **Describe the bug**
The link to join the discord server on the `README.md` file has expired, please it needs to be updated. | closed | 2023-03-11T09:50:02Z | 2023-03-14T09:16:33Z | https://github.com/docarray/docarray/issues/1223 | [] | asuzukosi | 2 |
dropbox/sqlalchemy-stubs | sqlalchemy | 67 | Can't use ForeignKey in Column | Hi the following used to work until https://github.com/dropbox/sqlalchemy-stubs/pull/54
```
from sqlalchemy import Column, ForeignKey
x = Column(
"template_group_id",
ForeignKey("template_group.template_group_id"),
nullable=False,
unique=True,
)
```
Is there a workaround other than `#type: ignore`? | closed | 2019-02-05T14:44:49Z | 2019-02-17T15:26:49Z | https://github.com/dropbox/sqlalchemy-stubs/issues/67 | [
"bug",
"priority-normal",
"topic-stubs"
] | euresti | 2 |
strawberry-graphql/strawberry | fastapi | 3,807 | `pydantic._internal._typing_extra.is_new_type` was removed in a beta version of pydantic | <!-- Provide a general summary of the bug in the title above. -->
A Beta version of pydantic (i.e. version `v.2.11.0b1`) removed `pydantic._internal._typing_extra.is_new_type` in favor of using `typing_inspection.typing_objects.is_newtype` which is a new dependency of pydantic.
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
<!-- A clear and concise description of what the bug is. -->
```
from pydantic._internal._typing_extra import is_new_type
ImportError: cannot import name 'is_new_type' from 'pydantic._internal._typing_extra'
```
## System Information
- Operating system: N/A
- Strawberry version (if applicable):
## Additional Context
https://github.com/pydantic/pydantic/pull/11479
<!-- Add any other relevant information about the problem here. -->
| closed | 2025-03-13T09:08:28Z | 2025-03-13T12:48:58Z | https://github.com/strawberry-graphql/strawberry/issues/3807 | [
"bug"
] | jptrindade | 0 |
whitphx/streamlit-webrtc | streamlit | 1,188 | Add/replace the demo model with Hugging Face | The current object detection model is old and should be replaced with modern ones.
Use https://huggingface.co/docs/hub/models-uploading, https://huggingface.co/docs/huggingface_hub/index | open | 2023-02-10T06:24:09Z | 2023-02-10T07:15:33Z | https://github.com/whitphx/streamlit-webrtc/issues/1188 | [] | whitphx | 0 |
strnad/CrewAI-Studio | streamlit | 36 | tasks limit | Hello,
Thank you for your project. Is there a limit to the number of tasks that can be added per project? I have 5 agents and I need to assign a unique task to each one of them. For some reason I can only add 3 tasks.
Thanks! | closed | 2024-11-12T21:54:51Z | 2024-11-15T04:47:38Z | https://github.com/strnad/CrewAI-Studio/issues/36 | [] | adelorenzo | 1 |
huggingface/datasets | nlp | 7,116 | datasets cannot handle nested json if features is given. | ### Describe the bug
I have a json named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value('string'),
'cuts': datasets.Sequence({
"cut1": datasets.Value("uint16"),
"cut2": datasets.Value("uint16")
})
}))
```
The above code does not work. However, I can load it without giving features.
```python
ds = datasets.load_dataset('json', data_files="./temp.json")
```
Is it possible to load integers as uint16 to save some memory?
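A possible workaround (my suggestion, not from the original report): in `datasets`, a `Sequence` whose inner feature is a dict is treated as a dict of lists, which does not match a JSON list of objects; I believe declaring the nested field with plain list syntax keeps the list-of-structs layout and still lets the integers be typed as uint16:
```python
# Hypothetical feature spec using list syntax for the nested objects.
import datasets

features = datasets.Features({
    "ref1": datasets.Value("string"),
    "ref2": datasets.Value("string"),
    "cuts": [{                      # a list of structs, matching the JSON shape
        "cut1": datasets.Value("uint16"),
        "cut2": datasets.Value("uint16"),
    }],
})
ds = datasets.load_dataset("json", data_files="./temp.json", features=features)
```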
### Steps to reproduce the bug
As in the bug description.
### Expected behavior
The data are loaded and integers are uint16.
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | closed | 2024-08-20T12:27:49Z | 2024-09-03T10:18:23Z | https://github.com/huggingface/datasets/issues/7116 | [] | ljw20180420 | 3 |
deepspeedai/DeepSpeed | deep-learning | 5,579 | [BUG] fp6 can't load qwen1.5-34b-chat | **Describe the bug**
NotImplementedError: Cannot copy out of meta tensor; no data!
```python
import mii
model_path = 'Qwen1.5-32B-Chat-hf'
pipe = mii.pipeline(model_path, quantization_mode='wf6af16')
response = pipe(["DeepSpeed is", "Seattle is"], max_new_tokens=128)
print(response)
```
**System info (please complete the following information):**
- OS: Ubuntu 20.04
- GPU A100
- Python 3.11
**Stack trace**


thanks for your help! | open | 2024-05-29T07:32:04Z | 2024-05-29T07:32:42Z | https://github.com/deepspeedai/DeepSpeed/issues/5579 | [
"bug",
"inference"
] | pointerhacker | 0 |
cookiecutter/cookiecutter-django | django | 5,500 | Heroku - Default Redis connection uses TLS | ## What happened?
Heroku now uses a TLS connection for Redis by default. The current settings parameter only supports a non-secure connection.
## What should've happened instead?
The settings for production Heroku deployment should default to secure TLS connection using rediss://
## Additional details
Adding
broker_use_ssl = {
"cert_reqs": ssl.CERT_NONE,
}
still throws `raise ValueError(E_REDIS_SSL_CERT_REQS_MISSING_INVALID)`
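For what it's worth, a hedged sketch of the settings change commonly used for Heroku's TLS Redis (my assumption about this template's settings layout and its CELERY_ settings namespace; names like `REDIS_URL` here are illustrative):
```python
# Hypothetical Django/Celery settings sketch: Heroku Redis now hands out a
# rediss:// URL with a self-signed certificate, so Celery needs ssl_cert_reqs.
import ssl

REDIS_URL = "rediss://..."  # e.g. read from the REDIS_URL environment variable

if REDIS_URL.startswith("rediss://"):
    # Some Celery/kombu versions expect "CERT_NONE" instead of "none" here.
    CELERY_BROKER_URL = f"{REDIS_URL}?ssl_cert_reqs=none"
    CELERY_REDIS_BACKEND_USE_SSL = {"ssl_cert_reqs": ssl.CERT_NONE}
else:
    CELERY_BROKER_URL = REDIS_URL
```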
- Host system configuration:
- Version of cookiecutter CLI (get it with `cookiecutter --version`):
- OS name and version:
On Linux, run
```bash
lsb_release -a 2> /dev/null || cat /etc/redhat-release 2> /dev/null || cat /etc/*-release 2> /dev/null || cat /etc/issue 2> /dev/null
```
On MacOs, run
```bash
sw_vers
```
On Windows, via CMD, run
```
systeminfo | findstr /B /C:"OS Name" /C:"OS Version"
```
```bash
# Insert here the OS name and version
```
- Python version, run `python3 -V`: 3.11.10
- Docker version (if using Docker), run `docker --version`: N/A
- docker compose version (if using Docker), run `docker compose --version`:
- ...
- Options selected and/or [replay file](https://cookiecutter.readthedocs.io/en/latest/advanced/replay.html):
On Linux and macOS: `cat ${HOME}/.cookiecutter_replay/cookiecutter-django.json`
(Please, take care to remove sensitive information)
```json
```
<summary>
Logs:
<details>
<pre>
$ cookiecutter https://github.com/cookiecutter/cookiecutter-django
project_name [Project Name]: ...
</pre>
</details>
</summary>
| closed | 2024-10-29T18:33:26Z | 2024-11-21T12:56:00Z | https://github.com/cookiecutter/cookiecutter-django/issues/5500 | [
"bug",
"Heroku"
] | sumith | 6 |
Farama-Foundation/PettingZoo | api | 840 | [Bug Report] Waterworld random freezes | **Describe the bug**
Not quite sure what it is yet, but when waterworld is continuously stepped and reset, sometimes it just freezes.
**Code example**
```python
from pettingzoo.sisl import waterworld_v4
env = waterworld_v4.parallel_env(
render_mode="rgb_array",
    n_obstacles=2,
obstacle_coord=None,
)
iters = 0
while True:
env.reset()
terms = [False]
truncs = [False]
print(iters)
while not (any(terms) or any(truncs)):
iters += 1
# sample_actions
action_dict = {
a: env.action_space(a).sample() for a in env.possible_agents
}
# step env
next_obs, rew, terms, truncs, _ = env.step(action_dict)
terms = terms.values()
truncs = truncs.values()
```
Managed to replicate it on 3 different machines, my guess is that there's a rogue `while` loop somewhere according to the diagnosis below.
The logs of the run are [here](https://wandb.ai/jjshoots/waterworld_sweep/sweeps/24dg055u)
Notice by the 2 hour mark everything grinds to a halt. It's not a memory leak AFAICT.
What's weird is that when the above script freezes and you press ctrl-c, it continues running afterwards. Here's the stack trace:
```
0
500
1000
1500
2000
2500
3000
3500
4000
^CException ignored from cffi callback <function CollisionHandler._set_separate.<local
s>.cf at 0x7f060d4d52d0>:
Traceback (most recent call last):
File "/home/jet/Sandboxes/waterworld_sweep/venv/lib/python3.10/site-packages/pymunk/
collision_handler.py", line 199, in cf
func(Arbiter(_arb, self._space), self._space, self._data)
File "/home/jet/Sandboxes/waterworld_sweep/venv/lib/python3.10/site-packages/petting
zoo/sisl/waterworld/waterworld_base.py", line 697, in pursuer_evader_separate_callback
x, y = self._generate_coord(evader_shape.radius)
File "/home/jet/Sandboxes/waterworld_sweep/venv/lib/python3.10/site-packages/petting
zoo/sisl/waterworld/waterworld_base.py", line 250, in _generate_coord
while (
KeyboardInterrupt:
4500
5000
5500
6000
6500
7000
7500
``` | closed | 2022-10-29T00:42:37Z | 2022-10-29T12:20:36Z | https://github.com/Farama-Foundation/PettingZoo/issues/840 | [] | jjshoots | 0 |
axnsan12/drf-yasg | rest-api | 871 | Headers Not Getting Sent Along with Request! | I am trying to use drf-yasg so that we can have an UI for our customers. I am passing headers with my API , however it is not getting passed to the backend.

Code:
```
openapi.Parameter(name='tenantUseCase',in_="headers",
description="tenantUseCase parameter (optional field)", type="string"),
openapi.Parameter(name='serverSubnet',in_="headers",
description="serverSubnet parameter (optional field)", type="string"),
openapi.Parameter(name='lbpair',in_="headers",
description="lbpair parameter (optional field)", type="string"),
openapi.Parameter(name='afinitiFlag',in_="headers",
description="afinitiFlag parameter (optional field)", type="string")],
```
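One hedged observation (mine, not from the report): drf-yasg expects the `in_` argument to be the OpenAPI location constant, i.e. `openapi.IN_HEADER` (the string `"header"`, singular) for header parameters; with an unrecognized value like `"headers"`, the generated spec may not declare them as header parameters at all, which would explain why they are not sent. A minimal sketch:
```python
# Hypothetical corrected parameter declaration using the IN_HEADER constant.
from drf_yasg import openapi

tenant_param = openapi.Parameter(
    name="tenantUseCase",
    in_=openapi.IN_HEADER,  # "header" (singular), not "headers"
    description="tenantUseCase parameter (optional field)",
    type=openapi.TYPE_STRING,
)
```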
Any help is appreciated! | open | 2023-10-18T01:50:34Z | 2025-03-07T12:09:08Z | https://github.com/axnsan12/drf-yasg/issues/871 | [
"triage"
] | sindhujit1 | 0 |
nonebot/nonebot2 | fastapi | 2,536 | Feature: support sub-plugins with the same name | ### Problem to solve
Given the following file structure:
```
src/
plugins/
plugin1/
plugins/
name.py
plugin2/
plugins/
name.py
```
There is a `name.py` among the sub-plugins of `plugin1`, and another `name.py` among the sub-plugins of `plugin2`. In this case only one of the two `name.py` files is loaded; `nonebot` does not seem to load two plugins with the same name. Could the tuple of all plugin names, from the top-level plugin down to the current one, be stored as the `id` in the plugin cache so that sub-plugins with the same name are supported?
### Describe the desired feature
As the title says: support sub-plugins with the same name. | closed | 2024-01-16T08:19:42Z | 2024-04-20T06:47:13Z | https://github.com/nonebot/nonebot2/issues/2536 | [
"enhancement"
] | uf-fipper | 5 |
Lightning-AI/pytorch-lightning | data-science | 19,830 | WandbLogger `save_dir` and `dir` parameters do not work as expected. | ### Bug description
The `save_dir` param is set to `.` by default and hence `if save_dir is not None:` is always False.
https://github.com/Lightning-AI/pytorch-lightning/blob/d1949766f8cddd424e2fac3a68b275bebe13d3e4/src/lightning/pytorch/loggers/wandb.py#L327C12-L327C20
When a user only specifies `dir` and does not specify `save_dir`, the current implementation ignores `dir` because `save_dir` is not None (it is set to `.` by default). The priority should be: 1. use `save_dir` if provided, 2. use `dir` if `save_dir` is not present, 3. fall back to `.`, which is the default.
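A minimal sketch of the requested resolution order (a hypothetical helper of mine, not Lightning's current code):
```python
# Hypothetical helper illustrating the proposed priority for WandbLogger paths.
def resolve_save_dir(save_dir: str | None = None, dir: str | None = None) -> str:
    if save_dir is not None:   # 1. explicit save_dir wins
        return save_dir
    if dir is not None:        # 2. otherwise fall back to dir
        return dir
    return "."                 # 3. final default
```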
If this is not a duplicate of an issue and the workflow is expected as described above, let me know and I will be interested in creating a PR to fix this.
### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
```python
wandb_logger = WandbLogger(
project="PROJECT", dir=ignored_dir, id=timestr, name=config.run_id, resume=False, job_type="testing", log_model="all")
```
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | open | 2024-04-30T20:18:16Z | 2025-02-28T16:48:47Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19830 | [
"bug",
"needs triage",
"ver: 2.1.x"
] | Jigar1201 | 1 |
tqdm/tqdm | pandas | 919 | Feature Request Variable Length | I'm working on making the tqdm progress bar in [Pytorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) accurate
The problem is it has both training (longer) and eval (shorter) steps in the same main progress bar.
Would be really super helpful if I could pass a list of tuples to tqdm like [(num_train_samples, init_avg_time_per_train), (num_eval_sample, init_avg_time_per_eval) ...] and tqdm would keep track of the partitioned set, updating average times.
I could calculate this percentage manually, and then update tqdm on a say 100 point scale, but then tqdm would no longer show index/total_examples which is necessary.
Thoughts? Thanks | open | 2020-03-19T17:27:51Z | 2020-03-19T18:42:11Z | https://github.com/tqdm/tqdm/issues/919 | [
"need-feedback 📢"
] | pertschuk | 3 |
arogozhnikov/einops | tensorflow | 312 | What am I doing wrong with repeat command? | So I have a batch of images(grayscale) which I want to imitate as 3 channel images, because my model expects them.
I found the tutorial and examples however when I try to apply them I always get varying, unexpected results.
So my image Batch has the shape `[B, C, H, W] - 128x1x224x224` from a torch dataloader/dataset.
I thought easy, I can change the 1 to a 3 with einops by doing:
`repeat(images, "b c h w -> b repeat(c) h w", repeat=3)`
But this returns `128x3x1x224x224`, which I do not really understand, the 1 should not be there anymore, since my target vector has a length of 4? I though about using reduce afterwards to get rid of the 1 dimension, but since there are some functions involved (max...) I was not sure if this alters the result.
Can you help me?
| closed | 2024-03-27T13:18:20Z | 2024-03-27T18:24:07Z | https://github.com/arogozhnikov/einops/issues/312 | [] | asusdisciple | 1 |
waditu/tushare | pandas | 1,737 | realtime_quote returns data when src='dc', but returns None when src='sina' | Python 3.10.12
Reproduction code:
```
import tushare as ts
# Set your token; log in to tushare and copy it from the personal user center
ts.set_token('your token')
# Eastmoney (dc) data: returned correctly
df = ts.realtime_quote(ts_code='600000.SH', src='dc')
# Returned data is None
df = ts.realtime_quote(ts_code='600000.SH,000001.SZ,000001.SH')
```
However, on another machine (Python 3.10.4), using the same token, both sina and dc return data correctly. | closed | 2024-06-14T07:38:55Z | 2024-06-14T07:47:56Z | https://github.com/waditu/tushare/issues/1737 | [] | Xingtao | 1 |
aio-libs/aiomysql | sqlalchemy | 457 | Is charset required? | The charset must be set, or an error is reported. | open | 2019-12-19T02:56:36Z | 2022-01-13T00:33:01Z | https://github.com/aio-libs/aiomysql/issues/457 | [
"question"
] | wangguanfu | 0 |
akfamily/akshare | data-science | 5,594 | stock_zh_a_spot_em() only returns 200 rows | It was found that stock_zh_a_spot_em() only returned 200 rows in the afternoon on 2025-02-15.
Has this API function changed?
| closed | 2025-02-15T08:58:35Z | 2025-02-19T13:43:14Z | https://github.com/akfamily/akshare/issues/5594 | [] | aistrategycloud | 8 |
python-restx/flask-restx | flask | 97 | 'BlueprintSetupState' object has no attribute 'subdomain' | ### **Code**
```python
from quart import Blueprint, Quart
from flask_restplus import Api
blueprint = Blueprint('api', __name__)
api = Api(blueprint)
app = Quart(__name__)
app.register_blueprint(blueprint)
```
### **Repro Steps** (if applicable)
1. start the app
### **Expected Behavior**
App starts successfully
### **Actual Behavior**
An exception occurs
### **Error Messages/Stack Trace**
```
Traceback (most recent call last):
File "manage.py", line 20, in <module>
app.register_blueprint(blueprint)
File "C:\Users\username\AppData\Local\pypoetry\Cache\virtualenvs\auto-test-system-4gCsnwKR-py3.7\lib\site-packages\quart\app.py", line 1422, in register_blueprint
blueprint.register(self, first_registration, url_prefix=url_prefix)
File "C:\Users\username\AppData\Local\pypoetry\Cache\virtualenvs\auto-test-system-4gCsnwKR-py3.7\lib\site-packages\quart\blueprints.py", line 755, in register
func(state)
File "C:\Users\username\AppData\Local\pypoetry\Cache\virtualenvs\auto-test-system-4gCsnwKR-py3.7\lib\site-packages\flask_restx\api.py", line 840, in _deferred_blueprint_init
self._init_app(setup_state.app)
File "C:\Users\username\AppData\Local\pypoetry\Cache\virtualenvs\auto-test-system-4gCsnwKR-py3.7\lib\site-packages\flask_restx\api.py", line 228, in _init_app
self._register_specs(self.blueprint or app)
File "C:\Users\username\AppData\Local\pypoetry\Cache\virtualenvs\auto-test-system-4gCsnwKR-py3.7\lib\site-packages\flask_restx\api.py", line 285, in _register_specs
resource_class_args=(self,),
File "C:\Users\username\AppData\Local\pypoetry\Cache\virtualenvs\auto-test-system-4gCsnwKR-py3.7\lib\site-packages\flask_restx\api.py", line 349, in _register_view
url, view_func=resource_func, **kwargs
File "C:\Users\username\AppData\Local\pypoetry\Cache\virtualenvs\auto-test-system-4gCsnwKR-py3.7\lib\site-packages\flask_restx\api.py", line 803, in _blueprint_setup_add_url_rule_patch
options.setdefault("subdomain", blueprint_setup.subdomain)
AttributeError: 'BlueprintSetupState' object has no attribute 'subdomain'
```
### **Environment**
- Python 3.7
- Quart 0.11.3
- Flask-RESTX 0.2.0
- Other installed Flask extensions
### **Additional Context**
I know it's due to the different implementations of Flask and Quart, especially add_url_rule in this case, and I understand Quart is not flask-restx's target. But if there were any chance to bring this excellent extension to Quart, that would be great and should be easy. Thanks a lot. | open | 2020-03-23T06:11:32Z | 2022-10-17T11:33:45Z | https://github.com/python-restx/flask-restx/issues/97 | [
"bug"
] | pansila | 3 |
twopirllc/pandas-ta | pandas | 441 | Multiple runtime warning errors | **Which version are you running? The lastest version is on Github. Pip is for major releases.**
0.3.14b0
**Do you have _TA Lib_ also installed in your environment?**
TA-Lib 0.4.21
**Describe the bug**
Adding all indicators shows the following warning errors:
```sh
/usr/local/lib/python3.8/dist-packages/pandas_ta/overlap/linreg.py:53: RuntimeWarning: invalid value encountered in double_scalars
return rn / rd
/usr/local/lib/python3.8/dist-packages/pandas/core/arraylike.py:364: RuntimeWarning: divide by zero encountered in log10
result = getattr(ufunc, method)(*inputs, **kwargs)
```
**To Reproduce**
Use a large dataset (>500MB) and apply all indicators as follows:
```python
mi_estrategia = pandas_ta.Strategy(
name="Data Strategy",
description="Test.",
ta=[
{"kind": "sma", "length": 10},
{"kind": "sma", "length": 20},
{"kind": "sma", "length": 40},
{"kind": "sma", "length": 60},
{"kind": "sma", "length": 80},
{"kind": "sma", "length": 100},
]
)
# Indicate which indicators we want to exclude. Normally these will be the ones we already put in the strategy above.
excluded_indicators = [
"sma",
"ad",
]
# Apply our methods.
data.ta.strategy(mi_estrategia)
# Apply all possible indicators.
data.ta.strategy(exclude=excluded_indicators)
```
**Expected behavior**
No warning errors
| closed | 2021-11-28T10:51:56Z | 2021-12-03T16:52:05Z | https://github.com/twopirllc/pandas-ta/issues/441 | [
"duplicate",
"help wanted",
"question",
"info"
] | Pl0414141 | 2 |
holoviz/panel | plotly | 7,272 | Inconsistent handling of (start, end, value) in DatetimeRangeSlider and DatetimeRangePicker widget | #### ALL software version info
<details>
<summary>Software Version Info</summary>
```plaintext
panel 1.4.5
param 2.1.1
```
</details>
#### Description of expected behavior and the observed behavior
* DatetimeRangePicker should allow changing `start` and `end` without raising out-of-bound exception
* `value` of DatetimeRange* widgets is always between `start` and `end` parameter or an Exception is raised
* same behavior of DatetimeRangeSlider and DatetimeRangePicker widget on this issue
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel as pn
import datetime as dt
dtmin = dt.datetime(1000, 1, 1)
dtlow = dt.datetime(2000, 1, 1)
dtmax = dt.datetime(3000, 1, 1)
# increasing (start, end) and set value=(start, end) SHOULD WORK !
sel_dtrange = pn.widgets.DatetimeRangeSlider(start=dtmin, end=dtlow, value=(dtmin, dtlow))
sel_dtrange.param.update(start=dtmin, end=dtmax, value=(dtmin, dtmax)) # OK
sel_dtrange = pn.widgets.DatetimeRangePicker(start=dtmin, end=dtlow, value=(dtmin, dtlow))
sel_dtrange.param.update(start=dtmin, end=dtmax, value=(dtmin, dtmax)) # ERROR
sel_dtrange.param.update(start=dtmin, end=dtmax) # increasing (start, end) without setting value works
```
#### Stack traceback and/or browser JavaScript console output
```
---> [12] sel_dtrange.param.update(start=dtmin, end=dtmax, value=(dtmin, dtmax)) # ERROR
ValueError: DateRange parameter 'DatetimeRangePicker.value' upper bound must be in range [1000-01-01 00:00:00, 2000-01-01 00:00:00], not 3000-01-01 00:00:00.
```
#### Additional Info
On the contrary, the `DatetimeRangeSlider` does not raise an exception although `value` is out of bounds, which might also not be expected by the user.
```python
import panel as pn
import datetime as dt
dtmin = dt.datetime(1000, 1, 1)
dtlow = dt.datetime(2000, 1, 1)
dtmax = dt.datetime(3000, 1, 1)
# reducing (start, end) without correcting out-of-range value SHOULD FAIL !
sel_dtrange = pn.widgets.DatetimeRangeSlider(start=dtmin, end=dtmax, value=(dtmin, dtmax))
sel_dtrange.param.update(start=dtmin, end=dtlow) # ERROR as value is out of bounds and should raise
sel_dtrange = pn.widgets.DatetimeRangePicker(start=dtmin, end=dtmax, value=(dtmin, dtmax))
#sel_dtrange.param.update(start=dtmin, end=dtlow) # OK, fails as value is out of bounds
sel_dtrange.param.update(start=dtmin, end=dtlow, value=(dtmin, dtlow)) # OK, setting value to reduced bounds works
```
| open | 2024-09-13T13:52:03Z | 2024-09-13T19:23:43Z | https://github.com/holoviz/panel/issues/7272 | [] | rhambach | 0 |
GibbsConsulting/django-plotly-dash | plotly | 515 | DjangoDash type hint results in error | There's a type hint on as_dash_app that uses DjangoDash but it is not imported.
<img width="787" alt="Image" src="https://github.com/user-attachments/assets/91327d3c-45fe-4f1a-85fb-0f37c8fcdecb" /> | closed | 2025-02-05T20:46:51Z | 2025-02-06T03:54:27Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/515 | [
"bug"
] | dick-mule | 2 |
RasaHQ/rasa | machine-learning | 12,373 | Add random_state (as keyword argument?) to generate_folds and use it when executing stratified sampling | https://github.com/RasaHQ/rasa/blob/8e786957508dd0c23ddf81440d02409ff2107ca1/rasa/nlu/test.py#L1450-L1456
To allow replicability of results while cross-validating, it would be useful to be able to provide the random_state variable as an input to generate_folds and then use it when calling sklearn.model_selection.StratifiedKFold. Right now any cross validation randomly shuffles the dataset and provides different folds every time generate_folds is called. | closed | 2023-05-09T09:06:17Z | 2023-05-17T09:33:18Z | https://github.com/RasaHQ/rasa/issues/12373 | [] | MarcelloGiannini | 1 |
developmentseed/lonboard | jupyter | 762 | Animate `ColumnLayer` | **Is your feature request related to a problem? Please describe.**
I'm currently using lonboard to animate geospatial data with the `TripsLayer` and it's working great. I'd like to extend this to animate statistics about point locations using the `ColumnLayer`, but it doesn't currently support animation.
**Describe the solution you'd like**
An API similar to `TripsLayer` for the `ColumnLayer` to animate column values, preferably allowing different colored stacks at a single geometry over a time range.
**Additional context**
My end goal is something similar to this video of agent-based travel:
https://www.youtube.com/watch?v=B0v2Wi5t7Go
| open | 2025-02-25T06:44:31Z | 2025-03-05T16:06:24Z | https://github.com/developmentseed/lonboard/issues/762 | [] | Jake-Moss | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,490 | CUDA version | I followed readme to install the environment but come across this issue "NVIDIA GeForce RTX 3080 Laptop GPU with CUDA capability sm_86 is not compatible with the current PyTorch installation.The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37."
I changed cuda version to 10.2,11.0,11.2,but all don't work well,does anyone know how to fix it?
| open | 2022-10-06T14:23:59Z | 2024-03-12T03:11:53Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1490 | [] | AriaZiyanYang | 2 |
amdegroot/ssd.pytorch | computer-vision | 31 | bug: test.py (BaseTransform) | line41: `x = torch.from_numpy(transform(img)[0]).permute(2, 0, 1)` is not change the bgr to rgb. It's not equal to the `dataset = VOCDetection(args.voc_root, [('2007', set_type)], BaseTransform(300, dataset_mean), AnnotationTransform())` (it change bgr to rgb).
So, I think it's better to add change the line138 `img = img[:, :, (2, 1, 0)]` in voc0712 to the base_transform function's * (The results will not change too much if we set vis_threshold=0.6, however in the eval.py, if we use BaseTransform out the dataset, it will change the mAP) | closed | 2017-06-23T12:43:28Z | 2019-09-09T10:33:48Z | https://github.com/amdegroot/ssd.pytorch/issues/31 | [] | AceCoooool | 6 |
SALib/SALib | numpy | 124 | Add new method - Distance-Based Generalized Sensitivity Analysis | Add the [Distance-Based Generalized Sensitivity Analysis](http://link.springer.com/article/10.1007/s11004-014-9530-5) method to the library | open | 2017-01-03T11:28:58Z | 2019-02-05T15:27:01Z | https://github.com/SALib/SALib/issues/124 | [
"enhancement",
"add_method"
] | willu47 | 2 |
matterport/Mask_RCNN | tensorflow | 2,290 | Source code of mask_rcnn_coco.h5 file??? | We know that the original paper uses different types of ResNet (https://github.com/facebookresearch/Detectron/blob/master/MODEL_ZOO.md) but where does the author (@matterport) take/make/train the mask_rcnn_coco.h5 file?
Did he train the model? Where is the source code from that model? | closed | 2020-07-23T16:35:18Z | 2020-07-23T17:05:15Z | https://github.com/matterport/Mask_RCNN/issues/2290 | [] | MatiasLoiseau | 0 |
okken/pytest-check | pytest | 175 | pylint compatibility + pycharm intellisense compatibility | Hello,
I found `pytest-check` and think its quite useful for us.
We are using pylint for the static code analysis of our python code and tests.
`pytest-check` introduces false positives for pylint like these:
```
from pytest_check import check
...
check.equal(response.status.code, ERROR_CODE_SUCCESSFUL)
```
```
E1101: Instance of 'CheckContextManager' has no 'equal' member (no-member)
```
I know I can suppress this false positive, but I'd prefer to not have to do this.
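For reference, the suppression I'm referring to is just a per-line pylint directive; a minimal sketch:

```python
from pytest_check import check

def test_status_code():
    check.equal(1 + 1, 2)  # pylint: disable=no-member
```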
Also, PyCharm's IntelliSense seems to have the same issue: it doesn't suggest `equal` or the related member functions.
I know that this is not necessarily an issue of `pytest-check` but rather PyLint or PyCharm.
But since this tool is meant to be used widely (right?), I thought there should be a standard recommended solution available, different from just suppressing the false positives.
What do you think? | closed | 2025-03-04T17:46:37Z | 2025-03-18T20:46:44Z | https://github.com/okken/pytest-check/issues/175 | [] | cre4ture | 3 |
QingdaoU/OnlineJudge | django | 17 | Can this system currently be deployed to DaoCloud? | Update:
I took a look today and GoOnlineJudge can be deployed directly to DaoCloud via https://github.com/ZJGSU-Open-Source/docker-oj
Can this system be deployed to DaoCloud in a similar way?
| closed | 2016-02-19T10:30:12Z | 2016-02-22T03:53:37Z | https://github.com/QingdaoU/OnlineJudge/issues/17 | [] | Shuenhoy | 11 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 1,231 | UNION query unable to resolve correct bind and fails with "(sqlite3.OperationalError) no such table" | This is related to #1210 and in general about finding correct bind to use for execution.
We were looking to upgrade to v3 to take advantage of the dynamic bind changes we need, and came across this problem: `UNION` queries (and maybe others too) fall back to using the default bind (the `None` key in `SQLALCHEMY_BINDS`). Below is a sample test case that reproduces the problem:
```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
import sqlalchemy as sa
from sqlalchemy import func, union
def test_union_query():
app = Flask("test-app")
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///"
app.config["SQLALCHEMY_BINDS"] = {"db1": "sqlite:///"}
db = SQLAlchemy(app)
class UserType1(db.Model):
__bind_key__ = "db1"
__tablename__ = "user_type_1"
id = sa.Column(sa.Integer, primary_key=True)
user_type = sa.Column(sa.Integer, nullable=False)
name = sa.Column(sa.String(50), nullable=False)
class UserType2(db.Model):
__bind_key__ = "db1"
__tablename__ = "user_type_2"
id = sa.Column(sa.Integer, primary_key=True)
user_type = sa.Column(sa.Integer, nullable=False)
name = sa.Column(sa.String(50), nullable=False)
with app.app_context():
db.create_all()
db.session.add(UserType1(id=1, user_type=1, name="alice"))
db.session.add(UserType1(id=2, user_type=1, name="bob"))
db.session.add(UserType2(id=2, user_type=2, name="charlie"))
query_1 = db.session.query(
func.count(UserType1.name).label("count"),
UserType1.user_type.label("user_type"),
).group_by(UserType1.user_type)
assert db.session.query(query_1.subquery()).all()
query_2 = db.session.query(
func.count(UserType2.name).label("count"),
UserType2.user_type.label("user_type"),
).group_by(UserType2.user_type)
assert db.session.query(query_2.subquery()).all()
union_query = union(query_1, query_2)
assert len(db.session.query(union_query.subquery()).all()) == 2
```
The individual queries work as expected, however as soon as we do `union(query_1, query_2)` we get a nasty error:
`FAILED tests/test_union.py::test_union_query - sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: user_type_2`
Debugging this further it appears that `Session.get_bind` is called only with `clause` argument and `mapper` is `None`. The `clause` is some selectable: `<sqlalchemy.sql.selectable.Select object at 0x7f260bf6a5e0>` but this doesn't have `metadata`, `bind` or `table` attributes.
Looking further still, this seems to be an issue with SqlAlchemy 1.4.x in general (I tested with 1.4.0, 1.4.10, 1.4.18, 1.4.19, 1.4.19 and 1.4.48), and upgrading to SQLAlchemy 2.x (2.0.17 at the time of writing) seem to resolve issue and `mapper` is now passed in to `Session.get_bind` as `<Mapper at 0x7fbc1ba1ecd0; UserType1>`. So my suspicion is that for a (small) subset of queries like union, flask_sqlalchemy is not compatible with sqlalchemy 1.4.x and requires version 2.
I was not able to come up with a detection mechanism for `clause` that would resolve to a table in this instance, so that a bind can be found. Any thoughts? I am also going to file this with sqlalchemy in hope they can patch to 1.4.49 to have this fix in as unfortunately we have another dependency with hard pin `sqlalchemy<2` so upgrading is not currently an option.
Do you want me to create a PR with the test case above?
Environment:
- Python version: 3.9.16
- Flask-SQLAlchemy version: 3.0.5
- SQLAlchemy version: 1.4.48
| closed | 2023-06-30T11:54:26Z | 2023-07-21T01:10:42Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1231 | [] | alexjironkin | 2 |
GibbsConsulting/django-plotly-dash | plotly | 477 | Using dash-tradingview dash_tvlwc breaks the app in Django | Hi, I have a functioning Dash app using [dash_tvwlc](https://github.com/tysonwu/dash-tradingview) that works fine if I run it on its own. When I try to display it in Django, I get an error that its JS file can't be downloaded from unpkg.com:
<img width="933" alt="image" src="https://github.com/GibbsConsulting/django-plotly-dash/assets/149114346/983ff9b0-f03c-4540-b751-d7dd21db1f1f">
When I run the app on its own outside of Django, it serves the JS locally. Any idea how I can make that work? | closed | 2023-10-26T20:55:58Z | 2023-10-28T18:48:54Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/477 | [
"question"
] | notgonnamakeit | 2 |
neuml/txtai | nlp | 821 | Cloud storage improvements | Make the following improvements to the cloud storage component.
- When loading an archive file from cloud storage, create the full local directory, if necessary
- Make check for `provider` case insensitive
- Add `prefix` key to documentation
- Clarify how to find `provider` strings | closed | 2024-11-27T21:37:00Z | 2024-11-27T21:42:42Z | https://github.com/neuml/txtai/issues/821 | [] | davidmezzetti | 0 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 321 | Details not changing while applying Jobs | I have changes the personal details in resume.yaml and also changed config.yaml and it is running successful but when the bot applying the job it is not taking my details instead taking Liam details and also resume of liam not sure why i am getting this issue and i also tried to delete the data_folder_example but same issue. | closed | 2024-09-08T15:06:05Z | 2024-09-08T22:47:27Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/321 | [] | manishjangra28 | 4 |
modin-project/modin | pandas | 6,626 | BUG: apply contains extra columns after goupby and selected columns | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [ ] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
import modin.pandas as pd
import numpy as np
df = pd.DataFrame(np.random.rand(5, 10), index=[f's{i+1}' for i in range(5)])
selected_cols = df.columns.values
df['new_col'] = 0
def func(df):
assert('new_col' not in df)
return 1
df.groupby('new_col', group_keys=True)[selected_cols].apply(func)
```
### Issue Description
The `apply` function on selected columns after `groupby` does not work in Modin.
### Expected Behavior
Vanilla pandas supports this feature: only the selected columns enter the apply function.
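For reference, a minimal plain-pandas sketch of the behaviour I expect (only the selected columns reach the applied function):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(5, 3), columns=["a", "b", "c"])
df["new_col"] = 0

out = df.groupby("new_col", group_keys=True)[["a", "b", "c"]].apply(
    lambda g: ", ".join(g.columns)  # each group sees only 'a, b, c', never 'new_col'
)
print(out)
```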
### Error Logs
<details>
```python-traceback
assert('new_col' not in df)
AssertionError
```
</details>
### Installed Versions
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : b5545c686751f4eed6913f1785f9c68f41f4e51d
python : 3.8.10.final.0
python-bits : 64
OS : Linux
OS-release : 5.10.16.3-microsoft-standard-WSL2
Version : #1 SMP Fri Apr 2 22:23:49 UTC 2021
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.23.1
ray : 2.7.0
dask : 2023.5.0
distributed : 2023.5.0
hdk : None
pandas dependencies
-------------------
pandas : 2.0.3
numpy : 1.24.4
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 56.0.0
pip : 23.2.1
Cython : None
pytest : 7.4.2
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : None
pandas_datareader: None
bs4 : 4.12.2
bottleneck : None
brotli : None
fastparquet : None
fsspec : 2023.9.2
gcsfs : None
matplotlib : 3.7.3
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 13.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
| open | 2023-10-03T15:53:26Z | 2023-10-10T14:26:57Z | https://github.com/modin-project/modin/issues/6626 | [
"bug 🦗",
"P1",
"External"
] | SiRumCz | 2 |
ultralytics/ultralytics | pytorch | 19,255 | How to post process the outputs of yolov8n-seg model which gives two output tensors output0(1,116,8400) and output1(1,32,160,160)? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
How to get bounding box data and segmentation mask data from those output tensors so that I can overlay them on the source image?
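For context, here is my current understanding of the layout, sketched in NumPy; the 4 box coords + 80 class scores + 32 mask coefficients split and the 160x160 prototypes are assumptions on my side (placeholder arrays stand in for the real model outputs):

```python
import numpy as np

output0 = np.zeros((1, 116, 8400), dtype=np.float32)     # placeholder for the real output0
output1 = np.zeros((1, 32, 160, 160), dtype=np.float32)  # placeholder for the real output1

pred = output0[0].T                      # (8400, 116): one row per candidate detection
boxes_cxcywh = pred[:, :4]               # cx, cy, w, h (assumed to be in input-image pixels)
class_scores = pred[:, 4:84]             # 80 class scores
mask_coeffs = pred[:, 84:]               # 32 mask coefficients

confidence = class_scores.max(axis=1)
class_id = class_scores.argmax(axis=1)
keep = confidence > 0.25                 # confidence filter; NMS still needed on boxes_cxcywh[keep]

protos = output1[0].reshape(32, -1)                          # (32, 160*160) prototype masks
masks = 1.0 / (1.0 + np.exp(-(mask_coeffs[keep] @ protos)))  # sigmoid of coeffs x protos
masks = masks.reshape(-1, 160, 160) > 0.5                    # binarize, then upsample/crop to boxes
```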
### Additional
_No response_ | open | 2025-02-15T06:04:35Z | 2025-02-15T06:20:07Z | https://github.com/ultralytics/ultralytics/issues/19255 | [
"question",
"segment"
] | imvamsidhark | 2 |
OpenBB-finance/OpenBB | machine-learning | 6,784 | [Bug] tiingo and biztoc providers for world news in SDK and openbb-cli not working | **Describe the bug**
Whether using the SDK or the openbb-cli, the same error is thrown when trying to access world news.
**To Reproduce**
Run the following command in openbb cli
world --provider biztoc
world --provider tiingo
if we are using the SDK, can try with this, as it throws the same errors too:
from openbb import obb
obb.news.world(provider='biztoc')
obb.news.world(provider='tiingo', source='yahoo')
**Screenshots**



**Desktop (please complete the following information):**
- OS : Windows 11
- Python version: 3.12.2
| closed | 2024-10-16T00:46:00Z | 2024-10-23T08:28:53Z | https://github.com/OpenBB-finance/OpenBB/issues/6784 | [
"bug",
"platform"
] | tatsean-dhumall | 2 |
awesto/django-shop | django | 676 | django-shop/email_auth/migrations/0005_auto_20171101_1035.py is missing | When I execute python manage.py makemigrations --dry-run, I got the message:
Migrations for 'email_auth':
/home/frank/webshop/django-shop/email_auth/migrations/0005_auto_20171101_1035.py:
- Alter field username on user
In the directory there is not such a file.
Where can I find the file 0005_auto_20171101_1035.py
| closed | 2017-11-01T10:03:45Z | 2017-11-01T11:59:04Z | https://github.com/awesto/django-shop/issues/676 | [] | gntjou | 2 |
matplotlib/mplfinance | matplotlib | 409 | make_addplot with a legend | ```
import yfinance as yf
df = yf.Ticker('MSFT').history(period='1y')
import mplfinance as mpf
import pandas_ta as ta
apdict = mpf.make_addplot(ta.sma(df['Close'], length=10), linestyle='dotted')
mpf.plot(df, block=False, volume=True, addplot=apdict, savefig='/tmp/MSFT.pdf')
```
I can plot some data using `make_addplot()`, but I don't see an option to add a legend.
https://github.com/matplotlib/mplfinance/blob/master/src/mplfinance/plotting.py#L1024
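The closest workaround I've found, continuing from the snippet above, is to take the figure back and use matplotlib directly; a sketch (it assumes `returnfig` is available in this mplfinance version):

```python
fig, axes = mpf.plot(df, volume=True, addplot=apdict, returnfig=True)
axes[0].legend(["SMA(10)"])  # label added by hand on the main panel axes
mpf.show()
```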
Is there a way to do so? Thanks. | closed | 2021-06-16T03:48:42Z | 2021-07-14T19:12:39Z | https://github.com/matplotlib/mplfinance/issues/409 | [
"question"
] | prRZ5F4LXZ | 1 |
aiogram/aiogram | asyncio | 665 | Can't download any file to custom folder | ## Context
Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.
* Operating System: Windows 10 - 1809 (17763.2061)
* Python Version: 3.9.6
* aiogram version: aiogram~=2.14.3
configparser~=5.0.2
environs~=9.3.2
aioredis~=1.3.1
* aiohttp version: aiohttp 3.7.4.post0
* uvloop version (if installed):
## Expected Behavior
Please describe the behavior you are expecting
Not able to download a photo with the `.download()` method. When you build a path yourself, like
`path = 'some_path'`
`another_path = 'another_path'`
`file_name = 'file_name.jpg'` (or another extension like `.png`, `.jpeg`, etc.)
and put it into
`await message.photo[-1].download(destination=f'{path}{another_path}{file_name}', make_dirs=True)`
it fails with an error like
```
>>>dest = destination if isinstance(destination, io.IOBase) else open(destination, 'wb')
FileNotFoundError: [Errno 2] No such file or directory: <<<
```
Aiogram won't create the directory that I needed.
### Steps to Reproduce
If it is given a longer path like path/path,
which should create both folders named path and put the image sent through Telegram in there:
```python
@dp.message_handler(content_types=["photo"])
async def download_photo(message: types.Message):
path = 'path/path'
image = 'image.jpg'
await message.photo[-1].download(destination=f'{path}/{image}', make_dirs=True)
```
Or if it tries to make a path without the `./` (current directory) prefix:
```python
@dp.message_handler(content_types=["photo"])
async def download_photo(message: types.Message):
path = 'path'
image = 'image.jpg'
await message.photo[-1].download(destination=f'{path}/{image}', make_dirs=True)
```
Or current dir
```python
@dp.message_handler(content_types=["photo"])
async def download_photo(message: types.Message):
path = 'path'
image = 'image.jpg'
await message.photo[-1].download(destination=f'./{path}/{image}', make_dirs=True)
```
Even if it is given something like the current directory,
or a literal path without variables, but with the argument `make_dirs=False`:
```python
@dp.message_handler(content_types=["photo"])
async def download_photo(message: types.Message):
await message.photo[-1].download(destination=f'./path/image.jpg', make_dirs=False) <<<
dest = destination if isinstance(destination, io.IOBase) else open(destination, 'wb')
FileNotFoundError: [Errno 2] No such file or directory: './path/image.jpg'
future: <Task finished name='Task-14' coro=<Dispatcher._process_polling_updates() done, defined at D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\dispatcher.py:409> exception=FileNotFoundError(2, 'No such file or directory')>
Traceback (most recent call last):
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\dispatcher.py", line 417, in _process_polling_updates
for responses in itertools.chain.from_iterable(await self.process_updates(updates, fast)):
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\dispatcher.py", line 238, in process_updates
return await asyncio.gather(*tasks)
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\handler.py", line 116, in notify
response = await handler_obj.handler(*args, **partial_data)
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\dispatcher.py", line 259, in process_update
return await self.message_handlers.notify(update.message)
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\handler.py", line 116, in notify
response = await handler_obj.handler(*args, **partial_data)
File "D:\projects\own_temp\__main__.py", line 47, in download_photo
await message.photo[-1].download(destination=f'./path/image.jpg', make_dirs=False)
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\types\mixins.py", line 34, in download
return await self.bot.download_file(file_path=file.file_path, destination=destination, timeout=timeout,
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\bot\base.py", line 235, in download_file
dest = destination if isinstance(destination, io.IOBase) else open(destination, 'wb')
FileNotFoundError: [Errno 2] No such file or directory: './path/image.jpg'
```
Please provide detailed steps for reproducing the issue.
### Failure Logs
```python
future: <Task finished name='Task-11' coro=<Dispatcher._process_polling_updates() done, defined at D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\dispatcher.py:409> exception=FileNotFoundError(2, 'No such file or directory')>
Traceback (most recent call last):
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\dispatcher.py", line 417, in _process_polling_updates
for responses in itertools.chain.from_iterable(await self.process_updates(updates, fast)):
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\dispatcher.py", line 238, in process_updates
return await asyncio.gather(*tasks)
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\handler.py", line 116, in notify
response = await handler_obj.handler(*args, **partial_data)
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\dispatcher.py", line 259, in process_update
return await self.message_handlers.notify(update.message)
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\dispatcher\handler.py", line 116, in notify
response = await handler_obj.handler(*args, **partial_data)
File "D:\projects\own_temp\__main__.py", line 47, in download_photo
await message.photo[-1].download(destination='./news/file12', make_dirs=True)
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\types\mixins.py", line 34, in download
return await self.bot.download_file(file_path=file.file_path, destination=destination, timeout=timeout,
File "D:\projects\own_temp\.venv\lib\site-packages\aiogram\bot\base.py", line 235, in download_file
dest = destination if isinstance(destination, io.IOBase) else open(destination, 'wb')
FileNotFoundError: [Errno 2] No such file or directory: './news/file12'
```
Please include any relevant log snippets or files here.
BUT!!! If you have already created the path yourself and download the file into that existing path, it downloads with no problem. And if you have created the path and put your file into it, Aiogram creates its own folder named photos and puts the downloaded photo there!!! What is this?
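So a minimal workaround sketch (continuing the handler from above, creating the folder myself first with the standard library only) looks like this:

```python
import os

@dp.message_handler(content_types=["photo"])
async def download_photo(message: types.Message):
    path = "path"
    image = "image.jpg"
    os.makedirs(path, exist_ok=True)  # create the directory ourselves first
    await message.photo[-1].download(destination=os.path.join(path, image))
```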
If you want to pm me - my own telegram is @Rizle | closed | 2021-08-17T17:48:43Z | 2021-09-05T21:05:53Z | https://github.com/aiogram/aiogram/issues/665 | [
"bug",
"confirmed"
] | Jsnufmars | 6 |
MycroftAI/mycroft-core | nlp | 2,495 | Is "mycroft.skill.handler.complete" the wrong tense | I am assumming that this is a notification, so it should be '''completed''' to show that, right? (my gut says it's right, but I am not actually sure) | closed | 2020-03-08T00:34:58Z | 2024-09-08T08:36:40Z | https://github.com/MycroftAI/mycroft-core/issues/2495 | [] | FruityWelsh | 1 |
mljar/mercury | jupyter | 242 | add chat widget | closed | 2023-04-07T10:13:19Z | 2023-04-07T10:46:33Z | https://github.com/mljar/mercury/issues/242 | [
"enhancement"
] | pplonski | 1 |
|
ets-labs/python-dependency-injector | asyncio | 57 | Add docs for Catalogs | closed | 2015-05-08T14:42:41Z | 2015-08-05T13:44:19Z | https://github.com/ets-labs/python-dependency-injector/issues/57 | [
"docs"
] | rmk135 | 0 |
|
iperov/DeepFaceLab | deep-learning | 5,574 | Noureen afrose piya | THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS
POST ONLY ISSUES RELATED TO BUGS OR CODE
## Expected behavior
*Describe, in some detail, what you are trying to do and what the output is that you expect from the program.*
## Actual behavior
*Describe, in some detail, what the program does instead. Be sure to include any error message or screenshots.*
## Steps to reproduce
*Describe, in some detail, the steps you tried that resulted in the behavior described above.*
## Other relevant information
- **Command lined used (if not specified in steps to reproduce)**: main.py ...
- **Operating system and version:** Windows, macOS, Linux
- **Python version:** 3.5, 3.6.4, ... (if you are not using prebuilt windows binary) | open | 2022-10-30T04:20:17Z | 2023-06-08T23:05:30Z | https://github.com/iperov/DeepFaceLab/issues/5574 | [] | Mdroni658 | 1 |
christabor/flask_jsondash | plotly | 219 | Dash | closed | 2020-04-27T00:35:29Z | 2020-04-27T00:37:20Z | https://github.com/christabor/flask_jsondash/issues/219 | [] | Consola | 0 |
|
jupyter/nbviewer | jupyter | 916 | 404 : Not Found |
**Describe the bug**
When trying to view this notebook https://github.com/fangohr/coronavirus-2020/blob/master/index.ipynb
through nbviewer (using this url: https://nbviewer.jupyter.org/github/fangohr/coronavirus-2020/blob/master/index.ipynb), I see a
```
404 : Not Found
Remote HTTP 404: index.ipynb not found among 10 files
```
reported.
**Expected behavior**
To see rendering of the notebook
**Desktop (please complete the following information):**
- OSX, Chrome, Safari
**Additional context**
- Other 404 reports: #912 (the example in there is also broken at the moment)
- I used that feature and the nbviewer link two days ago (1 April) successfully, and also yesterday I think.

| closed | 2020-04-03T08:26:47Z | 2022-12-13T05:59:20Z | https://github.com/jupyter/nbviewer/issues/916 | [
"status:Duplicate",
"tag:GitHub",
"status:Needs Reproduction",
"status:Need Info",
"tag:Public Service"
] | fangohr | 5 |
ultralytics/yolov5 | pytorch | 12,518 | will yolo detects the objects based on colors also along with feature | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Does YOLO detect objects based on color as well as other features? I have red, green, and yellow lights on a pole; how does YOLO know the color of a light?
### Additional
_No response_ | closed | 2023-12-17T01:33:01Z | 2024-02-06T00:20:02Z | https://github.com/ultralytics/yolov5/issues/12518 | [
"question",
"Stale"
] | b4u365 | 4 |
dask/dask | numpy | 11,122 | API docs missing for `read_csv`, `read_fwf` and `read_table` | I believe there are still plenty of doc strings missing since the dask-expr migration
For example https://docs.dask.org/en/latest/generated/dask_expr.read_csv.html

| closed | 2024-05-15T09:47:14Z | 2025-01-13T15:05:58Z | https://github.com/dask/dask/issues/11122 | [
"good first issue",
"documentation"
] | fjetter | 4 |
albumentations-team/albumentations | machine-learning | 1,895 | [Tech debt] Remove dependency on scikit-image | Scikit-image is quite heavy and all it's functionality that is used in Albumentations could be easily reimplemented.
| closed | 2024-08-19T16:08:10Z | 2024-10-23T03:00:47Z | https://github.com/albumentations-team/albumentations/issues/1895 | [
"enhancement"
] | ternaus | 1 |
viewflow/viewflow | django | 74 | Process "migration"? | One thing I have hated in other BMPN style workflows is that it is very hard to change a process once some have been started. django-viewflow looks simple enough to be able to have some sort of migration process for already started processes. Do you have any tips or future apis for modifying already started processes and tasks?
| closed | 2014-08-19T13:27:51Z | 2016-03-21T04:07:39Z | https://github.com/viewflow/viewflow/issues/74 | [
"request/enhancement",
"PRO"
] | sherzberg | 1 |
dgtlmoon/changedetection.io | web-scraping | 2,124 | [feature] user interface Multilanguage/translation support | **Version and OS**
Any new version
https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xiii-i18n-and-l10n
It would be nice to have multi-language support; `flask-babel` looks like a great solution.
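A minimal sketch of what the wiring might look like (Flask-Babel API from memory, so treat the exact hook name as an assumption; older releases register the locale selector with a `@babel.localeselector` decorator instead):

```python
from flask import Flask, request
from flask_babel import Babel, gettext

app = Flask(__name__)

def get_locale():
    # Pick the best match from the browser's Accept-Language header.
    return request.accept_languages.best_match(["en", "de", "zh"])

babel = Babel(app, locale_selector=get_locale)

@app.route("/")
def index():
    return gettext("Watch list")  # user-facing strings wrapped for translation
```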
I'm hoping it would let more people enjoy changedetection.io and perhaps find more great contributors | open | 2024-01-22T10:11:58Z | 2024-01-28T13:48:05Z | https://github.com/dgtlmoon/changedetection.io/issues/2124 | [
"enhancement",
"user-interface"
] | dgtlmoon | 2 |
Kav-K/GPTDiscord | asyncio | 144 | Show sources for /search | Display the relevant links when someone does a /search | closed | 2023-02-05T21:40:37Z | 2023-02-06T07:40:00Z | https://github.com/Kav-K/GPTDiscord/issues/144 | [
"enhancement",
"help wanted",
"good first issue"
] | Kav-K | 1 |
MycroftAI/mycroft-core | nlp | 2,527 | mycroft "devices" web UI doesn't show core version |
Version/setup same as MycroftAI/mycroft-core#2523
## Try to provide steps that we can use to replicate the Issue
Hit up https://account.mycroft.ai/devices

## Provide log files or other output to help us see the error
N/A / TBD (I can help investigate, let me know how); per the referenced ticket, the "self support" method didn't work | closed | 2020-04-01T02:09:24Z | 2020-04-22T12:16:20Z | https://github.com/MycroftAI/mycroft-core/issues/2527 | [] | fermulator | 6 |
PeterL1n/RobustVideoMatting | computer-vision | 65 | Transparent background | Could you tell me how to set a transparent background? | open | 2021-10-07T07:22:38Z | 2022-11-21T16:39:32Z | https://github.com/PeterL1n/RobustVideoMatting/issues/65 | [] | lzghades | 1 |
Skyvern-AI/skyvern | automation | 1,749 | Authentication Error (401) - Incorrect API Key in Skyvern Setup | I am encountering an issue while setting up Skyvern on my local machine. The installation process went smoothly, and all required dependencies (Docker, Python 3.11, Poetry, PostgreSQL, Node.js) are installed and configured correctly. However, when attempting to execute a task in Skyvern, I receive the following error:
Error Message (Docker Logs)
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: <sk-proj**********>. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
Additionally, Skyvern logs show:
litellm.exceptions.AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: <sk-proj**********>. You can find your API key at https://platform.openai.com/account/api-keys.'}}
Steps Taken to Troubleshoot:
1. Verified API Key:
• I double-checked the API key in docker-compose.yml and replaced it with a fresh one from [OpenAI’s API dashboard](https://platform.openai.com/account/api-keys).
• The correct key format (sk-...) is being used.
2. Restarted Containers After Updating the Key:
docker compose down
docker compose up -d
3. Checked Container Status:
• Running docker ps shows all containers are running and healthy.
4. Checked API Key in Environment Variables:
• Ran printenv | grep OPENAI_API_KEY inside the container to confirm the key is set properly.
Issue:
Despite verifying and updating the API key, the error persists. The system still reports an “Incorrect API Key” error when running Skyvern tasks.
Questions:
1. Could there be a caching issue in Skyvern where it is still using an old API key?
2. Is there an additional step required to fully refresh the API key within Skyvern?
3. Are there any known issues related to API authentication failures with the latest Skyvern setup?
Any insights or troubleshooting steps would be greatly appreciated.
System Details:
• OS: macOS
• Python Version: 3.11.11
• Docker Version: Latest
• Skyvern Version: Latest (pulled from repository)
• OpenAI API Status: Verified and working with other applications | closed | 2025-02-08T12:46:09Z | 2025-02-11T16:18:10Z | https://github.com/Skyvern-AI/skyvern/issues/1749 | [] | adminwems | 6 |
sktime/sktime | scikit-learn | 7,496 | [BUG] Loaded model from a saved sktime model failing to forecast on new data | I recently saved a deep neural network model (LSTFDLinear) after fitting it on a large dataset.After saving it i loaded it and wanted to update it and for it to make new forecasting figures based on the latest data but it keeps on giving results on the last fit procedure and does not change no matter what l do ...any help on how i can fix that .....Thank you | open | 2024-12-08T16:12:30Z | 2024-12-10T06:53:03Z | https://github.com/sktime/sktime/issues/7496 | [
"bug"
] | jsteve677 | 2 |
Kav-K/GPTDiscord | asyncio | 400 | Prompt leakage | 
Some prompting helpers are leaking into conversation (Image Info-Caption) | closed | 2023-11-12T07:32:50Z | 2023-11-12T11:52:51Z | https://github.com/Kav-K/GPTDiscord/issues/400 | [] | Kav-K | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,404 | Channel count problem during testing | I trained with single-channel images, setting both input_nc and output_nc to 1, but the images produced at test time have 3 channels. How can I get test images with 1 channel? | closed | 2022-04-03T08:32:54Z | 2022-04-03T08:34:12Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1404 | [] | duke023456 | 0 |
NVIDIA/pix2pixHD | computer-vision | 282 | why the local generator can't enlarge image size? | I want to enlarge images size by pix2pixHD model,for example: when I input image `256*256`,the local generator can enlarge the image size to `512*512`,when I use 2 local generator,it can enlarge the image size to `768*768`.But it did not achieve this effect,it will output the image size as same as the input image size;when I chekout the `LocalEnhancer` in `network.py` , I found the number of down-sampling modules is the same as up-sampling,Why is that?Pix2pixHD how to achieve super resolution? | open | 2021-11-20T03:38:14Z | 2021-11-20T03:38:52Z | https://github.com/NVIDIA/pix2pixHD/issues/282 | [] | DejaVuyan | 0 |
JoeanAmier/TikTokDownloader | api | 34 | Notes on the new-version detection feature | # New version detection
Starting from version `3.3`, the program supports checking for new versions. When this feature is enabled, the program sends a request to `https://github.com/JoeanAmier/TikTokDownloader/releases/latest` on startup to get the latest `Releases` version number and reports whether a new version exists.
If a new version exists, the program prints the new version's `URL`; it does not download the update automatically. | closed | 2023-07-20T03:54:39Z | 2023-07-31T13:05:31Z | https://github.com/JoeanAmier/TikTokDownloader/issues/34 | [
"文档补充(docs)"
] | JoeanAmier | 0 |
ludwig-ai/ludwig | computer-vision | 3,247 | `torchvision` installing 0.15.0 in the CI instead of 0.14.1 | Similar issue as https://github.com/ludwig-ai/ludwig/issues/3245 | closed | 2023-03-15T00:25:28Z | 2023-03-15T16:07:13Z | https://github.com/ludwig-ai/ludwig/issues/3247 | [] | geoffreyangus | 0 |
jowilf/starlette-admin | sqlalchemy | 521 | Enhancement: custom relation field | **Is your feature request related to a problem? Please describe.**
I want to make a custom relation field so I can set a custom render_function_key and a custom display template.
**Describe the solution you'd like**
_A clear and concise description of what you want to happen._
Something like this in list:

and something like this in detail:

**Describe alternatives you've considered**
If I try to use StringField as the base class:
```
@dataclass
class NotesField(StringField):
rows: int = 6
render_function_key: str = "notes"
class_: str = "field-textarea form-control"
form_template: str = "forms/textarea.html"
display_template: str = "displays/note.html"
exclude_from_create: Optional[bool] = True
exclude_from_edit: Optional[bool] = True
exclude_from_list: Optional[bool] = False
class ClientView(MyModelView):
fields = [
NotesField("notes"),
Client.notes,
]
class DesktView(MyModelView):
fields = [
Desk.clients,
]
```
It leads to this API response:
GET: http://127.0.0.1:8000/admin/api/desk?skip=0&limit=20&order_by=id%20asc:
Response:
```
{
"items": [
{
"id": 1,
"client": [
{
"id": 15,
"notes": "[Note(content='test', client_id=15, id=4,), Note(content='teaa', client_id=15, id=6)]",
"_repr": "15",
"_detail_url": "http://127.0.0.1:8000/admin/client/detail/15",
"_edit_url": "http://127.0.0.1:8000/admin/client/edit/15"
}
],
```
As you can see, it resolves the "notes" field (a relation of a relation) and shows it as text (the serializing function thinks it is a string field).
This is a problem because it overloads the backend by resolving a lot of related records.
It might not be a problem when there are only a few relations, but with a long chain of relations it starts resolving literally the whole database, which endlessly queries (and blocks) the database and can even cycle through records until the worker crashes and restarts by itself.
I found out that I can change the "**serialize**" function to exclude this field:
```
@dataclass
class CustomRelationField(StringField):
rows: int = 6
async def serialize(
self,
obj: Any,
request: Request,
action: RequestAction,
include_relationships: bool = True,
include_relationships2: bool = True,
include_relationships3: bool = True,
include_select2: bool = False,
) -> Dict[str, Any]:
...
elif not isinstance(field, RelationField):
+ if isinstance(field, CustomRelationField):
+ continue
...
```
but this method does not allow me to use the relation information in the detail view.
And this method leads to overloading the database in the detail view:
```
async def serialize(
self,
obj: Any,
request: Request,
action: RequestAction,
include_relationships: bool = True,
include_relationships2: bool = True,
include_relationships3: bool = True,
include_select2: bool = False,
) -> Dict[str, Any]:
...
elif not isinstance(field, RelationField):
+ if isinstance(field, CustomRelationField) and action == RequestAction.LIST:
+ continue
...
```
How can I change the serialize function so that it stops resolving at some point? Or, even better, is there a more correct way to customize how a relation field is displayed?
| closed | 2024-03-06T01:07:51Z | 2024-08-05T12:24:27Z | https://github.com/jowilf/starlette-admin/issues/521 | [
"enhancement"
] | Ilya-Green | 6 |
2noise/ChatTTS | python | 105 | AssertionError when loading the weight files | Error details:
File "/mnt/data/RAG_LX/ChatTTS/ChatTTS/core.py", line 105, in _load
assert os.path.exists(spk_stat_path), f'Missing spk_stat.pt: {spk_stat_path}'
AssertionError: Missing spk_stat.pt: /mnt/data/RAG_LX/models/chatTTS/asset/spk_stat.pt
The weight files from the ModelScope (魔塔) community do not include the asset/spk_stat.pt file, and I don't understand why core.py contains this:
if gpt_config_path:
cfg = OmegaConf.load(gpt_config_path)
gpt = GPT_warpper(**cfg).to(device).eval()
assert gpt_ckpt_path, 'gpt_ckpt_path should not be None'
gpt.load_state_dict(torch.load(gpt_ckpt_path, map_location='cpu'))
if compile:
gpt.gpt.forward = torch.compile(gpt.gpt.forward, backend='inductor', dynamic=True)
self.pretrain_models['gpt'] = gpt
spk_stat_path = os.path.join(os.path.dirname(gpt_ckpt_path), 'spk_stat.pt')
assert os.path.exists(spk_stat_path), f'Missing spk_stat.pt: {spk_stat_path}'
self.pretrain_models['spk_stat'] = torch.load(spk_stat_path).to(device)
self.logger.log(logging.INFO, 'gpt loaded.')
Is it that the weight file name or the code function is written incorrectly?
| closed | 2024-05-30T14:04:01Z | 2024-07-16T04:01:49Z | https://github.com/2noise/ChatTTS/issues/105 | [
"stale"
] | Kyrie-LiuX | 4 |
deeppavlov/DeepPavlov | nlp | 1,409 | Add support for Actions in Go-Bot | Moved to internal Trello | closed | 2021-03-15T14:17:21Z | 2021-11-30T10:15:14Z | https://github.com/deeppavlov/DeepPavlov/issues/1409 | [] | danielkornev | 2 |
pyg-team/pytorch_geometric | pytorch | 10,110 | Can we get a new release? | ### 😵 Describe the installation problem
Currently the latest 2.6.1 release is not compatible with numpy 2, as some functions make use of `np.math`, resulting in attribute errors.
Lack of a release is preventing other packages such as https://github.com/FAIR-Chem/fairchem/pull/1003 from supporting numpy 2.
### Environment
| open | 2025-03-12T00:15:51Z | 2025-03-17T22:14:57Z | https://github.com/pyg-team/pytorch_geometric/issues/10110 | [
"installation"
] | CompRhys | 1 |
wandb/wandb | tensorflow | 9,199 | [Q]: Setting the maximum value of y axis in a line plot programmatically (Python) | ### Ask your question
Hello,
I was wondering if there is a way to set the maximum value of the y axis in a line plot programmatically (asking for Python). I can set it manually on the wandb website, however it would be great if I could also set it programmatically. For example, I set this value to 1 in the following line plot:
<img width="606" alt="Image" src="https://github.com/user-attachments/assets/e3a70ba0-a35e-48f3-8779-9c40d98ac406" /> | open | 2025-01-07T14:45:10Z | 2025-01-14T18:31:09Z | https://github.com/wandb/wandb/issues/9199 | [
"ty:question",
"a:app"
] | ardarslan | 5 |
gradio-app/gradio | data-science | 10,199 | Auto-Reloading doesn't run gr.render(input=state_object) | ### Describe the bug
Auto-Reloading doesn't run the `@gr.render(...)` decorated function if the input is a gr.State object.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
1. Run this official doc's example on dynamic event listeners
https://www.gradio.app/guides/dynamic-apps-with-render-decorator#dynamic-event-listeners
```python
import gradio as gr
with gr.Blocks() as demo:
text_count = gr.State(1)
add_btn = gr.Button("Add Box")
add_btn.click(lambda x: x + 1, text_count, text_count)
@gr.render(inputs=text_count)
def render_count(count):
boxes = []
for i in range(count):
box = gr.Textbox(key=i, label=f"Box {i}")
boxes.append(box)
def merge(*args):
return " ".join(args)
merge_btn.click(merge, boxes, output)
merge_btn = gr.Button("Merge")
output = gr.Textbox(label="Merged Output")
demo.launch()
```
it should render correctly like this:

2. Now change the code slightly, e.g. change the button text to `Add a Box` and wait for auto-reloading to re-render

### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
gradio environment
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.3.0
gradio_client version: 1.4.2
```
### Severity
I can work around it by refreshing the page, however, if it works as expected, it will be more ergonomic and make the development experience more enjoyable and less disruptive. | open | 2024-12-13T13:04:20Z | 2024-12-18T19:24:40Z | https://github.com/gradio-app/gradio/issues/10199 | [
"bug"
] | cliffxuan | 2 |
liangliangyy/DjangoBlog | django | 155 | requests的response需要close把 | https://github.com/liangliangyy/DjangoBlog/blob/master/oauth/oauthmanager.py#L67 | closed | 2018-08-26T07:40:48Z | 2018-08-26T08:34:40Z | https://github.com/liangliangyy/DjangoBlog/issues/155 | [] | ignite-404 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,197 | AttributeError: Can't pickle local object 'get_transform.<locals>.<lambda>' | When I change the preprocess configuration to "none" so that I can keep the size of my original picture, the following problem occurs. Maybe the lambda in transforms.Lambda should be changed.
Traceback (most recent call last):
File "D:/WORKSPACE/py_temp/pytorch-CycleGAN-and-pix2pix-master/train.py", line 47, in <module>
for i, data in enumerate(dataset): # inner loop within one epoch
File "D:\WORKSPACE\py_temp\pytorch-CycleGAN-and-pix2pix-master\data\__init__.py", line 90, in __iter__
for i, data in enumerate(self.dataloader):
File "D:\ProgramData\Anaconda3\envs\py36_7\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "D:\ProgramData\Anaconda3\envs\py36_7\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
w.start()
File "D:\ProgramData\Anaconda3\envs\py36_7\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\ProgramData\Anaconda3\envs\py36_7\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\ProgramData\Anaconda3\envs\py36_7\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\ProgramData\Anaconda3\envs\py36_7\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "D:\ProgramData\Anaconda3\envs\py36_7\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_transform.<locals>.<lambda>'
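A rough workaround sketch: on Windows, either run with `--num_threads 0` (the data-loading worker option in this repo, if I remember correctly) so the DataLoader does not need to pickle the transform, or replace the closure-based lambda in data/base_dataset.py with a picklable callable such as functools.partial over a module-level function. The helper below only illustrates the idea; it is not the repo's exact function:

```python
import functools
import torchvision.transforms as transforms

def make_power_2(img, base=4):
    # Round both sides to a multiple of `base` (same idea as the repo's helper).
    w, h = img.size
    return img.resize((max(base, (w // base) * base), max(base, (h // base) * base)))

# Picklable: a partial over a module-level function instead of a closure lambda.
resize_to_power_2 = transforms.Lambda(functools.partial(make_power_2, base=4))
```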
| open | 2020-11-25T08:34:31Z | 2022-01-18T09:09:40Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1197 | [] | XinJiade | 4 |
kymatio/kymatio | numpy | 180 | Warning in `mnist.py` | Specifically, https://github.com/kymatio/kymatio/blob/289bc26551e92456ef7a48fbe83d48e157f7632c/examples/2d/mnist.py#L50 generates a warning saying that `size_average` will be deprecated and says to use `reduction='sum'` instead. Is this ok for us to do? | closed | 2018-11-21T15:44:30Z | 2018-11-23T17:58:26Z | https://github.com/kymatio/kymatio/issues/180 | [] | janden | 0 |
Johnserf-Seed/TikTokDownload | api | 95 | 批量下载获取用户sec_uid的正则有点问题,会出现匹配不到的情况 | eg url :https://www.douyin.com/user/MS4wLjABAAAATzfMeIy53j-Fbsn-n7KEgomNcGg1Tfse1j1t-s0PBeAcsqxXmrNeVu_KNPw_c87K
原始正则:user\/([\d\D]*)([?]) , 无法匹配到sec_uid
本地使用:([^/]+)$ 的规则来进行匹配。来达成目标
作者有时间可以看下 是我本地的问题 还是就是代码有问题 | closed | 2022-02-16T16:42:36Z | 2022-03-02T02:56:32Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/95 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | Layofhan | 1 |
DistrictDataLabs/yellowbrick | matplotlib | 606 | Support Vector Machines and ROCAUC | **Describe the bug**
ROCAUC fails with support vector machines. This doesn't happen to me with other algorithms.
**To Reproduce**
```python
model = SVC()
viz = ROCAUC(model, n_jobs=-1)
viz.fit(X_train, y_train)
viz.score(X_test, y_test)
```
**Dataset**
I'm using the credit dataset on Yellowbrick
**Expected behavior**
To obtain the ROC plot
```
/media/Data/Gatech/machine-learning/supervised-learning/helpers.py in plt_roc(model, X_train, y_train, X_test, y_test)
30 viz = ROCAUC(model, n_jobs=-1)
31 viz.fit(X_train, y_train)
---> 32 viz.score(X_test, y_test)
33 return viz
34
~/venvs/global/lib/python3.6/site-packages/yellowbrick/classifier/rocauc.py in score(self, X, y, **kwargs)
176 # Compute ROC curve and ROC area for each class
177 for i, c in enumerate(classes):
--> 178 self.fpr[i], self.tpr[i], _ = roc_curve(y, y_pred[:,i], pos_label=c)
179 self.roc_auc[i] = auc(self.fpr[i], self.tpr[i])
```
`IndexError: too many indices for array`
**Desktop (please complete the following information):**
- OS: Ubuntu 18.04
- Python Version: 3.6.5
- Yellowbrick Version: 0.8
| closed | 2018-09-13T02:05:43Z | 2020-01-15T15:34:53Z | https://github.com/DistrictDataLabs/yellowbrick/issues/606 | [
"type: bug"
] | FranGoitia | 7 |
ultralytics/yolov5 | machine-learning | 13,249 | What prevents me from using the AMP function? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Thank you very much for your work. I would like to be able to use the AMP function, but when training on my device it says `AMP checks failed ❌, disabling Automatic Mixed Precision.` My device situation is as follows:
```bash
pytorch=2.0
CUDA=11.8
4070Ti
```
I would like to know which factors prevent AMP from working: the CUDA version, the graphics hardware, or something else? I really want to use the AMP feature!
### Additional
_No response_ | open | 2024-08-07T08:52:08Z | 2024-08-07T12:57:39Z | https://github.com/ultralytics/yolov5/issues/13249 | [
"question"
] | thgpddl | 1 |