repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
Evil0ctal/Douyin_TikTok_Download_API | api | 398 | [BUG] TikTok down? | ***On which platform did the error occur?***
e.g.: Douyin/TikTok
***On which endpoint did the error occur?***
e.g.: API-V1/API-V2/Web APP
***What input value was submitted?***
e.g.: a short-video link
***Did you try again?***
e.g.: Yes, and the error still persisted X amount of time after it first occurred.
***Have you read this project's README or the API documentation?***
e.g.: Yes, and I am quite sure the problem is caused by the program.
| closed | 2024-05-18T22:57:06Z | 2024-06-14T08:23:58Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/398 | [
"BUG"
] | heismauri | 3 |
amdegroot/ssd.pytorch | computer-vision | 3 | error:Dimension out of range | When I run test.py, the line `y = net(x)` raises an error:
RuntimeError: dimension out of range - got 1 but the tensor is only 1D
Change:
'--cuda', default=True
Thank you for your help. | closed | 2017-03-31T08:31:28Z | 2017-04-03T21:18:42Z | https://github.com/amdegroot/ssd.pytorch/issues/3 | [] | DragonBornHD | 3 |
vimalloc/flask-jwt-extended | flask | 553 | cannot import name 'DecodeError' from 'jwt' | Traceback (most recent call last):
File "D:\practical\python-fullstack\api\jwtAuth.py", line 2, in <module>
from flask_jwt_extended import JWTManager, create_access_token, jwt_required, get_jwt_identity, get_jwt
File "D:\practical\python-fullstack\api\menv\Lib\site-packages\flask_jwt_extended\__init__.py", line 1, in <module>
from .jwt_manager import JWTManager as JWTManager
File "D:\practical\python-fullstack\api\menv\Lib\site-packages\flask_jwt_extended\jwt_manager.py", line 8, in <module>
from jwt import DecodeError
ImportError: cannot import name 'DecodeError' from 'jwt' (D:\practical\python-fullstack\api\menv\Lib\site-packages\jwt\__init__.py) | open | 2024-07-18T04:20:45Z | 2024-07-18T04:34:54Z | https://github.com/vimalloc/flask-jwt-extended/issues/553 | [] | legend1998 | 2 |
LAION-AI/Open-Assistant | python | 2,797 | Chat is down | Chat is down. Nothing really to report other than that | closed | 2023-04-21T03:36:29Z | 2023-04-21T07:46:28Z | https://github.com/LAION-AI/Open-Assistant/issues/2797 | [] | GhostHunterGal | 1 |
albumentations-team/albumentations | deep-learning | 1,876 | affine with fit_output set to true does not scale bounding boxes correctly | ## Describe the bug
Bounding boxes are not augmented as expected when fit_output=True.
### To Reproduce
perform affine augmentation with fit_output=True and observe the decoupling of bounding boxes and objects
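A minimal reproduction sketch (my own, with made-up boxes and parameter values, so treat the exact numbers as assumptions):
```python
import numpy as np
import albumentations as A

transform = A.Compose(
    [A.Affine(rotate=(45, 45), fit_output=True, p=1.0)],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

image = np.zeros((100, 100, 3), dtype=np.uint8)
image[40:60, 40:60] = 255                      # a white square as the "object"
boxes = [[40, 40, 60, 60]]

out = transform(image=image, bboxes=boxes, labels=[0])
print(out["image"].shape, out["bboxes"])       # with fit_output=True the box no longer covers the square
```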
### Expected behavior
Bounding boxes should be augmented such that the final image is correctly described by them.
### Actual behavior
Bounding boxes are not to be trusted
### Screenshots

| closed | 2024-08-13T21:45:47Z | 2024-08-15T23:16:59Z | https://github.com/albumentations-team/albumentations/issues/1876 | [
"bug"
] | dominicdill | 1 |
piskvorky/gensim | data-science | 3,229 | Word2Vec model callbacks property not accessible | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
It seems that the callbacks property in the Word2Vec model is not callable:
#### Steps/code/corpus to reproduce
```python
from gensim.models.callbacks import CallbackAny2Vec
from pprint import pprint as print
from gensim.models.word2vec import Word2Vec
from gensim.test.utils import datapath
class callback(CallbackAny2Vec):
'''Callback to print loss after each epoch.'''
def __init__(self):
self.epoch = 0
def on_epoch_end(self, model):
loss = model.get_latest_training_loss()
print('Loss after epoch {}: {}'.format(self.epoch, loss))
self.epoch += 1
# Set file names for train and test data
corpus_file = datapath('lee_background.cor')
model = Word2Vec(vector_size=100, callbacks=[callback()])
# build the vocabulary
model.build_vocab(corpus_file=corpus_file)
# train the model
model.train(
corpus_file=corpus_file, epochs=model.epochs,
total_examples=model.corpus_count, total_words=model.corpus_total_words,
callbacks=model.callbacks, compute_loss=True,
)
print(model)
```
>
> AttributeError Traceback (most recent call last)
> <ipython-input-1-4a4736964107> in <module>
> 27 corpus_file=corpus_file, epochs=model.epochs,
> 28 total_examples=model.corpus_count, total_words=model.corpus_total_words,
> ---> 29 callbacks=model.callbacks, compute_loss=True,
> 30 )
> 31
>
> AttributeError: 'Word2Vec' object has no attribute 'callbacks'
>
>
>
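A workaround that appears to avoid the error (my assumption, not an official fix, reusing the definitions from the snippet above): keep the callback list in a plain variable instead of reading it back from the model:
```python
# Keep our own reference to the callbacks instead of reading model.callbacks,
# which is not stored as an attribute on the Word2Vec object here.
my_callbacks = [callback()]
model = Word2Vec(vector_size=100, callbacks=my_callbacks)
model.build_vocab(corpus_file=corpus_file)
model.train(
    corpus_file=corpus_file, epochs=model.epochs,
    total_examples=model.corpus_count, total_words=model.corpus_total_words,
    callbacks=my_callbacks, compute_loss=True,
)
```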
#### Versions
```
>>> import platform; print(platform.platform())
macOS-10.16-x86_64-i386-64bit
>>> import sys; print("Python", sys.version)
Python 3.9.5 (default, May 18 2021, 12:31:01)
[Clang 10.0.0 ]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.20.3
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.7.1
>>> import gensim; print("gensim", gensim.__version__)
gensim 4.1.0
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 0
```
| closed | 2021-09-09T04:02:40Z | 2021-09-14T17:47:45Z | https://github.com/piskvorky/gensim/issues/3229 | [] | ginward | 1 |
holoviz/colorcet | matplotlib | 41 | pyct dependency used inside setup.py | The pyct.build package is imported and used before setup() is called, so a build in a fresh Python environment fails: pyct.build cannot be imported because it has not yet been installed as a build dependency. | closed | 2019-11-23T23:35:57Z | 2019-12-10T14:20:14Z | https://github.com/holoviz/colorcet/issues/41 | [] | jsharpe | 11 |
flasgger/flasgger | api | 616 | OpenAPI client generator docs link? | Hey there! I'm one of the maintainers of [openapi-ts](https://github.com/hey-api/openapi-ts), a package for turning the generated OpenAPI specs into TypeScript clients. Would you be open to including a section in README on generating clients from OpenAPI specs? FastAPI has a [similar section](https://fastapi.tiangolo.com/advanced/generate-clients/) | open | 2024-04-03T11:15:37Z | 2024-06-02T23:21:47Z | https://github.com/flasgger/flasgger/issues/616 | [] | mrlubos | 3 |
svc-develop-team/so-vits-svc | pytorch | 106 | [Help]: Cannot understand the code | ### Please tick the confirmation boxes below.
- [X] I have carefully read the [README.md](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/README_zh_CN.md) and the [Quick solution page in the wiki](https://github.com/svc-develop-team/so-vits-svc/wiki/Quick-solution).
- [X] #111
- [X] I am not using a one-click package or environment package provided by a third-party user.
### System platform and version
Ubuntu 20.04
### GPU model
2080
### Python version
3.7.16
### PyTorch version
1.11
### sovits branch
4.0 (default)
### Dataset source (used to judge dataset quality)
VTuber livestream audio processed with UVR
### Step where the problem occurs or command executed
Training
### Problem description
Hi everyone, I am very interested in this project. I am currently trying to understand how F0Decoder and utils.interpolate_f0 work, but reading the code directly is a bit difficult. Could you provide some references to help me understand this part? Thanks.
### Logs
```python
None
```
### Screenshot the `so-vits-svc` and `logs/44k` folders and paste them here
None
### Additional notes
_No response_ | open | 2023-03-31T01:56:18Z | 2023-04-16T17:19:58Z | https://github.com/svc-develop-team/so-vits-svc/issues/106 | [
"not urgent"
] | vince-c98 | 3 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,147 | FileNotFoundError: [Errno 2] while running multiple instances | I'm running multiple instances with ThreadPoolExecutor and I randomly get this error and the entire program crashes.
I'm on MacOS and using the latest version of `UC` and Selenium. I also gave a try to `@fix-multiple-instance` but it didn't work. I ran it on Windows, Linux etc and I got the same error everywhere.
Full traceback:
```py
Traceback (most recent call last):
File "/Users/bharat/Documents/Email Extractor/prod_queue.py", line 265, in <module>
results = [
File "/Users/bharat/Documents/Email Extractor/prod_queue.py", line 266, in <listcomp>
future.result()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/bharat/Documents/Email Extractor/prod_queue.py", line 217, in process_url_chunk
driver = uc.Chrome(options=options, headless=True, patcher_force_close=True)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/undetected_chromedriver/__init__.py", line 246, in __init__
self.patcher.auto()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/undetected_chromedriver/patcher.py", line 127, in auto
self.unzip_package(self.fetch_package())
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/undetected_chromedriver/patcher.py", line 180, in unzip_package
os.rename(os.path.join(self.zip_path, self.exe_name), self.executable_path)
FileNotFoundError: [Errno 2] No such file or directory: '/Users/bharat/Library/Application Support/undetected_chromedriver/undetected/chromedriver' -> '/Users/bharat/Library/Application Support/undetected_chromedriver/undetected_chromedriver'
```
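One mitigation that is sometimes suggested (an assumption on my side, not a confirmed fix): serialize driver creation so that only one thread runs the patcher's download/rename step at a time, for example:
```python
import threading
import undetected_chromedriver as uc

_patch_lock = threading.Lock()

def make_driver(options):
    # Only one thread at a time goes through uc's patching step, which is where
    # the rename in the traceback above fails.
    with _patch_lock:
        return uc.Chrome(options=options, headless=True)
```
Another option (also an assumption) would be pointing each instance at an already-patched binary via the `driver_executable_path` argument.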
Not sure if it matters, but I'm also using different user-data-dirs for all drivers. I run around 5-6 drivers at the same time. I don't exceed my CPU core limit. I have tried many things suggested in similar issues here, but nothing worked.
Heck, I even went back to 3.2.0 and it didn't work either. I don't know what to do. Do I need to pre-download the uc chromedriver and then set a separate driver path for each uc.Chrome instance? | open | 2023-03-22T10:11:32Z | 2023-03-30T04:06:18Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1147 | [] | bharatbots | 1 |
stanford-oval/storm | nlp | 171 | [BUG] knowledge_storm 0.2.5 doesn't contain TavilySearchRM in rm.py? | **Describe the bug**
Can't run examples locally using the packaged knowledge_storm as TavilySearchRM is not included in the package.
**To Reproduce**
1. Clone repository
2. `pip install -r requirements.txt`
3. python ./examples/run_storm_wiki_gpt.py --retriever you --do-research --do-generate-outline --search-top-k 10 --do-generate-article
```
Traceback (most recent call last):
File "/mnt/c/Users/UserName/Develop/_external_sources/storm/./examples/run_storm_wiki_gpt.py", line 26, in <module>
from knowledge_storm.rm import YouRM, BingSearch, BraveRM, SerperRM, DuckDuckGoSearchRM, TavilySearchRM, SearXNG
ImportError: cannot import name 'TavilySearchRM' from 'knowledge_storm.rm' (/home/kaesmad/Envs/storm/lib/python3.11/site-packages/knowledge_storm/rm.py)
```
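A quick way to check which copy of the package is actually being imported (a sketch on my part; the expectation about the 0.2.5 wheel is an assumption based on the error above), before falling back to installing the cloned repo with `pip install -e .`:
```python
# Confirm whether the pip-installed knowledge_storm or the cloned repo is on sys.path.
import knowledge_storm
import knowledge_storm.rm as rm

print(knowledge_storm.__file__)        # a site-packages path points at the released wheel
print(hasattr(rm, "TavilySearchRM"))   # False for the 0.2.5 wheel, True for the repo code
```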
**Environment:**
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.0
> langsmith: 0.1.121
> langchain_huggingface: 0.1.0
> langchain_qdrant: 0.1.4
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> fastembed: Installed. No version info available.
> httpx: 0.27.0
> huggingface-hub: 0.24.5
> jsonpatch: 1.33
> orjson: 3.10.6
> packaging: 23.2
> pydantic: 2.8.2
> PyYAML: 6.0.1
> qdrant-client: 1.11.2
> requests: 2.32.3
> sentence-transformers: 3.1.0
> tenacity: 8.5.0
> tokenizers: 0.19.1
> transformers: 4.43.3
> typing-extensions: 4.12.2 | closed | 2024-09-18T13:01:27Z | 2024-09-23T15:48:29Z | https://github.com/stanford-oval/storm/issues/171 | [] | danieldekay | 3 |
xorbitsai/xorbits | numpy | 605 | BUG: ConnectionError: Unable to connect to application when deploying on yarn | ### Describe the bug
When deploying on YARN, creation of the cluster fails.
### To Reproduce
To help us to reproduce this bug, please provide information below:
1. Your Python version: 3.9.12
2. The version of Xorbits you use: 0.4.2
3. Versions of crucial packages, such as numpy, scipy and pandas: pandas==1.4.2
4. Full stack of the error.
5. Minimized code to reproduce the error.
Hadoop version: Hadoop 3.2.2
Code
```python
import os
from xorbits._mars.deploy.yarn import new_cluster
import xorbits.pandas as pd
os.environ['JAVA_HOME'] = '/usr/jdk64/jdk1.8.0_191'
os.environ['HADOOP_HOME'] = "/usr/local/service/hadoop"
os.environ['PATH'] = '/usr/local/service/hadoop:/usr/local/service/hadoop/bin:' + os.environ['PATH']
cluster = new_cluster(
environment='hdfs:///python/senv/anaconda3.zip',
supervisor_num=1,
supervisor_cpu=1,
supervisor_mem='4g',
redirect=False,
web_num=1,
app_name="test-xorbits-deploy-on-yarn",
app_queue="eng",
worker_num=4,
worker_cpu=1,
worker_mem='4g',
min_worker_num=2,
timeout=6000,
supervisor_extra_args='--log-level DEBUG',
worker_extra_env={
"MARS_USE_PROCESS_STAT": "1",
'HADOOP_HOME': "/usr/local/service/hadoop"
},
supervisor_extra_env={
"MARS_USE_PROCESS_STAT": "1",
},
worker_cache_mem='3g')
print(cluster.session.endpoint)
print(pd.DataFrame({'a': [1,2,3,4]}).sum())
```
Error message:
```text
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
23/07/19 13:54:04 INFO client.AHSProxy: Connecting to Application History server at /10.***.**.**:10200
23/07/19 13:54:04 INFO skein.Driver: Driver started, listening on 34513
23/07/19 13:54:05 INFO conf.Configuration: resource-types.xml not found
23/07/19 13:54:05 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
23/07/19 13:54:05 INFO skein.Driver: Uploading application resources to hdfs://HDFS***/user/***/.skein/application_16835364***_109379
23/07/19 13:54:05 INFO skein.Driver: Submitting application...
23/07/19 13:54:05 INFO impl.YarnClientImpl: Submitted application application_16835364***_109379
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
23/07/19 13:54:12 INFO client.AHSProxy: Connecting to Application History server at /10.***.**.**:10200
23/07/19 13:54:12 INFO skein.Driver: Driver started, listening on 37649
23/07/19 13:54:13 INFO impl.YarnClientImpl: Killed application application_16835364***_109379
Traceback (most recent call last):
File "/home/***/xorbits/test.py", line 10, in <module>
cluster = new_cluster(
File "/opt/anaconda3/lib/python3.9/site-packages/xorbits/_mars/deploy/yarn/client.py", line 189, in new_cluster
wait_services_ready(
File "/opt/anaconda3/lib/python3.9/site-packages/xorbits/_mars/deploy/utils.py", line 42, in wait_services_ready
readies[idx] = count_fun(selector)
File "/opt/anaconda3/lib/python3.9/site-packages/xorbits/_mars/deploy/yarn/client.py", line 192, in <lambda>
lambda svc: _get_ready_container_count(app_client, svc),
File "/opt/anaconda3/lib/python3.9/site-packages/xorbits/_mars/deploy/yarn/client.py", line 64, in _get_ready_container_count
c.yarn_container_id for c in app_client.get_containers([svc], ["RUNNING"])
File "/opt/anaconda3/lib/python3.9/site-packages/skein/core.py", line 1090, in get_containers
resp = self._call('getContainers', req)
File "/opt/anaconda3/lib/python3.9/site-packages/skein/core.py", line 280, in _call
raise ConnectionError("Unable to connect to %s" % self._server_name)
skein.exceptions.ConnectionError: Unable to connect to application
```
| closed | 2023-07-19T06:10:32Z | 2023-08-11T03:21:58Z | https://github.com/xorbitsai/xorbits/issues/605 | [
"bug"
] | smartguo | 1 |
huggingface/pytorch-image-models | pytorch | 2,280 | [BUG] Update sdpa args to be consistent with non-sdpa torch implementation | Hi, Thanks for the amazing repo!
I would like to flag a minor inconsistency with the sdpa function call in the Attention class of `vision_transformer.py`.
In the non-sdpa torch implementation of the Attention class, the `self.scale` param is used to scale the queries: https://github.com/huggingface/pytorch-image-models/blob/ee5b1e8217134e9f016a0086b793c34abb721216/timm/models/vision_transformer.py#L97
However, when calling the sdpa function, the `self.scale` param is not used (thereby leading sdpa to use its default value). In most cases, this would not cause a bug because the default scale parameter is used. However, using custom scale values can lead to a silent bug.
https://github.com/huggingface/pytorch-image-models/blob/ee5b1e8217134e9f016a0086b793c34abb721216/timm/models/vision_transformer.py#L92
Ideally, the function call will need to be changed to the following:
```
x = F.scaled_dot_product_attention(
q, k, v,
dropout_p=(self.attn_drop.p if self.training else 0.0),
scale=self.scale
)
``` | closed | 2024-09-11T13:59:49Z | 2024-09-11T19:37:02Z | https://github.com/huggingface/pytorch-image-models/issues/2280 | [
"bug"
] | Nik-V9 | 2 |
microsoft/nni | deep-learning | 5,613 | assert len(graph.nodes) == len(graph_check.nodes) | **Describe the bug**:
I compressed my model with L1NormPruner and then tried to speed it up, but an error occurs. How can I solve this problem?
This is the error:

I checked it, but I don't know how to solve this problem:

This is the code for the pruning part of my project:
device = torch.device("cpu")
inputs = torch.randn((1, 3, 768, 768))
model_path = 'weights/yolov3_cqxdq_total_300_300.pt'
pruner_model_path = 'weights/yolov3_cqxdq_pruner_weights.pth'
config_list = [{'sparsity': 0.6, 'op_types': ['Conv2d']}]
model = attempt_load(model_path, map_location=device) # load FP32 model
from nni.compression.pytorch.pruning import L1NormPruner
pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()
for name, mask in masks.items():
print(name, ' sparsity : ', '{:.2}'.format(mask['weight'].sum() / mask['weight'].numel()))
pruner._unwrap_model()
from nni.compression.pytorch.speedup.v2 import ModelSpeedup
m_speedup = ModelSpeedup(model, inputs, masks, device, batch_size=2)
m_speedup.speedup_model()
This is the structure and forward of my model:
[https://github.com/ultralytics/yolov3](https://github.com/ultralytics/yolov3)
Because of the newer PyTorch version, I made the modifications described here: _Originally posted by @EdwardAndersonMcDermott in https://github.com/ultralytics/yolov5/issues/6948#issuecomment-1075528897_

And I deleted the control-flow:

**Environment**:
NNI version: v3.0rc1
Training service (local|remote|pai|aml|etc): local
Python version: 3.8.5
PyTorch version: 1.11.0
Cpu or cuda version: cpu
**Reproduce the problem**
- Code|Example:
- How to reproduce: | closed | 2023-06-19T01:35:59Z | 2023-07-05T01:47:23Z | https://github.com/microsoft/nni/issues/5613 | [] | HuYue233 | 3 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,366 | Z2JH 3.3.0 is broken - pycurl issues with certificates | We've run into the issue described in https://discourse.jupyter.org/t/suddenly-getting-oath-cert-error/24217. | closed | 2024-03-20T17:02:07Z | 2024-03-20T17:23:07Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3366 | [
"bug"
] | consideRatio | 0 |
sammchardy/python-binance | api | 1,528 | [Enhancement] Update links in docs to new documentation link | **Describe the bug**
Update links like https://binance-docs.github.io/apidocs/futures/en/#compressed-aggregate-trades-list-market_data
to the corresponding link in https://developers.binance.com/docs/derivatives | open | 2024-12-25T11:07:13Z | 2024-12-25T11:07:13Z | https://github.com/sammchardy/python-binance/issues/1528 | [
"enhancement"
] | pcriadoperez | 0 |
pytorch/pytorch | numpy | 149,314 | Inductor Incorrectly Handles `torch.view_copy` When Changing dtype | ### 🐛 Describe the bug
~The `torch.view_copy` function produces incorrect results in Eager mode when changing the `dtype` of a tensor. However, when using `torch.compile` (Inductor), the results are correct. This suggests a potential bug in the Eager implementation of `torch.view_copy`.~
torch.compile produces a wrong answer in this corner case:
```python
import torch
def f():
res = torch.arange(1, 5, dtype=torch.float32)
res_copy = torch.view_copy(res, dtype=torch.float64)
return res, res_copy
print('@@@@ INDUCTOR @@@@')
res, res_copy = torch.compile(f)()
print('res', res)
print('res_copy', res_copy)
print()
print('@@@@ Eager @@@@')
res, res_copy = f()
print('res', res)
print('res_copy', res_copy)
```
~The output in Eager mode is incorrect:~
Testcase Result:
```
@@@@ INDUCTOR @@@@
res tensor([1., 2., 3., 4.])
res_copy tensor([1., 2., 3., 4.], dtype=torch.float64)
@@@@ Eager @@@@
res tensor([1., 2., 3., 4.])
res_copy tensor([ 2.0000, 512.0001], dtype=torch.float64)
```
### Versions
PyTorch 2.7.0.dev20250218+cu124
cc @albanD @chauhang @penguinwu @ezyang @gchanan @zou3519 @kadeng @msaroufim | open | 2025-03-17T12:06:54Z | 2025-03-18T06:42:42Z | https://github.com/pytorch/pytorch/issues/149314 | [
"triaged",
"module: viewing and reshaping",
"module: python frontend",
"module: edge cases",
"oncall: pt2",
"topic: fuzzer"
] | WLFJ | 3 |
lux-org/lux | pandas | 178 | pandas display | C:\Anaconda3\lib\site-packages\IPython\lib\pretty.py:700: UserWarning:
Unexpected error in rendering Lux widget and recommendations. Falling back to Pandas display.
| closed | 2020-12-11T15:56:49Z | 2020-12-12T02:06:01Z | https://github.com/lux-org/lux/issues/178 | [] | danilosantiago | 1 |
ycd/manage-fastapi | fastapi | 118 | ModuleNotFoundError: No module named 'app' | Steps performed:
1) fastapi startproject [project_name]
2) cd [project_name]
3) fastapi startapp [app_name]
4) fastapi run (from the project folder)
It is throwing an error ModuleNotFoundError: No module named 'app'
<img width="1710" alt="Screenshot 2023-04-06 at 9 11 58 AM" src="https://user-images.githubusercontent.com/63961278/230267399-2ff54c4d-3ba8-47e6-bea1-fecef5da594c.png">
Can someone please help me in getting the server up and running | closed | 2023-04-06T03:45:17Z | 2023-04-06T03:51:52Z | https://github.com/ycd/manage-fastapi/issues/118 | [] | paras97verma | 1 |
serengil/deepface | deep-learning | 913 | Issues installing deepface | Sorry if this issue is a repeat, but I have gone through a lot of the solutions other people have posted with no success. I have even tried installing the packages individually when I could identify the version, but I still get stuck at one point, even after going back to older versions of deepface. Here is what I have been trying in order to get around all the issues:
`python -m venv myenv
myenv\Scripts\activate
pip install opencv-python==4.5.5.64
From <https://pypi.org/project/opencv-python/4.5.5.64/>
pip install https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0-py3-none-any.whl
pip install https://files.pythonhosted.org/packages/dd/e9/d7a7f52c698f76941dbd04bc950772dc08d2bb91245867f52ed304003ac9/python_fire-0.1.0-py2.py3-none-any.whl
pip install https://files.pythonhosted.org/packages/ff/a1/0d342d7edbbcf04252afe135682e5985fea5f3fb40acd1caf1413241daed/deepface-0.0.80-py3-none-any.whl
`
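Something that may help before retrying (an assumption based on the `setuptools is not available in the build environment` message below): make sure pip, setuptools and wheel are up to date inside the same interpreter or venv that runs the install, for example:
```python
# Equivalent to running "python -m pip install --upgrade pip setuptools wheel"
# in the environment that later installs deepface.
import subprocess
import sys

subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "--upgrade", "pip", "setuptools", "wheel"]
)
```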
Below are the error messages:
`PS C:\Users\sedky\Documents\Python Scripts\opencv_project> python -m pip install deepface
Requirement already satisfied: deepface in c:\program files\python312\lib\site-packages (0.0.79)
Requirement already satisfied: numpy>=1.14.0 in c:\program files\python312\lib\site-packages (from deepface) (1.26.2)
Collecting pandas>=0.23.4 (from deepface)
Using cached pandas-2.1.4-cp312-cp312-win_amd64.whl.metadata (18 kB)
Collecting tqdm>=4.30.0 (from deepface)
Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting gdown>=3.10.1 (from deepface)
Using cached gdown-4.7.1-py3-none-any.whl (15 kB)
Requirement already satisfied: Pillow>=5.2.0 in c:\program files\python312\lib\site-packages (from deepface) (10.1.0)
Requirement already satisfied: opencv-python>=4.5.5.64 in c:\program files\python312\lib\site-packages (from deepface) (4.8.1.78)
Requirement already satisfied: tensorflow>=1.9.0 in c:\program files\python312\lib\site-packages (from deepface) (1.9.0)
Collecting keras>=2.2.0 (from deepface)
Using cached keras-3.0.1-py3-none-any.whl.metadata (4.8 kB)
Collecting Flask>=1.1.2 (from deepface)
Using cached flask-3.0.0-py3-none-any.whl.metadata (3.6 kB)
Collecting mtcnn>=0.1.0 (from deepface)
Using cached mtcnn-0.1.1-py3-none-any.whl (2.3 MB)
Collecting retina-face>=0.0.1 (from deepface)
Using cached retina_face-0.0.13-py3-none-any.whl (16 kB)
Collecting fire>=0.4.0 (from deepface)
Using cached fire-0.5.0.tar.gz (88 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
ERROR: Can not execute `setup.py` since setuptools is not available in the build environment.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.` | closed | 2023-12-09T19:50:13Z | 2023-12-17T09:39:36Z | https://github.com/serengil/deepface/issues/913 | [
"dependencies"
] | raspcode | 1 |
PokeAPI/pokeapi | graphql | 533 | Flavor Texts for different forms/varieties all mixed together | This may be a feature request rather than a bug report. I guess it depends on what the intention is behind the current implementation.
I'm trying to parse the flavor text for a given Pokemon species. This is easy for Pokemon that only have one variety or form. But if I want to show the flavor text for a Pokemon with various forms or varieties, they're all mixed up for the same species and there doesn't seem to be any way to distinguish which is which.
For example, Raichu. When I download it all from the API, there doesn’t seem to be any way to tell the difference between the entry for Alolan Raichu and Kanto Raichu. And I may be wrong, but it looks like they’re not even ordered consistently. I think for lets-go-pikachu/lets-go-eevee the Alola entry is first, but for sword/shield the Kanto entry is first. Looking at Meowth is even more confusing because of Kanto, Alola, Galar, and Gigantamax are all there and the order is inconsistent even between sword and shield.
If this isn't intended, then consider this a bug report. If this is intended, then consider this a feature request that somehow these varieties of flavor texts are somehow separated or at least sorted consistently so when it's parsed out it's at least easy to know which is which. Hopefully I'm not just missing some other organizational system for this? If so, then consider this just a question on how to sort through this. | closed | 2020-10-17T19:02:28Z | 2021-12-10T09:01:11Z | https://github.com/PokeAPI/pokeapi/issues/533 | [] | jonduenas | 19 |
sgl-project/sglang | pytorch | 3,869 | [Bug] bench_serving can not bench sglang with api_key auth | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
When sglang is started with the `--api-key` parameter, an authentication error is reported when running the bench_serving test, even if the `OPENAI_API_KEY` environment variable is set.
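For context, a minimal probe of what the benchmark's requests would need (the URL, port and endpoint are placeholders I picked for illustration, not sglang's actual benchmark code): the key has to reach the server as a Bearer token.
```python
import asyncio
import os

import aiohttp

async def probe(url: str = "http://127.0.0.1:30000/v1/models") -> int:
    headers = {"Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}"}
    async with aiohttp.ClientSession(headers=headers) as session:
        async with session.get(url) as resp:
            return resp.status  # expected 200 with the header, 401 without it

print(asyncio.run(probe()))
```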
### Reproduction
1. Start sglang server with: `--model /data/llms/Qwen2.5-7B-Instruct --served-model-name=model --tp 2 --enable-p2p-check --api-key kebe`
2. Run bench serving `OPENAI_API_KEY=kebe python3 -m sglang.bench_serving --backend sglang --dataset-name random --num-prompts 3000 --random-input 1024 --random-output 1024 --random-range-ratio 0.5 --tokenizer Qwen/Qwen2.5-7B-Instruct`
### Environment
2025-02-26 11:39:27,619 - INFO - flashinfer.jit: Prebuilt kernels not found, using JIT backend
INFO 02-26 11:39:29 __init__.py:190] Automatically detected platform cuda.
Python: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:24:40) [GCC 13.3.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA GeForce RTX 3090
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.6
CUDA_HOME: /root/miniconda3/envs/vllm-dev
NVCC: Cuda compilation tools, release 11.8, V11.8.89
CUDA Driver Version: 560.28.03
PyTorch: 2.5.1+cu124
sglang: 0.4.3.post2
sgl_kernel: 0.0.3.post6
flashinfer: 0.2.2
triton: 3.1.0
transformers: 4.48.3
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.11.13
fastapi: 0.115.8
hf_transfer: 0.1.9
huggingface_hub: 0.29.1
interegular: 0.3.3
modelscope: 1.23.1
orjson: 3.10.15
packaging: 24.2
psutil: 5.9.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.2.0
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.64.0
tiktoken: 0.9.0
anthropic: 0.47.2
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PXB PXB PXB SYS SYS SYS SYS PXB 0-27,56-83 0 N/A
GPU1 PXB X PXB PXB SYS SYS SYS SYS PIX 0-27,56-83 0 N/A
GPU2 PXB PXB X PIX SYS SYS SYS SYS PXB 0-27,56-83 0 N/A
GPU3 PXB PXB PIX X SYS SYS SYS SYS PXB 0-27,56-83 0 N/A
GPU4 SYS SYS SYS SYS X PXB PXB PXB SYS 28-55,84-111 1 N/A
GPU5 SYS SYS SYS SYS PXB X PXB PXB SYS 28-55,84-111 1 N/A
GPU6 SYS SYS SYS SYS PXB PXB X PIX SYS 28-55,84-111 1 N/A
GPU7 SYS SYS SYS SYS PXB PXB PIX X SYS 28-55,84-111 1 N/A
NIC0 PXB PIX PXB PXB SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_bond_0
ulimit soft: 1024
| closed | 2025-02-26T03:40:07Z | 2025-02-28T04:18:55Z | https://github.com/sgl-project/sglang/issues/3869 | [] | kebe7jun | 1 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 332 | Chinese dialogue issue | ### The following items must be checked before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin problems: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for solutions in the corresponding projects.
### Issue type
Model quality issue
### Base model
Chinese-Alpaca-2 (7B/13B)
### Operating system
Windows
### Detailed description of the problem
System: Win11
Command-line tool: Git Bash
Model: ziqingyang/chinese-alpaca-2-7b; a quantized q4_k version has already been generated
Chinese input does not seem to be understood
### Dependencies (must be provided for code-related issues)
_No response_
### Run logs or screenshots
 | closed | 2023-10-10T10:02:51Z | 2023-10-25T22:04:32Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/332 | [
"stale"
] | hsinlung | 2 |
jacobgil/pytorch-grad-cam | computer-vision | 442 | Float 16 not supported by EigenCAM methods (TypeError: array type float16 is unsupported in linalg ) | Hi, I am getting this error when I load my model in float16 (model.half()) to use on the GPU; it works fine when I use model.float(). Is there another workaround for this? Note that I have already cast the tensor to float32 outside of the model before I pass it into cam. I am using a YOLOv5 variant as the model.
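One workaround sketch (my assumption, not an official fix; the target layer below is a placeholder for whatever layer you actually use, and the names reuse my snippet's variables): run the CAM pass on a float32 copy of the model, so numpy's SVD in the traceback below never sees float16 activations.
```python
import copy
from pytorch_grad_cam import EigenCAM

cam_model = copy.deepcopy(model).float().eval()    # float32 copy used only for CAM
target_layers = [cam_model.model[-2]]              # placeholder: pick your real target layer
cam = EigenCAM(model=cam_model, target_layers=target_layers)
grayscale_cam = cam(tensor.float())[0, :, :]
```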
Traceback (most recent call last):
line 146, in <module>
main()
File "/home/lonrix/smapping/EigenCam/maptest.py", line 111, in main
grayscale_cam = cam(tensor, class_id)[0, :, :]
^^^^^^^^^^^^^^^^^^^^^
File "/home/lonrix/anaconda3/envs/mapping/lib/python3.11/site-packages/pytorch_grad_cam/base_cam.py", line 188, in __call__
return self.forward(input_tensor,
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lonrix/anaconda3/envs/mapping/lib/python3.11/site-packages/pytorch_grad_cam/base_cam.py", line 95, in forward
cam_per_layer = self.compute_cam_per_layer(input_tensor,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lonrix/anaconda3/envs/mapping/lib/python3.11/site-packages/pytorch_grad_cam/base_cam.py", line 127, in compute_cam_per_layer
cam = self.get_cam_image(input_tensor,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lonrix/anaconda3/envs/mapping/lib/python3.11/site-packages/pytorch_grad_cam/eigen_cam.py", line 23, in get_cam_image
return get_2d_projection(activations)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lonrix/anaconda3/envs/mapping/lib/python3.11/site-packages/pytorch_grad_cam/utils/svd_on_activations.py", line 15, in get_2d_projection
U, S, VT = np.linalg.svd(reshaped_activations, full_matrices=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lonrix/anaconda3/envs/mapping/lib/python3.11/site-packages/numpy/linalg/linalg.py", line 1663, in svd
t, result_t = _commonType(a)
^^^^^^^^^^^^^^
File "/home/lonrix/anaconda3/envs/mapping/lib/python3.11/site-packages/numpy/linalg/linalg.py", line 173, in _commonType
raise TypeError("array type %s is unsupported in linalg" %
TypeError: array type float16 is unsupported in linalg | closed | 2023-07-17T22:36:52Z | 2023-07-17T22:54:05Z | https://github.com/jacobgil/pytorch-grad-cam/issues/442 | [] | Ryan37342342 | 1 |
babysor/MockingBird | pytorch | 635 | Web UI AI voice mimicry shows a "No such file or directory" error, asking for help, thanks | Error screen:

| closed | 2022-07-07T01:55:43Z | 2022-07-10T03:18:55Z | https://github.com/babysor/MockingBird/issues/635 | [] | yxwudi | 2 |
ultralytics/yolov5 | deep-learning | 13,303 | Error During TensorFlow SavedModel and TFLite Export: TFDetect.__init__() got multiple values for argument 'w' and 'NoneType' object has no attribute 'outputs' | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I encountered errors while attempting to export a YOLOv5 model to TensorFlow SavedModel and TFLite formats. The model is a YOLOv5 with FPN, and the export process fails with the following errors:
`TensorFlow SavedModel: export failure ❌ 1.5s: TFDetect.__init__() got multiple values for argument 'w'`
`TensorFlow Lite: export failure ❌ 0.0s: 'NoneType' object has no attribute 'call'
Traceback (most recent call last):
File "/home/ai/Masood/Pipes/yolov5_old/export.py", line 1542, in <module>
main(opt)
File "/home/ai/Masood/Pipes/yolov5_old/export.py", line 1537, in main
run(**vars(opt))
File "/home/ai/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/ai/Masood/Pipes/yolov5_old/export.py", line 1450, in run
add_tflite_metadata(f[8] or f[7], metadata, num_outputs=len(s_model.outputs))
AttributeError: 'NoneType' object has no attribute 'outputs'`
### Additional
# yolov5fpn.yaml
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [5, 7, 10, 13, 16, 20] # P2/4
- [57.5, 42.0, 46.99, 36.0, 23.99, 17.5] # P3/8
- [30, 61, 62, 45, 59, 119] # P4/16
- [152, 110, 165, 115, 181, 120] # P5/32
### YOLOv5 v6.0 backbone
backbone:
[
[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
### YOLOv5 v6.0 FPN head
head: [
[-1, 3, C3, [1024, False]], # 10 (P5/32-large)
[-1, 1, nn.Upsample, [None, 2, "nearest"]],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 1, Conv, [512, 1, 1]],
[-1, 3, C3, [512, False]], # 14 (P4/16-medium)
[-1, 1, nn.Upsample, [None, 2, "nearest"]],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 1, Conv, [256, 1, 1]],
[-1, 3, C3, [256, False]], # 18 (P3/8-small)
# Add a new layer for P2/4 detection
[-1, 1, nn.Upsample, [None, 2, "nearest"]],
[[-1, 2], 1, Concat, [1]], # cat backbone P2
[-1, 1, Conv, [128, 1, 1]],
[-1, 3, C3, [128, False]], # 22 P2/4-small
# [[18, 14, 10], 1, Detect, [nc, anchors, [128, 256, 512, 1024]]], # Detect(P3, P4, P5)
[[22, 18, 14, 10], 1, Detect, [nc, anchors, [128, 256, 512, 1024]]] # Detect(P2, P3, P4, P5)
] | open | 2024-09-09T06:07:09Z | 2024-10-27T13:30:39Z | https://github.com/ultralytics/yolov5/issues/13303 | [
"question"
] | computerVision3 | 1 |
scikit-learn-contrib/metric-learn | scikit-learn | 251 | Typo in documentation | In [this](http://contrib.scikit-learn.org/metric-learn/weakly_supervised.html) documentation page, in the following paragraph
> The most intuitive way to represent tuples is to provide the algorithm with a 3D array-like of tuples of shape (n_tuples, t, n_features), where n_tuples is the number of tuples, tuple_size is the number of elements in a tuple (2 for pairs, 3 for triplets for instance), and n_features is the number of features of each point.
`t` should be `tuple_size` (or vice versa).
| closed | 2019-09-30T14:58:30Z | 2019-10-12T13:04:49Z | https://github.com/scikit-learn-contrib/metric-learn/issues/251 | [] | leotrs | 1 |
pydata/xarray | pandas | 9,491 | Why both "backend" and "engine"? | ### What is your issue?
I've always felt this was unnecessarily confusing. We have multiple "backends" that are selected through the `engine` kwarg to `open_dataset`, which ultimately calls an instance of a `BackendEntrypoint` subclass. Most of the internal implementation is not called `Engine`-anything, though we do have a function `guess_engine`.
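For readers landing here, the naming in practice (a trivial illustration; the file name is made up):
```python
import xarray as xr

# the "backend" is picked via the `engine` keyword:
ds = xr.open_dataset("example.nc", engine="netcdf4")
```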
Why not `open_dataset(backend=...)` or have an `EngineEntrypoint` internally?
It is probably too late to actually change either of these at this point though. | open | 2024-09-13T00:27:15Z | 2024-09-27T13:37:08Z | https://github.com/pydata/xarray/issues/9491 | [
"topic-backends",
"topic-documentation",
"io"
] | TomNicholas | 3 |
deezer/spleeter | deep-learning | 326 | [Bug] Spleeter 1.5.0 installs Tensorflow 1.14.0 through Conda | ## Description
Downloaded `spleeter` 1.5.0 from `conda` and it installs the wrong tensorflow version.
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
1. Installed using `conda install -c conda-forge spleeter`
## Output
```
$> conda search spleeter=1.5.0=py37hc8dfbb8_0 --info
Loading channels: done
spleeter 1.5.0 py37hc8dfbb8_0
-----------------------------
file name : spleeter-1.5.0-py37hc8dfbb8_0.tar.bz2
name : spleeter
version : 1.5.0
build : py37hc8dfbb8_0
build number: 0
size : 64 KB
license : MIT
subdir : linux-64
url : https://conda.anaconda.org/conda-forge/linux-64/spleeter-1.5.0-py37hc8dfbb8_0.tar.bz2
md5 : 41d09e54e42fa2f56a0285021d720f05
timestamp : 2020-03-20 22:05:14 UTC
dependencies:
- ffmpeg-python
- librosa 0.7.2
- norbert
- pandas 0.25.1
- python >=3.7,<3.8.0a0
- python_abi 3.7.* *_cp37m
- requests
- setuptools
- tensorflow 1.14.0
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Linux |
| Installation type | Conda |
| RAM available | 32Gb |
| Hardware spec | CPU: Intel Xeon Platinum 8000 |
## Additional context
Ran into disparity between the `requirements.txt` and the actual dependencies installed by `conda`
| closed | 2020-04-14T14:49:00Z | 2020-04-15T19:57:11Z | https://github.com/deezer/spleeter/issues/326 | [
"bug",
"distribution",
"conda"
] | Juan-Carlos-Rodero-Sales-Bose | 3 |
feature-engine/feature_engine | scikit-learn | 296 | Improve SelectByTargetMeanPerformance functionality | Idea:
Create predictor class TargetMeanPrediction or similar name with methods fit and predict.
- Fit - learns transformation
- predict - returns the mean target value per observation
Output:
- This transformer will automatically output the mean value of the target per category if variable is categorical (we have an encoder for this).
- If variable is numerical, it will first discretize it (we have discretizers for this, equal width and frequency, user selects) and then replace by the target mean.
The reason to create a predictor class is that then, we can use it with cross_validate and cross_val_score, in the main selector function.
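A rough sketch of what such a predictor could look like (the name, defaults, and the equal-width discretisation choice are all assumptions, not a final design):
```python
import numpy as np
import pandas as pd
from sklearn.base import BaseEstimator

class TargetMeanPrediction(BaseEstimator):
    """Predicts the mean target per category (categorical) or per equal-width bin (numerical)."""

    def __init__(self, bins=5):
        self.bins = bins

    def fit(self, X, y):
        X = pd.DataFrame(X).reset_index(drop=True)
        y = pd.Series(y).reset_index(drop=True)
        self.maps_, self.bin_edges_ = {}, {}
        for var in X.columns:
            col = X[var]
            if pd.api.types.is_numeric_dtype(col):
                # equal-width discretisation; keep the edges to re-bin at predict time
                col, self.bin_edges_[var] = pd.cut(col, bins=self.bins, retbins=True)
            self.maps_[var] = y.groupby(col).mean()
        return self

    def predict(self, X):
        X = pd.DataFrame(X).reset_index(drop=True)
        preds = []
        for var, mapping in self.maps_.items():
            col = X[var]
            if var in self.bin_edges_:
                col = pd.cut(col, bins=self.bin_edges_[var])
            preds.append(col.map(mapping).astype(float))
        return np.nanmean(np.column_stack(preds), axis=1)
```
With fit/predict in place, the selector could then simply call `cross_validate(TargetMeanPrediction(), X[[var]], y, scoring=...)` per feature.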
Things to consider:
- if categorical variables are highly cardinal, the encoders will introduce NaN. In issue #294 we expand the functionality to inform in which variables nan are being introduced to help user troubleshoot
- if numerical variable is highly skewed, nan can be introduced. In issue, #295 we expand the functionality to inform in which variables nan are being introduced to help user troubleshoot
Then we need to re-code the class SelectByTargetMeanPerformance to call our predictor, and use it with cross_validate to return the important features. The advantage of using cross_validate is not just the cross-validation, which offers a less biased score, but also that it allows the use of other metrics, not just roc and r2 as we have at the moment. | closed | 2021-07-18T09:31:41Z | 2022-03-26T08:02:35Z | https://github.com/feature-engine/feature_engine/issues/296 | [
"priority"
] | solegalli | 10 |
miguelgrinberg/Flask-SocketIO | flask | 1,411 | Cannot import emit or socketio from flask | **Your question**
I am unable to import emit or SocketIO into worker app. I get the following error:
from flask import Flask, url_for, render_template, redirect, SocketIO, emit
ImportError: cannot import name 'SocketIO' from 'flask' (C:\Users\gfrick\AppData\Local\Programs\Python\Python39\lib\site-packages\flask\__init__.py)
I've tried:
from flask_module import ....
from flash import SocketIO, emit
from flask.ext.socketio... (deprecated)
The appropriate packages are installed to the best of my knowledge:
Package Version
--------------- ---------
bcrypt 3.2.0
beautifulsoup4 4.9.3
branca 0.4.1
certifi 2020.6.20
cffi 1.14.3
chardet 3.0.4
click 7.1.2
colorama 0.4.4
configparser 5.0.1
cryptography 3.2.1
distro 1.5.0
ebaysdk 2.2.0
elasticsearch 7.9.1
emit 0.4.0
Flask 1.1.2
Flask-WTF 0.14.3
flist 0.602
folium 0.11.0
geographiclib 1.50
geopy 2.0.0
gevent 20.9.0
greenlet 0.4.17
idna 2.10
itsdangerous 1.1.0
Jinja2 2.11.2
lxml 4.6.1
MarkupSafe 1.1.1
module 0.0.4
netifaces 0.10.6
ntlm-auth 1.5.0
numpy 1.19.2
panda 0.3.1
pandas 1.1.3
paramiko 2.7.2
pip 20.2.4
pycparser 2.20
pyinfra 1.2.1
PyNaCl 1.4.0
python-dateutil 2.8.1
pytz 2020.1
pywinrm 0.4.1
requests 2.24.0
requests-ntlm 1.1.0
setuptools 50.3.2
six 1.15.0
socketio 0.2.1
.
.
.
**Logs**
I have no 'socketio' logs for obvious reasons | closed | 2020-11-16T15:43:10Z | 2020-11-16T19:30:55Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1411 | [
"question"
] | gjfrick | 4 |
biolab/orange3 | scikit-learn | 6,744 | No internet connection problem with add-on manager | <!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
<!-- Be specific, clear, and concise. Include screenshots if relevant. -->
<!-- If you're getting an error message, copy it, and enclose it with three backticks (```). -->
```
There's an issue with the internet connection.
Traceback (most recent call last):
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/orangecanvas/application/addons.py", line 510, in <lambda>
lambda config=config: (config, list_available_versions(config)),
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/orangecanvas/application/utils/addons.py", line 369, in list_available_versions
response = session.get(PYPI_API_JSON.format(name=p))
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests_cache/session.py", line 102, in get
return self.request('GET', url, params=params, **kwargs)
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests_cache/session.py", line 158, in request
return super().request(method, url, *args, headers=headers, **kwargs) # type: ignore
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests_cache/session.py", line 194, in send
actions.update_from_cached_response(cached_response, self.cache.create_key, **kwargs)
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests_cache/policy/actions.py", line 184, in update_from_cached_response
usable_response = self.is_usable(cached_response)
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests_cache/policy/actions.py", line 152, in is_usable
or (cached_response.is_expired and self._stale_while_revalidate is True)
File "/Applications/Orange.app/Contents/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests_cache/models/response.py", line 149, in is_expired
return self.expires is not None and datetime.utcnow() >= self.expires
TypeError: can't compare offset-naive and offset-aware datetimes
```

**How can we reproduce the problem?**
<!-- Upload a zip with the .ows file and data. -->
<!-- Describe the steps (open this widget, click there, then add this...) -->
Go to Options > Add-ons
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system: Mac OSX Sonoma 14.2.1
- Orange version: 3.36.1 (but I experienced the same issue on 3.36.2 as well but it did not give the detailed stack trace of the error)
- How you installed Orange: .dmg downloaded from website (**I do not experience this issue when installing via pip**)
- Apple M1 Pro (2021)
| open | 2024-02-21T14:17:17Z | 2024-08-12T08:37:46Z | https://github.com/biolab/orange3/issues/6744 | [
"bug report"
] | kodymoodley | 14 |
kymatio/kymatio | numpy | 363 | Scattering2D doesn't work with 2**J == image_size | I'm not sure if #346 fixes #284 :
```
import torch
from kymatio import Scattering2D
scattering = Scattering2D(J=5, shape=(32, 32))
x = torch.randn(1, 1, 32, 32)
Sx = scattering(x)
print(Sx.size())
```
gives:
```
C:\Python36\python.exe D:/Cours/3A/PFE/Python/kymatio_examples/scattering2D_test.py
Traceback (most recent call last):
File "D:/Cours/3A/PFE/Python/kymatio_examples/scattering2D_test.py", line 6, in <module>
Sx = scattering(x)
File "C:\Python36\lib\site-packages\kymatio-0.2.0.dev0-py3.6.egg\kymatio\scattering2d\scattering2d.py", line 235, in __call__
File "C:\Python36\lib\site-packages\kymatio-0.2.0.dev0-py3.6.egg\kymatio\scattering2d\scattering2d.py", line 188, in forward
File "C:\Python36\lib\site-packages\kymatio-0.2.0.dev0-py3.6.egg\kymatio\scattering2d\backend\backend_torch.py", line 40, in __call__
File "C:\Python36\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "C:\Python36\lib\site-packages\torch\nn\modules\padding.py", line 172, in forward
return F.pad(input, self.padding, 'reflect')
File "C:\Python36\lib\site-packages\torch\nn\functional.py", line 2685, in pad
ret = torch._C._nn.reflection_pad2d(input, pad)
RuntimeError: Argument #4: Padding size should be less than the corresponding input dimension, but got: padding (32, 32) at dimension 3 of input [1, 1, 32, 32]
Process finished with exit code 1
```
Can you reproduce the error ?
The goal is to get a (1, K, 1, 1) tensor.
Thanks | closed | 2019-03-03T09:14:07Z | 2019-07-22T15:11:05Z | https://github.com/kymatio/kymatio/issues/363 | [
"bug",
"2D"
] | Jonas1312 | 7 |
PokeAPI/pokeapi | graphql | 809 | Build fail on latest master with Docker | Latest master fails on `make docker-setup` with
```python
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 316, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 353, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/shell.py", line 92, in handle
exec(sys.stdin.read())
File "<string>", line 1, in <module>
File "/code/data/v2/build.py", line 2270, in build_all
_build_moves()
File "/code/data/v2/build.py", line 791, in _build_moves
build_generic((Move,), "moves.csv", csv_record_to_objects)
File "/code/data/v2/build.py", line 106, in build_generic
model_class.objects.bulk_create(batch)
File "/usr/local/lib/python3.7/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 471, in bulk_create
obj_without_pk._state.db = self.db
File "/usr/local/lib/python3.7/site-packages/cachalot/monkey_patch.py", line 174, in inner
original(self, exc_type, exc_value, traceback)
File "/usr/local/lib/python3.7/site-packages/django/db/transaction.py", line 212, in __exit__
connection.commit()
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 261, in commit
self._commit()
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 239, in _commit
return self.connection.commit()
File "/usr/local/lib/python3.7/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 239, in _commit
return self.connection.commit()
django.db.utils.IntegrityError: insert or update on table "pokemon_v2_move" violates foreign key constraint "pokemon_v2_move_move_target_id_47f917eb_fk_pokemon_v"
DETAIL: Key (move_target_id)=(0) is not present in table "pokemon_v2_movetarget".
```
| open | 2023-01-06T23:06:38Z | 2023-01-07T16:13:35Z | https://github.com/PokeAPI/pokeapi/issues/809 | [] | Thorbenl | 2 |
litestar-org/litestar | pydantic | 3,606 | Bug: Duplication in request URL path when configuring OpenAPI Servers | ### Description
All request URLs in the documentation have duplicated path segments when the Litestar app is configured with a custom path and the OpenAPI config is given servers.
So the final request URL has the full server url from OpenAPIConfig and the path added from the Litestar app.
`http://localhost:8000/api/v3/api/v3/...` instead of `http://localhost:8000/api/v3/...`
### URL to code causing the issue
_No response_
### MCVE
```python
return Litestar(path="api/v3", ...)
return OpenAPIConfig(
path="/schema",
servers=[Server(url="http://localhost:8000/api/v3")],
render_plugins=[StoplightRenderPlugin(version="latest")],
)
```
### Steps to reproduce
```bash
1. Create an OpenAPIConfig that has at least one server e.g. `http://localhost:8000/api/v3`
2. Create a new Litestar app with `api/v3` path and use the OpenAPIConfig from above
3. Run the application and access the schema
4. See that all requests in schema has full server URL from OpenAPI and also the app path which is `api/path` so the final URL is incorrect `http://localhost:8000/api/v3/api/v3/...`
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
`2.9.1`
### Platform
- [X] Linux
- [X] Mac
- [X] Windows
- [ ] Other (Please specify in the description above) | open | 2024-07-01T07:11:52Z | 2025-03-20T15:54:48Z | https://github.com/litestar-org/litestar/issues/3606 | [
"Bug :bug:",
"OpenAPI"
] | mohammedbabelly20 | 0 |
nolar/kopf | asyncio | 191 | [PR] Detect per-field diffs inside of the changed containers | > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2019-09-25 20:51:24+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/191
> Merged by [nolar](https://github.com/nolar) at _2019-09-26 10:34:42+00:00_
Fix the issue with no calling the field-handlers if the change is too big/generic.
> Issue : fixes #190
## Description
```python
import kopf
@kopf.on.field('zalando.org', 'v1', 'kopfexamples', field='spec.field')
def fn(**kwargs):
pass
```
Example original object (note the absence of `spec`!):
```yaml
apiVersion: zalando.org/v1
kind: KopfExample
metadata:
name: kopf-example-1
```
Changed object:
```yaml
apiVersion: zalando.org/v1
kind: KopfExample
metadata:
name: kopf-example-1
spec:
field: value
```
Caused by the diff detection and reduction algorithms:
* The addition of the whole `spec` field is detected, so that the diff equals to `[('add', ('spec',), None, {'field': 'value'})]`.
* The reduction algorithm for `spec.field` scans the whole diff for all records starting with field-prefix `('spec', 'field')`, and finds nothing — `('spec',)` does not start with `('spec', 'field')`.
* The handler is not selected for execution.
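A condensed sketch of that reduction logic (simplified by me for illustration; not the actual kopf source):
```python
diff = [('add', ('spec',), None, {'field': 'value'})]
field = ('spec', 'field')

# Old behaviour: keep only the records whose path starts with the field prefix.
reduced = [d for d in diff if d[1][:len(field)] == field]
assert reduced == []           # ('spec',) does not start with ('spec', 'field') -> handler skipped

# The fix, conceptually: when the changed path is a parent of the field,
# descend into the new value and derive the per-field change from it.
op, path, old, new = diff[0]
remainder = field[len(path):]  # ('field',)
sub_new = new
for key in remainder:
    sub_new = sub_new.get(key) if isinstance(sub_new, dict) else None
assert sub_new == 'value'      # a per-field diff can now be produced for the handler
```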
**In addition** to the fix, the types of diff structures were clarified and restricted, so that later the diffs can be extended with diff-specific DSL (e.g. diff-slicing, etc).
## Types of Changes
- Bug fix (non-breaking change which fixes an issue)
- Refactoring (types).
## Review
_List of tasks the reviewer must do to review the PR_
- [ ] Tests
- [ ] Documentation
| closed | 2020-08-18T20:00:11Z | 2020-08-23T20:49:48Z | https://github.com/nolar/kopf/issues/191 | [
"bug",
"archive"
] | kopf-archiver[bot] | 0 |
QingdaoU/OnlineJudge | django | 127 | 数据上传的问题 | 从Mac OS拷来的数据,压缩成ZIP后上传,显示Empty File
自己做的数据压缩后上传是没有问题的
求如何解决 | closed | 2018-03-02T01:07:31Z | 2019-09-10T06:59:23Z | https://github.com/QingdaoU/OnlineJudge/issues/127 | [] | 1481767320 | 4 |
markjay4k/Audio-Spectrum-Analyzer-in-Python | matplotlib | 11 | On MacOS 10.15.3, the wave appears but the spectrum does not | On MacOS 10.15.3, the wave appears but the spectrum does not. | open | 2020-04-10T05:18:46Z | 2020-04-24T12:23:18Z | https://github.com/markjay4k/Audio-Spectrum-Analyzer-in-Python/issues/11 | [] | nebulou5 | 3 |
idealo/image-super-resolution | computer-vision | 206 | how can me run this frame work please,the steps | open | 2021-06-04T11:07:24Z | 2021-06-04T11:07:24Z | https://github.com/idealo/image-super-resolution/issues/206 | [] | Hager-ahmed2021 | 0 |
|
seleniumbase/SeleniumBase | pytest | 2,789 | Script is not being executed | I am encountering an issue where the script below does not seem to be executed.
The script is supposed to log messages to the console and perform a fetch request, but none of the console logs appear, and the fetch request does not seem to be executed
```python
def sign_in(proxy, username, password):
user, pass_, host, port = proxy.replace('@', ':').split(':')
with SB(uc=True, extension_dir=create_extension(host, port, user, pass_), headless=False) as sb:
sb.driver.uc_open_with_reconnect('https://example/api/oauth2/', 5)
sb.wait_for_ready_state_complete()
script = f'''
console.log('Starting script execution');
fetch('https://example.com/oauth2/authorization/oidc', {{
mode: 'no-cors',
headers: {{
'sec-ch-ua': '"Chromium";v="124", "Google Chrome";v="124", "Not-A.Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
'Sec-Fetch-Site': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-User': '?1',
'Sec-Fetch-Dest': 'document',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.9'
}}
}})
.then(response => response.text())
.then(text => {{
console.log('Response text received');
let parser = new DOMParser();
let doc = parser.parseFromString(text, 'text/html');
let form = doc.querySelector('form[id="kc-form-login"]');
let formActionUrl = form ? form.action : null;
if (formActionUrl) {{
console.log('Form action URL found:', formActionUrl);
let formData = new FormData();
formData.append('username', '{username}');
formData.append('password', '{password}');
fetch(formActionUrl, {{
method: 'POST',
body: formData,
credentials: 'include'
}})
.then(postResponse => {{
if (postResponse.ok) {{
console.log('Login successful');
}} else {{
console.log('Login failed');
}}
}})
.catch(error => console.error('Error during login:', error));
}} else {{
console.log('Form action URL not found');
}}
}})
.catch(error => console.error('Error during fetch:', error));
'''
sb.driver.execute_script(script)
cookies = {cookie['name']: cookie['value'] for cookie in sb.driver.get_cookies()}
        return cookies
```
| closed | 2024-05-19T20:06:29Z | 2024-05-20T12:11:48Z | https://github.com/seleniumbase/SeleniumBase/issues/2789 | [
"invalid usage",
"external",
"UC Mode / CDP Mode"
] | mobti100 | 1 |
snarfed/granary | rest-api | 106 | CI test for Instagram and Google+ scraping against live sites | our Instagram and Google+ scraping have both broken recently when their embedded JSON changed, and we didn't notice until days or weeks after. we should add tests against their live sites, similar to `facebook_live_test.py`, and run them nightly on CircleCI: https://circleci.com/docs/1.0/nightly-builds/ | closed | 2017-05-17T06:53:48Z | 2017-08-04T17:22:04Z | https://github.com/snarfed/granary/issues/106 | [
"now"
] | snarfed | 2 |
axnsan12/drf-yasg | django | 475 | Is there a way to represent multiple responses? | Hi, sorry for my bad English.
I want to represent multiple responses with two serializers.
I found an answer; the link is below.
[https://stackoverflow.com/questions/55772347/documenting-a-drf-get-endpoint-with-multiple-responses-using-swagger](https://stackoverflow.com/questions/55772347/documenting-a-drf-get-endpoint-with-multiple-responses-using-swagger)
But the above answer uses raw JSON and requires writing out all of the fields manually.
Is there a way to do this using a serializer?
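If it helps, one pattern that may already do this (hedged, please check it against your drf-yasg version; the serializer names below are made up) is passing serializer classes directly as the values of the `responses` dict:
```python
from drf_yasg.utils import swagger_auto_schema
from rest_framework.views import APIView

class ThingView(APIView):
    @swagger_auto_schema(responses={
        200: ThingSerializer,     # documented from the serializer, no hand-written JSON
        404: NotFoundSerializer,  # a second serializer for the error shape
    })
    def get(self, request, pk):
        ...
```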
Or can I generate two documents from one URL? | open | 2019-10-18T06:15:48Z | 2025-03-07T12:16:29Z | https://github.com/axnsan12/drf-yasg/issues/475 | [
"triage"
] | darkblank | 0 |
keras-rl/keras-rl | tensorflow | 53 | Is there a way to use this for multi-agent environments? | Can keras-rl be modified to work with multi-agent environments? For example could you teach the ghosts in PacMan to cooperate and catch the PacMan? | closed | 2016-12-05T08:12:25Z | 2020-05-24T01:35:41Z | https://github.com/keras-rl/keras-rl/issues/53 | [] | hmate9 | 15 |
albumentations-team/albumentations | machine-learning | 2,101 | [Add transform] Add RandomMotionBlur | Add RandomMotionBlur which is an alias over MotionBlur and has the same API as Kornia's
https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation.RandomMotionBlur
| closed | 2024-11-08T15:53:56Z | 2024-11-18T23:57:46Z | https://github.com/albumentations-team/albumentations/issues/2101 | [
"enhancement"
] | ternaus | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 382 | Extracting only ground truth masks from training dataset (to build a confusion matrix)? | Hi everyone,
I have successfully trained a U-Net using this library with the CamVid example provided for this repo. Now I am trying to put together some evaluation assessments (plots, graphs, etc.). I am trying to build a confusion matrix to compare the ground truth masks and predicted masks from the training set. Does anyone know how to extract only the masks from the training dataset? Here is how the dataset object was created (taken from the CamVid example and modified):
```python
# Create helper classes for data preprocessing and augmentation.
class Dataset(BaseDataset):
def __init__(
self,
images_dir,
masks_dir,
augmentation=None,
preprocessing=None,
):
self.ids = os.listdir(images_dir)
self.images_fps = [os.path.join(images_dir, image_id) for image_id in self.ids]
self.masks_fps = [os.path.join(masks_dir, image_id) for image_id in self.ids]
self.augmentation = augmentation
self.preprocessing = preprocessing
def __getitem__(self, i):
# Read and resize images and masks to desired resolution. *Interpolation argument must be set to nearest-neighbor
# to preserve ground truth.
image = cv2.imread(self.images_fps[i])
image = cv2.resize(image, (256, 256))
mask = cv2.imread(self.masks_fps[i], 0)
mask = cv2.resize(mask, (256, 256), interpolation = cv2.INTER_NEAREST)
# One-hot encode masks. In this case, low-center polygons have been assigned a pixel value of 119
# and high-center polygons have been assigned a pixel value of 238 on grayscale. To prepare the masks
# for one-hot encoding, these pixel values must be re-assigned to 1 and 2, background will remain 0.
mask[mask == 119] = 1
mask[mask == 238] = 2
masks = tf.one_hot(mask, 3, axis = 0)
mask = np.stack(masks, axis=-1).astype('float')
# Apply augmentations
if self.augmentation:
sample = self.augmentation(image=image, mask=mask)
image, mask = sample['image'], sample['mask']
# Apply preprocessing
if self.preprocessing:
sample = self.preprocessing(image=image, mask=mask)
image, mask = sample['image'], sample['mask']
return image, mask
def __len__(self):
        return len(self.ids)
```
Furthermore, here is how I created a tensor holding all the predicted masks from training:
```python
@torch.no_grad()
def get_all_preds(model, loader):
all_preds = torch.tensor([])
for batch in loader:
images, labels = batch
preds = model(images)
all_preds = torch.cat(
(all_preds, preds)
,dim=0
)
    return all_preds
```
 | closed | 2021-04-15T22:29:50Z | 2021-04-16T04:48:16Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/382 | [] | eliasm56 | 1 |
google-research/bert | nlp | 901 | Wrong number of params in mBERT README? | Hi,
I think the number of params "110M" in the mBERT README is wrong. It should be something around 12\*768\*(4\*768)\*3 + 110k\*768 ~= 170M?
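Spelling out the arithmetic from the question (rounded, and ignoring biases and layer norms):
```python
transformer = 12 * 768 * (4 * 768) * 3   # ~85M across the 12 encoder layers
embeddings = 110_000 * 768               # ~84M for the multilingual wordpiece embeddings
print((transformer + embeddings) / 1e6)  # ~170 (million parameters)
```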
Best,
Alexis | open | 2019-11-05T07:10:07Z | 2019-11-05T07:10:27Z | https://github.com/google-research/bert/issues/901 | [] | aconneau | 0 |
xinntao/Real-ESRGAN | pytorch | 604 | about realesr-general-wdn-x4v3.pth ? | realesr-general-wdn-x4v3.pth
Could you release x1 and x2 models? The GPU is too slow, and running x4 takes too long.
Like this.
realesr-general-wdn-x1v3.pth ?
or
realesr-general-wdn-x2v3.pth ?
I need them.
Sorry, I am using translation software, in case my wording is unclear. | open | 2023-04-09T03:31:19Z | 2023-09-19T16:44:04Z | https://github.com/xinntao/Real-ESRGAN/issues/604 | [] | juntaosun | 1 |
vitalik/django-ninja | pydantic | 1,116 | Field with list factory causes API docs to not load with non-serializable. | When I use Field(list, alias="model related name"), API doc generation no longer works and fails with a serialization error. I'm not sure whether this is how I should be doing it, or whether I should make it `Optional[List[BookSchema]]` for the case where no matching foreign keys exist.
```python
class Author(models.Model):
first_name = models.CharField(max_length=64)
last_name = models.CharField(max_length=64)
class Book(models.Model):
title = models.CharField(max_length=64)
author = models.ForeignKey(
Author,
related_name="books",
null=False,
on_delete=models.CASCADE,
)
class BookSchema(Schema):
title: str
class AuthorSchema(Schema):
first_name: str
last_name: str
books: List[BookSchema] = Field(list, alias="books") # Causes an error in API docs
```
| open | 2024-03-29T02:56:38Z | 2024-03-30T09:30:54Z | https://github.com/vitalik/django-ninja/issues/1116 | [] | wachpwnski | 2 |
PaddlePaddle/ERNIE | nlp | 516 | Error when running ernie-gen from the repro branch | Environment: python==2.7.14, paddle-gpu==1.7.2. I am training on the demo data, using the English ernie_base_2.0 pretrained model. The error is as follows:

| closed | 2020-07-09T04:27:45Z | 2020-07-09T07:29:06Z | https://github.com/PaddlePaddle/ERNIE/issues/516 | [
"repro"
] | niantianlei | 1 |
gradio-app/gradio | deep-learning | 10,344 | example of adding custom js from Gradio docs is not working | ### Describe the bug
I am struggling to accomplish something similar to the example from here: https://www.gradio.app/guides/custom-CSS-and-JS (passing some value from python function to execute in js), but apparently even the example from the gradio website is not working. Could you please suggest an example which works and does the same?
I tried to copy paste the example code and execute it locally (thinking maybe it is an issue with gradio website and not gradio itself) but it also throws a bunch of errors.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
blocks = gr.Blocks()
with blocks as demo:
subject = gr.Textbox(placeholder="subject")
verb = gr.Radio(["ate", "loved", "hated"])
object = gr.Textbox(placeholder="object")
with gr.Row():
btn = gr.Button("Create sentence.")
reverse_btn = gr.Button("Reverse sentence.")
foo_bar_btn = gr.Button("Append foo")
reverse_then_to_the_server_btn = gr.Button(
"Reverse sentence and send to server."
)
def sentence_maker(w1, w2, w3):
return f"{w1} {w2} {w3}"
output1 = gr.Textbox(label="output 1")
output2 = gr.Textbox(label="verb")
output3 = gr.Textbox(label="verb reversed")
output4 = gr.Textbox(label="front end process and then send to backend")
btn.click(sentence_maker, [subject, verb, object], output1)
reverse_btn.click(
None, [subject, verb, object], output2, js="(s, v, o) => o + ' ' + v + ' ' + s"
)
verb.change(lambda x: x, verb, output3, js="(x) => [...x].reverse().join('')")
foo_bar_btn.click(None, [], subject, js="(x) => x + ' foo'")
reverse_then_to_the_server_btn.click(
sentence_maker,
[subject, verb, object],
output4,
js="(s, v, o) => [s, v, o].map(x => [...x].reverse().join(''))",
)
demo.launch()
```
### Screenshot


### Logs
_No response_
### System Info
```shell
gradio==5.12.0
```
### Severity
Blocking usage of gradio | open | 2025-01-13T12:43:00Z | 2025-02-21T14:02:32Z | https://github.com/gradio-app/gradio/issues/10344 | [
"bug"
] | SlimakSlimak | 1 |
plotly/jupyter-dash | dash | 74 | Show output in full height without scroll bar | Hi there,
I am facing an issue where the JupyterDash output cell will have the vertical scroll bar whenever the output is too long.
Is there a way for me to code it such that the output cell will always be at maximum height according to what graphs are generated, so that my users won't need to scroll through two vertical bars?
I have attached a screenshot here to show the two vertical bars I am referring to.

| closed | 2022-01-05T17:01:27Z | 2022-03-08T01:43:22Z | https://github.com/plotly/jupyter-dash/issues/74 | [] | kennethleungty | 1 |
psf/requests | python | 6,697 | Can't access trailers with the Request library | <!-- Summary. -->
Requests doesn't seem to support processing [Trailer](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Trailer) response headers. While I know that Trailer support has been [discussed](https://github.com/psf/requests/issues/3613) [previously](https://github.com/psf/requests/issues/2281), some time has passed since then and I hope the issue can be reconsidered.
The desired use case is to add Trailer header to API responses but we cannot read them using the Request library.
Other frameworks in other languages ([Vert.x](https://vertx.io/docs/vertx-core/java/#_chunked_http_responses_and_trailers) for example) support Trailers, but not Python Requests.
## Expected Result
A Trailer response header has been added to a response, which is then read using Requests, the Trailer should be accessible.
<!-- What you expected. -->
## Actual Result
There is no way to access the Trailer in the response using Requests.
<!-- What happened instead. -->
| closed | 2024-05-09T11:05:19Z | 2024-05-09T12:35:09Z | https://github.com/psf/requests/issues/6697 | [] | fmurray-r7 | 1 |
slackapi/bolt-python | fastapi | 1,009 | Incorporating django example in my app | Hello,
I have incorporated slack-bolt Django example in my Django project.
I can successfully install the app using /slack/install.
However, I have challenges with customisation.
My app is initialised as in the [django example](https://github.com/slackapi/bolt-python/tree/main/examples/django)
```
app = App(
signing_secret=signing_secret,
oauth_settings=OAuthSettings(
client_id=client_id,
client_secret=client_secret,
scopes=scopes,
user_scopes=user_scopes,
# If you want to test token rotation, enabling the following line will make it easy
# token_rotation_expiration_minutes=1000000,
installation_store=DjangoInstallationStore(
client_id=client_id,
logger=logger,
),
state_store=DjangoOAuthStateStore(
expiration_seconds=120,
logger=logger,
),
),
)
```
## Challenge 1:
I would like the installation flow to be triggered from the /profile page of my app.
I generated the [Slack install button](https://api.slack.com/authentication/oauth-v2#buttongen) and placed it inside of my app /profile page. Please note that the profile page is available to the user after authentication.
When the user gets redirected, the Slack page shows up; the user clicks Allow and is redirected back to /slack/oauth_redirect.
The error shows up with information that Slack installation was triggered from a different URL than /slack/install.
I tried to set the installation_url in my app as follows
app.oauth_flow.install_path = '/profile'
app.oauth_flow.settings.install_path = '/profile'
but it didn't work
The only way I could make it work was to disable the state validation
app.oauth_flow.settings.state_validation_enabled = False
### Question 1: How do I set up a custom URL from which Slack app installation can be triggered?
### Question 2: How do I generate the URL in a way that state is properly managed? (Currently I simply use the generated install button HTML code in my django template).
I will appreciate a code example showing how to do it.
## Challenge 2:
When the user approves the Slack app scopes, user is redirected back to /oauth_redirect to complete the app installation (save the data the database). I would like the user to be redirected back to the /profile page after all the settings are saved with additional query string parameters for successful and failed installation.
I tried setting up the following but it doesn't work
app.oauth_flow.settings.redirect_uri = "/profile"
app.oauth_flow.settings.success_url = "/profile?slack_install=1"
app.oauth_flow.settings.failure_url = "/profile?slack_install=0"
### Question: How do I redirect the user back to my app URL from the Bolt Django /oauth_redirect page?
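For Challenge 2, one thing worth trying (a sketch based on my reading of `OAuthSettings`; the parameter names should be double-checked against your slack-bolt version) is configuring the success and failure URLs on the settings object instead of mutating `app.oauth_flow` afterwards:
```python
from slack_bolt.oauth.oauth_settings import OAuthSettings

oauth_settings = OAuthSettings(
    client_id=client_id,
    client_secret=client_secret,
    scopes=scopes,
    user_scopes=user_scopes,
    install_path="/slack/install",              # Bolt's install endpoint
    redirect_uri_path="/slack/oauth_redirect",  # where Slack sends the user back
    success_url="/profile?slack_install=1",     # redirect target after a successful install
    failure_url="/profile?slack_install=0",     # redirect target after a failed install
    installation_store=DjangoInstallationStore(client_id=client_id, logger=logger),
    state_store=DjangoOAuthStateStore(expiration_seconds=120, logger=logger),
)
```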
### Reproducible in:
#### The `slack_bolt` version
slack-bolt==1.18.1
#### Python runtime version
Python 3.9.5
#### OS info
ProductName: macOS
ProductVersion: 14.2.1
BuildVersion: 23C71
Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:34 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T8103
| closed | 2024-01-08T05:25:26Z | 2024-02-26T00:11:07Z | https://github.com/slackapi/bolt-python/issues/1009 | [
"question",
"area:adapter",
"auto-triage-stale"
] | emilmajkowski | 3 |
0b01001001/spectree | pydantic | 242 | error of werkzeug 2.2.0 version | Error with werkzeug >= 2.2.0 version
```
Traceback (most recent call last):
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/flask/app.py", line 2091, in __call__
return self.wsgi_app(environ, start_response)
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/flask/app.py", line 2076, in wsgi_app
response = self.handle_exception(e)
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/flask/app.py", line 1519, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/flask_cors/extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/flask/app.py", line 1517, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/flask/app.py", line 1503, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/spectree/plugins/flask_plugin.py", line 241, in <lambda>
view_func=lambda: jsonify(self.spectree.spec),
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/spectree/spec.py", line 83, in spec
self._spec = self._generate_spec()
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/spectree/spec.py", line 249, in _generate_spec
path, parameters = self.backend.parse_path(
File "/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/spectree/plugins/flask_plugin.py", line 62, in parse_path
from werkzeug.routing import parse_converter_args, parse_rule
ImportError: cannot import name 'parse_rule' from 'werkzeug.routing' (/Users/matvei/Desktop/Work/project/.venv/lib/python3.10/site-packages/werkzeug/routing/__init__.py)
```
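Until the import is adapted, a temporary workaround (an assumption based on the traceback, not an official fix) is to pin Werkzeug below 2.2, where `parse_rule` still exists:
```bash
pip install "werkzeug<2.2"
```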
I found same issue here https://github.com/python-restx/flask-restx/issues/460 | closed | 2022-07-28T13:39:46Z | 2022-08-01T02:18:45Z | https://github.com/0b01001001/spectree/issues/242 | [] | bekishev04 | 3 |
django-cms/django-cms | django | 7,244 | [BUG] Some JavaScript libraries have unexpected behaviour in edit mode | <!--
Please fill in each section below, otherwise, your issue will be closed.
This info allows django CMS maintainers to diagnose (and fix!) your issue
as quickly as possible.
-->
## Description
<!--
If this is a security issue stop immediately and follow the instructions at:
http://docs.django-cms.org/en/latest/contributing/development-policies.html#reporting-security-issues
-->
Some JavaScript plugins have unexpected behaviour in edit mode. In my case - PhotoSwipe.
## Steps to reproduce
<!--
Clear steps describing how to reproduce the issue.
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
-->
1. Install PhotoSwipe
2. Pass some images to PhotoSwipe
3. Try to open the PhotoSwipe modal window in edit and preview modes
4. See that the modal window pops up only in preview mode, while the source image opens in edit mode
## Expected behaviour
<!--
A clear and concise description of what you expected to happen.
-->
The PhotoSwipe modal window opens when clicking on an image in both edit and preview modes.
## Actual behaviour
The source image opens instead of the modal window when clicking on an image in edit mode.
<!--
A clear and concise description of what is actually happening.
-->
## Screenshots
https://imgur.com/a/GRrBZTN
<!--If applicable, add screenshots to help explain your problem.
-->
## Additional information (CMS/Python/Django versions)
Django-cms 3.9.0
Django 3.1.13
Python 3.9
Photoswipe 4.1.3
<!--
Add any other context about the problem such as environment,
CMS/Python/Django versions, logs etc. here.
-->
| closed | 2022-02-17T20:19:11Z | 2022-07-28T04:48:05Z | https://github.com/django-cms/django-cms/issues/7244 | [
"stale"
] | YOBA1112 | 4 |
Nemo2011/bilibili-api | api | 746 | [Bug] Problem fetching the guard (captain) price: a wrong price is returned | **Python version:** 3.12.2
**Module version:** 16.2.0
**Environment:** Windows
**Module path:** `bilibili_api.live LiveRoom`
**Interpreter:** cpython
---
The event received for a 138-yuan captain (guard) purchase has a price of 198000:
{'uid': ________, 'username': '________', 'guard_level': 3, 'num': 1, 'price': 198000, 'gift_id': 10003, 'gift_name': '舰长', 'start_time': 1713451876, 'end_time': 1713451876} | open | 2024-04-19T00:58:50Z | 2024-06-07T17:51:27Z | https://github.com/Nemo2011/bilibili-api/issues/746 | [
"question"
] | finalparanoia | 5 |
qwj/python-proxy | asyncio | 109 | RDP tunnel through ssh | Hi! based on this [article](https://www.fireeye.com/blog/threat-research/2019/01/bypassing-network-restrictions-through-rdp-tunneling.html) i want to do something like this using python proxy (because plink is a disaster and i am looking for a neater way to accomplish this):
```
plink -v -N -T -C -noagent -ssh -R 31337:127.0.0.1:3389 -pw "[SSH_PASSWORD]" -2 -4 -D [PROXY_SERVER]:[PROXY_PORT] root@[SSH_SERVER]
```
Basically, I want to remotely map my RDP port to a random port (31337) on the SSH server, but to bypass the firewall I also have to go through a proxy (I already have a solution for the proxy part, so that is not the question here). Is it possible to replace plink with python-proxy in this scenario?
Cheers! | closed | 2021-01-30T16:40:36Z | 2021-02-20T17:51:08Z | https://github.com/qwj/python-proxy/issues/109 | [] | shar333n | 3 |
HIT-SCIR/ltp | nlp | 661 | How to deploy this in Docker | How can I deploy this in Docker? | open | 2023-08-15T06:01:55Z | 2023-08-15T06:01:55Z | https://github.com/HIT-SCIR/ltp/issues/661 | [] | liyanfu520 | 0 |
predict-idlab/plotly-resampler | plotly | 258 | Application to scatter_geo? | Hello
I've found this resampler to be of great use, but for a project within my company we want to plot a heatmap on a world map. For this I was using the scattergeo graph type. However, I'm plotting about 300k points, so it's very slow to interact with. Is it possible to use this resampler for this type of graph?
Thanks in advance for any help! | open | 2023-09-19T09:18:15Z | 2023-11-09T02:52:40Z | https://github.com/predict-idlab/plotly-resampler/issues/258 | [
"new feature"
] | ODupon | 0 |
encode/databases | sqlalchemy | 105 | Support or document example with Sqlalchemy Declarative | Not sure if what I'm asking here is #76 since I'm not that familiar with sqlalchemy. I've seen various places define sqlalchamy tables by inheriting from `sqlalchemy.ext.declarative.declarative_base()`.
There's no examples for using it with `databases` so I assume it just isn't possible? | closed | 2019-06-01T01:23:30Z | 2022-02-07T17:24:53Z | https://github.com/encode/databases/issues/105 | [] | NotAFile | 8 |
streamlit/streamlit | data-science | 10,719 | Nested periodic st.fragments can cause StreamlitDuplicateElementId | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
When you create two fragments that both rerun automatically and call one from the other, sometimes one rerun can cause both fragments to rerun at the same time and generate the same widget in the inner fragment, which causes an error.
### Reproducible Code Example
```Python
import streamlit as st
@st.fragment(run_every=0.2)
def func2():
st.button("OK2")
@st.fragment(run_every=0.5)
def func():
st.button("OK")
func2()
func()
```
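A possible mitigation while the race exists (hedged, and it changes the layout slightly) is to avoid nesting the two auto-rerunning fragments and to give the widgets explicit keys:
```python
import streamlit as st

@st.fragment(run_every=0.2)
def func2():
    st.button("OK2", key="ok2")  # explicit keys make the element ids deterministic

@st.fragment(run_every=0.5)
def func():
    st.button("OK", key="ok")

# call both from the top level so one fragment's rerun never re-executes the other
func()
func2()
```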
### Steps To Reproduce
- start streamlit run <script.py>
- see streamlit console logs and wait (~4-5 sec for me, or make smaller run_every times)
### Expected Behavior
One of:
- The streamlit rendering smoothly without any error, without any console-log
- The streamlit wrote an error at the first run, stating that a fragment cannot contain another fragment (or another fragment with an automatic rerun setting).
### Current Behavior
A warning message appears every `func` fragment rerun time
---
```
<timestamp> The fragment with id <fragment-id> does not exist anymore - it might have been removed during a preceding full-app rerun.
```
Error message appears after a short time (for me ~4-5 sec)
---
```
<timestamp> Uncaught app execution
Traceback (most recent call last):
File "<streamlit_path>/streamlit/runtime/fragment.py", line 244, in wrapped_fragment
result = non_optional_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<script_path>/main.py", line 5, in func2
st.button("OK2")
File "<streamlit_path>/streamlit/runtime/metrics_util.py", line 410, in wrapped_func
result = non_optional_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<streamlit_path>/streamlit/elements/widgets/button.py", line 243, in button
return self.dg._button(
^^^^^^^^^^^^^^^^
File "<streamlit_path>/streamlit/elements/widgets/button.py", line 1010, in _button
element_id = compute_and_register_element_id(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<streamlit_path>/streamlit/elements/lib/utils.py", line 241, in compute_and_register_element_id
_register_element_id(ctx, element_type, element_id)
File "<streamlit_path>/streamlit/elements/lib/utils.py", line 147, in _register_element_id
raise StreamlitDuplicateElementId(element_type)
streamlit.errors.StreamlitDuplicateElementId: There are multiple `button` elements with the same auto-generated ID. When this element is created, it is assigned an internal ID based on the element type and provided parameters. Multiple elements with the same type and parameters will cause this error.
```
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.43.1
- Python version: 3.12.3
- Operating System: Linux Mint 22.1
- Browser: Firefox Browser 135.0.1 (64-bit)
### Additional Information
_No response_ | open | 2025-03-11T14:56:57Z | 2025-03-12T14:27:18Z | https://github.com/streamlit/streamlit/issues/10719 | [
"type:bug",
"status:confirmed",
"priority:P3",
"feature:st.fragment"
] | schaumb | 2 |
unit8co/darts | data-science | 2,396 | [BUG] Static covariates not added to val_series for RegressionModel | **Describe the bug**
When fitting a RegressionModel on a series which includes static covariates, and also providing validation data, an error will be thrown due to data shape mismatch. This appears to be because lagged data is created from the val series (in self.fit) _before_ self.uses_static_covariates is set to True in super.fit - after which lagged data is created from the train series. Only when this attribute is set to True are static covariates added in when lagged data is created. Hence the train data will have an extra feature but not the val data.
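Conceptually (pseudocode only; `create_lagged_data` and the attribute names here are placeholders, not darts internals), the fix would be to build the validation lags only after the flag has been set:
```python
def fit(self, series, val_series=None, **kwargs):
    # 1) let the parent class decide whether static covariates are used;
    #    this is where self._uses_static_covariates becomes True
    super().fit(series, **kwargs)

    # 2) only now build lagged features, so train and val get the same columns
    X_train, y_train = create_lagged_data(series, use_static=self._uses_static_covariates)
    if val_series is not None:
        X_val, y_val = create_lagged_data(val_series, use_static=self._uses_static_covariates)
```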
**To Reproduce**
The code below (modified from example in CatBoostModel docs) will throw an error when model.fit is run. However if the preceding line is uncommented, manually setting self.uses_static_covariates to True, then it will run successfully.
```python
import pandas as pd
from darts.datasets import WeatherDataset
from darts.models import CatBoostModel
series = WeatherDataset().load()
# adding static covs
static_covs = pd.DataFrame({"year": [2020]})
series = series.with_static_covariates(static_covs)
# predicting atmospheric pressure
target = series['p (mbar)'][:100]
# optionally, use past observed rainfall (pretending to be unknown beyond index 100)
past_cov = series['rain (mm)'][:100]
# optionally, use future temperatures (pretending this component is a forecast)
future_cov = series['T (degC)'][:106]
# using subsequent time period as validation data
val_target = series['p (mbar)'][100:118]
val_past_cov = series['rain (mm)'][100:118]
val_future_cov = series['T (degC)'][100:118]
# predict 6 pressure values using the 12 past values of pressure and rainfall, as well as the 6 temperature
# values corresponding to the forecasted period
model = CatBoostModel(
lags=12,
lags_past_covariates=12,
lags_future_covariates=[0,1,2,3,4,5],
output_chunk_length=6,
use_static_covariates=True,
)
# UNCOMMENT to prevent error in model.fit
# setattr(model, "_uses_static_covariates", True)
model.fit(
target,
past_covariates=past_cov,
future_covariates=future_cov,
val_series=val_target,
val_future_covariates=val_future_cov,
val_past_covariates=val_past_cov,
)
pred = model.predict(6)
pred.values()
```
**Expected behavior**
model.fit not to throw an error when fitting on series with static covariates and also using validation data.
**System (please complete the following information):**
- Python version: 3.10.3
- darts version: 0.29.0
**Additional context**
Obligatory thanks for creating such a great package. Your docs are amongst the best I've come across!
| closed | 2024-05-28T11:09:47Z | 2024-05-31T15:09:15Z | https://github.com/unit8co/darts/issues/2396 | [
"bug",
"triage"
] | sharmuz | 1 |
ploomber/ploomber | jupyter | 697 | Path resolution discrepancy between ploomber build and Jupyter integration | ### Summary
Ploomber resolves relative paths differently depending on the context. In `ploomber --build` it seems that the paths are resolved relative to the current working directory. In Jupyter, they are resolved relative to the notebook location. The result is that the notebooks can't find files in one or the other context.
### Background
I'm using `pipeline.yaml` to inject relative paths into the notebooks in the standard way:
```
- source: '{{src_nb_dir}}/change-point.py'
name: change-point
product: '{{out_nb_dir}}/change-point.ipynb'
params:
data_path: '{{out_data_dir}}/request-latencies.parquet'
```
In `env.yaml` I have
```
out_data_dir: '../output/data'
```
Relevant directory structure:
```
├───output
│ ├───data
│ └───nb
├───pipeline
└───.ipynb_checkpoints
```
My notebooks are in `pipeline`.
So in the above, the notebooks find the data files in Jupyter, but not in `ploomber build`.
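One thing that might remove the discrepancy (hedged: availability of the placeholder depends on the Ploomber version, and the relative part has to match where env.yaml lives) is anchoring the path in `env.yaml` with `{{here}}` instead of a bare relative path:
```yaml
# env.yaml: {{here}} expands to the directory containing env.yaml, so the same
# absolute path is produced in `ploomber build` and when Jupyter injects params
out_data_dir: '{{here}}/output/data'
```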
| open | 2022-03-31T18:37:28Z | 2022-04-01T01:30:54Z | https://github.com/ploomber/ploomber/issues/697 | [] | williewheeler-ms | 2 |
deepinsight/insightface | pytorch | 2,210 | arcface torch load pretrain_model? | After training arcface, the workdir contains only a single model.pt. How should I write the path in the config so that I can load this model and continue training? | open | 2023-01-04T06:14:18Z | 2023-01-06T03:33:04Z | https://github.com/deepinsight/insightface/issues/2210 | [] | sssssshf | 1 |
zappa/Zappa | flask | 1,112 | [Question] How does Lambda layers work? | Hi, I'm currently using AWS Chalice for my services and am currently looking to see if I can use Flask/Django + Zappa instead
1. Are lambda layers autogenerated by for dependencies (like Chalice's `automatic layers`)
2. Are lambda layers updated for each update or only on change of dependencies
| closed | 2022-02-22T12:48:50Z | 2023-06-22T15:39:55Z | https://github.com/zappa/Zappa/issues/1112 | [] | VaZark | 1 |
explosion/spaCy | nlp | 12,900 | Transformer NER training and Loading | Hey , I am training a NER on my own data on a GPU, using your transformer model.
I have installed Spacy and generated my config.cfg
I trained the model and I save it under the name best model, I am trying to load it in order to use it but it shows an error :
Irun this command : nlp_ner = spacy.load("/content/model-best")
it gives me : RuntimeError: Error(s) in loading state_dict for RobertaModel:
Missing key(s) in state_dict: "embeddings.position_ids".
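For context, this particular missing key usually points to a mismatch between the `transformers` version used for training and the one used for loading (the `position_ids` buffer stopped being saved in newer releases). Aligning the two environments is worth trying first; the pin below is only an example, substitute whatever the training environment used:
```bash
pip show transformers                 # check what the loading environment has
pip install "transformers==4.28.1"    # example pin only: match the training environment
```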
(config.cfg)
```ini
[paths]
train = null
dev = null
vectors = null
init_tok2vec = null
[system]
gpu_allocator = "pytorch"
seed = 0
[nlp]
lang = "en"
pipeline = ["transformer","ner"]
batch_size = 128
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
[components]
[components.ner]
factory = "ner"
incorrect_spans_key = null
moves = null
scorer = {"@scorers":"spacy.ner_scorer.v1"}
update_with_oracle_cut_size = 100
[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = false
nO = null
[components.ner.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"
[components.transformer]
factory = "transformer"
max_batch_items = 4096
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}
[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "roberta-base"
mixed_precision = false
[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96
[components.transformer.model.grad_scaler_config]
[components.transformer.model.tokenizer_config]
use_fast = true
[components.transformer.model.transformer_config]
[corpora]
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[training]
accumulate_gradient = 3
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []
annotating_components = []
before_to_disk = null
before_update = null
[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 2000
buffer = 256
get_length = null
[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 20000
initial_rate = 0.00005
[training.score_weights]
ents_f = 1.0
ents_p = 0.0
ents_r = 0.0
ents_per_type = null
[pretraining]
[initialize]
vectors = ${paths.vectors}
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null
[initialize.components]
[initialize.tokenizer]
```
| closed | 2023-08-09T13:53:41Z | 2023-10-12T00:02:14Z | https://github.com/explosion/spaCy/issues/12900 | [
"bug",
"third-party",
"feat / serialize",
"feat / transformer"
] | Oumayma68 | 9 |
graphql-python/graphene-sqlalchemy | graphql | 236 | object of type 'SQLAlchemyConnectionField' has no len() | Hello,
I am implementing a connection field.
The excerpts from the code are:
```
# Classes:
class Common(graphene_sqlalchemy.SQLAlchemyObjectType):
class Meta:
abstract = True
...
class MyGraphQLModel(Common):
class Meta:
model = MyModel
interfaces = (graphene.relay.Node, )
# Fields:
mymodels = graphene.List(MyGraphQLModel)
def mymodels(p,i):
query = MyGraphQLModel.get_query(i)
return query.all()
connection_mymodels = graphene_sqlalchemy.SQLAlchemyConnectionField(MyGraphQLModel)
```
The first field (`mymodels`), which is a plain list without a connection, works fine.
The second field, which is a connection, gives this error:
```
"message": "object of type 'SQLAlchemyConnectionField' has no len()",
```
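For comparison, the relay-style pattern that is usually shown for connection fields looks roughly like the following (a sketch; whether `MyGraphQLModel.connection` exists depends on the graphene-sqlalchemy version):
```python
import graphene
from graphene_sqlalchemy import SQLAlchemyConnectionField

class Query(graphene.ObjectType):
    node = graphene.relay.Node.Field()
    # newer releases expect the auto-generated connection class, not the node type itself
    connection_mymodels = SQLAlchemyConnectionField(MyGraphQLModel.connection)

schema = graphene.Schema(query=Query)
```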
Any ideas? Thanks. | open | 2019-07-11T07:36:43Z | 2019-10-30T12:50:05Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/236 | [] | docelic | 2 |
graphistry/pygraphistry | pandas | 620 | [BUG] gfql de-serialization | **Describe the bug**
bug in the gfql de-serialization
**To Reproduce**
Code, including data, than can be run without editing:
```python
from graphistry.compute.chain import Chain
from graphistry import e_undirected, is_in
chain_operations = graphistry.Chain([
e_undirected(hops=1, edge_match={"source": is_in(options=[
"Oakville Square",
"Maplewood Square"
])})
])
Chain.from_json(chain_operations.to_json())
```
**Expected behavior**
The query should be de-serialised and re-serialised correctly
**Actual behavior**
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[41], line 9
2 from graphistry import e_undirected, is_in
3 chain_operations = graphistry.Chain([
4 e_undirected(hops=1, edge_match={"source": is_in(options=[
5 "Oakville Square",
6 "Maplewood Square"
7 ])})
8 ])
----> 9 Chain.from_json(chain_operations.to_json())
11 # ga = g.chain(chain_operations).settings(height=1300)
12 # # g3 = g3.edges(g3._edges.sort_values(by=['source', 'destination']))
13 # # g3 = g3.edges(g3._edges.reset_index().rename(columns={'index': 'edgeId'})).bind(edge='edgeId')
14 # # g3._edges
15 # ga.plot()
File ~/.local/lib/python3.10/site-packages/graphistry/compute/chain.py:37, in Chain.from_json(cls, d)
35 assert 'chain' in d
36 assert isinstance(d['chain'], list)
---> 37 out = cls([ASTObject_from_json(op) for op in d['chain']])
38 out.validate()
39 return out
File ~/.local/lib/python3.10/site-packages/graphistry/compute/chain.py:37, in <listcomp>(.0)
...
38 """
39 constructor_args = {k: v for k, v in d.items() if k not in cls.reserved_fields}
---> 40 return cls(**constructor_args)
TypeError: ASTPredicate() takes no arguments
```
**Screenshots**
If applicable, any screenshots to help explain the issue
**Browser environment (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Graphistry GPU server environment**
- Where run [e.g., Hub, AWS, on-prem]
- If self-hosting, Graphistry Version [e.g. 0.14.0, see bottom of a viz or login dashboard]
- If self-hosting, any OS/GPU/driver versions
**PyGraphistry API client environment**
- Where run [e.g., Graphistry 2.35.9 Jupyter]
- Version [e.g. 0.14.0, print via `graphistry.__version__`]
- Python Version [e.g. Python 3.7.7]
**Additional context**
Add any other context about the problem here.
| closed | 2024-12-18T18:20:07Z | 2024-12-24T05:51:22Z | https://github.com/graphistry/pygraphistry/issues/620 | [
"bug"
] | mj3cheun | 3 |
PedroBern/django-graphql-auth | graphql | 110 | I register a user without a password, an error comes that such a user does not exist. But at the same time yuh | {
"data": {
"createUser": {
"success": false,
"errors": {
"email": [
{
"message": "Custom user with this Email already exists.",
"code": "unique"
}
]
}
}
}
}
class Mutation(graphene.ObjectType):
create_user = mutations.Register.Field()
ALLOW_PASSWORDLESS_REGISTRATION = True
GRAPHQL_AUTH = {
"LOGIN_ALLOWED_FIELDS": ['email'],
"REGISTER_MUTATION_FIELDS": ['first_name', 'last_name', 'email', 'role'],
"ALLOW_LOGIN_NOT_VERIFIED": False,
"ALLOW_PASSWORDLESS_REGISTRATION": True,
"SEND_PASSWORD_SET_EMAIL": True,
"SEND_ACTIVATION_EMAIL": False,
} | open | 2021-04-13T11:09:00Z | 2021-04-13T18:55:59Z | https://github.com/PedroBern/django-graphql-auth/issues/110 | [] | eshpilevsky | 2 |
inducer/pudb | pytest | 274 | Previus command is painted with delay when running under Docker | Because you respond so promptly, I reward you with more issues :). This one might not be an easy one to track down (and it might not even be related to PuDB but perhaps to urwid): when I run PuDB inside a Docker container (both the host and guest are Ubuntu 17.04) and I go to the Command line (Ctrl+X) and press Ctrl+P to get the previous command, the previous command is not painted until I move the focus away. This does not happen when running PuDB outside of Docker.
I know that this sounds confusing so I made a little video: https://asciinema.org/a/Lcx5SFyzqgvtJRYHIUfdIYTS1
What you're seeing:
- I'm starting the tests which are being run inside of a docker container
- I switch to the command line (Ctrl+X) and evaluate `self`
- Now I press Ctrl+P and expect to see `self` again
- But actually I don't see `self` until I press the right arrow to focus on the "<Clear>" button
I now just instinctively do right-arrow + left-arrow but perhaps there is a fix for this? | closed | 2017-09-04T13:35:46Z | 2017-10-01T06:40:53Z | https://github.com/inducer/pudb/issues/274 | [] | cdman | 8 |
neuml/txtai | nlp | 517 | Add count method to database | Add `count` method to the database. | closed | 2023-08-09T11:03:22Z | 2023-08-09T11:28:27Z | https://github.com/neuml/txtai/issues/517 | [] | davidmezzetti | 0 |
xonsh/xonsh | data-science | 5,717 | Redirecting to fifo and backgrounding doesn't actually put process in background | ## Current Behavior
```xsh
mkfifo fifo
echo "fifo" > fifo & # this blocks
```
The process doesn't actually go in the background until a reader also appears on the fifo, which defeats the purpose at that point.
## Expected Behavior
The process should immediately go in the background.
## xonfig
```xsh
+------------------+-------------------------+
| xonsh | 0.14.4 |
| Python | 3.12.3 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.43 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.17.2 |
| on posix | True |
| on linux | True |
| distro | ubuntu |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib | [] |
| RC file 1 | /home/<username>/.xonshrc |
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| open | 2024-10-30T14:54:50Z | 2024-10-31T11:51:11Z | https://github.com/xonsh/xonsh/issues/5717 | [
"threading",
"edge-case"
] | Supreeeme | 1 |
ultralytics/yolov5 | deep-learning | 13,141 | how to convert pt to onnx to trt | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
how to convert pt to onnx to trt
### Additional
I'm doing this:
python export.py --weights best.pt --include onnx --opset 12
then: trtexec --onnx=best.onnx --saveEngine=best.trt
and then, when I try to load the model, I get this:

I used to be able to do it, but six months later I forgot how I did it.
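For what it's worth, recent YOLOv5 versions can export a TensorRT engine directly, which skips the manual `trtexec` step (commands are a sketch; flags may differ slightly by version):
```bash
python export.py --weights best.pt --include engine --device 0   # writes best.engine
python detect.py --weights best.engine --source data/images      # load it like any other weights file
```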
Please help | closed | 2024-06-27T03:11:08Z | 2024-12-16T10:28:50Z | https://github.com/ultralytics/yolov5/issues/13141 | [
"question",
"Stale"
] | gdfapokgdpafog | 8 |
gee-community/geemap | jupyter | 812 | Add points from xy | Create a marker cluster from a csv or pandas dataframe containing xy coordinates.
| closed | 2021-12-13T16:34:39Z | 2024-06-18T17:42:03Z | https://github.com/gee-community/geemap/issues/812 | [
"Feature Request"
] | giswqs | 4 |
dpgaspar/Flask-AppBuilder | rest-api | 1,823 | Version 4.0.0 has insufficient constraints on dependencies | ### Environment
Flask-Appbuilder version: 4.0.0
pip freeze output:
```
apispec==3.3.2
attrs==21.4.0
Babel==2.9.1
click==8.1.1
colorama==0.4.4
dnspython==2.2.1
email-validator==1.1.3
Flask==2.1.1
Flask-AppBuilder==4.0.0
Flask-Babel==2.0.0
Flask-JWT-Extended==4.3.1
Flask-Login==0.4.1
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.14.3
greenlet==1.1.2
idna==3.3
importlib-metadata==4.11.3
itsdangerous==2.1.2
Jinja2==3.1.1
jsonschema==4.4.0
MarkupSafe==2.1.1
marshmallow==3.15.0
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.26.1
packaging==21.3
pep517==0.12.0
pip-tools==6.5.1
prison==0.2.1
PyJWT==2.3.0
pyparsing==3.0.7
pyrsistent==0.18.1
python-dateutil==2.8.2
pytz==2022.1
PyYAML==6.0
six==1.16.0
SQLAlchemy==1.4.33
SQLAlchemy-Utils==0.38.2
tomli==2.0.1
Werkzeug==2.1.0
WTForms==2.3.3
zipp==3.7.0
```
### Expected results
I would expect to be able to successfully run "import flask_appbuilder".
### Actual results
An exception occurs:
```pytb
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\local\testfab\venv\lib\site-packages\flask_appbuilder\__init__.py", line 5, in <module>
from .api import ModelRestApi # noqa: F401
File "C:\local\testfab\venv\lib\site-packages\flask_appbuilder\api\__init__.py", line 62, in <module>
from ..security.decorators import permission_name, protect
File "C:\local\testfab\venv\lib\site-packages\flask_appbuilder\security\decorators.py", line 21, in <module>
from flask_login import current_user
File "C:\local\testfab\venv\lib\site-packages\flask_login\__init__.py", line 16, in <module>
from .login_manager import LoginManager
File "C:\local\testfab\venv\lib\site-packages\flask_login\login_manager.py", line 24, in <module>
from .utils import (_get_user, login_url as make_login_url, _create_identifier,
File "C:\local\testfab\venv\lib\site-packages\flask_login\utils.py", line 13, in <module>
from werkzeug.security import safe_str_cmp
ImportError: cannot import name 'safe_str_cmp' from 'werkzeug.security' (C:\local\testfab\venv\lib\site-packages\werkzeug\security.py)
```
### Steps to reproduce
```bash
mkdir ~/testfab && cd ~/testfab
python -m venv venv
source venv/bin/activate
pip install flask-appbuilder==4.0.0
python -c 'import flask_appbuilder'
``` | closed | 2022-03-31T15:45:58Z | 2022-05-03T14:40:47Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1823 | [
"bug",
"dependency-bump"
] | chrihartl | 10 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,135 | Can I train a model from sketch to water cup? This model can include glass, mug and thermos cup | Can I train a model from sketch to water cup? This model can include glass, mug and thermos cup | closed | 2020-08-28T11:21:51Z | 2020-09-17T09:46:29Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1135 | [] | LIMr1209 | 1 |
sinaptik-ai/pandas-ai | data-visualization | 1,413 | env file not found | ### System Info
OS version: macOS 15.0.1
Python version: I assume as specified in Dockerfile of server's folder, _FROM python:3.11-slim_
pandasai version: latest
### 🐛 Describe the bug
```
$ docker compose up
env file /Users/Giuseppe/developing/pandas-ai/server/.env not found: stat /Users/Giuseppe/developing/pandas-ai/server/.env: no such file or directory
``` | closed | 2024-10-29T11:40:43Z | 2024-10-29T14:00:40Z | https://github.com/sinaptik-ai/pandas-ai/issues/1413 | [
"bug"
] | giuseppe-coco | 3 |
microsoft/nni | data-science | 5,231 | PyTorch Pruning with ProxylessNAS | Hi,
I am interested in using pruning with ProxylessNAS. So I added a new class like "MBInvertedConvLayer_Sparse" to the ProxylessNAS example that applies pruning in the [forward pass](https://github.com/microsoft/nni/blob/aec9962673e033f0d8d7784ac8dfba3ee198f0e3/examples/nas/oneshot/proxylessnas/ops.py#L309).
example:
```
def forward(self, x):
x = self.inverted_bottleneck(x)
x = self.depth_conv(x)
x = self.point_linear(x)
return prune.ln_structured(nn.Conv2d, 'weight', 0.2)
```
However, I get the below error:
```
File "~nni/retiarii/oneshot/pytorch/proxylessnas.py", line 74, in forward
return ArchGradientFunction.apply(
File "~nni/retiarii/oneshot/pytorch/proxylessnas.py", line 31, in forward
return output.data
AttributeError: 'NoneType' object has no attribute 'data'
```
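Independent of the ProxylessNAS internals, the snippet above likely trips over two things: `prune.ln_structured` expects a module instance (plus `n` and `dim`), and `forward` still has to return the activation tensor. A hedged correction (assuming `depth_conv` is, or contains, the `Conv2d` you want to prune) would be:
```python
def forward(self, x):
    # prune in place on the layer instance; n=2 / dim=0 removes whole output channels by L2 norm
    prune.ln_structured(self.depth_conv, name="weight", amount=0.2, n=2, dim=0)
    x = self.inverted_bottleneck(x)
    x = self.depth_conv(x)
    x = self.point_linear(x)
    return x  # return the tensor, not the result of the pruning call
```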
Any idea how I could hack proxylessNAS for it? Thanks! | closed | 2022-11-17T13:06:43Z | 2022-11-25T08:06:13Z | https://github.com/microsoft/nni/issues/5231 | [] | singagan | 5 |
aio-libs/aiomysql | sqlalchemy | 121 | Connections from Pool Keeps Getting Old Data from Database | I have an API that executes code similar to this:
```
mysql = await create_pool(host='HOST', port=PORT, user='USER', password='PWORD', db='DB', loop=asyncio.get_event_loop())
async with mysql.acquire() as conn:
async with conn.cursor() as cur:
query_string = "SELECT * FROM table;"
await cur.execute(query_string)
rows = await cur.fetchall()
```
This works well to get the required data from the db the first time. However, on making the same request over and over again, even after changing the data in the database, I noticed that the results obtained from this piece of code are different from what is in the DB. What was returned was actually the result from before making changes to the db outside of the API.
Python version = 3.5.1
MySQL version = 5.6
Environment = Fedora 24
One thing I noticed is that when I explicitly called `conn.close()` after executing the query, it worked correctly.
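That observation fits the usual explanation: with autocommit off, each pooled connection keeps an open REPEATABLE READ transaction, so it keeps serving the old snapshot until the connection is closed. Two common fixes (a sketch, not taken from the original report; both run inside an async def as in the snippet above) are enabling autocommit on the pool or committing after each read:
```python
import aiomysql

# option 1: let every statement run in its own implicit transaction
pool = await aiomysql.create_pool(host='HOST', port=PORT, user='USER',
                                  password='PWORD', db='DB', autocommit=True)

# option 2: end the read transaction explicitly so the next query sees fresh data
async with pool.acquire() as conn:
    async with conn.cursor() as cur:
        await cur.execute("SELECT * FROM table;")
        rows = await cur.fetchall()
    await conn.commit()
```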
I am assuming this shouldn't be how to use `conn.close()` in this case? | closed | 2016-11-11T19:13:52Z | 2021-07-15T09:37:00Z | https://github.com/aio-libs/aiomysql/issues/121 | [] | jleoirab | 3 |
microsoft/nni | machine-learning | 5,286 | free(): invalid pointer Aborted (core dumped) | While I was trying to use NAS I got this error.
free(): invalid pointer
Aborted (core dumped)
my code is:
```
import torch
import torch.nn.functional as F
import nni.retiarii.nn.pytorch as nn
from nni.retiarii import model_wrapper
@model_wrapper # this decorator should be put on the out most
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout(0.25)
self.dropout2 = nn.Dropout(0.5)
self.fc1 = nn.Linear(9216, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(self.conv2(x), 2)
x = torch.flatten(self.dropout1(x), 1)
x = self.fc2(self.dropout2(F.relu(self.fc1(x))))
output = F.log_softmax(x, dim=1)
return output
```
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 20.04
- Python version: 3.8
- PyTorch version: 1.13.0+cu117
- Is conda/virtualenv/venv used?: Venv
- Is running in Docker?:No
| closed | 2022-12-17T19:48:40Z | 2022-12-21T13:41:16Z | https://github.com/microsoft/nni/issues/5286 | [] | Armanasq | 3 |
ipython/ipython | data-science | 14554 | contribution file not mentioned clearly | It would help contributors if the README clearly listed the contribution steps. | closed | 2024-10-21T07:23:08Z | 2024-10-21T07:26:28Z | https://github.com/ipython/ipython/issues/14554 | [] | jayadarshinig0609 | 0 |
gee-community/geemap | jupyter | 2,096 | Unable to implement ee_to_geopandas | I have been trying to incorporate ee_to_geopandas into my project using Jupyter. It produced an error showing that ee_to_geopandas is not a valid attribute;
it was something like:
module 'geemap' has no attribute 'ee_to_geopandas' | closed | 2024-07-25T05:57:53Z | 2024-07-25T11:33:19Z | https://github.com/gee-community/geemap/issues/2096 | [
"bug"
] | sgindeed | 1 |
Urinx/WeixinBot | api | 167 | Feature request: automatically accept friend requests | I would like a feature that automatically accepts friend requests, mainly for collecting payments... only after the friend request is accepted automatically can the other party send a red packet, so that I can integrate this into a no-signature payment interface. | open | 2017-03-15T01:24:59Z | 2017-03-15T01:24:59Z | https://github.com/Urinx/WeixinBot/issues/167 | [] | xuelainiao | 0 |
piskvorky/gensim | machine-learning | 3,484 | File "<string>", line 111, in finalize_options AttributeError: 'dict' object has no attribute '__NUMPY_SETUP__' when installing gensim 3.8.3 with pip install | Hi,
I'm getting the following error message when attempting to install gensim 3.8.3 with pip install (Python 3.9):
```
  File "C:\Users\XXX\AppData\Local\Temp\pip-build-env-aj3aaluh\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 111, in ensure_finalized
    self.finalize_options()
  File "<string>", line 111, in finalize_options
AttributeError: 'dict' object has no attribute '__NUMPY_SETUP__'
```
I have looked for a setup.py file that I could hack (fix to a similar reported bug) but couldn't find any... | open | 2023-07-17T07:03:01Z | 2023-07-17T13:15:14Z | https://github.com/piskvorky/gensim/issues/3484 | [] | bweill555 | 0 |
vi3k6i5/flashtext | nlp | 80 | Search Results Bug | Hey Team,
Let us say we have power, plant and power plant in our keyword processor and we run a search on the below text:
Text: The thermal power plant is situated in Germany.
Can some one tell me the result for this query ? | closed | 2019-05-13T06:07:25Z | 2020-02-24T20:12:43Z | https://github.com/vi3k6i5/flashtext/issues/80 | [] | koolcoder007 | 1 |
apachecn/ailearning | scikit-learn | 450 | Question about line 164 of "/src/py3.x/ml/4.NaiveBayes/bayes.py" | Quoting line 164 of that file: "# this can be understood as 1. the probability that the document is of class 'good' given that the word is in the vocabulary, or 2. over the whole space, the probability that the document both contains the vocabulary word and is of class 'good'"
Hello, I have a question about the meaning of p1Vec here. I believe it should be the probability that the word at the corresponding position (index) appears in the document, given that the document belongs to the 'good' class. | closed | 2018-09-28T03:53:25Z | 2018-10-07T13:48:51Z | https://github.com/apachecn/ailearning/issues/450 | [] | wtzhang95 | 1 |
python-gino/gino | asyncio | 226 | Limiting select query results (slices/pagination) | * GINO version: 0.7.3
* Python version: 3.6
* Operating System: Linux
### Description
Setting a limit (subset) for the results of a select query might be easily implemented server side with SQL's offset and limit.
SQLAlchemy makes use of Python's list slicing (`q[:50]`).
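For reference, the server-side equivalent in GINO would presumably be plain SQLAlchemy-core `limit`/`offset` on the query (the model name below is hypothetical):
```python
# page 3 with 50 rows per page
page, per_page = 3, 50
users = await User.query.limit(per_page).offset((page - 1) * per_page).gino.all()
```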
This can also be used for pagination (for example: http://flask-sqlalchemy.pocoo.org/2.3/api/#flask_sqlalchemy.BaseQuery.paginate) . | closed | 2018-05-21T09:45:21Z | 2018-05-25T05:26:51Z | https://github.com/python-gino/gino/issues/226 | [
"question"
] | aprilmay | 3 |
liangliangyy/DjangoBlog | django | 438 | Heroku live server application error | closed | 2020-12-07T08:28:00Z | 2021-06-18T08:57:57Z | https://github.com/liangliangyy/DjangoBlog/issues/438 | [] | crazeoni | 0 |
|
qubvel-org/segmentation_models.pytorch | computer-vision | 216 | SSL: CERTIFICATE_VERIFY_FAILED | Hi, When I was running the tutorial on cars segmentation (CamVid), I got the following error:
URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)>
The error occurs while downloading the pre-trained network.
I could download the net manually using the browser (where it provided an error but I just bypassed it).
and run the tutorial. | closed | 2020-05-28T20:52:44Z | 2024-12-02T21:49:18Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/216 | [
"Stale"
] | tariqul-islam | 7 |
jupyter-book/jupyter-book | jupyter | 2,051 | is there a way to have a collapsible/dropdown admonition start opened ? | ### Describe the bug
**context**
I'm trying to create admonitions that are collapsible, but start as opened
**expectation 1 **
I have tried to use the `{toggle}` directive as per <https://jupyterbook.org/en/stable/interactive/hiding.html#the-toggle-directive>, i.e. something like
```markdown
```{toggle} Click the button to reveal!
:show:
Some collapsible toggle content!
```
```
**bug 1 **
But I am getting this in the output html

Plus, btw, this does not render well in jupyterlab either:

**other attempts**
I was half expecting that adding a flag to an admonition could maybe work; but with e.g. this
```markdown
```{note} Collapsible content
:class: dropdown
:show:
Some collapsible toggle content!
```
```
the rendered collapsible still starts up as closed
**problem**
bottom line is I can't find a way to have a collapsible admonition that would start opened
in the event where this is unsupported, imho it would help for the doc to stress that fact, keeping users from wasting their time trying to guess if/how to achieve this :)
### Reproduce the bug
just use one of the above cells
### List your environment
```bash
$ jupyter-book --version
Jupyter Book : 0.15.1
External ToC : 0.3.1
MyST-Parser : 0.18.1
MyST-NB : 0.17.2
Sphinx Book Theme : 1.0.1
Jupyter-Cache : 0.6.1
NbClient : 0.7.4
``` | open | 2023-08-28T10:59:52Z | 2025-03-06T14:08:00Z | https://github.com/jupyter-book/jupyter-book/issues/2051 | [
"bug"
] | parmentelat | 2 |
huggingface/datasets | tensorflow | 7,016 | `drop_duplicates` method | ### Feature request
`drop_duplicates` method for huggingface datasets (similar in simplicity to the `pandas` one)
### Motivation
Ease of use
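Until something native exists, a workaround that seems workable (it round-trips through pandas, so the data is loaded into memory; `ds` is your existing Dataset) is:
```python
from datasets import Dataset

deduped = Dataset.from_pandas(ds.to_pandas().drop_duplicates(), preserve_index=False)
```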
### Your contribution
I don't think i am good enough to help | open | 2024-07-01T09:01:06Z | 2024-07-20T06:51:58Z | https://github.com/huggingface/datasets/issues/7016 | [
"duplicate",
"enhancement"
] | MohamedAliRashad | 1 |
deepfakes/faceswap | machine-learning | 615 | Docker image is asking for run_jupyter.sh which does not exist | **Describe the bug**
The docker gpu image build file is running `/run_jupyter.sh` at the end, but there is no such file in the repo or in the `tensorflow/tensorflow:latest-gpu-py3` image.
**To Reproduce**
Steps to reproduce the behavior:
1. `docker build -t deepfakes-gpu -f Dockerfile.gpu .`
2. `nvidia-docker run --rm -it -p 8888:8888 \
--hostname faceswap-gpu --name faceswap-gpu \
-v /opt/faceswap:/srv \
deepfakes-gpu`
3. then should see the error
**Expected behavior**
`run_jupyter.sh` exists
**Screenshots**
None
**Desktop (please complete the following information):**
- manjaro 4.14
- chromium
- 61 | closed | 2019-02-19T03:38:34Z | 2019-03-01T17:05:03Z | https://github.com/deepfakes/faceswap/issues/615 | [] | FrontMage | 5 |
flairNLP/flair | nlp | 2,713 | About TransformerWordEmbeddings class | Hi, I am using the following line of code to create contextualized embeddings from a biomedical roberta model.
```
TransformerWordEmbeddings("PlanTL-GOB-ES/roberta-base-biomedical-clinical-es")
```
However, when executing the code, I get the following error:
```
File "/home/x/work/clinical-nested-ner-mlc/venv/lib/python3.8/site-packages/flair/embeddings/base.py", line 652, in _extract_token_embeddings
assert subword_start_idx < subword_end_idx <= sentence_hidden_state.size()[1]
AssertionError
```
This does not occur with other Hugging Face models. What could be wrong with this model? Here is the link to the repository:
https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es
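For completeness, a minimal script that reproduces my setup looks roughly like this (the sentence here is a made-up placeholder, not the exact clinical text that triggers the assertion):

```python
from flair.data import Sentence
from flair.embeddings import TransformerWordEmbeddings

embeddings = TransformerWordEmbeddings("PlanTL-GOB-ES/roberta-base-biomedical-clinical-es")

# placeholder sentence; my real input is Spanish clinical text
sentence = Sentence("El paciente presenta fiebre y dolor abdominal.")
embeddings.embed(sentence)  # the AssertionError is raised here, inside _extract_token_embeddings

for token in sentence:
    print(token.text, token.embedding.shape)
```

Thanks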
Thanks | closed | 2022-04-07T19:55:54Z | 2022-09-09T02:02:25Z | https://github.com/flairNLP/flair/issues/2713 | [
"question",
"wontfix"
] | matirojasg | 5 |
pennersr/django-allauth | django | 3,377 | Login via Google issue | I connected Allauth and Google provider.
I expect the user to be redirected to Google to select an account every time they click the "Log In via Google" button.
But in my case the user can't switch between Google accounts or truly log out of the account:
1. The user logs in via Google account 1.
2. The user logs out.
3. The user clicks the "Log In via Google" button and is immediately logged into the same Google account 1, without being asked which Google account they want to log in with.
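The only lead I have found so far is forcing Google's account chooser through the provider settings; a sketch of what I mean is below (untested on my side, and I don't think it addresses the logout part):

```python
# settings.py (untested sketch): "prompt": "select_account" asks Google to show
# the account picker on every login instead of silently reusing the last session.
SOCIALACCOUNT_PROVIDERS = {
    "google": {
        "SCOPE": ["profile", "email"],
        "AUTH_PARAMS": {"prompt": "select_account"},
    }
}
```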
Does anyone know how to solve this problem? | closed | 2023-08-12T09:45:12Z | 2024-05-17T17:37:10Z | https://github.com/pennersr/django-allauth/issues/3377 | [] | darbre | 5 |
pywinauto/pywinauto | automation | 822 | How to access buttons in ToolBarWindow32 control? | I have a piece of software whose toolbar icon buttons have the class "ToolBarWindow32", but I can't find a corresponding control in pywinauto for this class; `toolbar` is the closest one. How can I use `toolbar` to access the buttons of a ToolBarWindow32 control? The toolbar in the software has some big icons, and when the mouse hovers over one it becomes a raised button that can be pressed.
I also tried different spy tools.
ViewWizard could only access the whole toolbar (its highlight box is drawn around the entire control).
UIASpy could list the toolbar and its sub-buttons; each button has corresponding properties like text, button count, etc.
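Since UIASpy can see the buttons, I assume pywinauto's UIA backend should be able to reach them as children as well. This is roughly what I have been trying (the window title and indices are placeholders, not from the real application), but I am not sure it is the intended way:

```python
from pywinauto import Application

# placeholders: replace the title pattern and indices with the real ones
app = Application(backend="uia").connect(title_re=".*MyApp.*")
main = app.window(title_re=".*MyApp.*")

toolbar = main.child_window(control_type="ToolBar", found_index=0)
toolbar.child_window(control_type="Button", found_index=0).click_input()
```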
How can I access the single button in this "ToolBarWindow32" control? | closed | 2019-09-17T00:41:30Z | 2019-09-23T11:53:50Z | https://github.com/pywinauto/pywinauto/issues/822 | [
"question"
] | eastam | 5 |
aio-libs/aiomysql | asyncio | 503 | Error raised when creating a cursor when cursorclass is set in connection | open | 2020-06-15T13:58:28Z | 2022-07-11T01:05:02Z | https://github.com/aio-libs/aiomysql/issues/503 | [
"docs"
] | Gugu7264 | 3 |
|
ultralytics/ultralytics | machine-learning | 19,248 | Error during Exporting my custom model.pt to tflite | I get the error below when trying this:
from ultralytics import YOLO
model = YOLO("/home/gpu-server/runs/detect/city_combined_dataset_training4_5122/weights/best_latest_improved_3.pt")
model.export(format="tflite")
I also tried it from the command line and got the same error:
Ultralytics 8.3.75 🚀 Python-3.10.11 torch-2.4.1+cu118 CPU (12th Gen Intel Core(TM) i5-12400)
Model summary (fused): 168 layers, 3,009,548 parameters, 0 gradients, 8.1 GFLOPs
PyTorch: starting from '/home/gpu-server/runs/detect/city_combined_dataset_training4_5122/weights/best_latest_improved_3.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 24, 8400) (5.9 MB)
requirements: Ultralytics requirement ['protobuf>=5'] not found, attempting AutoUpdate...
requirements: ❌ AutoUpdate skipped (offline)
TensorFlow SavedModel: starting export with tensorflow 2.16.2...
ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.34...
ONNX: export success ✅ 0.6s, saved as '/home/gpu-server/runs/detect/city_combined_dataset_training4_5122/weights/best_latest_improved_3.onnx' (11.8 MB)
TensorFlow SavedModel: starting TFLite export with onnx2tf 1.17.8...
ERROR: The trace log is below.
Traceback (most recent call last):
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/onnx2tf/utils/common_functions.py", line 288, in print_wrapper_func
result = func(*args, **kwargs)
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/onnx2tf/utils/common_functions.py", line 361, in inverted_operation_enable_disable_wrapper_func
result = func(*args, **kwargs)
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/onnx2tf/ops/Conv.py", line 246, in make_node
input_tensor = get_padding_as_op(
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/onnx2tf/utils/common_functions.py", line 2009, in get_padding_as_op
return tf.pad(x, padding)
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/gpu-server/anaconda3/envs/pytorch-training/lib/python3.10/site-packages/keras/src/backend/common/keras_tensor.py", line 138, in __tf_tensor__
raise ValueError(
ValueError: A KerasTensor cannot be used as input to a TensorFlow function. A KerasTensor is a symbolic placeholder for a shape and dtype, used when constructing Keras Functional models or Keras Functions. You can only use it as input to a Keras layer or a Keras operation (from the namespaces `keras.layers` and `keras.operations`). You are likely doing something like:
```
x = Input(...)
...
tf_fn(x) # Invalid.
```
What you should do instead is wrap `tf_fn` in a layer:
```
class MyLayer(Layer):
def call(self, x):
return tf_fn(x)
x = MyLayer()(x)
```
ERROR: input_onnx_file_path: /home/gpu-server/runs/detect/city_combined_dataset_training4_5122/weights/best_latest_improved_3.onnx
ERROR: onnx_op_name: /model.0/conv/Conv
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
An exception has occurred, use %tb to see the full traceback.
SystemExit: 1 | open | 2025-02-14T12:17:02Z | 2025-02-15T08:14:11Z | https://github.com/ultralytics/ultralytics/issues/19248 | [
"bug",
"detect",
"exports"
] | Vinaygoudasp7 | 2 |
PaddlePaddle/PaddleHub | nlp | 1,439 | more installation suggestions in docs | When I try to install the GPU version of paddlepaddle, there is no command for it in the docs, so I have to search for the related command on another web page, which is time-consuming for beginners.
```shell
!pip install --upgrade paddlepaddle -i https://mirror.baidu.com/pypi/simple
!pip install --upgrade paddlehub -i https://mirror.baidu.com/pypi/simple
```
I think there should be more installation suggestions in the readme. If you approve, I will try to fix it with this small improvement:
```shell
# install paddlepaddle with gpu
# !pip install --upgrade paddlepaddle-gpu -i https://mirror.baidu.com/pypi/simple
# or install paddlepaddle with cpu
!pip install --upgrade paddlepaddle -i https://mirror.baidu.com/pypi/simple
# install paddlehub
!pip install --upgrade paddlehub -i https://mirror.baidu.com/pypi/simple
```
| open | 2021-06-02T12:23:14Z | 2021-06-03T09:13:09Z | https://github.com/PaddlePaddle/PaddleHub/issues/1439 | [] | wj-Mcat | 1 |
BeanieODM/beanie | asyncio | 674 | [BUG] Error when a nested field name equals a `string` method | **Describe the bug**
When accessing a nested field in a beanie operator, an error is thrown when that nested field's name is equal to one of the python builtin `string` methods, like `center`, `index`, `strip`, `split` etc.
The error occurs when beanie tries to split the path into its segments.
```
File "C:\...\beanie\odm\utils\relations.py", line 21, in convert_ids
k_splitted = k.split(".")
^^^^^^^
AttributeError: 'builtin_function_or_method' object has no attribute 'split'
```
Example for better understanding:
```python
from pydantic import BaseModel
from beanie import Document
from beanie.operators import Near

class Geo(BaseModel):
    type: str
    coordinates: list[float]

class Location(BaseModel):
    center: Geo
    info: str

class Point(Document):
    location: Location

await Point.find(Near(Point.location.center, 30, 34))
```
Apparently, the parent of the nested field (e.g. `Point.location`) is a string at that point in the code, and the dot operator with a `string` method name (e.g. `center`) makes Python resolve `Point.location.center` to the built-in method instead of a string path.
**To Reproduce**
See snippet above.
**Expected behavior**
The nested field name does not matter and is not treated as a `string` method.
**Additional context**
This problem only occurs with nested field names, not in the first level inside the tree. (At least, as far as I tried...)
I think it might be connected to two other issues I found here complaining about nested field aliases not being used correctly (#124, #369), both of which were auto-closed after some time. I might be wrong though.
In the meantime, I'll have to rename my fields or write `"points.location.center"` instead, which is not very nice regarding stuff like LSP and completion.
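Concretely, the stopgap looks like this (a sketch against the simplified `Point` model above; the real dotted path depends on the actual document layout):

```python
from beanie.operators import Near

# workaround: pass the dotted path as a plain string so the field expression
# never runs into the str.center / str.split name clash
points = await Point.find(Near("location.center", 30, 34)).to_list()
```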
Happy to hear from you about this :)
| closed | 2023-08-24T14:49:58Z | 2023-10-24T01:48:31Z | https://github.com/BeanieODM/beanie/issues/674 | [
"Stale"
] | akriese | 3 |
miguelgrinberg/python-socketio | asyncio | 294 | How to connect with https? | closed | 2019-05-02T18:41:47Z | 2019-05-02T19:06:23Z | https://github.com/miguelgrinberg/python-socketio/issues/294 | [] | renatonerijr | 0 |