repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---
sammchardy/python-binance | api | 970 | Client Data | hi ... in the last part of the code I'm trying to get my account balance, but it's not working and I don't know what the problem is.
I get this error:
info = Client.get_account()
TypeError: get_account() missing 1 required positional argument: 'self'
import websocket, json, pprint, talib, numpy
from binance.client import Client
from binance.enums import *
import config
RSI_PERIOD = 14
RSI_OVERBOUGHT = int(input(" Enter The Over Bought Value :"))
RSI_OVERSOLD = int(input("Enter The Over Soled Value :"))
TRADE_SYMBOL = input("The Pair U Want To Trade In Apper Case : ")
TRADE_QUANTITY = 0.05
STOP_LOSS = 30
API_KEY = config.API_KEY
API_SECRET = config.API_SECRET
The_Socket_Data = input("Enter The Pair U Want To Trade in Lower Case: ")
The_Frame = input("Enter The Time Frame U Want Trade In: ")
SOCKET = "wss://stream.binance.com:9443/ws/"+The_Socket_Data+"@kline_"+The_Frame+"m"
client = Client(API_KEY, API_SECRET)
print(Client)
info = Client.get_account()
print(info)
| open | 2021-07-19T23:00:29Z | 2021-09-01T13:12:45Z | https://github.com/sammchardy/python-binance/issues/970 | [] | alooosh111 | 1 |
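The error in the issue above is a general Python mistake rather than a binance one: `get_account()` is called on the `Client` class itself instead of the `client` instance created two lines earlier (note the lowercase name). A minimal stand-alone sketch of the same failure and its fix, with a dummy class standing in for the real binance `Client`:

```python
class Client:
    """Dummy stand-in for binance.client.Client."""

    def __init__(self, api_key, api_secret):
        self.api_key = api_key
        self.api_secret = api_secret

    def get_account(self):
        return {"balances": [], "key": self.api_key}


client = Client("API_KEY", "API_SECRET")

try:
    info = Client.get_account()   # capital C: the class itself, so 'self' is missing
except TypeError as err:
    msg = str(err)                # "...missing 1 required positional argument: 'self'"

info = client.get_account()       # lowercase c: the instance created above
print(info)
```

With the real library, the same change applies to the original snippet: `print(Client)` and `info = Client.get_account()` should both use the lowercase `client` object.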
flairNLP/flair | pytorch | 3,546 | [Question]: Citeable Information on Word Embedding Origins | ### Question
Hi there,
I was wondering where the vector models of the WordEmbeddings and FastTextEmbeddings classes are originally from? Scanning the issues and questions here on Github, there seems to be the assumption that they are the models trained by Fasttext (https://fasttext.cc/docs/en/crawl-vectors.html and https://fasttext.cc/docs/en/pretrained-vectors.html). Is that actually true? Or were they trained especially for FLAIR?
| closed | 2024-09-19T16:59:54Z | 2025-03-11T04:52:10Z | https://github.com/flairNLP/flair/issues/3546 | [
"question"
] | Alina-El | 1 |
biolab/orange3 | data-visualization | 7,017 | Resize Feature Statistics Widget Window | Hi all,
After changing the colour of the distribution I cannot resize the window of the Feature Statistics widget because the legend is too long. On my Mac I cannot get to the bottom of the window. Do you have any suggestions?
<img width="1507" alt="Image" src="https://github.com/user-attachments/assets/83ba3ac5-8697-45d5-bccd-61d106e46d45" />
| open | 2025-02-04T18:18:50Z | 2025-02-07T13:17:07Z | https://github.com/biolab/orange3/issues/7017 | [
"bug report"
] | TheItalianDataGuy | 2 |
AntonOsika/gpt-engineer | python | 655 | Ask human for what is not working as expected in a loop, and feed it into LLM to fix the code, until the human is happy | closed | 2023-09-02T18:27:04Z | 2024-04-25T17:43:34Z | https://github.com/AntonOsika/gpt-engineer/issues/655 | [
"good first issue"
] | AntonOsika | 4 |
|
xlwings/xlwings | automation | 1,917 | Split excel sheets to new workbooks fails | #### OS: WSL2/Linux
#### Versions of xlwings and Python: 0.27.7 and Python 3.9
I am trying to create new workbooks from worksheets of a workbook. Eventually I want to integrate this step in a Lambda function that runs on AWS. Testing the minimal code below on WSL2 and AWS Lambda, I ran into the same error shown below. This function worked with no errors on MacOS Monterey (v12.2.1).
```
ERROR:root:'NoneType' object has no attribute 'apps'
Traceback (most recent call last):
File "/mnt/c/Users/kbuddika/Documents/Alamar_Repos/Benchling-Automation/test.py", line 33, in <module>
sheet_names = split_excel_file(excel_file="sigTune_data_767dcc9fbaa91d83.xlsx")
File "/mnt/c/Users/kbuddika/Documents/Alamar_Repos/Benchling-Automation/test.py", line 15, in split_excel_file
with xw.App(visible=False) as app:
File "/home/kbuddika/miniconda3/lib/python3.9/site-packages/xlwings/main.py", line 279, in __init__
self.impl = engines.active.apps.add(
AttributeError: 'NoneType' object has no attribute 'apps'
```
Here is the code I am trying.
```python
import xlwings as xw
import logging
def split_excel_file(excel_file: str) -> list:
try:
sheet_names = list()
with xw.App(visible=False) as app:
wb = app.books.open(excel_file)
for sheet in wb.sheets:
sheet_names.append(f"{sheet.name}.xlsx")
wb_new = app.books.add()
sheet.copy(after=wb_new.sheets[0])
wb_new.sheets[0].delete()
wb_new.save(f"{sheet.name}.xlsx")
wb_new.close()
return sheet_names
except Exception as error:
logging.error(error)
raise
if __name__ == "__main__":
sheet_names = split_excel_file(excel_file="myexcel_file.xlsx")
print(sheet_names)
```
Can someone please let me know a potential solution to get this integrated on AWS Lambda? | closed | 2022-05-17T19:36:14Z | 2022-05-18T06:57:35Z | https://github.com/xlwings/xlwings/issues/1917 | [] | jkkbuddika | 1 |
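A likely cause of the traceback above: xlwings drives a locally installed Excel application, which exists only on Windows and macOS, so on WSL2/Linux and AWS Lambda there is no Excel engine to attach to (`engines.active` is `None`) and the `AttributeError` follows. For splitting plain `.xlsx` files with no Excel dependency, a hedged openpyxl-based sketch (this copies cell values only; styles, charts, and formatting are not carried over):

```python
import openpyxl


def split_excel_file(excel_file: str) -> list:
    """Write each worksheet of excel_file to its own workbook (cell values only)."""
    sheet_names = []
    src = openpyxl.load_workbook(excel_file, data_only=True)
    for name in src.sheetnames:
        out = openpyxl.Workbook()
        out.remove(out.active)              # drop the default empty sheet
        dest = out.create_sheet(title=name)
        for row in src[name].iter_rows(values_only=True):
            dest.append(row)
        out.save(f"{name}.xlsx")
        sheet_names.append(f"{name}.xlsx")
    return sheet_names


# Tiny demo: build a two-sheet workbook, then split it.
demo = openpyxl.Workbook()
demo.active.title = "Sheet A"
demo.active.append(["x", 1])
demo.create_sheet("Sheet B").append(["y", 2])
demo.save("src.xlsx")
print(split_excel_file("src.xlsx"))   # ['Sheet A.xlsx', 'Sheet B.xlsx']
```

Since openpyxl is pure Python, this version also runs unmodified inside a Lambda function.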
jupyter/nbviewer | jupyter | 353 | Feature request: browse GitHub repo branches | Currently it's possible to type in a GitHub username and browse their repositories. It's then possible to browse a repository to look for notebooks.
In some cases I'd like to be able to access a notebook that is in a branch other than master. For instance, in many cases it would be particularly useful to access the gh-pages branch of a repository that stores a GitHub Pages site.
So far as I can see there's no UI to access the branches of a repository, though it is possible to access a branch by manually altering the URL. It would be nice to be able to browse branches.
| closed | 2014-10-10T05:11:06Z | 2014-11-25T18:20:24Z | https://github.com/jupyter/nbviewer/issues/353 | [] | claresloggett | 1 |
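For reference, the manual-URL workaround mentioned above follows nbviewer's GitHub path scheme, where a `tree/<ref>` segment after the repository selects a branch. A small helper sketch (the exact path layout is an assumption based on nbviewer's current GitHub URLs, and the hostname has changed over the years):

```python
def nbviewer_tree_url(user, repo, ref="master", path=""):
    """Build an nbviewer browsing URL for a given branch (ref) of a GitHub repo."""
    return f"https://nbviewer.org/github/{user}/{repo}/tree/{ref}/{path}"


# Browse the gh-pages branch instead of master:
print(nbviewer_tree_url("someuser", "somerepo", ref="gh-pages"))
```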
amidaware/tacticalrmm | django | 1,713 | Add script run time and exit code to Script Editor's script tester | Add these to script editor header

| closed | 2023-12-22T11:30:23Z | 2024-01-26T00:05:32Z | https://github.com/amidaware/tacticalrmm/issues/1713 | [
"enhancement"
] | silversword411 | 1 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 95 | qlora training with fp16 produces NaN loss | ### Pre-submission checklist
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), searched the existing issues, and found no similar problem or solution.
- [x] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for solutions in the corresponding project.
### Issue type
Model training and fine-tuning
### Base model
LLaMA-2-7B
### Operating system
Linux
### Detailed description of the problem
When running LoRA SFT on chinese-llama-2-7b, using LlamaTokenizer produces NaN eval loss, but switching to AutoTokenizer (fast tokenizer) does not. It seems related to how the fast tokenizer encodes \</s\>. Was the fast tokenizer used for pre-training?
### Dependencies (required for code-related issues)
```
peft 0.4.0
torch 1.13.1
torchaudio 0.13.1
torchvision 0.14.1
transformers 4.31.0
```
### Logs or screenshots
```
# Paste run logs here
```
| closed | 2023-08-07T19:47:01Z | 2023-08-12T14:41:04Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/95 | [] | yanghh2000 | 4 |
gee-community/geemap | streamlit | 1606 | add shown parameter to add_styled_vector |
### Description
By default, a vector layer is shown in the geemap display. Would it be possible to add a `shown` parameter so the layer can be hidden when added? Thanks
| closed | 2023-07-06T02:11:26Z | 2023-07-06T14:48:54Z | https://github.com/gee-community/geemap/issues/1606 | [
"Feature Request"
] | ccsuehara | 2 |
modelscope/data-juicer | data-visualization | 46 | support text-based interleaved multimodal data as an intermediate format | closed | 2023-10-25T09:08:41Z | 2023-10-26T06:16:55Z | https://github.com/modelscope/data-juicer/issues/46 | [
"invalid"
] | HYLcool | 0 |
|
apify/crawlee-python | web-scraping | 1,110 | bug: Playwright template fails with default settings | Currently if there is `Playwright` actor created through crawlee cli and run in template generated dockerfile then it will fail with:
`[pid=28][err] [0314/101922.720829:ERROR:zygote_host_impl_linux.cc(100)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.`
Template default is chromium. Either add `--no-sandbox` argument to the Playwright crawler in the template or update the template dockerfile to allow running without it. | open | 2025-03-20T13:11:37Z | 2025-03-24T13:02:58Z | https://github.com/apify/crawlee-python/issues/1110 | [
"bug",
"t-tooling"
] | Pijukatel | 0 |
JaidedAI/EasyOCR | machine-learning | 388 | How change to my ready custom network | Hello.
I have trained a None-ResNet-BiLSTM-CTC network with the classes 0123456789abcdefghijklmnopqrstuvwxyzñ- in the original repo, using the data generator mentioned in this repository.
How could I use my own .pth file in this repository? I am working on it but I can't get it to work. | closed | 2021-03-05T12:20:14Z | 2021-07-02T08:56:13Z | https://github.com/JaidedAI/EasyOCR/issues/388 | [] | mjack3 | 3 |
neuml/txtai | nlp | 804 | Add txtai agents 🚀 | This will add agents to txtai. We'll build on the [Transformers Agents framework](https://huggingface.co/docs/transformers/en/agents) in combination with [txtai's LLM pipeline](https://neuml.github.io/txtai/pipeline/text/llm/).
The goals of this framework are:
- Easy-to-use
- Integrate with pipelines, workflows and embeddings databases
- Support all LLM backends (transformers, llama.cpp, API integrations)
- Support use cases like self-correcting RAG, multi-source RAG and real-time data integration
The following issues cover this change.
- [x] #808
- [x] #809
- [x] #810
- [x] #811
| closed | 2024-10-29T10:47:56Z | 2024-11-18T18:32:05Z | https://github.com/neuml/txtai/issues/804 | [] | davidmezzetti | 0 |
ranaroussi/yfinance | pandas | 2158 | Change `Screener.set_predefined_body` and `Screener.set_body` to respond with the class | I want to change `Screener.set_predefined_body` and `Screener.set_body` to return the instance, so that calls can be chained.
This would reduce
```python
s = yf.Screener()
s.set_predefined_body("day_gainers")
r=s.response
```
to
```python
r = yf.Screener().set_predefined_body("day_gainers").response
```
or to be able to set it on initialisation
```python
r = yf.Screener(body="day_gainers").response
```
which I think is cleaner and more concise. I would also suggest editing the docs of `predefined_bodies`, as they appear above `set_predefined_body`, to include how to set the Screener to one of them | closed | 2024-11-28T15:58:17Z | 2024-12-08T17:43:24Z | https://github.com/ranaroussi/yfinance/issues/2158 | [] | R5dan | 1 |
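The chaining behaviour requested here is the standard builder pattern: each setter ends with `return self`. A stdlib-only sketch of the shape (the fetch is faked with a placeholder string; this is not yfinance's actual implementation):

```python
class Screener:
    def __init__(self, body=None):
        self.body = None
        self.response = None
        if body is not None:
            self.set_predefined_body(body)

    def set_predefined_body(self, name):
        self.body = {"predefined": name}
        self.response = f"<results for {name}>"   # stand-in for the real request
        return self                                # returning self enables chaining


# Both proposed call styles now work:
r1 = Screener().set_predefined_body("day_gainers").response
r2 = Screener(body="day_gainers").response
print(r1 == r2)   # True
```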
explosion/spaCy | machine-learning | 13,059 | Dependency sentence segmenter handles newlines inconsistently between languages | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
[Colab notebook demonstrating problem](https://colab.research.google.com/drive/14FFYKqjRVRbN7aAVmHUYEao9CwahY0We?usp=sharing)
When parsing a sentence that contains newlines, the Italian parser sometimes assigns the newline to a sentence by itself, for example:
>Ma regolamenta solo un settore, a differenza dell’azione a largo raggio dell’Inflation Act. \nI tentativi di legiferare per stimolare l’industria non hanno avuto molto successo.
Produces 3 sentences:
```
'Ma regolamenta solo un settore, a differenza dell’azione a largo raggio dell’Inflation Act (dalla sanità all’industria pesante).'
'\n'
'I tentativi di legiferare per stimolare l’industria non hanno avuto molto successo.'
```
There are various experiments with different combinations of punctuation in the notebook.
Looking at the tokens and their `is_sent_start` property, it seems under some circumstances the `\n` and `I` tokens are both assigned as the start of a new sentence.
I have not been able to cause this problem with `en_core_web_sm`, which always correctly identifies 2 sentences.
Although I understand that sentence segmentation based on the dependency parser is probabilistic and not always correct, it seems there's some inconsistency between languages here, and I don't think it would ever be correct for a whitespace token to be assigned as the start of a sentence.
## Your Environment
- **spaCy version:** 3.6.1
- **Platform:** Linux-5.15.120+-x86_64-with-glibc2.35
- **Python version:** 3.10.12
- **Pipelines:** it_core_news_sm (3.6.0), en_core_web_sm (3.6.0) | open | 2023-10-11T15:25:36Z | 2023-10-13T11:17:23Z | https://github.com/explosion/spaCy/issues/13059 | [
"lang / it",
"feat / senter"
] | freddyheppell | 3 |
pywinauto/pywinauto | automation | 453 | ElementAmbiguousError when using backend win32 | Hi,
I got an ElementAmbiguousError when I used the win32 backend.
Full traceback:
```
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
appwindow.print_control_identifiers(filename='identifiers.txt')
File "build\bdist.win-amd64\egg\pywinauto\application.py", line 584, in print_control_identifiers
this_ctrl = self.__resolve_control(self.criteria)[-1]
File "build\bdist.win-amd64\egg\pywinauto\application.py", line 245, in __resolve_control
criteria)
File "build\bdist.win-amd64\egg\pywinauto\timings.py", line 419, in wait_until_passes
func_val = func(*args)
File "build\bdist.win-amd64\egg\pywinauto\application.py", line 190, in __get_ctrl
dialog = self.backend.generic_wrapper_class(findwindows.find_element(**criteria[0]))
File "build\bdist.win-amd64\egg\pywinauto\findwindows.py", line 98, in find_element
raise exception
ElementAmbiguousError: There are 2 elements that match the criteria {'process': 6232, 'backend': u'win32'}
```
The code that causes this error is very simple; it seems it is the app being tested that causes it. The app I'm testing opens a WinForm on top of another WinForm, and that's the reason.
```
import pywinauto
app = pywinauto.Application()
appconnect = app.connect(path='TestApp.exe')
appwindow = appconnect.window()
appwindow.print_control_identifiers(filename='identifiers.txt')
```
I was also able to fix this by editing findwindows.py: I commented one if statement out of the find_element function, lines 89 - 98.
```
def find_element(**kwargs):
"""
Call find_elements and ensure that only one element is returned
Calls find_elements with exactly the same arguments as it is called with
so please see :py:func:`find_elements` for the full parameters description.
"""
elements = find_elements(**kwargs)
if not elements:
raise ElementNotFoundError(kwargs)
# if len(elements) > 1:
# exception = ElementAmbiguousError(
# "There are {0} elements that match the criteria {1}".format(
# len(elements),
# six.text_type(kwargs),
# )
# )
#
# exception.elements = elements
# raise exception
return elements[0]
```
So my question is: is this if statement necessary, or should we comment it out on the master branch as well? | closed | 2017-12-29T08:36:58Z | 2018-01-16T14:53:22Z | https://github.com/pywinauto/pywinauto/issues/453 | [
"question"
] | nimuston | 2 |
jazzband/django-oauth-toolkit | django | 924 | Can't open (actually can't find) the django-oauth-toolkit Google group? |
Can't open (actually can't find or can't access) the django-oauth-toolkit Google group?

| closed | 2021-02-10T04:20:53Z | 2021-03-18T13:28:07Z | https://github.com/jazzband/django-oauth-toolkit/issues/924 | [
"question"
] | azataiot | 3 |
gradio-app/gradio | machine-learning | 10,453 | Concurrency is not working properly, regardless of concurrency_limit | ### Describe the bug
We are encountering an issue where Gradio's concurrency handling is not working as expected. Despite setting both `max_size` and `default_concurrency_limit`, the app does not scale beyond 2 concurrent sessions.
### Steps to Reproduce:
1. Launch the Gradio app with 4 instances.
2. Send a chat message to all 4 instances simultaneously.
3. Observe that only 2 instances process the messages while the other 2 are waiting, regardless of the settings for `max_size` and `default_concurrency_limit`.
### Expected Behavior:
- All 4 instances should process the chat messages concurrently, with no queuing, as defined by the configuration.
### Actual Behavior:
- Only some of the 4 instances process the messages concurrently. The others do not process the messages and are not queueing; they are just loading, even though the settings for `max_size` and `default_concurrency_limit` are in place.
### Additional Information:
- Unfortunately, the issue is not always reproducible with the minimal example, but this behaviour always occurs in our production setup.
- The issue occurs regardless of the number of devices used. We tested on 4 different devices to rule out browser-specific issues.
- We noticed that using 4 private windows in a browser can make the concurrency work sometimes, but it fails when using 4 normal browser windows.
- It seems like having the app open also consumes a "resource". We observed a behaviour where 1 instance was loading, and as soon as we closed another instance, the first one loaded successfully.
### Environment:
- Gradio version: 5.13.1
- Python version: 3.12
- OS: macOS 14.4 and Ray
- Browser: Tested on Safari and Firefox
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import time
import gradio as gr
def wait(_, __, a):
time.sleep(15)
return "done"
with gr.Blocks() as demo:
a = gr.State()
gr.ChatInterface(fn=wait, additional_inputs=[a])
demo.queue(max_size=100, default_concurrency_limit=10).launch()
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.13.1
gradio_client version: 1.6.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.7
ffmpy: 0.5.0
gradio-client==1.6.0 is not installed.
httpx: 0.28.1
huggingface-hub: 0.28.0
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.2
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.3
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.28.0
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
Blocking usage of gradio | closed | 2025-01-28T18:38:24Z | 2025-01-31T21:30:01Z | https://github.com/gradio-app/gradio/issues/10453 | [
"bug"
] | GeorgSchenzel | 6 |
vanna-ai/vanna | data-visualization | 514 | Add a delete button for query result history on the screen | **Describe the solution you'd like**
Can you add a delete button for the query result history on the screen?
Or point me to how to do it; I could also implement it myself.
| open | 2024-06-24T05:06:03Z | 2024-06-24T05:06:03Z | https://github.com/vanna-ai/vanna/issues/514 | [] | liushuang393 | 0 |
amdegroot/ssd.pytorch | computer-vision | 156 | Only 2 feature map size shrinkages in extra_layer | Hi, I printed the layers in extra_layer. I think there should be 4 feature map shrinkages according to the paper, but found only 2.
```
[Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1)),
Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),
Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1)),
Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),
Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1)),
Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1)), ◀
Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1)),
Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1)) ◀]
``` | closed | 2018-04-29T04:09:52Z | 2018-04-29T04:12:44Z | https://github.com/amdegroot/ssd.pytorch/issues/156 | [] | Fangyh09 | 1 |
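A clarification that resolves the question: a 3x3 convolution with stride 1 and no padding also shrinks the feature map, so the two stride-1 layers marked with arrows shrink it too, giving four shrinkages in total. Applying the standard conv output-size formula to the extra layers, assuming SSD300's 19x19 input (the conv7 feature map):

```python
def conv_out(size, kernel, stride, padding):
    # floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1


size = 19                                                # conv7 feature map in SSD300
layers = [(3, 2, 1), (3, 2, 1), (3, 1, 0), (3, 1, 0)]    # the four 3x3 convs listed above
sizes = []
for kernel, stride, padding in layers:
    size = conv_out(size, kernel, stride, padding)
    sizes.append(size)

print(sizes)   # [10, 5, 3, 1]: four shrinkages, matching the paper
```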
jina-ai/serve | deep-learning | 5280 | make gRPC DocumentArrayProto in Java | I want to use the gRPC protocol with a language other than Python,
so I followed the instructions (https://docs.jina.ai/fundamentals/client/third-party-client/):
"Download the two proto definition files: jina.proto and docarray.proto from [github](https://github.com/jina-ai/jina/tree/master/jina/proto) (be sure to use the latest release branch)
Compile them with [protoc](https://grpc.io/docs/protoc-installation/) and precise to which programming language you want to compile them.
Add the generated files to your project and import them in your code.
You should finally be able to communicate with your Flow using the gRPC protocol. You can find more information on the gRPC message and service that you can use to communicate in the [Protobuf documentation](https://docs.jina.ai/proto/docs/).
"
I generated the Java classes from the proto files.
But the instructions didn't say how to build the proto DocumentArrayProto!
Docarray.DocumentArrayProto da = Docarray.DocumentArrayProto.parseFrom("byte array here")
What is the "byte array here" for? What's the format of the string?
Or should I use the builder? And how?
Please show me an example of how to build the DocumentArrayProto
with the given data, like:
text
image
tensor
...
Thanks!
| closed | 2022-10-15T13:45:26Z | 2022-10-21T06:19:03Z | https://github.com/jina-ai/serve/issues/5280 | [] | fishfl | 5 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,693 | The problem of style transfer | Hello, author.
I want to use my own dataset to perform some style transfer tasks, such as converting land scenes into an underwater style. However, I only want to transfer the style. But when I was running my own dataset, I found that besides the style being transferred, the scenery in the pictures also changes (perhaps because the scenery in the land photos is different from that at the bottom of the water). How can I keep the scenery in the pictures unchanged while making the environment look like it's underwater? | open | 2025-03-19T00:47:55Z | 2025-03-19T00:47:55Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1693 | [] | zhangjy328 | 0 |
RobertCraigie/prisma-client-py | pydantic | 80 | Add support for overwriting datasources | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma supports dynamically overriding datasources:
```ts
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient({
datasources: {
db: {
url: 'file:./dev_qa.db',
},
},
})
```
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We could make a slight improvement to this API by making it singular and removing the requirement to name the datasource. Prisma chose the above API for forwards compatibility, I am okay with having to make changes to this in the future.
```py
from prisma import Client
client = Client(
datasource={
'url': 'file:./dev_qa.db',
},
)
```
| closed | 2021-10-15T08:47:05Z | 2021-10-16T20:11:02Z | https://github.com/RobertCraigie/prisma-client-py/issues/80 | [
"kind/feature"
] | RobertCraigie | 0 |
jschneier/django-storages | django | 871 | s3boto I/O operation on closed file | I have seen a very old issue about this in boto, and I think it has been solved in code, but I am getting the "I/O operation on closed file" error whenever I'm updating images on my Django site. A new image is uploaded to S3 without any problem, but the error occurs when I update existing images. The images are uploaded from a frontend form.
```
Request Method: POST
Request URL: https://nakulsharma.in/dashboard/home/
Django Version: 3.0.3
Exception Type: ValueError
Exception Value: I/O operation on closed file.
Exception Location: /home2/nakulsha/virtualenv/portfolio_folder/3.7/lib/python3.7/site-packages/storages/backends/s3boto3.py in _save, line 546
Python Executable: /home2/nakulsha/virtualenv/portfolio_folder/3.7/bin/python3.7
Python Version: 3.7.3
Python Path: ['', '/opt/alt/python37/bin', '/home2/nakulsha/portfolio_folder', '/home2/nakulsha/virtualenv/portfolio_folder/3.7/lib64/python37.zip', '/home2/nakulsha/virtualenv/portfolio_folder/3.7/lib64/python3.7', '/home2/nakulsha/virtualenv/portfolio_folder/3.7/lib64/python3.7/lib-dynload', '/opt/alt/python37/lib64/python3.7', '/opt/alt/python37/lib/python3.7', '/home2/nakulsha/virtualenv/portfolio_folder/3.7/lib/python3.7/site-packages']
Server time: Thu, 9 Apr 2020 11:37:35 +0000
```
Any help?? | closed | 2020-04-09T11:53:32Z | 2020-12-21T05:19:21Z | https://github.com/jschneier/django-storages/issues/871 | [] | yodaljit | 2 |
redis/redis-om-python | pydantic | 104 | Change license to MIT | This will be re-licensed to use the MIT license from the next release. | closed | 2022-01-24T17:51:23Z | 2022-02-11T19:39:31Z | https://github.com/redis/redis-om-python/issues/104 | [
"documentation"
] | simonprickett | 2 |
TencentARC/GFPGAN | pytorch | 523 | billing problem | billing problem | open | 2024-03-02T11:10:58Z | 2024-03-02T11:10:58Z | https://github.com/TencentARC/GFPGAN/issues/523 | [] | skshanu1234 | 0 |
seleniumbase/SeleniumBase | web-scraping | 2990 | error 'Chrome' object has no attribute 'uc_gui_click_captcha' | Why do I get the error 'Chrome' object has no attribute 'uc_gui_click_captcha' on the driver?
How can I use this method, and on which object?
Please help.
"question",
"UC Mode / CDP Mode"
] | alish511 | 1 |
deepspeedai/DeepSpeed | pytorch | 5,671 | [BUG] DeepSpeed on pypi not compatible with latest `numpy` | **Describe the bug**
Importing deepspeed on a python env with numpy>=2.0.0 fails:
```bash
File "/miniconda3/envs/py39/lib/python3.9/site-packages/deepspeed/autotuning/scheduler.py", line 8, in <module>
from numpy import BUFSIZE
E cannot import name 'BUFSIZE' from 'numpy' (/miniconda3/envs/py39/lib/python3.9/site-packages/numpy/__init__.py)
```
**To Reproduce**
`pip install deepspeed` on a env with python>=3.9 and import deepspeed
| closed | 2024-06-17T12:14:59Z | 2024-08-28T15:07:00Z | https://github.com/deepspeedai/DeepSpeed/issues/5671 | [
"bug",
"compression"
] | younesbelkada | 7 |
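The failing import is a NumPy 2.0 removal: `numpy.BUFSIZE` (historically `8192`) no longer exists. Until a DeepSpeed release drops the import, the usual workarounds are pinning `numpy<2` in the environment or, when patching locally, guarding the import. A hedged sketch of the guard (the fallback value is the constant NumPy 1.x defined):

```python
try:
    from numpy import BUFSIZE      # present on numpy < 2.0
except ImportError:                # numpy >= 2.0 (the constant was removed)
    BUFSIZE = 8192                 # the value numpy 1.x used

print(BUFSIZE)
```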
erdewit/ib_insync | asyncio | 624 | Question: Pandas 2.0.3 compatibility | Just wondering if ib_insync has any compatibility issues with Pandas 2.0.3.
Thank you,
GN | closed | 2023-08-24T09:03:46Z | 2023-08-24T09:09:56Z | https://github.com/erdewit/ib_insync/issues/624 | [] | yiorgosn | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 197 | How to increase NPairsLoss batch size. | This is more of a question:
When using NPairsLoss, it's creating unique pairs before running the Cross Entropy Loss (just like in the paper). Is there a way to **increase the batch size** by adding more examples for each class? Or will the loss function only keep one example of each class? Are all the negative examples kept?
For example, can I feed a representation tensor of size N * M * V, where N is the batch size, M the number of different examples for a single class and V the embedding size? | closed | 2020-09-15T19:28:09Z | 2020-09-18T19:23:03Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/197 | [
"documentation",
"Frequently Asked Questions",
"question"
] | marued | 5 |
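On the tensor-shape part of the question: the losses in pytorch-metric-learning generally take a flat `(batch_size, embedding_size)` tensor plus one integer label per row, not an `N x M x V` tensor, so the usual approach is to fold the M per-class examples into the batch dimension (in PyTorch, `embeddings.view(N * M, V)`) and repeat each class label M times. How NPairsLoss then samples pairs from the repeated labels is a separate question; this stdlib sketch only shows the expected input format:

```python
N, M, V = 4, 3, 2   # classes per batch, examples per class, embedding dim

# Toy N x M x V "tensor" as nested lists: class c's vectors are filled with float(c).
batch = [[[float(c)] * V for _ in range(M)] for c in range(N)]

# Fold M into the batch dimension: shape becomes (N*M, V).
embeddings = [vec for per_class in batch for vec in per_class]

# One integer label per row, each class id repeated M times.
labels = [c for c in range(N) for _ in range(M)]

print(len(embeddings), labels)   # 12 [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
```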
omnilib/aiomultiprocess | asyncio | 17 | No benefit to running CPU-bound tasks at the same time with aiomultiprocess? | ### Description
I have a task which is both CPU-bound and IO-bound, like the toy code below.
When I ran it, I found that the CPU only ran at 100% on a single core, not on as many cores as there are processes in the Pool.
Is there no benefit to running CPU-bound tasks concurrently with aiomultiprocess, or did I write the code wrong?
```python
import asyncio
import multiprocessing  # needed: get() below calls multiprocessing.current_process()
from datetime import datetime
from aiohttp import request
from aiomultiprocess import Pool, Process
def fib(x):
"""Recursive function of Fibonacci number"""
if x==0:
return 0
elif x==1:
return 1
else:
return fib(x-1)+fib(x-2)
async def get(url):
# async with request("GET", url) as response:
# await asyncio.sleep(1)
# return await response.text("utf-8")
print('url ' + str(multiprocessing.current_process()) + ' ' + str(datetime.now()))
await asyncio.sleep(5)
fib(30)
print('url ' + str(multiprocessing.current_process()) + ' ' + str(datetime.now()))
async def main():
urls = ["https://jreese.sh", "https://www.baidu.com", "a", "b", "c", "d"]
# p = Process(target=get, args=("https://jreese.sh", "https://www.baidu.com",))
# await p
print(datetime.now())
async with Pool(4) as pool:
result = await pool.map(get, urls)
print(result)
print(datetime.now())
# asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
asyncio.run(main())
```
### Details
* OS: MacOS 10.13.6 / CentOS 7
* Python version: 3.7.2
* aiomultiprocess version: 0.5.0
* Can you repro on master?
* Can you repro in a clean virtualenv?
| closed | 2019-03-15T06:09:28Z | 2019-03-30T20:28:14Z | https://github.com/omnilib/aiomultiprocess/issues/17 | [] | vnnw | 1 |
tfranzel/drf-spectacular | rest-api | 773 | Failed to obtain model through view's queryset due to raised exception. | **Describe the bug**
```
Warning #1: BankAccountViewSet: Failed to obtain model through view's queryset due to raised exception.
Prevent this either by setting "queryset = Model.objects.none()" on the view, checking for
"getattr(self, "swagger_fake_view", False)" in get_queryset() or by simply using @extend_schema.
(Exception: 'AnonymousUser' object has no attribute 'provider')
```
We've got code that looks a bit like this (omitted code marked by `[...]`):
```python
class BankAccountViewSet([...]):
serializer_class = BankAccountSerializer
lookup_field = "public_id"
def get_queryset(self) -> QuerySet[BankAccount]:
if not self.request.user.provider:
raise Exception(f"No provider set for user '{self.request.user}'")
[...]
```
Since DRF Spectacular is using `AnonymousUser` which does not have a provider, we get an exception here when trying to access `self.request.user.provider`.
I've tried to use `@extend_schema(responses=[BankAccountSerializer(many=True)])` here on the `get_queryset()` method, but that doesn't seem to help anything.
**To Reproduce**
Given the above code, running `python manage.py spectacular --file schema.yaml`.
**Expected behavior**
I understand why the error happens due to `AnonymousUser` not having a provider. But with the `@extend_schema(responses=[BankAccountSerializer(many=True)])` override, shouldn't this be resolved?
| closed | 2022-07-25T18:23:37Z | 2022-07-25T19:13:32Z | https://github.com/tfranzel/drf-spectacular/issues/773 | [] | lexicalunit | 2 |
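On the `@extend_schema` attempt: that decorator is documented for view/action methods (or whole views via `extend_schema_view`), not for `get_queryset()`, which may be why it had no effect here. The `swagger_fake_view` guard that the warning itself suggests is the direct fix. A framework-free sketch of the shape, where `SimpleNamespace` stands in for DRF's request/user and the real code would return `BankAccount.objects.none()` instead of a list:

```python
from types import SimpleNamespace


class BankAccountViewSet:                        # stripped-down stand-in, not a real DRF viewset
    def get_queryset(self):
        if getattr(self, "swagger_fake_view", False):
            return []                             # schema generation: BankAccount.objects.none() in real code
        if not self.request.user.provider:
            raise Exception(f"No provider set for user '{self.request.user}'")
        return ["<real queryset>"]


# During schema generation drf-spectacular sets swagger_fake_view on the view:
fake = BankAccountViewSet()
fake.swagger_fake_view = True
print(fake.get_queryset())                        # [] and no AnonymousUser access

# Normal requests with an authenticated provider user still work:
view = BankAccountViewSet()
view.request = SimpleNamespace(user=SimpleNamespace(provider="acme"))
print(view.get_queryset())                        # ['<real queryset>']
```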
mwaskom/seaborn | pandas | 3241 | sns.histplot has a mystery/ghost patch, new with update from 0.11.2 --> 0.12.2 | The problem is outlined in this Colab notebook:
https://colab.research.google.com/drive/1i69qTX1SPSPKogUBa2tMUmjC3cHO_VPn?usp=sharing
sns.__version__, mpl.__version__
('0.11.2', '3.2.2')
displot and histplot display the correct patches when retrieving the patches plotted as a hist (used for the key)
<img width="446" alt="image" src="https://user-images.githubusercontent.com/19584564/216197437-d795dc89-93ab-48a1-b469-6c89e86171f7.png">
sns.__version__, mpl.__version__
('0.12.2', '3.6.3')
there is a stray blue patch somewhere, only for histplot and not for displot
<img width="435" alt="image" src="https://user-images.githubusercontent.com/19584564/216197516-74cfe9ec-eaf8-4640-996d-85c46fa1d505.png">
| closed | 2023-02-02T00:10:25Z | 2023-02-06T19:56:24Z | https://github.com/mwaskom/seaborn/issues/3241 | [] | PhillipMaire | 9 |
kizniche/Mycodo | automation | 1,396 | Feature Suggestion: Add support for IFTTT Webhooks. | It would be great to add integration with IFTTT Webhooks to the functions, enabling us to automate a wider range of components. I use IFTTT in my hydroponic setup to easily connect with smart plugs and other devices from multiple manufacturers.
Thank you! | open | 2024-10-25T14:51:06Z | 2024-10-25T14:51:06Z | https://github.com/kizniche/Mycodo/issues/1396 | [] | yunielsg86 | 0 |
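For reference while waiting on a built-in integration, an IFTTT Webhooks trigger is a single HTTP POST to the Maker endpoint. A stdlib sketch: the URL shape follows IFTTT's documented Webhooks format, `event` and `key` come from your own IFTTT Webhooks settings, and the optional JSON body uses IFTTT's `value1`..`value3` ingredient keys:

```python
import json
import urllib.request


def ifttt_url(event, key):
    return f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"


def trigger_ifttt(event, key, values=None):
    """Fire an IFTTT Webhooks applet, e.g. to switch a smart plug."""
    req = urllib.request.Request(
        ifttt_url(event, key),
        data=json.dumps(values or {}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:    # performs the network call
        return resp.status


# Example (hypothetical event/key names):
# trigger_ifttt("plug_on", "YOUR_WEBHOOKS_KEY", {"value1": "hydroponics pump"})
```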
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,210 | [Bug]: a lot of No module named 'scripts.xxx' | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
When launching, a lot of similar errors happen in extensions. They all share the same pattern.

For example, for the controlnet extension we have:

```
....../sd-webui-controlnet\internal_controlnet\args.py", line 12, in <module>
    from scripts.enums import (
ModuleNotFoundError: No module named 'scripts.enums'
```

As long as an extension uses `from scripts.xxx import yyy`, the error happens.
### Steps to reproduce the problem
None
### What should have happened?
None
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
None
### Console logs
```Shell
None
```
### Additional information
_No response_ | open | 2024-07-14T10:57:57Z | 2024-10-19T11:06:34Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16210 | [
"bug-report"
] | sipie800 | 2 |
chrieke/prettymapp | streamlit | 31 | Error running app. If this keeps happening, please contact support. | I tried to start the prettymapp app on streamlit using https://prettymapp.streamlit.app/ and get this error:
> Error running app. If this keeps happening, please [contact support](https://github.com/chrieke/prettymapp/issues/new).
Both on Brave and in Firefox | closed | 2023-08-23T11:52:23Z | 2023-08-25T01:11:03Z | https://github.com/chrieke/prettymapp/issues/31 | [] | gertvervaet | 2 |
scrapy/scrapy | python | 6,323 | Spider.logger not logging custom extra information | I noticed some implicit behavior of `Spider.logger`: when logging with `extra`, the extras ultimately do not end up in the log because they are overwritten by the default `process` method of [LoggerAdapter](https://github.com/scrapy/scrapy/blob/master/scrapy/spiders/__init__.py#L47)
Current logic:
```py
>>> self.logger.info("test log", extra={"test": "very important information"})
{"message": "test log", "spider": "spider_name"}
```
Expected logic:
```py
>>> self.logger.info("test log", extra={"test": "very important information"})
{"message": "test log", "spider": "spider_name", "test": "very important information"}
```
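The expected behavior can be obtained by subclassing `logging.LoggerAdapter` so that `process` merges the call-site `extra` instead of replacing it — a minimal, stdlib-only sketch (the class and logger names are illustrative, not Scrapy's actual internals):

```python
import logging


class SpiderLoggerAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        # The default LoggerAdapter.process() sets kwargs["extra"] = self.extra,
        # silently dropping whatever the caller passed. Merge instead.
        kwargs["extra"] = {**self.extra, **kwargs.get("extra", {})}
        return msg, kwargs


# Demo: capture the emitted LogRecord and check that both extras survive.
records = []
handler = logging.Handler()
handler.emit = records.append

logger = logging.getLogger("demo_spider")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

adapter = SpiderLoggerAdapter(logger, {"spider": "spider_name"})
adapter.info("test log", extra={"test": "very important information"})

print(records[0].spider, "/", records[0].test)
# → spider_name / very important information
```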
| closed | 2024-04-27T16:57:52Z | 2024-05-13T12:33:25Z | https://github.com/scrapy/scrapy/issues/6323 | [] | bloodforcream | 0 |
ansible/ansible | python | 84,793 | Different behavior for `state: restarted` in the `ansible.builtin.service` module (FreeBSD) | ### Summary
In the FreeBSD implementation, the `state: restarted` behavior of the `ansible.builtin.service` module differs from the other states (`started`, `stopped` and `reloaded`):
https://github.com/ansible/ansible/blob/09391f38f009ec58b5759dbd74df34fd281ef3ac/lib/ansible/modules/service.py#L1090-L1099
With the current implementation, `started`, `stopped` and `reloaded` execute the corresponding action even if the service is disabled; only `restarted` has no effect when the service is disabled.
#### Option 1
Add the following part - to be consistent with the rest
```
if self.action == "restart":
    self.action = "onerestart"
```
#### Option 2
Omit the `start`/`stop` modification - to match the documentation
I would prefer option 1 but that's just my opinion.
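The shape of Option 1 can be sketched in isolation (this mirrors the pattern of the existing `start`/`stop`/`reload` handling; it is an illustration, not the module's actual code):

```python
# rc(8) "one"-prefixed commands run the action even when the service is not
# enabled in /etc/rc.conf; the module already relies on this for the other states.
ONE_VARIANTS = {
    "start": "onestart",
    "stop": "onestop",
    "reload": "onereload",
    "restart": "onerestart",  # the addition proposed as Option 1
}


def normalize_action(action):
    return ONE_VARIANTS.get(action, action)


print(normalize_action("restart"))  # → onerestart
print(normalize_action("status"))   # → status
```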
### Issue Type
Bug Report
### Component Name
ansible.builtin.service
### Ansible Version
```console
$ ansible --version
N/A
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
N/A
```
### OS / Environment
FreeBSD
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Restart apache24
  become: true
  ansible.builtin.service:
    name: "apache24"
    state: "restarted"
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | open | 2025-03-07T17:58:02Z | 2025-03-11T15:18:49Z | https://github.com/ansible/ansible/issues/84793 | [
"module",
"bug",
"needs_verified"
] | ohaucke | 1 |
hbldh/bleak | asyncio | 957 | Cannot notify/write to a characteristic | * bleak version: 0.15.1
* Python version: 3.8.0
* Operating System: Windows 11
* BlueZ version (`bluetoothctl -v`) in case of Linux:
### Description
I would like to scan for a Bluetooth device, connect to it with `BleakClient`, and notify and write to two different characteristics.
### What I Did
```python
import asyncio
from bleak import BleakScanner, BleakClient
from bleak.backends.winrt.client import BleakClientWinRT
from bleak.backends.winrt.scanner import BleakScannerWinRT


class BLDevice:
    async def scan(self):
        devices = await BleakScanner.discover()
        for d in devices:
            print(d)

    def __init__(self, mac_addr: str):
        self.mac_addr = mac_addr
        self.client = None

    async def connect(self):
        if not self.client:
            device = await BleakScannerWinRT.find_device_by_address(self.mac_addr, 20)
            self.client = BleakClientWinRT(address_or_ble_device=device)
        if not self.client.is_connected:
            await self.client.connect()

    async def write(self):
        def response_handler(sender, data):
            print("{}: {}".format(sender, data))

        await self.connect()
        await self.client.write_gatt_char("0000xxxxx-xxxxx-xxxxx-xxxxx-xxxxxxxxxxxx", str.encode("hello world!"))

    async def get_sc(self):
        await self.connect()
        for svc in await self.client.get_services():
            for ch in svc.characteristics:
                print(ch)


if __name__ == '__main__':
    MAC_ADDR = "FF:FF:FF:FF:FF:FF"
    my_device = BLDevice(MAC_ADDR)
    loop = asyncio.get_event_loop()
    loop.run_until_complete(my_device.connect())
    loop.run_until_complete(my_device.write())
```
I ran the above code and got the following error:
```
Traceback (most recent call last):
File "C:/workstation/repos/bleak_demo/bleak_demo/run.py", line 32, in write
await self.client.write_gatt_char("0000xxxxx-xxxxx-xxxxx-xxxxx-xxxxxxxxxxxx", str.encode("hello world!"))
File "C:\Users\raghu\AppData\Local\pypoetry\Cache\virtualenvs\bleak-demo-xIb3w8Wb-py3.8\lib\site-packages\bleak\backends\winrt\client.py", line 665, in write_gatt_char
_ensure_success(
File "C:\Users\raghu\AppData\Local\pypoetry\Cache\virtualenvs\bleak-demo-xIb3w8Wb-py3.8\lib\site-packages\bleak\backends\winrt\client.py", line 107, in _ensure_success
raise BleakError(f"{fail_msg}: Unreachable")
bleak.exc.BleakError: Could not write value b'hello world!' to characteristic 000A: Unreachable
```
Here is the output from the Windows Bluetooth virtual sniffer:
```
87 29.147761 TexasIns_ (XXXX-XXX) localhost () SMP 11 Rcvd Security Request: AuthReq: No Bonding
```
| closed | 2022-08-22T19:24:48Z | 2022-12-02T18:06:19Z | https://github.com/hbldh/bleak/issues/957 | [
"Backend: WinRT"
] | raghumulukutla | 1 |
recommenders-team/recommenders | data-science | 1,203 | [BUG] For MIND small dataset utils, it can not download. | ### Description
### In which platform does it happen?
Local Host: Ubuntu 18.04
### How do we replicate the issue?
use example/00_quick_start/lstur_MIND.ipynb and change the MIND_TYPE to 'small'.
### Expected behavior (i.e. solution)
run this demo
### Other Comments
Could you please check whether this file name is correct?
https://github.com/microsoft/recommenders/blob/837b8081a4421e144f2bc05ba949c5ac6c52320f/reco_utils/recommender/newsrec/newsrec_utils.py#L358
In this code, the file name "MINDsma_utils.zip" may not be correct. I also tested "MINDsmall_utils.zip", but it can't be downloaded either.
| closed | 2020-09-16T02:13:20Z | 2020-11-07T13:20:37Z | https://github.com/recommenders-team/recommenders/issues/1203 | [
"bug"
] | Codle | 9 |
chainer/chainer | numpy | 7,925 | Flaky test: `tests/chainer_tests/links_tests/loss_tests/test_crf1d.py::TestCRF1d` | https://jenkins.preferred.jp/job/chainer/job/chainer_pr/1816/TEST=chainer-py2,label=mn1-p100/console

> `FAIL tests/chainer_tests/links_tests/loss_tests/test_crf1d.py::TestCRF1d_param_1_{initial_cost='random', dtype=float16, transpose=False}::test_forward_cpu`

```
_ TestCRF1d_param_1_{initial_cost='random', dtype=float16, transpose=False}.test_forward_cpu _

self = <chainer.testing._bundle.TestCRF1d_param_1_{initial_cost='random', dtype=float16, transpose=False} testMethod=test_forward_cpu>

chainer/testing/parameterized.py:89: in new_method
    utils._raise_from(e.__class__, s.getvalue(), e)
chainer/utils/__init__.py:104: in _raise_from
    six.reraise(exc_type, new_exc, sys.exc_info()[2])
chainer/testing/parameterized.py:78: in new_method
    return base_method(self, *args, **kwargs)
tests/chainer_tests/links_tests/loss_tests/test_crf1d.py:86: in test_forward_cpu
    self.check_forward(self.xs, self.ys)
tests/chainer_tests/links_tests/loss_tests/test_crf1d.py:83: in check_forward
    **self.check_forward_options)

x = array(4.4765625, dtype=float16), y = array(4.470466613769531), atol = 0.005
rtol = 0.0001, verbose = True

chainer/testing/array.py:68: AssertionError
E   AssertionError: Parameterized test failed.
E
E   Base test method: TestCRF1d.test_forward_cpu
E   Test parameters:
E     initial_cost: random
E     dtype: <type 'numpy.float16'>
E     transpose: False
E
E   (caused by)
E   AssertionError:
E   Not equal to tolerance rtol=0.0001, atol=0.005
E
E   (mismatch 100.0%)
E    x: array(4.4765625, dtype=float16)
E    y: array(4.470466613769531)
E
E   assert_allclose failed:
E     shape: () ()
E     dtype: float16 float64
E     i: (0,)
E     x[i]: 4.4765625
E     y[i]: 4.47046661377
E     relative error[i]: 0.00136359059515
E     absolute error[i]: 0.00609588623047
E     relative tolerance * |y[i]|: 0.000447046661377
E     absolute tolerance: 0.005
E     total tolerance: 0.00544704666138
E   x: 4.4765625
E   y: 4.470466613769531
```
| closed | 2019-08-13T17:45:48Z | 2019-08-20T04:38:02Z | https://github.com/chainer/chainer/issues/7925 | [
"cat:test",
"prio:high",
"pr-ongoing"
] | niboshi | 0 |
Asabeneh/30-Days-Of-Python | matplotlib | 430 | Formatted Strings | I noticed that there is no documentation on how to use formatted strings which is a great help for strings that may require **multiple variables** which one can encounter in real life situations and other situations. | open | 2023-08-07T19:09:41Z | 2023-08-07T19:09:41Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/430 | [] | diremu | 0 |
keras-team/keras | python | 20,052 | Is 2D CNN's description correct ? | > This layer creates a convolution kernel that is convolved with the layer
input over a single spatial (or temporal) dimension to produce a tensor of
outputs. If `use_bias` is True, a bias vector is created and added to the
outputs. Finally, if `activation` is not `None`, it is applied to the
outputs as well.
[keras docs](https://github.com/keras-team/keras/blob/v3.4.1/keras/src/layers/convolutional/conv2d.py#L5)
Isn't it convolved over *two spatial dimensions*?
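For comparison, a 2-D convolution's kernel does slide along two spatial axes — a dependency-free sketch of a "valid" cross-correlation makes the contrast with the quoted Conv1D-style wording concrete:

```python
def conv2d_valid(x, k):
    """Naive 'valid' 2-D cross-correlation over plain nested lists.

    The kernel window moves along BOTH spatial axes (i and j), which is
    what a 2-D convolution does; "a single spatial (or temporal)
    dimension" describes sliding along one axis only.
    """
    kh, kw = len(k), len(k[0])
    h, w = len(x), len(x[0])
    return [
        [
            sum(x[i + a][j + b] * k[a][b] for a in range(kh) for b in range(kw))
            for j in range(w - kw + 1)
        ]
        for i in range(h - kh + 1)
    ]


x = [[float(4 * r + c) for c in range(4)] for r in range(4)]  # 4x4 input
k = [[1.0] * 3 for _ in range(3)]                             # 3x3 kernel
out = conv2d_valid(x, k)
print(len(out), len(out[0]))  # → 2 2  (output shrinks along two spatial dims)
print(out[0][0])              # → 45.0 (sum of the top-left 3x3 window)
```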
"type:docs"
] | newresu | 1 |
nonebot/nonebot2 | fastapi | 2,582 | Plugin: nonebot-plugin-bf1marneserverlist | ### PyPI 项目名
nonebot-plugin-bf1marneserverlist
### 插件 import 包名
bf1marneserverlist
### 标签
[{"label":"server","color":"#ea5252"}]
### 插件配置项
```dotenv
# marne_url : str
# marne_plugin_enabled : bool = True
# marne_data_dir: str = './marne_data/'
```
| closed | 2024-02-18T05:34:45Z | 2024-02-18T05:36:04Z | https://github.com/nonebot/nonebot2/issues/2582 | [
"Plugin"
] | peach0x33a | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 344 | Error when running demo_cli.py. Please Help! | When I run demo_cli.py I get this error:
```
Traceback (most recent call last):
  File ".\demo_cli.py", line 96, in <module>
    mels = synthesizer.synthesize_spectrograms(texts, embeds)
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\inference.py", line 77, in synthesize_spectrograms
    self.load()
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\inference.py", line 58, in load
    self._model = Tacotron2(self.checkpoint_fpath, hparams)
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\tacotron2.py", line 28, in __init__
    split_infos=split_infos)
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\models\tacotron.py", line 146, in initialize
    zoneout=hp.tacotron_zoneout_rate, scope="encoder_LSTM"))
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\models\modules.py", line 221, in __init__
    name="encoder_fw_LSTM")
  File "D:\Users\Jay\Desktop\Deepvoice\synthesizer\models\modules.py", line 114, in __init__
    self._cell = tf.contrib.cudnn_rnn.CudnnLSTM(num_units, name=name)
TypeError: __init__() missing 1 required positional argument: 'num_units'
```
Can someone help fix it? | closed | 2020-05-14T14:18:02Z | 2020-06-24T08:39:02Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/344 | [] | CosmonautNinja | 2 |
mljar/mljar-supervised | scikit-learn | 328 | For small data disable stacking and allow 10-fold cv | closed | 2021-03-02T09:47:22Z | 2021-03-02T16:39:03Z | https://github.com/mljar/mljar-supervised/issues/328 | [
"enhancement"
] | pplonski | 0 |
|
pennersr/django-allauth | django | 3,746 | Option to allow one-time password login for old users when using SOCIALACCOUNT_ONLY | One of my sites would like to migrate to allowing login only using social accounts, but has existing users with passwords.
I think it would be good to have a configuration with the concept of a "legacy password login" option which allows users to authenticate with their password, then migrate to a social account.
I'd be happy to look at doing the work for this, but wanted to raise an issue to see if there was appetite to accept this before I do. | closed | 2024-04-23T10:09:16Z | 2024-04-23T11:17:35Z | https://github.com/pennersr/django-allauth/issues/3746 | [] | riconnon | 1 |
dmlc/gluon-cv | computer-vision | 797 | Wrong results with yolo running inference multiple times with Qt backend | I'm getting wrong results when running this simple code (essentially the YOLO demo example) using the Qt backend for matplotlib. Both on CPU and GPU.
The first inference gives correct results, the following ones return nonsense.
This doesn't happen with other models (tried with `ssd_512_mobilenet1.0_coco`) or when using other backends (e.g. with `TkAgg`).
```python
from gluoncv import model_zoo, data, utils
import matplotlib
matplotlib.use('Qt5Agg')
from matplotlib import pyplot as plt
import mxnet as mx

for i in range(5):
    print("########", i)
    net = model_zoo.get_model('yolo3_darknet53_coco', pretrained=True, ctx=mx.gpu(0))
    im_fname = utils.download('https://raw.githubusercontent.com/zhreshold/' +
                              'mxnet-ssd/master/data/demo/dog.jpg', path='dog.jpg')
    x, img = data.transforms.presets.yolo.load_test(im_fname, short=512)
    x = x.as_in_context(mx.gpu(0))
    print('Shape of pre-processed im', x.shape)
    class_IDs, scores, bounding_boxs = net(x)
    plt.figure()
    print(scores[0,:10])
```
plt.figure() is useless but calling pyplot is what causes things to go haywire.
The output is:
```
######## 0
Shape of pre-processed im (1, 3, 512, 683)
[11:26:09] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
[[ 0.9919528 ]
[ 0.9600397 ]
[ 0.62269807]
[ 0.29241946]
[ 0.01795176]
[ 0.01141726]
[-1. ]
[-1. ]
[-1. ]
[-1. ]]
<NDArray 10x1 @gpu(0)>
######## 1
Shape of pre-processed im (1, 3, 512, 683)
[[ 2.00238937e-04]
[ 1.32904228e-04]
[ 1.25731094e-04]
[ 1.21111851e-04]
[ 1.12316993e-04]
[ 1.06516934e-04]
[ 9.43234991e-05]
[ 5.61804554e-05]
[-1.00000000e+00]
[-1.00000000e+00]]
<NDArray 10x1 @gpu(0)>
######## 2
Shape of pre-processed im (1, 3, 512, 683)
[[ 2.00238937e-04]
[ 1.32904228e-04]
[ 1.25731094e-04]
[ 1.21111851e-04]
[ 1.12316993e-04]
[ 1.06516934e-04]
[ 9.43234991e-05]
[ 5.61804554e-05]
[-1.00000000e+00]
[-1.00000000e+00]]
<NDArray 10x1 @gpu(0)>
######## 3
Shape of pre-processed im (1, 3, 512, 683)
[[ 2.00238937e-04]
[ 1.32904228e-04]
[ 1.25731094e-04]
[ 1.21111851e-04]
[ 1.12316993e-04]
[ 1.06516934e-04]
[ 9.43234991e-05]
[ 5.61804554e-05]
[-1.00000000e+00]
[-1.00000000e+00]]
<NDArray 10x1 @gpu(0)>
######## 4
Shape of pre-processed im (1, 3, 512, 683)
[[ 2.00238937e-04]
[ 1.32904228e-04]
[ 1.25731094e-04]
[ 1.21111851e-04]
[ 1.12316993e-04]
[ 1.06516934e-04]
[ 9.43234991e-05]
[ 5.61804554e-05]
[-1.00000000e+00]
[-1.00000000e+00]]
<NDArray 10x1 @gpu(0)>
```
Of course plotting in a loop has small practical reason, but this clearly happens also when you run yolo multiple times in a shell.
| closed | 2019-06-06T10:05:36Z | 2021-05-31T07:02:54Z | https://github.com/dmlc/gluon-cv/issues/797 | [
"Stale"
] | leotac | 2 |
tfranzel/drf-spectacular | rest-api | 947 | `adrf` (Async DRF) Support | It was recently decided that the "official" async support for DRF would be the [`adrf`](https://github.com/em1208/adrf) package:
- https://github.com/encode/django-rest-framework/discussions/7774#discussioncomment-5063336
- https://github.com/encode/django-rest-framework/issues/8496#issuecomment-1438345677
Does `drf-spectacular` support this package? I didn't see it listed in the third party packages:
- https://drf-spectacular.readthedocs.io/en/latest/readme.html#
It provides a [new Async class-based and functional views](https://github.com/em1208/adrf#async-views). These allow end users to use `async` functionalities to better scale their DRF applications.
Somewhat related to
- #931 | open | 2023-02-25T21:14:00Z | 2024-04-15T17:51:39Z | https://github.com/tfranzel/drf-spectacular/issues/947 | [] | johnthagen | 7 |
miguelgrinberg/Flask-Migrate | flask | 363 | Ability to poll db to pull current alembic version | ### **Desired Behavior**
When running a new build from Jenkins it creates a new build environment 'flask db init' migration folder and has the ability to poll the database to pull the current alembic version and then perform migration, upgrade, downgrade as needed as the migrations folder is not saved between build environments or between builds.
### **Current Behavior**
I need to remove the current alembic table from the database OR save the migrations folder and move it between builds and build environments to then perform an upgrade; otherwise I receive the following complaint:

```
(venv) root@build1:~/manual_build_dirs/balance_service# flask db migrate
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
ERROR [root] Error: Can't locate revision identified by '42f262e6ba92'
```
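The polling part is simple in principle: Alembic records the current revision in an `alembic_version` table. A minimal SQLite sketch of reading it (for PostgreSQL the same query would go through your driver, or you could shell out to `flask db current`):

```python
import os
import sqlite3
import tempfile


def current_alembic_version(db_path):
    """Return the revision stored in alembic_version, or None if absent."""
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute("SELECT version_num FROM alembic_version").fetchone()
        return row[0] if row else None
    except sqlite3.OperationalError:  # table missing: db was never stamped
        return None
    finally:
        conn.close()


# Demo against a throwaway database file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")
conn.execute("INSERT INTO alembic_version VALUES ('42f262e6ba92')")
conn.commit()
conn.close()

print(current_alembic_version(path))  # → 42f262e6ba92
```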
### **Environment**
* Python version: 3.6.9
* Flask-SQLAlchemy version: 2.4.4
* Flask-Migrate version: 2.5.3
* SQLAlchemy version: 1.3.19
| closed | 2020-08-29T15:03:33Z | 2020-08-29T19:40:19Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/363 | [
"question"
] | 30ThreeDegrees | 2 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,352 | Error after loading a JS file | Hello,
I have created a folder called "1" in _/var/globaleaks/scripts_ and uploaded the file _custom_script.js_ (an example I have seen in the forum, from _https://github.com/AjuntamentdeBarcelona/bustia-etica-bcn/tree/master/theme_, changing GLClient to GL).
I restart the application with _service globaleaks restart_.
But I don't see any changes on the home page, and the console shows an error like this:
**Refused to execute script from 'http://127.0.0.1:8082/script' because its MIME type ('application/json') is not executable, and strict MIME type checking is enabled.**
I am using a laptop with Debian 11 and the latest version of GlobaLeaks (4.10.18).
Could you help me, please?
Thanks
| closed | 2023-02-19T19:40:19Z | 2023-02-20T18:44:23Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3352 | [] | jpdionisio | 2 |
plotly/dash | jupyter | 2,241 | Jumpy scroll in a Dash DataTable | When scrolling the DataTable on a mobile phone, the DataTable is jumpy.
Issue first reported on [Forum](https://community.plotly.com/t/jumpy-scroll-in-a-dash-datatable/67940)
The community member created [this sample app](https://datatable-scroll-issue.onrender.com/) to replicated the problem. They also shared the [code for the app](https://github.com/timofeymukha/datatable_scroll_issue) on GitHub. | closed | 2022-09-20T17:43:44Z | 2024-07-24T14:59:53Z | https://github.com/plotly/dash/issues/2241 | [] | Coding-with-Adam | 4 |
hbldh/bleak | asyncio | 761 | BT discovery crashes on MacOS 12.2, "attempted to access privacy-sensitive data without a usage description" | * bleak version: 0.14.2
* Python version: 3.10.2
* Operating System: MacOS 12.2
* Hardware: Apple Silicon/arm64
### Description
Bluetooth discovery crashes (segfaults).
### What I Did
```
% cat x.py
% cat x.py
import asyncio
from bleak import BleakScanner

hub_uuid = "00001623-1212-EFDE-1623-785FEABCD123"

async def main():
    devices = await BleakScanner.discover(service_uuids=[hub_uuid])
    for d in devices:
        print(d)

asyncio.run(main())

% BLEAK_LOGGING=1 python x.py
zsh: abort      BLEAK_LOGGING=1 python x.py
```
Traceback:
```
Crashed Thread: 1 Dispatch queue: com.apple.root.default-qos
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Termination Reason: Namespace TCC, Code 0
This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSBluetoothAlwaysUsageDescription key with a string value explaining to the user how the app uses this data.
Thread 0:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x1b424d0c0 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x1b4285808 _pthread_cond_wait + 1228
2 Python 0x100b5d5d4 PyThread_acquire_lock_timed + 396
3 Python 0x100bba2ac acquire_timed + 256
4 Python 0x100bba4e8 lock_PyThread_acquire_lock + 56
5 Python 0x100a12a34 method_vectorcall_VARARGS_KEYWORDS + 156
6 Python 0x100b01854 call_function + 128
7 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708
8 Python 0x100af30ec _PyEval_Vector + 328
9 Python 0x100b01854 call_function + 128
10 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708
11 Python 0x100af30ec _PyEval_Vector + 328
12 Python 0x100b01854 call_function + 128
13 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708
14 Python 0x100af30ec _PyEval_Vector + 328
15 _objc.cpython-310-darwin.so 0x1015f5054 _PyObject_VectorcallTstate + 120
16 _objc.cpython-310-darwin.so 0x1015f4fd0 PyObject_Vectorcall + 60
17 _objc.cpython-310-darwin.so 0x1015f2f90 pysel_vectorcall + 440
18 Python 0x100b01854 call_function + 128
19 Python 0x100afc404 _PyEval_EvalFrameDefault + 32836
20 Python 0x100af30ec _PyEval_Vector + 328
21 Python 0x100a05c24 _PyObject_FastCallDictTstate + 208
22 Python 0x100a7c630 slot_tp_init + 196
23 Python 0x100a74968 type_call + 288
24 Python 0x100a0636c _PyObject_Call + 128
25 Python 0x100afc5ec _PyEval_EvalFrameDefault + 33324
26 Python 0x100a1d5e0 gen_send_ex2 + 224
27 Python 0x100af7888 _PyEval_EvalFrameDefault + 13512
28 Python 0x100a1d5e0 gen_send_ex2 + 224
29 _asyncio.cpython-310-darwin.so 0x100ee0a74 task_step_impl + 440
30 _asyncio.cpython-310-darwin.so 0x100ee0848 task_step + 52
31 Python 0x100a0594c _PyObject_MakeTpCall + 136
32 Python 0x100b17d50 context_run + 92
33 Python 0x100a58de8 cfunction_vectorcall_FASTCALL_KEYWORDS + 84
34 Python 0x100afc5ec _PyEval_EvalFrameDefault + 33324
35 Python 0x100af30ec _PyEval_Vector + 328
36 Python 0x100b01854 call_function + 128
37 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708
38 Python 0x100af30ec _PyEval_Vector + 328
39 Python 0x100b01854 call_function + 128
40 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708
41 Python 0x100af30ec _PyEval_Vector + 328
42 Python 0x100b01854 call_function + 128
43 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708
44 Python 0x100af30ec _PyEval_Vector + 328
45 Python 0x100b01854 call_function + 128
46 Python 0x100afc384 _PyEval_EvalFrameDefault + 32708
47 Python 0x100af30ec _PyEval_Vector + 328
48 Python 0x100b01854 call_function + 128
49 Python 0x100afc404 _PyEval_EvalFrameDefault + 32836
50 Python 0x100af30ec _PyEval_Vector + 328
51 Python 0x100af2f90 PyEval_EvalCode + 104
52 Python 0x100b4c6cc run_eval_code_obj + 84
53 Python 0x100b4c614 run_mod + 112
54 Python 0x100b4c280 pyrun_file + 148
55 Python 0x100b4bb94 _PyRun_SimpleFileObject + 268
56 Python 0x100b4b1d4 _PyRun_AnyFileObject + 232
57 Python 0x100b6d40c pymain_run_file_obj + 220
58 Python 0x100b6cb5c pymain_run_file + 72
59 Python 0x100b6c3cc Py_RunMain + 868
60 Python 0x100b6d578 pymain_main + 36
61 Python 0x100b6d7ec Py_BytesMain + 40
62 dyld 0x1005650f4 start + 520
Thread 1 Crashed:: Dispatch queue: com.apple.root.default-qos
0 libsystem_kernel.dylib 0x1b4273eb8 __abort_with_payload + 8
1 libsystem_kernel.dylib 0x1b4276864 abort_with_payload_wrapper_internal + 104
2 libsystem_kernel.dylib 0x1b4276898 abort_with_payload + 16
3 TCC 0x1b943a874 __TCC_CRASHING_DUE_TO_PRIVACY_VIOLATION__ + 172
4 TCC 0x1b943b19c __TCCAccessRequest_block_invoke.194 + 600
5 TCC 0x1b9438794 __tccd_send_message_block_invoke + 632
6 libxpc.dylib 0x1b3fd99e8 _xpc_connection_reply_callout + 116
7 libxpc.dylib 0x1b3fd98e0 _xpc_connection_call_reply_async + 88
8 libdispatch.dylib 0x1b40c6c2c _dispatch_client_callout3 + 20
9 libdispatch.dylib 0x1b40e4698 _dispatch_mach_msg_async_reply_invoke + 348
10 libdispatch.dylib 0x1b40d90c0 _dispatch_kevent_worker_thread + 1316
11 libsystem_pthread.dylib 0x1b428133c _pthread_wqthread + 344
12 libsystem_pthread.dylib 0x1b4280018 start_wqthread + 8
Thread 2:
0 libsystem_pthread.dylib 0x1b4280010 start_wqthread + 0
``` | closed | 2022-02-11T11:05:12Z | 2022-02-14T18:16:25Z | https://github.com/hbldh/bleak/issues/761 | [
"3rd party issue",
"Backend: Core Bluetooth"
] | jseppanen | 3 |
collerek/ormar | fastapi | 399 | When trying to count relationship instances, it returns wrong result | **Describe the bug**
When following the README, I tried to insert one author with some books, and when I called `len(author.books)` it returned the wrong number of results in one case and the correct number in the other.
**To Reproduce**
Full code, you can just copy paste it
```python
from typing import Optional

import databases
import ormar
import sqlalchemy

DATABASE_URL = "sqlite:///db.sqlite"
database = databases.Database(DATABASE_URL)
metadata = sqlalchemy.MetaData()


class BaseMeta(ormar.ModelMeta):
    metadata = metadata
    database = database


class Author(ormar.Model):
    class Meta(BaseMeta):
        tablename = "authors"

    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)


class Book(ormar.Model):
    class Meta(BaseMeta):
        tablename = "books"

    id: int = ormar.Integer(primary_key=True)
    author: Optional[Author] = ormar.ForeignKey(Author)
    title: str = ormar.String(max_length=100)
    year: int = ormar.Integer(nullable=True)


engine = sqlalchemy.create_engine(DATABASE_URL)
metadata.drop_all(engine)
metadata.create_all(engine)


async def with_connect(function):
    async with database:
        await function()


async def create():
    tolkien = await Author.objects.create(name="J.R.R. Tolkien")
    await Book.objects.create(author=tolkien, title="The Hobbit", year=1937)
    await Book.objects.create(author=tolkien, title="The Lord of the Rings", year=1955)
    await Book.objects.create(author=tolkien, title="The Silmarillion", year=1977)

    # returns 2 ---> WEIRD
    print(f"Tolkien books : {len(tolkien.books)}")

    another_author = await Author.objects.create(name="Another author")
    book1 = Book(title="Book1", year=1999)
    book2 = Book(title="Book2", year=1999)
    book3 = Book(title="Book3", year=1999)
    another_author.books.append(book1)
    another_author.books.append(book2)
    another_author.books.append(book3)
    await another_author.update()

    # returns 3 ---> GOOD
    print(f"Another author books : {len(another_author.books)}")


import asyncio

asyncio.run(with_connect(create))
```
**Expected behavior**
It should return 3 in both cases.
**Versions (please complete the following information):**
- Database backend used : Sqlite
- Python version : 3.9.2
- `ormar` version : ormar==0.10.22
- `pydantic` version : pydantic==1.8.2
| closed | 2021-10-31T08:58:49Z | 2021-10-31T09:38:28Z | https://github.com/collerek/ormar/issues/399 | [
"bug"
] | sorasful | 1 |
chatanywhere/GPT_API_free | api | 270 | Shows an api_key error | **Describe the bug**
Playing the game requires a ChatGPT API key, but after entering the API key obtained from this project (free tier), the game shows an API key error.
**To Reproduce**
1. Game name: 世界尽头与可爱猫娘 ~ 病娇AI女友 (Yandere AI Girlfriend Simulator); download page: [link](https://helixngc7293.itch.io/yandere-ai-girlfriend-simulator)
2. Open the settings menu and enter the API key
3. Start playing the game; it shows "Incorrect API key provided"
**Screenshots**


| open | 2024-07-22T09:06:18Z | 2024-07-27T07:44:09Z | https://github.com/chatanywhere/GPT_API_free/issues/270 | [] | Wekfar | 4 |
freqtrade/freqtrade | python | 11,113 | Freqtrade CI pipeline failing on ERROR: Failed to build installable wheels for some pyproject.toml based projects (blosc2) | ## Describe the problem:
The step in the Freqtrade CI pipeline that builds and pushes to the Docker registries is failing due to `blosc2`. It appears to have been failing for the last 3 days, as the Docker images on at least Docker Hub haven't been updated since then.
```sh
#14 240.6 Building wheels for collected packages: blosc2, MarkupSafe
#14 240.6 Building wheel for blosc2 (pyproject.toml): started
#14 257.8 Building wheel for blosc2 (pyproject.toml): finished with status 'error'
#14 257.8 error: subprocess-exited-with-error
#14 257.8
#14 257.8 × Building wheel for blosc2 (pyproject.toml) did not run successfully.
#14 257.8 │ exit code: 1
#14 257.8 ╰─> [39 lines of output]
#14 257.8 *** scikit-build-core 0.10.7 using CMake 3.25.1 (wheel)
#14 257.8 *** Configuring CMake...
#14 257.8 loading initial cache file /tmp/tmp4ezbjt6w/build/CMakeInit.txt
#14 257.8 -- The C compiler identification is GNU 12.2.0
#14 257.8 -- The CXX compiler identification is GNU 12.2.0
#14 257.8 -- Detecting C compiler ABI info
#14 257.8 -- Detecting C compiler ABI info - done
#14 257.8 -- Check for working C compiler: /usr/bin/gcc - skipped
#14 257.8 -- Detecting C compile features
#14 257.8 -- Detecting C compile features - done
#14 257.8 -- Detecting CXX compiler ABI info
#14 257.8 -- Detecting CXX compiler ABI info - done
#14 257.8 -- Check for working CXX compiler: /usr/bin/g++ - skipped
#14 257.8 -- Detecting CXX compile features
#14 257.8 -- Detecting CXX compile features - done
#14 257.8 -- Found Python: /usr/local/bin/python3.11 (found version "3.11.10") found components: Interpreter NumPy Development.Module
#14 257.8 CMake Error at /usr/share/cmake-3.25/Modules/ExternalProject.cmake:2790 (message):
#14 257.8 error: could not find git for clone of blosc2-populate
#14 257.8 Call Stack (most recent call first):
#14 257.8 /usr/share/cmake-3.25/Modules/ExternalProject.cmake:4185 (_ep_add_download_command)
#14 257.8 CMakeLists.txt:23 (ExternalProject_Add)
#14 257.8
#14 257.8
#14 257.8 -- Configuring incomplete, errors occurred!
#14 257.8 See also "/tmp/tmp4ezbjt6w/build/_deps/blosc2-subbuild/CMakeFiles/CMakeOutput.log".
#14 257.8
#14 257.8 CMake Error at /usr/share/cmake-3.25/Modules/FetchContent.cmake:1604 (message):
#14 257.8 CMake step for blosc2 failed: 1
#14 257.8 Call Stack (most recent call first):
#14 257.8 /usr/share/cmake-3.25/Modules/FetchContent.cmake:1756:EVAL:2 (__FetchContent_directPopulate)
#14 257.8 /usr/share/cmake-3.25/Modules/FetchContent.cmake:1756 (cmake_language)
#14 257.8 /usr/share/cmake-3.25/Modules/FetchContent.cmake:1970 (FetchContent_Populate)
#14 257.8 CMakeLists.txt:55 (FetchContent_MakeAvailable)
#14 257.8
#14 257.8
#14 257.8 -- Configuring incomplete, errors occurred!
#14 257.8 See also "/tmp/tmp4ezbjt6w/build/CMakeFiles/CMakeOutput.log".
#14 257.8
#14 257.8 *** CMake configuration failed
#14 257.8 [end of output]
#14 257.8
#14 257.8 note: This error originates from a subprocess, and is likely not a problem with pip.
#14 257.8 ERROR: Failed building wheel for blosc2
#14 257.8 Building wheel for MarkupSafe (pyproject.toml): started
#14 265.6 Building wheel for MarkupSafe (pyproject.toml): finished with status 'done'
#14 265.6 Created wheel for MarkupSafe: filename=MarkupSafe-3.0.2-cp311-cp311-linux_armv7l.whl size=21971 sha256=92522a143bfa45c45bc2786b11887211221ad6b6ee58be2f0656f739465fe57c
#14 265.6 Stored in directory: /tmp/pip-ephem-wheel-cache-ssr909f5/wheels/9d/38/99/1f61f3b0dd7ab4898edfa9fcf6feb13644d4d49a44b3bed19d
#14 265.6 Successfully built MarkupSafe
#14 265.6 Failed to build blosc2
#14 267.9 ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (blosc2)
#14 ERROR: process "/bin/sh -c pip install --user --no-cache-dir numpy && pip install --user --no-index --find-links /tmp/ pyarrow TA-Lib && pip install --user --no-cache-dir -r requirements.txt" did not complete successfully: exit code: 1
------
> [python-deps 4/4] RUN pip install --user --no-cache-dir numpy && pip install --user --no-index --find-links /tmp/ pyarrow TA-Lib && pip install --user --no-cache-dir -r requirements.txt:
257.8
257.8 note: This error originates from a subprocess, and is likely not a problem with pip.
257.8 ERROR: Failed building wheel for blosc2
257.8 Building wheel for MarkupSafe (pyproject.toml): started
265.6 Building wheel for MarkupSafe (pyproject.toml): finished with status 'done'
265.6 Created wheel for MarkupSafe: filename=MarkupSafe-3.0.2-cp311-cp311-linux_armv7l.whl size=21971 sha256=92522a143bfa45c45bc2786b11887211221ad6b6ee58be2f0656f739465fe57c
265.6 Stored in directory: /tmp/pip-ephem-wheel-cache-ssr909f5/wheels/9d/38/99/1f61f3b0dd7ab4898edfa9fcf6feb13644d4d49a44b3bed19d
265.6 Successfully built MarkupSafe
265.6 Failed to build blosc2
267.9 ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (blosc2)
------
Dockerfile.armhf:37
--------------------
36 | USER ftuser
37 | >>> RUN pip install --user --no-cache-dir numpy \
38 | >>> && pip install --user --no-index --find-links /tmp/ pyarrow TA-Lib \
39 | >>> && pip install --user --no-cache-dir -r requirements.txt
40 |
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install --user --no-cache-dir numpy && pip install --user --no-index --find-links /tmp/ pyarrow TA-Lib && pip install --user --no-cache-dir -r requirements.txt" did not complete successfully: exit code: 1
failed building multiarch image
```
### Steps to reproduce:
1. commit to `develop` branch
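The proximate error in the log is CMake's `could not find git for clone of blosc2-populate` — blosc2's wheel build fetches its C sources with git via FetchContent. One possible fix (untested; that the build stage is Debian-based with apt available is an assumption) is to make sure git exists in the `Dockerfile.armhf` build stage before pip runs:

```dockerfile
# Hypothetical tweak to the python-deps build stage: install git so that
# CMake's FetchContent can clone blosc2's C sources during the wheel build.
USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*
USER ftuser
```

Alternatively, pinning `blosc2` to a version with prebuilt armv7l wheels (if one exists) would avoid the source build entirely.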
| closed | 2024-12-19T22:39:22Z | 2024-12-20T06:22:14Z | https://github.com/freqtrade/freqtrade/issues/11113 | [
"Bug",
"Docker"
] | erickaby | 2 |
tableau/server-client-python | rest-api | 1,575 | Multiple Profiles | ## Summary
I would love it if tableau authentication behaved more [like AWS](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials)
I can obviously build wrappers for this, but this seems like pretty common functionality in Python API packages (Databricks, dbt, AWS, etc.).
## Request Type
I don't believe this would affect, or be affected by, the REST API at all, as this is just some defaulting and setup before the API call.
## Description
Specifically allowing profiles to be specified or defaulting back to environment variables.
I imagine a profile would have the same information as the call for PAT auth
* token_name
* personal_access_token
* site_id
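A minimal sketch of the kind of profile resolution being requested — the file location (`~/.tableau/credentials`), section layout, and `TABLEAU_*` environment variable names are all invented for illustration, not an existing tableauserverclient feature:

```python
# Hypothetical AWS-style profile resolution for tableauserverclient. The
# file location, section layout, and TABLEAU_* environment variable names
# are invented for illustration only.
import configparser
import os
from pathlib import Path

FIELDS = ("token_name", "personal_access_token", "site_id")

def resolve_profile(profile="default", path="~/.tableau/credentials",
                    _text=None):
    """Return PAT credentials for `profile`, falling back to env vars."""
    cfg = configparser.ConfigParser()
    if _text is not None:                 # inline-text hook, used below
        cfg.read_string(_text)
    else:
        cfg.read(Path(path).expanduser())
    if cfg.has_section(profile):
        return {f: cfg.get(profile, f) for f in FIELDS}
    # Fall back to environment variables such as TABLEAU_TOKEN_NAME.
    return {f: os.environ["TABLEAU_" + f.upper()] for f in FIELDS}

demo = resolve_profile("prod", _text="""
[prod]
token_name = ci-token
personal_access_token = secret-value
site_id = my-site
""")
print(demo)
```

The resulting dict maps onto the usual `PersonalAccessTokenAuth(**creds)` call (if I read the PAT signature correctly); whether this belongs in the library itself or just in a documented wrapper is exactly the open question of this request.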
| open | 2025-03-05T18:21:19Z | 2025-03-05T18:21:45Z | https://github.com/tableau/server-client-python/issues/1575 | [
"enhancement",
"needs investigation"
] | VDFaller | 0 |
3b1b/manim | python | 1,370 | Object has no attribute 'set_camera_orientation' | First off, I'm not sure whether this error is related more to Manim or Python so don't kill me for that. So I've tried to run this program from a YouTube video ([here it is](https://www.youtube.com/watch?v=oqDQwEvHGfE&ab_channel=Visualization101)), which is supposed to be an animation of the Lorenz Attractor.
**Code**:
```python
from manimlib.imports import *
class Lorenz_Attractor(ThreeDScene):
def construct(self):
axes = ThreeDAxes(x_min=-3.5,x_max=3.5,y_min=-3.5,y_max=3.5,z_min=0,z_max=6,axis_config={"include_tip": True,"include_ticks":True,"stroke_width":1})
dot = Sphere(radius=0.05,fill_color=BLUE).move_to(0*RIGHT + 0.1*UP + 0.105*OUT)
self.set_camera_orientation(phi=65 * DEGREES,theta=30*DEGREES,gamma = 90*DEGREES)
self.begin_ambient_camera_rotation(rate=0.05) #Start move camera
dtime = 0.01
numsteps = 30
self.add(axes,dot)
def lorenz(x, y, z, s=10, r=28, b=2.667):
x_dot = s*(y - x)
y_dot = r*x - y - x*z
z_dot = x*y - b*z
return x_dot, y_dot, z_dot
def update_trajectory(self, dt):
new_point = dot.get_center()
if get_norm(new_point - self.points[-1]) > 0.01:
self.add_smooth_curve_to(new_point)
traj = VMobject()
traj.start_new_path(dot.get_center())
traj.set_stroke(BLUE, 1.5, opacity=0.8)
traj.add_updater(update_trajectory)
self.add(traj)
def update_position(self,dt):
x_dot, y_dot, z_dot = lorenz(dot.get_center()[0]*10, dot.get_center()[1]*10, dot.get_center()[2]*10)
x = x_dot * dt/10
y = y_dot * dt/10
z = z_dot * dt/10
self.shift(x/10*RIGHT + y/10*UP + z/10*OUT)
dot.add_updater(update_position)
self.wait(420)
```
When I run this code, I get this:
**Error**:
```
Traceback (most recent call last):
File "C:\Users\Azelide\manim\manim.py", line 5, in <module>
manimlib.main()
File "C:\Users\Azelide\manim\manimlib\__init__.py", line 12, in main
scene.run()
File "C:\Users\Azelide\manim\manimlib\scene\scene.py", line 76, in run
self.construct()
File "lorenztrick.py", line 8, in construct
self.set_camera_orientation(phi=65 * DEGREES,theta=30*DEGREES,gamma = 90*DEGREES)
AttributeError: 'Lorenz_Attractor' object has no attribute 'set_camera_orientation'
```
This looks quite inexplicable to me, since `set_camera_orientation` does exist in the Manim library, and I assume the code should otherwise work fine; you can see the video for a demonstration.
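As an aside, the Lorenz dynamics in the script can be sanity-checked completely independently of manim and its camera API — below is a stdlib-only Euler integration of the same `lorenz()` derivatives (the start point and step count are my own choices; `dt` mirrors the script's `dtime`), just to confirm the math side behaves:

```python
# Stdlib-only sanity check of the Lorenz derivatives used above, with no
# rendering involved: integrate with plain Euler steps and confirm the
# trajectory stays finite/bounded (an attractor trajectory should).
def lorenz(x, y, z, s=10.0, r=28.0, b=2.667):
    return s * (y - x), r * x - y - x * z, x * y - b * z

def integrate(steps=1000, dt=0.01, start=(0.0, 1.0, 1.05)):
    x, y, z = start
    path = [(x, y, z)]
    for _ in range(steps):
        dx, dy, dz = lorenz(x, y, z)
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        path.append((x, y, z))
    return path

path = integrate()
print(len(path), path[-1])
```

If this stays bounded (it does), the failure really is confined to the rendering/camera side.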
Also, this is a different issue, but when I tried running the 8 scenes from example_scenes.py, they worked fine apart from the first scene (OpeningManimExample), which gives me this traceback after I see the transformation of z to z^2:
```
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-79 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-114 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-116 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-104 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-105 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-110 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-107 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-103 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-111 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-102 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-101 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-112 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-108 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-97 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-115 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-44 not recognized
warnings.warn(f"{ref} not recognized")
C:\Users\Azelide\manim\manimlib\mobject\svg\svg_mobject.py:132: UserWarning: g1-109 not recognized
warnings.warn(f"{ref} not recognized")
```
I'm running Python 3.9.1 on Windows 10 64-bit, in a virtualenv. I got exactly the same errors both with and without the virtualenv. I reinstalled everything a few times and added everything necessary to PATH in the environment variable settings; nothing has improved.
| closed | 2021-02-08T10:07:53Z | 2021-02-08T11:30:52Z | https://github.com/3b1b/manim/issues/1370 | [] | ghost | 3 |
writer/writer-framework | data-visualization | 291 | add a tooltip option on icon, button, ... component | Components that have an `ss-click` event should have a setting to configure a tooltip on mouseover.

If the tooltip field is empty, nothing appears. The components work as they do today.
#### Components
* Button
* Text
* Image
* Icon (even if it doesn't have the `ss-click`)
#### Design proposal
A simple tooltip above the button as in material ui.

| open | 2024-03-11T10:33:15Z | 2024-03-11T10:33:35Z | https://github.com/writer/writer-framework/issues/291 | [
"enhancement"
] | FabienArcellier | 0 |
igorbenav/fastcrud | pydantic | 88 | Is it possible to make is_deleted field redefined and optional? | https://github.com/igorbenav/fastcrud/blob/adf9ab08c6d7b39b303374a430623846bc6548d3/fastcrud/crud/fast_crud.py#L1697
It would be convenient to make this field overridable via self.is_deleted_column and optional, because in some cases a single deleted_at_column field is enough to tell that a record has been deleted. | closed | 2024-05-16T12:16:35Z | 2024-09-20T21:13:58Z | https://github.com/igorbenav/fastcrud/issues/88 | [
"enhancement",
"good first issue",
"FastCRUD Methods"
] | lisij86 | 3 |
thp/urlwatch | automation | 825 | --test-filter works but not with a normal execution | I just set up urlwatch with 4 urls.
The first one went great, and so did the second one; here is my urls.yaml:
```
name: "Ransomfeed"
kind: url
url: "https://www.ransomfeed.it/?country=ITA"
filter:
- xpath: "//div[@class='table-responsive']//tbody/tr[1]"
- html2text
- strip
---
name: "Hacker News"
kind: url
url: "https://thehackernews.com/"
filter:
- xpath: "//div[@class='body-post clear'][1]"
- html2text
- shellpipe: "tr -cd '\\11\\12\\15\\40-\\176'"
- strip
---
name: "Red Hot Cyber"
kind: url
url: "https://www.redhotcyber.com/"
filter:
- xpath: "(//article[contains(@class, 'elementor-post')])[1]"
- html2text
- strip
---
name: "Commissariato di PS Notizie"
kind: url
url: "https://www.commissariatodips.it/notizie/index.html"
filter:
- xpath: "//div[@class='media article articletype-0 topnews dotted-h'][1]"
- html2text
- strip
```

I just want to use urlwatch to be notified when specific sites publish a new article.
The thing is, all 4 URLs that I set up pass the test:
```
user@ubuntu:~$ urlwatch --test-filter 1
16078
2024-06-24
15:33:25
Compagnia Trasporti Integrati S.R.L
monti
Italy
user@ubuntu:~$ urlwatch --test-filter 2
New MOVEit Transfer Vulnerability Under Active Exploitation - Patch ASAP!
Jun 26, 2024
Vulnerability / Data Protection
A newly disclosed critical security flaw impacting Progress Software MOVEit Transfer is already seeing exploitation attempts in the wild shortly after details of the bug were publicly disclosed. The vulnerability, tracked as CVE-2024-5806 (CVSS score: 9.1), concerns an authentication bypass that impacts the following versions - From 2023.0.0 before 2023.0.11 From 2023.1.0 before 2023.1.6, and From 2024.0.0 before 2024.0.2 "Improper authentication vulnerability in Progress MOVEit Transfer (SFTP module) can lead to Authentication Bypass," the company said in an advisory released Tuesday. Progress has also addressed another critical SFTP-associated authentication bypass vulnerability (CVE-2024-5805, CVSS score: 9.1) affecting MOVEit Gateway version 2024.0.0. Successful exploitation of the flaws could allow attackers to bypass SFTP authentication and gain access to MOVEit Transfer and Gateway systems. watchTowr Labs has since published additional technical specifi
user@ubuntu:~$ urlwatch --test-filter 3
Cybercrime e Dark Web
150.000 dollari. Il costo di uno 0-Day UAF nel Kernel Linux sul Dark Web
Recentemente è emerso un allarme nel mondo della sicurezza informatica: un attore malintenzionato ha annunciato la vendita di una vulnerabilità 0-Day di tipo Use After Free (UAF) nel kernel Linux su un noto forum del dark web. Questa vulnerabilità, se sfruttata, permetterebbe l’esecuzione di codice con privilegi elevati, rappresentando una
RHC Dark Lab
26/06/2024
16:20
user@ubuntu:~$ urlwatch --test-filter 4
25.06.2024
POLIZIA DI STATO E ANCI PIEMONTE: PATTO PER LA CYBER SICUREZZA
È stato siglato presso la Questura di Torino il Protocollo d’Intesa tra il Centro Operativo Sicurezza Cibernetica della Polizia Postale Piemonte e...
```
But only the first and the third one actually send me something on Discord (I used a Discord webhook for reporting).
And also the console shows this:
```
user@ubuntu:~$ urlwatch
===========================================================================
01. NEW: Commissariato di PS Notizie
===========================================================================
---------------------------------------------------------------------------
NEW: Commissariato di PS Notizie ( https://www.commissariatodips.it/notizie/index.html )
---------------------------------------------------------------------------
--
```
And this is what I got from Discord:
```
===========================================================================
01. NEW: Commissariato di PS Notizie
===========================================================================
---------------------------------------------------------------------------
NEW: Commissariato di PS Notizie ( https://www.commissariatodips.it/notizie/index.html )
---------------------------------------------------------------------------
--
urlwatch 2.28, Copyright 2008-2023 Thomas Perl
Website: https://thp.io/2008/urlwatch/
Support urlwatch development: https://github.com/sponsors/thp
watched 4 URLs in 0 seconds
```
I don't know if I'm missing something important, but I can't seem to figure out the issue by looking at the wiki. | closed | 2024-06-26T20:07:46Z | 2024-06-27T14:26:38Z | https://github.com/thp/urlwatch/issues/825 | [] | SimoneFelici | 4 |
apify/crawlee-python | web-scraping | 929 | Prepare universal http interceptor for both static and browser crawlers tests | Currently in our tests, respx is sometimes used to mock HTTP traffic for static crawlers, while for PlaywrightCrawler mostly real requests are made.
It would be convenient, faster and more robust to create a fixture that can mock both.
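One stdlib-only direction for such a fixture (a sketch that assumes nothing about crawlee's internals) is to skip client-level mocking entirely and serve canned HTML from a throwaway local server that both the static crawlers and the browser can be pointed at:

```python
# Stdlib sketch of the "one mock for everything" idea: rather than patching
# each HTTP client separately, run a tiny local server that returns canned
# HTML, and point both static crawlers and the browser at its URL.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = b"<!DOCTYPE html><html><body>What a body!</body></html>"

class CannedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(CANNED)

    def log_message(self, *args):  # keep test output quiet
        pass

def start_canned_server():
    server = HTTPServer(("127.0.0.1", 0), CannedHandler)  # port 0: any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, f"http://127.0.0.1:{server.server_port}/"

server, url = start_canned_server()
html = urllib.request.urlopen(url).read().decode()
server.shutdown()
print(html)
```

A pytest fixture would yield the URL and call `server.shutdown()` on teardown; the Playwright side would then need no `page.route` patching at all.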
For Playwright-related browser requests this can be done using a custom **BrowserPool** and **page.route**, for example:
```python
class _StaticRedirectBrowserPool(BrowserPool):
"""BrowserPool for redirecting browser requests to static content."""
async def new_page(
self,
*,
page_id: str | None = None,
browser_plugin: BaseBrowserPlugin | None = None,
proxy_info: ProxyInfo | None = None,
) -> CrawleePage:
crawlee_page = await super().new_page(page_id=page_id, browser_plugin=browser_plugin, proxy_info=proxy_info)
await crawlee_page.page.route(
'**/*',
lambda route: route.fulfill(
status=200, content_type='text/plain', body='<!DOCTYPE html><html><body>What a body!</body></html>'
),
)
return crawlee_page
``` | closed | 2025-01-22T13:02:26Z | 2025-03-20T09:04:33Z | https://github.com/apify/crawlee-python/issues/929 | [
"t-tooling",
"debt"
] | Pijukatel | 1 |
jina-ai/serve | machine-learning | 5,648 | docs: explain idempotent requests | Idempotent requests are a standard of the REST POST and PUT api's wherein duplicate requests or re-tried requests for certain use cases have a single effect and outcome. For example, user account creation is unique and retries will return an error response. In a similar way asynchronous jobs that have the same identifier will only be scheduled to run once. This concept must be documented in jina so that the custom Executors are aware that retrying requests with side effects are carefully handled. | closed | 2023-02-02T14:58:33Z | 2023-05-18T00:17:39Z | https://github.com/jina-ai/serve/issues/5648 | [
"Stale"
] | girishc13 | 1 |
statsmodels/statsmodels | data-science | 8,973 | The answer of statsmodels.sandbox.stats.runs is not the same as R's. | My code of statsmodels:
```
'''Step 1: Importing libraries'''
from statsmodels.sandbox.stats.runs import runstest_1samp
import numpy as np
'''Step 2: Loading (Importing) the Data'''
seq = np.array([1,0,1,1,0,1,1,0,1,0,0,1,1,0,0
,0,1,0,1,0,1,0,0,0,0,1,1,1])
'''Step 3: Runs Test'''
res = runstest_1samp(seq)
print('Z-statistic value:', np.round(res[0], 3))
print('\nP-value:', np.round(res[1], 3))
```
result:
```
Z-statistic value: 0.578
P-value: 0.563
```
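For what it's worth, the gap looks consistent with a continuity correction: `runstest_1samp` has a `correction` argument that (as I read it) defaults to True and subtracts 0.5 for small samples, while R's `tseries::runs.test` applies no correction. Recomputing the runs test by hand from the textbook formulas reproduces both outputs:

```python
# Recompute the two-sided runs test by hand (stdlib only) to show that a
# 0.5 continuity correction accounts for the statsmodels-vs-R difference.
import math

seq = [1,0,1,1,0,1,1,0,1,0,0,1,1,0,0,
       0,1,0,1,0,1,0,0,0,0,1,1,1]

n1, n0, n = seq.count(1), seq.count(0), len(seq)
runs = 1 + sum(a != b for a, b in zip(seq, seq[1:]))

mu = 2 * n1 * n0 / n + 1
var = 2 * n1 * n0 * (2 * n1 * n0 - n) / (n ** 2 * (n - 1))
sd = math.sqrt(var)

z_plain = (runs - mu) / sd                                      # matches R
z_corr = math.copysign((abs(runs - mu) - 0.5) / sd, runs - mu)  # matches statsmodels

def p_two_sided(z):
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

print(round(z_corr, 3), round(p_two_sided(z_corr), 3))   # 0.578 0.563
print(round(z_plain, 3), round(p_two_sided(z_plain), 3)) # 0.77 0.441
```

So neither library is wrong; they just disagree on the small-sample correction.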
while using R:
```
# Step 1: Load necessary libraries
library(tseries)
# Step 2: Load the data
seq <- c(1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0,
0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1)
# Step 3: Runs Test
res <- runs.test(as.factor(seq))
cat("Z-statistic value:", round(res$statistic, 3), "\n")
cat("P-value:", round(res$p.value, 3), "\n")
```
result:
```
Z-statistic value: 0.77
P-value: 0.441
``` | closed | 2023-08-11T09:34:21Z | 2023-10-27T09:57:00Z | https://github.com/statsmodels/statsmodels/issues/8973 | [] | cqgx680 | 3 |
matplotlib/matplotlib | data-visualization | 28,957 | [Bug]: Possible defect in backend selection with Python 3.13 on Windows | ### Bug summary
I ran into this while upgrading some CI to Python 3.13.
Apparently a test as simple as `plt.subplots()` will crash specifically on Python 3.13 + Windows
I created a minimal reprod repo, where I run the same test successfully with the following combos:
- Python 3.13 + Ubuntu
- Python 3.12 + Windows
see https://github.com/neutrinoceros/reprod-mpl-win-python-3.13-bug/actions/runs/11257049452
From my non-expert perspective, there are multiple suspects:
- Python 3.13 itself (note that in my reprod I'm using a uv-managed binary from https://github.com/indygreg/python-build-standalone, but I also obtain the same failure with a binary from Github Actions)
- matplotlib
- my own test configuration: I'm not explicitly configuring a matplotlib backend, but never needed to until now, so I'm not sure this would be a solution or a workaround. However, I have verified that doing so *works*. Please feel free to close this issue if this is the recommended solution.
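For reference, the backend-forcing workaround can be a single line in a root conftest.py, so no individual test needs changing; `MPLBACKEND` is matplotlib's documented environment override, and the Agg backend has no Tcl/Tk dependency:

```python
# conftest.py at the repo root: force matplotlib's non-GUI Agg backend for
# the whole pytest run. MPLBACKEND is read by matplotlib at import time,
# and Agg needs no Tcl/Tk at all.
import os

os.environ["MPLBACKEND"] = "Agg"
```

This is a workaround rather than a root-cause fix; the Tcl path layout of the relocatable Python build still looks like the real culprit.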
### Code for reproduction
```Python
import matplotlib.pyplot as plt
def test_subplots_simple():
fig, ax = plt.subplots()
```
### Actual outcome
```
============================= test session starts =============================
platform win32 -- Python 3.13.0, pytest-8.3.3, pluggy-1.5.0
rootdir: D:\a\reprod-mpl-win-python-3.13-bug\reprod-mpl-win-python-3.13-bug
configfile: pyproject.toml
collected 1 item
tests\test_1.py F [100%]
================================== FAILURES ===================================
____________________________ test_subplots_simple _____________________________
def test_subplots_simple():
> fig, ax = plt.subplots()
tests\test_1.py:4:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv\Lib\site-packages\matplotlib\pyplot.py:1759: in subplots
fig = figure(**fig_kw)
.venv\Lib\site-packages\matplotlib\pyplot.py:1027: in figure
manager = new_figure_manager(
.venv\Lib\site-packages\matplotlib\pyplot.py:550: in new_figure_manager
return _get_backend_mod().new_figure_manager(*args, **kwargs)
.venv\Lib\site-packages\matplotlib\backend_bases.py:3507: in new_figure_manager
return cls.new_figure_manager_given_figure(num, fig)
.venv\Lib\site-packages\matplotlib\backend_bases.py:3512: in new_figure_manager_given_figure
return cls.FigureCanvas.new_manager(figure, num)
.venv\Lib\site-packages\matplotlib\backend_bases.py:1797: in new_manager
return cls.manager_class.create_with_canvas(cls, figure, num)
.venv\Lib\site-packages\matplotlib\backends\_backend_tk.py:483: in create_with_canvas
window = tk.Tk(className="matplotlib")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tkinter.Tk object .>, screenName = None, baseName = 'pytest'
className = 'matplotlib', useTk = True, sync = False, use = None
def __init__(self, screenName=None, baseName=None, className='Tk',
useTk=True, sync=False, use=None):
"""Return a new top level widget on screen SCREENNAME. A new Tcl interpreter will
be created. BASENAME will be used for the identification of the profile file (see
readprofile).
It is constructed from sys.argv[0] without extensions if None is given. CLASSNAME
is the name of the widget class."""
self.master = None
self.children = {}
self._tkloaded = False
# to avoid recursions in the getattr code in case of failure, we
# ensure that self.tk is always _something_.
self.tk = None
if baseName is None:
import os
baseName = os.path.basename(sys.argv[0])
baseName, ext = os.path.splitext(baseName)
if ext not in ('.py', '.pyc'):
baseName = baseName + ext
interactive = False
> self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
E _tkinter.TclError: Can't find a usable init.tcl in the following directories:
E C:/Users/runneradmin/AppData/Roaming/uv/python/cpython-3.13.0-windows-x86_64-none/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/python/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/python/library C:/Users/runneradmin/AppData/Roaming/uv/library C:/Users/runneradmin/AppData/Roaming/uv/tcl8.6.12/library C:/Users/runneradmin/AppData/Roaming/tcl8.6.12/library
E
E
E
E This probably means that Tcl wasn't installed properly.
C:\Users\runneradmin\AppData\Roaming\uv\python\cpython-3.13.0-windows-x86_64-none\Lib\tkinter\__init__.py:2459: TclError
=========================== short test summary info ===========================
FAILED tests/test_1.py::test_subplots_simple - _tkinter.TclError: Can't find a usable init.tcl in the following directories:
C:/Users/runneradmin/AppData/Roaming/uv/python/cpython-3.13.0-windows-x86_64-none/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/python/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/lib/tcl8.6 C:/Users/runneradmin/AppData/Roaming/uv/python/library C:/Users/runneradmin/AppData/Roaming/uv/library C:/Users/runneradmin/AppData/Roaming/uv/tcl8.6.12/library C:/Users/runneradmin/AppData/Roaming/tcl8.6.12/library
This probably means that Tcl wasn't installed properly.
============================= 1 failed in 12.18s ==============================
```
### Expected outcome
no failure
### Additional information
_No response_
### Operating system
Windows
### Matplotlib Version
3.9.2
### Matplotlib Backend
?
### Python version
3.13.0
### Jupyter version
_No response_
### Installation
None | closed | 2024-10-09T14:37:24Z | 2024-10-10T19:33:13Z | https://github.com/matplotlib/matplotlib/issues/28957 | [] | neutrinoceros | 5 |
ray-project/ray | pytorch | 50,951 | Does Ray Tune's AimLoggerCallback not support a repo server? | ### What happened + What you expected to happen

The Aim config is shown above, and the result in the Aim UI is below: only one trial is tracked in Aim. The other trial's experiment is not the same as the first trial's (my Ray Tune run name is hpo); its status stays "running", and it tracks no useful information even after the trial completes. When I set the Aim repo to a local path, it works well. Does the AimLoggerCallback not support a repo server?

### Versions / Dependencies
ray 2.40.0
aim 3.19.0
### Reproduction script
```python
def run_config(args):
    aim_server = "aim://172.20.32.185:30058"
    print("aim server is :", aim_server)
    return RunConfig(
        name=args.name,
        callbacks=[AimLoggerCallback(
            repo=aim_server,
            system_tracking_interval=None
        )],
        storage_path=args.storage_path,
        log_to_file=True
    )
```
### Issue Severity
High: It blocks me from completing my task. | closed | 2025-02-27T09:18:48Z | 2025-02-28T08:41:45Z | https://github.com/ray-project/ray/issues/50951 | [
"bug",
"tune",
"triage"
] | somunslotus1 | 1 |
RomelTorres/alpha_vantage | pandas | 231 | CryptoCurrencies.get_digital_currency_exchange_rate doesn't work | This function sends the API call:
https://www.alphavantage.co/query?function=CURRENCY_EXCHANGE_RATE&symbol=BTC&market=EUR&apikey=XXXX&datatype=json
The right one would be:
https://www.alphavantage.co/query?function=CURRENCY_EXCHANGE_RATE&from_currency=BTC&to_currency=EUR&apikey=XXXX&datatype=json
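Building the corrected query with the stdlib (the API key is a placeholder, as above) confirms which parameter names end up in the URL:

```python
# Build the corrected CURRENCY_EXCHANGE_RATE URL with the stdlib; "XXXX"
# is a placeholder API key, as in the examples above.
from urllib.parse import urlencode

params = {
    "function": "CURRENCY_EXCHANGE_RATE",
    "from_currency": "BTC",
    "to_currency": "EUR",
    "apikey": "XXXX",
    "datatype": "json",
}
url = "https://www.alphavantage.co/query?" + urlencode(params)
print(url)
```

The endpoint wants `from_currency`/`to_currency`, not the `symbol`/`market` pair the wrapper currently emits.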
To fix this, the keyword arguments of CryptoCurrencies.get_digital_currency_exchange_rate should be from_currency and to_currency (line 63 in cryptocurrencies.py).
Then the return value should be changed to:
FUNCTION_KEY, 'Realtime Currency Exchange Rate', None
But I'm not sure about Meta_data = None; I just couldn't figure out a second entry in the dict.
With these changes it works for the default output format, but not for output_format = 'pandas'. Maybe someone can fix this completely. | closed | 2020-06-11T14:01:18Z | 2021-01-19T21:32:28Z | https://github.com/RomelTorres/alpha_vantage/issues/231 | [
"bug",
"enhancement"
] | mariohanser | 3 |
huggingface/datasets | machine-learning | 7,228 | Composite (multi-column) features | ### Feature request
Structured data types (graphs etc.) might often be most efficiently stored as multiple columns, which then need to be combined during feature decoding.
Although it is currently possible to nest features as structs, my impression is that in particular when dealing with e.g. a feature composed of multiple numpy array / ArrayXD's, it would be more efficient to store each ArrayXD as a separate column (though I'm not sure by how much)
Perhaps specification / implementation could be supported by something like:
```
features = Features(**{("feature0", "feature1"): Features(feature0=Array2D((None, 10), dtype="float32"), feature1=Array2D((None, 10), dtype="float32"))})
```
### Motivation
Defining efficient composite feature types based on numpy arrays for representing data such as graphs with multiple node and edge attributes is currently challenging.
### Your contribution
Possibly able to contribute | open | 2024-10-14T23:59:19Z | 2024-10-15T11:17:15Z | https://github.com/huggingface/datasets/issues/7228 | [
"enhancement"
] | alex-hh | 0 |
coqui-ai/TTS | python | 2,719 | [Bug] Tacotron2-DDC denial of service + bizarre behavior when input ends with "?!?!" | ### Describe the bug
Before I start, I just want to say this is the funniest bug I've come across in my 20+ years of software development.
To keep the issue a bit more readable, I've put the audio uploads in detail tags. Click on the arrow by each sample to hear it.
---
Adding on `?!?!` to the end of a prompt using Tacotron2-DDC causes the decoder to trail off (hence the "DOS" aspect of this bug). After `max_decoder_steps` is exceeded, the audio gets dumped to disk and the results are... well, somehow both nightmare fuel and the most hilarious sounds at the same time.
After the original prompt is finished speaking, it trails off into repeating bizarre remnants of the prompt over and over, akin to a baby speaking, or someone having a mental break down. In some cases it sounds much more, uh... explicit, depending on what was in the prompt.
Note how it says the prompt correctly before trailing off.
<details><summary><code>squibbidy bop boop doop bomp pewonkus dinkus womp womp womp deebop scoop top lop begomp?!?!?!</code></summary>
<p>
[boopdoop.webm](https://github.com/coqui-ai/TTS/assets/885648/233570c2-0b5c-48bb-8a84-a92605a127de)
</p>
</details>
It appears the question marks / bangs must come at the end of the input text; being present in the middle of the prompt seems to work fine.
<details><summary><code>before the question marks?!?!?!? after them</code></summary>
<p>
[middle.webm](https://github.com/coqui-ai/TTS/assets/885648/409915f4-e49f-4e63-98c4-e7de2fbd99f9)
</p>
</details>
Conversely, removing ` after them` from the prompt causes the bug, but it completes before `max_decoder_steps` is exceeded, suggesting that the decoder doesn't go off into infinity but has _some_ point of termination, albeit exponentially beyond the input text length.
<details><summary><code>before the question marks?!?!?!?!</code></summary>
<p>
[just-before.webm](https://github.com/coqui-ai/TTS/assets/885648/5220c870-a83d-4e21-a962-0258d3aa8029)
</p>
</details>
Further, it seems as little as `?!?!` causes the bug. `?!` and `?!?` do not.
<details><summary><code>what are you doing today?!</code></summary>
<p>
[wayd_1.webm](https://github.com/coqui-ai/TTS/assets/885648/33223b5e-68ca-488f-a52c-458940c90e1c)
</p>
</details>
<details><summary><code>what are you doing today?!?</code></summary>
<p>
[wayd_2.webm](https://github.com/coqui-ai/TTS/assets/885648/6210adf5-a62b-4fb5-a3aa-c8fb9786d9ac)
</p>
</details>
<details><summary><code>what are you doing today?!?!</code></summary>
<p>
[wayd_3.webm](https://github.com/coqui-ai/TTS/assets/885648/763ccb7a-af24-4984-aed0-9dd6d79e3094)
</p>
</details>
Some inputs, however, are completely unaffected.
<details><summary><code>woohoo I'm too cool for school weehee you're too cool for me?!?!?!</code></summary>
<p>
[in-situ-bug.webm](https://github.com/coqui-ai/TTS/assets/885648/31171d73-abcf-4e73-9d15-cfe8e8edcef0)
</p>
</details>
### Examples
Here are more examples, just because... well, why not.
<details><summary><code>blahblahblahblahblah?!?!?!</code></summary>
<p>
[blahblahblah.webm](https://github.com/coqui-ai/TTS/assets/885648/22c467c0-26b7-4d7c-8da6-1e96e03b11a7)
</p>
</details>
<details><summary><code>ah ah ah let's count to ten AH AH AH LET'S COUNT TO TEN?!?!?!</code></summary>
<p>
[counttoten.webm](https://github.com/coqui-ai/TTS/assets/885648/442c72b9-f16e-4457-b6c5-d54ee15e2a28)
</p>
</details>
<details><summary><code>holy smokes it's an artichoke gone broke woah ho ho?!?!?!</code></summary>
<p>
[artichoke.webm](https://github.com/coqui-ai/TTS/assets/885648/d700acf8-4b68-448d-8311-a88d9185fe40)
</p>
</details>
<details><summary><code>hahahahaha reeeeeeeeeeeee maaaaaaaaaaaaa?!?!?!</code></summary>
<p>
[hahahaha.webm](https://github.com/coqui-ai/TTS/assets/885648/ae7ec6ff-7d0e-4f29-9bab-31e495a5c28b)
</p>
</details>
<details><summary><code>scooby dooby doo where are you we've got some work to do now?!?!?!?!?!</code></summary>
<p>
[scoobydoo.webm](https://github.com/coqui-ai/TTS/assets/885648/a6131b66-0cdf-4068-bb5b-25816f2b1335)
</p>
</details>
<details><summary><code>ayyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy ah ah ah le-meow u r so dang funny amirite bros?!?!?!</code></summary>
<p>
[ayyy_bugged.webm](https://github.com/coqui-ai/TTS/assets/885648/e5095500-6063-4d79-a8c4-bdae2e135547)
</p>
</details>
### To Reproduce
Generate some speech with the tacotron2-ddc model with `?!?!?!` at the end.
```shell
tts \
--out_path output/hello.wav \
--model_name "tts_models/en/ljspeech/tacotron2-DDC" \
--text "holy smokes it's an artichoke gone broke woah ho ho?!?!?!"
```
### Expected behavior
Just speaking the input prompt and ending, not... whatever it's doing now.
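Until this is fixed, a possible preprocessing workaround (editorial suggestion, not from the report): collapse trailing runs of `?`/`!` longer than three characters before handing text to the model, since `?!` and `?!?` are fine but `?!?!` and longer trigger the bug:

```python
import re

def collapse_trailing_punct(text: str, max_run: int = 3) -> str:
    """Collapse a trailing run of '?'/'!' longer than max_run down to a
    single terminator, e.g. 'hello?!?!?!' -> 'hello?'."""
    m = re.search(r"[?!]+\s*$", text)
    if m and len(m.group(0).strip()) > max_run:
        return text[: m.start()] + m.group(0).strip()[0]
    return text

print(collapse_trailing_punct("what are you doing today?!?!"))  # what are you doing today?
print(collapse_trailing_punct("what are you doing today?!?"))   # unchanged
```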
### Logs
```console
$ tts --out_path output/hello.wav --model_name "tts_models/en/ljspeech/tacotron2-DDC" --text "holy smokes it's an artichoke gone broke woah ho ho?!?!?!"
> tts_models/en/ljspeech/tacotron2-DDC is already downloaded.
> vocoder_models/en/ljspeech/hifigan_v2 is already downloaded.
> Using model: Tacotron2
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Model's reduction rate `r` is set to: 1
> Vocoder Model: hifigan
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Generator Model: hifigan_generator
> Discriminator Model: hifigan_discriminator
Removing weight norm...
> Text: holy smokes it's an artichoke gone broke woah ho ho?!?!?!
> Text splitted to sentences.
["holy smokes it's an artichoke gone broke woah ho ho?!?!?!"]
> Decoder stopped with `max_decoder_steps` 10000
> Processing time: 77.33241438865662
> Real-time factor: 0.662833806507867
> Saving output to output/hello.wav
```
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1+cu117",
"TTS": "0.14.3",
"numpy": "1.23.5"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.6",
"version": "#2311-Microsoft Tue Nov 08 17:09:00 PST 2022"
}
}
```
To be clear, this is on WSL1 on Windows, so things are running under "Ubuntu".
### Additional context
I'm unsure if other models are affected, I haven't tried. | closed | 2023-06-28T15:58:21Z | 2023-06-29T11:36:26Z | https://github.com/coqui-ai/TTS/issues/2719 | [
"bug"
] | Qix- | 3 |
writer/writer-framework | data-visualization | 4 | Remove pandas and plotly from dependency | Congratulations on the release!
Is it possible to remove pandas and plotly from the dependencies? | closed | 2023-04-29T23:16:12Z | 2023-05-01T15:05:10Z | https://github.com/writer/writer-framework/issues/4 | [] | mzy2240 | 1 |
kennethreitz/responder | flask | 280 | Uvicorn 0.3.30 only accepts keyword arguments. | Looks like the argument `app` no longer exists in uvicorn's run function and only kwargs can be passed.
https://github.com/kennethreitz/responder/blob/be56e92d65ca59a7d532016955127328ab38cdd8/responder/api.py#L656
```
def run(**kwargs):
```
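For illustration (a toy stand-in, not uvicorn's actual implementation), a `**kwargs`-only signature rejects positional arguments, which is why the call site has to switch to keywords:

```python
def run(**kwargs):
    # A **kwargs-only signature: any positional argument raises TypeError.
    return kwargs.get("app")

try:
    run("myapp")  # the old positional style
except TypeError as exc:
    print("positional call fails:", exc)

print(run(app="myapp"))  # keyword style works: prints 'myapp'
```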
https://github.com/encode/uvicorn/blob/9f0ef8a9a90173fc39da34b0f56a633f40434b7d/uvicorn/main.py#L173 | closed | 2019-01-22T12:08:03Z | 2019-01-22T16:26:13Z | https://github.com/kennethreitz/responder/issues/280 | [] | jeremiahpslewis | 2 |
recommenders-team/recommenders | deep-learning | 1,954 | [BUG] SAR needs to be modified due to a breaking change in scipy | ### Description
<!--- Describe your issue/bug/request in detail -->
With scipy 1.10.1, the item similarity matrix is a dense matrix
```
print(type(model.item_similarity))
print(type(model.user_affinity))
print(type(model.item_similarity) == np.ndarray)
print(type(model.item_similarity) == scipy.sparse._csr.csr_matrix)
print(model.item_similarity.shape)
print(model.item_similarity)
<class 'numpy.ndarray'>
<class 'scipy.sparse._csr.csr_matrix'>
True
False
(1646, 1646)
[[1. 0.10650888 0.03076923 ... 0. 0. 0. ]
[0.10650888 1. 0.15104167 ... 0. 0.00729927 0.00729927]
[0.03076923 0.15104167 1. ... 0. 0. 0.01190476]
...
[0. 0. 0. ... 1. 0. 0. ]
[0. 0.00729927 0. ... 0. 1. 0. ]
[0. 0.00729927 0.01190476 ... 0. 0. 1. ]]
```
but with scipy 1.11.1 the item similarity matrix is sparse
```
print(type(model.item_similarity))
print(type(model.user_affinity))
type(model.item_similarity) == np.ndarray
type(model.item_similarity) == scipy.sparse._csr.csr_matrix
print(model.item_similarity.shape)
<class 'numpy.ndarray'>
<class 'scipy.sparse._csr.csr_matrix'>
()
```
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
Related to https://github.com/microsoft/recommenders/issues/1951
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
We found that the issue was that during a division in Jaccard, scipy changed the type. We talked to the authors of scipy and they told us that they made a breaking change in 1.11.0 https://github.com/scipy/scipy/issues/18796#issuecomment-1619125257
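A defensive pattern for this class of problem (editorial sketch, not the actual SAR fix): normalize the result of element-wise sparse operations to a known type, since the returned type can differ across scipy versions:

```python
import numpy as np
from scipy import sparse

a = sparse.csr_matrix(np.eye(3))
result = a.multiply(0.5)  # element-wise op; returned type varies by version

# Normalize to a dense ndarray regardless of what scipy handed back.
dense = result.toarray() if sparse.issparse(result) else np.asarray(result)
print(type(dense).__name__, dense.shape)  # ndarray (3, 3)
```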
| closed | 2023-07-03T16:41:19Z | 2024-04-30T04:52:05Z | https://github.com/recommenders-team/recommenders/issues/1954 | [
"bug"
] | miguelgfierro | 9 |
hootnot/oanda-api-v20 | rest-api | 199 | question for hootnot 2 | #### CONTEXT
1. imagine a situation where _multiple_ trades are open in the _same subaccount_, _same instrument_, and _unit quantity_... Like this:

2. attempting to close all but one of these open trades will result in order being cancelled for FIFO reasons.
#### QUESTION
Do you know any tricks to test which trade is "allowed" to be closed per US FIFO law? Or stated differently, given a list of trades, can we determine which ones are NOT eligible to be closed?
I understand I could query for the lowest trade_id among such a group, and that should, by definition, work. But I thought _maybe_ you might know a better way.
This is all the info I see when I query for a trade... nothing lends itself to "can be closed", or I'm just blind
```
{
"trade": {
"id": "148",
"instrument": "EUR_USD",
"price": "1.09190",
"openTime": "2023-01-26T01:48:04.744487039Z",
"initialUnits": "-1",
"initialMarginRequired": "0.0218",
"state": "OPEN",
"currentUnits": "-1",
"realizedPL": "0.0000",
"financing": "0.0001",
"dividendAdjustment": "0.0000",
"unrealizedPL": "0.0057",
"marginUsed": "0.0217"
},
"lastTransactionID": "266"
}
```
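A sketch of the "lowest trade id" approach mentioned above (editorial; plain Python over the trade dicts the API returns, not an oanda-api-v20 call): the FIFO-eligible trade is the earliest-opened OPEN trade for the instrument, i.e. the one with the minimum numeric id:

```python
def fifo_eligible_trade(trades, instrument):
    """Return the OPEN trade that FIFO rules allow closing first:
    the lowest-id open trade for the given instrument."""
    candidates = [
        t for t in trades
        if t["instrument"] == instrument and t["state"] == "OPEN"
    ]
    return min(candidates, key=lambda t: int(t["id"]), default=None)

trades = [
    {"id": "150", "instrument": "EUR_USD", "state": "OPEN"},
    {"id": "148", "instrument": "EUR_USD", "state": "OPEN"},
    {"id": "149", "instrument": "EUR_USD", "state": "CLOSED"},
]
print(fifo_eligible_trade(trades, "EUR_USD")["id"])  # 148
```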
| closed | 2023-02-01T00:04:34Z | 2023-02-01T12:18:27Z | https://github.com/hootnot/oanda-api-v20/issues/199 | [
"question"
] | traveller1011 | 1 |
indico/indico | sqlalchemy | 6,568 | Improve "Link existing booking" feature | When you click on "Link existing booking", you are redirected to the bookings page:

There are two things we could improve here:
- Currently, the bookings page uses the current date. It'd be better if it used the date of the event (or the first day in case of multi-day events)
- The bookings page currently shows all bookings which makes it hard to find a booking which can be linked. We should filter the bookings using the linked event to only show relevant bookings.
cc @Moliholy @OmeGak | open | 2024-10-02T07:54:55Z | 2025-03-18T19:15:18Z | https://github.com/indico/indico/issues/6568 | [
"enhancement"
] | tomasr8 | 12 |
openapi-generators/openapi-python-client | fastapi | 554 | Re-Export apis like we do models | **Is your feature request related to a problem? Please describe.**
Sometimes there are a lot of APIs to import even to do a simple thing
**Describe the solution you'd like**
Models are re-exported, allowing us to
```python
import my_openapi_lib.models as m
from my_openapi_lib.api.do_the_things import create_thing_to_do
create_thing_to_do.sync(client=c, json_body=m.ThingModel(...))
# I'd like to be able to do
from my_openapi_lib import api
api.do_the_things.create_thing_to_do.sync(client=c, json_body=m.ThingModel(...))
# and
from my_openapi_lib.api import do_the_things
do_the_things.create_thing_to_do.sync(client=c, json_body=m.ThingModel(...))
```
| closed | 2021-12-22T09:06:32Z | 2023-08-13T01:50:14Z | https://github.com/openapi-generators/openapi-python-client/issues/554 | [
"✨ enhancement"
] | rtaycher | 0 |
ultralytics/ultralytics | pytorch | 19,151 | locking on to a single person in a multi-person frame | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, I am using yolo11-pose to assess power lifts (squats) against standards and decide whether or not the lift passes or fails based on keypoints in the critical frame.
In dummy videos with only one person (the lifter) in the frame, the model and application perform perfectly.
However the application is intended for competition, and in competition, while there is only one lifter on the platform at a time, there are multiple other people (spotters) visible in the frame. This creates undesirable model behavior.
The desired behavior is that the model should only focus on the lifter and assess the quality of his lift. While the result derived is correct, the skeleton overlay is unstable: some times it correctly overlays the skeleton on the lifter, at other times during the lift the skeleton may be temporarily overlaid against a spotter or other person in frame. This is a problem. I have attached images to illustrate.
I have tried to overcome this by specifying the lifter's person id number:
```
results = model.track(
source=video_file,
device=device,
show=False,
# conf=0.7,
save=True,
max_det=1
)
```
I have also tried to exclude ids which are erroneously annotated, reduce the ROI, experimented with increasing euclidean distance, and confidence weights.
```
lifter_selector:
expected_center: [0.5, 0.5]
roi: [0.3, 0.3, 0.7, 0.7]
distance_weight: 2.0
confidence_weight: 0.8
lifter_id: 4
excluded_ids: [1, 7, 10]
```
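A hedged sketch (hypothetical, not the Ultralytics API) of the selector logic the config above describes: score each detection by distance to the expected centre and by confidence, then keep the best one:

```python
import math

def select_lifter(detections, expected_center=(0.5, 0.5),
                  distance_weight=2.0, confidence_weight=0.8):
    """Pick the detection closest to the expected frame centre, weighted
    against its confidence. Each detection is (cx, cy, confidence) in
    normalized image coordinates."""
    def score(det):
        cx, cy, conf = det
        dist = math.hypot(cx - expected_center[0], cy - expected_center[1])
        return confidence_weight * conf - distance_weight * dist
    return max(detections, key=score)

dets = [(0.5, 0.55, 0.9), (0.1, 0.2, 0.95), (0.9, 0.8, 0.85)]
print(select_lifter(dets))  # (0.5, 0.55, 0.9)
```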
I am having no success, and I hope that someone can help me to find a way to "fix" the bounding box and skeleton overlay to the lifter and prevent those annotations on non-lifters on the platform.
thank you
correct

incorrect

incorrect

### Additional
Please let me know if you'd like me to share the code via a GitHub repo. I am happy to do so. I am really hoping you can help me and I thank you in advance. Please let me know if my explanation is not clear or if you require more information. | closed | 2025-02-10T05:05:48Z | 2025-02-15T02:37:39Z | https://github.com/ultralytics/ultralytics/issues/19151 | [
"question",
"track",
"pose"
] | mkingopng | 8 |
Kanaries/pygwalker | plotly | 573 | [DEV-1122] [feature proposal] Add native support for pandas API | For now, pygwalker kernel computation uses sql + duckdb for data queries. Another approach might be using the native pandas API for all those computations. Benefits of this implementation include:
+ Test and switch to different high-performance dataframe library, like modin, polars.
It would be even better for the community if developers could customize their own query engines.
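A sketch of what a pluggable query-engine abstraction could look like (editorial; a hypothetical interface, not pygwalker's actual API):

```python
from abc import ABC, abstractmethod
from collections import defaultdict

class QueryEngine(ABC):
    """Hypothetical pluggable computation backend for chart queries."""
    @abstractmethod
    def sum_by(self, rows, key, measure):
        ...

class PurePythonEngine(QueryEngine):
    """Toy backend; a real one might delegate to pandas, modin, or polars."""
    def sum_by(self, rows, key, measure):
        totals = defaultdict(float)
        for row in rows:
            totals[row[key]] += row[measure]
        return dict(totals)

engine = PurePythonEngine()
rows = [{"city": "NYC", "sales": 3}, {"city": "NYC", "sales": 4}, {"city": "LA", "sales": 5}]
print(engine.sum_by(rows, "city", "sales"))  # {'NYC': 7.0, 'LA': 5.0}
```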
<sub>[DEV-1122](https://linear.app/kanaries/issue/DEV-1122/[feature-proposal]-add-native-support-for-pandas-api)</sub> | closed | 2024-06-12T03:52:06Z | 2024-12-13T02:35:09Z | https://github.com/Kanaries/pygwalker/issues/573 | [
"Vote if you want it",
"proposal"
] | ObservedObserver | 0 |
JoeanAmier/TikTokDownloader | api | 401 | How is the a_bogus parameter generated for POST requests? | How is the `a_bogus` parameter generated for POST requests? Presumably it is derived from the request's data and params. | open | 2025-02-05T03:29:16Z | 2025-02-10T02:06:53Z | https://github.com/JoeanAmier/TikTokDownloader/issues/401 | [] | aduoer | 1 |
davidsandberg/facenet | computer-vision | 813 | How to modify the generative script to take an input image? | To calculate the latent variable for it (by running the encoder), modify the latent variable by adding/subtracting an attribute vector, and then generate a new image by running the decoder | open | 2018-07-14T05:20:01Z | 2018-07-14T05:20:01Z | https://github.com/davidsandberg/facenet/issues/813 | [] | MdAshrafulAlam | 0 |
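The steps described in that issue can be sketched with toy stand-ins (editorial; `encoder`, `decoder`, and the attribute vector here are fake placeholders, not facenet's networks):

```python
from statistics import mean

def encoder(image):
    # Toy latent: one value per channel (hypothetical stand-in for the encoder).
    return [mean(px[c] for row in image for px in row) for c in range(3)]

def decoder(latent):
    # Toy reconstruction: a 2x2 image repeating the latent as each pixel.
    return [[list(latent) for _ in range(2)] for _ in range(2)]

attribute_vector = [0.1, -0.2, 0.3]  # hypothetical "attribute" direction

image = [[[0.5, 0.5, 0.5], [0.1, 0.2, 0.3]],
         [[0.9, 0.8, 0.7], [0.4, 0.4, 0.4]]]
z = encoder(image)                                    # 1) run the encoder
z_new = [a + b for a, b in zip(z, attribute_vector)]  # 2) shift the latent
new_image = decoder(z_new)                            # 3) run the decoder
print(len(new_image), len(new_image[0]))              # 2 2
```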
open-mmlab/mmdetection | pytorch | 11,887 | Are there plans to rewrite the documentation and tutorials? | **Describe the feature**
**Motivation**
A clear and concise description of the motivation of the feature.
Ex1. It is inconvenient when \[....\].
Ex2. There is a recent paper \[....\], which is very helpful for \[....\].
**Related resources**
If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful.
**Additional context**
Add any other context or screenshots about the feature request here.
If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.
| open | 2024-07-29T02:23:47Z | 2024-07-29T03:38:07Z | https://github.com/open-mmlab/mmdetection/issues/11887 | [] | liuxiaobina | 0 |
sqlalchemy/alembic | sqlalchemy | 1,431 | script_location PATH config is not OS agnostic | **Describe the bug**
This setting is different if you are running on Windows OS or Linux OS
```
script_location = app/db/migrations
```
**Expected behavior**
I am developing on a Windows machine and I set the value of the setting to "app\db\migrations"
But then I build a Docker image, deploy it, and to run it on the server I have to fix the config file to "app/db/migrations"
I find myself changing this quite frequently; is there a way to define this with multi-OS support in mind?
Thanks
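A workaround sketch (editorial, not from the report): when configuring Alembic programmatically, building the value with `pathlib` keeps it OS-agnostic, since `as_posix()` always emits forward slashes regardless of platform:

```python
from configparser import ConfigParser
from pathlib import Path

# Hypothetical programmatic config: pathlib joins with the native separator,
# and as_posix() normalizes to forward slashes for the ini file.
script_location = Path("app") / "db" / "migrations"

cfg = ConfigParser()
cfg["alembic"] = {"script_location": script_location.as_posix()}
print(cfg["alembic"]["script_location"])  # app/db/migrations
```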
**To Reproduce**
Please try to provide a [Minimal, Complete, and Verifiable](http://stackoverflow.com/help/mcve) example, with the migration script and/or the SQLAlchemy tables or models involved.
See also [Reporting Bugs](https://www.sqlalchemy.org/participate.html#bugs) on the website.
```py
# Insert code here
```
**Error**
```
# Copy error here. Please include the full stack trace.
```
**Versions.**
- OS:
- Python:
- Alembic:
- SQLAlchemy:
- Database:
- DBAPI:
**Additional context**
<!-- Add any other context about the problem here. -->
**Have a nice day!**
| closed | 2024-02-26T10:42:34Z | 2024-03-18T21:29:59Z | https://github.com/sqlalchemy/alembic/issues/1431 | [
"documentation",
"configuration"
] | developer992 | 7 |
streamlit/streamlit | data-visualization | 10,678 | Clickable Container | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Hi,
it would be very nice if I could make a st.container clickable (with an on_click event), so I could build clickable boxes with awesome content. :-)
Best Regards
### Why?
Because I have already tried to build a tile with streamlit, and it is very very difficult. That would make it much easier.
### How?
_No response_
### Additional Context
_No response_ | open | 2025-03-07T12:00:23Z | 2025-03-07T12:56:13Z | https://github.com/streamlit/streamlit/issues/10678 | [
"type:enhancement",
"feature:st.container",
"area:events"
] | alex-bork | 1 |
plotly/dash | flask | 2,960 | Dashtable case-insensitive filter causes exception when the column contains null value | If a column in the table contains null value at some rows and you try to do a case insensitive filtering like "ine foo', a javascript exception "Cannot read property 'toString' of null" exception" will occur. Apparently it is caused by the line "lhs.toString().toUpperCase()" of fnEval() method in relational.ts failed to check whether lhs (i.e. the cell value) is null or not.
A sample app to reproduce the problem.
```
from dash import html
from dash import dash_table
import pandas as pd
from collections import OrderedDict
import dash
app = dash.Dash(__name__)
df = pd.DataFrame(OrderedDict([
('climate', [None, 'Snowy', 'Sunny', 'Rainy']),
('temperature', [13, 43, 50, 30]),
('city', ['NYC', None, 'Miami', 'NYC'])
]))
app.layout = html.Div([
dash_table.DataTable(
id='table',
data=df.to_dict('records'),
columns=[
{'id': 'climate', 'name': 'climate'},
{'id': 'temperature', 'name': 'temperature'},
{'id': 'city', 'name': 'city'},
],
filter_action="native",
),
html.Div(id='table-dropdown-container')
])
if __name__ == '__main__':
app.run_server(debug=True, port=8051)
```
Run the app, in the 'city' column header of the table, type in 'ine foo' and hit enter, which should reproduce the problem.
I had a PR for fixing this bug for an old version of dash-table at https://github.com/plotly/dash-table/pull/935/files
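The null guard the report describes can be sketched as follows (editorial; expressed in Python for brevity — the actual fix would live in relational.ts, and this is not the linked PR's code):

```python
def matches_insensitive(lhs, needle):
    """Null-safe case-insensitive match, mirroring the guard suggested for
    relational.ts: skip null cells before any toString()/upper-casing."""
    if lhs is None:
        return False
    return needle.upper() in str(lhs).upper()

print(matches_insensitive(None, "NYC"))     # False, instead of raising
print(matches_insensitive("Miami", "mia"))  # True
```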
Environment:
```
dash 2.17.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- OS: Ubuntu 22.04
- Browser: Chrome
- Version: 127.0.6533.119
 | open | 2024-08-23T06:34:31Z | 2024-09-04T14:20:51Z | https://github.com/plotly/dash/issues/2960 | [
"bug",
"dash-data-table",
"P3"
] | xwk | 1 |
iperov/DeepFaceLab | deep-learning | 603 | Adjusting Blur Mask Modifier Changes The Brightness | I've never noticed this before, but I just installed the 2/3/20 nvidia version. When I adjust the blur higher or lower, the face gets significantly brighter or darker depending which way I go. Here's a video I just shared on Google Drive illustrating this. Maybe this has happened in past versions and I never noticed? But I think I would have. Let me know if this is normal. Here's the video.... https://drive.google.com/file/d/1L77hcgAt8zcGM6jpOB53e6TRDslccivJ/view?usp=sharing | open | 2020-02-04T07:01:32Z | 2023-06-08T21:24:08Z | https://github.com/iperov/DeepFaceLab/issues/603 | [] | kilerb | 1 |
open-mmlab/mmdetection | pytorch | 11,476 | AssertionError: MMCV==1.7.2 is used but incompatible. Please install mmcv>=2.0.0rc4, <2.2.0. | Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has not been fixed in the latest version.
**Describe the bug**
A clear and concise description of what the bug is.
**Reproduction**
1. What command or script did you run?
```none
A placeholder for the command.
```
2. Did you make any modifications on the code or config? Did you understand what you have modified?
3. What dataset did you use?
**Environment**
1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here.
2. You may add addition that may be helpful for locating the problem, such as
- How you installed PyTorch \[e.g., pip, conda, source\]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
**Error traceback**
If applicable, paste the error trackback here.
```none
A placeholder for trackback.
```
**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
| open | 2024-02-16T02:18:25Z | 2024-05-06T11:18:38Z | https://github.com/open-mmlab/mmdetection/issues/11476 | [] | shinparkg | 2 |
roboflow/supervision | computer-vision | 752 | `LineZone` 2.0 | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
TODO
### Use case
TODO
### Additional
- delivery per class in/out count
- new ways to visualize results | closed | 2024-01-18T20:45:41Z | 2024-01-29T13:08:14Z | https://github.com/roboflow/supervision/issues/752 | [
"enhancement",
"Q1.2024",
"planning"
] | SkalskiP | 3 |
deepset-ai/haystack | nlp | 8,766 | Hatch Test Environment Dependency Resolution Fails for Python 3.10+ | **Describe the bug**
Running `hatch run test:lint` in the default Haystack `test` environment fails for Python 3.10+ due to dependency resolution issues with `llvmlite`. The error occurs because `openai-whisper` (included in the default test environment) depends on `numba==0.53.1`, which in turn requires `llvmlite==0.36.0`. The version of `llvmlite` is incompatible with Python versions >= 3.10.
**Error message**
```
× Failed to build `llvmlite==0.36.0`
╰─▶ Call to `setuptools.build_meta:__legacy__.build_wheel` failed (exit status: 1)
[stderr]
RuntimeError: Cannot install on Python version 3.11.11; only versions >=3.6,<3.10 are supported.
hint: This usually indicates a problem with the package or the build environment.
help: `llvmlite` (v0.36.0) was included because `openai-whisper` (v20240930) depends on `numba` (v0.53.1) which depends on `llvmlite`.
```
**Expected behavior**
The test environment should successfully resolve dependencies and execute `hatch run test:lint` on all Python versions supported by Haystack (`>=3.8,<3.13`).
**Additional context**
- The issue occurs only for Python 3.10+ as `llvmlite==0.36.0` supports Python < 3.10.
- Dependencies like `llvmlite` and `numba` are resolved automatically and are not explicitly included in the `extra-dependencies` section of the `test` environment in `pyproject.toml`.
**To Reproduce**
Steps to reproduce the behavior:
1. Clone the Haystack repository.
2. Set up a `hatch` environment with Python 3.10, 3.11, or 3.12.
3. Run `hatch run test:lint`.
4. Observe the dependency resolution failure caused by `llvmlite`.
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: Ubuntu 24.04.1 WSL
- GPU/CPU: N/A
- Haystack version (commit or version number): Latest (`main` branch)
- DocumentStore: N/A
- Reader: N/A
- Retriever: N/A
| closed | 2025-01-24T04:54:02Z | 2025-01-27T10:55:20Z | https://github.com/deepset-ai/haystack/issues/8766 | [
"type:bug",
"P2"
] | lbux | 3 |
K3D-tools/K3D-jupyter | jupyter | 91 | vector_animation_k3d.ipynb | Widgets show double objects. Interestingly, the built-in "screenshot" output is correct. For example:
OS - screenshot:
<img width="666" alt="screen shot 2018-06-21 at 16 52 07" src="https://user-images.githubusercontent.com/1192310/41727078-c78f06fc-7573-11e8-9d72-1483bf96f3fc.png">

Correct screenshot:

| closed | 2018-06-21T14:56:07Z | 2018-07-23T18:53:40Z | https://github.com/K3D-tools/K3D-jupyter/issues/91 | [] | marcinofulus | 0 |
piskvorky/gensim | data-science | 2,697 | Cannot use gensim 3.8.x when `nltk` package is installed | #### Problem description
> What are you trying to achieve? What is the expected result? What are you seeing instead?
In my script I'm trying to `import gensim.models.keyedvectors` and also import another package that requires the `nltk` package internally. Whenever I have NLTK installed in the same virtualenv (I'm not using a virtualenv, but a Docker image actually), the gensim model fails to import.
#### Steps/code/corpus to reproduce
```
# pip list | grep -E 'gensim|nltk'
gensim 3.8.1
# pip install nltk
Processing /root/.cache/pip/wheels/96/86/f6/68ab24c23f207c0077381a5e3904b2815136b879538a24b483/nltk-3.4.5-cp36-none-any.whl
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from nltk) (1.13.0)
Installing collected packages: nltk
Successfully installed nltk-3.4.5
# pip list | grep -E 'gensim|nltk'
gensim 3.8.1
nltk 3.4.5
# python
Python 3.6.8 (default, Jun 11 2019, 01:16:11)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/gensim/__init__.py", line 5, in <module>
from gensim import parsing, corpora, matutils, interfaces, models, similarities, summarization, utils # noqa:F401
File "/usr/local/lib/python3.6/site-packages/gensim/corpora/__init__.py", line 14, in <module>
from .wikicorpus import WikiCorpus # noqa:F401
File "/usr/local/lib/python3.6/site-packages/gensim/corpora/wikicorpus.py", line 539, in <module>
class WikiCorpus(TextCorpus):
File "/usr/local/lib/python3.6/site-packages/gensim/corpora/wikicorpus.py", line 577, in WikiCorpus
def __init__(self, fname, processes=None, lemmatize=utils.has_pattern(), dictionary=None,
File "/usr/local/lib/python3.6/site-packages/gensim/utils.py", line 1614, in has_pattern
from pattern.en import parse # noqa:F401
File "/usr/local/lib/python3.6/site-packages/pattern/text/en/__init__.py", line 61, in <module>
from pattern.text.en.inflect import (
File "/usr/local/lib/python3.6/site-packages/pattern/text/en/__init__.py", line 80, in <module>
from pattern.text.en import wordnet
File "/usr/local/lib/python3.6/site-packages/pattern/text/en/wordnet/__init__.py", line 57, in <module>
nltk.data.find("corpora/" + token)
File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 673, in find
return find(modified_name, paths)
File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 660, in find
return ZipFilePathPointer(p, zipentry)
File "/usr/local/lib/python3.6/site-packages/nltk/compat.py", line 228, in _decorator
return init_func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 506, in __init__
zipfile = OpenOnDemandZipFile(os.path.abspath(zipfile))
File "/usr/local/lib/python3.6/site-packages/nltk/compat.py", line 228, in _decorator
return init_func(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/nltk/data.py", line 1055, in __init__
zipfile.ZipFile.__init__(self, filename)
File "/usr/local/lib/python3.6/zipfile.py", line 1131, in __init__
self._RealGetContents()
File "/usr/local/lib/python3.6/zipfile.py", line 1198, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
```
#### Versions
```python
python
Python 3.6.8 (default, Jun 11 2019, 01:16:11)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform; print(platform.platform())
Linux-5.0.0-050000rc8-generic-x86_64-with-debian-9.11
>>> import sys; print("Python", sys.version)
Python 3.6.8 (default, Jun 11 2019, 01:16:11)
[GCC 6.3.0 20170516]
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.17.4
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.3.3
>>> import gensim; print("gensim", gensim.__version__)
gensim 3.8.1
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
```
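The traceback bottoms out in `zipfile.BadZipFile`, which points at a corrupted NLTK corpus archive on disk (typically an interrupted download). A quick way to confirm that before re-downloading, with the path below being an assumption based on the traceback, is to check the archive directly with the standard library:

```python
import os
import zipfile

# Hypothetical path: substitute the archive named in your own traceback,
# e.g. ~/nltk_data/corpora/wordnet.zip
suspect = os.path.expanduser("~/nltk_data/corpora/wordnet.zip")

if not os.path.exists(suspect):
    print("archive not found:", suspect)
elif zipfile.is_zipfile(suspect):
    print("archive looks valid; the problem is elsewhere")
else:
    print("corrupted archive; delete it and let nltk re-download it")
```

If the archive is corrupted, deleting it and re-running the NLTK download is usually enough.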
| closed | 2019-12-04T18:47:26Z | 2021-01-17T07:14:20Z | https://github.com/piskvorky/gensim/issues/2697 | [] | yapus | 8 |
taverntesting/tavern | pytest | 798 | Feature Request: Support formatting in MQTT response topic | I am able to use variables to parameterize the URL when doing HTTP tests. Additionally, when publishing via MQTT, the publish topic can be parameterized. However, the response topic does not allow parameters. All the examples and references to topics in MQTT land in tavern appear to hard-code the topics.
* In a production system, a device may make use of wildcards in the topic. A good example is publishing a config value from the cloud to an IoT device, where the topic includes some unique identifier
* Another example is the [request/reply](https://www.emqx.com/en/blog/mqtt5-request-response) model. In MQTTv3, a design pattern used has the ACK from the device including a UUID (correlation ID) in the topic name that is different for each MQTT request.
My goal is shown below in the minimal example. Topics may need to change based on saved values, or on values from other areas such as `conftest.py`.
```python
# conftest.py
import pytest
PING_THING = "foo"
PONG_THING = "bar"
@pytest.fixture
def get_ping_topic():
return PING_THING
@pytest.fixture
def get_pong_topic():
return PONG_THING
```
```yaml
# test_mqtt.tavern.yaml
---
test_name: Test mqtt message response
paho-mqtt:
client:
transport: websockets
client_id: tavern-tester
connect:
host: localhost
port: 9001
timeout: 3
stages:
- name: step 1 - ping/pong
mqtt_publish:
# Use value from conftest.py
topic: /device/123/{get_ping_topic}
payload: ping
mqtt_response:
# Use value from conftest.py
topic: /device/123/{get_ping_topic}
save:
json:
first_val: "thing.device_id[0]"
timeout: 5
- name: step 2 - using saved data in topic
mqtt_publish:
# Use saved value from an earlier stage
topic: /device/123/ping_{first_val}
payload: ping
mqtt_response:
# Use saved value from an earlier stage
topic: /device/123/pong_{first_val}
timeout: 5
```
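What the stages above are asking for amounts to standard Python string formatting of the topic template against Tavern's saved/fixture variables. A minimal sketch of that substitution step (variable names taken from the example above; the helper itself is hypothetical, not Tavern code):

```python
def format_topic(template: str, variables: dict) -> str:
    """Substitute {name} placeholders in an MQTT topic template."""
    return template.format(**variables)

# Values standing in for Tavern's saved variables / fixtures
saved = {"first_val": "abc123", "get_pong_topic": "bar"}
print(format_topic("/device/123/pong_{first_val}", saved))  # /device/123/pong_abc123
print(format_topic("/device/123/{get_pong_topic}", saved))  # /device/123/bar
```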
I am willing to discuss the work needed and doing the contribution. When I get a chance, I will create a branch with an integration test that this is failing on. I have not found any workaround yet. | closed | 2022-07-28T23:53:20Z | 2022-10-01T13:02:40Z | https://github.com/taverntesting/tavern/issues/798 | [] | RFRIEDM-Trimble | 3 |
ResidentMario/geoplot | matplotlib | 42 | Choropleth map behavior | Hi, I was looking at the choropleth map function of this module, but the input argument seems a little bit confusing:
1) The first input argument, df, is the "data being plotted". From your example, I deduced that df should be the information obtained from some shapefiles. If this is true, could you update the documentation and variable name, so that it is less confusing?
2) From the documentation (as well as the examples), I cannot figure out how to plot continuously-valued data (for example, population density per state). The more intuitive input structure, in my opinion, is using a Python dictionary, with keys being names of polygons (corresponding to the polygons in the shapefiles), and values being the values to be mapped into colors (which can either be categorical or continuous). In this way, potential users who have their own shapefiles and the corresponding data (stored as a Python dict) can easily plot a choropleth map.
3) The USA map lacks Alaska and Hawaii.
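The dictionary input proposed in point 2 essentially reduces to normalizing each polygon's value into [0, 1] and mapping it through a colormap. A rough sketch of that mapping step (the function and the toy colormap are hypothetical, not geoplot API):

```python
def values_to_colors(data, cmap=lambda t: (t, 0.0, 1.0 - t)):
    """Map {polygon_name: value} to {polygon_name: RGB} via min-max normalization."""
    lo, hi = min(data.values()), max(data.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all values are equal
    return {name: cmap((v - lo) / span) for name, v in data.items()}

# Hypothetical population densities per state
density = {"CA": 97.9, "TX": 42.9, "AK": 0.5}
print(values_to_colors(density))
```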
Incidentally, I was working on a similar choropleth map plotting problem, for which I submitted a pull request to matplotlib: https://github.com/matplotlib/basemap/pull/366. I added Alaska and Hawaii elegantly into the corner of the USA map, and I also used Python dictionary as my input data structure.
Just offering my two cents. | closed | 2017-10-11T08:47:24Z | 2019-07-26T10:14:29Z | https://github.com/ResidentMario/geoplot/issues/42 | [
"enhancement"
] | jsh9 | 3 |
gradio-app/gradio | data-science | 9,972 | Issue with Gradio assets not loading through Nginx reverse proxy | ### Describe the bug
When accessing a Gradio application through Nginx reverse proxy, the main page loads but static assets (JS/CSS) fail to load with 404 errors when the page attempts to fetch them automatically.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
### gradio code
```python
import gradio as gr
import time
def test(x):
time.sleep(4)
return x
gr.Interface(test, "textbox", "textbox").queue().launch(
    root_path="/webui",  # note: I also tried nginx proxying to http://0.0.0.0:7861/ with no root_path="/webui"
server_name="0.0.0.0",
server_port=7861
)
```
### Current Behavior
- `localhost:9999/webui` loads successfully and returns the Gradio web interface
- When the page tries to fetch its assets, the following requests return 404:
- `localhost:9999/assets/index-Dj1xzGVg.js`
- `localhost:9999/assets/index-Bmd1Nf3q.css`
I manually tried accessing with the /webui prefix, but still got 404:
- `localhost:9999/webui/assets/index-Dj1xzGVg.js`
- `localhost:9999/webui/assets/index-Bmd1Nf3q.css`
However, accessing directly through port 7861 works:
- `localhost:7861/webui/assets/index-Dj1xzGVg.js`
- `localhost:7861/webui/assets/index-Bmd1Nf3q.css`
### Expected Behavior
Static assets should load correctly when accessing the application through the Nginx reverse proxy at `localhost:9999/webui`.
### Question
Is there something wrong with my configuration? How can I properly serve Gradio's static assets through the Nginx reverse proxy?
### Additional Notes
- The main application interface loads correctly
- Static assets (JS/CSS) fail to load when the page tries to fetch them automatically
- Direct access to the Gradio server works as expected
### Nginx Configuration
```
server {
listen 9999;
server_name _;
root /root;
index index.php index.html index.htm;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ [^/]\.php(/|$) {
try_files $uri =404;
fastcgi_pass unix:/run/php/php8.1-fpm.sock;
fastcgi_index index.php;
set $path_info $fastcgi_path_info;
set $real_script_name $fastcgi_script_name;
if ($fastcgi_script_name ~ "^(.+?\.php)(/.+)$") {
set $real_script_name $1;
set $path_info $2;
}
fastcgi_param SCRIPT_FILENAME $document_root$real_script_name;
fastcgi_param SCRIPT_NAME $real_script_name;
fastcgi_param PATH_INFO $path_info;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
}
location /webui {
        proxy_pass http://0.0.0.0:7861/webui/;  # I also tried http://0.0.0.0:7861/ with no root_path="/webui"
proxy_buffering off;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
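Based on the 404 pattern above, the browser requests `/assets/...` at the proxy root, where no `location` forwards to Gradio (the generic `\.(js|css|...)` regex block serves from the local `root` instead of the upstream). One workaround to try, an untested sketch rather than a verified fix, is to also proxy the bare assets path:

```nginx
# Hypothetical addition: forward root-level asset requests to Gradio.
# "^~" is needed so this prefix location takes precedence over the
# regex block above that serves .js/.css from the local root.
location ^~ /assets/ {
    proxy_pass http://0.0.0.0:7861/webui/assets/;
}
```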
### Screenshot
Gradio is running normally on port 7861, and 7861/webui can also be accessed.
The following is the situation on localhost:9999:
The webpage opens up as a blank page.
However, the returned HTML contains Gradio content.

### Logs
```shell
None
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.6.0
gradio_client version: 1.4.3
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.3.0
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.4.3 is not installed.
httpx: 0.27.0
huggingface-hub: 0.26.2
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.0
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.1
ruff: 0.7.4
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.13.0
typing-extensions: 4.11.0
urllib3: 2.2.1
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.0
huggingface-hub: 0.26.2
packaging: 24.0
typing-extensions: 4.11.0
websockets: 12.0
```
### Severity
Blocking usage of gradio | open | 2024-11-16T18:34:18Z | 2025-02-28T17:53:19Z | https://github.com/gradio-app/gradio/issues/9972 | [
"bug",
"cloud"
] | shiertier | 3 |
JoeanAmier/TikTokDownloader | api | 15 | Low video resolution and abnormal bitrate at the same resolution | Creator:
![image](https://user-images.githubusercontent.com/97376036/250262672-0dfae949-1b60-4986-bc39-5b9b09a5bf74.png)
URL
```
https://www.douyin.com/user/MS4wLjABAAAAJ6Lr2yJ-SAFg7GjMu7E2nHZd1nhGhzsygP7_uqQXlI0
```
---
The video ID in the screenshot below is ```7214153200159558973```
![image](https://user-images.githubusercontent.com/97376036/250263249-d2908a73-82c1-4051-bdd1-dce522b23a31.png)
On the left is the file downloaded by this project,
and on the right is the one downloaded by [TikTokDownload](https://github.com/Johnserf-Seed/TikTokDownload); same below.
Since the resolution is a bit lower, the file size is also smaller.
---
The next video's ID is ```7198388067944631610```.
At the same resolution, the file sizes are different,
![image](https://user-images.githubusercontent.com/97376036/250263875-02d53712-e095-4df5-b3ab-4364acecd3b0.png)
and inspection shows the bitrate is lower.
![image](https://user-images.githubusercontent.com/97376036/250264101-c0866e0a-bb22-49f4-a2e8-e24ff5e4a96b.png)
---
There is also a case where a 0 KB file gets downloaded even though the video plays back normally; URL: ```https://www.douyin.com/video/7020014643720539429```
![image](https://user-images.githubusercontent.com/97376036/250264440-cf1d774f-5a4e-4a4f-9212-03e7e2b79f56.png)
No errors are reported in the console.
---
The low-resolution problem is not an isolated case; it affects more than half of the 100+ videos.
![image](https://user-images.githubusercontent.com/97376036/250264759-1d042ab6-e3cc-48bd-a325-b97ab8a92ec2.png)
Roughly one 0 KB file shows up for every three or four creators downloaded.
![image](https://user-images.githubusercontent.com/97376036/250265009-24a5404e-d5cb-48bb-b4d9-efc2ba5c3db0.png)
"功能异常(bug)"
] | LetMeCarryU | 1 |
521xueweihan/HelloGitHub | python | 2680 | [Open-source self-recommendation] UtilMeta | A minimalist and efficient Python backend meta-framework | ## Recommended Project
- Project URL: https://github.com/utilmeta/utilmeta-py
- Category: Python
- Project title: UtilMeta | A minimalist and efficient Python backend meta-framework
- Project description:
UtilMeta is a backend meta-framework for developing API services. Built on standard Python type annotations, it efficiently constructs declarative interfaces and ORM queries, automatically parses request parameters, and generates OpenAPI documentation for fast RESTful API development. The resulting code is concise and clear, and mainstream Python frameworks (django, flask, fastapi, starlette, sanic, tornado, etc.) can serve as runtime implementations or be integrated progressively.
[Framework homepage](https://utilmeta.com/py)
[Quick start](https://docs.utilmeta.com/py/zh/)
[Case tutorials](https://docs.utilmeta.com/py/zh/tutorials/)
- Highlights:
* UtilMeta's declarative ORM efficiently handles most CRUD scenarios: what you define is what you get, and it automatically avoids the N+1 relational query problem
* UtilMeta can use mainstream Python frameworks as its runtime implementation; a single parameter switches the entire underlying implementation, and you can also integrate it progressively into an existing project
* UtilMeta comes with a full-lifecycle API management platform that gives small teams a one-stop solution for API documentation and debugging, log queries, server monitoring, alert notifications, incident management, and other operations needs
- Sample code:
from utilmeta.core import api, orm
from django.db import models
from .models import User, Article
class UserSchema(orm.Schema[User]):
username: str
articles_num: int = models.Count('articles')
class ArticleSchema(orm.Schema[Article]):
id: int
author: UserSchema
content: str
class ArticleAPI(api.API):
async def get(self, id: int) -> ArticleSchema:
return await ArticleSchema.ainit(id)
```
- Screenshots:



- Upcoming plans:
1. Support WebSocket API development
2. Support adapting more Python frameworks as runtime implementations
3. Support more ORM model implementations, such as tortoise-orm, peewee, sqlalchemy
| closed | 2024-01-27T04:40:32Z | 2024-02-24T11:28:24Z | https://github.com/521xueweihan/HelloGitHub/issues/2680 | [] | voidZXL | 0 |
autogluon/autogluon | computer-vision | 4,793 | Using Chronos and Chronos-bolt models in offline machine | ## Description
In the `timeseries` module, there is no way to change the configuration to work on an offline machine when I already have the models on that machine.
How can I handle that?
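Chronos and Chronos-Bolt weights are normally pulled from the Hugging Face Hub, so one approach, an assumption about this setup rather than a documented AutoGluon recipe, is to pre-populate the local cache (or keep the checkpoint in a local directory) and force offline mode before fitting:

```python
import os

# Real Hugging Face Hub switches; set them before importing
# autogluon/transformers so that no network calls are attempted
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# Hypothetical usage: with the weights already on disk, point the predictor
# at the local directory instead of a hub model name, e.g.
# predictor.fit(train_data, hyperparameters={"Chronos": {"model_path": "/models/chronos-bolt-base"}})
print(os.environ["HF_HUB_OFFLINE"], os.environ["TRANSFORMERS_OFFLINE"])
```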
| closed | 2025-01-14T08:27:27Z | 2025-01-14T11:55:00Z | https://github.com/autogluon/autogluon/issues/4793 | [
"enhancement",
"module: timeseries"
] | dvirainisch | 2 |
man-group/arctic | pandas | 187 | latest dbfixtures release breaks unit tests | See this commit:
https://github.com/ClearcodeHQ/pytest-dbfixtures/commit/572629afc475f446fda09fabc4d33a613dd2af6f
Note that passing '?' into the fixtures (see https://github.com/manahl/arctic/blob/master/arctic/fixtures/arctic.py#L15 ) looks to be no longer supported.
| closed | 2016-08-01T16:54:24Z | 2016-08-02T15:56:41Z | https://github.com/man-group/arctic/issues/187 | [] | bmoscon | 1 |
Lightning-AI/pytorch-lightning | data-science | 19,773 | Support GAN based model training with deepspeed which need to setup fabric twice | ### Description & Motivation
I have the same issue as https://github.com/Lightning-AI/pytorch-lightning/issues/17856 when training a DCGAN with fabric + deepspeed.
The official example works fine with deepspeed: https://github.com/microsoft/DeepSpeedExamples/blob/master/training/gan/gan_deepspeed_train.py
After adapting it to use fabric,
```
import torch
import torch.backends.cudnn as cudnn
import torch.nn as nn
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.utils.tensorboard import SummaryWriter
from time import time
from lightning.fabric import Fabric
from gan_model import Generator, Discriminator, weights_init
from utils import get_argument_parser, set_seed, create_folder
def get_dataset(args):
if torch.cuda.is_available() and not args.cuda:
print("WARNING: You have a CUDA device, so you should probably run with --cuda")
if args.dataroot is None and str(args.dataset).lower() != 'fake':
raise ValueError("`dataroot` parameter is required for dataset \"%s\"" % args.dataset)
if args.dataset in ['imagenet', 'folder', 'lfw']:
# folder dataset
dataset = dset.ImageFolder(root=args.dataroot,
transform=transforms.Compose([
transforms.Resize(args.imageSize),
transforms.CenterCrop(args.imageSize),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]))
nc=3
elif args.dataset == 'lsun':
classes = [ c + '_train' for c in args.classes.split(',')]
dataset = dset.LSUN(root=args.dataroot, classes=classes,
transform=transforms.Compose([
transforms.Resize(args.imageSize),
transforms.CenterCrop(args.imageSize),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]))
nc=3
elif args.dataset == 'cifar10':
dataset = dset.CIFAR10(root=args.dataroot, download=True,
transform=transforms.Compose([
transforms.Resize(args.imageSize),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]))
nc=3
elif args.dataset == 'mnist':
dataset = dset.MNIST(root=args.dataroot, download=True,
transform=transforms.Compose([
transforms.Resize(args.imageSize),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
]))
nc=1
elif args.dataset == 'fake':
dataset = dset.FakeData(image_size=(3, args.imageSize, args.imageSize),
transform=transforms.ToTensor())
nc=3
elif args.dataset == 'celeba':
dataset = dset.ImageFolder(root=args.dataroot,
transform=transforms.Compose([
transforms.Resize(args.imageSize),
transforms.CenterCrop(args.imageSize),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]))
nc = 3
assert dataset
return dataset, nc
def train(args):
writer = SummaryWriter(log_dir=args.tensorboard_path)
create_folder(args.outf)
set_seed(args.manualSeed)
cudnn.benchmark = True
dataset, nc = get_dataset(args)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=args.batchSize, shuffle=True, num_workers=int(args.workers))
ngpu = 0
nz = int(args.nz)
ngf = int(args.ngf)
ndf = int(args.ndf)
netG = Generator(ngpu, ngf, nc, nz)
netG.apply(weights_init)
if args.netG != '':
netG.load_state_dict(torch.load(args.netG))
netD = Discriminator(ngpu, ndf, nc)
netD.apply(weights_init)
if args.netD != '':
netD.load_state_dict(torch.load(args.netD))
criterion = nn.BCELoss()
real_label = 1
fake_label = 0
fabric = Fabric(accelerator="auto", devices=1, precision='16-mixed',
strategy="deepspeed_stage_1")
fabric.launch()
fixed_noise = torch.randn(args.batchSize, nz, 1, 1, device=fabric.device)
# setup optimizer
optimizerD = torch.optim.Adam(netD.parameters(), lr=args.lr, betas=(args.beta1, 0.999))
optimizerG = torch.optim.Adam(netG.parameters(), lr=args.lr, betas=(args.beta1, 0.999))
netD, optimizerD = fabric.setup(netD, optimizerD)
netG, optimizerG = fabric.setup(netG, optimizerG)
dataloader = fabric.setup_dataloaders(dataloader)
torch.cuda.synchronize()
start = time()
for epoch in range(args.epochs):
for i, data in enumerate(dataloader, 0):
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
# train with real
netD.zero_grad()
real = data[0]
batch_size = real.size(0)
label = torch.full((batch_size,), real_label, dtype=real.dtype, device=fabric.device)
output = netD(real)
errD_real = criterion(output, label)
fabric.backward(errD_real, model=netD)
D_x = output.mean().item()
# train with fake
noise = torch.randn(batch_size, nz, 1, 1, device=fabric.device)
fake = netG(noise)
label.fill_(fake_label)
output = netD(fake.detach())
errD_fake = criterion(output, label)
fabric.backward(errD_fake, model=netD)
D_G_z1 = output.mean().item()
errD = errD_real + errD_fake
optimizerD.step()
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad()
label.fill_(real_label) # fake labels are real for generator cost
output = netD(fake)
errG = criterion(output, label)
fabric.backward(errG, model=netG)
D_G_z2 = output.mean().item()
optimizerG.step()
print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f D(x): %.4f D(G(z)): %.4f / %.4f'
% (epoch, args.epochs, i, len(dataloader),
errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))
writer.add_scalar("Loss_D", errD.item(), epoch*len(dataloader)+i)
writer.add_scalar("Loss_G", errG.item(), epoch*len(dataloader)+i)
if i % 100 == 0:
vutils.save_image(real,
'%s/real_samples.png' % args.outf,
normalize=True)
fake = netG(fixed_noise)
vutils.save_image(fake.detach(),
'%s/fake_samples_epoch_%03d.png' % (args.outf, epoch),
normalize=True)
# do checkpointing
#torch.save(netG.state_dict(), '%s/netG_epoch_%d.pth' % (args.outf, epoch))
#torch.save(netD.state_dict(), '%s/netD_epoch_%d.pth' % (args.outf, epoch))
torch.cuda.synchronize()
stop = time()
print(f"total wall clock time for {args.epochs} epochs is {stop-start} secs")
def main():
parser = get_argument_parser()
args = parser.parse_args()
train(args)
if __name__ == "__main__":
main()
```
the error is like
```
Traceback (most recent call last):
File "/home/ichigo/LocalCodes/github/DeepSpeedExamples/training/gan/gan_fabric_train.py", line 183, in <module>
main()
File "/home/ichigo/LocalCodes/github/DeepSpeedExamples/training/gan/gan_fabric_train.py", line 180, in main
train(args)
File "/home/ichigo/LocalCodes/github/DeepSpeedExamples/training/gan/gan_fabric_train.py", line 152, in train
fabric.backward(errG, model=netG)
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/lightning/fabric/fabric.py", line 449, in backward
self._strategy.backward(tensor, module, *args, **kwargs)
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/lightning/fabric/strategies/strategy.py", line 191, in backward
self.precision.backward(tensor, module, *args, **kwargs)
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/lightning/fabric/plugins/precision/deepspeed.py", line 91, in backward
model.backward(tensor, *args, **kwargs)
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1976, in backward
self.optimizer.backward(loss, retain_graph=retain_graph)
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2056, in backward
self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
scaled_loss.backward(retain_graph=retain_graph)
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/torch/_tensor.py", line 522, in backward
torch.autograd.backward(
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/torch/autograd/__init__.py", line 266, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 903, in reduce_partition_and_remove_grads
self.reduce_ready_partitions_and_remove_grads(param, i)
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1416, in reduce_ready_partitions_and_remove_grads
self.reduce_independent_p_g_buckets_and_remove_grads(param, i)
File "/home/ichigo/miniconda3/envs/dl/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 949, in reduce_independent_p_g_buckets_and_remove_grads
new_grad_tensor = self.ipg_buffer[self.ipg_index].narrow(0, self.elements_in_ipg_bucket, param.numel())
TypeError: 'NoneType' object is not subscriptable
```
cc @williamFalcon @Borda @carmocca @awaelchli | open | 2024-04-13T10:54:21Z | 2024-10-18T03:08:45Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19773 | [
"feature",
"needs triage"
] | npuichigo | 2 |
lanpa/tensorboardX | numpy | 350 | No `tensorboardX.__version__` attribute | Title says it all... it would be nice to have a `__version__` attribute on this module. I'm running under Python 3.6.6.
```
>>> import tensorboardX
>>> tensorboardX
<module 'tensorboardX' from '/home/dave/.conda/envs/dk_env/lib/python3.6/site-packages/tensorboardX/__init__.py'>
>>> tensorboardX.__version__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'tensorboardX' has no attribute '__version__'
```
 | closed | 2019-02-08T00:59:27Z | 2019-02-10T09:12:19Z | https://github.com/lanpa/tensorboardX/issues/350 | [] | kielpins | 0 |
postmanlabs/httpbin | api | 547 | support `accept` header for /status/{codes} endpoint | Currently (v0.9.2) the `/status/{codes}` endpoint always returns a `text/plain` content type.
Could you please provide a new feature:
if `accept: some-mime-type` is defined in the request, then that mime type should be returned in the response's `content-type` header. | open | 2019-04-13T11:07:17Z | 2019-04-13T11:07:17Z | https://github.com/postmanlabs/httpbin/issues/547 | [] | dlukyanov | 0 |
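The requested behavior is a small piece of content negotiation: pick the highest-`q` entry from the `Accept` header and echo it as the response `content-type`. A rough standalone sketch of that selection step (not httpbin's actual code, which is Flask-based):

```python
def pick_content_type(accept_header, default="text/plain"):
    """Return the highest-q media type from an Accept header (minimal parse)."""
    if not accept_header:
        return default
    choices = []
    for item in accept_header.split(","):
        media, _, params = item.strip().partition(";")
        q = 1.0
        for param in params.split(";"):
            param = param.strip()
            if param.startswith("q="):
                try:
                    q = float(param[2:])
                except ValueError:
                    pass
        choices.append((q, media.strip()))
    choices.sort(key=lambda pair: -pair[0])
    return choices[0][1]

print(pick_content_type("text/html;q=0.8, application/json"))  # application/json
```

This ignores wildcards like `*/*`; a real implementation would lean on werkzeug's `request.accept_mimetypes` instead.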
open-mmlab/mmdetection | pytorch | 12,334 | The results are not the same each time | Over the past six months, I have been working with MMDetection, and I would like to seek advice from the community. I have observed that, despite using the same dataset and code, the results vary with each run. To address this, I have explicitly set randomness = dict(seed=0, deterministic=True) in my configuration. I have also experimented with multiple versions of MMDetection, including 3.0.0 and 2.28.2, but the issue persists: the results remain inconsistent across runs with identical datasets and code. Could anyone provide insights or suggestions to resolve this problem? | open | 2025-03-24T03:39:11Z | 2025-03-24T03:39:26Z | https://github.com/open-mmlab/mmdetection/issues/12334 | [] | CumtStudentZhanghua | 0 |
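Run-to-run drift despite `deterministic=True` usually means some source of randomness lives outside what that flag covers (non-deterministic CUDA ops, dataloader worker seeding, hash randomization). A hedged sketch of seeding every layer explicitly; the torch lines are the commonly cited recipe and are commented out here since they require the full stack:

```python
import os
import random

def seed_everything(seed: int = 0) -> None:
    random.seed(seed)
    # note: PYTHONHASHSEED set at runtime only affects subprocesses
    os.environ["PYTHONHASHSEED"] = str(seed)
    # With the full stack available, the usual recipe adds (untested here):
    # import numpy, torch
    # numpy.random.seed(seed)
    # torch.manual_seed(seed); torch.cuda.manual_seed_all(seed)
    # torch.backends.cudnn.deterministic = True
    # torch.backends.cudnn.benchmark = False
    # torch.use_deterministic_algorithms(True)

seed_everything(0)
print(random.random())
```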