repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
pytest-dev/pytest-xdist | pytest | 395 | pytest_runtest_logreport() check for item_index ruins custom test threading logic inside workers | I'm trying to implement ordering and parallel execution of tests inside a worker using threading.
For example I have a list of tests to execute defined like:
```
test02
test02, test01
test02
```
I use the pytest_generate_tests() hook to order the tests and add a '_parallel' postfix to tests that should run simultaneously.
Then I use the pytest_runtest_protocol() hook to start the parallel tests:
```
def pytest_runtest_protocol(self, item, nextitem):
    if item in self._queue:
        index = self._items.index(item)
        return None
    if not item.name.endswith('_parallel]'):
        if len(self._queue) > 0:
            threads = []
            for test in self._queue:
                thread = MyThread(test)
                thread.start()
                threads.append(thread)
            [t.join() for t in threads]
            self._queue = []
        return None
    else:
        self._queue.append(item)
        return True


class MyThread(threading.Thread):
    def __init__(self, test):
        super().__init__()
        self.test = test

    def run(self):
        self.test.ihook.pytest_runtest_protocol(item=self.test, nextitem=None)
```
The issue is in pytest-xdist's pytest_runtest_logreport() hook in the 'remote' module, which performs this check:
```
assert self.session.items[self.item_index].nodeid == report.nodeid
```
If I comment this out, everything works perfectly.
The question is: is this check really needed to prevent tests from overlapping, or is it possible to create a way to pass item_index to the WorkerInteractor object, which runs inside execnet?
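A rough sketch of that second idea (purely illustrative, this is not actual pytest-xdist code, just the shape of the lookup):
```python
# Illustrative only (not pytest-xdist's remote.py): resolve the index from the
# report's nodeid instead of assuming reports arrive in scheduling order.
def index_for_report(session_items, report_nodeid, fallback_index):
    nodeid_to_index = {item.nodeid: i for i, item in enumerate(session_items)}
    return nodeid_to_index.get(report_nodeid, fallback_index)
```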
Executing pytest with:
```
-n 2 --dist=loadscope --tx 2*popen//python=python -m 'not serial'
``` | closed | 2018-12-20T05:21:38Z | 2019-01-16T12:45:31Z | https://github.com/pytest-dev/pytest-xdist/issues/395 | [] | sosadchuk | 11 |
home-assistant/core | python | 140,740 | Constant "Login attempt failed" errors... | ### The problem
Hi there,
I get these error messages daily. HAOS is running locally and is **not** reachable from the internet. The reported IP in the log is sometimes from my MacBook, sometimes from an iPhone.
Please advise...
Thank you!
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
http
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/http
### Diagnostics information
<details>
<summary>error as reported in http://192.168.1.7:8123/config/logs - click to expand</summary>
```log
Logger: homeassistant.components.http.ban
Source: components/http/ban.py:136
integration: HTTP (documentation, issues)
First occurred: March 15, 2025 at 22:33:53 (5 occurrences)
Last logged: 16:03:57
Login attempt or request with invalid authentication from 192.168.1.20 (192.168.1.20). Requested URL: '/api/history/period/2025-03-15T21:18:05.681Z?filter_entity_id=sensor.airgradient_temperature&end_time=2025-03-15T21:18:13.987Z&skip_initial_state&minimal_response&no_attributes'. (Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0)
Login attempt or request with invalid authentication from 192.168.1.20 (192.168.1.20). Requested URL: '/api/history/period/2025-03-15T21:48:59.845Z?filter_entity_id=sensor.airgradient_temperature&end_time=2025-03-15T21:49:08.032Z&skip_initial_state&minimal_response&no_attributes'. (Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0)
Login attempt or request with invalid authentication from 192.168.1.20 (192.168.1.20). Requested URL: '/api/history/period/2025-03-15T22:04:45.947Z?filter_entity_id=sensor.airgradient_temperature&end_time=2025-03-15T22:19:58.087Z&skip_initial_state&minimal_response&no_attributes'. (Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0)
Login attempt or request with invalid authentication from 192.168.1.20 (192.168.1.20). Requested URL: '/api/history/period/2025-03-16T07:20:51.791Z?filter_entity_id=sensor.airgradient_temperature&end_time=2025-03-16T07:23:51.720Z&skip_initial_state&minimal_response&no_attributes'. (Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0)
Login attempt or request with invalid authentication from 192.168.1.20 (192.168.1.20). Requested URL: '/api/history/period/2025-03-16T14:40:02.451Z?filter_entity_id=sensor.airgradient_temperature&end_time=2025-03-16T14:57:59.700Z&skip_initial_state&minimal_response&no_attributes'. (Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0)
```
</details>
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-16T17:35:44Z | 2025-03-20T12:58:10Z | https://github.com/home-assistant/core/issues/140740 | [
"integration: http"
] | notDavid | 3 |
modelscope/modelscope | nlp | 742 | modelscope/SkyPile-150B | **General Question**
Before asking a question, make sure you have:
* Searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs)
* Googled your question.
* Searched related issues but cannot get the expected help.
* The bug has not been fixed in the latest version.
Please @ corresponding people according to your problem:
Model related: @wenmengzhou @tastelikefeet
Model hub related: @liuyhwangyh
Dataset related: @wangxingjun778
Finetune related: @tastelikefeet @Jintao-Huang
Pipeline related: @Firmament-cyou @wenmengzhou
Contribute your model: @zzclynn
I ran into an error while trying to download this dataset; how should this error be resolved?
```shell
File "/root/.pyenv/versions/3.10.0/lib/python3.10/site-packages/modelscope/msdatasets/ms_dataset.py", line 284, in load
dataset_inst = remote_dataloader_manager.load_dataset(
File "/root/.pyenv/versions/3.10.0/lib/python3.10/site-packages/modelscope/msdatasets/data_loader/data_loader_manager.py", line 132, in load_dataset
oss_downloader.process()
File "/root/.pyenv/versions/3.10.0/lib/python3.10/site-packages/modelscope/msdatasets/data_loader/data_loader.py", line 82, in process
self._build()
File "/root/.pyenv/versions/3.10.0/lib/python3.10/site-packages/modelscope/msdatasets/data_loader/data_loader.py", line 109, in _build
meta_manager.parse_dataset_structure()
File "/root/.pyenv/versions/3.10.0/lib/python3.10/site-packages/modelscope/msdatasets/meta/data_meta_manager.py", line 115, in parse_dataset_structure
raise 'Cannot find dataset meta-files, please fetch meta from modelscope hub.'
TypeError: exceptions must derive from BaseException
```
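As a side note (my reading of the traceback, not an official fix): the TypeError only masks the real message, because Python cannot raise a plain string; the intent of that library line is presumably something like:
```python
# Presumed intent of the failing `raise` (illustrative only): raise a real
# exception class so the meta-file error message actually reaches the user.
raise FileNotFoundError(
    'Cannot find dataset meta-files, please fetch meta from modelscope hub.'
)
```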
| closed | 2024-01-31T01:19:09Z | 2024-05-28T01:49:43Z | https://github.com/modelscope/modelscope/issues/742 | [
"Stale"
] | LiuChen19960902 | 3 |
Lightning-AI/pytorch-lightning | deep-learning | 20,407 | update dataset at "on_train_epoch_start", but "training_step" still gets old data | ### Bug description
I use `trainer.fit(model, datamodule=dm)` to start training.
"dm" is an object whose class inherited from `pl.LightningDataModule`, and in the class, I override the function:
```python
def train_dataloader(self):
train_dataset = MixedBatchMultiviewDataset(self.args, self.tokenizer,
known_exs=self.known_train,
unknown_exs=self.unknown_train,
feature=self.args.feature)
train_dataloader = DataLoader(train_dataset,
batch_size = self.args.train_batch_size,
shuffle=True, num_workers=self.args.num_workers,
pin_memory=True, collate_fn=self.collate_batch_feat)
return train_dataloader
```
at the model's hook `on_train_epoch_start`, I update the dataset:
```python
train_dl = self.trainer.train_dataloader
train_dl.dataset.update_pseudo_labels(uid2pl)
loop = self.trainer.fit_loop
loop._combined_loader = None
loop.setup_data()
```
In `training_step`, the batch still contains the old data, while `trainer.train_dataloader.dataset` already reflects the update:
```python
def training_step(self, batch: List[Dict[str, torch.Tensor]], batch_idx: int):
self.mv_model._on_train_batch_start()
logger.info(self.trainer.train_dataloader.dataset.unknown_feats) # new
logger.info(batch) # old
```
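For reference, a hedged alternative (my assumption about the intent, not something taken from this report): letting the Trainer rebuild the dataloader every epoch avoids touching the fit loop internals; `model` and `dm` below are the objects from the snippets above.
```python
# Hedged alternative (assumption): ask Lightning to re-call train_dataloader()
# every epoch instead of resetting the fit loop's combined loader by hand.
import pytorch_lightning as pl

trainer = pl.Trainer(
    max_epochs=10,
    reload_dataloaders_every_n_epochs=1,  # rebuilds the train dataloader each epoch
)
trainer.fit(model, datamodule=dm)
```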
### What version are you seeing the problem on?
v2.3
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_
cc @justusschock | open | 2024-11-08T16:22:03Z | 2024-11-18T22:48:19Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20407 | [
"bug",
"waiting on author",
"loops"
] | Yak1m4Sg | 1 |
bmoscon/cryptofeed | asyncio | 817 | Huobi and Okex liquidations error | *Errors*
```
2022-04-05 13:26:29,347 : ERROR : OKEX: Error: {'event': 'error', 'msg': "channel:liquidations,instId:ETH-USDT-SWAP doesn't exist", 'code': '60018'}
/cryptofeed/exchange.py", line 139, in std_symbol_to_exchange_symbol raise UnsupportedSymbol(f'{symbol} is not supported on {self.id}') cryptofeed.exceptions.UnsupportedSymbol: ETH-USDT-PERP is not supported on HUOBI
```
*Code*
**To Reproduce**
See attached zip file
**Expected behavior**
```
exchange symbol side quantity price id status timestamp
0 BINANCE_FUTURES ETH-USDT-PERP buy 0.006 3241.60 None filled 1.649326e+09
```
**Operating System:**
- macOS
**Cryptofeed Version**
2.2.0
[cryptofeed_huobi_okex_liquidations_issue.py.zip](https://github.com/bmoscon/cryptofeed/files/8441962/cryptofeed_huobi_okex_liquidations_issue.py.zip)
| closed | 2022-04-07T10:28:40Z | 2022-05-01T19:20:55Z | https://github.com/bmoscon/cryptofeed/issues/817 | [
"bug"
] | Nootski | 0 |
xuebinqin/U-2-Net | computer-vision | 97 | Was u2net face generation trained with the same loss function? | open | 2020-11-22T06:12:23Z | 2020-11-22T17:16:41Z | https://github.com/xuebinqin/U-2-Net/issues/97 | [] | shgidi | 3 |
|
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,724 | [Bug]: OSError: [WinError -1073741795] Windows Error 0xc000001d | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
If I try to Train or use anything else then text2img or img2img, I will get this Error Message no matter what. OSError: [WinError -1073741795] Windows Error 0xc000001d
### Steps to reproduce the problem
1) Install WebUI with the Dreambooth Extension.
2) Create your Model with Standard Settings.
3) Train your Model with Standard Settings.
### What should have happened?
WebUI should Train the Model
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-05-06-17-49.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15224532/sysinfo-2024-05-06-17-49.json)
### Console logs
```Shell
venv "C:\Users\Admin\Documents\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Installing requirements
If submitting an issue on github, please provide the full startup log for debugging purposes.
Initializing Dreambooth
Dreambooth revision: 45a12fe5950bf93205b6ef2b7511eb94052a241f
Checking xformers...
Checking bitsandbytes...
Checking bitsandbytes (ALL!)
Checking Dreambooth requirements...
Installed version of bitsandbytes: 0.43.0
[Dreambooth] bitsandbytes v0.43.0 is already installed.
Installed version of accelerate: 0.21.0
[Dreambooth] accelerate v0.21.0 is already installed.
Installed version of dadaptation: 3.2
[Dreambooth] dadaptation v3.2 is already installed.
Installed version of diffusers: 0.27.2
[Dreambooth] diffusers v0.25.0 is already installed.
Installed version of discord-webhook: 1.3.0
[Dreambooth] discord-webhook v1.3.0 is already installed.
Installed version of fastapi: 0.94.0
[Dreambooth] fastapi is already installed.
Installed version of gitpython: 3.1.32
[Dreambooth] gitpython v3.1.40 is not installed.
Successfully installed gitpython-3.1.43
Installed version of pytorch_optimizer: 2.12.0
[Dreambooth] pytorch_optimizer v2.12.0 is already installed.
Installed version of Pillow: 9.5.0
[Dreambooth] Pillow is already installed.
Installed version of tqdm: 4.66.2
[Dreambooth] tqdm is already installed.
Installed version of tomesd: 0.1.3
[Dreambooth] tomesd v0.1.2 is already installed.
Installed version of tensorboard: 2.13.0
[Dreambooth] tensorboard v2.13.0 is already installed.
[+] torch version 2.1.2+cu121 installed.
[+] torchvision version 0.16.2+cu121 installed.
[+] accelerate version 0.21.0 installed.
[+] diffusers version 0.27.2 installed.
[+] bitsandbytes version 0.43.0 installed.
[+] xformers version 0.0.23.post1 installed.
Launching Web UI with arguments: --xformers
[AddNet] Updating model hashes...
[AddNet] Updating model hashes...
Loading weights [6ce0161689] from C:\Users\Admin\Documents\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users\Admin\Documents\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 84.4s (prepare environment: 60.3s, import torch: 8.0s, import gradio: 1.5s, setup paths: 1.9s, initialize shared: 0.5s, other imports: 4.5s, list SD models: 0.4s, load scripts: 4.3s, create ui: 1.7s, gradio launch: 0.6s, add APIs: 0.3s).
Applying attention optimization: xformers... done.
Model loaded in 11.9s (load weights from disk: 0.9s, create model: 1.9s, apply weights to model: 7.8s, apply half(): 0.2s, move model to device: 0.1s, load textual inversion embeddings: 0.7s, calculate empty prompt: 0.2s).
Advanced elements visible: False
Extracting config from C:\Users\Admin\Documents\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\..\configs\v1-training-unfrozen.yaml
Extracting checkpoint from C:\Users\Admin\Documents\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Duration: 00:01:45
Advanced elements visible: True
Extracting config from C:\Users\Admin\Documents\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\..\configs\v1-training-unfrozen.yaml
Extracting checkpoint from C:\Users\Admin\Documents\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Duration: 00:01:29
Updating scheduler name to: DDIM
Initializing dreambooth training...
0 cached latents
Init dataset!
Preparing Dataset (With Caching)
Bucket 0 (512, 512, 0) - Instance Images: 30 | Class Images: 0 | Max Examples/batch: 30
Saving cache!
Total Buckets 1 - Instance Images: 30 | Class Images: 0 | Max Examples/batch: 30
Total images / batch: 30, total examples: 30
Initializing bucket counter!
WARNING:dreambooth.train_dreambooth:Wandb API key not set. Please set WANDB_API_KEY environment variable to use wandb.
Steps: 0%| | 0/3000 [00:00<?, ?it/s]Traceback (most recent call last):
File "C:\Users\Admin\Documents\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\ui_functions.py", line 735, in start_training
result = main(class_gen_method=class_gen_method)
File "C:\Users\Admin\Documents\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 2003, in main
return inner_loop()
File "C:\Users\Admin\Documents\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\memory.py", line 126, in decorator
return function(batch_size, grad_size, prof, *args, **kwargs)
File "C:\Users\Admin\Documents\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1811, in inner_loop
optimizer.step()
File "C:\Users\Admin\Documents\stable-diffusion-webui\venv\lib\site-packages\accelerate\optimizer.py", line 133, in step
self.scaler.step(self.optimizer, closure)
File "C:\Users\Admin\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 416, in step
retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs)
File "C:\Users\Admin\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 315, in _maybe_opt_step
retval = optimizer.step(*args, **kwargs)
File "C:\Users\Admin\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\optim\lr_scheduler.py", line 68, in wrapper
return wrapped(*args, **kwargs)
File "C:\Users\Admin\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\optim\optimizer.py", line 373, in wrapper
out = func(*args, **kwargs)
File "C:\Users\Admin\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Admin\Documents\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\optim\optimizer.py", line 300, in step
self.update_step(group, p, gindex, pindex)
File "C:\Users\Admin\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Admin\Documents\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\optim\optimizer.py", line 581, in update_step
F.optimizer_update_8bit_blockwise(
File "C:\Users\Admin\Documents\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\functional.py", line 1469, in optimizer_update_8bit_blockwise
optim_func(
OSError: [WinError -1073741795] Windows Error 0xc000001d
Steps: 0%| | 0/3000 [00:20<?, ?it/s]
```
### Additional information
I am using Tiny 10 (a stripped-down version of Windows 10). | open | 2024-05-06T17:51:04Z | 2024-05-06T17:51:04Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15724 | [
"bug-report"
] | samuelkurt | 0 |
svc-develop-team/so-vits-svc | pytorch | 155 | [Help]: | ### Please check the confirmation boxes below.
- [X] I have carefully read [README.md](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/README_zh_CN.md) and the [Quick solution page in the wiki](https://github.com/svc-develop-team/so-vits-svc/wiki/Quick-solution).
- [X] I have investigated the problem with various search engines; the question I am asking is not a common one.
- [X] I am not using a one-click package/environment package provided by a third-party user.
### System platform and version
win10
### GPU model
3060
### Python version
3.8.10
### PyTorch version
1.13
### sovits branch
4.0-v2
### Dataset source (used to judge dataset quality)
My own recordings
### Stage where the problem occurred or command executed
Training results
### Problem description
I recorded about one hour of audio myself and split it into roughly 350 small files of about 10 s each. Training is currently at around Epoch 158 / ~10,000 steps. When I tried the model there is still a lot of electrical noise. In a situation like mine, roughly how many epochs or steps should I train to?
### Logs
```python
INFO:torch.nn.parallel.distributed:Reducer buckets have been rebuilt in this iteration.
INFO:44k:====> Epoch: 158, cost 90.36 s
```
### Screenshot of the `so-vits-svc` and `logs/44k` folders (paste here)
No screenshots at the moment
### Additional notes
_No response_ | closed | 2023-04-15T05:46:54Z | 2023-08-01T09:16:49Z | https://github.com/svc-develop-team/so-vits-svc/issues/155 | [
"help wanted"
] | yifenglv46 | 0 |
dgtlmoon/changedetection.io | web-scraping | 2,651 | [feature] regex substitution for each line of result | **Version and OS**
v0.46.04 / Docker
**Is your feature request related to a problem? Please describe.**
Yes: I'm trying to extract a list of links from a list on [this page](https://www.promocatalogues.fr/offres/tapis-diatomite). I'm using an XPath expression like this:
//div[@id="js-default-offers"]//a[contains(@class,"js-offer-link-item")]/@href
Then I'm using an SMS API to send those links via SMS.
This principle works well for 2 other websites, but I'm trying to achieve it for a website that has **relative links**. The output from the XPath expression is:
/magasins/la-foir-fouille/offres/tapis-diatomite-offre-43076750/
/magasins/la-foir-fouille/offres/tapis-diatomite-offre-43076805/
I'd like the links to be clickable on my Android device from the received SMS. For this I'd need to add the domain name at the beginning of each link, to get something like:
https://www.promocatalogues.fr/magasins/la-foir-fouille/offres/tapis-diatomite-offre-43076750/
https://www.promocatalogues.fr/magasins/la-foir-fouille/offres/tapis-diatomite-offre-43076805/
As far as I know, I can't add a string to each result (`concat()` only works on one element).
**Describe the solution you'd like**
I'd like to be able to apply a substitution regex to each result line, as if sed were applied to the output.
This is more generic than the existing "Extract text" section of the "filters-and-triggers" page and could replace it (?)
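A rough sketch of what I have in mind (purely illustrative; the base URL and line handling are my assumptions, not an existing changedetection.io option):
```python
# Purely illustrative sketch of the requested behaviour (not an existing
# changedetection.io feature); the base URL comes from the example above.
import re

base = "https://www.promocatalogues.fr"
lines = [
    "/magasins/la-foir-fouille/offres/tapis-diatomite-offre-43076750/",
    "/magasins/la-foir-fouille/offres/tapis-diatomite-offre-43076805/",
]
absolute = [re.sub(r"^/", base + "/", line) for line in lines]
# -> ["https://www.promocatalogues.fr/magasins/...-43076750/", ...]
```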
**Describe the use-case and give concrete real-world examples**
This is not really related to any example, but I gave an example above
Thank you ! | closed | 2024-09-21T20:03:11Z | 2024-09-21T20:09:05Z | https://github.com/dgtlmoon/changedetection.io/issues/2651 | [
"enhancement"
] | brunetton | 1 |
allenai/allennlp | pytorch | 4,773 | SNLI-VE dataset reader and model | SNLI-VE is here: https://github.com/necla-ml/SNLI-VE
The VQA reader and model should serve as an example, but there will likely be significant differences. | closed | 2020-11-07T00:01:16Z | 2020-12-24T00:31:57Z | https://github.com/allenai/allennlp/issues/4773 | [] | dirkgr | 3 |
anselal/antminer-monitor | dash | 36 | KeyError: 'POOLS' | i'm running it on ubuntu 16.04.
i tried to add more than 70 L3+ miners to the monitor list.
here's what i see most of the times:
```
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1997, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1985, in wsgi_app
    response = self.handle_exception(e)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1540, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/root/antminer-monitor-master/app/views/antminer.py", line 53, in miners
    worker = miner_pools['POOLS'][0]['User']
KeyError: 'POOLS'
```
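For context, a hedged sketch of the kind of guard I imagine (an assumption on my side, not code from the repo): the API reply sometimes has no 'POOLS' key, for example when a miner is unreachable, so it should only be indexed when present.
```python
# Hedged sketch (assumption): guard against an API reply without a 'POOLS' key.
miner_pools = {}  # stand-in for the reply that triggered the KeyError above
pools = miner_pools.get('POOLS') or []
worker = pools[0]['User'] if pools else 'N/A'
```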
What should I do to fix that? | closed | 2017-12-22T19:26:16Z | 2018-04-24T06:30:54Z | https://github.com/anselal/antminer-monitor/issues/36 | [
":dancing_men: duplicate"
] | merttokgozoglu | 8 |
gee-community/geemap | streamlit | 1,488 | [Charts] Change the charting backend from bqplot to plotly | Colab already has plotly pre-installed. We can probably switch the charting backend from bqplot to plotly. The plotly FigureWidget provides similar functionality to bqplot.
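A minimal sketch of the widget the proposal relies on (illustrative data only, not geemap code):
```python
# Minimal illustration (made-up data): plotly's FigureWidget is an ipywidget,
# so it can live in the same interactive layouts where bqplot is used today.
import plotly.graph_objects as go

fig = go.FigureWidget()
fig.add_scatter(x=[1, 2, 3], y=[4, 1, 2], mode="lines+markers")
fig.update_layout(title="Example chart")
fig  # renders as a live widget in Jupyter/Colab
```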
https://plotly.com/python/figurewidget-app/ | closed | 2023-04-06T03:04:07Z | 2024-08-18T00:40:30Z | https://github.com/gee-community/geemap/issues/1488 | [
"Feature Request"
] | giswqs | 2 |
seleniumbase/SeleniumBase | pytest | 2,581 | Break out UC Mode's `driver.reconnect(timeout)` method into two new methods: `driver.disconnect()` and `driver.connect()` | ### Break out UC Mode's `driver.reconnect(timeout)` method into two new methods: `driver.disconnect()` and `driver.connect()`
---
This will allow a bit more flexibility, such as the ability to perform non-Selenium actions between the `driver.disconnect()` and the `driver.connect()` calls.
Note that during the disconnected phase, Selenium can't issue commands to the browser, but that also means that the browser can't detect Selenium either (it's the main component of UC Mode). | closed | 2024-03-09T21:47:35Z | 2024-03-09T23:17:06Z | https://github.com/seleniumbase/SeleniumBase/issues/2581 | [
"enhancement",
"UC Mode / CDP Mode"
] | mdmintz | 1 |
microsoft/qlib | deep-learning | 1,613 | AttributeError: 'float' object has no attribute 'lower' | I ran `python scripts/data_collector/yahoo/collector.py` to download US stock data from Yahoo Finance.
Afterwards, running the example bundled with qlib produced the following error:
AttributeError: 'float' object has no attribute 'lower'

A similar error: https://github.com/microsoft/qlib/issues/1428
| closed | 2023-07-23T01:37:49Z | 2023-10-24T03:15:34Z | https://github.com/microsoft/qlib/issues/1613 | [
"bug"
] | quant2008 | 2 |
coqui-ai/TTS | python | 3,602 | [Bug] AttributeError: 'TTS' object has no attribute 'is_multi_lingual' | ### Describe the bug
Because the connection to huggingface.co is unreliable, I downloaded the model locally, but in the end a strange error appeared.
### To Reproduce
```python
import torch
from TTS.api import TTS

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"

# Init TTS with local model path
model_path = "C:/Code/Coqui/TTS/model/tts_models--zh-CN--baker--tacotron2-DDC-GST"  # Update this path
tts = TTS(model_path=model_path + "/model_file.pth", config_path=model_path + "/config.json").to(device)

# Run TTS
# ❗ Since this model is multi-lingual voice cloning model, we must set the target speaker_wav and language
# Text to speech list of amplitude values as output
wav = tts.tts(text="你好世界!", speaker_wav="my/cloning/audio.wav", language="zh-cn")
# Text to speech to a file
tts.tts_to_file(text="你好世界!", speaker_wav="my/cloning/audio.wav", language="zh-cn", file_path="output.wav")
```
The folder model/tts_models--zh-CN--baker--tacotron2-DDC-GST contains only config.json, model_file.pth and scale_stats.npy.
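A hedged side note (my assumption, not something confirmed in this report): this particular checkpoint looks like a single-speaker, Chinese-only model, so a plain call without the cloning arguments may avoid the multi-lingual code path entirely.
```python
# Hedged sketch (assumption: this checkpoint is single-speaker and Chinese-only,
# so the voice-cloning arguments do not apply to it).
wav = tts.tts(text="你好世界!")
tts.tts_to_file(text="你好世界!", file_path="output.wav")
```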
### Expected behavior
_No response_
### Logs
```shell
C:\Users\ning\AppData\Local\Programs\Python\Python39\python.exe C:/Code/Coqui/TTS/run.py
C:\Code\Coqui\TTS\TTS\utils\audio\processor.py:6: UserWarning: A NumPy version >=1.22.4 and <1.29.0 is required for this version of SciPy (detected version 1.22.0)
import scipy.io.wavfile
> Using model: tacotron2
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:0
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:True
| > symmetric_norm:True
| > mel_fmin:50.0
| > mel_fmax:7600.0
| > pitch_fmin:0.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:C:\Users\ning\AppData\Local\tts\tts_models--zh-CN--baker--tacotron2-DDC-GST\scale_stats.npy
| > base:10
| > hop_length:256
| > win_length:1024
Traceback (most recent call last):
File "C:\Code\Coqui\TTS\run.py", line 14, in <module>
wav = tts.tts(text="你好世界!", speaker_wav="my/cloning/audio.wav", language="zh-cn")
File "C:\Code\Coqui\TTS\TTS\api.py", line 273, in tts
self._check_arguments(
File "C:\Code\Coqui\TTS\TTS\api.py", line 228, in _check_arguments
if self.is_multi_lingual and language is None:
File "C:\Users\ning\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'TTS' object has no attribute 'is_multi_lingual'
> Model's reduction rate `r` is set to: 2
进程已结束,退出代码为 1
```
### Environment
Win10
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.2.0+cpu",
"TTS": "0.22.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "Intel64 Family 6 Model 165 Stepping 2, GenuineIntel",
"python": "3.9.0",
"version": "10.0.19041"
}
}
```
### Additional context
_No response_ | closed | 2024-02-23T10:09:17Z | 2024-11-12T06:53:36Z | https://github.com/coqui-ai/TTS/issues/3602 | [
"bug",
"wontfix"
] | Niggling | 11 |
keras-team/keras | pytorch | 20,596 | Loss functions applied in alphabetical order instead of by dictionary keys in Keras 3.5.0 | **Environment info**
* Google Colab (CPU or GPU)
* Tensorflow 2.17.0, 2.17.1
* Python 3.10.12
**Problem description**
There seems to be a change in Keras 3.5.0 that has introduced a bug for models with multiple outputs.
The problem is not present in Keras 3.4.1.
Passing a dictionary as `loss` to model.compile() should result in those loss functions being applied to the respective outputs based on output name. But instead they now appear to be applied in alphabetical order of dictionary keys, leading to the wrong loss functions being applied against the model outputs.
For example, in the following snippet, "loss_small" gets applied against "output_big" when it should be applied against "output_small". It appears that the loss dictionary gets 1) re-ordered by alphabetical order of key, and then 2) the dictionary values are read off in the resultant order and applied as an ordered list against the model outputs.
```
...
output_small = Dense(1, activation="sigmoid", name="output_small")(x)
output_big = Dense(64, activation="softmax", name="output_big")(x)
model = Model(inputs=input_layer, outputs=[output_small, output_big])
model.compile(optimizer='adam',
loss={
'output_small': DebugLoss(name='loss_small'),
'output_big': DebugLoss(name='loss_big')
})
```
This conclusion is the result of flipping the orders of these components and comparing the results. Which is what the following code does...
**Code to reproduce**
```python
import sys
import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model
print(f"TensorFlow version: {tf.__version__}")
print(f"Keras version: {tf.keras.__version__}")
print(f"Python version: {sys.version}")
print()
print("Problem doesn't occur if model outputs happen to be ordered alphabetically: (big, small)")
# Generate synthetic training data
num_samples = 100
x_train = np.random.normal(size=(num_samples, 10)) # Input data
y_train_output_big = np.eye(64)[np.random.choice(64, size=num_samples)] # Shape (num_samples, 64)
y_train_output_small = np.random.choice([0, 1], size=(num_samples, 1)) # Shape (num_samples, 1)
dataset = tf.data.Dataset.from_tensor_slices((x_train, (y_train_output_big, y_train_output_small)))
dataset = dataset.batch(num_samples)
# Define model with single input and two named outputs
input_layer = Input(shape=(10,))
x = Dense(64, activation="relu")(input_layer)
output_big = Dense(64, activation="softmax", name="output_big")(x) # (100,64)
output_small = Dense(1, activation="sigmoid", name="output_small")(x) # (100,1)
model = Model(inputs=input_layer, outputs=[output_big, output_small])
# Compile with custom loss function for debugging
class DebugLoss(tf.keras.losses.Loss):
def call(self, y_true, y_pred):
print(f"{self.name} - y_true: {y_true.shape}, y_pred: {y_pred.shape}")
return tf.reduce_mean((y_true - y_pred)**2)
model.compile(optimizer='adam',
loss={
'output_big': DebugLoss(name='loss_big'),
'output_small': DebugLoss(name='loss_small')
})
# Train
tf.config.run_functions_eagerly(True)
history = model.fit(dataset, epochs=1, verbose=0)
print()
print("Problem occurs if model outputs happen to be ordered non-alphabetically: (small, big)")
# Generate synthetic training data
num_samples = 100
x_train = np.random.normal(size=(num_samples, 10)) # Input data
y_train_output_small = np.random.choice([0, 1], size=(num_samples, 1)) # Shape (num_samples, 1)
y_train_output_big = np.eye(64)[np.random.choice(64, size=num_samples)] # Shape (num_samples, 64)
dataset = tf.data.Dataset.from_tensor_slices((x_train, (y_train_output_small, y_train_output_big)))
dataset = dataset.batch(num_samples)
# Define model with single input and two named outputs
input_layer = Input(shape=(10,))
x = Dense(64, activation="relu")(input_layer)
output_small = Dense(1, activation="sigmoid", name="output_small")(x) # (100,1)
output_big = Dense(64, activation="softmax", name="output_big")(x) # (100,64)
model = Model(inputs=input_layer, outputs=[output_small, output_big])
# Compile with custom loss function for debugging
class DebugLoss(tf.keras.losses.Loss):
def call(self, y_true, y_pred):
print(f"{self.name} - y_true: {y_true.shape}, y_pred: {y_pred.shape}")
return tf.reduce_mean((y_true - y_pred)**2)
model.compile(optimizer='adam',
loss={
'output_small': DebugLoss(name='loss_small'),
'output_big': DebugLoss(name='loss_big')
})
# Train
tf.config.run_functions_eagerly(True)
history = model.fit(dataset, epochs=1, verbose=0)
```
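A hedged workaround sketch (an assumption on my side, not an official fix, reusing `model` and `DebugLoss` from the script above): since lists are applied positionally, matching the list to the output order should sidestep the alphabetical re-sorting of dict keys.
```python
# Hedged workaround sketch (assumption, not an official fix): a list of losses is
# applied positionally, so matching it to [output_small, output_big] avoids the
# alphabetical re-ordering of dictionary keys.
model.compile(
    optimizer="adam",
    loss=[DebugLoss(name="loss_small"), DebugLoss(name="loss_big")],
)
```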
**Code outputs on various environments**
Current Google Colab env - fails on second ordering:
```
TensorFlow version: 2.17.1
Keras version: 3.5.0
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Problem doesn't occur if model outputs happen to be ordered alphabetically: (big, small)
loss_big - y_true: (100, 64), y_pred: (100, 64)
loss_small - y_true: (100, 1), y_pred: (100, 1)
Problem occurs if model outputs happen to be ordered non-alphabetically: (small, big)
loss_big - y_true: (100, 1), y_pred: (100, 1)
loss_small - y_true: (100, 64), y_pred: (100, 64)
```
Downgraded TF version, no change:
```
TensorFlow version: 2.17.0
Keras version: 3.5.0
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Problem doesn't occur if model outputs happen to be ordered alphabetically: (big, small)
loss_big - y_true: (100, 64), y_pred: (100, 64)
loss_small - y_true: (100, 1), y_pred: (100, 1)
Problem occurs if model outputs happen to be ordered non-alphabetically: (small, big)
loss_big - y_true: (100, 1), y_pred: (100, 1)
loss_small - y_true: (100, 64), y_pred: (100, 64)
```
Downgraded Keras, and now get correct output for both orderings
```
TensorFlow version: 2.17.0
Keras version: 3.4.1
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Problem doesn't occur if model outputs happen to be ordered alphabetically: (big, small)
loss_big - y_true: (100, 64), y_pred: (100, 64)
loss_small - y_true: (100, 1), y_pred: (100, 1)
Problem occurs if model outputs happen to be ordered non-alphabetically: (small, big)
loss_small - y_true: (100, 1), y_pred: (100, 1)
loss_big - y_true: (100, 64), y_pred: (100, 64)
```
**Final remarks**
This seems related to https://github.com/tensorflow/tensorflow/issues/37887, but looks like someone has since tried to fix that bug and introduced another perhaps? | closed | 2024-12-05T06:41:41Z | 2025-02-13T02:55:54Z | https://github.com/keras-team/keras/issues/20596 | [
"keras-team-review-pending",
"type:Bug"
] | malcolmlett | 6 |
arogozhnikov/einops | numpy | 351 | Explicitly document expand-like semantics of einops.repeat | That's something that was brought 3 or 4 times in issues, so it better be documented in docstring and maybe somewhere in wiki | open | 2024-11-22T06:37:50Z | 2024-11-22T06:45:11Z | https://github.com/arogozhnikov/einops/issues/351 | [] | arogozhnikov | 0 |
keras-team/keras | pytorch | 20,531 | AttributeError: module 'keras_nlp' has no attribute 'models' | <string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
<string>:1: SyntaxWarning: invalid escape sequence '\/'
Traceback (most recent call last):
File "C:\Users\wangshijiang\Desktop\deep_learning\project\llm\llm.py", line 68, in <module>
preprocessor = keras_nlp.models.DebertaV3Preprocessor.from_preset(
^^^^^^^^^^^^^^^^
AttributeError: module 'keras_nlp' has no attribute 'models'
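A hedged first check (an assumption, not a confirmed diagnosis): confirm which `keras_nlp` is actually being imported, since an outdated install or a local file/folder named `keras_nlp` would shadow the real package.
```python
# Hedged first check (assumption, not a confirmed diagnosis): verify the
# imported package; a stale install or a local keras_nlp.py would shadow it.
import keras_nlp

print(keras_nlp.__version__)
print(keras_nlp.__file__)
```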

| closed | 2024-11-21T14:59:13Z | 2024-11-29T16:42:00Z | https://github.com/keras-team/keras/issues/20531 | [
"type:support",
"stat:awaiting response from contributor"
] | iwqculrbud | 4 |
man-group/notebooker | jupyter | 95 | MongoDB queries should work with sharded libraries | closed | 2022-06-08T15:36:34Z | 2023-10-11T15:31:02Z | https://github.com/man-group/notebooker/issues/95 | [
"enhancement"
] | jonbannister | 0 |
|
ml-tooling/opyrator | streamlit | 4 | Finalize docker export capabilities | **Feature description:**
Finalize capabilities to export an opyrator to a Docker image.
The export can be executed via command line:
```bash
opyrator export my_opyrator:hello_world --format=docker my-opyrator-image:latest
```
_💡 The Docker export requires that Docker is installed on your machine._
After the successful export, the Docker image can be run as shown below:
```bash
docker run -p 8080:8080 my-opyrator-image:latest
```
Running your Opyrator within this Docker image has the advantage that only a single port is required to be exposed. The separation between UI and API is done via URL paths: `http://localhost:8080/api` (API); `http://localhost:8080/ui` (UI). The UI is automatically configured to use the API for all function calls.
| closed | 2021-04-19T10:01:47Z | 2023-07-27T14:30:30Z | https://github.com/ml-tooling/opyrator/issues/4 | [
"feature",
"stale"
] | lukasmasuch | 6 |
iperov/DeepFaceLab | machine-learning | 552 | Feature request: add a button on converter gui to apply current config to ALL previous frames | Hello,
Currently there is only the "M" key, which applies the current config to just the previous frame. In my opinion, a key like the "/" key that applies the current config to ALL previous frames would be very useful. | closed | 2020-01-12T10:38:50Z | 2020-03-28T05:42:18Z | https://github.com/iperov/DeepFaceLab/issues/552 | [] | Heisen-burger | 0 |
keras-team/autokeras | tensorflow | 1,308 | Docker Image errors on mnist dataset | ### Bug Description
<!---
A clear and concise description of what the bug is.
-->
The mnist demonstration fails with the latest docker image. According to DockerHub the latest build is six months old.
Run output is:
```
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 1s 0us/step
(60000, 28, 28)
(60000,)
[5 0 4]
Traceback (most recent call last):
File "mnist.py", line 17, in <module>
clf = ak.ImageClassifier(max_trials=3)
File "/usr/local/lib/python3.6/site-packages/autokeras/image/image_supervised.py", line 122, in __init__
super().__init__(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'max_trials'
```
### Bug Reproduction
Code for reproducing the bug (from https://autokeras.com/docker/):
```
docker pull garawalid/autokeras:latest
curl https://raw.githubusercontent.com/keras-team/autokeras/master/examples/mnist.py --output mnist.py
docker run -it -v hostDir:/app --shm-size 2G garawalid/autokeras python file.py
```
Where hostDir is a path on Windows: C:\Users\knorthover\Desktop
Data used by the code:
### Expected Behavior
<!---
If not so obvious to see the bug from the running results,
please briefly describe the expected behavior.
-->
### Setup Details
Include the details about the versions of:
- OS type and version: Docker Desktop v2.3.0.4 for Windows
- Python:
- autokeras: <!--- e.g. 0.4.0, 1.0.2, master-->
- keras-tuner:
- scikit-learn:
- numpy:
- pandas:
- tensorflow:
### Additional context
<!---
If applicable, add any other context about the problem.
-->
| open | 2020-08-24T18:49:37Z | 2020-08-25T06:11:39Z | https://github.com/keras-team/autokeras/issues/1308 | [
"bug report",
"pinned"
] | knorthover | 0 |
syrupy-project/syrupy | pytest | 560 | Writing snapshots is not thread safe | **Describe the bug**
If one runs `pytest -n auto tests --snapshot-update` then one thread will overwrite the results of another thread in the same snapshot file or directory.
**To reproduce**
Steps to reproduce the behavior:
1. Run your test suite with multiple threads enabled with the snapshot update flag
2. Rerun your test suite without the snapshot update flag
3. Observe error around missing snapshot entries
**Expected behavior**
Snapshot writing should either be thread safe or a warning should be present or documented.
**Environment (please complete the following information):**
- OS: Ubuntu
- Syrupy Version: 1.4.6
- Python Version: 3.9.1
| closed | 2021-10-20T18:16:47Z | 2021-10-25T14:39:30Z | https://github.com/syrupy-project/syrupy/issues/560 | [
"bug"
] | zbyte64 | 2 |
wemake-services/django-test-migrations | pytest | 58 | Dependabot can't resolve your Python dependency files | Dependabot can't resolve your Python dependency files.
As a result, Dependabot couldn't update your dependencies.
The error Dependabot encountered was:
```
Creating virtualenv django-test-migrations-aTEqW9gF-py3.8 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...
[PackageNotFound]
Package safety (1.8.5) not found.
```
If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.
[View the update logs](https://app.dependabot.com/accounts/wemake-services/update-logs/27601735). | closed | 2020-03-26T08:34:55Z | 2020-03-26T08:34:56Z | https://github.com/wemake-services/django-test-migrations/issues/58 | [] | dependabot-preview[bot] | 0 |
errbotio/errbot | automation | 1,498 | Constant Errors When Reading from RTM Stream | ### I am...
* [ ] Reporting a bug
* [ ] Suggesting a new feature
* [x] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 6.4.1
* slackclient version: 1.3.2
* OS version: Ubuntu 18 / Mac OSX
* Python version: 3.6.5
* Using a virtual environment: yes
* Docker: Yes - Ubuntu 18 base image
* Kubernetes: Tried with and without
### Issue description
After processing a slack event/message like "$status", I see in the logs:
```
2021-01-12 22:54:06,831 DEBUG errbot.backends.slack Message size: 1607.
2021-01-12 22:54:06,833 DEBUG errbot.backends.slack No event handler available for user_change, ignoring this event
2021-01-12 22:54:06,838 DEBUG errbot.backends.slack No event handler available for user_change, ignoring this event
2021-01-12 22:54:06,839 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): slack.com:443
2021-01-12 22:54:06,840 DEBUG errbot.backends.slack No event handler available for user_change, ignoring this event
2021-01-12 22:54:06,842 DEBUG errbot.backends.slack No event handler available for user_change, ignoring this event
2021-01-12 22:54:06,842 DEBUG root RTM disconnected
2021-01-12 22:54:06,842 ERROR errbot.backends.slack Error reading from RTM stream:
Traceback (most recent call last):
File "/home/errbot/pyenv/lib/python3.6/site-packages/slackclient/server.py", line 283, in websocket_safe_read
data += "{0}\n".format(self.websocket.recv())
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 310, in recv
opcode, data = self.recv_data()
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 327, in recv_data
opcode, frame = self.recv_data_frame(control_frame)
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 340, in recv_data_frame
frame = self.recv_frame()
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 374, in recv_frame
return self.frame_buffer.recv_frame()
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_abnf.py", line 361, in recv_frame
self.recv_header()
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_abnf.py", line 309, in recv_header
header = self.recv_strict(2)
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_abnf.py", line 396, in recv_strict
bytes_ = self.recv(min(16384, shortage))
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 449, in _recv
return recv(self.sock, bufsize)
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_socket.py", line 94, in recv
"Connection is already closed.")
websocket._exceptions.WebSocketConnectionClosedException: Connection is already closed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/errbot/pyenv/lib/python3.6/site-packages/errbot/backends/slack.py", line 442, in serve_once
for message in self.sc.rtm_read():
File "/home/errbot/pyenv/lib/python3.6/site-packages/slackclient/client.py", line 235, in rtm_read
json_data = self.server.websocket_safe_read()
File "/home/errbot/pyenv/lib/python3.6/site-packages/slackclient/server.py", line 301, in websocket_safe_read
"Unable to send due to closed RTM websocket"
slackclient.server.SlackConnectionError: Unable to send due to closed RTM websocket
```
Also at times seeing:
```
2021-01-12 22:11:57,400 DEBUG errbot.backends.slack No event handler available for user_change, ignoring this event
2021-01-12 22:11:57,402 DEBUG errbot.backends.slack Message size: 1608.
2021-01-12 22:11:57,404 DEBUG errbot.backends.slack No event handler available for user_change, ignoring this event
2021-01-12 22:11:57,408 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): slack.com:443
2021-01-12 22:11:57,409 DEBUG errbot.backends.slack No event handler available for user_change, ignoring this event
2021-01-12 22:11:57,410 DEBUG errbot.backends.slack No event handler available for user_change, ignoring this event
2021-01-12 22:11:57,410 ERROR errbot.backends.slack Error reading from RTM stream:
Traceback (most recent call last):
File "/home/errbot/pyenv/lib/python3.6/site-packages/errbot/backends/slack.py", line 442, in serve_once
for message in self.sc.rtm_read():
File "/home/errbot/pyenv/lib/python3.6/site-packages/slackclient/client.py", line 235, in rtm_read
json_data = self.server.websocket_safe_read()
File "/home/errbot/pyenv/lib/python3.6/site-packages/slackclient/server.py", line 283, in websocket_safe_read
data += "{0}\n".format(self.websocket.recv())
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 310, in recv
opcode, data = self.recv_data()
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 327, in recv_data
opcode, frame = self.recv_data_frame(control_frame)
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 358, in recv_data_frame
self.pong(frame.data)
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 301, in pong
self.send(payload, ABNF.OPCODE_PONG)
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 250, in send
return self.send_frame(frame)
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 275, in send_frame
l = self._send(data)
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_core.py", line 445, in _send
return send(self.sock, data)
File "/home/errbot/pyenv/lib/python3.6/site-packages/websocket/_socket.py", line 117, in send
return sock.send(data)
File "/usr/lib/python3.6/ssl.py", line 944, in send
return self._sslobj.write(data)
File "/usr/lib/python3.6/ssl.py", line 642, in write
return self._sslobj.write(data)
BrokenPipeError: [Errno 32] Broken pipe
```
### Steps to reproduce
Run errbot and wait for it to start up and connect to Slack. Then run any "$status" command and the bot may respond at times before running into the errors above or just throw the error without responding to the command.
### Additional info
I am trying to understand why the connection to slack keeps closing and tried playing around with the XMPP_KEEPALIVE_INTERVAL property in the config but with no results/improvements.
I am running into the "Broken Pipe" more consistently and any help would be appreciated. | closed | 2021-01-12T23:08:25Z | 2021-01-15T15:51:30Z | https://github.com/errbotio/errbot/issues/1498 | [] | pchang388 | 2 |
littlecodersh/ItChat | api | 875 | This framework is no longer being updated; you can look at https://wkteam.gitbook.io/api/ instead, the downside is that it is paid | closed | 2019-09-27T05:47:12Z | 2020-03-30T08:39:06Z | https://github.com/littlecodersh/ItChat/issues/875 | [] | WangPney | 6 |
|
axnsan12/drf-yasg | django | 484 | Not showing post parameters for ObtainAuthToken view | ObtainAuthToken https://www.django-rest-framework.org/api-guide/authentication/#tokenauthentication
` url(r'^api-token-auth/', CustomAuthToken.as_view())`
Why django-rest-swagger is showing them by default?
i did this way https://drf-yasg.readthedocs.io/en/stable/custom_spec.html#the-swagger-auto-schema-decorator
```
decorated_auth_view = swagger_auto_schema(
method='post',
request_body=AuthTokenSerializer
)(obtain_auth_token)
urlpatterns = [
...
url(r'^login/$', decorated_auth_view)
]
``` | closed | 2019-10-29T02:06:46Z | 2020-02-17T17:04:52Z | https://github.com/axnsan12/drf-yasg/issues/484 | [] | AlexByte | 3 |
jumpserver/jumpserver | django | 14,906 | [Bug] JumpServer 3.10.17: template-based account authorization requires re-authorization after adding new assets | ### Product Version
3.10.17
### Product Edition
- [ ] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
JumpServer 3.10.17
### 🐛 Bug Description
Problem: In an asset authorization, I authorize with the template information under "specified accounts", and the authorization targets the node "Default". However, after adding a new asset under that node, I cannot log in to that server; it only works after re-authorizing. Impact: when a new asset is added, if multiple users need to log in, all of them have to be re-authorized; even though the users' authorization uses the node and the node contains the newly added machine, login still fails.
### Recurrence Steps
Problem: In an asset authorization, I authorize with the template information under "specified accounts", and the authorization targets the node "Default". However, after adding a new asset under that node, I cannot log in to that server; it only works after re-authorizing. Impact: when a new asset is added, if multiple users need to log in, all of them have to be re-authorized; even though the users' authorization uses the node and the node contains the newly added machine, login still fails.
### Expected Behavior
_No response_
### Additional Information
_No response_
### Attempted Solutions
_No response_ | closed | 2025-02-20T10:15:42Z | 2025-02-20T10:16:24Z | https://github.com/jumpserver/jumpserver/issues/14906 | [
"🐛 Bug"
] | wangjingang | 0 |
dot-agent/nextpy | pydantic | 41 | Add support for Anthropic models | closed | 2023-09-11T05:46:57Z | 2023-11-18T04:00:14Z | https://github.com/dot-agent/nextpy/issues/41 | [] | anubrag | 1 |
|
lanpa/tensorboardX | numpy | 115 | Problem with specifying run name | I created the summary writer object like follows:
`self.writer = SummaryWriter('logs', self.name + '_' + datetime.now().strftime('%D-%T'))`
Scalars I render in multiple experiments end up displayed in a single run named '.', so I can't distinguish between experiments. I used the tensorboard packaged with tensorflow-1.1 and tensorboardX-1.0. | closed | 2018-03-29T06:59:46Z | 2018-03-31T10:08:11Z | https://github.com/lanpa/tensorboardX/issues/115 | [] | akamaus | 2 |
dask/dask | pandas | 11,722 | `da.Array.__setitem__` blindly assumes that the chunks are writeable | If one calls `da.Array.__getitem__(idx).__setitem__(())` where idx selects a scalar, all seems fine but crashes on `compute()`.
The object returned by `__getitem__` is another da.Array, which is writeable, but internally the chunk contains a `np.generic`, which is not.
Moving on to `__setitem__`, dask blindly assumes that its chunks are always `np.ndarray` objects:
```python
>>> import dask.array as da
>>> x = da.zeros(())
>>> x[()] = 1
>>> x.compute()
array(1.)
>>> y = da.zeros(1)
>>> z = y[0] # a writeable da.Array, but the chunk is a read-only np.generic
>>> z[()] = 1 # No failure here
>>> z.compute() # TypeError: 'numpy.float64' object does not support item assignment
```
In the above snippet, x and z are outwardly identical, but z's graph is corrupted.
The same code works fine when the meta is a cupy array, because `__getitem__` results in chunks which are 0-dimensional, writeable `cp.Array` objects. | closed | 2025-02-06T16:54:12Z | 2025-02-07T06:36:18Z | https://github.com/dask/dask/issues/11722 | [
"array",
"bug",
"p2"
] | crusaderky | 0 |
prkumar/uplink | rest-api | 157 | Is there any approach of adopting HTTP stream from uplink calls via the corresponding supported parameter of requests.session.Session.send | requests actually supports this as we have adopted in daily development.
After going through the code of uplink it seems that the final parameters of requests.session.request is shadowed by hard coded values from uplink.execution.DefaultRequestExecution.send
However not being 100% sure on this thus I an trying to get it confirmed directly here.
And it can be a potential useful feature not yet supported indeed, while dealing with large data, it seems impossible to keep them in the memory by all means. (It calls for a repeated called callback assignment mechanism as well for this feature)
Providing the signature of requests.session.Session.send:
```
def request(self, method, url,
params=None, data=None, headers=None, cookies=None, files=None,
auth=None, timeout=None, allow_redirects=True, proxies=None,
hooks=None, stream=None, verify=None, cert=None, json=None):
"""Constructs a :class:`Request <Request>`, prepares it and sends it.
Returns :class:`Response <Response>` object.
:param method: method for the new :class:`Request` object.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary or bytes to be sent in the query
string for the :class:`Request`.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json to send in the body of the
:class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the
:class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the
:class:`Request`.
:param files: (optional) Dictionary of ``'filename': file-like-objects``
for multipart encoding upload.
:param auth: (optional) Auth tuple or callable to enable
Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Set to True by default.
:type allow_redirects: bool
:param proxies: (optional) Dictionary mapping protocol or protocol and
hostname to the URL of the proxy.
:param stream: (optional) whether to immediately download the response
content. Defaults to ``False``.
:param verify: (optional) Either a boolean, in which case it controls whether we verify
the server's TLS certificate, or a string, in which case it must be a path
to a CA bundle to use. Defaults to ``True``.
:param cert: (optional) if String, path to ssl client cert file (.pem).
If Tuple, ('cert', 'key') pair.
:rtype: requests.Response
"""
``` | closed | 2019-04-17T11:43:54Z | 2019-04-20T01:31:53Z | https://github.com/prkumar/uplink/issues/157 | [] | xiamubobby | 1 |
fbdesignpro/sweetviz | pandas | 124 | Add an argument to silence the progress bar | Is it possible to add an argument to silence the progress bar?
We want to use SweetViz in an automatique pipeline and store the report in a database.
We already have a lot of logs in our process, hence we would love to get rid of the progress bar logs.
We can deactivate tqdm before loading SweetViz, but that would also impact others parts of our process.
One solution might be to add an argument in `DataframeReport.__init__` and set `self.progress_bar` to a fake logger.
| closed | 2022-10-04T15:45:58Z | 2023-11-16T14:12:44Z | https://github.com/fbdesignpro/sweetviz/issues/124 | [
"feature request"
] | LexABzH | 4 |
vitalik/django-ninja | rest-api | 600 | [BUG] Cannot use different pagination classes for the same item schema | **Describe the bug**
This was a real headache of a bug to track down.
```py
@router.get(
path="/foo",
response={200: list[MySchema]},
)
@paginate(FooPagination)
def list_foo(request):
...
@router.get(
path="/bar",
response={200: list[MySchema]},
)
@paginate(BarPagination)
def list_bar(request):
...
```
Assume that FooPagination and BarPagination have different Input and Output schemas. Using the above, we get a single PagedMySchema schema in the OpenAPI definition, instead of two. One of them overwrites the other. Effectively, it's currently not possible to use two different pagination classes for the same item schema.
**Versions (please complete the following information):**
- Python version: [e.g. 3.6] 3.9
- Django version: [e.g. 4.0] 3.2
- Django-Ninja version: [e.g. 0.16.2] 0.19.1
- Pydantic version: [e.g. 1.9.0] 1.9.1
| open | 2022-10-29T12:30:43Z | 2022-11-20T14:08:55Z | https://github.com/vitalik/django-ninja/issues/600 | [] | denizdogan | 2 |
litestar-org/litestar | asyncio | 3,998 | Bug: link format in rate limit | ### Description
Something in the rate-limit link is off.

### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
```bash
""
```
### Logs
```bash
```
### Litestar Version
2.14
### Platform
- [x] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2025-02-14T06:52:32Z | 2025-02-17T18:37:30Z | https://github.com/litestar-org/litestar/issues/3998 | [
"Bug :bug:",
"Documentation :books:"
] | euri10 | 0 |
xonsh/xonsh | data-science | 5,496 | Macs and Next don't have a login shell | _Originally posted by @bestlem in https://github.com/xonsh/xonsh/issues/5488#issuecomment-2162391775_
> I also work on Macs and Next which don't have a login shell. On other Unixes the login shell will already have been read, so I am used to having $PATH set by something and I don't want that in the script, but something has to be read in, e.g. using ~/.xonshrc etc.
| closed | 2024-06-13T03:11:54Z | 2024-06-13T15:14:02Z | https://github.com/xonsh/xonsh/issues/5496 | [
"xonshrc",
"integration-with-other-tools"
] | anki-code | 6 |
scikit-tda/kepler-mapper | data-visualization | 185 | Filter out nodes in graph based on number of elements | **Is your feature request related to a problem? Please describe.**
I have a point cloud with approximately 2000 data points. The clustering is based on a precomputed distance matrix. The graph produced by Mapper has a lot of nodes with between 1 and 5 elements, cluttering the visualization of the graph/ complex.
**Describe the solution you'd like**
I would like to specify a minimum number of elements in a node of the graph for visualization.
**Describe alternatives you've considered**
Filter out minor nodes from the dictionary produced by `mapper.map` with a dict comprehension. I have looked through the `scikit-learn` docs for a way to filter out clusters with a small number of elements, but have not found such a parameter when using a precomputed metric. Increasing `distance_threshold` helps up to a certain extent.
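For reference, the kind of post-hoc filtering I mean (a rough sketch; it assumes the dict returned by `mapper.map` keeps node membership under `"nodes"` and adjacency under `"links"`, and `min_size` is an arbitrary cutoff):
```python
# graph = mapper.map(lens, distance_matrix, ...)  # as in the setup described above
min_size = 5  # arbitrary cutoff

kept_nodes = {
    node_id: members
    for node_id, members in graph["nodes"].items()
    if len(members) >= min_size
}
kept_links = {
    src: [dst for dst in dsts if dst in kept_nodes]
    for src, dsts in graph["links"].items()
    if src in kept_nodes
}
filtered_graph = dict(graph, nodes=kept_nodes, links=kept_links)
```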
**Additional context**
I would like to discuss how to remove nodes from the graph without affecting the topology of the dataset.
| open | 2020-01-06T16:38:57Z | 2021-03-08T22:41:14Z | https://github.com/scikit-tda/kepler-mapper/issues/185 | [] | holmbuar | 18 |
ray-project/ray | python | 50,955 | [core] raylet memory leak | ### What happened + What you expected to happen
Seeing consistent memory growth in raylet process on a long-running cluster:
<img width="1371" alt="Image" src="https://github.com/user-attachments/assets/ebcdf7f7-0c1e-4ae7-91f0-7d18a0d7d183" />
<img width="1361" alt="Image" src="https://github.com/user-attachments/assets/67004d58-082b-443b-b666-4a1978150e00" />
### Versions / Dependencies
2.41
### Reproduction script
n/a
### Issue Severity
None | closed | 2025-02-27T18:02:35Z | 2025-03-08T00:20:31Z | https://github.com/ray-project/ray/issues/50955 | [
"bug",
"P0",
"core"
] | zcin | 1 |
tensorflow/tensor2tensor | deep-learning | 1,483 | Code for "Model-Based Reinforcement Learning for Atari" | Where can I find the code for "Model-Based Reinforcement Learning for Atari" (https://arxiv.org/pdf/1903.00374.pdf),
which says the code is in https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/rl ?
| closed | 2019-03-12T18:20:57Z | 2019-03-13T17:25:22Z | https://github.com/tensorflow/tensor2tensor/issues/1483 | [] | daiwk | 2 |
JaidedAI/EasyOCR | machine-learning | 1,249 | While running a custom EasyOCR model, this error occurs. How can I fix it? | line 231, in __init__
self.recognizer, self.converter = get_recognizer(recog_network, network_params,\
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
line 173, in get_recognizer
model.load_state_dict(new_state_dict)
line 2189, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Model:
| open | 2024-05-08T05:09:36Z | 2025-01-31T01:56:34Z | https://github.com/JaidedAI/EasyOCR/issues/1249 | [] | sumathipriya | 1 |
python-restx/flask-restx | flask | 335 | Incorrect error handler response when exception contains a `data` field | If you create a handler for an exception that has a `data` attribute, then the response data is not what you returned from the handler but rather whatever is in that data attribute. This causes problems with popular Python libraries such as marshmallow.
For example, consider the following
```
class MyError(Exception):
    def __init__(self, *args, data=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.data = data

@api.errorhandler(MyError)
def handle_my_error(err):
    return {"foo": "bar"}, 500

class MyResource(Resource):
    def get(self):
        raise MyError(data={"a": "b"})
```
Instead of receiving the response `{"foo": "bar"}` whenever this error occurs, you get whatever is in `<my_error>.data` - here, that is `{"a": "b"}`.
The culprit in the codebase is in `api.py` under `handle_error`, which overrides the correct response with whatever is in the data field of the exception:
```
data = getattr(e, "data", default_data)
```
The error handling is automatically choosing `<my_error>.data` over the returned data.
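To make that concrete, here is a tiny self-contained illustration of why that line wins (plain Python, no Flask involved):
```python
class MyError(Exception):
    pass

e = MyError()
e.data = {"a": "b"}  # the attribute that marshmallow-style errors carry

default_data = {"message": "Internal Server Error"}
data = getattr(e, "data", default_data)
print(data)  # {'a': 'b'} -- whatever sits on the exception wins
```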
Ironically, we're recommending that people use marshmallow instead of the built-in reqparse library, but since marshmallow errors have a `data` field on them, this causes incompatibility between our error handlers and marshmallow errors.
Raising a `marshmallow.exceptions.ValidationError` will cause error handlers to send an incorrect response. | open | 2021-06-09T16:49:31Z | 2021-06-09T16:52:56Z | https://github.com/python-restx/flask-restx/issues/335 | [
"bug"
] | mcsimps2 | 1 |
hbldh/bleak | asyncio | 1,180 | bleak should have a consistent cross-platform exception if no Bluetooth is available on a device | * bleak version: probably 0.19.5 (`bleak` seems to have no `.__version__`)
* Python version: 3.8.10
* Operating System: Win7/Win10, MacOS
* BlueZ version (`bluetoothctl -v`) in case of Linux: 5.64
### Description
Trying to use `BleakScanner.discover` when no Bluetooth hardware is present, or when Bluetooth is turned off, raises a different exception on each platform.
On Windows it raises an `OSError` (sorry for the German error message; it roughly translates to "ClassFactory cannot supply requested class"):
```
OSError: [WinError -2147221231] ClassFactory kann angeforderte Klasse nicht liefern
```
On MacOS it raises (if Bluetooth is turned off) a generic `BleakError('Bluetooth device is turned off')`
On Linux it raises (if Bluetooth is turned off) a generic `BleakError("No powered Bluetooth adapters found")`
This makes it difficult to handle this situation ('no Bluetooth') properly in a cross-platform manner, e.g. with `try... except` clauses.
It would be nice to have a consistent, platform-independent exception, e.g. `BleakBluetoothNotFoundError`, for this situation.
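In the meantime, the catch-all we use looks roughly like this (a sketch; it assumes `BleakError` from `bleak.exc` plus `OSError` covers all three cases above):
```python
import asyncio

from bleak import BleakScanner
from bleak.exc import BleakError


async def discover_or_none():
    try:
        return await BleakScanner.discover()
    except (BleakError, OSError):
        # Treat any of the platform-specific failures above as "no Bluetooth available".
        return None


asyncio.run(discover_or_none())
```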
### What I Did
Minimal workable example:
Run this on different devices (Windows/Mac/Linux) with or without Bluetooth adapters or with Bluetooth turned off:
```
import asyncio
from bleak import BleakScanner, BleakClient
async def test():
    devices = await BleakScanner.discover()
asyncio.run(test())
```
### Logs
see above in **Description**
Edit: Added Exception for Linux
| open | 2022-12-22T14:03:31Z | 2022-12-23T10:57:32Z | https://github.com/hbldh/bleak/issues/1180 | [
"enhancement",
"Backend: BlueZ",
"Backend: Core Bluetooth",
"Backend: WinRT"
] | MarkusPiotrowski | 2 |
pywinauto/pywinauto | automation | 1,035 | Is it possible to identify the title of the window | I recently started using pywinauto . I am trying to fetch the title of the pop-up window.
FYI, the application backend is UIA.
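For context, a rough sketch of where I am stuck (the window title pattern is just a placeholder):
```python
from pywinauto import Application

app = Application(backend="uia").connect(title_re=".*MyApp.*")  # placeholder pattern
popup = app.top_window()
# How do I reliably read this pop-up window's title at this point?
```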
So I am curious: is it possible to find the title of the pop-up window using pywinauto? | open | 2021-01-27T05:40:26Z | 2021-06-23T12:35:54Z | https://github.com/pywinauto/pywinauto/issues/1035 | [
"question"
] | sana-aawan | 2 |
localstack/localstack | python | 11,445 | bug: describe-security-group-rules output not consistent | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I have a security group with one ingress and one egress rule. When using the `describe-security-group-rules` call with a security-group-id filter, it correctly returns the two rules. However, if I try to use the `describe-security-group-rules` call and filter by security-group-rule-id, it does not return the rule as expected
### Expected Behavior
When listing security-group-rules by identifier, the given rule should be returned
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
```bash
# prepare
VPC_ID=$(aws --endpoint-url=http://localhost:4566 --region=us-east-1 ec2 create-vpc --cidr-block 10.0.0.0/16 | jq -r '.Vpc.VpcId')
SG_ID=$(aws --endpoint-url=http://localhost:4566 --region=us-east-1 ec2 create-security-group --group-name test --description "test" --vpc-id $VPC_ID | jq -r '.GroupId')
# create egress rule
RULE_ID=$(aws --endpoint-url=http://localhost:4566 --region=us-east-1 ec2 authorize-security-group-egress --group-id $SG_ID --protocol tcp --port 80 --cidr 0.0.0.0/0 | jq -r '.SecurityGroupRules[0].SecurityGroupRuleId')
# this returns the rule created above
aws --endpoint-url="http://localhost:4566" --region=us-east-1 ec2 describe-security-group-rules --filter Name=group-id,Values=$SG_ID
# this returns an empty list
aws --endpoint-url="http://localhost:4566" --region=us-east-1 ec2 describe-security-group-rules --security-group-rule-ids $RULE_ID
```
### Environment
```markdown
- OS: macOS
- LocalStack:
LocalStack version: 3.6.1.dev
LocalStack build date: 2024-08-16
LocalStack build git hash: 1fafd6da1
```
### Anything else?
_No response_ | closed | 2024-09-02T16:18:04Z | 2024-09-06T20:18:41Z | https://github.com/localstack/localstack/issues/11445 | [
"type: bug",
"aws:ec2",
"status: backlog"
] | dnlopes | 1 |
babysor/MockingBird | pytorch | 166 | Asking whether this project has a corresponding published paper | closed | 2021-10-23T05:00:19Z | 2022-03-07T15:43:14Z | https://github.com/babysor/MockingBird/issues/166 | [] | leona66 | 2 |
|
charlesq34/pointnet | tensorflow | 274 | [Part segmentation] wrong concatenation with out5 | Dear authors and all,
I would like to discuss the feature concatenation code for part segmentation, specifically where local features and the global feature are concatenated in **pointnet/part_seg/pointnet_part_seg.py**.
According to the detailed network architecture described in the supplement of the main paper, the feature dimension for the concatenation should be 3024, which is the sum of the following feature sizes: [64,128,128,128,512,2048,16].
However, in lines #86-122 you added the variable `out5` (2048) instead of `net_transformed` (128) to `concat`, which makes the dimension 4944 by adding [64,128,128,512,2048,2048,16].
In sum, according to your paper, the concatenation `[expand, out1, out2, out3, out4, out5]` should, in my opinion, be fixed to `[expand, out1, out2, out3, net_transformed, out4]`.
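Just to make the dimension claim easy to verify, the two widths are simply the sums of the listed feature sizes:
```python
paper_widths = [64, 128, 128, 128, 512, 2048, 16]   # widths listed in the supplement
code_widths  = [64, 128, 128, 512, 2048, 2048, 16]  # widths actually concatenated in the code
print(sum(paper_widths))  # 3024
print(sum(code_widths))   # 4944
```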
Can you tell me which version is correct for your experiment setting?
```
out1 = tf_util.conv2d(input_image, 64, [1,K], padding='VALID', stride=[1,1],
bn=True, is_training=is_training, scope='conv1', bn_decay=bn_decay)
out2 = tf_util.conv2d(out1, 128, [1,1], padding='VALID', stride=[1,1],
bn=True, is_training=is_training, scope='conv2', bn_decay=bn_decay)
out3 = tf_util.conv2d(out2, 128, [1,1], padding='VALID', stride=[1,1],
bn=True, is_training=is_training, scope='conv3', bn_decay=bn_decay)
with tf.variable_scope('transform_net2') as sc:
K = 128
transform = get_transform_K(out3, is_training, bn_decay, K)
end_points['transform'] = transform
squeezed_out3 = tf.reshape(out3, [batch_size, num_point, 128])
net_transformed = tf.matmul(squeezed_out3, transform)
net_transformed = tf.expand_dims(net_transformed, [2])
out4 = tf_util.conv2d(net_transformed, 512, [1,1], padding='VALID', stride=[1,1],
bn=True, is_training=is_training, scope='conv4', bn_decay=bn_decay)
out5 = tf_util.conv2d(out4, 2048, [1,1], padding='VALID', stride=[1,1],
bn=True, is_training=is_training, scope='conv5', bn_decay=bn_decay)
out_max = tf_util.max_pool2d(out5, [num_point,1], padding='VALID', scope='maxpool')
...
# segmentation network
one_hot_label_expand = tf.reshape(input_label, [batch_size, 1, 1, cat_num])
out_max = tf.concat(axis=3, values=[out_max, one_hot_label_expand])
expand = tf.tile(out_max, [1, num_point, 1, 1])
concat = tf.concat(axis=3, values=[expand, out1, out2, out3, out4, out5])
```
| open | 2021-03-29T07:37:47Z | 2021-03-29T07:37:47Z | https://github.com/charlesq34/pointnet/issues/274 | [] | hyunjinku | 0 |
erdewit/ib_insync | asyncio | 628 | ib.trades() results change after calling ib.reqAllOpenOrders() | I am noticing that in my current IB account, I have a limit order placed from the TWS GUI that only shows up in trades after ib.reqAllOpenOrders() is called.
**Example:**
```
ib = IBC.connect()
trades = ib.trades()
[t for t in trades if t.contract.localSymbol == 'ASNS']
```
**Returns:**
[Trade(contract=Stock(conId=626259992, symbol='ASNS', right='?', exchange='SMART', currency='USD', localSymbol='ASNS', tradingClass='SCM'), order=Order(orderId=36593, clientId=1, permId=302371717, action='BUY', totalQuantity=5.0, orderType='LMT', lmtPrice=1.14, auxPrice=0.0, tif='GTC', ocaType=3, displaySize=2147483647, outsideRth=True, rule80A='0', openClose='', volatilityType=0, deltaNeutralOrderType='None', referencePriceType=0, account='DU6019510', clearingIntent='IB', adjustedOrderType='None', cashQty=0.0, dontUseAutoPriceForHedge=True), orderStatus=OrderStatus(orderId=36593, status='Submitted', filled=0.0, remaining=5.0, avgFillPrice=0.0, permId=302371717, parentId=0, lastFillPrice=0.0, clientId=1, whyHeld='', mktCapPrice=0.0), fills=[], log=[TradeLogEntry(time=datetime.datetime(2023, 8, 26, 17, 0, 11, 889763, tzinfo=datetime.timezone.utc), status='Submitted', message='', errorCode=0)])]
**Then only when ib.reqAllOpenOrders() is called:**
```
open_orders = ib.reqAllOpenOrders()
trades = ib.trades()
[t for t in trades if t.contract.localSymbol == 'ASNS']
```
**The same trades request returns:**
[Trade(contract=Stock(conId=626259992, symbol='ASNS', right='?', exchange='SMART', currency='USD', localSymbol='ASNS', tradingClass='SCM'), order=Order(orderId=36593, clientId=1, permId=302371717, action='BUY', totalQuantity=5.0, orderType='LMT', lmtPrice=1.14, auxPrice=0.0, tif='GTC', ocaType=3, displaySize=2147483647, outsideRth=True, rule80A='0', openClose='', volatilityType=0, deltaNeutralOrderType='None', referencePriceType=0, account='DU6019510', clearingIntent='IB', adjustedOrderType='None', cashQty=0.0, dontUseAutoPriceForHedge=True), orderStatus=OrderStatus(orderId=36593, status='Submitted', filled=0.0, remaining=5.0, avgFillPrice=0.0, permId=302371717, parentId=0, lastFillPrice=0.0, clientId=1, whyHeld='', mktCapPrice=0.0), fills=[], log=[TradeLogEntry(time=datetime.datetime(2023, 8, 26, 17, 0, 11, 889763, tzinfo=datetime.timezone.utc), status='Submitted', message='', errorCode=0)]),
Trade(contract=Stock(conId=626259992, symbol='ASNS', right='?', exchange='SMART', currency='USD', localSymbol='ASNS', tradingClass='SCM'), order=Order(permId=1674124202, action='BUY', totalQuantity=5.0, orderType='LMT', lmtPrice=0.95, auxPrice=0.0, tif='GTC', ocaType=3, displaySize=2147483647, rule80A='0', openClose='', volatilityType=0, deltaNeutralOrderType='None', referencePriceType=0, account='DU6019510', clearingIntent='IB', adjustedOrderType='None', cashQty=0.0, dontUseAutoPriceForHedge=True), orderStatus=OrderStatus(orderId=0, status='PreSubmitted', filled=0.0, remaining=5.0, avgFillPrice=0.0, permId=1674124202, parentId=0, lastFillPrice=0.0, clientId=0, whyHeld='', mktCapPrice=0.0), fills=[], log=[TradeLogEntry(time=datetime.datetime(2023, 8, 26, 17, 1, 30, 725609, tzinfo=datetime.timezone.utc), status='PreSubmitted', message='', errorCode=0)])]
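For what it's worth, I can separate the two kinds of orders in the result like this (a sketch; it relies on manually placed TWS orders reporting `clientId == 0`, which I am not sure is guaranteed):
```python
api_trades = [t for t in trades if t.orderStatus.clientId == ib.client.clientId]
gui_trades = [t for t in trades if t.orderStatus.clientId == 0]
```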
It appears that ib.trades() is now returning limit orders from the client permId and from the TWS GUI permId? Is the behavior expected or a Bug? | closed | 2023-08-26T17:19:46Z | 2023-08-26T18:07:17Z | https://github.com/erdewit/ib_insync/issues/628 | [] | jakemdrew | 2 |
streamlit/streamlit | data-science | 10,679 | Changes in Modularized Components Not Auto-Reflecting | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
I've modularized my Streamlit app for better maintainability, but changes in component files don't update in the app upon rerun or refresh. Updates only appear after manually restarting the app from the terminal.
### Reproducible Code Example
```Python
```
### Steps To Reproduce
1. Modularize a Streamlit app into separate component files.
2. Run it with `streamlit run app.py`.
3. Modify a component file and save it.
4. Rerun or refresh the app and check if the changes are applied.
### Expected Behavior
After making changes to any component files and saving them, the Streamlit app should automatically detect the changes and reflect them on the next rerun or refresh.
### Current Behavior
Changes in the component files are not reflected in the Streamlit app until the app is manually stopped and restarted.
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.0
- Python version: 3.10
- Operating System: Linux Mint 21.3 Virginia
- Browser: Chrome
### Additional Information
My `config.toml` settings are as follows:
```
[server]
runOnSave = true
fileWatcherType = "auto"
``` | closed | 2025-03-07T12:42:36Z | 2025-03-21T14:54:56Z | https://github.com/streamlit/streamlit/issues/10679 | [
"type:bug",
"status:cannot-reproduce"
] | G0v1ndD3v | 4 |
pydata/bottleneck | numpy | 191 | Preparing to release bottleneck 1.3.0 | I am getting ready to release bottleneck 1.2.2. The only thing left to do is testing.
The following people gave test reports on the pre-release of bottleneck 1.2.1, so I'm pinging you again in case you have time to test this release (the master branch): @cgohlke @itdaniher @toobaz @shoyer and anyone else. | closed | 2018-05-30T17:08:40Z | 2019-11-19T06:23:41Z | https://github.com/pydata/bottleneck/issues/191 | [] | kwgoodman | 40 |
marimo-team/marimo | data-science | 3,775 | on module change: restart kernel | ### Description
Who knows how impure some modules are: depending on how they were loaded they might 'remember' things, or there may be `global` state.
### Suggested solution
Restart the kernel to (almost) go back to the Jupyter way of ensuring reproducibility.
### Alternative
using `marimo run` (which for now starts from the top on changes).
### Additional context
_No response_ | open | 2025-02-13T00:31:01Z | 2025-02-13T00:31:21Z | https://github.com/marimo-team/marimo/issues/3775 | [
"enhancement"
] | majidaldo | 0 |
ionelmc/pytest-benchmark | pytest | 50 | Error in pygal.graph.box import is_list_like | version: pytest-benchmark-3.0.0
How to reproduce:
py.test <file> --benchmark-histogram
output:
File "/usr/local/lib/python2.7/dist-packages/pytest_benchmark/histogram.py", line 8, in <module>
raise ImportError(exc.args, "Please install pygal and pygaljs or pytest-benchmark[histogram]")
ImportError: (('cannot import name is_list_like',), 'Please install pygal and pygaljs or pytest-benchmark[histogram]')
It seems to be an issue in pygal. I also tried `from pygal.graph.box import is_list_like`, and it raises an error.
What is the proper way to work around it?
| closed | 2016-04-25T01:40:59Z | 2017-03-27T19:28:23Z | https://github.com/ionelmc/pytest-benchmark/issues/50 | [] | kirotawa | 3 |
plotly/dash | data-science | 2,984 | [QUESTION] Does Dash have an official logo | Does Dash have a purely official logo image that doesn't include the word `plotly`. | closed | 2024-09-05T09:17:24Z | 2024-09-26T16:19:08Z | https://github.com/plotly/dash/issues/2984 | [
"feature",
"P2"
] | CNFeffery | 5 |
autogluon/autogluon | scikit-learn | 4,073 | [BUG] TimeSeriesPredictor predict bug, similar to #2208, not fixed in v0.8.2 despite claims | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [X] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
This is very similar to #2208. I get basically the exact same error traceback:
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /tmp/ipykernel_1500158/1599776890.py:1 in <module> │
│ │
│ [Errno 2] No such file or directory: '/tmp/ipykernel_1500158/1599776890.py' │
│ │
│ /fs/cml-projects/teamgahsp/anaconda/envs/test_env/lib/python3.9/site-packages/autogluon/timeseri │
│ es/predictor.py:630 in predict │
│ │
│ 627 │ │ # Don't use data.item_ids in case data is not a TimeSeriesDataFrame │
│ 628 │ │ original_item_id_order = data.reset_index()[ITEMID].unique() │
│ 629 │ │ data = self._check_and_prepare_data_frame(data) │
│ ❱ 630 │ │ predictions = self._learner.predict(data, known_covariates=known_covariates, mod │
│ 631 │ │ return predictions.reindex(original_item_id_order, level=ITEMID) │
│ 632 │ │
│ 633 │ def evaluate(self, data: Union[TimeSeriesDataFrame, pd.DataFrame], **kwargs): │
│ │
│ /fs/cml-projects/teamgahsp/anaconda/envs/test_env/lib/python3.9/site-packages/autogluon/timeseri │
│ es/learner.py:198 in predict │
│ │
│ 195 │ │ data = self.feature_generator.transform(data) │
│ 196 │ │ known_covariates = self.feature_generator.transform_future_known_covariates(know │
│ 197 │ │ known_covariates = self._align_covariates_with_forecast_index(known_covariates=k │
│ ❱ 198 │ │ return self.load_trainer().predict( │
│ 199 │ │ │ data=data, known_covariates=known_covariates, model=model, use_cache=use_cac │
│ 200 │ │ ) │
│ 201 │
│ │
│ /fs/cml-projects/teamgahsp/anaconda/envs/test_env/lib/python3.9/site-packages/autogluon/timeseri │
│ es/learner.py:58 in load_trainer │
│ │
│ 55 │ │
│ 56 │ def load_trainer(self) -> AbstractTimeSeriesTrainer: │
│ 57 │ │ """Return the trainer object corresponding to the learner.""" │
│ ❱ 58 │ │ return super().load_trainer() # noqa │
│ 59 │ │
│ 60 │ def fit( │
│ 61 │ │ self, │
│ │
│ /fs/cml-projects/teamgahsp/anaconda/envs/test_env/lib/python3.9/site-packages/autogluon/core/lea │
│ rner/abstract_learner.py:118 in load_trainer │
│ │
│ 115 │ │ │ │ raise AssertionError("Trainer does not exist.") │
│ 116 │ │ │ # trainer_path is used to determine if there's a trained trainer │
│ 117 │ │ │ # model_context contains the new trainer_path with updated context │
│ ❱ 118 │ │ │ return self.trainer_type.load(path=self.model_context, reset_paths=self.rese │
│ 119 │ │
│ 120 │ # reset_paths=True if the learner files have changed location since fitting. │
│ 121 │ # TODO: Potentially set reset_paths=False inside load function if it is the same pat │
│ │
│ /fs/cml-projects/teamgahsp/anaconda/envs/test_env/lib/python3.9/site-packages/autogluon/timeseri │
│ es/trainer/abstract_trainer.py:152 in load │
│ │
│ 149 │ │ │ return load_pkl.load(path=load_path) │
│ 150 │ │ else: │
│ 151 │ │ │ obj = load_pkl.load(path=load_path) │
│ ❱ 152 │ │ │ obj.set_contexts(path) │
│ 153 │ │ │ obj.reset_paths = reset_paths │
│ 154 │ │ │ return obj │
│ 155 │
│ │
│ /fs/cml-projects/teamgahsp/anaconda/envs/test_env/lib/python3.9/site-packages/autogluon/timeseri │
│ es/trainer/abstract_trainer.py:116 in set_contexts │
│ │
│ 113 │ │ return self.path + self.trainer_file_name │
│ 114 │ │
│ 115 │ def set_contexts(self, path_context: str) -> None: │
│ ❱ 116 │ │ self.path, model_paths = self.create_contexts(path_context) │
│ 117 │ │ for model, path in model_paths.items(): │
│ 118 │ │ │ self.set_model_attribute(model=model, attribute="path", val=path) │
│ 119 │
│ │
│ /fs/cml-projects/teamgahsp/anaconda/envs/test_env/lib/python3.9/site-packages/autogluon/timeseri │
│ es/trainer/abstract_trainer.py:126 in create_contexts │
│ │
│ 123 │ │ # TODO: of full paths │
│ 124 │ │ model_paths = self.get_models_attribute_dict(attribute="path") │
│ 125 │ │ for model, prev_path in model_paths.items(): │
│ ❱ 126 │ │ │ model_local_path = prev_path.split(self.path, 1)[1] │
│ 127 │ │ │ new_path = path + model_local_path │
│ 128 │ │ │ model_paths[model] = new_path │
│ 129 │
╰────────────────────────────────────────────────
IndexError: list index out of range
```
**Expected behavior**
I should be seeing a completed prediction, especially after [claims that this was fixed in v0.8](https://github.com/autogluon/autogluon/releases/tag/v0.8.0).
**To Reproduce**
Colab is [linked](https://drive.google.com/file/d/1axZgry_uWAcDU1Zmz4r2TjSeY63O1C8a/view?usp=sharing). The data source can be found [here](https://docs.google.com/spreadsheets/d/1VJl7-yHNhByBCt1-ZA7o_bpOC7WzqqJx/edit?usp=sharing&ouid=111611764028797479108&rtpof=true&sd=true).
**Screenshots / Logs**
Traceback provided above.
**Installed Versions**
AutoGluon v0.8.2, Python v3.9
```python
INSTALLED VERSIONS
------------------
date : 2024-04-10
time : 01:12:05.345491
python : 3.9.16.final.0
OS : Linux
OS-release : 4.18.0-513.18.1.el8_9.x86_64
Version : #1 SMP Thu Feb 1 03:51:05 EST 2024
machine : x86_64
processor : x86_64
num_cores : 16
cpu_ram_mb : 128253
cuda version : 12.550.54.14
num_gpus : 1
gpu_ram_mb : [16101]
avail_disk_size_mb : 19590
accelerate : 0.16.0
autogluon : 0.8.2
autogluon.common : 0.8.2
autogluon.core : 0.8.2
autogluon.features : 0.8.2
autogluon.multimodal : 0.8.2
autogluon.tabular : 0.8.2
autogluon.timeseries : 0.8.2
boto3 : 1.26.128
catboost : 1.1.1
defusedxml : 0.7.1
evaluate : 0.2.2
fastai : 2.7.11
gluonts : 0.13.7
hyperopt : None
imodels : None
jinja2 : 3.1.2
joblib : 1.2.0
jsonschema : 4.17.3
lightgbm : 3.3.5
matplotlib : 3.7.1
mlforecast : 0.7.3
networkx : 3.2.1
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.23.5
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.7
pandas : 1.5.3
Pillow : 9.4.0
psutil : 5.9.5
pydantic : 1.10.7
PyMuPDF : None
pytesseract : 0.3.10
pytorch-lightning : 1.9.4
pytorch-metric-learning: 1.7.3
ray : 2.2.0
requests : 2.29.0
scikit-image : 0.19.3
scikit-learn : 1.2.2
scikit-learn-intelex : None
scipy : 1.10.1
seqeval : 1.2.2
setuptools : 66.0.0
skl2onnx : None
statsforecast : 1.4.0
statsmodels : 0.13.5
tabpfn : None
tensorboard : 2.11.2
text-unidecode : 1.3
timm : 0.9.10
torch : 1.12.1
torchmetrics : 0.11.0
torchvision : 0.13.0a0+8069656
tqdm : 4.65.0
transformers : 4.26.1
ujson : 5.7.0
vowpalwabbit : None
xgboost : 1.7.4
```
| closed | 2024-04-10T05:12:50Z | 2024-04-19T16:57:51Z | https://github.com/autogluon/autogluon/issues/4073 | [
"bug: unconfirmed",
"Needs Triage"
] | Ai-Ya-Ya | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,040 | I used the same parameters and the same safetensors checkpoint, but my result is much worse than WebUI's. Please give me some advice! [Bug]: | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I used the same parameters and the same safetensors checkpoint, but my result (from my code) is much worse than WebUI's. Please give me some advice! Is the way I load the safetensors file wrong? Help! (All the safetensors/pt files are downloaded locally on my laptop.) Here is the code:
```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPTextModel,CLIPModel,CLIPProcessor,CLIPTokenizer
from safetensors.torch import load_file # 用于加载 .safetensors 文件
import os
pipe = StableDiffusionPipeline.from_single_file("./AI-ModelScope/anyloraCheckpoint_bakedvaeFp16NOT.safetensors",local_files_only=True,use_safetensors=True,load_safety_checker=False)
pipe = pipe.to("cuda")
lora_path = "./Pokemon_LoRA/pokemon_v3_offset.safetensors"
lora_w = 1.0
pipe._lora_scale = lora_w
state_dict, network_alphas = pipe.lora_state_dict(
lora_path
)
for key in network_alphas:
    network_alphas[key] = network_alphas[key] * lora_w
#network_alpha = network_alpha * lora_w
pipe.load_lora_into_unet(
state_dict = state_dict
, network_alphas = network_alphas
, unet = pipe.unet
)
pipe.load_lora_into_text_encoder(
state_dict = state_dict
, network_alphas = network_alphas
, text_encoder = pipe.text_encoder
)
pipe.load_textual_inversion("./AI-ModelScope/By bad artist -neg.pt")
# 设置随机种子
seed = int(3187489596)
generator = torch.Generator("cuda").manual_seed(seed)
# 生成图像
poke_prompt="sugimori ken (style), ghost and ground pokemon (creature), full body, gengar, marowak, solo, grin, half-closed eye, happy, highres, no humans, other focus, pokemon, purple eyes, simple background, smile, solo, standing, teeth, uneven eyes, white background , ((masterpiece))"
tokenizer = CLIPTokenizer.from_pretrained("./AI-ModelScope/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("./AI-ModelScope/clip-vit-large-patch14")
pipe.text_encoder = text_encoder.to('cuda')
pipe.tokenizer = tokenizer
image = pipe(
prompt = poke_prompt,
negative_prompt="(painting by bad-artist-anime:0.9), (painting by bad-artist:0.9), watermark, text, error, blurry, jpeg artifacts, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, artist name, (worst quality, low quality:1.4), bad anatomy",
guidance_scale=9,
num_inference_steps=200,
generator=generator,
sampler="dpm++_sde_karras",
clip_skip=2,
).images[0]
# 保存生成的图像
output_path = f"./out.png"
print(os.path.abspath("./out.png"))
image.save(output_path)
```
### Steps to reproduce the problem
None
### What should have happened?
None
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
None
### Console logs
```Shell
None
```
### Additional information
_No response_ | closed | 2024-06-18T06:19:34Z | 2024-06-19T19:49:57Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16040 | [] | OMTHSJUHW | 1 |
tox-dev/tox | automation | 2,686 | Tox 4 fails to parse a `tox.ini` with a testenv named `flake8` and a config section named `flake8` | ## Issue
Tox runs the wrong install and test commands when an equally named (?) section is available in `tox.ini`.
Python linter `flake8` supports configuration by adding a `[flake8]` section to `tox.ini`. We have a `[testenv:flake8]` in tox.ini that runs the flake8 test within tox. Under tox3, this setup worked fine for us, but as of tox 4, the flake8 test fails: dependencies are not installed and the wrong test command is executed.
A minimal failing `tox.ini`:
```ini
[tox]
envlist =
python3.10, flake8
[flake8]
extend-ignore = E501,W503
[testenv]
basepython = python3.10
deps =
pytest
commands = pytest
[testenv:flake8]
deps =
flake8
commands = flake8 -v hello.py
```
When running `tox -e flake8`, tox installs `pytest` as deps, and runs `pytest` as test command. I can resolve this in several ways:
- moving the `[flake8]` section in tox.ini towards the end of the config file (below the `[testenv:flake8]` entry).
- moving the flake8 config to a wholly separate file (e.g., `.flake8`)
I have no idea whether configuring `flake8` within `tox.ini` was ever a supported idea from the side of tox. I'd prefer a separate config file like `.flake8` (or `pyproject.toml` / `setup.cfg` as being generic by definition), but I ran into this on an inherited setup.
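For completeness, the separate-file variant is simply the same section moved out of `tox.ini` into its own `.flake8` file:
```ini
[flake8]
extend-ignore = E501,W503
```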
## Environment
Provide at least:
- OS: ubuntu 20.04, Python 3.10
- `pip list` of the host Python where `tox` is installed:
```console
Package Version
------------- -------
cachetools 5.2.0
chardet 5.1.0
colorama 0.4.6
distlib 0.3.6
filelock 3.8.2
packaging 22.0
pip 22.3.1
platformdirs 2.6.0
pluggy 1.0.0
pyproject_api 1.2.1
setuptools 65.6.3
tomli 2.0.1
tox 4.0.8
virtualenv 20.17.1
wheel 0.38.4
```
## Output of running tox
Provide the output of `tox -rvv`:
```console
$ tox -rvv
python3.10: 145 W remove tox env folder /home/tom/code/tox-test/.tox/python3.10 [tox/tox_env/api.py:302]
python3.10: 184 I find interpreter for spec PythonSpec(major=3, minor=10) [virtualenv/discovery/builtin.py:56]
python3.10: 184 D discover exe for PythonInfo(spec=CPython3.10.8.final.0-64, exe=/home/tom/code/wagtail-helpdesk/env/bin/python3.10, platform=linux, version='3.10.8 (main, Oct 12 2022, 19:14:26) [GCC 9.4.0]', encoding_fs_io=utf-8-utf-8) in /usr [virtualenv/discovery/py_info.py:437]
python3.10: 185 D filesystem is case-sensitive [virtualenv/info.py:24]
python3.10: 185 D got python info of /usr/bin/python3.10 from /home/tom/.local/share/virtualenv/py_info/1/8a94588eda9d64d9e9a351ab8144e55b1fabf5113b54e67dd26a8c27df0381b3.json [virtualenv/app_data/via_disk_folder.py:129]
python3.10: 186 I proposed PythonInfo(spec=CPython3.10.8.final.0-64, system=/usr/bin/python3.10, exe=/home/tom/code/wagtail-helpdesk/env/bin/python3.10, platform=linux, version='3.10.8 (main, Oct 12 2022, 19:14:26) [GCC 9.4.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
python3.10: 186 D accepted PythonInfo(spec=CPython3.10.8.final.0-64, system=/usr/bin/python3.10, exe=/home/tom/code/wagtail-helpdesk/env/bin/python3.10, platform=linux, version='3.10.8 (main, Oct 12 2022, 19:14:26) [GCC 9.4.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
python3.10: 212 I create virtual environment via CPython3Posix(dest=/home/tom/code/tox-test/.tox/python3.10, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:48]
python3.10: 212 D create folder /home/tom/code/tox-test/.tox/python3.10/bin [virtualenv/util/path/_sync.py:9]
python3.10: 213 D create folder /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages [virtualenv/util/path/_sync.py:9]
python3.10: 213 D write /home/tom/code/tox-test/.tox/python3.10/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
python3.10: 213 D home = /usr/bin [virtualenv/create/pyenv_cfg.py:34]
python3.10: 213 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
python3.10: 213 D version_info = 3.10.8.final.0 [virtualenv/create/pyenv_cfg.py:34]
python3.10: 213 D virtualenv = 20.17.1 [virtualenv/create/pyenv_cfg.py:34]
python3.10: 213 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
python3.10: 213 D base-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
python3.10: 213 D base-exec-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
python3.10: 213 D base-executable = /usr/bin/python3.10 [virtualenv/create/pyenv_cfg.py:34]
python3.10: 213 D symlink /usr/bin/python3.10 to /home/tom/code/tox-test/.tox/python3.10/bin/python [virtualenv/util/path/_sync.py:28]
python3.10: 214 D create virtualenv import hook file /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:89]
python3.10: 214 D create /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:92]
python3.10: 214 D ============================== target debug ============================== [virtualenv/run/session.py:50]
python3.10: 214 D debug via /home/tom/code/tox-test/.tox/python3.10/bin/python /home/tom/code/wagtail-helpdesk/env/lib/python3.10/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:197]
python3.10: 214 D {
"sys": {
"executable": "/home/tom/code/tox-test/.tox/python3.10/bin/python",
"_base_executable": "/home/tom/code/tox-test/.tox/python3.10/bin/python",
"prefix": "/home/tom/code/tox-test/.tox/python3.10",
"base_prefix": "/usr",
"real_prefix": null,
"exec_prefix": "/home/tom/code/tox-test/.tox/python3.10",
"base_exec_prefix": "/usr",
"path": [
"/usr/lib/python310.zip",
"/usr/lib/python3.10",
"/usr/lib/python3.10/lib-dynload",
"/home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages"
],
"meta_path": [
"<class '_virtualenv._Finder'>",
"<class '_frozen_importlib.BuiltinImporter'>",
"<class '_frozen_importlib.FrozenImporter'>",
"<class '_frozen_importlib_external.PathFinder'>"
],
"fs_encoding": "utf-8",
"io_encoding": "utf-8"
},
"version": "3.10.8 (main, Oct 12 2022, 19:14:26) [GCC 9.4.0]",
"makefile_filename": "/usr/lib/python3.10/config-3.10-x86_64-linux-gnu/Makefile",
"os": "<module 'os' from '/usr/lib/python3.10/os.py'>",
"site": "<module 'site' from '/usr/lib/python3.10/site.py'>",
"datetime": "<module 'datetime' from '/usr/lib/python3.10/datetime.py'>",
"math": "<module 'math' (built-in)>",
"json": "<module 'json' from '/usr/lib/python3.10/json/__init__.py'>"
} [virtualenv/run/session.py:51]
python3.10: 248 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/tom/.local/share/virtualenv) [virtualenv/run/session.py:55]
python3.10: 251 D got embed update of distribution setuptools from /home/tom/.local/share/virtualenv/wheel/3.10/embed/3/setuptools.json [virtualenv/app_data/via_disk_folder.py:129]
python3.10: 252 D got embed update of distribution pip from /home/tom/.local/share/virtualenv/wheel/3.10/embed/3/pip.json [virtualenv/app_data/via_disk_folder.py:129]
python3.10: 252 D got embed update of distribution wheel from /home/tom/.local/share/virtualenv/wheel/3.10/embed/3/wheel.json [virtualenv/app_data/via_disk_folder.py:129]
python3.10: 256 D install setuptools from wheel /home/tom/code/wagtail-helpdesk/env/lib/python3.10/site-packages/virtualenv/seed/wheels/embed/setuptools-65.6.3-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
python3.10: 256 D install pip from wheel /home/tom/code/wagtail-helpdesk/env/lib/python3.10/site-packages/virtualenv/seed/wheels/embed/pip-22.3.1-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
python3.10: 256 D install wheel from wheel /home/tom/code/wagtail-helpdesk/env/lib/python3.10/site-packages/virtualenv/seed/wheels/embed/wheel-0.38.4-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
python3.10: 257 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/pip [virtualenv/util/path/_sync.py:36]
python3.10: 257 D copy /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/distutils-precedence.pth to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:36]
python3.10: 257 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/wheel [virtualenv/util/path/_sync.py:36]
python3.10: 258 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/setuptools [virtualenv/util/path/_sync.py:36]
python3.10: 262 D copy /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.virtualenv to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/wheel-0.38.4.virtualenv [virtualenv/util/path/_sync.py:36]
python3.10: 262 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.dist-info to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/wheel-0.38.4.dist-info [virtualenv/util/path/_sync.py:36]
python3.10: 265 D generated console scripts wheel3 wheel wheel3.10 wheel-3.10 [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
python3.10: 288 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/_distutils_hack to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:36]
python3.10: 288 D copy /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools-65.6.3.virtualenv to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/setuptools-65.6.3.virtualenv [virtualenv/util/path/_sync.py:36]
python3.10: 289 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/pkg_resources to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/pkg_resources [virtualenv/util/path/_sync.py:36]
python3.10: 296 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools-65.6.3.dist-info to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/setuptools-65.6.3.dist-info [virtualenv/util/path/_sync.py:36]
python3.10: 297 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
python3.10: 313 D copy /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip-22.3.1.virtualenv to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/pip-22.3.1.virtualenv [virtualenv/util/path/_sync.py:36]
python3.10: 314 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip-22.3.1.dist-info to /home/tom/code/tox-test/.tox/python3.10/lib/python3.10/site-packages/pip-22.3.1.dist-info [virtualenv/util/path/_sync.py:36]
python3.10: 315 D generated console scripts pip3.10 pip-3.10 pip pip3 [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
python3.10: 315 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:61]
python3.10: 317 D write /home/tom/code/tox-test/.tox/python3.10/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
python3.10: 317 D home = /usr/bin [virtualenv/create/pyenv_cfg.py:34]
python3.10: 317 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
python3.10: 317 D version_info = 3.10.8.final.0 [virtualenv/create/pyenv_cfg.py:34]
python3.10: 317 D virtualenv = 20.17.1 [virtualenv/create/pyenv_cfg.py:34]
python3.10: 317 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
python3.10: 317 D base-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
python3.10: 317 D base-exec-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
python3.10: 317 D base-executable = /usr/bin/python3.10 [virtualenv/create/pyenv_cfg.py:34]
python3.10: 319 W install_deps> python -I -m pip install pytest [tox/tox_env/api.py:408]
Collecting pytest
Using cached pytest-7.2.0-py3-none-any.whl (316 kB)
Collecting packaging
Using cached packaging-22.0-py3-none-any.whl (42 kB)
Collecting pluggy<2.0,>=0.12
Using cached pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
Collecting tomli>=1.0.0
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting attrs>=19.2.0
Using cached attrs-22.1.0-py2.py3-none-any.whl (58 kB)
Collecting iniconfig
Using cached iniconfig-1.1.1-py2.py3-none-any.whl (5.0 kB)
Collecting exceptiongroup>=1.0.0rc8
Using cached exceptiongroup-1.0.4-py3-none-any.whl (14 kB)
Installing collected packages: iniconfig, tomli, pluggy, packaging, exceptiongroup, attrs, pytest
Successfully installed attrs-22.1.0 exceptiongroup-1.0.4 iniconfig-1.1.1 packaging-22.0 pluggy-1.0.0 pytest-7.2.0 tomli-2.0.1
python3.10: 2042 I exit 0 (1.72 seconds) /home/tom/code/tox-test> python -I -m pip install pytest pid=660839 [tox/execute/api.py:275]
python3.10: 2043 W commands[0]> pytest [tox/tox_env/api.py:408]
========================================================== test session starts ==========================================================
platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0
cachedir: .tox/python3.10/.pytest_cache
rootdir: /home/tom/code/tox-test
collected 1 item
test_hello.py . [100%]
=========================================================== 1 passed in 0.00s ===========================================================
python3.10: 2240 I exit 0 (0.20 seconds) /home/tom/code/tox-test> pytest pid=660879 [tox/execute/api.py:275]
python3.10: OK ✔ in 2.1 seconds
flake8: 2240 W remove tox env folder /home/tom/code/tox-test/.tox/flake8 [tox/tox_env/api.py:302]
flake8: 2272 I find interpreter for spec PythonSpec(major=3, minor=10) [virtualenv/discovery/builtin.py:56]
flake8: 2272 I proposed PythonInfo(spec=CPython3.10.8.final.0-64, system=/usr/bin/python3.10, exe=/home/tom/code/wagtail-helpdesk/env/bin/python3.10, platform=linux, version='3.10.8 (main, Oct 12 2022, 19:14:26) [GCC 9.4.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
flake8: 2272 D accepted PythonInfo(spec=CPython3.10.8.final.0-64, system=/usr/bin/python3.10, exe=/home/tom/code/wagtail-helpdesk/env/bin/python3.10, platform=linux, version='3.10.8 (main, Oct 12 2022, 19:14:26) [GCC 9.4.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
flake8: 2273 I create virtual environment via CPython3Posix(dest=/home/tom/code/tox-test/.tox/flake8, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:48]
flake8: 2273 D create folder /home/tom/code/tox-test/.tox/flake8/bin [virtualenv/util/path/_sync.py:9]
flake8: 2273 D create folder /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages [virtualenv/util/path/_sync.py:9]
flake8: 2273 D write /home/tom/code/tox-test/.tox/flake8/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
flake8: 2274 D home = /usr/bin [virtualenv/create/pyenv_cfg.py:34]
flake8: 2274 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
flake8: 2274 D version_info = 3.10.8.final.0 [virtualenv/create/pyenv_cfg.py:34]
flake8: 2274 D virtualenv = 20.17.1 [virtualenv/create/pyenv_cfg.py:34]
flake8: 2274 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
flake8: 2274 D base-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
flake8: 2274 D base-exec-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
flake8: 2274 D base-executable = /usr/bin/python3.10 [virtualenv/create/pyenv_cfg.py:34]
flake8: 2274 D symlink /usr/bin/python3.10 to /home/tom/code/tox-test/.tox/flake8/bin/python [virtualenv/util/path/_sync.py:28]
flake8: 2274 D create virtualenv import hook file /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:89]
flake8: 2275 D create /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:92]
flake8: 2275 D ============================== target debug ============================== [virtualenv/run/session.py:50]
flake8: 2275 D debug via /home/tom/code/tox-test/.tox/flake8/bin/python /home/tom/code/wagtail-helpdesk/env/lib/python3.10/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:197]
flake8: 2275 D {
"sys": {
"executable": "/home/tom/code/tox-test/.tox/flake8/bin/python",
"_base_executable": "/home/tom/code/tox-test/.tox/flake8/bin/python",
"prefix": "/home/tom/code/tox-test/.tox/flake8",
"base_prefix": "/usr",
"real_prefix": null,
"exec_prefix": "/home/tom/code/tox-test/.tox/flake8",
"base_exec_prefix": "/usr",
"path": [
"/usr/lib/python310.zip",
"/usr/lib/python3.10",
"/usr/lib/python3.10/lib-dynload",
"/home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages"
],
"meta_path": [
"<class '_virtualenv._Finder'>",
"<class '_frozen_importlib.BuiltinImporter'>",
"<class '_frozen_importlib.FrozenImporter'>",
"<class '_frozen_importlib_external.PathFinder'>"
],
"fs_encoding": "utf-8",
"io_encoding": "utf-8"
},
"version": "3.10.8 (main, Oct 12 2022, 19:14:26) [GCC 9.4.0]",
"makefile_filename": "/usr/lib/python3.10/config-3.10-x86_64-linux-gnu/Makefile",
"os": "<module 'os' from '/usr/lib/python3.10/os.py'>",
"site": "<module 'site' from '/usr/lib/python3.10/site.py'>",
"datetime": "<module 'datetime' from '/usr/lib/python3.10/datetime.py'>",
"math": "<module 'math' (built-in)>",
"json": "<module 'json' from '/usr/lib/python3.10/json/__init__.py'>"
} [virtualenv/run/session.py:51]
flake8: 2301 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/tom/.local/share/virtualenv) [virtualenv/run/session.py:55]
flake8: 2303 D got embed update of distribution pip from /home/tom/.local/share/virtualenv/wheel/3.10/embed/3/pip.json [virtualenv/app_data/via_disk_folder.py:129]
flake8: 2303 D got embed update of distribution setuptools from /home/tom/.local/share/virtualenv/wheel/3.10/embed/3/setuptools.json [virtualenv/app_data/via_disk_folder.py:129]
flake8: 2304 D got embed update of distribution wheel from /home/tom/.local/share/virtualenv/wheel/3.10/embed/3/wheel.json [virtualenv/app_data/via_disk_folder.py:129]
flake8: 2304 D install pip from wheel /home/tom/code/wagtail-helpdesk/env/lib/python3.10/site-packages/virtualenv/seed/wheels/embed/pip-22.3.1-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
flake8: 2304 D install setuptools from wheel /home/tom/code/wagtail-helpdesk/env/lib/python3.10/site-packages/virtualenv/seed/wheels/embed/setuptools-65.6.3-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
flake8: 2304 D install wheel from wheel /home/tom/code/wagtail-helpdesk/env/lib/python3.10/site-packages/virtualenv/seed/wheels/embed/wheel-0.38.4-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
flake8: 2305 D copy /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/distutils-precedence.pth to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:36]
flake8: 2306 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/wheel [virtualenv/util/path/_sync.py:36]
flake8: 2306 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/setuptools [virtualenv/util/path/_sync.py:36]
flake8: 2306 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/pip [virtualenv/util/path/_sync.py:36]
flake8: 2312 D copy /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.virtualenv to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/wheel-0.38.4.virtualenv [virtualenv/util/path/_sync.py:36]
flake8: 2312 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.dist-info to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/wheel-0.38.4.dist-info [virtualenv/util/path/_sync.py:36]
flake8: 2316 D generated console scripts wheel3.10 wheel-3.10 wheel3 wheel [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
flake8: 2340 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/_distutils_hack to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:36]
flake8: 2340 D copy /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools-65.6.3.virtualenv to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/setuptools-65.6.3.virtualenv [virtualenv/util/path/_sync.py:36]
flake8: 2341 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/pkg_resources to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/pkg_resources [virtualenv/util/path/_sync.py:36]
flake8: 2349 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools-65.6.3.dist-info to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/setuptools-65.6.3.dist-info [virtualenv/util/path/_sync.py:36]
flake8: 2350 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
flake8: 2375 D copy /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip-22.3.1.virtualenv to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/pip-22.3.1.virtualenv [virtualenv/util/path/_sync.py:36]
flake8: 2375 D copy directory /home/tom/.local/share/virtualenv/wheel/3.10/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip-22.3.1.dist-info to /home/tom/code/tox-test/.tox/flake8/lib/python3.10/site-packages/pip-22.3.1.dist-info [virtualenv/util/path/_sync.py:36]
flake8: 2376 D generated console scripts pip-3.10 pip pip3 pip3.10 [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
flake8: 2376 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:61]
flake8: 2378 D write /home/tom/code/tox-test/.tox/flake8/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
flake8: 2378 D home = /usr/bin [virtualenv/create/pyenv_cfg.py:34]
flake8: 2378 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
flake8: 2378 D version_info = 3.10.8.final.0 [virtualenv/create/pyenv_cfg.py:34]
flake8: 2378 D virtualenv = 20.17.1 [virtualenv/create/pyenv_cfg.py:34]
flake8: 2378 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
flake8: 2378 D base-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
flake8: 2378 D base-exec-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
flake8: 2378 D base-executable = /usr/bin/python3.10 [virtualenv/create/pyenv_cfg.py:34]
flake8: 2380 W install_deps> python -I -m pip install pytest [tox/tox_env/api.py:408]
Collecting pytest
Using cached pytest-7.2.0-py3-none-any.whl (316 kB)
Collecting packaging
Using cached packaging-22.0-py3-none-any.whl (42 kB)
Collecting pluggy<2.0,>=0.12
Using cached pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
Collecting attrs>=19.2.0
Using cached attrs-22.1.0-py2.py3-none-any.whl (58 kB)
Collecting exceptiongroup>=1.0.0rc8
Using cached exceptiongroup-1.0.4-py3-none-any.whl (14 kB)
Collecting tomli>=1.0.0
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting iniconfig
Using cached iniconfig-1.1.1-py2.py3-none-any.whl (5.0 kB)
Installing collected packages: iniconfig, tomli, pluggy, packaging, exceptiongroup, attrs, pytest
Successfully installed attrs-22.1.0 exceptiongroup-1.0.4 iniconfig-1.1.1 packaging-22.0 pluggy-1.0.0 pytest-7.2.0 tomli-2.0.1
flake8: 4208 I exit 0 (1.83 seconds) /home/tom/code/tox-test> python -I -m pip install pytest pid=660893 [tox/execute/api.py:275]
flake8: 4208 W commands[0]> pytest [tox/tox_env/api.py:408]
========================================================== test session starts ==========================================================
platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0
cachedir: .tox/flake8/.pytest_cache
rootdir: /home/tom/code/tox-test
collected 1 item
test_hello.py . [100%]
=========================================================== 1 passed in 0.00s ===========================================================
flake8: 4403 I exit 0 (0.19 seconds) /home/tom/code/tox-test> pytest pid=660921 [tox/execute/api.py:275]
python3.10: OK (2.10=setup[1.90]+cmd[0.20] seconds)
flake8: OK (2.16=setup[1.97]+cmd[0.19] seconds)
congratulations :) (4.30 seconds)
```
## Minimal example
cat `hello.py`:
```python
print("hello world")
```
cat `test_hello.py`:
```python
def test_bogus():
assert True
```
cat `tox.ini`:
```ini
[tox]
envlist =
python3.10, flake8
[flake8]
extend-ignore = E501,W503
[testenv]
basepython = python3.10
deps =
pytest
commands = pytest
[testenv:flake8]
deps =
flake8
commands = flake8 -v hello.py
```
Run the test:
```console
$ tox -r
python3.10: remove tox env folder /home/tom/code/tox-test/.tox/python3.10
python3.10: install_deps> python -I -m pip install pytest
python3.10: commands[0]> pytest
========================================================== test session starts ==========================================================
platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0
cachedir: .tox/python3.10/.pytest_cache
rootdir: /home/tom/code/tox-test
collected 1 item
test_hello.py . [100%]
=========================================================== 1 passed in 0.00s ===========================================================
python3.10: OK ✔ in 2.11 seconds
flake8: remove tox env folder /home/tom/code/tox-test/.tox/flake8
flake8: install_deps> python -I -m pip install pytest
flake8: commands[0]> pytest
========================================================== test session starts ==========================================================
platform linux -- Python 3.10.8, pytest-7.2.0, pluggy-1.0.0
cachedir: .tox/flake8/.pytest_cache
rootdir: /home/tom/code/tox-test
collected 1 item
test_hello.py . [100%]
=========================================================== 1 passed in 0.00s ===========================================================
python3.10: OK (2.10=setup[1.88]+cmd[0.22] seconds)
flake8: OK (2.01=setup[1.81]+cmd[0.20] seconds)
congratulations :) (4.15 seconds)
```
As you can see, the flake8 test is incorrect (and passes for the wrong reasons). Fortunately, in the actual, non-simplified setup, the `flake8` test failed miserably. | closed | 2022-12-12T14:44:16Z | 2022-12-12T14:48:01Z | https://github.com/tox-dev/tox/issues/2686 | [] | whyscream | 1 |
clovaai/donut | nlp | 24 | Finetuning Document classification | Hi there,
thank you for publishing this model!
If I want to classify documents into a different number of classes, not 16 as in [RVL-CDIP](https://www.cs.cmu.edu/~aharley/rvl-cdip), but let's say only 8, do I have to train from [donut-base](https://huggingface.co/naver-clova-ix/donut-base/tree/official), or can I further fine-tune [donut-base-finetuned-rvlcdip](https://huggingface.co/naver-clova-ix/donut-base-finetuned-rvlcdip/tree/official)?
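For context, this is roughly how either checkpoint can be loaded as a starting point through 🤗 Transformers, which hosts both models (just a sketch; the fine-tuning loop itself is omitted):
```python
# Sketch only: loading either checkpoint before further fine-tuning.
# The repo ids come from the links above; everything else is illustrative.
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Option A: start from the raw base model
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")

# Option B: continue from the RVL-CDIP fine-tuned checkpoint
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
```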
best regards
| closed | 2022-08-15T18:50:35Z | 2022-09-11T21:26:55Z | https://github.com/clovaai/donut/issues/24 | [] | sandorkonya | 6 |
HumanSignal/labelImg | deep-learning | 297 | (Ubuntu)QXcbConnection: Could not connect to display. Aborted (core dumped) | <!--
Please provide as much as detail and example as you can.
You can add screenshots if appropriate.
-->
- **OS:**
- **PyQt version:**
Hello, I followed the README and at the end I ran this line: `python3 labelImg.py` on Ubuntu 16.04 + Python3 + Qt5.
But I got this error:
QXcbConnection: Could not connect to display
Aborted (core dumped)
thx for help!
| closed | 2018-05-14T08:32:42Z | 2018-06-08T00:36:38Z | https://github.com/HumanSignal/labelImg/issues/297 | [] | ineslyl | 6 |
HumanSignal/labelImg | deep-learning | 340 | Add box dimensions feature | Very useful app!! It would be nice if you could add a box dimensions feature where you can see the number of pixels on each axis (the one that is saved in the xml file). It would also be very nice if the user can modify the box according to these dimensions. A dimension checker would also be very useful (e.g. if one axis has fewer than N pixels, a notification pops up).
Machine learning with TensorFlow requires both x and y to be more than 32-33 px, so this tool would save a lot of time otherwise spent training on bad data!
| open | 2018-08-04T17:42:09Z | 2018-12-12T15:32:25Z | https://github.com/HumanSignal/labelImg/issues/340 | [] | christoschatzakis | 1 |
lepture/authlib | django | 599 | Next tag / release | The last release or tag is from 25.06.2023.
Since then, various PRs and bugfixes have been merged.
Are there any plans for the next tag / release? | closed | 2023-11-28T13:20:00Z | 2023-12-18T11:14:03Z | https://github.com/lepture/authlib/issues/599 | [] | dklimpel | 1 |
dunossauro/fastapi-do-zero | sqlalchemy | 135 | Final projects (TCC) | This issue was created so you can share your final project with everyone who is taking the course and learn a bit from others who are also doing it.
Share your repository in the following format:
| Project link | Your git @ | Comment (optional) |
| ---------------------- |--------------------| -------------------------------- |
| [fast_zero](https://github.com/dunossauro/fast_zero) | [@dunossauro](https://github.com/dunossauro)| project using podman, poe the poet and dynaconf |
I will put together a table gathering all the final projects on the site, so we can build a knowledge network across different tools!
I hope you all have fun with the project :heart:
Example of how to build the table
```txt
| Project link | Your git @ | Comment (optional) |
|-------------|:-------------:|:-------------:|
|[repository name](repository link) | [your @](account link) | Comment |
``` | open | 2024-05-03T04:13:06Z | 2025-02-18T22:40:05Z | https://github.com/dunossauro/fastapi-do-zero/issues/135 | [
"agrupador"
] | dunossauro | 26 |
chezou/tabula-py | pandas | 282 | Does not correctly output Mandarin | The Mandarin text displays in the terminal once extracted from the PDF, but when it is written to an output file (.csv or .json) each character comes out as "?".
Command used to view the Mandarin text in the terminal:
`df = tabula.read_pdf("path/to/file.pdf", pages=8)`
`df`
Command used to write out:
`df = tabula.convert_into("path/to/file.pdf", "pg8out.csv", output_format="csv", pages=8)`
I need to extract tabular Mandarin data and write it out to a CSV. When I try this, the Mandarin characters are replaced with question marks.
- [ y ] Did you read [FAQ](https://tabula-py.readthedocs.io/en/latest/faq.html)?
- [ n ] (Optional, but really helpful) Your PDF URL: ?
- [ y ] Paste the output of `import tabula; tabula.environment_info()` on Python REPL: ?
>>> tabula.environment_info()
Python version:
3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:37:02) [MSC v.1924 64 bit (AMD64)]
Java version:
java version "16.0.1" 2021-04-20
Java(TM) SE Runtime Environment (build 16.0.1+9-24)
Java HotSpot(TM) 64-Bit Server VM (build 16.0.1+9-24, mixed mode, sharing)
tabula-py version: 2.2.0
platform: Windows-10-10.0.19041-SP0
uname:
uname_result(system='Windows', node="", release='10', version='10.0.19041', machine='AMD64', processor='Intel64 Family 6 Model 158 Stepping 13, GenuineIntel')
linux_distribution: ('', '', '')
mac_ver: ('', ('', '', ''), '')
If not possible to execute `tabula.environment_info()`, please answer following questions manually.
- [x] Paste the output of `python --version` command on your terminal: ?
C:\Users\tmgca>python --version
Python 3.8.3
- [x] Paste the output of `java -version` command on your terminal: ?
C:\Users\tmgca>java -version
java version "16.0.1" 2021-04-20
Java(TM) SE Runtime Environment (build 16.0.1+9-24)
Java HotSpot(TM) 64-Bit Server VM (build 16.0.1+9-24, mixed mode, sharing)
- [x] Does `java -h` command work well?; Ensure your java command is included in `PATH`
Works!
- [x] Write your OS and it's version: ?
uname:
uname_result(system='Windows', node="", release='10', version='10.0.19041', machine='AMD64', processor='Intel64 Family 6 Model 158 Stepping 13, GenuineIntel')
linux_distribution: ('', '', '')
mac_ver: ('', ('', '', ''), '')
# What did you do when you faced the problem?
Attempted to write out to different file formats.
## Code:
```python
df = tabula.convert_into("path/to/file.pdf", "pg8out.csv", output_format="csv", pages=8)
```
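One thing I plan to try (a guess, not a confirmed fix) is forcing UTF-8 on the Java process that tabula-py launches, since the functions accept `java_options`:
```python
# Speculative workaround, not verified: force UTF-8 in the spawned JVM.
import tabula

tabula.convert_into(
    "path/to/file.pdf",
    "pg8out.csv",
    output_format="csv",
    pages=8,
    java_options=["-Dfile.encoding=UTF8"],  # forwarded to the java command line
)
```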
## Expected behavior:
This is what displays in the terminal, and I would therefore expect it to output:
```
[ 非经常性损益项目 金额 附注(如适用)
0 非流动资产处置损益 -5,001,638.74 NaN
1 越权审批,或无正式批准文 NaN NaN
2 件,或偶发性的税收返还、 NaN NaN
3 减免 NaN NaN
4 计入当期损益的政府补助, NaN NaN
5 但与公司正常经营业务密 NaN NaN
6 切相关,符合国家政策规 NaN NaN
7 NaN 26,533,566.50 NaN
8 定、按照一定标准定额或定 NaN NaN
9 量持续享受的政府补助除 NaN NaN
10 外 NaN NaN
11 计入当期损益的对非金融 NaN NaN
12 NaN NaN NaN
13 企业收取的资金占用费 NaN NaN
14 企业取得子公司、联营企业 NaN NaN
15 及合营企业的投资成本小 NaN NaN
16 于取得投资时应享有被投 NaN NaN
17 资单位可辨认净资产公允 NaN NaN
18 价值产生的收益 NaN NaN
19 非货币性资产交换损益 NaN NaN
20 委托他人投资或管理资产 NaN NaN
21 NaN 299,861.44 NaN
22 的损益 NaN NaN
23 因不可抗力因素,如遭受自 NaN NaN
24 然灾害而计提的各项资产 NaN NaN
25 减值准备 NaN NaN
26 债务重组损益 NaN NaN
27 企业重组费用,如安置职工 NaN NaN
28 NaN NaN NaN
29 的支出、整合费用等 NaN NaN
30 交易价格显失公允的交易 NaN NaN
31 产生的超过公允价值部分 NaN NaN
32 的损益 NaN NaN
33 同一控制下企业合并产生 NaN NaN
34 NaN -318,895.62 NaN
35 的子公司期初至合并日的 NaN NaN]
```
## Actual behavior:
from pg8out.json:
```
[{"extraction_method":"stream","top":260.0,"left":83.0,"width":453.0,"height":506.0,"right":536.0,"bottom":766.0,"data":[[{"top":267.89,"left":111.86,"width":99.02999877929688,"height":6.0,"text":"????????"},{"top":267.89,"left":298.61,"width":27.0,"height":6.0,"text":"??"},{"top":267.89,"left":419.47,"width":87.02999877929688,"height":6.0,"text":"??(???)"}],[{"top":283.97,"left":89.9,"width":111.02999114990234,"height":6.0,"text":"?????????"},{"top":284.28,"left":313.73,"width":69.97998046875,"height":5.449999809265137,"text":"-5,001,638.74"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":300.05,"left":89.9,"width":140.04000854492188,"height":6.0,"text":"????,???????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":315.65,"left":89.9,"width":144.89999389648438,"height":6.0,"text":"?,??????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":331.13,"left":89.9,"width":27.0,"height":6.0,"text":"??"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":347.21,"left":89.9,"width":144.0,"height":6.0,"text":"???????????,"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":362.81,"left":89.9,"width":139.92001342773438,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":378.41,"left":89.9,"width":139.92001342773438,"height":6.0,"text":"???,???????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":386.4,"left":311.69,"width":72.01998901367188,"height":5.449999809265137,"text":"26,533,566.50"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":393.89,"left":89.9,"width":140.04000854492188,"height":6.0,"text":"????????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":409.49,"left":89.9,"width":139.92001342773438,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":425.09,"left":89.9,"width":15.0,"height":6.000018119812012,"text":"?"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":441.19,"left":89.9,"width":139.92001342773438,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":456.67,"left":89.9,"width":123.02999114990234,"height":6.0,"text":"??????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":472.75,"left":89.9,"width":140.04000854492188,"height":6.0,"text":"????????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":488.35,"left":89.9,"width":139.92001342773438,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width"
:0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":503.95,"left":89.9,"width":140.82000732421875,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":519.43,"left":89.9,"width":139.92001342773438,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":535.03,"left":89.9,"width":86.99999237060547,"height":6.0,"text":"???????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":551.11,"left":89.9,"width":123.02999114990234,"height":6.0,"text":"??????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":567.19,"left":89.9,"width":139.92001342773438,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":575.18,"left":326.71,"width":57.0,"height":5.449999809265137,"text":"299,861.44"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":582.67,"left":89.9,"width":39.0,"height":6.0,"text":"???"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":598.75,"left":89.9,"width":140.04000854492188,"height":6.0,"text":"???????,????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":614.38,"left":89.9,"width":140.82000732421875,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":629.98,"left":89.9,"width":50.99999237060547,"height":6.000048637390137,"text":"????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":645.94,"left":89.9,"width":74.99999237060547,"height":6.0,"text":"??????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":662.02,"left":89.9,"width":140.04000854492188,"height":6.0,"text":"??????,?????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":677.62,"left":89.9,"width":111.02999114990234,"height":6.0,"text":"?????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":693.7,"left":89.9,"width":139.92001342773438,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":709.18,"left":89.9,"width":140.82000732421875,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":724.78,"left":89.9,"width":39.0,"height":6.0,"text":"???"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":740.86,"left":89.9,"width":139.920013
42773438,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":748.84,"left":322.73,"width":60.97998046875,"height":5.449999809265137,"text":"-318,895.62"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}],[{"top":756.46,"left":89.9,"width":139.92001342773438,"height":6.0,"text":"???????????"},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""},{"top":0.0,"left":0.0,"width":0.0,"height":0.0,"text":""}]]}]
```
From pg8out.csv:
```
???????? | ?? | ??(???)
-- | -- | --
????????? | ######## |
????,??????? |
?,?????????? |
?? | |
???????????, |
??????????? |
???,??????? |
| ######## |
???????????? |
??????????? |
? | |
??????????? |
| |
?????????? |
???????????? |
??????????? |
??????????? |
??????????? |
??????? | |
?????????? |
??????????? |
| ######## |
??? | |
???????,???? |
??????????? |
???? | |
?????? | |
??????,????? |
| |
????????? |
??????????? |
??????????? |
??? | |
??????????? |
| ######## |
??????????? |
```
## Related Issues:
From what I can tell, there are no related issues. | closed | 2021-06-24T19:00:07Z | 2021-06-24T19:00:56Z | https://github.com/chezou/tabula-py/issues/282 | [] | tmgcassidy | 1 |
StructuredLabs/preswald | data-visualization | 159 | [BUG] Iris example "entrypoint not defined" | **Describe the bug**
When running the new Iris example, the error `Error: entrypoint not defined in preswald.toml under [project] section.` is raised and the demo is unable to run.
**To Reproduce**
Steps to reproduce the behavior:
1. Change directory to `examples/iris`
2. Run `preswald run` in terminal
3. See error
**Expected behavior**
Iris demo should run
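For context, the error message suggests a config along these lines is expected; the values below are my guesses based on the message, not taken from the shipped example:
```toml
# Hypothetical preswald.toml fragment implied by the error message
[project]
title = "Iris Example"      # guessed value
entrypoint = "hello.py"     # script that `preswald run` should execute
```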
**Environment:**
- OS: Windows 10 Home 19045.5487
- Browser: Chrome
- Version: 133.0.6943.142 | open | 2025-03-07T17:05:27Z | 2025-03-07T17:18:02Z | https://github.com/StructuredLabs/preswald/issues/159 | [
"bug"
] | joshlavroff | 1 |
strawberry-graphql/strawberry | graphql | 3,134 | Add documentation for provided federated schema directives | Strawberry helpfully provides a bunch of federated schema directives we can use:
https://github.com/strawberry-graphql/strawberry/blob/628e4980b50e317cf99999aef052b8bb095370df/strawberry/federation/schema_directives.py
There's no reference to the provided directives in documentation, so I didn't even know they existed until I stumbled upon the code.
I can add documentation that, at the very least, lists the defined directives and links out to Apollo for details about what they are.
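For example, the page could show a small usage sketch along these lines (based on the `strawberry.federation` helpers; exact signatures should be checked against the linked module):
```python
# Illustrative sketch, not copied from the docs or the linked module.
import strawberry


@strawberry.federation.type(keys=["id"])      # emits the @key directive
class Product:
    id: strawberry.ID
    name: str = strawberry.federation.field(external=True)  # emits @external


@strawberry.type
class Query:
    product: Product


schema = strawberry.federation.Schema(query=Query)
```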
Where should this live?
In https://strawberry.rocks/docs/guides/federation?
In https://strawberry.rocks/docs/types/schema-directives?
In a new Federated Schema Directives page? | closed | 2023-10-05T13:59:37Z | 2025-03-20T15:56:24Z | https://github.com/strawberry-graphql/strawberry/issues/3134 | [] | bradleyoesch | 2 |
miguelgrinberg/python-socketio | asyncio | 1,101 | Session not found in AsyncNamespace | **Describe the bug**
Unable to save session in AsyncNamespace handlers
**To Reproduce**
Steps to reproduce the behavior:
server.py:
```python
# server.py
from typing import Any
import uvicorn
from fastapi import FastAPI
from socketio import AsyncNamespace
import socketio
sio: Any = socketio.AsyncServer(async_mode="asgi",logger=True, engineio_logger=True)
socket_app = socketio.ASGIApp(sio)
app = FastAPI()
app.mount("/", socket_app)
class AsyncNS(AsyncNamespace):
def __init__(self,ns,sio):
super().__init__(ns)
self.sio=sio
async def on_connect(self,sid, env):
print("on connect")
await self.sio.save_session(sid,{'id':1})
self.sio.enter_room(sid,"room2")
print("ok")
async def broadcast(self,sid,data):
print(self.sio)
print(f"broadcast {data}")
session = await self.sio.get_session(sid)
print(session)
sio.register_namespace(AsyncNS("/chat/ws",sio=sio))
if __name__ == "__main__":
kwargs = {"host": "0.0.0.0", "port": 8556}
uvicorn.run(app, **kwargs)
```
client:
```python
import socketio
cl = socketio.Client()
cl.connect(
"http://127.0.0.1:8556",
namespaces="/chat/ws",
transports=["websocket"]) # server prints "on connect"
```
**Expected behavior**
Have a session; this works fine with plain function handlers but not with the class-based one.
**Logs**
```
INFO: Started server process [5057]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8556 (Press CTRL+C to quit)
JZlqLm5gxSJR-t5nAAAA: Sending packet OPEN data {'sid': 'JZlqLm5gxSJR-t5nAAAA', 'upgrades': [], 'pingTimeout': 20000, 'pingInterval': 25000}
JZlqLm5gxSJR-t5nAAAA: Received request to upgrade to websocket
INFO: ('127.0.0.1', 59937) - "WebSocket /socket.io/?transport=websocket&EIO=4&t=1671253855.206351" [accepted]
JZlqLm5gxSJR-t5nAAAA: Upgrade to websocket successful
INFO: connection open
JZlqLm5gxSJR-t5nAAAA: Received packet MESSAGE data 0/chat/ws,{}
on connect
message async handler error
Traceback (most recent call last):
File "/code/project/.venv/lib/python3.10/site-packages/engineio/server.py", line 633, in _get_socket
s = self.sockets[sid]
KeyError: None
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/code/project/.venv/lib/python3.10/site-packages/engineio/asyncio_server.py", line 456, in _trigger_event
ret = await self.handlers[event](*args)
File "/code/project/.venv/lib/python3.10/site-packages/socketio/asyncio_server.py", line 594, in _handle_eio_message
await self._handle_connect(eio_sid, pkt.namespace, pkt.data)
File "/code/project/.venv/lib/python3.10/site-packages/socketio/asyncio_server.py", line 472, in _handle_connect
success = await self._trigger_event(
File "/code/project/.venv/lib/python3.10/site-packages/socketio/asyncio_server.py", line 569, in _trigger_event
return await self.namespace_handlers[namespace].trigger_event(
File "/code/project/.venv/lib/python3.10/site-packages/socketio/asyncio_namespace.py", line 37, in trigger_event
ret = await handler(*args)
File "/code/project/local/socketio_redis/server.py", line 21, in on_connect
await self.sio.save_session(sid,{'id':1})
File "/code/project/.venv/lib/python3.10/site-packages/socketio/asyncio_server.py", line 319, in save_session
eio_session = await self.eio.get_session(eio_sid)
File "/code/project/.venv/lib/python3.10/site-packages/engineio/asyncio_server.py", line 110, in get_session
socket = self._get_socket(sid)
File "/code/project/.venv/lib/python3.10/site-packages/engineio/server.py", line 635, in _get_socket
raise KeyError('Session not found')
KeyError: 'Session not found'
``` | closed | 2022-12-17T05:14:23Z | 2022-12-17T12:32:14Z | https://github.com/miguelgrinberg/python-socketio/issues/1101 | [
"question"
] | devamin | 1 |
ydataai/ydata-profiling | pandas | 1,031 | ProfileReport not generated | ### Current Behaviour
getting KeyError: 'Requested level (var1) does not match index name (None)'
### Expected Behaviour
It should generate an HTML file.
### Data Description
I am using the Iris dataset: https://github.com/venky14/Machine-Learning-with-Iris-Dataset/raw/master/Iris.csv
### Code that reproduces the bug
```Python
import pandas as pd
from pandas_profiling import ProfileReport

df2 = pd.read_csv(r"https://github.com/venky14/Machine-Learning-with-Iris-Dataset/raw/master/Iris.csv")
profile = ProfileReport(df2)
profile.to_file(output_file='iris.html')
```
### pandas-profiling version
v3.1.0
### Dependencies
```Text
pandas==1.4.2
numpy==1.21.5
```
### OS
Windows 10
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | closed | 2022-08-31T16:24:13Z | 2022-10-01T12:42:28Z | https://github.com/ydataai/ydata-profiling/issues/1031 | [
"needs-triage"
] | Somesh140 | 2 |
flairNLP/flair | nlp | 3,194 | [help wanted] : Problem when training/using a POS tagger with Flair | I have a problem when training a POS tagger with Flair (Version: 0.11.3), or more precisely when using the resulting learned model.
The training data is in CONLLU format. For the sentence "Allora ùn ti dicu nulla, ùn ti dicu nulla !", the CONLLU is the following:
```
# text = Allora ùn ti dicu nulla, ùn ti dicu nulla !
1 Allora _ ADV _ _ 0 _ _ _
2 ùn _ ADV _ _ 1 _ _ _
3 ti _ PRON _ _ 1 _ _ _
4 dicu _ VERB _ _ 1 _ _ _
5 nulla _ PRON _ _ 1 _ _ SpaceAfter=No
6 , _ PUNCT _ _ 1 _ _ _
7 ùn _ ADV _ _ 1 _ _ _
8 ti _ PRON _ _ 1 _ _ _
9 dicu _ VERB _ _ 1 _ _ _
10 nulla _ PRON _ _ 1 _ _ _
11 ! _ PUNCT _ _ 1 _ _ _
```
I currently have very little learning data. I have done these first Flair tests with 100 sentences.
The training is done as described below from CONLLU files.
**1. TRAIN the POS tagger**
```
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import FlairEmbeddings, StackedEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# a. Load data (CONLLU format)
columns = {0: 'id', 1: 'form', 2: 'lemma', 3: 'upos', 4: 'xpos', 5: 'feats', 6: 'head', 7:'deprel', 8:'deps', 9:'misc'}
corpus_cos: Corpus = ColumnCorpus(foldPath, columns,
train_file='TRAIN.conllu',
test_file='TEST.conllu',
dev_file='DEV.conllu')
label_type = 'upos'
label_dict = corpus_cos.make_label_dictionary(label_type=label_type)
# b. embeddings
embeddings = StackedEmbeddings(embeddings=[FlairEmbeddings('it-forward')])
# c. define tagger
tagger = SequenceTagger(hidden_size=hiddenSize,
embeddings=embeddings,
tag_dictionary=label_dict,
tag_type=label_type,
use_crf=True)
# d. define trainer
trainer = ModelTrainer(tagger, corpus_cos)
# e. learn
foldOutputDir="%s/%s"%(outputDir,str(fold).rjust(2,'0'))
trainer.train(foldOutputDir,
learning_rate=trainLearningRate,
mini_batch_size=trainMiniBatchSize,
max_epochs=trainMaxEpochs)
```
In order to get the details of the tags, I re-run the evaluation with the out_path argument as below:
```
result = tagger.evaluate(corpus_cos.test, mini_batch_size=32, out_path=f"%s/predictionsWithBestModel.txt"%foldOutputDir, gold_label_type="upos")
print(result.detailed_results)
```
I then use the best learned model to tag the TEST corpus (identical to the one used during training). The data to be annotated is loaded into a list of Sentences, each Sentence being a string containing the tokens separated by a space (this is why I used `use_tokenizer=False`).
**2. PREDICT pos tags on TEST data using the learned model**
```
import pyconll
from flair.data import Sentence
from flair.models import SequenceTagger

model = SequenceTagger.load('%s/best-model.pt' % foldOutputDir)
goldFile="%s/TEST.conllu"%foldPath
gold=pyconll.load_from_file(goldFile)
txtTag=[]
for sentGold in gold :
sent=""
for token in sentGold :
sent="%s %s"%(sent,token.form)
txtTag.append(Sentence(sent.strip(),use_tokenizer=False)) # use_tokenizer=False : no tokenization is performed and the text is split on whitespaces
model.predict(txtTag)
```
The results obtained at the end of the training (accuracy=0.2634) are very different from those I obtain using the best model (accuracy=0.1452).
The comparison of the results obtained shows that we do not obtain the same tags (differences are prefixed by *).
**-> Evaluation after training on CONLLU data :**
ID, GOLD, TAG
```
1 ADV DET
2 ADV NOUN
3 PRON ADP
4 VERB NOUN
5 PRON ADP
*6 PUNCT DET
*7 ADV NOUN
*8 PRON ADP
*9 VERB DET
*10 PRON NOUN
*11 PUNCT PUNCT
```
**-> Tagging with best model (on a string containing the tokens separated by spaces) :**
```
[0.1 - 0] Allora - GOLD : ADV <-> TAG NOTOK : DET
[0.2 - 1] ùn - GOLD : ADV <-> TAG NOTOK : NOUN
[0.3 - 2] ti - GOLD : PRON <-> TAG NOTOK : ADP
[0.4 - 3] dicu - GOLD : VERB <-> TAG NOTOK : NOUN
[0.5 - 4] nulla - GOLD : PRON <-> TAG NOTOK : ADP
*[0.6 - 5] , - GOLD : PUNCT <-> TAG NOTOK : NOUN
*[0.7 - 6] ùn - GOLD : ADV <-> TAG NOTOK : ADP
*[0.8 - 7] ti - GOLD : PRON <-> TAG NOTOK : DET
*[0.9 - 8] dicu - GOLD : VERB <-> TAG NOTOK : NOUN
*[0.10 - 9] nulla - GOLD : PRON <-> TAG NOTOK : ADP
*[0.11 - 10] ! - GOLD : PUNCT <-> TAG NOTOK : INTJ
```
Is the way I submit the text to be tagged correct? Where could this difference in tagging come from?
| closed | 2023-04-14T16:15:46Z | 2023-05-08T15:25:08Z | https://github.com/flairNLP/flair/issues/3194 | [] | lkevers | 4 |
sqlalchemy/alembic | sqlalchemy | 801 | How to check or edit the autogenerated migration? | When we autogenerate a migration through Alembic, I would like to ensure that the generated migration file passes a few checks. For example, if we add an index to a column, the autogenerated file should automatically add `postgresql_concurrently=True` to that index.
Is there any programmatic way to verify the autogenerated migration?
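One idea I am looking at is the `process_revision_directives` hook in `env.py`, which lets you inspect or rewrite the autogenerated operations before the file is written. A rough sketch (the `CreateIndexOp` handling below is an assumption and should be checked against the installed Alembic version; real migrations may also nest ops inside `ModifyTableOps` containers):
```python
# Sketch of a process_revision_directives hook for autogenerate checks/edits.
from alembic.operations import ops


def process_revision_directives(context, revision, directives):
    script = directives[0]
    for op in script.upgrade_ops.ops:
        if isinstance(op, ops.CreateIndexOp):
            # force concurrent index creation on PostgreSQL (assumed kwarg handling)
            op.kw["postgresql_concurrently"] = True


# in env.py:
# context.configure(..., process_revision_directives=process_revision_directives)
```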
**Have a nice day!**
| closed | 2021-02-24T06:35:34Z | 2021-02-24T18:12:07Z | https://github.com/sqlalchemy/alembic/issues/801 | [
"question"
] | sp1rs | 3 |
plotly/dash-core-components | dash | 643 | Additional languages code Markdown | Referring to issue #562.
https://github.com/plotly/dash-core-components/pull/562#issuecomment-530517744
> are there others we should add by default? With the caveat that we don't want to include all available languages, that would be too big. So we should also document the process to make & use your own hljs build (https://highlightjs.org/download/#cdns) - also requires that we implement something like plotly/dash#655
We can discuss here which languages to import in the default package and perhaps an easy way to import additional languages (as pointed out in plotly/dash#655).
Documentation is needed for these topics. | open | 2019-09-12T15:04:58Z | 2019-10-03T23:54:46Z | https://github.com/plotly/dash-core-components/issues/643 | [] | Luvideria | 1 |
comfyanonymous/ComfyUI | pytorch | 6,372 | add cache for api object_info | ### Feature Idea
If my ComfyUI installation has a lot of custom nodes, this API can take tens of seconds, sometimes minutes, to respond, so it may be worth adding a short-lived cache for it.

    @routes.get("/object_info")
    async def get_object_info(request):
        with folder_paths.cache_helper:
            out = {}
            for x in nodes.NODE_CLASS_MAPPINGS:
                try:
                    out[x] = node_info(x)
                except Exception as e:
                    logging.error(f"[ERROR] An error occurred while retrieving information for the '{x}' node.")
                    logging.error(traceback.format_exc())
            return web.json_response(out)
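
A rough sketch of what I mean, reusing the names from the snippet above (`CACHE_TTL` and `_object_info_cache` are made-up names, not existing ComfyUI code, and the snippet assumes the same module context as the handler above):
```python
import time

_object_info_cache = {"ts": 0.0, "out": None}
CACHE_TTL = 30.0  # seconds; tune as needed


@routes.get("/object_info")
async def get_object_info(request):
    now = time.time()
    if _object_info_cache["out"] is not None and now - _object_info_cache["ts"] < CACHE_TTL:
        # serve the cached node info instead of rebuilding it from every custom node
        return web.json_response(_object_info_cache["out"])
    with folder_paths.cache_helper:
        out = {}
        for x in nodes.NODE_CLASS_MAPPINGS:
            try:
                out[x] = node_info(x)
            except Exception:
                logging.error(f"[ERROR] An error occurred while retrieving information for the '{x}' node.")
                logging.error(traceback.format_exc())
    _object_info_cache["out"] = out
    _object_info_cache["ts"] = now
    return web.json_response(out)
```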
### Existing Solutions
_No response_
### Other
_No response_ | open | 2025-01-07T02:53:52Z | 2025-01-07T09:02:01Z | https://github.com/comfyanonymous/ComfyUI/issues/6372 | [
"Feature"
] | OneThingAI | 1 |
onnx/onnx | scikit-learn | 6,104 | Clarification of Reshape semantics for attribute 'allowzero' NOT set, zero volume | # Ask a Question
### Question
The Reshape documentation is not clear on whether reshaping from [0,10] to new shape [0,1,-1] is legal when attribute 'allowzero' is NOT set.
The relevant sentence is:
> At most one dimension of the new shape can be -1. In this case, the value is inferred from the size of the tensor and the remaining dimensions.
In the example, the input volume is 0, and the first two dimensions of the output tensor are [0,1]. The output tensor thus has a volume of 0 for _any_ value inferred for the -1 wildcard. Thus by one reading of the documentation the example is illegal.
However, another interpretation would be that when an input dimension is forwarded (because the new shape specifies 0 and `allowzero` is not set), then the dimension is ignored for purposes of inferring the -1 wildcard. I.e., the inference question is treated as equivalent to inferring the -1 wildcard for reshaping [10] to [1,-1].
Which interpretation is intended? | open | 2024-04-29T15:02:27Z | 2024-05-06T21:53:42Z | https://github.com/onnx/onnx/issues/6104 | [
"question",
"topic: spec clarification"
] | ArchRobison | 3 |
psf/black | python | 4,163 | Latest docker images fail to run blackd with an ImportError | <!--
Please make sure that the bug is not already fixed either in newer versions or the
current development version. To confirm this, you have three options:
1. Update Black's version if a newer release exists: `pip install -U black`
2. Use the online formatter at <https://black.vercel.app/?version=main>, which will use
the latest main branch.
3. Or run _Black_ on your machine:
- create a new virtualenv (make sure it's the same Python version);
- clone this repository;
- run `pip install -e .[d]`;
- run `pip install -r test_requirements.txt`
- make sure it's sane by running `python -m pytest`; and
- run `black` like you did last time.
-->
**Describe the bug**
I'm on MacOS. I try to run Black through Docker, but the container promptly stops with an ImportError related to the missing aiohttp dep.
**To Reproduce**
```
docker run -d -p 45484:45484 --pull always pyfound/black blackd --bind-host 0.0.0.0
# returns c42cfd39d8047d840beb779ea529005ef4d5682b060bcbca9b154e940bd346ab
```
```
docker logs -f c42cfd39d8047d840beb779ea529005ef4d5682b060bcbca9b154e940bd346ab
Traceback (most recent call last):
File "/opt/venv/bin/blackd", line 5, in <module>
from blackd import patched_main
File "/opt/venv/lib/python3.12/site-packages/blackd/__init__.py", line 14, in <module>
raise ImportError(
ImportError: aiohttp dependency is not installed: No module named 'aiohttp'. Please re-install black with the '[d]' extra install to obtain aiohttp_cors: `pip install black[d]`
```
- Black's version: Any tag greater than `pyfound/black:23.10.0`
- OS and Python version: [MacOS/14.2.1 (23C71)]
| closed | 2024-01-22T12:45:24Z | 2024-05-10T16:44:42Z | https://github.com/psf/black/issues/4163 | [
"T: bug"
] | ravishi | 8 |
FujiwaraChoki/MoneyPrinter | automation | 175 | Add multiple voice support | Add multiple voice support.
Users can select which voice they want, e.g. male or female.
The Eleven Labs API could be used for this. | closed | 2024-02-11T08:06:35Z | 2024-02-11T19:04:04Z | https://github.com/FujiwaraChoki/MoneyPrinter/issues/175 | [] | MEMEO-PRO | 2 |
vanna-ai/vanna | data-visualization | 596 | CVE-2024-5826 | CVE-2024-5826, is this cve fixed in version>0.6.2? | closed | 2024-08-12T06:58:55Z | 2024-08-14T02:27:16Z | https://github.com/vanna-ai/vanna/issues/596 | [] | lisiteng | 1 |
benbusby/whoogle-search | flask | 634 | [BUG] `WHOOGLE_MINIMAL` removes search results | **Describe the bug**
After some time, in some cases, a Whoogle search leads to an empty search page.
**To Reproduce**
Steps to reproduce the behavior:
1. Search (though it's unreliable)
2. See blank page
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [ ] Docker
- [x] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: Arch Linux
- Browser: firefox
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| closed | 2022-02-01T00:43:22Z | 2025-01-16T14:08:41Z | https://github.com/benbusby/whoogle-search/issues/634 | [
"bug"
] | DUOLabs333 | 19 |
gradio-app/gradio | data-science | 10,050 | Not all CSS files are loaded when app is mounted to FastAPI with some path | ### Describe the bug
It looks like not all assets are loaded from the proper directory when Gradio is mounted under a path with FastAPI. For example, for the default theme (when the app is mounted at `/xyz`) it requests `/assets/index-Bmd1Nf3q.css` and misses, while the file is available at `/xyz/assets/index-Bmd1Nf3q.css` (example in the logs below).
Many more files are missed for the other themes. I tested `gr.themes.Default()`, `gr.themes.Base()`, `gr.themes.Soft()`, and `gr.themes.Glass()`. Due to these issues the progress animation is not shown, and some other details may not be displayed properly.
It may be the same as https://github.com/gradio-app/gradio/issues/8073, but I don't use NGINX or HTTPS.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import uvicorn
from fastapi import FastAPI
app = FastAPI(docs_url=None, redoc_url=None)
with gr.Blocks(
theme=gr.themes.Default(text_size='md'), analytics_enabled=False, title="Test"
) as gr_io:
with gr.Row():
gr.Markdown(value="Description", line_breaks=False)
with gr.Row():
with gr.Column():
with gr.Row():
grc_submit_btn = gr.Button(
value="GO",
variant='primary',
)
with gr.Column(variant='panel'):
grc_result_area = gr.Markdown(value="")
with gr.Row():
gr.Markdown(value="Extra note", line_breaks=False)
app = gr.mount_gradio_app(app, gr_io, path='/xyz')
uvicorn.run(app, host='0.0.0.0', port=config.server_internal_port)
```
### Screenshot
_No response_
### Logs
```shell
INFO: 10.1.243.1:38230 - "GET /xyz/ HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/index-Dj1xzGVg.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/index-Bmd1Nf3q.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/svelte/svelte.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Index-yDh-RRa8.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Embed-Dgos_deE.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/index-CAS_VNRG.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38258 - "GET /xyz/assets/StreamingBar.svelte_svelte_type_style_lang-CxOfZBE-.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/IconButtonWrapper.svelte_svelte_type_style_lang-DAP8_Zsr.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/MarkdownCode.svelte_svelte_type_style_lang-CRfeLYV9.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/prism-python-VskFp_Cc.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/IconButton-DtUbToT-.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38258 - "GET /xyz/assets/Clear-By3xiIwg.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/context-TgWPFwN2.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/StreamingBar-DPKKRe-n.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/IconButtonWrapper-6oLg_adW.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38258 - "GET /xyz/assets/MarkdownCode-DfnQ3ojf.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Index-BJ_RfjVB.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/DownloadLink-CqD3Uu0l.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO: 10.1.243.1:38258 - "GET /assets/index-Bmd1Nf3q.css HTTP/1.1" 404 Not Found
INFO: 10.1.243.1:38244 - "GET /xyz/theme.css?v=76ee63afdb6c2791ddf9b92428cb796885031b4a4f1259df434def0a7c3f9d63 HTTP/1.1" 200 OK
INFO: 10.1.243.1:38258 - "GET /xyz/gradio_api/heartbeat/6bmuowsl0i HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Blocks-2mhBL-Wz.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Button-Dn54xFln.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Block-rEXcgPfT.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Toast-CGNhF_fW.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/index-Dqmuz79m.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/utils-BsGrhMNe.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Button-C-VfIjPJ.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Blocks-yLdzXwzS.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/static/fonts/ui-sans-serif/ui-sans-serif-Regular.woff2 HTTP/1.1" 404 Not Found
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Index-BWsGP2Ue.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Index-BGB95BqN.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Index-Cknuz4Hv.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Check-BiRlaMNo.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Copy-CxQ9EyK2.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/MarkdownCode-UKT7Q0jB.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/IconButtonWrapper-fdTarNL8.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Example-BsK0JilY.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Index-B8brEV0q.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Index-B630uaPU.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Tabs-CemoFNU3.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Index-WEzAIkMk.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Index-BaQTPtXo.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/FileUpload-D8-2zX42.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/BlockLabel-CnzaitFN.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Empty-CMV1fpYf.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Upload-DSEEphK_.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/File-BQ_9P3Ye.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Upload-DXgDHKDd.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/DownloadLink-IzUam-rM.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/file-url-DgijyRSD.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/UploadText-DnPHeWhE.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Example-DrmWnoSo.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Info-_kFFYhID.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Index-Danc61_d.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Index-BLXLQ2B2.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Index-DE1Sah7F.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Textbox-DRR8nyCw.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Send-DyoOovnk.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Square-oAGqOwsh.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Example-DN4wtGrM.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Index-DLT8ABL8.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Index-DV6aCiD8.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Index-BEHDlc0X.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/DropdownArrow-B7m41FWT.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Example-BFOhuzTJ.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Index-dG59Z873.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Index-wLIo4CCP.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/BlockTitle-BOkEQEU6.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Index-CptIZeFZ.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Example-D7K5RtQ2.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Index-7U9UAML0.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Tabs-C0qLuAtA.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Index-BcNLXLca.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/FileUpload-2TE7T7kD.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Index-Cgj6KPvj.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Upload-A42O3qlm.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Example-DpWs9cEC.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Index-DMKGW8pW.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Index-12OnbRhk.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Textbox-jWD3sCxr.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38230 - "GET /xyz/assets/Example-Cj3ii62O.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Index-CWxB-qJp.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/assets/Index-Dclo02rM.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38268 - "GET /xyz/assets/Index-WIAKB-_s.css HTTP/1.1" 200 OK
INFO: 10.1.243.1:38244 - "GET /xyz/assets/Index-uRgjJb4U.js HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "POST /xyz/gradio_api/queue/join HTTP/1.1" 200 OK
INFO: 10.1.243.1:38236 - "GET /xyz/gradio_api/queue/data?session_hash=6bmuowsl0i HTTP/1.1" 200 OK
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.6.0
gradio_client version: 1.4.3
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.4.3 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.12
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.10.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.8.0
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit==0.12.0 is not installed.
typer: 0.13.1
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
Blocking usage of gradio | open | 2024-11-27T12:40:38Z | 2024-12-27T04:57:18Z | https://github.com/gradio-app/gradio/issues/10050 | [
"bug"
] | YuryYakhno | 2 |
tfranzel/drf-spectacular | rest-api | 982 | parameters supporting lazy strings are not typed as accepting them | **Describe the bug**
After updating django-stubs to 1.13.0 or later, the `lazy` string functions from the translation module are declared as returning `Promise` objects instead of `str`.
Most classes / functions in drf_spectacular are typed as accepting `str` only, but work well with lazy strings, and are now flagged by mypy.
**To Reproduce**
Declare a response with the description set to a lazy translatable string.
```py
from django.utils.translation import gettext_lazy as _
from drf_spectacular.utils import OpenApiResponse
OpenApiResponse(description=_("Parsing and validation errors."))
```
Typecheck the code with mypy and django-stubs 1.13.0 or later.
```text
foo.py:3: error: Argument "description" to "OpenApiResponse" has incompatible type "_StrPromise"; expected "str" [arg-type]
```
**Expected behavior**
No typecheck error is raised, the translatable arguments are declared as allowing `StrOrPromise`
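Something along these lines is what I mean (a sketch, not drf-spectacular's actual source; `StrOrPromise` is the runtime alias shipped in `django-stubs-ext`):
```python
# Illustrative only: a parameter typed to accept both plain and lazy strings.
from typing import Optional

from django_stubs_ext import StrOrPromise  # Union[str, Promise] alias


class OpenApiResponse:
    def __init__(self, description: Optional[StrOrPromise] = None) -> None:
        self.description = description
```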
**References**
https://github.com/typeddjango/django-stubs#why-am-i-getting-incompatible-argument-type-mentioning-_strpromise | closed | 2023-05-02T15:00:47Z | 2023-07-23T21:20:54Z | https://github.com/tfranzel/drf-spectacular/issues/982 | [
"enhancement",
"fix confirmation pending"
] | nils-van-zuijlen | 6 |
google-research/bert | tensorflow | 534 | Computing the softmax in run_squad.py | You compute it over nbest results
1. Is there a theoretical explanation for that?
2. You sum start_logits and end_logits together without normalizing them with a softmax. Isn't there a risk of one dominating the other?
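For context, my understanding of the relevant part of `run_squad.py` is roughly this (a simplified paraphrase with toy data, not the exact source):
```python
# The n-best scoring sums raw start/end logits per candidate span, then takes a
# softmax only across the n-best candidates.
import math
from collections import namedtuple

Candidate = namedtuple("Candidate", ["text", "start_logit", "end_logit"])


def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


# toy n-best list standing in for the real prediction candidates
nbest = [Candidate("span a", 4.2, 3.9), Candidate("span b", 2.1, 2.7), Candidate("", 1.0, 1.1)]
total_scores = [c.start_logit + c.end_logit for c in nbest]  # summed raw logits, no per-side softmax
probs = softmax(total_scores)                                # normalized only across the n-best spans
```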
Thank you | open | 2019-04-01T10:11:49Z | 2019-04-01T10:11:49Z | https://github.com/google-research/bert/issues/534 | [] | christopher5106 | 0 |
ultralytics/yolov5 | pytorch | 12,472 | When measuring the Inference time (speed) of YOLOv5 with a batch size of 1 | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
When measuring the Inference time (speed) of YOLOv5 with a batch size of 1:
I am curious about the method used within your code.
For example, is it calculated as an average over a certain number of runs (e.g., 100 iterations)?
Which script should I refer to for the relevant code?
I noticed the `dt=Profile()` line in the val.py code.
Is it correct to understand that the timing is measured using the `with` statement?
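For reference, my understanding of the `Profile` pattern is roughly the following (an illustrative sketch; the actual `utils.general.Profile` may differ in detail):
```python
import time


class Profile:
    """Context manager that accumulates elapsed time across many `with` blocks."""

    def __init__(self):
        self.t = 0.0  # total seconds across all batches

    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *exc):
        self.dt = time.time() - self.start  # duration of this single batch
        self.t += self.dt                   # running total
        return False


dt = (Profile(), Profile(), Profile())  # pre-process, inference, NMS
# per batch: with dt[0]: ...; with dt[1]: pred = model(im); with dt[2]: non_max_suppression(pred)
# reported speed per image (ms) would then be x.t / number_of_images * 1000 for each stage
```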
### Additional
_No response_ | closed | 2023-12-06T05:15:22Z | 2024-01-16T00:21:23Z | https://github.com/ultralytics/yolov5/issues/12472 | [
"question",
"Stale"
] | ohjunee | 2 |
Asabeneh/30-Days-Of-Python | matplotlib | 192 | unknown | in day 12 Exercises: Level 2:https://github.com/Asabeneh/30-Days-Of-Python/blob/master/12_Day_Modules/12_modules.md#exercises-level-2
where is task 6
| closed | 2022-02-18T18:01:03Z | 2023-07-08T22:21:14Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/192 | [] | Wilsonagene | 1 |
coqui-ai/TTS | python | 2,825 | [Feature request] Global pruning for TTS models | Hi,
Have you thought about implementing a global pruning option for TTS models?
I was thinking it could be good if the end user could decide whether or not to use a pruned model for inference.
The implementation could be something along the lines of the Pytorch tutorial:
https://pytorch.org/tutorials/intermediate/pruning_tutorial.html
And something like this:
```
from TTS.tts.models.vits import Vits
from TTS.tts.configs.vits_config import VitsConfig
import torch
import torch.nn.utils.prune as prune
def prune_model(model, pruning_amount=0.5):
# Loop through each layer of the model
for name, module in model.named_modules():
# If the layer is a convolutional layer, apply pruning
if isinstance(module, torch.nn.Conv2d):
prune.l1_unstructured(module, name="weight", amount=pruning_amount)
return model
def save_pruned_model(model, save_path):
# Remove the pruning re-parametrization before saving
for module in model.modules():
if isinstance(module, torch.nn.Conv2d):
prune.remove(module, 'weight')
torch.save(model.state_dict(), save_path)
if __name__ == "__main__":
pruned_model_save_path = "./pruned_model.pth"
config = VitsConfig()
config.load_json("/home/mllopart/PycharmProjects/ttsAPI/tts-api/models/vits/config.json")
vits = Vits.init_from_config(config)
vits.load_checkpoint(config, "/home/mllopart/PycharmProjects/ttsAPI/tts-api/models/vits/model_file.pth")
pruned_model = prune_model(vits, pruning_amount=0.5)
save_pruned_model(pruned_model, pruned_model_save_path)
print(f"Pruned model saved to {pruned_model_save_path}")
config = VitsConfig()
config.load_json("/home/mllopart/PycharmProjects/ttsAPI/tts-api/models/vits/config.json")
vits = Vits.init_from_config(config)
vits.load_checkpoint(config, "/home/mllopart/PycharmProjects/ttsAPI/tts-api/server/pruned_model.pth")
```
Kind Regards
| closed | 2023-07-31T15:43:49Z | 2023-09-14T06:19:49Z | https://github.com/coqui-ai/TTS/issues/2825 | [
"wontfix",
"feature request"
] | mllopartbsc | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,231 | First it showed the visual 14 error now this | setup.py:66: RuntimeWarning: NumPy 1.20.3 may not yet support Python 3.11.
warnings.warn(
Running from numpy source directory.
setup.py:485: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
run_build = parse_setuppy_commands()
Processing numpy/random\_bounded_integers.pxd.in
Processing numpy/random\bit_generator.pyx
Processing numpy/random\mtrand.pyx
Processing numpy/random\_bounded_integers.pyx.in
Processing numpy/random\_common.pyx
Processing numpy/random\_generator.pyx
Processing numpy/random\_mt19937.pyx
Processing numpy/random\_pcg64.pyx
Processing numpy/random\_philox.pyx
Processing numpy/random\_sfc64.pyx
Cythonizing sources
blas_opt_info:
blas_mkl_info:
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize MSVCCompiler
libraries mkl_rt not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
blis_info:
libraries blis not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
openblas_info:
libraries openblas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']'
customize GnuFCompiler
Could not locate executable g77
Could not locate executable f77
customize IntelVisualFCompiler
Could not locate executable ifort
Could not locate executable ifl
customize AbsoftFCompiler
Could not locate executable f90
customize CompaqVisualFCompiler
Could not locate executable DF
customize IntelItaniumVisualFCompiler
Could not locate executable efl
customize Gnu95FCompiler
Could not locate executable gfortran
Could not locate executable f95
customize G95FCompiler
Could not locate executable g95
customize IntelEM64VisualFCompiler
customize IntelEM64TFCompiler
Could not locate executable efort
Could not locate executable efc
customize PGroupFlangCompiler
Could not locate executable flang
don't know how to compile Fortran code on platform 'nt'
NOT AVAILABLE
atlas_3_10_blas_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
atlas_3_10_blas_info:
libraries satlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\system_info.py:1989: UserWarning:
Optimized (vendor) Blas libraries are not found.
Falls back to netlib Blas library which has worse performance.
A better performance should be easily gained by switching
Blas library.
if self._calc_info(blas):
blas_info:
libraries blas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\system_info.py:1989: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
if self._calc_info(blas):
blas_src_info:
NOT AVAILABLE
C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\system_info.py:1989: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
if self._calc_info(blas):
NOT AVAILABLE
non-existing path in 'numpy\\distutils': 'site.cfg'
lapack_opt_info:
lapack_mkl_info:
libraries mkl_rt not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
openblas_lapack_info:
libraries openblas not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
openblas_clapack_info:
libraries openblas,lapack not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
flame_info:
libraries flame not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\lib
libraries tatlas,tatlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\lib
libraries lapack_atlas not found in C:\
libraries tatlas,tatlas not found in C:\
libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\libs
libraries tatlas,tatlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\libs
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\lib
libraries satlas,satlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\lib
libraries lapack_atlas not found in C:\
libraries satlas,satlas not found in C:\
libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\libs
libraries satlas,satlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\libs
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\lib
libraries ptf77blas,ptcblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\lib
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\libs
libraries ptf77blas,ptcblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\libs
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\lib
libraries f77blas,cblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\lib
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\libs
libraries f77blas,cblas,atlas not found in C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\libs
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
libraries lapack not found in ['C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\lib', 'C:\\', 'C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\\libs']
NOT AVAILABLE
C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\system_info.py:1849: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
lapack_src_info:
NOT AVAILABLE
C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\system_info.py:1849: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
return getattr(self, '_calc_info_{}'.format(name))()
NOT AVAILABLE
numpy_linalg_lapack_lite:
FOUND:
language = c
define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')]
C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\dist.py:275: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
running dist_info
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-3.11
creating build\src.win-amd64-3.11\numpy
creating build\src.win-amd64-3.11\numpy\distutils
building library "npymath" sources
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 149, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\build_meta.py", line 157, in prepare_metadata_for_build_wheel
self.run_setup()
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\build_meta.py", line 249, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\build_meta.py", line 142, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 513, in <module>
setup_package()
File "setup.py", line 505, in setup_package
setup(**metadata)
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\core.py", line 169, in setup
return old_setup(**new_attr)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\__init__.py", line 165, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 967, in run_commands
self.run_command(cmd)
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command
cmd_obj.run()
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\command\dist_info.py", line 31, in run
egg_info.run()
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\command\egg_info.py", line 24, in run
self.run_command("build_src")
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command
cmd_obj.run()
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\command\build_src.py", line 144, in run
self.build_sources()
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\command\build_src.py", line 155, in build_sources
self.build_library_sources(*libname_info)
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\command\build_src.py", line 288, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\command\build_src.py", line 378, in generate_sources
source = func(extension, build_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "numpy\core\setup.py", line 671, in get_mathlib_info
st = config_cmd.try_link('int main(void) { return 0;}')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 243, in try_link
self._link(body, headers, include_dirs,
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\command\config.py", line 162, in _link
return self._wrap_method(old_config._link, lang,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\command\config.py", line 96, in _wrap_method
ret = mth(*((self,)+args))
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 137, in _link
(src, obj) = self._compile(body, headers, include_dirs, lang)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\command\config.py", line 105, in _compile
src, obj = self._wrap_method(old_config._compile, lang,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\command\config.py", line 96, in _wrap_method
ret = mth(*((self,)+args))
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 132, in _compile
self.compiler.compile([src], include_dirs=include_dirs)
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 401, in compile
self.spawn(args)
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-4ueorcpy\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 505, in spawn
return super().spawn(cmd, env=env)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-install-qkmhz9_b\numpy_fd3a96fb3a2947219376a02be8757c4c\numpy\distutils\ccompiler.py", line 90, in <lambda>
m = lambda self, *args, **kw: func(self, *args, **kw)
^^^^^^^^^^^^^^^^^^^^^^^
TypeError: CCompiler_spawn() got an unexpected keyword argument 'env'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details. | open | 2023-07-04T14:37:51Z | 2023-07-04T14:39:02Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1231 | [] | ZIKICROWN | 0 |
rthalley/dnspython | asyncio | 366 | Parse additional options from resolv.conf | ``read_resolv_conf`` only understands the rotate option. ``/etc/resolv.conf`` has more options like timeout.
https://github.com/rthalley/dnspython/blob/aab95c61d554dcf8bc2cf991811165a320991b8c/dns/resolver.py#L586-L604
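For illustration only (this is not dnspython code), a minimal sketch of how an `options` line such as `options timeout:2 attempts:3 rotate` could be parsed; option names follow the resolv.conf man page linked below:
```python
# Hypothetical standalone parser, shown only to illustrate the requested behaviour.
def parse_resolv_options(path="/etc/resolv.conf"):
    opts = {"rotate": False, "timeout": None, "attempts": None}
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if not tokens or tokens[0] != "options":
                continue
            for opt in tokens[1:]:
                if opt == "rotate":
                    opts["rotate"] = True
                elif opt.startswith("timeout:"):
                    opts["timeout"] = int(opt.split(":", 1)[1])
                elif opt.startswith("attempts:"):
                    opts["attempts"] = int(opt.split(":", 1)[1])
    return opts
```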
Please consider supporting additional options like a global timeout, https://linux.die.net/man/5/resolv.conf | closed | 2019-04-16T12:54:35Z | 2020-05-15T12:51:13Z | https://github.com/rthalley/dnspython/issues/366 | [
"Enhancement Request"
] | tiran | 19 |
plotly/dash | flask | 2,514 | [BUG] Updating a html.Div() children with html.A() is firing its n_clicks property (ctx.triggered[0]["prop_id"] == "id.n_clicks") | I'm using a dcc.Store() to store the session layout to avoid backing to initial app.layout when refreshing the page.
Dash version: ```2.9.2```
The initial callback fires the condition ```if modified_timestamp is None:```:
```
@app.callback(
Output("container", 'children'),
Output("url", 'pathname'),
Input('memory', 'modified_timestamp'),
State('memory', 'data'),
)
def session_state(modified_timestamp,
memory_data):
if modified_timestamp is None:
return (
[
dcc.Location(id='url', refresh=False),
login_layout,
],
"/login"
)
else:
if memory_data is None:
raise PreventUpdate
else:
print(f"{line_numb()}: chegou aqui", flush=True)
print(f"{line_numb()}: {memory_data['pathname']}", flush=True)
return (
memory_data["after_login_layout"],
memory_data["pathname"]
)
```
After login occurs, the ```else``` branch fires, triggered by the following callback:
```
@app.callback(
Output('memory', 'data', allow_duplicate=True),
Input('user-input', 'value'),
Input('password-input', 'value'),
Input('login-button', 'n_clicks'),
prevent_initial_call=True
)
def store_data(user_input,
password_input,
login_click):
if (user_input is not None) and (password_input is not None) and (ctx.triggered_id == "login-button"):
print(f"{line_numb()}: login button pressed", flush=True)
return (
{
"after_login_layout":
[
dcc.Location(id='url', refresh=False),
sidebar_layout,
header_layout,
html.Div(
id="sub-container",
className="sub-container",
children=[
home_layout
]
)
],
"pathname": "/home"
}
)
else:
raise PreventUpdate
```
When the sidebar layout ```sidebar_layout``` is stored, it fires ```ctx.triggered[0]["prop_id"] == "a-home.n_clicks"```. So I decided to check whether this really happens by commenting out the line, and the problem is gone.
Before commenting out the line:
```
91: chegou aqui
92: /home
a-home.n_clicks
151: home button pressed
```
After commenting out the line:
```
91: chegou aqui
92: /home
```
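A common mitigation for spurious `n_clicks` triggers when a link is (re)rendered is to guard on the click count itself. This is a hedged sketch, assuming a callback like the one behind the `151: home button pressed` log exists:
```python
@app.callback(
    Output("sub-container", "children"),
    Input("a-home", "n_clicks"),
    prevent_initial_call=True,
)
def go_home(n_clicks):
    # n_clicks is None/0 right after the html.A is rendered; only react to real clicks
    if not n_clicks:
        raise PreventUpdate
    return [home_layout]
```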
My sidebar page is configured as follows:
```
from dash import html, dcc
def layout():
sidebar_links = [
{"id":"a-home", "text": "Home", "icon": "bi bi-house-fill home", "href": "javascript:void(0);"},
{"id":"a-analytics", "text": "Analitico", "icon": "bi bi-pie-chart-fill analytics", "href": "javascript:void(0);"},
]
layout_sidebar = html.Div(
id="sidebar",
className="sidebar",
children=[
html.A(
id=link["id"],
className=link["id"],
children=[
html.I(
className=link["icon"],
),
link["text"],
],
href=link["href"],
) for link in sidebar_links
]
)
return layout_sidebar
```
My imports:
```
from dash import Dash, dcc, html, Input, Output, State, ctx, dcc
from dash.exceptions import PreventUpdate
from pages import login, sidebar, home, analytics
import plotly.express as px
import inspect
def line_numb():
'''Returns the current line number in our program'''
return inspect.currentframe().f_back.f_lineno
app = Dash(
__name__,
suppress_callback_exceptions=True,
external_stylesheets=[
"./assets/style.css"
]
)
# layouts
login_layout = login.layout()
sidebar_layout = sidebar.layout()
home_layout = home.layout()
# header
header_layout = html.Div(
id="header",
className="header",
children=[
html.Div(
id='login-message-label',
className="login-message-label",
children=["LOGADO COM SUCESSO À 0 SEGUNDOS"],
style={"text-align": "center"}
),
]
)
app.layout = html.Div(
id="main-container",
className="main-container",
children=[
dcc.Location(id='url', refresh=False),
dcc.Store(
id='memory',
data=None,
storage_type='session',
),
html.Div(
id="container",
className="container",
children=[]
)
]
)
``` | closed | 2023-04-26T02:12:16Z | 2023-05-04T16:37:18Z | https://github.com/plotly/dash/issues/2514 | [] | leo-smi | 3 |
LAION-AI/Open-Assistant | python | 2,822 | Inference queue length diverges over time | Investigate how this can happen & see how to fix it | open | 2023-04-21T21:47:21Z | 2023-04-21T21:47:21Z | https://github.com/LAION-AI/Open-Assistant/issues/2822 | [
"bug",
"inference"
] | yk | 0 |
microsoft/nni | pytorch | 5,128 | error about the visualization website (http://127.0.0.1:ip) | After I start `nni` and open the website `http://127.0.0.1:ip`, the page can no longer be reopened.
In detail: I can't reload the website, and it always shows the interface from when I first opened it.
I run NNI remotely on an Ubuntu server.
And I want to view the details on my own computer, which is also an Ubuntu server.
Looking for your reply. | closed | 2022-09-15T11:29:39Z | 2022-09-16T07:06:39Z | https://github.com/microsoft/nni/issues/5128 | [] | xiangtaowong | 6 |
wsvincent/awesome-django | django | 3 | docker | Thoughts on adding a Docker section with some basic resources? | closed | 2018-11-07T15:11:39Z | 2018-11-08T18:12:04Z | https://github.com/wsvincent/awesome-django/issues/3 | [] | mjhea0 | 1 |
plotly/dash | data-visualization | 2,914 | polyfill.io vulnerability | Thanks so much for your interest in Dash!
Before posting an issue here, please check the Dash [community forum](https://community.plotly.com/c/dash) to see if the topic has already been discussed. The community forum is also great for implementation questions. When in doubt, please feel free to just post the issue here :)
**Is your feature request related to a problem? Please describe.**
I was running a server using Dash which has been flagged by my institution as possibly vulnerable to a JavaScript supply-chain attack due to the recent polyfill.io vulnerability https://www.bleepingcomputer.com/news/security/polyfillio-javascript-supply-chain-attack-impacts-over-100k-sites/
I am struggling to establish if the vulnerability relates to dash or one of the dependencies needed to create the server.
**Describe the solution you'd like**
Indicate whether the vulnerability is related to Dash and, if so, provide a resolution.
**Describe alternatives you've considered**
I tried tracing the vulnerability to other dependencies but haven't been able to due to my lack of JS knowledge.
| closed | 2024-07-05T07:06:46Z | 2024-07-18T03:15:55Z | https://github.com/plotly/dash/issues/2914 | [] | fbravosanchez | 4 |
google-research/bert | nlp | 912 | How to do multi gpu prediction for a finetuned bert model? | I am working on a use case in which i want to concurrently run finetuned bert model on a node which has 4 gpus. I would like to use all the gpus in the node, but when i try to use concurrent.futures package to copy the model to all the 4 gpus and run concurrently. It is running all the predictions on the 4th gpu on the node. Could you please help me with this? | open | 2019-11-12T11:02:43Z | 2021-11-19T06:59:03Z | https://github.com/google-research/bert/issues/912 | [] | aswin-giridhar | 5 |
noirbizarre/flask-restplus | api | 254 | fields.Nested failure? | Please consider:
```
@api.expect(api.model('Register client', {
'data': fields.Nested({
'name': fields.String(required= True),
'value': fields.String(required= True)
})
}))
```
When I start the Flask service and access the Swagger doc, I see this error:
File "/flask_restplus-0.10.1-py2.7.egg/flask_restplus/swagger.py", line 470, in register_field
self.register_model(field.nested)
File "flask_restplus-0.10.1-py2.7.egg/flask_restplus/swagger.py", line 453, in register_model
if name not in self.api.models:
TypeError: unhashable type: 'dict'
I think the error may be that register_model is called to register a nested field, and that appears to want a complete model not an instance of Nested.
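For reference, a sketch of the pattern that avoids the error: pass a registered model to `fields.Nested` instead of a raw dict (the names below are illustrative):
```python
# Register the nested structure as its own model first, then reference that model.
data_model = api.model('Register client data', {
    'name': fields.String(required=True),
    'value': fields.String(required=True)
})

register_client = api.model('Register client', {
    'data': fields.Nested(data_model)
})

@api.expect(register_client)
def post(self):
    ...
```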
Any thoughts? Thanks
| closed | 2017-03-15T16:04:45Z | 2019-04-30T10:46:06Z | https://github.com/noirbizarre/flask-restplus/issues/254 | [] | jbakermk | 3 |
wkentaro/labelme | computer-vision | 778 | [Feature] Changing the vertex size of polygon | **Is your feature request related to a problem? Please describe.**
How can I customize the vertex size of polygons?
I've seen the feature request https://github.com/wkentaro/labelme/issues/417#issue-453931300 .
Was it added?
**Describe the solution you'd like**
As described in https://github.com/wkentaro/labelme/issues/417#issue-453931300, I want to customize the vertex size.
Does anyone else know about this?
| closed | 2020-09-24T04:53:07Z | 2020-09-28T02:03:20Z | https://github.com/wkentaro/labelme/issues/778 | [] | planemanner | 1 |
chatanywhere/GPT_API_free | api | 106 | Through what channels can ChatGPT-4 Plus be accessed and its key used? | closed | 2023-10-07T15:24:06Z | 2023-10-21T09:49:04Z | https://github.com/chatanywhere/GPT_API_free/issues/106 | [] | doomooo | 1 |
|
python-restx/flask-restx | flask | 143 | How do I manually enforce Swagger UI to include a model's definition? | It seems like using list response with nested model mentioned here #65 doesn't automatically add the definition of the model. As a result, I got an error saying "Could not resolve reference: Could not resolve pointer: /definitions/MyModel does not exist in document" at the Swagger UI page.
`@api.response(200, '', fields.List(fields.Nested(MyModel)))`
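A workaround that is sometimes suggested for this (unverified here, names are illustrative) is to explicitly register the model on the `Api` object so the emitted `$ref` can be resolved:
```python
# Assumes: from flask_restx import Resource, fields and an existing `api` object.
my_model = api.model('MyModel', {
    'name': fields.String,
    'value': fields.String
})
api.models[my_model.name] = my_model  # hypothetical workaround: force the definition into the spec

@api.route('/items')
class ItemList(Resource):
    @api.response(200, 'Success', fields.List(fields.Nested(my_model)))
    def get(self):
        ...
```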
Can someone help me with this? | open | 2020-05-28T01:18:16Z | 2023-12-12T10:26:08Z | https://github.com/python-restx/flask-restx/issues/143 | [
"question"
] | pinyiw | 6 |
JaidedAI/EasyOCR | deep-learning | 899 | 'module' object is not callable while running in Google Colab | Hello Everyone,
Sorry if this is a very trivial issue, but I couldn't solve it even after trying a few solutions from Stack Overflow.
In VSCode I don't get any error, but on Google Colab I can't make it work at all. I run the default file provided in the trainer folder.
This is how I had to give the paths, since I had mounted the files from my Google Drive.
```python
from importlib.machinery import SourceFileLoader
train = SourceFileLoader("train", "/content/drive/MyDrive/EasyOcrTrainer/train.py").load_module()
AttrDict = SourceFileLoader("utils", "/content/drive/MyDrive/EasyOcrTrainer/utils.py").load_module()
import pandas as pd
CTCLabelConverter = SourceFileLoader("utils", "/content/drive/MyDrive/EasyOcrTrainer/utils.py").load_module()
AttnLabelConverter = SourceFileLoader("utils", "/content/drive/MyDrive/EasyOcrTrainer/utils.py").load_module()
Averager = SourceFileLoader("utils", "/content/drive/MyDrive/EasyOcrTrainer/utils.py").load_module()
hierarchical_dataset = SourceFileLoader("dataset", "/content/drive/MyDrive/EasyOcrTrainer/dataset.py").load_module()
Batch_Balanced_Dataset = SourceFileLoader("dataset", "/content/drive/MyDrive/EasyOcrTrainer/dataset.py").load_module()
AlignCollate = SourceFileLoader("dataset", "/content/drive/MyDrive/EasyOcrTrainer/dataset.py").load_module()
Model = SourceFileLoader("model", "/content/drive/MyDrive/EasyOcrTrainer/model.py").load_module()
validation = SourceFileLoader("test", "/content/drive/MyDrive/EasyOcrTrainer/test.py").load_module()
```
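The likely cause is that `load_module()` returns the module object itself, not the classes or functions defined inside it, so `AttrDict(opt)` ends up calling a module. A minimal sketch of imports that pull the actual attributes instead (assuming these names are defined in the respective files):
```python
from importlib.machinery import SourceFileLoader

utils = SourceFileLoader("utils", "/content/drive/MyDrive/EasyOcrTrainer/utils.py").load_module()
dataset = SourceFileLoader("dataset", "/content/drive/MyDrive/EasyOcrTrainer/dataset.py").load_module()

# Grab the callables from the loaded modules rather than reusing the module objects.
AttrDict = utils.AttrDict
CTCLabelConverter = utils.CTCLabelConverter
AttnLabelConverter = utils.AttnLabelConverter
Averager = utils.Averager
hierarchical_dataset = dataset.hierarchical_dataset
Batch_Balanced_Dataset = dataset.Batch_Balanced_Dataset
AlignCollate = dataset.AlignCollate
```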
I have not changed anything else. This is the error I get
```
TypeError Traceback (most recent call last)
[<ipython-input-91-3f9d0d70acd7>](https://localhost:8080/#) in <module>
----> 1 opt = get_config("/content/drive/MyDrive/EasyOcrTrainer/config_files/en_filtered_config.yaml")
2 train(opt, amp=False)
[<ipython-input-88-59a3a683ab87>](https://localhost:8080/#) in get_config(file_path)
2 with open(file_path, 'r', encoding="utf8") as stream:
3 opt = yaml.safe_load(stream)
----> 4 opt = AttrDict(opt)
5 if opt.lang_char == 'None':
6 characters = ''
TypeError: 'module' object is not callable
```
| closed | 2022-12-05T13:18:15Z | 2022-12-19T06:15:31Z | https://github.com/JaidedAI/EasyOCR/issues/899 | [] | iversions | 1 |
litestar-org/litestar | api | 3,116 | Enhancement: Session Middleware should create session id right away | ### Summary
Follow-up on: https://github.com/orgs/litestar-org/discussions/3112#discussioncomment-8477410
Currently, the `SessionMiddleware` of the `SessionAuth` backend creates the session id in the response wrapper and not when the session is first created, e.g., on user login. This is counter-intuitive since it gives the impression the session is created within the route handler when it's actually not. Furthermore, this makes it difficult or even impossible to add any custom handling of the session id during the login procedure since the session id does not exist yet.
Current state:
```python
@post("/login")
async def login(
data: UserLogin,
request: Request,
) -> User:
user = handle_auth(data)
request.set_session(
{"user-id": user.id, "foo": "bar"}
)
logger.info(f"Session content: {request.session}") # This will have content
logger.info(f"Cookies: {request.cookies}") # This will be empty. The send wrapper will add a session id here once created
return user
```
### Basic Example
My first best guess:
On registering the SessionAuth on the app, add a dependency so that it can be accessed as a kwarg anywhere (not sure if the reference should be to that object, the session backend, or the middleware). On this reference, the user can call a method to pass the session data. This method will:
- create a session id
- store the session data in the store with that key
- add the session data to the connection scope
- potentially also already add it to the cookie? (not sure if this can be done here or should still be done in the send wrapper)
- return the session id
So for the user, it would look something like this:
```python
@post("/login")
async def login(
data: UserLogin,
request: Request,
session_backend: SessionBackend
) -> User:
user = handle_auth(data)
session_backend.create_session(request=request, session_data={"user-id": user.id, "foo": "bar"})
return user
```
### Drawbacks and Impact
_No response_
### Unresolved questions
Only just started digging through the code to understand the complete flow so might have missed something. Yet, the user perspective should be clear. | closed | 2024-02-15T11:41:44Z | 2025-03-20T15:54:26Z | https://github.com/litestar-org/litestar/issues/3116 | [
"Enhancement"
] | aranvir | 6 |
deepset-ai/haystack | pytorch | 8,656 | `pipeline.draw()` does not show user-provided value to variadic input | **Describe the bug**
Providing `run` with a value to a variadic input, e.g. a branchJoiner, is not shown by `pipeline.draw()`.
**Additional context**
As a result, the `branchJoiner` that should sit at the top of the pipeline is instead shown at the very bottom, making the structure a bit confusing.
This is probably intentional for optional inputs but variadic inputs should be handled separately IMHO as they are relevant to the control flow.
**To Reproduce**
* Create a pipeline with a loop using `branchJoiner`
* Provide it with a value through a connection and through `run({"branchJoiner": { "value": value}})`
* Call `pipeline.draw()`
* See the node appearing at the bottom of the pipeline instead of at the top. The user provided input, normally shown with a star, does not point to the `branchJoiner` node.
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: Linux
- Haystack version (commit or version number): 2.7.0 | closed | 2024-12-18T11:54:27Z | 2025-03-19T16:49:09Z | https://github.com/deepset-ai/haystack/issues/8656 | [
"P1"
] | Willenbrink | 4 |
PokeAPI/pokeapi | graphql | 270 | Pokemon.location_area_encounters URL is not consistent with other URLs | Under a Pokemon resource (e.g. https://pokeapi.co/api/v2/pokemon/427), the URL to the encounters resource is not full. Meaning, unlike other URLs in the API, the client needs to ensure it requests it by prefixing the protocol and host.
For example:
```
{
location_area_encounters: "/api/v2/pokemon/427/encounters",
}
```
Ideally it should be
```
{
location_area_encounters: "http://pokeapi.co/api/v2/pokemon/427/encounters",
}
```
| closed | 2016-10-15T20:29:33Z | 2019-12-07T14:17:42Z | https://github.com/PokeAPI/pokeapi/issues/270 | [
"bug",
"Beginner friendly"
] | jahed | 5 |
piskvorky/gensim | data-science | 3,555 | Always, each row of word2vec model txt format file starts with word and the rest is vector. But the function save_word2vec_format of the code make it starting with index. Is it a Bug? | https://github.com/piskvorky/gensim/blob/54dfec9909041817371ed96a4a53e36dc1b398d9/gensim/models/keyedvectors.py#L1639C2-L1670C110
``` python
store_order_vocab_keys = self.index_to_key
keys_to_write = itertools.chain(range(0, index_id_count), store_order_vocab_keys)
for key in keys_to_write:
key_vector = self[key]
if binary:
fout.write(f"{prefix}{key} ".encode('utf8') + key_vector.astype(REAL).tobytes())
else:
fout.write(f"{prefix}{key} {' '.join(repr(val) for val in key_vector)}\n".encode('utf8'))
```
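A hedged illustration of what that chain produces (assumed behaviour, not verified against this exact gensim commit): for an ordinary Word2Vec model `index_id_count` is 0, so the `range()` part is empty and every written row starts with a vocabulary key, i.e. the word itself.
```python
import itertools

index_id_count = 0                      # assumed typical case: all keys are strings
index_to_key = ["hello", "world"]
keys_to_write = itertools.chain(range(0, index_id_count), index_to_key)
print(list(keys_to_write))              # ['hello', 'world'] -> words, not integer indices
```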
Each row of a word2vec text-format model file normally starts with the word followed by its vector, but `save_word2vec_format` in this code appears to start rows with an index instead. Is this a bug? | closed | 2024-07-31T09:49:07Z | 2024-11-12T17:29:09Z | https://github.com/piskvorky/gensim/issues/3555 | [] | rollingdeep | 5 |
kizniche/Mycodo | automation | 1,225 | Issue during upgrade: Bad gateway error | Tried to update, got error of bad gateway.
With
journalctl -u mycodoflask | tail -n 50
the results indicated the virtual env was missing. This was upgraded. Still getting the bad gateway error.
now
journalctl -u mycodoflask | tail -n 50
gives results:
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 134, in init_process
Sep 10 10:01:38 mush-room gunicorn[493]: self.load_wsgi()
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
Sep 10 10:01:38 mush-room gunicorn[493]: self.wsgi = self.app.wsgi()
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
Sep 10 10:01:38 mush-room gunicorn[493]: self.callable = self.load()
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
Sep 10 10:01:38 mush-room gunicorn[493]: return self.load_wsgiapp()
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
Sep 10 10:01:38 mush-room gunicorn[493]: return util.import_app(self.app_uri)
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/gunicorn/util.py", line 359, in import_app
Sep 10 10:01:38 mush-room gunicorn[493]: mod = importlib.import_module(module)
Sep 10 10:01:38 mush-room gunicorn[493]: File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
Sep 10 10:01:38 mush-room gunicorn[493]: return _bootstrap._gcd_import(name[level:], package, level)
Sep 10 10:01:38 mush-room gunicorn[493]: File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
Sep 10 10:01:38 mush-room gunicorn[493]: File "<frozen importlib._bootstrap>", line 983, in _find_and_load
Sep 10 10:01:38 mush-room gunicorn[493]: File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
Sep 10 10:01:38 mush-room gunicorn[493]: File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
Sep 10 10:01:38 mush-room gunicorn[493]: File "<frozen importlib._bootstrap_external>", line 728, in exec_module
Sep 10 10:01:38 mush-room gunicorn[493]: File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/mycodo/start_flask_ui.py", line 11, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from mycodo.mycodo_flask.app import create_app
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/mycodo/mycodo_flask/app.py", line 17, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from flask_limiter import Limiter
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask_limiter/__init__.py", line 4, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from .extension import Limiter, HEADERS
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask_limiter/extension.py", line 14, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from limits.errors import ConfigurationError
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/limits/__init__.py", line 5, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from . import _version, aio, storage, strategies
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/limits/aio/__init__.py", line 1, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from . import storage, strategies
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/limits/aio/storage/__init__.py", line 6, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from .base import MovingWindowSupport, Storage
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/limits/aio/storage/base.py", line 5, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from limits.storage.registry import StorageRegistry
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/limits/storage/__init__.py", line 12, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from .base import MovingWindowSupport, Storage
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/limits/storage/base.py", line 4, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from limits.storage.registry import StorageRegistry
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/limits/storage/registry.py", line 5, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from limits.typing import Dict, List, Tuple, Union
Sep 10 10:01:38 mush-room gunicorn[493]: File "/home/pi/Mycodo/env/lib/python3.7/site-packages/limits/typing.py", line 13, in <module>
Sep 10 10:01:38 mush-room gunicorn[493]: from typing_extensions import ClassVar, Counter, ParamSpec, Protocol
Sep 10 10:01:38 mush-room gunicorn[493]: ImportError: cannot import name 'ParamSpec' from 'typing_extensions' (/usr/local/lib/python3.7/dist-packages/typing_extensions.py)
Sep 10 10:01:38 mush-room gunicorn[493]: [2022-09-10 10:01:38 +0200] [674] [INFO] Worker exiting (pid: 674)
Sep 10 10:01:38 mush-room gunicorn[493]: [2022-09-10 10:01:38 +0200] [493] [INFO] Shutting down: Master
Sep 10 10:01:38 mush-room gunicorn[493]: [2022-09-10 10:01:38 +0200] [493] [INFO] Reason: Worker failed to boot.
Sep 10 10:01:38 mush-room systemd[1]: mycodoflask.service: Main process exited, code=exited, status=3/NOTIMPLEMENTED
Sep 10 10:01:38 mush-room systemd[1]: mycodoflask.service: Failed with result 'exit-code'.
Unsure what the problem would be. Any help would be appreciated.
Thanks | closed | 2022-09-10T08:07:43Z | 2022-11-22T16:20:00Z | https://github.com/kizniche/Mycodo/issues/1225 | [] | TeaPot169 | 1 |
opengeos/leafmap | streamlit | 2 | Get user drawn features as a GeoJSON | Users can draw multiple features on the map. However, ipyleaflet can only return the last drawn feature. It would be useful to return all user-drawn features as a GeoJSON dict. | closed | 2021-05-25T18:17:20Z | 2021-05-25T19:23:40Z | https://github.com/opengeos/leafmap/issues/2 | [
"Feature Request"
] | giswqs | 1 |
zappa/Zappa | django | 1,160 | Add support for python 3.10 | Title says it all. Ubuntu 22 ships with python 3.10 by default so it would be nice to have zappa support 3.10 | closed | 2022-08-05T07:19:59Z | 2023-05-19T06:06:28Z | https://github.com/zappa/Zappa/issues/1160 | [
"next-release-candidate"
] | ppartarr | 8 |