repo_name
stringlengths 9
75
| topic
stringclasses 30
values | issue_number
int64 1
203k
| title
stringlengths 1
976
| body
stringlengths 0
254k
| state
stringclasses 2
values | created_at
stringlengths 20
20
| updated_at
stringlengths 20
20
| url
stringlengths 38
105
| labels
sequencelengths 0
9
| user_login
stringlengths 1
39
| comments_count
int64 0
452
|
---|---|---|---|---|---|---|---|---|---|---|---|
DistrictDataLabs/yellowbrick | matplotlib | 340 | Code Conventions Guide for Documentation and Examples | In our documentation and code examples, we have several different styles around referring to the workflow and how to format code examples.
It would be helpful to identify and establish a handful of code conventions that we follow to reduce the cognitive load for using this library.
Code Examples:
- Should always include the import path of the visualizer
| closed | 2018-03-16T14:09:16Z | 2018-04-10T17:33:24Z | https://github.com/DistrictDataLabs/yellowbrick/issues/340 | [
"type: documentation"
] | ndanielsen | 11 |
joeyespo/grip | flask | 356 | Doc suggestion - Docker run | Hi, great project! I just wanted to share a quick-and-dirty docker run one liner if anyone feel a docker container is useful there as I did.
You may close this issue without changing anything if you want.
I didn't know where else to share this.
Cheers guys
```
# Access the working directory that you have the markdown files
# change what you require on the command below, then run it
docker run -it --name python-grip --rm \
-p 6419:6419 \
--env FILE=README.md \
--env DEBUG=True \
--env DEBUG_GRIP=True \
--env HOST=0.0.0.0 \
-v "$(pwd)":/workspace \
python bash -c "pip install grip && mkdir ~/.grip/ && bash -c \"echo -e \\\"DEBUG=\$DEBUG\nDEBUG_GRIP=\$DEBUG_GRIP\nHOST='\$HOST'\\\" >> ~/.grip/settings.py \" && cd workspace/ && grip \$FILE"
# access the page at localhost:6419 on your browser
``` | open | 2022-03-07T18:43:30Z | 2023-12-03T17:44:31Z | https://github.com/joeyespo/grip/issues/356 | [] | jfftonsic | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,896 | replace ORM loader depth warning with notes in cache disabled message | ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10895
| closed | 2024-01-18T02:09:24Z | 2024-01-19T16:14:27Z | https://github.com/sqlalchemy/sqlalchemy/issues/10896 | [
"orm",
"loader depth warning"
] | zzzeek | 2 |
huggingface/datasets | pandas | 6,505 | Got stuck when I trying to load a dataset | ### Describe the bug
Hello, everyone. I met a problem when I am trying to load a data file using load_dataset method on a Debian 10 system. The data file is not very large, only 1.63MB with 600 records.
Here is my code:
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json')
I waited it for 20 minutes. It still no response. I cannot using Ctrl+C to cancel the command. I have to use Ctrl+Z to kill it. I also try it with a txt file, it still no response in a long time.
I can load the same file successfully using my laptop (windows 10, python 3.8.5, datasets==2.14.5). I can also make it on another computer (Ubuntu 20.04.5 LTS, python 3.10.13, datasets 2.14.7). It only takes me 1-2 miniutes.
Could you give me some suggestions? Thank you.
### Steps to reproduce the bug
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json')
### Expected behavior
I hope it can load the file successfully.
### Environment info
OS: Debian GNU/Linux 10
Python: Python 3.10.13
Pip list:
Package Version
------------------------- ------------
accelerate 0.25.0
addict 2.4.0
aiofiles 23.2.1
aiohttp 3.9.1
aiosignal 1.3.1
aliyun-python-sdk-core 2.14.0
aliyun-python-sdk-kms 2.16.2
altair 5.2.0
annotated-types 0.6.0
anyio 3.7.1
async-timeout 4.0.3
attrs 23.1.0
certifi 2023.11.17
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
contourpy 1.2.0
crcmod 1.7
cryptography 41.0.7
cycler 0.12.1
datasets 2.14.7
dill 0.3.7
docstring-parser 0.15
einops 0.7.0
exceptiongroup 1.2.0
fastapi 0.105.0
ffmpy 0.3.1
filelock 3.13.1
fonttools 4.46.0
frozenlist 1.4.1
fsspec 2023.10.0
gast 0.5.4
gradio 3.50.2
gradio_client 0.6.1
h11 0.14.0
httpcore 1.0.2
httpx 0.25.2
huggingface-hub 0.19.4
idna 3.6
importlib-metadata 7.0.0
importlib-resources 6.1.1
jieba 0.42.1
Jinja2 3.1.2
jmespath 0.10.0
joblib 1.3.2
jsonschema 4.20.0
jsonschema-specifications 2023.11.2
kiwisolver 1.4.5
markdown-it-py 3.0.0
MarkupSafe 2.1.3
matplotlib 3.8.2
mdurl 0.1.2
modelscope 1.10.0
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.15
networkx 3.2.1
nltk 3.8.1
numpy 1.26.2
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.18.1
nvidia-nvjitlink-cu12 12.3.101
nvidia-nvtx-cu12 12.1.105
orjson 3.9.10
oss2 2.18.3
packaging 23.2
pandas 2.1.4
peft 0.7.1
Pillow 10.1.0
pip 23.3.1
platformdirs 4.1.0
protobuf 4.25.1
psutil 5.9.6
pyarrow 14.0.1
pyarrow-hotfix 0.6
pycparser 2.21
pycryptodome 3.19.0
pydantic 2.5.2
pydantic_core 2.14.5
pydub 0.25.1
Pygments 2.17.2
pyparsing 3.1.1
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3.post1
PyYAML 6.0.1
referencing 0.32.0
regex 2023.10.3
requests 2.31.0
rich 13.7.0
rouge-chinese 1.0.3
rpds-py 0.13.2
safetensors 0.4.1
scipy 1.11.4
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 68.2.2
shtab 1.6.5
simplejson 3.19.2
six 1.16.0
sniffio 1.3.0
sortedcontainers 2.4.0
sse-starlette 1.8.2
starlette 0.27.0
sympy 1.12
tiktoken 0.5.2
tokenizers 0.15.0
tomli 2.0.1
toolz 0.12.0
torch 2.1.2
tqdm 4.66.1
transformers 4.36.1
triton 2.1.0
trl 0.7.4
typing_extensions 4.9.0
tyro 0.6.0
tzdata 2023.3
urllib3 2.1.0
uvicorn 0.24.0.post1
websockets 11.0.3
wheel 0.41.2
xxhash 3.4.1
yapf 0.40.2
yarl 1.9.4
zipp 3.17.0
| open | 2023-12-16T11:51:07Z | 2024-12-24T16:45:52Z | https://github.com/huggingface/datasets/issues/6505 | [] | yirenpingsheng | 7 |
openapi-generators/openapi-python-client | rest-api | 138 | Generated code style | Would you consider taking extra care to follow style conventions in the generated code?
I noticed a few things, like [spaces at the beginning and end of docstrings](https://github.com/triaxtec/openapi-python-client/blob/main/openapi_python_client/templates/endpoint_module.pyi#L45), or an [extra blank line after a function return type annotation](https://github.com/triaxtec/openapi-python-client/blob/main/openapi_python_client/templates/endpoint_macros.pyi#L54).
I can send small PRs with style-fixes if you want 🙂 | closed | 2020-08-06T15:09:46Z | 2020-09-26T15:19:28Z | https://github.com/openapi-generators/openapi-python-client/issues/138 | [
"✨ enhancement",
"👋 good first issue"
] | pawamoy | 5 |
DistrictDataLabs/yellowbrick | scikit-learn | 398 | Manifold Feature Engineering | 
Currently we have a t-SNE visualizer for text, but we can create a general manifold learning visualizer for projecting high dimensional data into 2 dimensions that respect non-linear effects (unlike our current decomposition methods.
### Proposal/Issue
The visualizer would take as hyperparameters:
- The color space of the original data (either by class or more specifically for each point)
- The manifold method (string or estimator)
It would be fit to training data.
The visualizer would display the representation in 2D space, as well as the training time and any other associated metrics.
### Code Snippet
- Code snippet found here: [Plot Compare Manifold Methods](http://scikit-learn.org/stable/auto_examples/manifold/plot_compare_methods.html)
### Background
- [Comparison of Manifold Algorithms (sklearn docs)](http://scikit-learn.org/stable/modules/manifold.html#manifold)
This investigation started with self-organizing maps (SOMS) visualization:
- http://blog.yhat.com/posts/self-organizing-maps-2.html
- https://stats.stackexchange.com/questions/210446/how-does-one-visualize-the-self-organizing-map-of-n-dimensional-data
| closed | 2018-05-12T15:09:31Z | 2018-05-18T01:04:03Z | https://github.com/DistrictDataLabs/yellowbrick/issues/398 | [
"type: feature",
"priority: low",
"level: intermediate"
] | bbengfort | 1 |
Avaiga/taipy | data-visualization | 2,470 | [🐛 BUG] Data Node Selector does not display nodes with CYCLE Scope | ### What went wrong? 🤔
When I create a Scenario with Cycles (that is, I add Scope.CYCLE to certain Data Node configuration objects + I add a Frequency to the Scenario), I can't see the Data Node in the Data Node selector. I see the GLOBAL and the SCENARIO Data Nodes, but not those with Scope CYCLE.
### Expected Behavior
I would expect to see all Data Nodes, including those with Scope set to CYCLE.
### Steps to Reproduce Issue
This code shows the issue:
```python
import datetime as dt
import taipy as tp
import taipy.gui.builder as tgb
from taipy import Config, Frequency, Gui, Scope
def add_three(a, b, c):
return a + b + c
a_node_config = Config.configure_data_node(id="a", default_data=1, scope=Scope.GLOBAL)
b_node_config = Config.configure_data_node(id="b", default_data=2, scope=Scope.CYCLE)
c_node_config = Config.configure_data_node(id="c", default_data=3, scope=Scope.SCENARIO)
result_node_config = Config.configure_data_node(id="result", scope=Scope.SCENARIO)
add_three_scenario_task = Config.configure_task(
id="add_three",
function=add_three,
input=[a_node_config, b_node_config, c_node_config],
output=result_node_config,
)
add_three_scenario_config = Config.configure_scenario(
id="scenario",
task_configs=add_three_scenario_task,
frequency=Frequency.MONTHLY,
)
with tgb.Page() as page:
tgb.text("# Data Node selector does not show Cycle Data Nodes", mode="md")
tgb.data_node_selector()
if __name__ == "__main__":
tp.Orchestrator().run()
scenario = tp.create_scenario(add_three_scenario_config)
scenario.submit()
gui = Gui(page=page)
gui.run(
title="test data node selector",
use_reloader=True,
)
```
### Screenshots

### Runtime Environment
Windows 10
### Browsers
Brave
### OS
Windows
### Version of Taipy
4.0.2
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2025-03-03T07:53:17Z | 2025-03-04T12:04:03Z | https://github.com/Avaiga/taipy/issues/2470 | [
"Core",
"🟥 Priority: Critical",
"🖰 GUI",
"💥Malfunction"
] | enarroied | 3 |
axnsan12/drf-yasg | django | 319 | Swagger-codegen params order is changing with every update on the backend breaking the frontend | When using swagger-codegen to generate typescript-angular swagger the params order is changed thus breaking my frontend application.

| closed | 2019-02-21T12:17:37Z | 2019-03-09T19:38:25Z | https://github.com/axnsan12/drf-yasg/issues/319 | [
"bug"
] | aldokkani | 12 |
sherlock-project/sherlock | python | 1,750 | Sas | M | closed | 2023-03-16T05:41:10Z | 2023-03-16T14:47:36Z | https://github.com/sherlock-project/sherlock/issues/1750 | [] | Dodsad | 0 |
fastapi-users/fastapi-users | fastapi | 171 | Swagger issue for endpoints register & update | Hi,
First of all, great job. It's a very useful library.
However, after having setup my project. I noticed a few issues in the generated Swagger documentation. Indeed, the request body is pre-filled with the following information:
```
{
"id": "string",
"email": "user@example.com",
"is_active": true,
"is_superuser": false,
"password": "string"
}
```
However, according to your documentation, only the fields `email` & `password` are required. It can lead to some misunderstandings for someone wanting to use the API for the first time since the Swagger (or redoc) should describe how to use the API.
I think it's a cheap fix that can be very useful for when you'll find a solution for adding auth in the Swagger. Indeed, after having had a look at your code, one solution could be to make the models `BaseUserCreate` and `BaseUserUpdate` not to inherit from `BaseUser` but `BaseModel` instead.
Looking forward to hearing from you :)
| closed | 2020-04-30T10:15:08Z | 2021-03-25T21:16:07Z | https://github.com/fastapi-users/fastapi-users/issues/171 | [
"enhancement"
] | anancarv | 11 |
saulpw/visidata | pandas | 1,841 | Keystroke ] not detected on Windows | In Powershell and cmd.exe I encountered that sorting didn't work in both orders. The `[` shortcut was detected and had its effect, but the `]` didn't. I narrowed it down to a problem with `windows-curses`, and in turn with its dependency `PDCurses`: https://github.com/zephyrproject-rtos/windows-curses/issues/41
Here's my plan on how to address it. I hope I'll get around to it somewhere next week.
- [ ] Improve the mapping in `PDCurses` and submit a pull request
- [ ] Bump the git submodule in `windows-curses` to the `PDCurses` version that has the fix and ask/wait for a release of this package
- [ ] Address the issue in this repository, perhaps by pinning `windows-curses` to a version of at least the newly released package.
I'm making this issue here just to document it and track progress. If you're reading this because you have this issue, I would recommend using WSL instead. (WSL is not an option for me unfortunately).
I didn't include the `.vd`-file to reproduce this issue. The simplest way to reproduce it is to get a Windows computer, run `visidata` from Powershell or cmd.exe and sort any column by pressing `]`. | closed | 2023-04-06T07:26:41Z | 2024-11-13T06:23:05Z | https://github.com/saulpw/visidata/issues/1841 | [
"bug",
"fixed",
"windows"
] | bartbroere | 5 |
matplotlib/mplfinance | matplotlib | 264 | live update of the chart | I have data streaming through API calls to Alpaca and it is real time stock market data, it is using the code below to get data, and "on_message" event triggers I parse the data to pandas dataframe object dfObj, then plot the candlestick chart. Now the issue is the chart will need to be closed manually in order for it to continue execute the next "on message event", anyway to update the chart? and continue without plotting a new one?
--------------------------------------------
ws = websocket.WebSocketApp("wss://socket.polygon.io/stocks",
on_message = on_message,
on_error = on_error,
on_close = on_close)
ws.on_open = on_open
ws.run_forever()
------------------------------------------------------------
```python
import mplfinance as mpf
def on_message(ws, message):
# code to parse message and add data to dfObj .....
mpf.plot(dfObj, type='candle')
```
the code above just fragments | closed | 2020-09-16T20:20:48Z | 2024-02-17T12:22:29Z | https://github.com/matplotlib/mplfinance/issues/264 | [
"question"
] | gchudublin | 35 |
lundberg/respx | pytest | 11 | Rebrand | Rename to `respx` for shorter and more alike `httpx`. | closed | 2019-11-16T11:10:19Z | 2019-11-16T11:39:58Z | https://github.com/lundberg/respx/issues/11 | [] | lundberg | 0 |
2noise/ChatTTS | python | 135 | 文字多了会乱套 | 会跳跃性的阅读,而且语言会稀里糊涂的发声 | closed | 2024-05-31T08:16:05Z | 2024-07-17T04:01:31Z | https://github.com/2noise/ChatTTS/issues/135 | [
"stale"
] | cgk100 | 2 |
aiogram/aiogram | asyncio | 1,500 | Significant Response Delay After Idle Period in Version 3 | ### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
Oracle Linux 7
### Python version
3.9
### aiogram version
3.6.0
### Expected behavior
The bot should respond promptly without significant delay, even after being idle.
### Current behavior
The bot responds with a significant delay after being idle for more than 15 seconds.

### Steps to reproduce
Deploy a standard bot on a server.
Allow the bot to be idle for more than 15 seconds.
Send a message to the bot.
Observe the delay in the bot's response.
### Code example
```python3
import os
import sys
import asyncio
import logging
from aiogram import Bot, types
from aiogram import Dispatcher
from aiogram.client.default import DefaultBotProperties
from aiogram.enums import ParseMode
TOKEN = os.getenv('TOKEN')
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s [%(name)s] - %(message)s",
stream=sys.stdout,
force=True,
)
logger = logging.getLogger(__name__)
dp = Dispatcher()
@dp.message()
async def echo(message: types.Message):
logger.info('Handler Start')
await message.answer(message.text)
logger.info('Handler End')
async def main():
bot = Bot(
token=TOKEN,
default=DefaultBotProperties(parse_mode=ParseMode.HTML)
)
await dp.start_polling(bot, skip_updates=True)
if __name__ == '__main__':
try:
logger.info("Starting bot")
asyncio.run(main())
except (KeyboardInterrupt, SystemExit):
logger.info("Bot stopped!")
```
### Logs
_No response_
### Additional information
This issue is not present in version 2.x. | closed | 2024-05-31T07:21:37Z | 2024-07-31T04:16:15Z | https://github.com/aiogram/aiogram/issues/1500 | [
"bug",
"confirmed"
] | askhat-spec | 12 |
jonaswinkler/paperless-ng | django | 378 | Make dashboard page a true dashboard for productivity? | Hi, first I really like the dashboard / intro page as it's pretty informative for first-start users.
But if you work a lot with the application, I noticed, that it doesn't help me to do work, so I only skip it.
Instead, it could help me as a personal entrypoint, similar like it shows already the installation stats
Some ideas for (often used) functions:
* tagcloud
* saved search views (maybe also as inline list?)
* new in inbox (unprocessed -> need to be checked)
If we allow users to hide the first steps, there might be enough space to show up this personalized content. | open | 2021-01-17T20:53:33Z | 2022-08-01T17:56:48Z | https://github.com/jonaswinkler/paperless-ng/issues/378 | [
"feature request"
] | Matthias84 | 10 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 467 | Could not find Source.Txt file in dataset | Hi
In ENCODER, speaker.py reads source.txt from data set which is not found. When i run the train loop , it is showing me error:
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/shivani/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/home/shivani/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/shivani/Projects/Real-Time-Voice-Cloning/encoder/data_objects/speaker_verification_dataset.py", line 55, in collate
return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames)
File "/home/shivani/Projects/Real-Time-Voice-Cloning/encoder/data_objects/speaker_batch.py", line 8, in __init__
self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
File "/home/shivani/Projects/Real-Time-Voice-Cloning/encoder/data_objects/speaker_batch.py", line 8, in <dictcomp>
self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
File "/home/shivani/Projects/Real-Time-Voice-Cloning/encoder/data_objects/speaker.py", line 34, in random_partial
self._load_utterances()
File "/home/shivani/Projects/Real-Time-Voice-Cloning/encoder/data_objects/speaker.py", line 14, in _load_utterances
with self.root.joinpath("_sources.txt").open("r") as sources_file:
File "/home/shivani/anaconda3/lib/python3.7/pathlib.py", line 1203, in open
opener=self._opener)
File "/home/shivani/anaconda3/lib/python3.7/pathlib.py", line 1058, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'LibriSpeech/dev-clean/_sources.txt'
Please do an urgent help. | closed | 2020-08-04T13:10:13Z | 2020-08-07T07:27:30Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/467 | [] | Tayal-S | 1 |
horovod/horovod | pytorch | 3,275 | Cannot install horovod[spark] for Tensorflow 2.6 | **Environment:**
1. Framework: TensorFlow
2. Framework version:2.6.2
3. Horovod version: 0.23
4. MPI version:4.1.1
5. CUDA version:N/A
6. NCCL version:N/A
7. Python version: 3.7
8. Spark / PySpark version: 2.4.5
9. Ray version:N/A
10. OS and version: RHEL 8.4
11. GCC version: 9.3.0
12. CMake version: 3.5.0
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? N/A
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? N/A
4. Did you check if you question is answered in the [troubleshooting guide] (https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
```
Installing collected packages: pyparsing, pycparser, pyzmq, pyyaml, pyarrow, psutil, packaging, future, fsspec, diskcache, dill, cloudpickle, cffi, petastorm, horovod, h5py
Attempting uninstall: h5py
Found existing installation: h5py 3.1.0
Uninstalling h5py-3.1.0:
Successfully uninstalled h5py-3.1.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.6.2 requires h5py~=3.1.0, but you have h5py 2.10.0 which is incompatible.
```
**Reproduce Steps:**
1. `conda create -n horovod python=3.7`
2. `conda activate horovod`
3. `conda install pyspark=2.4.5 openmpi-mpicc cmake -c conda-forge`
4. `pip install tensorflow==2.6.2`
5. `HOROVOD_WITH_MPI=1 HOROVOD_WITH_TENSORFLOW=1 pip install horovod[spark]`
| closed | 2021-11-16T01:15:17Z | 2022-03-02T21:40:46Z | https://github.com/horovod/horovod/issues/3275 | [
"bug"
] | LifengWang | 8 |
Layout-Parser/layout-parser | computer-vision | 99 | pip3 install detectron2 | I'm trying to run a Docker file and install layoutparser using the following pip command
RUN pip3 install layoutparser torchvision && pip install "git+https://github.com/facebookresearch/detectron2.git@v0.5#egg=detectron2"
I get the following error message back
`#15 354.2 aarch64-linux-gnu-gcc: fatal error: Killed signal terminated program cc1plus
#15 354.2 compilation terminated.
#15 354.2 error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
#15 354.2 ----------------------------------------
#15 354.2 ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-j9rv61td/detectron2_679fb568038548bf8b387f71e68646a7/setup.py'"'"'; __file__='"'"'/tmp/pip-install-j9rv61td/detectron2_679fb568038548bf8b387f71e68646a7/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-t9fgp6nd/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.8/detectron2 Check the logs for full command output.
------
executor failed running [/bin/sh -c pip3 install layoutparser torchvision && pip install "git+https://github.com/facebookresearch/detectron2.git@v0.5#egg=detectron2"]: exit code: 1`
Can you advise what I am doing wrong an how I can go about resolving it? | open | 2021-11-09T16:43:41Z | 2021-11-09T16:43:41Z | https://github.com/Layout-Parser/layout-parser/issues/99 | [
"bug"
] | solidHeroLink | 0 |
OFA-Sys/Chinese-CLIP | nlp | 38 | 请问VG的中文数据集从哪里获取? | closed | 2023-01-12T07:54:43Z | 2023-02-04T08:28:30Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/38 | [] | coni-coco | 1 |
|
vaexio/vaex | data-science | 2,046 | Error installing vaex on win10 | Error installing vaex on win10
**Description**
I try to install vaex on **windows 10**, **amd64** cpu inside a **venv**.
with command:
- `pip install vaex`
- `pip install vaex-core vaex-viz vaex-jupyter vaex-server vaex-hdf5 vaex-astro vaex-ml`
with the same problem.
**Software information**
- Vaex version : vaex-4.9.1
- Vaex was installed via: pip
- OS: windows 10
- Python version: 3.10
- CPU: amd64
- vsbuildtools c++ installed
**Error**
`Building wheels for collected packages: vaex-core
Building wheel for vaex-core (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for vaex-core (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [260 lines of output]
setup.py:4: DeprecationWarning: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses
import imp
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-310
creating build\lib.win-amd64-cpython-310\vaex
copying vaex\agg.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\array_types.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\asyncio.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\benchmark.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\cache.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\column.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\config.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\convert.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\cpu.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataframe.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataframe_protocol.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset_misc.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset_mmap.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\dataset_utils.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\datatype.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\datatype_test.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\delayed.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\docstrings.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\encoding.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\events.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\execution.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\export.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\expression.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\expresso.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\formatting.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\functions.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\geo.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\grids.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\groupby.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\hash.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\image.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\itertools.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\join.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\json.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\kld.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\legacy.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\logging.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\memory.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\meta.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\metal.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\misc_cmdline.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\multiprocessing.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\multithreading.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\parallelize.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\progress.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\promise.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\registry.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\rolling.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\samp.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\schema.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\scopes.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\selections.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\serialize.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\settings.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\shift.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\stat.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\strings.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\struct.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\tasks.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\utils.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\version.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\_version.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\__init__.py -> build\lib.win-amd64-cpython-310\vaex
copying vaex\__main__.py -> build\lib.win-amd64-cpython-310\vaex
package init file 'vaex\arrow\__init__.py' not found (or not a regular file)
creating build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\convert.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\dataset.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\numpy_dispatch.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\opener.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\utils.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\utils_test.py -> build\lib.win-amd64-cpython-310\vaex\arrow
copying vaex\arrow\_version.py -> build\lib.win-amd64-cpython-310\vaex\arrow
creating build\lib.win-amd64-cpython-310\vaex\core
copying vaex\core\_version.py -> build\lib.win-amd64-cpython-310\vaex\core
copying vaex\core\__init__.py -> build\lib.win-amd64-cpython-310\vaex\core
creating build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\asyncio.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\cache.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\column.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\gcs.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3arrow.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3fs.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\s3_test.py -> build\lib.win-amd64-cpython-310\vaex\file
copying vaex\file\__init__.py -> build\lib.win-amd64-cpython-310\vaex\file
creating build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\all.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\cmodule.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\dataset.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\expresso.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\misc.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\plot.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\ui.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\__init__.py -> build\lib.win-amd64-cpython-310\vaex\test
copying vaex\test\__main__.py -> build\lib.win-amd64-cpython-310\vaex\test
creating build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\bokeh.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\common.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\ipyvolume.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\jprops.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\readcol.py -> build\lib.win-amd64-cpython-310\vaex\ext
copying vaex\ext\__init__.py -> build\lib.win-amd64-cpython-310\vaex\ext
creating build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\expressions.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\ordereddict.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\pandawrap.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\parallelize.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\progressbar.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\samp.py -> build\lib.win-amd64-cpython-310\vaex\misc
copying vaex\misc\__init__.py -> build\lib.win-amd64-cpython-310\vaex\misc
creating build\lib.win-amd64-cpython-310\vaex\datasets
copying vaex\datasets\__init__.py -> build\lib.win-amd64-cpython-310\vaex\datasets
running egg_info
writing vaex_core.egg-info\PKG-INFO
writing dependency_links to vaex_core.egg-info\dependency_links.txt
writing entry points to vaex_core.egg-info\entry_points.txt
writing requirements to vaex_core.egg-info\requires.txt
writing top-level names to vaex_core.egg-info\top_level.txt
reading manifest file 'vaex_core.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.c' under directory 'vendor'
warning: no files found matching '*.h' under directory 'src'
warning: no files found matching '*.c' under directory 'src'
adding license file 'LICENSE.txt'
writing manifest file 'vaex_core.egg-info\SOURCES.txt'
copying vaex\datasets\iris.hdf5 -> build\lib.win-amd64-cpython-310\vaex\datasets
copying vaex\datasets\titanic.hdf5 -> build\lib.win-amd64-cpython-310\vaex\datasets
running build_ext
building 'vaex.vaexfast' extension
creating build\temp.win-amd64-cpython-310
creating build\temp.win-amd64-cpython-310\Release
creating build\temp.win-amd64-cpython-310\Release\src
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\wolvi\AppData\Local\Temp\pip-build-env-mequ5492\overlay\Lib\site-packages\numpy\core\include -IC:\Users\wolvi\Desktop\RICERCA\venv\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\vaexfast.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src\vaexfast.obj /EHsc
vaexfast.cpp
src\vaexfast.cpp(18): warning C4005: 'INFINITY': ridefinizione macro
C:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt\corecrt_math.h(88): note: vedere la precedente definizione di 'INFINITY'
C:\Users\wolvi\AppData\Local\Temp\pip-build-env-mequ5492\overlay\Lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(14) : Warning Msg: Using deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
src\vaexfast.cpp(201): warning C4244: 'argomento': conversione da '__int64' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(532): warning C4244: 'argomento': conversione da '__int64' a 'const int'. Possibile perdita di dati.
src\vaexfast.cpp(956): warning C4244: '=': conversione da 'Py_ssize_t' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(1798): warning C4244: 'argomento': conversione da '__int64' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(1798): warning C4244: 'argomento': conversione da '__int64' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(64): warning C4244: '=': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(198): note: vedere il riferimento all'istanza 'void object_to_numpy1d_nocopy<double>(T *&,PyObject *,__int64 &,int &,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=double
]
src\vaexfast.cpp(88): warning C4244: '=': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(280): note: vedere il riferimento all'istanza 'void object_to_numpy1d_nocopy_endian<double>(T *&,PyObject *,__int64 &,bool &,int &,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=double
]
src\vaexfast.cpp(105): warning C4244: 'inizializzazione': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(644): note: vedere il riferimento all'istanza 'void object_to_numpy2d_nocopy<double>(T *&,PyObject *,int &,int &,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=double
]
src\vaexfast.cpp(108): warning C4244: 'inizializzazione': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(667): warning C4244: 'inizializzazione': conversione da 'const double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(775): note: vedere il riferimento all'istanza 'void histogram2d_f4<__int64>(const float *__restrict const ,const float *__restrict const ,const float *const ,const __int64,bool,bool,bool,Tout *__restrict const ,const int,const int,const double,const double,const double,const double,const __int64,const __int64)' della funzione modello di cui Š in corso la compilazione
with
[
Tout=__int64
]
src\vaexfast.cpp(667): warning C4244: 'inizializzazione': conversione da 'const double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(668): warning C4244: 'inizializzazione': conversione da 'const double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(668): warning C4244: 'inizializzazione': conversione da 'const double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(669): warning C4244: 'inizializzazione': conversione da 'const double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(669): warning C4244: 'inizializzazione': conversione da 'const double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(670): warning C4244: 'inizializzazione': conversione da 'const double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(670): warning C4244: 'inizializzazione': conversione da 'const double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(671): warning C4244: 'inizializzazione': conversione da 'double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(671): warning C4244: 'inizializzazione': conversione da 'double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(672): warning C4244: 'inizializzazione': conversione da 'double' a 'float'. Possibile perdita di dati.
src\vaexfast.cpp(672): warning C4244: 'inizializzazione': conversione da 'double' a 'const float'. Possibile perdita di dati.
src\vaexfast.cpp(133): warning C4244: 'inizializzazione': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(887): note: vedere il riferimento all'istanza 'void object_to_numpy3d_nocopy<double>(T *&,PyObject *,int &,int &,int &,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=double
]
src\vaexfast.cpp(136): warning C4244: 'inizializzazione': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(139): warning C4244: 'inizializzazione': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(174): warning C4244: '=': conversione da 'npy_intp' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(983): note: vedere il riferimento all'istanza 'void object_to_numpyNd_nocopy<double>(T *&,PyObject *,int,int &,int *,__int64 *,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=double
]
src\vaexfast.cpp(1335): warning C4244: '=': conversione da 'Py_ssize_t' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(2072): note: vedere il riferimento all'istanza 'PyObject *statisticNd_<double,NPY_DOUBLE>(PyObject *,PyObject *)' della funzione modello di cui Š in corso la compilazione
src\vaexfast.cpp(1338): warning C4244: '=': conversione da 'Py_ssize_t' a 'int'. Possibile perdita di dati.
src\vaexfast.cpp(1149): warning C4244: 'inizializzazione': conversione da 'double' a 'T'. Possibile perdita di dati.
with
[
T=float
]
src\vaexfast.cpp(1271): note: vedere il riferimento all'istanza 'void statisticNd<T,op_add1<T,double,endian>,endian>(const T *__restrict const [],const T *__restrict const [],__int64,const int,const int,double *__restrict const ,const __int64 *__restrict const ,const int *__restrict const ,const T *__restrict const ,const T *__restrict const ,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=float,
endian=functor_double_to_native
]
src\vaexfast.cpp(1308): note: vedere il riferimento all'istanza 'void statisticNd_wrap_template_endian<T,functor_double_to_native>(const T *const [],const T *const [],__int64,int,int,double *,__int64 [],int [],T [],T [],int,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=float
]
src\vaexfast.cpp(1402): note: vedere il riferimento all'istanza 'void statisticNd_wrap_template<T>(const T *const [],const T *const [],__int64,int,int,double *,__int64 [],int [],T [],T [],bool,int,int)' della funzione modello di cui Š in corso la compilazione
with
[
T=float
]
src\vaexfast.cpp(2073): note: vedere il riferimento all'istanza 'PyObject *statisticNd_<float,NPY_FLOAT>(PyObject *,PyObject *)' della funzione modello di cui Š in corso la compilazione
src\vaexfast.cpp(1178): warning C4244: 'inizializzazione': conversione da 'double' a 'T'. Possibile perdita di dati.
with
[
T=float
]
src\vaexfast.cpp(1198): warning C4244: 'inizializzazione': conversione da 'double' a 'T'. Possibile perdita di dati.
with
[
T=float
]
src\vaexfast.cpp(1216): warning C4244: 'inizializzazione': conversione da 'double' a 'T'. Possibile perdita di dati.
with
[
T=float
]
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\link.exe" /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:C:\Users\wolvi\Desktop\RICERCA\venv\libs /LIBPATH:C:\Users\wolvi\AppData\Local\Programs\Python\Python310\libs /LIBPATH:C:\Users\wolvi\AppData\Local\Programs\Python\Python310 /LIBPATH:C:\Users\wolvi\Desktop\RICERCA\venv\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\lib\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.19041.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.19041.0\um\x64" /EXPORT:PyInit_vaexfast build\temp.win-amd64-cpython-310\Release\src\vaexfast.obj /OUT:build\lib.win-amd64-cpython-310\vaex\vaexfast.cp310-win_amd64.pyd /IMPLIB:build\temp.win-amd64-cpython-310\Release\src\vaexfast.cp310-win_amd64.lib
Creazione della libreria build\temp.win-amd64-cpython-310\Release\src\vaexfast.cp310-win_amd64.lib e dell'oggetto build\temp.win-amd64-cpython-310\Release\src\vaexfast.cp310-win_amd64.exp
Generazione codice in corso...
Generazione codice terminata
building 'vaex.superstrings' extension
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\wolvi\AppData\Local\Temp\pip-build-env-mequ5492\overlay\Lib\site-packages\numpy\core\include -Ivendor/pybind11/include -Ivendor/pybind11/include -Ivendor/string-view-lite/include -Ivendor/boost -IC:\Users\wolvi\Desktop\RICERCA\venv\include -IC:\Users\wolvi\Desktop\RICERCA\venv\Library\include -Ivendor\pcre\Library\include -IC:\Users\wolvi\Desktop\RICERCA\venv\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\string_utils.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src\string_utils.obj /EHsc
string_utils.cpp
C:\Users\wolvi\AppData\Local\Temp\pip-install-oz16ctc3\vaex-core_234b08d7a5484e2aacaa3951062cdba9\src\string_utils.hpp(208): warning C4244: '=': conversione da 'char32_t' a 'char'. Possibile perdita di dati.
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\wolvi\AppData\Local\Temp\pip-build-env-mequ5492\overlay\Lib\site-packages\numpy\core\include -Ivendor/pybind11/include -Ivendor/pybind11/include -Ivendor/string-view-lite/include -Ivendor/boost -IC:\Users\wolvi\Desktop\RICERCA\venv\include -IC:\Users\wolvi\Desktop\RICERCA\venv\Library\include -Ivendor\pcre\Library\include -IC:\Users\wolvi\Desktop\RICERCA\venv\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\include -IC:\Users\wolvi\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\strings.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src\strings.obj /EHsc
strings.cpp
vendor/pybind11/include\pybind11/numpy.h(35): error C2065: 'ssize_t': identificatore non dichiarato
vendor/pybind11/include\pybind11/numpy.h(35): error C2338: ssize_t != Py_intptr_t
C:\Users\wolvi\AppData\Local\Temp\pip-install-oz16ctc3\vaex-core_234b08d7a5484e2aacaa3951062cdba9\src\string_utils.hpp(208): warning C4244: '=': conversione da 'char32_t' a 'char'. Possibile perdita di dati.
vendor\pcre\Library\include\pcrecpp.h(701): warning C4251: 'pcrecpp::RE::pattern_': class 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>' deve avere un'interfaccia dll per essere utilizzata dai client di class 'pcrecpp::RE'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include\xstring(4905): note: vedere la dichiarazione di 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>'
src\strings.cpp(273): warning C4018: '>': errata corrispondenza tra signed e unsigned
src\strings.cpp(282): warning C4018: '>': errata corrispondenza tra signed e unsigned
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for vaex-core
Failed to build vaex-core
ERROR: Could not build wheels for vaex-core, which is required to install pyproject.toml-based projects
`
| open | 2022-05-08T06:57:50Z | 2022-05-08T13:16:45Z | https://github.com/vaexio/vaex/issues/2046 | [] | automataIA | 2 |
StackStorm/st2 | automation | 5,460 | Include PySocks package in st2client module. | ## SUMMARY
By default, the st2client cli doesn't support SOCKS proxy connection using `HTTP_PROXY`, `HTTPS_PROXY` because it lacks the `pysocks` pypi package.
### STACKSTORM VERSION
```
❯ st2 --version
st2 3.5.0, on Python 3.8.12
```
### OS, environment, install method
Client: MacOS Big Sur, Python 3.8
Stackstorm: Ubuntu 18.04, Python 3.6
## Steps to reproduce the problem
Attempt to use the st2 cli with `HTTP_PROXY`, `HTTPS_PROXY` set.
## Expected Results
No errors, and able to query the stackstorm server.
## Actual Results
Currently get the following error.
```
ERROR: Missing dependencies for SOCKS support.
```
To work around this for now, I install pysocks on my virtual environment where st2client is installed `pip install pysocks==1.7.1`.
| closed | 2021-11-26T16:38:32Z | 2022-01-12T00:18:58Z | https://github.com/StackStorm/st2/issues/5460 | [
"enhancement"
] | kingsleyadam | 4 |
lux-org/lux | jupyter | 341 | [BUG] How to convert <class 'pandas.core.frame.DataFrame'> to Lux dataframe | Hello,
Is there any method to explicitly convert pandas dataframe to lux dataframe. Because when I'm trying to convert pandas dataframe df.save_to_html then pandas dataframe is not supporting. I'm getting error as : AttributeError: 'DataFrame' object has no attribute 'save_as_html'.
Please help on this issue. Thank you. | closed | 2021-04-05T17:32:49Z | 2021-04-05T17:56:31Z | https://github.com/lux-org/lux/issues/341 | [] | arjunko | 2 |
babysor/MockingBird | deep-learning | 813 | ValueError("loaded state dict contains a parameter group " 使用別人的訓練合成器延續訓練下去都會出錯。 | D:\MockingBird-0.0.1\MockingBird-0.0.1>python synthesizer_train.py CZC D:\Down\Ai\SV2TTS\synthesizer
Arguments:
run_id: CZC
syn_dir: D:\Down\Ai\SV2TTS\synthesizer
models_dir: synthesizer/saved_models/
save_every: 1000
backup_every: 25000
log_every: 200
force_restart: False
hparams:
Checkpoint path: synthesizer\saved_models\CZC\CZC.pt
Loading training data from: D:\Down\Ai\SV2TTS\synthesizer\train.txt
Using model: Tacotron
Using device: cuda
Initialising Tacotron Model...
Trainable Parameters: 31.948M
Loading weights at synthesizer\saved_models\CZC\CZC.pt
Traceback (most recent call last):
File "D:\MockingBird-0.0.1\MockingBird-0.0.1\synthesizer_train.py", line 37, in <module>
train(**vars(args))
File "D:\MockingBird-0.0.1\MockingBird-0.0.1\synthesizer\train.py", line 114, in train
model.load(weights_fpath, optimizer)
File "D:\MockingBird-0.0.1\MockingBird-0.0.1\synthesizer\models\tacotron.py", line 526, in load
optimizer.load_state_dict(checkpoint["optimizer_state"])
File "C:\Users\chen7\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim\optimizer.py", line 201, in load_state_dict
raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
已參考#209 #37 每個訓練合成器也都試過了不是出現不匹配就是出現RuntimeError: Error(s) in loading state_dict for Tacotron
V.0.0.1跟新的兩個版本也都用過,都是一樣不行
| open | 2023-01-08T16:27:41Z | 2023-05-25T08:18:58Z | https://github.com/babysor/MockingBird/issues/813 | [] | hanjidont | 2 |
gevent/gevent | asyncio | 1,453 | why does my gevent run slowly than the normal program ? | Python Version : 3.7.3
IDE : pycharm
problems :
I make a speed testing. I am trying to read the local file whihc are more then 2000 , it took for 6.5 seconds when i use the gevent moulde, another took for 2.8 seconds when normally i used 'for in'.
I want to know why does not gevent raise efficiency for I/O.
| closed | 2019-08-27T05:23:59Z | 2021-11-19T01:26:21Z | https://github.com/gevent/gevent/issues/1453 | [
"Type: Question"
] | allwell997 | 17 |
allenai/allennlp | data-science | 5,548 | Set different Cache Directory for the Predictor.from_path api | Hi all,
I am using Dataiku platform for my project developement and there I need allennlp in my pipeline.
But while using the **Predictor.from_path** api, I am basically facing a Permission Denied issue, as Dataiku is not allowing to create the CACHE_ROOT directory ".allennlp" under its root folder. Please see the below error.
PermissionError Traceback (most recent call last)
<ipython-input-9-7c48c0dd7567> in <module>
----> 1 predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/bidaf-elmo.2021-02-11.tar.gz")
~/code-env/lib/python3.7/site-packages/allennlp/predictors/predictor.py in from_path(cls, archive_path, predictor_name, cuda_device, dataset_reader_to_load, frozen, import_plugins, overrides, **kwargs)
364 plugins.import_plugins()
365 return Predictor.from_archive(
--> 366 load_archive(archive_path, cuda_device=cuda_device, overrides=overrides),
367 predictor_name,
368 dataset_reader_to_load=dataset_reader_to_load,
~/code-env/lib/python3.7/site-packages/allennlp/models/archival.py in load_archive(archive_file, cuda_device, overrides, weights_file)
204 """
205 # redirect to the cache, if necessary
--> 206 resolved_archive_file = cached_path(archive_file)
207
208 if resolved_archive_file == archive_file:
~/code-env/lib/python3.7/site-packages/allennlp/common/file_utils.py in cached_path(url_or_filename, cache_dir, extract_archive, force_extract)
135 cache_dir=cache_dir or CACHE_DIRECTORY,
136 extract_archive=extract_archive,
--> 137 force_extract=force_extract,
138 )
139
~/code-env/lib/python3.7/site-packages/cached_path/_cached_path.py in cached_path(url_or_filename, cache_dir, extract_archive, force_extract)
119 cache_dir = cache_dir if cache_dir else get_cache_dir()
120 cache_dir = os.path.expanduser(cache_dir)
--> 121 os.makedirs(cache_dir, exist_ok=True)
122
123 if not isinstance(url_or_filename, str):
~/code-env/lib/python3.7/os.py in makedirs(name, mode, exist_ok)
211 if head and tail and not path.exists(head):
212 try:
--> 213 makedirs(head, exist_ok=exist_ok)
214 except FileExistsError:
215 # Defeats race condition when another thread created the path
~/code-env/lib/python3.7/os.py in makedirs(name, mode, exist_ok)
221 return
222 try:
--> 223 mkdir(name, mode)
224 except OSError:
225 # Cannot rely on checking for EEXIST, since the operating system
PermissionError: [Errno 13] Permission denied: '/opt/dataiku/.allennlp'
-----------------------------------------------------------------------------
So my question is, instead of the root folder, if I want to set any other folder as the CACHE_ROOT folder, by declaring it through Predictor.from_path api, then how should I do that? Please help me. | closed | 2022-01-24T08:24:13Z | 2022-02-01T06:27:26Z | https://github.com/allenai/allennlp/issues/5548 | [
"question"
] | ytiam | 6 |
microsoft/nni | data-science | 5,634 | if _name is not defined in HPO space, the experiment will not stop. | **Describe the issue**:
When testing nested sub-search-space in HPO, if _name is not defined, the experiment will not stop.
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: linux
- Server OS (for remote mode only):
- Python version: 3.7
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log: **RuntimeError: '_name' key is not found in this nested search space.**
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
command: `python test_hpo_space.py --space test.yaml`
test_hpo_space.py:
```
import nni
import numpy as np
import torch
import os
import logging
import random
import time
import argparse
import json
import yaml
nni.silence_stdout()
from nni.experiment import Experiment
def run_trial():
param = nni.get_next_parameter()
logging.info(f"param: {param}")
# time.sleep(1)
nni.report_final_result(random.random())
def main(space: dict):
experiment = Experiment("local")
experiment.config.trial_command = f"python test_hpo_space.py run_trial --space 123"
experiment.config.experiment_name = "HPO"
experiment.config.trial_code_directory = os.getcwd()
experiment.config.search_space = space
experiment.config.tuner.name = "Evolution"
experiment.config.tuner.class_args["optimize_mode"] = "maximize"
experiment.config.tuner.class_args["population_size"] = 60
experiment.config.max_trial_number = 60
experiment.config.trial_concurrency = 10
experiment.start(18189, debug=True, run_mode=nni.experiment.RunMode.Background)
try:
experiment._wait_completion()
except KeyboardInterrupt:
logging.warning("KeyboardInterrupt detected")
finally:
experiment.stop()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--space", type=str, required=True)
args, extra_args = parser.parse_known_args()
if "run_trial" in extra_args:
run_trial()
else:
space_file = args.space
try:
space = json.load(open(space_file))
except Exception:
with open(space_file, "r", encoding="utf-8") as f:
space = yaml.safe_load(f)
main(space)
```
test.yaml:
```
layer0:
_type: choice
_value:
- Empty
- kernel_size:
_type: choice
_value: [1, 2, 3, 5]
- _name: Max_pool
pooling_size:
_type: choice
_value: [2, 3, 5]
- _name: Avg_pool
pooling_size:
_type: choice
_value: [2, 3, 5]
```
| open | 2023-07-14T07:22:27Z | 2023-08-11T06:54:45Z | https://github.com/microsoft/nni/issues/5634 | [] | heibaidaolx123 | 0 |
nonebot/nonebot2 | fastapi | 3,067 | Plugin: nonebot-plugin-searchgames | ### PyPI 项目名
nonebot-plugin-searchgames
### 插件 import 包名
nonebot_plugin_searchgame
### 标签
[{"label":"Steam","color":"#ea5252"},{"label":"switch","color":"#ea5252"}]
### 插件配置项
_No response_ | closed | 2024-10-26T01:29:14Z | 2024-10-26T01:55:27Z | https://github.com/nonebot/nonebot2/issues/3067 | [
"Plugin"
] | NYAGO666 | 1 |
aiogram/aiogram | asyncio | 1,031 | [3.x] aiogram is looking for redis when aioredis is installed (fix imports) | ### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
macos 12.2.1 (21D62)
### Python version
3.9
### aiogram version
3.0.0b5
### Expected behavior
redis fsm storage works with aioredis
### Current behavior
redis fsm storage does not work with aioredis
### Steps to reproduce
install aiogram 3.0.0.b5
install aioredis `pip install aioredis`. Currently 2.0.1
create redis client `redis_client = Redis.from_url("redis://localhost:6379/3")`
create dispatcher `dp = Dispatcher(storage=RedisStorage(redis=redis_client))`
### Code example
```python3
from aiogram.fsm.storage.redis import RedisStorage
from aioredis.client import Redis
redis_client = Redis.from_url("redis://localhost:6379/3")
dp = Dispatcher(storage=RedisStorage(redis=redis_client))
```
### Logs
```sh
Traceback (most recent call last):
File "/Users/dev/projects/OWN/shopping_bot/src/bot/bot.py", line 6, in <module>
from aiogram.fsm.storage.redis import RedisStorage
File "/Users/dev/projects/OWN/shopping_bot/venv/lib/python3.9/site-packages/aiogram/fsm/storage/redis.py", line 5, in <module>
from redis.asyncio.client import Redis
ModuleNotFoundError: No module named 'redis'
```
### Additional information
imports failing in ../env/lib/python3.9/site-packages/aiogram/fsm/storage/redis.py
from redis.asyncio.client import Redis
from redis.asyncio.connection import ConnectionPool
from redis.asyncio.lock import Lock
from redis.typing import ExpiryT | closed | 2022-10-18T14:37:59Z | 2022-10-19T09:09:47Z | https://github.com/aiogram/aiogram/issues/1031 | [
"wontfix",
"3.x"
] | Stasyanz | 5 |
encode/httpx | asyncio | 2,888 | AsyncClient does not recognize a `cert` has been passed. | I'm capable of doing the following
```
$ curl https://foobar.com --cert /path/to/cert
```
However, when using a Session instance of `httpx.AsyncClient`, passing the cert gives me error from the server that no certs have been passed.
| closed | 2023-10-12T01:32:35Z | 2023-10-13T01:31:17Z | https://github.com/encode/httpx/issues/2888 | [] | achillesrasquinha | 1 |
jschneier/django-storages | django | 1,188 | No rename method on GoogleCloudStorage | I'm using the following to allow renaming:
```
@deconstruct.deconstructible
class GoogleCloudStorage(gcloud.GoogleCloudStorage):
def path(self, name) -> typing.AnyStr:
raise NotImplementedError()
def get_accessed_time(self, name) -> datetime.datetime:
raise NotImplementedError()
def rename(self, old_name: str, new_name: str) -> None:
blob = self.bucket.blob(old_name)
self.bucket.rename_blob(blob, new_name)
```
Requesting feature. | closed | 2022-10-26T10:29:33Z | 2024-04-21T01:25:07Z | https://github.com/jschneier/django-storages/issues/1188 | [] | wieczorek1990 | 1 |
open-mmlab/mmdetection | pytorch | 11,148 | During training of RTMDet loss_box and loss_mask are always 0 | **Describe the bug**
When trying to train an RTMDet model of any size using MMDetection on a COCO format dataset, during training the loss and loss_cls parameters will descend as normal, but the loss_box and loss_mask parameters start and stay at 0 for all of training. The model also does not produce any results during inference.
**Reproduction**
The exact training command: `tools/dist_train.sh configs/custom/rtmdet-ins-custom-s.py 2 --auto-scale-lr`
My config file:
```
_base_ = '../rtmdet/rtmdet-ins_s_8xb32-300e_coco.py'
dataset_type = 'CocoDataset'
data_root = '../../datasets/MyDataset/'
num_classes = 8
classes = ('Circular', 'Elliptical', 'Triangular', 'Quadrilateral', 'Polygonal', 'Capsule', 'Unique', 'Spheroid')
metainfo = {
'classes': ('Circular', 'Elliptical', 'Triangular', 'Quadrilateral', 'Polygonal', 'Capsule', 'Unique', 'Spheroid'),
'palette': [
(135, 206, 235),
(255, 192, 203),
(255, 218, 185),
(147, 112, 219),
(60, 179, 113),
(255, 165, 0),
(220, 20, 60),
(255, 255, 0)
]
}
train_dataloader = dict(
batch_size = 8,
num_workers = 10,
dataset = dict(
data_root=data_root,
metainfo=metainfo,
ann_file=data_root + '/annotations/instances_train.json',
data_prefix=dict(img=data_root + 'train/')
)
)
find_unused_parameters=True
val_dataloader = dict(
batch_size = 4,
num_workers = 10,
dataset = dict(
data_root=data_root,
metainfo=metainfo,
ann_file=data_root + '/annotations/instances_val.json',
data_prefix=dict(img=data_root + 'val/')
)
)
test_dataloader = val_dataloader
val_evaluator = dict(ann_file=data_root + 'annotations/instances_val.json')
test_evaluator = val_evaluator
```
A sample of my logs:
```
11/09 10:01:10 - mmengine - INFO - Epoch(train) [1][ 50/3256] lr: 1.9623e-05 eta: 1 day, 10:48:45 time: 0.3850 data_time: 0.0542 memory: 4411 loss: 0.5551 loss_cls: 0.5551 loss_bbox: 0.0000 loss_mask: 0.0000
11/09 10:01:24 - mmengine - INFO - Epoch(train) [1][ 100/3256] lr: 3.9643e-05 eta: 1 day, 5:15:10 time: 0.2621 data_time: 0.0017 memory: 4411 loss: 0.5109 loss_cls: 0.5109 loss_bbox: 0.0000 loss_mask: 0.0000
11/09 10:01:37 - mmengine - INFO - Epoch(train) [1][ 150/3256] lr: 5.9663e-05 eta: 1 day, 3:24:11 time: 0.2623 data_time: 0.0015 memory: 4411 loss: 0.4392 loss_cls: 0.4392 loss_bbox: 0.0000 loss_mask: 0.0000
11/09 10:01:50 - mmengine - INFO - Epoch(train) [1][ 200/3256] lr: 7.9683e-05 eta: 1 day, 2:35:58 time: 0.2678 data_time: 0.0014 memory: 4411 loss: 0.3513 loss_cls: 0.3513 loss_bbox: 0.0000 loss_mask: 0.0000
```
The only modifications I made to base configs were to increase the maximum number of detections to 500 (I am doing small object detection so this is needed for my use case) and to change the checkpoint interval to 5 so that I could evaluate my progress in finer steps. I have not modified the actual mmdetection codebase.
I am using a custom instance segmentation dataset in COCO format created synthetically. Due to the nature of my task I cannot share my dataset in full. However, the directory structure is as follows:
```
> Dataset
| > annotations
| | instances_train.json
| | instances_val.json
| | instances_test.json
| > train
| | trainimage0.png
| | trainimage1.png
| | trainimage2.png
| | ...
| | > val
| | valimage0.png
| | valimage1.png
| | valimage2.png
| | ...
| | > test
| | testimage0.png
| | testimage1.png
| | testimage2.png
| | ...
```
And here is a sample of my images and annotations:
```
"images": [
{
"id": 0,
"file_name": "img_0.png",
"height": 1800,
"width": 1800
},
{
"id": 1,
"file_name": "img_1.png",
"height": 1800,
"width": 1800
},
],
"annotations":[
{
"id": 13384448,
"image_id": 74402,
"category_id": 0,
"segmentation": {
"size": [
1800,
1800
],
"counts": "WhW74mg1>E7J5K4M4L3N2M3M2O2N1O1N3O0O1O1O1O1O2O0O100O10000O2O0000000000001O000001O000O10000O10001N1O100O1O1O100O2N1N2O2N1N3N2M3M3M4L4K5J8GSZPh2"
},
"bbox": [
131.0,
1480.0,
66.0,
66.0
],
"area": 3460,
"iscrowd": 0
},
{
"id": 13384449,
"image_id": 74402,
"category_id": 0,
"segmentation": {
"size": [
1800,
1800
],
"counts": "Rl]?:kg16K4M3L3M3M4L3M3M2N3M3M3N1O2N2N1O2N100O2O0O2O0O10001O0O100000000000000000000001O000O101O0O101N100O2O0O2N2N1O2M3N1N3L4M3M3M3M4L3M4L4L6H\\ef_2"
},
"bbox": [
280.0,
1696.0,
68.0,
66.0
],
"area": 3403,
"iscrowd": 0
}
],
```
I have written a script to visualize my dataset to confirm that my masks and bounding boxes align with their respective instances as expected, so the annotations are definitely accurate.
**Environment**
```
sys.platform: linux
Python: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1: NVIDIA RTX A5500
CUDA_HOME: /usr/local/cuda-11.7
NVCC: Cuda compilation tools, release 11.7, V11.7.64
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.0.1
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.7
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.5
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.15.2
OpenCV: 4.7.0
MMEngine: 0.7.3
MMDetection: 3.2.0+fe3f809
```
Additional Environment Info:
- Environment is running inside of WSL2 with CUDA access enabled.
- Installation instructions were followed as per the mmdetection website guide exactly. Pytorch was installed using the official torch installation instructions for conda and WSL.
| open | 2023-11-09T15:18:26Z | 2024-11-25T09:49:55Z | https://github.com/open-mmlab/mmdetection/issues/11148 | [] | h-fernand | 8 |
K3D-tools/K3D-jupyter | jupyter | 117 | Example "volume_render" does not work | I'm trying to run an example [volume_renderer](https://github.com/K3D-tools/K3D-jupyter/blob/master/examples/volume_renderer.ipynb) and I have error. In the case of Binder, module 'nibabel' don't exist. If I try locally with installed 'nibabel' - there is no 3D object, only an empty grid of coordinates is visible.
| closed | 2018-11-05T13:22:20Z | 2018-11-19T11:17:18Z | https://github.com/K3D-tools/K3D-jupyter/issues/117 | [] | sergii-mamedov | 1 |
noirbizarre/flask-restplus | api | 462 | Swagger UI assumes body payload when using api_ns.expect(model) for HTTP GET handler | Looking through the the [docs](http://flask-restplus.readthedocs.io/en/stable/swagger.html), there's an example of setting a model as an expected input to the get request handler, and to me it would be reasonable to assume that restplus would use this model to validate query string parameters as it's the only place that would make sense for it to be in the request. When using the same model for a post, swagger UI renders it as body request parameter, which makes sense. I'm just wondering if I'm wrong in my assumptions and this is by design, or it's a bug?
Current version of flask-restplus: 0.11.0
```python
class SomeResource(Resource):
@my_api_ns.expect(my_super_cool_model)
def get(self):
# This will render as a body request param, not expected
return {}
@my_api_ns.expect(my_super_cool_model)
def post(self):
# This will render as a body request param, as expected
return {}
``` | open | 2018-06-04T13:42:22Z | 2019-12-23T04:56:27Z | https://github.com/noirbizarre/flask-restplus/issues/462 | [] | mr-tabasco | 5 |
dinoperovic/django-salesman | rest-api | 2 | Add ability to specify extra during checkout | **Is your feature request related to a problem? Please describe.**
Saving extra data on order (eg. "phone number") during the checkout requires an additional POST request to `/api/basket/extra/`.
**Describe the solution you'd like**
When sending a POST request to `/api/checkout/` add an ability to send `extra` data in a payload directly.
**Describe alternatives you've considered**
`-`
**Additional context**
Validation for extra data should be enforced here as well (#1).
| closed | 2020-03-18T14:02:57Z | 2020-05-22T10:46:10Z | https://github.com/dinoperovic/django-salesman/issues/2 | [
"enhancement"
] | dinoperovic | 0 |
d2l-ai/d2l-en | machine-learning | 2,472 | It prompts ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject |
```
from d2l import torch as d2l
```
```
Traceback (most recent call last):
File "/xxx/pytorch/linear_regression/linear_regression.py", line 6, in <module>
from d2l import torch as d2l
File "/xxx/miniconda3/envs/d2l/lib/python3.9/site-packages/d2l/torch.py", line 32, in <module>
import pandas as pd
File "/xxx/miniconda3/envs/d2l/lib/python3.9/site-packages/pandas/__init__.py", line 29, in <module>
from pandas._libs import hashtable as _hashtable, lib as _lib, tslib as _tslib
File "/xxx/miniconda3/envs/d2l/lib/python3.9/site-packages/pandas/_libs/__init__.py", line 13, in <module>
from pandas._libs.interval import Interval
File "pandas/_libs/interval.pyx", line 1, in init pandas._libs.interval
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
(d2l)
```
Neither the master branch nor 2.0.0 release can fix this issue
and
d2l==1.0.0b0 prompts
```
ERROR: Could not find a version that satisfies the requirement gym==0.21.0 (from d2l) (from versions: none)
ERROR: No matching distribution found for gym==0.21.0
```
Versions:
python: 3.9.16
d2l: 0.17.6 | open | 2023-04-25T13:00:34Z | 2023-04-29T03:13:05Z | https://github.com/d2l-ai/d2l-en/issues/2472 | [] | archerbj | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,796 | Notification Rules for unread reports | ### Proposal
At the moment the notifications for unread reports are sent to all recipients of a context, also if one of them as already read the report.
Ex. Context with 3 recipients: A, B, C.
If A has read the reports, notifications should not be sent to B and C because in this way these two recipients get spammed while the report has been already read by A.
Then, it would be better to configure this feature also on the sub-sites and not only on the main one, because it is intended to have sub-sites that may want to use different set up.

### Motivation and context
- Reduce spam email notifications
- Grant possibility to configure this feature in different ways for single sub-sites | open | 2023-11-20T10:54:02Z | 2023-12-18T11:52:16Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3796 | [
"T: Feature"
] | eleibr | 3 |
microsoft/nni | deep-learning | 5,192 | Please provide updated example code of pruning | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
provide updated example code of pruning
**Why is this needed**:
I got warning: `WARNING: The old API trainer,traced_optimizer,criterion,training_batches,mode,dummy_input will be deprecated after NNI v3.0, please using the new one evaluator,training_steps,mode,dummy_input` when runing the example pruning code from [here](https://github.com/microsoft/nni/blob/master/examples/model_compress/pruning/activation_pruning_torch.py). As far as i know, other pruning api have the same warning like TaylorFOWeightPruner, ADMMPruner etc.
**Without this feature, how does current nni work**:
It can work in nni 2.9, but after nni 3.0, the api will deprecated
**Components that may involve changes**:
**Brief description of your proposal if any**:
| closed | 2022-10-31T02:32:57Z | 2023-05-08T07:47:00Z | https://github.com/microsoft/nni/issues/5192 | [] | wwdok | 2 |
roboflow/supervision | tensorflow | 831 | useing my viedo to run speed | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
HI ,i used my viedo to run speed [speed_estimation](https://github.com/roboflow/supervision/tree/develop/examples/speed_estimation) code.
I didnt do anything with code,and my video had a little bit proble.could you help me ?
issue >>
AttributeError: 'NoneType' object has no attribute 'reshape'
my video [https://www.youtube.com/watch?v=8Gvz_FjWy4s](url)
my video run result [https://youtu.be/0KxiJQKj-vA?si=lVrhGR3edo499JP5](https://youtu.be/0KxiJQKj-vA?si=lVrhGR3edo499JP5)

` def transform_points(self, points: np.ndarray) -> np.ndarray:
reshaped_points = points.reshape(-1, 1, 2).astype(np.float32)
transformed_points = cv2.perspectiveTransform(reshaped_points, self.m)
return transformed_points.reshape(-1, 2)
`
### Additional
_No response_ | closed | 2024-02-01T07:34:15Z | 2024-02-01T09:30:04Z | https://github.com/roboflow/supervision/issues/831 | [
"question"
] | althonmp | 4 |
itamarst/eliot | numpy | 385 | Add support for async functions to @capture_logging | `@capture_logging` won't work for async functions, but it should be possible to make it do so. `pytest-asyncio` for example allows async methods to be run as tests. | open | 2019-03-07T18:30:32Z | 2019-03-07T18:30:32Z | https://github.com/itamarst/eliot/issues/385 | [
"bug"
] | itamarst | 0 |
matplotlib/matplotlib | matplotlib | 28,908 | [Bug]: Possible performance issue with _LazyTickList | ### Bug summary
The descriptor seems to get called twice and thus the expensive `instance._get_tick(major=True)` is executed twice. Ping @anntzer, who helped craft the implementation.
### Code for reproduction
When adding the following into `_LazyTickList.__get__`
```
if self._major:
# <snip>
import traceback, sys
print(f"\n*** Initializing major ticks on {type(instance)} **\n")
traceback.print_stack(file=sys.stdout, limit=6)
# </snip>
instance.majorTicks = []
tick = instance._get_tick(major=True)
instance.majorTicks.append(tick)
return instance.majorTicks
```
we see that it is called twice per Axis:
```
*** Initializing major ticks on <class 'matplotlib.axis.XAxis'> **
File "/home/tim/git/matplotlib/lib/matplotlib/axes/_base.py", line 1399, in clear
self.__clear()
File "/home/tim/git/matplotlib/lib/matplotlib/axes/_base.py", line 1315, in __clear
self.grid(False) # Disable grid on init to use rcParameter
File "/home/tim/git/matplotlib/lib/matplotlib/axes/_base.py", line 3295, in grid
self.xaxis.grid(visible, which=which, **kwargs)
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 1726, in grid
self.set_tick_params(which='major', **gridkw)
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 984, in set_tick_params
for tick in self.majorTicks:
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 549, in __get__
traceback.print_stack(file=sys.stdout, limit=6)
*** Initializing major ticks on <class 'matplotlib.axis.XAxis'> **
File "/home/tim/git/matplotlib/lib/matplotlib/axes/_base.py", line 1315, in __clear
self.grid(False) # Disable grid on init to use rcParameter
File "/home/tim/git/matplotlib/lib/matplotlib/axes/_base.py", line 3295, in grid
self.xaxis.grid(visible, which=which, **kwargs)
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 1726, in grid
self.set_tick_params(which='major', **gridkw)
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 984, in set_tick_params
for tick in self.majorTicks:
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 553, in __get__
instance.majorTicks.append(tick)
File "/home/tim/git/matplotlib/lib/matplotlib/axis.py", line 549, in __get__
traceback.print_stack(file=sys.stdout, limit=6)
[... same repeated for YAxis]
```
Looking at the second traceback it seems that the line `instance.majorTicks.append(tick)` re-triggers the descriptor, even though we have previously set `instance.majorTicks = []`. I would have expected that at the time, the name `instance.majorTicks` is already re-bound to the list (which is sort of the purpose of the init-empty-and-append acrobatics - see the code comment above). But then again, this is higher magic and we might be hitting some implementation details of descriptors.
This observation may have two implications:
- We're apparently running the expensive `instance._get_tick(major=True)` twice. This should be fixed.
- It may be that init-empty-and-append acrobatics does not fulfill its intended purpose of providing `instance.majorTicks` to the implementation of `_get_tick`.
### Possible alternative
have a dummy empty list for `_get_tick` - I assume it's anyway only trying to read and not modify or hold references.
And then create a new list with the tick. This prevents read-access to `instance.majorTicks` and re-calling the descriptor. i.e. replace
```
instance.majorTicks = []
tick = instance._get_tick(major=True)
instance.majorTicks.append(tick)
return instance.majorTicks
```
by
```
instance.majorTicks = []
tick = instance._get_tick(major=True)
instance.majorTicks = [tick]
return instance.majorTicks
```
Not sure how that works when `_get_tick` accesses `majorTicks` but it should not make it worse there, and we improve inside `__get__`.
#### Performance measurement:
While performance measurement is a bit tricky, I think fair timings are
| | before (ms) | after (ms) | change |
|----------------------|-------------|------------|-------------|
| plt.subplots() | 38±2 | 31±1 | -18% |
| plt.subplots(10, 10) | 1420±13 | 1063±7 | -25% | | closed | 2024-09-30T15:58:33Z | 2024-10-01T06:41:12Z | https://github.com/matplotlib/matplotlib/issues/28908 | [
"Performance"
] | timhoffm | 0 |
airtai/faststream | asyncio | 1,276 | Bug: BufferError in confluent kafka broker | **Describe the bug**
Hello, everyone! I have a question when processing messages using Confluent Kafka. If I have multiple subscribers going that all process mesages and publish them to another topic, I quickly get a `BufferError`. Updating `max_batch_size` on my broker didn't seem to help. I resorted to catching these exceptions and digging pretty deep in the the broker internals to call `poll()`. Here's a snippet of that code:
```python
try:
await publish_node_institution.publish(*nodes)
except BufferError:
broker._producer._producer.producer.poll(0) # type: ignore
await publish_node_institution.publish(*nodes)
```
My question is has anyone run into this issue using the Confluent Kafka broker? Does anyone have any suggestions for a better way of handling this?
Thanks!
[Discord message](https://discord.com/channels/1085457301214855171/1085457302280228898/1212835445822459934)
| closed | 2024-03-01T08:47:39Z | 2024-03-01T19:53:55Z | https://github.com/airtai/faststream/issues/1276 | [
"bug"
] | kumaranvpl | 0 |
clovaai/donut | nlp | 222 | How can I use this model for Feature Extraction. Everytime i reload the model, i get different set of feature values (output from the last hidden state)for the same image | open | 2023-07-04T07:32:43Z | 2023-07-04T07:35:37Z | https://github.com/clovaai/donut/issues/222 | [] | Shrutijain23 | 0 |
|
jmcnamara/XlsxWriter | pandas | 778 | problem, random cells write as zero | Hi hi :3
I found this writing data with pandas + xlsxwriter, actually some formulas just don't are written correctly, and are not exactly random, lets test with this code:
```
import pandas
a=[]
for i in range(50):
a.append("=1+1.0")
dd = pandas.DataFrame(a, columns=["test"])
dd.to_excel("test.xlsx", engine='xlsxwriter')
```
In the result xlsx we should have 50 equal formulas, but, in the cell B4 I get 0, and is not like the value has not been evaluated, literally the formula of the cell is zero, I don't know why that formula is not written...
Is "random" because from all that "equals" formulas some of them just don't works, but is not random because the cell with the problem is the same in some iterations, after some of them it change.
(every iterations is running the same code again)
A clue, if we use "=1+10" without decimals works, at least with 50 results.
If we want to get a similar result getting zero every where, use "=1+1,0".
The first time I run the script, the formula we read in the file is "=1+1.0", as we wrote, testing again and again I get "=1+1" for some reason... now we don't have the ".0"...
I'm using WPS, but all what I wrote here I check it with the formula bar, not the results.
Thx. | closed | 2021-01-22T07:58:00Z | 2021-01-22T19:47:13Z | https://github.com/jmcnamara/XlsxWriter/issues/778 | [
"ready to close",
"awaiting user feedback"
] | latot | 7 |
jina-ai/serve | fastapi | 6,039 | ERROR: Cannot install jina because these package versions have conflicting dependencies. | When trying to install clip_server, I am getting this error with the jina package. The error showed up a few hours ago (was fine yesterday on September 4).
pip install jina
ERROR: Cannot install jina because these package versions have conflicting dependencies.
The conflict is caused by:
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.40b0 depends on opentelemetry-instrumentation==0.40b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.39b0 depends on opentelemetry-instrumentation==0.39b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.38b0 depends on opentelemetry-instrumentation==0.38b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.37b0 depends on opentelemetry-instrumentation==0.37b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.36b0 depends on opentelemetry-instrumentation==0.36b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.35b0 depends on opentelemetry-instrumentation==0.35b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.34b0 depends on opentelemetry-instrumentation==0.34b0 | closed | 2023-09-06T00:01:28Z | 2023-09-06T15:02:01Z | https://github.com/jina-ai/serve/issues/6039 | [] | kirandeol | 5 |
mitmproxy/mitmproxy | python | 6,313 | Better UX | #### Problem Description
The software lacks of basic features that would make user experience much better
#### Proposal
- Delete button should have a dropdown menu entry to delete all intercepted requests
- Keyboard key `delete` should delete selected intercepted request
- There should be a pause button to temporarily stop intercepting traffic without killing the server or filtering
- Select multiple requests by holding keyboard keys `shift` / `ctrl` to do a certain action i.e removal, export | open | 2023-08-11T11:40:19Z | 2023-08-11T11:40:19Z | https://github.com/mitmproxy/mitmproxy/issues/6313 | [
"kind/feature"
] | VibingCreator | 0 |
mljar/mercury | data-visualization | 11 | Add text input | Add text input. Please remember to sanitize the input. | closed | 2022-01-17T14:10:12Z | 2022-01-26T17:45:07Z | https://github.com/mljar/mercury/issues/11 | [
"enhancement",
"help wanted"
] | pplonski | 1 |
darrenburns/posting | rest-api | 217 | Bug: ignore user-agent header and uses it own user-agent | When i set a user-agent posting ignore my user-agent and use it default user agent i'v had posting v2.3.0 and i update to v2.5.2 but it not fixed! | closed | 2025-03-09T11:28:03Z | 2025-03-13T20:05:50Z | https://github.com/darrenburns/posting/issues/217 | [
"bug"
] | ImMohammad20000 | 4 |
jmcnamara/XlsxWriter | pandas | 914 | Bug: Boolean Issue | ### Current behavior
The issue occurs in locally installed Microsoft Office 365, when a spreadsheet generated by xlsxWriter contains a column of type BOOLEAN.
When I open the worksheet I get a message saying that a problem was found, and Excel asks me if I want it to recover the worksheet as much as it can.
### Expected behavior
That the worksheet opens without warning of recovering lost data.
### Sample code to reproduce
```markdown
import xlsxwriter as xlsx
workbook = xlsx.Workbook('minimal.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write('A1', 'Hello Destaxa')
worksheet. set_column('A:A', 20, workbook.add_format({'num_format': 'BOOLEAN'}))
workbook.close()
```
### Environment
```markdown
- XlsxWriter version: 3.0.3
- Python version: 3.10.7
- Excel version: Microsoft Office 365
- OS: Windows 10
- The issue does not occur in Microsoft Office 2016
- The issue does not occur in Microsoft Office 365 Web
```
### Any other information

### OpenOffice and LibreOffice users
- [X] I have tested the output file with Excel. | closed | 2022-10-11T23:06:04Z | 2022-10-12T08:47:37Z | https://github.com/jmcnamara/XlsxWriter/issues/914 | [
"bug"
] | pmateirodestaxa | 2 |
Yorko/mlcourse.ai | numpy | 693 | some notes on MAPE and infinite values | Lecture 9 describes MAPE and other metrics. As noted by @amber4eg it's good to mention that these metrics can explode around zero. | closed | 2021-12-22T22:21:02Z | 2022-01-07T13:26:02Z | https://github.com/Yorko/mlcourse.ai/issues/693 | [
"articles"
] | Yorko | 0 |
ploomber/ploomber | jupyter | 541 | add an example for processing independent chunks | This is a recurrent pattern and a few users have asked us about it so we should have a working example.
Assume you're getting data on batches, users want to pass the batch through the full pipeline when the next batch arrives, they want to repeat the process. An extension of this problem is when they already have all the batches and want to process them in parallel. Note that this is different from the `grid` feature because this will process all batches at once. What we want here, is essentially a giant for loop (one per batch) where the loop runs the full pipeline for each batch. | closed | 2022-02-04T21:15:42Z | 2022-09-02T22:53:17Z | https://github.com/ploomber/ploomber/issues/541 | [] | edublancas | 0 |
tensorflow/tensor2tensor | machine-learning | 1,835 | Example code for speech to text using tensor2tensor | ### Description
Hi,
Can you please share an example code to convert speech to text using Tensor2Tensor (maybe with transformer) mode?
This will help a lot.
Thanks
Nagaraju
...
### Environment information
Python 3.7.7
tensor2tensor 1.15.7
```
OS: <Windows 10 (64 bit)>
$ pip freeze | grep tensor
# your output here
$ python -V
# your output here
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
...
```
```
# Error logs:
...
```
| open | 2020-07-21T17:56:10Z | 2020-07-21T17:56:10Z | https://github.com/tensorflow/tensor2tensor/issues/1835 | [] | nag0811 | 0 |
plotly/plotly.py | plotly | 4,633 | How to smooth 3D surface plot | I have a 3D surface plot like this:

I am not sure how to smooth this plot, I searched but could not find any information. | closed | 2024-06-13T09:39:14Z | 2024-07-12T00:05:48Z | https://github.com/plotly/plotly.py/issues/4633 | [] | btayhan | 2 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,951 | Login in sub sites not work | ### What version of GlobaLeaks are you using?
4.14.3
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Linux
### Describe the issue
Hi,
I have created subsites in globaleaks, but when creating users within these subsites, the login does not work. It only works if I create them on the base site and this causes them to have access on the rest of the subsites
### Proposed solution
_No response_ | closed | 2024-01-12T15:36:24Z | 2024-06-15T13:12:49Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3951 | [] | aapian | 5 |
torrvision/crayon | data-visualization | 19 | Implement some form of web interface | It's annoying to write scripts to save / delete runs. We should just run a bootstrap frontend or something.
Probably to do when / after we refactor the server. | open | 2017-02-14T14:37:08Z | 2017-02-14T16:55:24Z | https://github.com/torrvision/crayon/issues/19 | [
"enhancement"
] | edran | 0 |
qubvel-org/segmentation_models.pytorch | computer-vision | 707 | Deep supervision for Unet++? | Hi, I'd like to know if your Unet++ implementation has deep supervision too.
Thanks for a great repo <3
| closed | 2023-01-08T15:35:24Z | 2023-05-12T01:52:52Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/707 | [
"Stale"
] | Abermal | 3 |
mkhorasani/Streamlit-Authenticator | streamlit | 51 | Clean up README.md | Some possible corrections:
1. Say somewhere one needs to install pyyaml and import yaml
2. In the first code snippet in 'Creating a login widget', replace `Loader=SafeLoader` by `Loader=yaml.SafeLoader`
Or something like that as I struggled to realize the above from the README.md. It may be obvious to a lot of folks but it was not to me.
I could do it myself but I am hesitant as I don't want to interfere in your README. Package is great by the way! Thanks! | closed | 2023-03-06T04:08:28Z | 2023-03-06T07:22:22Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/51 | [] | fascani | 1 |
python-restx/flask-restx | api | 239 | AssertionError: View function mapping is overwriting an existing endpoint function: api.specs | Hello,
I'm working on an API using Flask (full library versions available below) and Flask-Restx and [importing via Blueprints](https://flask-restx.readthedocs.io/en/latest/scaling.html#use-with-blueprints) with a project structure like so:
```bash
.
├── application
│ ├── apis
│ │ ├── __init__.py
│ │ └── organisations.py
│ ├── app.py
│ └── config.py
└── wsgi.py
```
The code in `app.py` calls a blueprint created in `application/apis/__init__.py` as follows:
**application/apis/__init__.py**
```python
from flask import Blueprint
from flask_restx import Api
from .organisations import api as orgapi
api_v1 = Blueprint('api', __name__)
api = Api(
api_v1,
title='My API',
version='1.0',
description='Access to my api',
)
api.add_namespace(orgapi)
```
**application/app.py**
```python
# ...
from application.apis import api
# ...
def create_app(config_name):
app = Flask(__name__)
# ...
api.init_app(app)
# Blueprints
from application.apis import api_v1
app.register_blueprint(api_v1, url_prefix="/api/v1")
```
The `organisations.py` code does not include any views at all, however when I try and access the application, I get the error in the title bar:
**application/apis/organisations.py**
```python
from flask_restx import Namespace, Resource, fields
api = Namespace('organisation', description='Organisation related operations')
```
The only reference I can find to this is a [stackoverflow](https://stackoverflow.com/questions/17256602/assertionerror-view-function-mapping-is-overwriting-an-existing-endpoint-functi) question, and [a github issue dating back to Flask 0.9 vs Flask 0.10](https://github.com/pallets/flask/issues/796), however given how old those questions are I'm pretty confident I'm just holding it wrong!
#### Additional context
**Library Versions (`pip freeze | grep -i flask`)**:
```
Flask==1.1.2
Flask-Admin==1.5.6
Flask-Migrate==2.5.3
flask-restx==0.2.0
Flask-SQLAlchemy==2.4.4
pytest-flask==1.0.0
```
#### Full Stack Trace
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/code/wsgi.py", line 5, in <module>
app = create_app(os.environ["FLASK_CONFIG"])
File "/opt/code/application/app.py", line 54, in create_app
app.register_blueprint(api_v1, url_prefix="/api/v1")
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 98, in wrapper_func
return f(self, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1168, in register_blueprint
blueprint.register(self, options, first_registration)
File "/usr/local/lib/python3.9/site-packages/flask/blueprints.py", line 256, in register
deferred(state)
File "/usr/local/lib/python3.9/site-packages/flask/blueprints.py", line 294, in <lambda>
self.record(lambda s: s.add_url_rule(rule, endpoint, view_func, **options))
File "/usr/local/lib/python3.9/site-packages/flask_restx/api.py", line 809, in _blueprint_setup_add_url_rule_patch
blueprint_setup.app.add_url_rule(
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 98, in wrapper_func
return f(self, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/flask/app.py", line 1282, in add_url_rule
raise AssertionError(
AssertionError: View function mapping is overwriting an existing endpoint function: api.specs
``` | closed | 2020-10-15T06:27:38Z | 2023-06-01T05:04:14Z | https://github.com/python-restx/flask-restx/issues/239 | [
"question"
] | proffalken | 4 |
ultralytics/ultralytics | deep-learning | 18,749 | 在Ubuntu上训练时,总是会卡主两三分钟,该怎么解决呢? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
卡在了Model.py文件中的下面这一句:
self.trainer = (trainer or self._smart_load("trainer"))(overrides=args, _callbacks=self.callbacks)
打印了运行时间,如下图所示,耗费了130秒。

### Additional
_No response_ | open | 2025-01-18T09:17:46Z | 2025-01-20T09:09:57Z | https://github.com/ultralytics/ultralytics/issues/18749 | [
"question"
] | retioa11 | 5 |
voila-dashboards/voila | jupyter | 1,510 | Voila and Jupyterlab notebook command interactions | <!--
Welcome! Before creating a new issue please search for relevant issues and recreate the issue in a fresh environment.
-->
Thank you for the work behind Voila. It is the perfect tool for the demos I had the opportunity to show in the past years. I am however facing an issue in a trick I use to control the flow of a demo.
## Description
<!--Describe the bug clearly and concisely. Include screenshots/gifs if possible-->
I use an ipywidget button callback to programmatically drive the execution of a notebook. For example, pressing on a button executes a determined number of cells below the one that created the button. More information regarding notebook commands can be found [here](https://jupyterlab.readthedocs.io/en/latest/user/commands.html).
The approach works in Jupyterlab, but not when rendered with Voila.
## Reproduce
<!--Describe step-by-step instructions to reproduce the behavior-->
Cell 1
```python
import ipywidgets as widgets
from IPython.display import display
from ipylab import JupyterFrontEnd
button = widgets.Button(description="Click Me!")
output = widgets.Output()
app = JupyterFrontEnd()
state = False
clicked = False
display(button, output)
def on_button_clicked(b):
global state, clicked
clicked = True
state = not state
with output:
print("Button clicked.", state)
app.commands.execute('notebook:move-cursor-down')
app.commands.execute('notebook:run-cell-and-select-next')
button.on_click(on_button_clicked)
```
Cell 2
```python
if clicked:
with output:
print("exec'ed", state)
```
Execute the first cell, press the button a few times, you should get the following output:
```
Button clicked. True
exec'ed True
Button clicked. False
exec'ed False
Button clicked. True
exec'ed True
Button clicked. False
exec'ed False
```
If rendered in Voila, I obtain the following:
```
Button clicked. True
Button clicked. False
Button clicked. True
Button clicked. False
```
The callback is executed, but the notebook command has no effect.
The `clicked` global skips the execution of the cells I want to control with the button instead.
<!--Describe how you diagnosed the issue -->
## Expected behavior
<!--Describe what you expected to happen-->
I would like the same behavior in Voila as the one observed in the notebook, so that pressing the button triggers the execution of the cell below.
I can't tell whether this is a bug or a feature request, nor whether this is technically achievable when using Voila.
## Context
<!--Complete the following for context, and add any other relevant context-->
I couldn't spot anything useful in the context.
- voila version 0.5.8
- Operating System and version: Debian trixie/sid
- Browser and version: Chrome 131.0.6778.69
<details><summary>Troubleshoot Output</summary>
<pre>
$PATH:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/bin
/usr/local/bin
/usr/bin
/bin
/usr/local/games
/usr/games
sys.path:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/bin
/usr/lib/python311.zip
/usr/lib/python3.11
/usr/lib/python3.11/lib-dynload
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/lib/python3.11/site-packages
sys.executable:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/bin/python
sys.version:
3.11.9 (main, Apr 10 2024, 13:16:36) [GCC 13.2.0]
platform.platform():
Linux-6.11.5-amd64-x86_64-with-glibc2.40
which -a jupyter:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/bin/jupyter
pip list:
Package Version
--------------------------------- --------------
anyio 4.6.2.post1
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asttokens 2.4.1
async-lru 2.0.4
attrs 24.2.0
babel 2.16.0
bap 1.3.1
beautifulsoup4 4.12.3
bleach 6.2.0
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.4.0
comm 0.2.2
contourpy 1.3.1
cycler 0.12.1
debugpy 1.8.8
decorator 5.1.1
defusedxml 0.7.1
executing 2.1.0
fastjsonschema 2.20.0
fonttools 4.55.0
fqdn 1.5.1
freetype-py 2.5.1
h11 0.14.0
hsluv 5.0.4
httpcore 1.0.7
httpx 0.27.2
idna 3.10
ipykernel 6.29.5
ipylab 1.0.0
ipympl 0.9.4
ipython 8.29.0
ipython-genutils 0.2.0
ipywidgets 8.1.5
isoduration 20.11.0
jedi 0.19.2
Jinja2 3.1.4
json5 0.9.28
jsonpointer 3.0.0
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
jupyter_client 8.6.3
jupyter_contrib_core 0.4.2
jupyter_contrib_nbextensions 0.7.0
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-highlight-selected-word 0.2.0
jupyter-lsp 2.2.5
jupyter_nbextensions_configurator 0.6.4
jupyter_server 2.14.2
jupyter_server_terminals 0.5.3
jupyterlab 4.2.6
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.3
jupyterlab_widgets 3.0.13
kiwisolver 1.4.7
lief 0.15.1
lxml 5.3.0
MarkupSafe 3.0.2
matplotlib 3.9.2
matplotlib-inline 0.1.7
mistune 3.0.2
nbclient 0.10.0
nbconvert 7.16.4
nbformat 5.10.4
nest-asyncio 1.6.0
networkx 3.4.2
notebook 7.2.2
notebook_shim 0.2.4
numpy 2.1.3
overrides 7.7.0
packaging 24.2
pandocfilters 1.5.1
parso 0.8.4
pexpect 4.9.0
pillow 11.0.0
pip 24.2
platformdirs 4.3.6
prometheus_client 0.21.0
prompt_toolkit 3.0.48
psutil 6.1.0
ptyprocess 0.7.0
pure_eval 0.2.3
pycparser 2.22
Pygments 2.18.0
pyparsing 3.2.0
pypower 2.3.1
PyQt5 5.15.11
PyQt5-Qt5 5.15.15
PyQt5_sip 12.15.0
PyQtWebEngine 5.15.7
PyQtWebEngine-Qt5 5.15.15
python-dateutil 2.9.0.post0
python-json-logger 2.0.7
PyYAML 6.0.2
pyzmq 26.2.0
QDarkStyle 3.0.3
QtPy 2.4.2
referencing 0.35.1
requests 2.32.3
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rpds-py 0.21.0
Send2Trash 1.8.3
setuptools 75.5.0
six 1.16.0
sniffio 1.3.1
soupsieve 2.6
stack-data 0.6.3
tabulate 0.9.0
tenacity 9.0.0
terminado 0.18.1
tinycss2 1.4.0
tornado 6.4.1
traitlets 5.14.3
types-python-dateutil 2.9.0.20241003
typing_extensions 4.12.2
uri-template 1.3.0
urllib3 2.2.3
vispy 0.11.0
voila 0.5.8
wcwidth 0.2.13
webcolors 24.11.1
webencodings 0.5.1
websocket-client 1.8.0
websockets 14.1
wheel 0.44.0
widgetsnbextension 4.0.13
</pre>
</details>
<details><summary>Command Line Output</summary>
<pre>
[Voila] Looking for voila in /etc/jupyter
[Voila] Looking for voila in /usr/local/etc/jupyter
[Voila] Looking for voila in ${HOME}/.jupyter
[Voila] Looking for voila in ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/etc/jupyter
[Voila] Looking for voila in /shared/Work/Projects/2024.09.25.Demo_Olivier_Flous/demo_wbc
[Voila] Loaded config file: /shared/Work/Projects/2024.09.25.Demo_Olivier_Flous/demo_wbc/voila.json
[Voila] using template: lab
[Voila] template paths:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/templates/lab
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/nbconvert/templates/lab
/usr/share/jupyter/nbconvert/templates/lab
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/templates/base
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/nbconvert/templates/base
/usr/share/jupyter/nbconvert/templates/base
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/templates
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/nbconvert/templates
${HOME}/.local/share/jupyter
${HOME}/.local/share/jupyter/voila/templates
${HOME}/.local/share/jupyter/nbconvert/templates
/usr/local/share/jupyter
/usr/local/share/jupyter/voila/templates
/usr/local/share/jupyter/nbconvert/templates
/usr/share/jupyter
/usr/share/jupyter/voila/templates
/usr/share/jupyter/nbconvert/templates
[Voila] static paths:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/templates/lab/static
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/nbconvert/templates/lab/static
${HOME}/.local/share/jupyter/voila/templates/lab/static
${HOME}/.local/share/jupyter/nbconvert/templates/lab/static
/usr/local/share/jupyter/voila/templates/lab/static
/usr/local/share/jupyter/nbconvert/templates/lab/static
/usr/share/jupyter/voila/templates/lab/static
/usr/share/jupyter/nbconvert/templates/lab/static
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/templates/base/static
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/nbconvert/templates/base/static
${HOME}/.local/share/jupyter/voila/templates/base/static
${HOME}/.local/share/jupyter/nbconvert/templates/base/static
/usr/local/share/jupyter/voila/templates/base/static
/usr/local/share/jupyter/nbconvert/templates/base/static
/usr/share/jupyter/voila/templates/base/static
/usr/share/jupyter/nbconvert/templates/base/static
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/lib/python3.11/site-packages/jupyter_server/static
[Voila] Using /tmp to store connection files
[Voila] Storing connection files in /tmp/voila_g1k2qe3t.
[Voila] Serving static files from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/lib/python3.11/site-packages/voila/static.
[Voila] serving directory: '/shared/Work/Projects/2024.09.25.Demo_Olivier_Flous/demo_wbc'
[Voila] Voilà is running at:
http://localhost:8866/
[Voila] WARNING | Clearing invalid/expired login cookie username-localhost-8866
[Voila] Generating new user for token-authenticated request: 902b9ae8875a4958a33ee85425f4d1d5
[Voila] Paths used for configuration of page_config:
/etc/jupyter/labconfig/page_config.json
[Voila] Paths used for configuration of page_config:
/usr/local/etc/jupyter/labconfig/page_config.json
[Voila] Paths used for configuration of page_config:
${HOME}/.jupyter/labconfig/page_config.json
[Voila] Paths used for configuration of page_config:
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/etc/jupyter/labconfig/page_config.json
[Voila] Using contents: services/contents
[Voila] Path jupyterlab_pygments/static/remoteEntry.5cbb9d2323598fbda535.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/jupyterlab_pygments/static/remoteEntry.5cbb9d2323598fbda535.js
[Voila] Path ipylab/static/remoteEntry.1c9b77c557d03a2498f4.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/ipylab/static/remoteEntry.1c9b77c557d03a2498f4.js
[Voila] Path @jupyter-notebook/lab-extension/static/remoteEntry.04dfa589925e7e7c6a3d.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-notebook/lab-extension/static/remoteEntry.04dfa589925e7e7c6a3d.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/remoteEntry.e4ff09401a2f575928c0.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/remoteEntry.e4ff09401a2f575928c0.js
[Voila] Path @voila-dashboards/widgets-manager8/static/remoteEntry.958dac8c7410b5fcc9ee.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/remoteEntry.958dac8c7410b5fcc9ee.js
[Voila] Path jupyter-matplotlib/static/remoteEntry.a0518cb14ef99e994963.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/jupyter-matplotlib/static/remoteEntry.a0518cb14ef99e994963.js
404 GET /favicon.ico (::1) 0.52ms
[Voila] Path jupyterlab_pygments/static/747.67662283a5707eeb4d4c.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/jupyterlab_pygments/static/747.67662283a5707eeb4d4c.js
[Voila] Path jupyterlab_pygments/static/568.1e2faa2ba0bbe59c4780.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/jupyterlab_pygments/static/568.1e2faa2ba0bbe59c4780.js
[Voila] Path @voila-dashboards/widgets-manager8/static/651.d9c6fa52270ea21fdf9e.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/651.d9c6fa52270ea21fdf9e.js
[Voila] Path @voila-dashboards/widgets-manager8/static/264.95d855dc9ed80b79c78e.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/264.95d855dc9ed80b79c78e.js
[Voila] Path jupyter-matplotlib/static/480.18f23d468bae372d1c77.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/jupyter-matplotlib/static/480.18f23d468bae372d1c77.js
[Voila] Path ipylab/static/480.16044a8abb039e4c2a69.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/ipylab/static/480.16044a8abb039e4c2a69.js
[Voila] Path ipylab/static/78.bae6a35721d5e7309228.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/ipylab/static/78.bae6a35721d5e7309228.js
[Voila] Path @jupyter-notebook/lab-extension/static/928.bf5955f09ff1e05edfbb.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-notebook/lab-extension/static/928.bf5955f09ff1e05edfbb.js
[Voila] Path @jupyter-notebook/lab-extension/static/42.33f638f0a4239bed9676.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-notebook/lab-extension/static/42.33f638f0a4239bed9676.js
[Voila] Path @jupyter-notebook/lab-extension/static/568.3dd58d88e32a98358776.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-notebook/lab-extension/static/568.3dd58d88e32a98358776.js
[Voila] Path @jupyter-notebook/lab-extension/static/93.eae3497dd223d842d198.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-notebook/lab-extension/static/93.eae3497dd223d842d198.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/651.fe40a967a60b543cf15c.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/651.fe40a967a60b543cf15c.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/420.063e2ee9f71033206b1f.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/420.063e2ee9f71033206b1f.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/439.33696bc45fbd403becbb.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/439.33696bc45fbd403becbb.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/327.8166aeb81cf1531ca240.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/327.8166aeb81cf1531ca240.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/722.3fefeac9cae358348cbc.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/722.3fefeac9cae358348cbc.js
[Voila] Path @jupyter-widgets/jupyterlab-manager/static/446.bf169bd3821a9ba1aa62.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions/@jupyter-widgets/jupyterlab-manager/static/446.bf169bd3821a9ba1aa62.js
[Voila] Path @voila-dashboards/widgets-manager8/static/883.bbe30bf61f3074749dda.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/883.bbe30bf61f3074749dda.js
[Voila] Path @voila-dashboards/widgets-manager8/static/324.aa49bd5aec16839cc9e0.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/324.aa49bd5aec16839cc9e0.js
[Voila] Path @voila-dashboards/widgets-manager8/static/603.9866b69497a4a124e57f.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/603.9866b69497a4a124e57f.js
[Voila] Path @voila-dashboards/widgets-manager8/static/496.45f50ff8111515264be7.js served from ${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/voila/labextensions/@voila-dashboards/widgets-manager8/static/496.45f50ff8111515264be7.js
404 GET /api/kernels?1732034622770 (::1) 0.34ms
</pre>
</details>
<details><summary>Browser Output</summary>
<pre>
Connection lost, reconnecting in 0 seconds.
_reconnect @ :8888/static/notebook/3676.bundle.js:1
reconnect @ :8888/static/notebook/3676.bundle.js:1
restart @ :8888/static/notebook/3676.bundle.js:1
await in restart
restartKernel @ :8888/static/notebook/9605.bundle.js:2
restart @ :8888/static/notebook/9605.bundle.js:2
await in restart
execute @ :8888/static/notebook/1962.bundle.js:1
execute @ :8888/static/notebook/3301.bundle.js:1
onClick @ :8888/static/notebook/7506.bundle.js:703
Yo.r @ :8888/static/notebook/7506.bundle.js:703
Oe @ :8888/static/notebook/1542.bundle.js:2
Be @ :8888/static/notebook/1542.bundle.js:2
(anonymous) @ :8888/static/notebook/1542.bundle.js:2
Ir @ :8888/static/notebook/1542.bundle.js:2
Ur @ :8888/static/notebook/1542.bundle.js:2
(anonymous) @ :8888/static/notebook/1542.bundle.js:2
cs @ :8888/static/notebook/1542.bundle.js:2
Le @ :8888/static/notebook/1542.bundle.js:2
Qr @ :8888/static/notebook/1542.bundle.js:2
qn @ :8888/static/notebook/1542.bundle.js:2
$n @ :8888/static/notebook/1542.bundle.js:2Understand this warningAI
Scrolling to a new item is requested.</pre>
</details>
### If using JupyterLab
- JupyterLab version: v4.2.6
<details><summary>Installed Labextensions</summary>
<pre>
JupyterLab v4.2.6
${HOME}/.local/share/virtualenvs/pipenv-3m7R3yRy/share/jupyter/labextensions
jupyterlab_pygments v0.3.0 enabled OK (python, jupyterlab_pygments)
jupyter-matplotlib v0.11.4 enabled OK
ipylab v1.0.0 enabled OK (python, ipylab)
@voila-dashboards/jupyterlab-preview v2.3.8 enabled OK (python, voila)
@jupyter-notebook/lab-extension v7.2.2 enabled OK
@jupyter-widgets/jupyterlab-manager v5.0.13 enabled OK (python, jupyterlab_widgets)</pre>
</details>
| open | 2024-11-19T17:06:28Z | 2024-11-19T17:06:28Z | https://github.com/voila-dashboards/voila/issues/1510 | [
"bug"
] | protopyte | 0 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 167 | UserWarning "this overload of nonzero is deprecated" when using with PyTorch 1.6 | Hi,
Not really a big deal, just started getting a deprecation warning after updating to the PyTorch 1.6:
```
/opt/conda/lib/python3.7/site-packages/pytorch_metric_learning/utils/loss_and_miner_utils.py:79: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629427478/work/torch/csrc/utils/python_arg_parser.cpp:766.)
a1_idx = matches.nonzero()[:, 0].flatten()
```
I use pytorch-metric-learning==0.9.89 | closed | 2020-08-04T14:21:26Z | 2020-08-08T00:51:31Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/167 | [
"enhancement",
"fixed in dev branch"
] | thinline72 | 1 |
ansible/ansible | python | 84,164 | Enhancing of validate_argument_spec documentaiton | ### Summary
The documentation lack clarity and information about options.
For example, default and required are mutually exclusive but it is only listed in the spec of the module https://docs.ansible.com/ansible/latest/dev_guide/developing_program_flow_modules.html#argument-spec but not in the main documentation https://docs.ansible.com/ansible/latest/collections/ansible/builtin/validate_argument_spec_module.html
And I think we currently have 2 documentations for 2 differents usages linked together. The spec seems to be more oriented for module developer and the module is targeted to validate role inputs.
Imo everything we can use in a meta/argument_specs.yaml should be describe in the module documentation.
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/validate_argument_spec.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.8]
config file = /home/myuser/myproject/ansible.cfg
configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/myuser/.local/pipx/venvs/ansible-core/lib/python3.12/site-packages/ansible
ansible collection location = /home/myuser/myproject/collections
executable location = /home/gaupee/.local/bin/ansible
python version = 3.12.7 (main, Oct 3 2024, 15:15:22) [GCC 14.2.0] (/home/gaupee/.local/pipx/venvs/ansible-core/bin/python)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/myuser/myproject/ansible.cfg) = True
COLLECTIONS_PATHS(/home/myuser/myproject/ansible.cfg) = ['/home/myuser/myproject/collections']
CONFIG_FILE() = /home/myuser/myproject/ansible.cfg
DEFAULT_FORKS(/home/myuser/myproject/ansible.cfg) = 20
DEFAULT_ROLES_PATH(/home/myuser/myproject/ansible.cfg) = ['/home/myuser/myproject/roles']
DEFAULT_STDOUT_CALLBACK(/home/myuser/myproject/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/myuser/myproject/ansible.cfg) = /home/myuser/myproject/.vault_pass
EDITOR(env: EDITOR) = nvim
PAGER(env: PAGER) = less
```
### OS / Environment
Debian 12
### Additional Information
I'm available to help for this issue, please understand that since I'm asking for documentation I don't have a lot of experience with this module and mistakes could happen
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-10-24T12:36:11Z | 2024-11-18T14:00:02Z | https://github.com/ansible/ansible/issues/84164 | [
"module",
"has_pr",
"affects_2.16"
] | 4SH-gaupee | 3 |
OFA-Sys/Chinese-CLIP | nlp | 333 | 使用MUGE微调时,训练日志里面的Image2Text Acc与评测时的R@1的召回指标不一致? | 评测时top1的召回, 与训练日志里的acc不一致,是两者的计算方式有差别吗? | open | 2024-07-25T07:35:46Z | 2024-07-25T07:35:46Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/333 | [] | dulibubai | 0 |
ray-project/ray | pytorch | 51,527 | [Train] Crash at end of training | ### What happened + What you expected to happen
Recently I've been staring to experience a crash at the end of training. The backtrace is always the same:
```
Training completed after 1 iterations at 2025-03-05 04:39:56. Total running time: 8min 51s
2025-03-05 04:39:56,506 INFO tune.py:1009 -- Wrote the latest version of all result files and experiment state to 'earthdaily-pathfinders-scaleai/venus/afm-profiling/train/experimental-2025-03-05_04-31-02_a458' in 0.5035s.
(TorchTrainer pid=439, ip=10.212.157.221) *** SIGSEGV received at time=1741149596 on cpu 63 ***
(TorchTrainer pid=439, ip=10.212.157.221) PC: @ 0x7f60b5a3c7be (unknown) ray::gcs::TaskInfoAccessor::AsyncAddTaskEventData()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b6ee7050 1824 (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b591d975 1392 ray::core::worker::TaskEventBufferImpl::FlushEvents()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b58a66ec 1488 ray::core::CoreWorker::Disconnect()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b58a6a9d 1152 ray::core::CoreWorker::ForceExit()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b58a6ecf 1680 ray::core::CoreWorker::HandleKillActor()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b589e3d4 192 ray::rpc::ServerCallImpl<>::HandleRequestImpl()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5c2bbc8 1168 EventTracker::RecordExecution()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5c0fffe 48 std::_Function_handler<>::_M_invoke()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5c10476 112 boost::asio::detail::completion_handler<>::do_complete()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b62d68db 128 boost::asio::detail::scheduler::do_run_one()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b62d8259 288 boost::asio::detail::scheduler::run()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b62d8962 96 boost::asio::io_context::run()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b57ff0b1 1280 ray::core::CoreWorker::RunIOService()
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b5d1d4e0 64 thread_proxy
(TorchTrainer pid=439, ip=10.212.157.221) @ 0x7f60b6f341c4 (unknown) (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: *** SIGSEGV received at time=1741149596 on cpu 63 ***
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: PC: @ 0x7f60b5a3c7be (unknown) ray::gcs::TaskInfoAccessor::AsyncAddTaskEventData()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b6ee7050 1824 (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b591d975 1392 ray::core::worker::TaskEventBufferImpl::FlushEvents()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b58a66ec 1488 ray::core::CoreWorker::Disconnect()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b58a6a9d 1152 ray::core::CoreWorker::ForceExit()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b58a6ecf 1680 ray::core::CoreWorker::HandleKillActor()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b589e3d4 192 ray::rpc::ServerCallImpl<>::HandleRequestImpl()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b5c2bbc8 1168 EventTracker::RecordExecution()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b5c0fffe 48 std::_Function_handler<>::_M_invoke()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b5c10476 112 boost::asio::detail::completion_handler<>::do_complete()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b62d68db 128 boost::asio::detail::scheduler::do_run_one()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b62d8259 288 boost::asio::detail::scheduler::run()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b62d8962 96 boost::asio::io_context::run()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,568 E 439 479] logging.cc:484: @ 0x7f60b57ff0b1 1280 ray::core::CoreWorker::RunIOService()
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,569 E 439 479] logging.cc:484: @ 0x7f60b5d1d4e0 64 thread_proxy
(TorchTrainer pid=439, ip=10.212.157.221) [2025-03-05 04:39:56,569 E 439 479] logging.cc:484: @ 0x7f60b6f341c4 (unknown) (unknown)
(TorchTrainer pid=439, ip=10.212.157.221) Fatal Python error: Segmentation fault
(TorchTrainer pid=439, ip=10.212.157.221)
(TorchTrainer pid=439, ip=10.212.157.221)
(TorchTrainer pid=439, ip=10.212.157.221) Extension modules: msgpack._cmsgpack, google._upb._message, psutil._psutil_linux, psutil._psutil_posix, setproctitle, yaml._yaml, charset_normalizer.md, ray._raylet, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.tslib, pandas._libs.lib, pandas._libs.hashing, pyarrow.lib, pandas._libs.ops, pyarrow._compute, bottleneck.move, bottleneck.nonreduce, bottleneck.nonreduce_axis, bottleneck.reduce, pandas._libs.arrays, pandas._libs.index, pandas._libs.join, pandas._libs.sparse, pandas._libs.reduction, pandas._libs.indexing, pandas._libs.internals, pandas._libs.writers, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.tslibs.strptime, pandas._libs.groupby, pandas._libs.testing, pandas._libs.parsers, pandas._libs.json, pyarrow._fs, pyarrow._azurefs, pyarrow._hdfs, pyarrow._gcsfs, pyarrow._s3fs, pyarrow._parquet, torch._C, torch._C._dynamo.autograd_compiler, torch._C._dynamo.eval_frame, torch._C._dynamo.guards, torch._C._dynamo.utils, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, pydantic.typing, pydantic.errors, pydantic.version, pydantic.utils, pydantic.class_validators, pydantic.config, pydantic.color, pydantic.datetime_parse, pydantic.validators, pydantic.networks, pydantic.types, pydantic.json, pydantic.error_wrappers, pydantic.fields, pydantic.parse, pydantic.schema, pydantic.main, pydantic.dataclasses, pydantic.annotated_types, pydantic.decorator, pydantic.env_settings, pydantic.tools, pydantic, pyarrow._json, lazy_object_proxy.cext, matplotlib._c_internal_utils, PIL._imaging, matplotlib._path, kiwisolver._cext, matplotlib._image, _cffi_backend, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg._matfuncs_expm, scipy.linalg._linalg_pythran, scipy.linalg.cython_blas, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack, scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, multidict._multidict, yarl._quoting_c, propcache._helpers_c, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket.mask, aiohttp._websocket.reader_c, frozenlist._frozenlist, sklearn.__check_build._check_build, sklearn.utils.murmurhash, scipy.spatial._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._cython_nnls, scipy._lib._uarray._uarray, scipy.linalg._decomp_interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.interpolate._fitpack, scipy.interpolate._dfitpack, scipy.interpolate._dierckx, scipy.interpolate._ppoly, scipy.interpolate._interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.interpolate._bspl, scipy.special.cython_special, scipy.stats._stats, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._biasedurn, scipy.stats._stats_pythran, scipy.stats._levy_stable.levyst, scipy.stats._ansari_swilk_statistics, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.ndimage._nd_image, scipy.ndimage._rank_filter_1d, _ni_label, scipy.ndimage._ni_label, sklearn.utils._openmp_helpers, sklearn.utils._logistic_sigmoid, sklearn.utils.sparsefuncs_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.utils._typedefs, sklearn.utils._readonly_array_wrapper, sklearn.metrics._dist_metrics, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.utils._cython_blas, sklearn.utils._heap, sklearn.utils._sorting, sklearn.utils._vector_sentinel, sklearn.metrics._pairwise_distances_reduction, sklearn.metrics._pairwise_fast, sklearn.utils._random, markupsafe._speedups, scipy.fftpack.convolve, tornado.speedups, greenlet._greenlet (total: 228)
```
I am experiencing that regardless of the number of workers I use (one or multiple). I am always using the DDP strategy though. This is how I am initializing the PyTorch Lightning trainer in my training loop:
```
trainer = Trainer(
strategy=ray.train.lightning.RayDDPStrategy(),
plugins=[ray.train.lightning.RayLightningEnvironment()],
```
Beyond that, I'm not sure what information would be relevant, but I am happy to provide more info about the way I am running my training jobs upon request.
The same backtrace has been reported before in a comment on [this issue](https://github.com/ray-project/ray/issues/49998), however the original description of that issue seems unrelated, so I am creating a new issue here.
### Versions / Dependencies
Ray: 2.43.0
I think I've only experienced this with Ray 2.43.0 and not in an older version.
### Reproduction script
This is hard to reproduce - it happens occasionally at the end of training.
### Issue Severity
Medium: It is a significant difficulty but I can work around it. | open | 2025-03-19T16:53:00Z | 2025-03-19T22:00:57Z | https://github.com/ray-project/ray/issues/51527 | [
"bug",
"triage",
"train"
] | jleben | 0 |
ivy-llc/ivy | pytorch | 28,164 | Fix Ivy Failing Test: jax - searching.nonzero | open | 2024-02-03T07:25:36Z | 2024-02-03T07:25:36Z | https://github.com/ivy-llc/ivy/issues/28164 | [
"Sub Task"
] | MuhammadNizamani | 0 |
|
lanpa/tensorboardX | numpy | 138 | add_graph raises RuntimeError when parsing constant node | Hello
I got "RuntimeError: VariableType::ID() not implemented" when parsing constant nodes in the computation graph.
code to reproduce the RuntimeError:
```python
class SimpleModel(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return x * 2
input = (torch.zeros(1, 2, 3),)
model = SimpleModel()
with SummaryWriter(comment='test') as w:
w.add_graph(model, input)
```
Stack:
File "...tensorboardX\writer.py", line 419, in add_graph
self.file_writer.add_graph(graph(model, input_to_model, verbose))
File "...tensorboardX\graph.py", line 85, in graph
list_of_nodes = parse(graph)
File "...tensorboardX\graph.py", line 28, in parse
attrs = {k: n[k] for k in n.attributeNames()}
File "...tensorboardX\graph.py", line 28, in <dictcomp>
attrs = {k: n[k] for k in n.attributeNames()}
File "...torch\onnx\utils.py", line 444, in _node_getitem
return getattr(self, sel)(k)
RuntimeError: VariableType::ID() not implemented
The stack shows that calling `Constant["value"]` will give `RuntimeError`
str(n) = "%1 : Dynamic = onnx::Constant\[value={2}\](), scope: SimpleModel"
n["value"] ==> RuntimeError
So is this a bug or an unimplemented feature of ONNX ?
My temporary workaround for this is to set `attrs = str(n)` if `{k: n[k] for k in n.attributeNames()}` raises `RuntimeError`.
pytorch = 0.4.0
tensorflow = 1.8.0
tensorboardX = 1.2
| closed | 2018-05-03T08:12:24Z | 2018-05-08T16:03:23Z | https://github.com/lanpa/tensorboardX/issues/138 | [] | jhg543 | 1 |
Gozargah/Marzban | api | 1,116 | alpn custom config | درود alpn رو در پنل تعین میکنم ولی با کاستوم کانفیگ منتقل نمی شود
کانفیگ reality و vless ws tls رو تست کردم | closed | 2024-07-16T00:42:41Z | 2024-07-16T10:27:46Z | https://github.com/Gozargah/Marzban/issues/1116 | [
"Bug"
] | w0l4i | 2 |
d2l-ai/d2l-en | deep-learning | 2,460 | Cannot install package `d2l` due to failure of collecting `matplotlib` version 3.4 | Dear all,
I am on Macbook Pro Early 2011 and macOS 10.13.6. I was trying to install the `d2l` package and it outputs the following. I have `matplotlib` version 3.6.2 and `matplotlib-inline` version 0.1.6 installed on my machine. The Python version I use is 3.11.2, which I think is the latest(?).
```
➜ ~ python3 -m pip install -U d2l
Defaulting to user installation because normal site-packages is not writeable
Collecting d2l
Using cached d2l-0.17.6-py3-none-any.whl (112 kB)
Collecting jupyter==1.0.0
Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Collecting d2l
Using cached d2l-0.17.5-py3-none-any.whl (82 kB)
Using cached d2l-0.17.4-py3-none-any.whl (82 kB)
Requirement already satisfied: numpy==1.22.2 in ./Library/Python/3.11/lib/python/site-packages (from d2l) (1.22.2)
Collecting matplotlib==3.4
Using cached matplotlib-3.4.0.tar.gz (37.1 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [84 lines of output]
/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/dist.py:286: SetuptoolsDeprecationWarning: The namespace_packages parameter is deprecated, consider using implicit namespaces instead (PEP 420).
warnings.warn(msg, SetuptoolsDeprecationWarning)
Edit setup.cfg to change the build options; suppress output with --quiet.
BUILDING MATPLOTLIB
matplotlib: yes [3.4.0]
python: yes [3.11.2 (main, Feb 10 2023, 08:25:48) [Clang 9.1.0
(clang-902.0.39.2)]]
platform: yes [darwin]
tests: no [skipping due to configuration]
macosx: yes [installing]
running egg_info
creating /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info
writing /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/PKG-INFO
writing dependency_links to /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/dependency_links.txt
writing namespace_packages to /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/namespace_packages.txt
writing requirements to /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/requires.txt
writing top-level names to /private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/top_level.txt
writing manifest file '/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-pip-egg-info-1pzecj58/matplotlib.egg-info/SOURCES.txt'
/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/egg_info.py:643: SetuptoolsDeprecationWarning: Custom 'build_py' does not implement 'get_data_files_without_manifest'.
Please extend command classes from setuptools instead of distutils.
warnings.warn(
Python(31612,0x7fff8c346380) malloc: *** mach_vm_map(size=18446744072367222784) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
init_dgelsd failed init
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-install-hg_s7w6r/matplotlib_63c8b20ecb3849b6b370d5c25a120a6d/setup.py", line 258, in <module>
setup( # Finally, pass this all along to distutils to do the heavy lifting.
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
self.run_command(cmd)
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/egg_info.py", line 308, in run
self.find_sources()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/egg_info.py", line 316, in find_sources
mm.run()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/egg_info.py", line 560, in run
self.add_defaults()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/egg_info.py", line 597, in add_defaults
sdist.add_defaults(self)
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/command/sdist.py", line 106, in add_defaults
super().add_defaults()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/command/sdist.py", line 252, in add_defaults
self._add_defaults_ext()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/command/sdist.py", line 336, in _add_defaults_ext
build_ext = self.get_finalized_command('build_ext')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/cmd.py", line 306, in get_finalized_command
cmd_obj.ensure_finalized()
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/setuptools/_distutils/cmd.py", line 109, in ensure_finalized
self.finalize_options()
File "/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-install-hg_s7w6r/matplotlib_63c8b20ecb3849b6b370d5c25a120a6d/setup.py", line 90, in finalize_options
self.distribution.ext_modules[:] = [
^
File "/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-install-hg_s7w6r/matplotlib_63c8b20ecb3849b6b370d5c25a120a6d/setup.py", line 90, in <listcomp>
self.distribution.ext_modules[:] = [
^
File "/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-install-hg_s7w6r/matplotlib_63c8b20ecb3849b6b370d5c25a120a6d/setupext.py", line 383, in get_extensions
add_numpy_flags(ext)
File "/private/var/folders/xt/4_gn7ry143zg0b8vwddc51nw0000gn/T/pip-install-hg_s7w6r/matplotlib_63c8b20ecb3849b6b370d5c25a120a6d/setupext.py", line 498, in add_numpy_flags
import numpy as np
File "/Users/Latisp/Library/Python/3.11/lib/python/site-packages/numpy/__init__.py", line 380, in <module>
raise RuntimeError(msg)
RuntimeError: Polyfit sanity test emitted a warning, most likely due to using a buggy Accelerate backend.
If you compiled yourself, more information is available at:
https://numpy.org/doc/stable/user/building.html#accelerated-blas-lapack-libraries
Otherwise report this to the vendor that provided NumPy.
RankWarning: Polyfit may be poorly conditioned
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
``` | closed | 2023-03-29T11:07:25Z | 2023-04-02T19:04:09Z | https://github.com/d2l-ai/d2l-en/issues/2460 | [] | guojing0 | 0 |
yzhao062/pyod | data-science | 455 | How to use of anogan ?? | Hello Sir,
I have interesting on Anomaly Detection Task.
My task is to detect anomaly on Image.
So I tried to find anogan model and I got some source from another github-site.
Surprisingly, PyOD already has anogan.
(But I think that PyOD's input type is feature vector-based, but anogan's input type is image-based)
If you don't mind, please tell me sample code about anogan on PyOD.
The input type of pyOD is image-based, Right??
Thanks,
Edward Cho. | open | 2022-11-09T01:34:39Z | 2022-12-15T12:01:59Z | https://github.com/yzhao062/pyod/issues/455 | [] | edwardcho | 7 |
alpacahq/alpaca-trade-api-python | rest-api | 602 | [Bug]: 'Stock' object has no attribute 'df' | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I get the following error and could not check if data is updated:
`'Stock' object has no attribute 'df'`
### Expected Behavior
I should not get any error and I should be able to check if data is updated
### Steps To Reproduce
```markdown
Python 3.7 64bit
Alpaca-trade-api 2.0.0
```
### Anything else?
The bot was running ok with Alpaca-trade-api version__ = '0.42'
The error is present after upgrade to Alpaca-trade-api 2.0.0 | closed | 2022-04-08T13:48:58Z | 2022-04-12T04:19:00Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/602 | [] | sshcli | 0 |
biosustain/potion | sqlalchemy | 89 | No way to manually serialize objects? | If I write my own route, that for example creates a new object, sometimes I would want to send that object back to the client as JSON. It's not possible to return a SQLAlchemy object - I get:
```
raise TypeError(repr(o) + " is not JSON serializable")
```
Is there an easy way to pass Potion an obect and have it return the serialised form to the client?
| closed | 2016-07-04T16:04:54Z | 2016-07-04T17:20:16Z | https://github.com/biosustain/potion/issues/89 | [] | dabeeeenster | 5 |
kubeflow/katib | scikit-learn | 1,864 | Support hierarchical hyperparameter combinations | /kind feature
**Describe the solution you'd like**
I'd like to be able to do hyperparameter tuning over a hierarchical hyperparameter space. Specifically, I'd like to be able to do something like this Optuna example:
https://github.com/optuna/optuna-examples/blob/main/sklearn/sklearn_simple.py#L24-L32
Where first a particular classifier is chosen, and then relevant hyperparameters for the chosen classifier are selected. This might even go on further, with particular parameters for SGD vs Adam.
**Anything else you would like to add:**
Although Katib can use Optuna for hyperparameter suggestions, I didn't see a way get Katib to use Optuna features like the linked example.
---
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| open | 2022-05-11T20:08:54Z | 2023-09-05T02:36:30Z | https://github.com/kubeflow/katib/issues/1864 | [
"kind/feature",
"lifecycle/frozen"
] | knkski | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 240 | Add support for the `args=()` argument to the *_minimize API | Hi, maybe I'm doing this wrong, but I was trying to implement a version of what's in the hyperparameter optimization example, but not in an interactive notebook. If I set up the training data and search space in main() or another function, the only way for the objective function that I pass to the minimizer to access those variables is if I make them global. Is that right?
Would it make sense for there be another way to pass additional variables/kwargs to the objective function in the minimizer call? But then I guess the minimizer might then have to return those objects, too, so maybe not.
Is there any way to avoid globals like I did below?
```
def objective(params):
max_depth, learning_rate, max_features, min_samples_split, min_samples_leaf = params
reg.set_params(max_depth=max_depth,
learning_rate=learning_rate,
max_features=max_features,
min_samples_split=min_samples_split,
min_samples_leaf=min_samples_leaf)
return -np.mean(cross_val_score(reg, X_train, y_train, cv=5, n_jobs=-1,
scoring="neg_mean_absolute_error"))
```
```
def optimize_ml(X, y):
global X_train
global y_train
global reg
X_train, y_train, X_test, y_test = train_test_split(X, y, test_size=.25)
reg = GradientBoostingRegressor(n_estimators=50, random_state=0)
space = [(1, 5), # max_depth
(10**-5, 10**-1, "log-uniform"), # learning_rate
(1, X.shape[1]), # max_features
(2, 30), # min_samples_split
(1, 30)] # min_samples_leaf
x0 = [3, 0.01, 6, 2, 1]
res_gp = gp_minimize(objective, space, x0=x0, n_calls=50, random_state=0)
```
| open | 2016-09-29T03:07:31Z | 2023-05-22T12:08:40Z | https://github.com/scikit-optimize/scikit-optimize/issues/240 | [
"API",
"Easy"
] | ratrock | 9 |
ansible/awx | django | 15,833 | Error 504 Gateway Time-out while executing a schedule | ### Please confirm the following
- [x] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [x] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [x] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [x] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
On the new AWX user interface, after creating a schedule with a valid rule, when clicking on the button "Finish" at step _4, we have an error 504 **Gateway Time-out** after 1 or 2 minutes. Indeed, when we repeat the process, we are disconnected from AWX.
### AWX version
24.6.1
### Select the relevant components
- [x] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [x] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
1 - After connection, we have an information panel "A tech preview of the new AWX user interface can be found here." Click on **here** to access to the new interface.
2 - Click on "Schedules" on left panel.
3 - On step 1, choose a name for schedule name, for instance "Test schedule" and a time zone, for instance **Europe/Paris**
4 - On step 2, define a valid rule, for instance "DTSTART;TZID=Europe/Paris:20250213T081500
RRULE:FREQ=DAILY;INTERVAL=1;WKST=SU;BYSETPOS=3;BYHOUR=16;BYMINUTE=0;BYSECOND=0"
5 - No need to define any exception on step 3
6 - On step 4, click on button "Finish" to launch the schedule
### Expected results
The schedule runs and ends after a few seconds.
### Actual results
After approximately 1 or 2 minutes, an error occurred "Gateway Time-out".
When we repeat the process, we noticed the same error and, after several times, we are disconnected from AWX.
Sometimes, when we repeat the process, we noticed and othe error "This schedule will never run. If you have defined exceptions it is likely that the exceptions cancel out all the rules defined in the rules step."
### Additional information
_No response_ | closed | 2025-02-13T09:26:20Z | 2025-03-03T15:44:40Z | https://github.com/ansible/awx/issues/15833 | [
"type:bug",
"community"
] | gtho02 | 1 |
mwaskom/seaborn | matplotlib | 2,791 | UserWarning: ``square=True`` ignored in clustermap | Hi,
Whenever I run
```
sb.clustermap(
master_table_top_10_pearson.transpose(),
square=True
)
```
for my DataFrame which looks like a perfectly normal DataFrame

I get `UserWarning: ``square=True`` ignored in clustermap warnings.warn(msg)`. However, I need the squares in my plot. I cannot see a reason why the parameter gets ignored.
Thank you very much! | closed | 2022-05-09T10:38:50Z | 2022-05-09T10:45:44Z | https://github.com/mwaskom/seaborn/issues/2791 | [] | Zethson | 1 |
tflearn/tflearn | tensorflow | 261 | Where can I get trained models? | Hi, everyone,
I want some trained models (VGG, Inception, AlexNet) for feature extraction, but I cannot find any. For me, because of the GTX980 memory limitation, retraining a VGG model on imagenet is impossible. I'll be very grateful if someone could offer some trained models.
| open | 2016-08-09T01:27:12Z | 2017-10-30T03:08:15Z | https://github.com/tflearn/tflearn/issues/261 | [] | CharlesShang | 5 |
thtrieu/darkflow | tensorflow | 399 | Cannot install on windows 10 | I am having trouble installing on windows 10, I get the error "The system cannot find the path specified: 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\PlatformSDK\\lib".
Does it have to use visual studio 14? How would I be able to change it to use the version of visual studio I have (which is 15)
Here is the console output:
 | closed | 2017-09-13T10:40:34Z | 2021-08-19T13:55:08Z | https://github.com/thtrieu/darkflow/issues/399 | [] | j-fan | 6 |
seleniumbase/SeleniumBase | web-scraping | 2,839 | "Choose your search engine" google chrome popup | Since this week this popup appears, which seems to be very similar to the popup about privacy which could be solved with this argument: --add_argument('--disable-features=PrivacySandboxSettings4').
Maybe someone has a clue how to get past this one. The html tag is "search-engine-choice-app".
Thanks in advance


| closed | 2024-06-06T09:02:39Z | 2024-09-23T22:40:26Z | https://github.com/seleniumbase/SeleniumBase/issues/2839 | [
"workaround exists",
"not enough info"
] | MiMaPr | 2 |
FactoryBoy/factory_boy | django | 504 | Allow Renamed Keyword Arguments to be Optional | #### The problem
When using the `rename` functionality to rename a keyword argument, an error is thrown if the keyword argument is not passed into the factory. For example, given a class with a field called `total_score` and adding `rename = {'score': 'total_score'}` will throw the following exception if `score` is not passed into the factory.
```
@classmethod
def _rename_fields(cls, **kwargs):
for old_name, new_name in cls._meta.rename.items():
> kwargs[new_name] = kwargs.pop(old_name)
E KeyError: 'score'
```
#### Proposed solution
Allow the model instance to be created if the keyword argument is not present, essentially just ignoring the keyword conversion.
| closed | 2018-08-13T18:38:46Z | 2019-03-28T00:34:53Z | https://github.com/FactoryBoy/factory_boy/issues/504 | [
"Feature",
"BeginnerFriendly"
] | mrname | 3 |
Buuntu/fastapi-react | sqlalchemy | 170 | cannot load localhost:8000 | whenever I run `docker-compose up -d` everything works, but when I go to `localhost:8000` nginx returns `499` then `504`, and it does this every time.
| open | 2021-09-06T17:33:42Z | 2021-11-02T02:32:33Z | https://github.com/Buuntu/fastapi-react/issues/170 | [] | xFGhoul | 1 |
Kludex/mangum | fastapi | 60 | Handle Exception on mangum | Thank you for creating great library!!
I found the not good behavior when calling `exception_handler` with `Exception` on `FastAPI`
I defined `exception_hanlder` for `Exception` which returns `JSONResponse`.
```
@app.exception_handler(Exception)
def all_exception_handler(_: Any, error: Exception):
return JSONResponse(status_code=500, content={"message": error.args, "code": 500})
```
I want to get a response on which status code is 500 with content.
FastAPI(Starrett) raises `Exception`.
Mangum doesn't handle the exception and lambda was die. I can't get an excepted response from APIGateway.
However, `Uvicorn` handles the exception and returns expected response.
Could you change Mangum to return excepted response?
If you need the PR then, I can do it.
Thank you.
| closed | 2019-10-26T05:18:07Z | 2019-10-27T12:33:18Z | https://github.com/Kludex/mangum/issues/60 | [
"improvement"
] | koxudaxi | 3 |
AutoGPTQ/AutoGPTQ | nlp | 669 | [BUG] Cannot install from source | **Describe the bug**
$ pip install -vvv --no-build-isolation -e .
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/vic/workspace/AutoGPTQ/setup.py", line 111, in <module>
local_arch_list = detect_local_sm_architectures()
File "/home/vic/workspace/AutoGPTQ/setup.py", line 68, in detect_local_sm_architectures
arch_list[-1] += '+PTX'
IndexError: list index out of range
**Hardware details**
24GB RAM, Intel CPU
**Software version**
python 3.8.5
| open | 2024-05-12T02:07:18Z | 2024-09-24T03:55:45Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/669 | [
"bug"
] | victoryeo | 2 |
google/seq2seq | tensorflow | 295 | Run python -m unittest seq2seq.test.pipeline_test on win7 | when i run python -m unittest seq2seq.test.pipeline_test on win7 after 1 step,there is an "Permission denied" error, is the "ResourceWarning: unclosed file <_io.BufferedRandom name=6>" warning case this error?
2017-08-31 22:07:39.700066: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow li
brary wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
INFO:tensorflow:Saving checkpoints for 1 into C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr\model.ckpt.
INFO:tensorflow:Prediction followed by Target @ Step 1
====================================================================================================
SEQUENCE_END a a a 泣
c c c c SEQUENCE_END
c c c c c
泣 泣 泣 泣 SEQUENCE_END
====================================================================================================
INFO:tensorflow:loss = 1.94618, step = 1
INFO:tensorflow:Performing full trace on next step.
INFO:tensorflow:Captured full trace at step 11
INFO:tensorflow:Saved run_metadata to C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr\run_meta
INFO:tensorflow:Saved timeline to C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr\timeline.json
WARNING:tensorflow:From E:\git\seq2seq\seq2seq\training\hooks.py:133: write_op_log (from tensorflow.contrib.tfprof.tfprof_logger) is deprecated and wi
ll be removed after 2018-01-01.
Instructions for updating:
Use `tf.profiler.write_op_log. go/tfprof`
INFO:tensorflow:Saved op log to C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr
INFO:tensorflow:Saving checkpoints for 50 into C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr\model.ckpt.
INFO:tensorflow:Loss for final step: 1.93593.
INFO:tensorflow:Evaluating model now.
INFO:tensorflow:Creating AttentionSeq2Seq in mode=eval
INFO:tensorflow:
AttentionSeq2Seq:
attention.class: AttentionLayerBahdanau
attention.params: {num_units: 10}
bridge.class: seq2seq.models.bridges.ZeroBridge
bridge.params: {}
decoder.class: seq2seq.decoders.AttentionDecoder
decoder.params:
rnn_cell:
cell_class: GRUCell
cell_params: {num_units: 8}
embedding.dim: 10
embedding.init_scale: 0.04
embedding.share: false
encoder.class: seq2seq.encoders.BidirectionalRNNEncoder
encoder.params:
rnn_cell:
cell_class: GRUCell
cell_params: {num_units: 8}
inference.beam_search.beam_width: 0
inference.beam_search.choose_successors_fn: choose_top_k
inference.beam_search.length_penalty_weight: 0.0
optimizer.clip_embed_gradients: 0.1
optimizer.clip_gradients: 5.0
optimizer.learning_rate: 0.0001
optimizer.lr_decay_rate: 0.99
optimizer.lr_decay_steps: 100
optimizer.lr_decay_type: ''
optimizer.lr_min_learning_rate: 1.0e-12
optimizer.lr_staircase: false
optimizer.lr_start_decay_at: 0
optimizer.lr_stop_decay_at: 2147483647
optimizer.name: Adam
optimizer.params: {}
optimizer.sync_replicas: 0
optimizer.sync_replicas_to_aggregate: 0
source.max_seq_len: 50
source.reverse: true
target.max_seq_len: 50
vocab_source: C:\Users\ADMINI~1\AppData\Local\Temp\tmpx283xxm9
vocab_target: C:\Users\ADMINI~1\AppData\Local\Temp\tmpyhe62_cm
INFO:tensorflow:Creating vocabulary lookup table of size 7
INFO:tensorflow:Creating vocabulary lookup table of size 7
INFO:tensorflow:Creating BidirectionalRNNEncoder in mode=eval
INFO:tensorflow:
BidirectionalRNNEncoder:
init_scale: 0.04
rnn_cell:
cell_class: GRUCell
cell_params: {num_units: 8}
dropout_input_keep_prob: 1.0
dropout_output_keep_prob: 1.0
num_layers: 1
residual_combiner: add
residual_connections: false
residual_dense: false
INFO:tensorflow:Creating AttentionLayerBahdanau in mode=eval
INFO:tensorflow:
AttentionLayerBahdanau: {num_units: 10}
INFO:tensorflow:Creating AttentionDecoder in mode=eval
INFO:tensorflow:
AttentionDecoder:
init_scale: 0.04
max_decode_length: 100
rnn_cell:
cell_class: GRUCell
cell_params: {num_units: 8}
dropout_input_keep_prob: 1.0
dropout_output_keep_prob: 1.0
num_layers: 1
residual_combiner: add
residual_connections: false
residual_dense: false
INFO:tensorflow:Creating ZeroBridge in mode=eval
INFO:tensorflow:
ZeroBridge: {}
INFO:tensorflow:Starting evaluation at 2017-08-31-14:09:07
INFO:tensorflow:Restoring parameters from C:\Users\ADMINI~1\AppData\Local\Temp\tmpl_zx4mgr\model.ckpt-50
2017-08-31 22:09:09.336193: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\kernels\queue_base.cc:303] _25_dev_input_fn/paralle
l_read_1/common_queue: Skipping cancelled dequeue attempt with queue not closed
sys:1: ResourceWarning: unclosed file <_io.BufferedRandom name=9>
sys:1: ResourceWarning: unclosed file <_io.BufferedRandom name=10>
2017-08-31 22:09:11.446314: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\framework\op_kernel.cc:1192] Unknown: PermissionErr
or: [Errno 13] Permission denied: 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmpaq6c50c1'
EC:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=3>
outcome.errors.clear()
C:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=4>
outcome.errors.clear()
C:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=5>
outcome.errors.clear()
C:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=6>
outcome.errors.clear()
C:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=7>
outcome.errors.clear()
C:\Program Files\Anaconda3\lib\unittest\case.py:628: ResourceWarning: unclosed file <_io.BufferedRandom name=8>
outcome.errors.clear()
======================================================================
ERROR: test_train_infer (seq2seq.test.pipeline_test.PipelineTest)
Tests training and inference scripts.
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
return fn(*args)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1306, in _run_fn
status, run_metadata)
File "C:\Program Files\Anaconda3\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.UnknownError: PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmpaq
6c50c1'
[[Node: bleu/value = PyFunc[Tin=[DT_STRING, DT_STRING], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](b
leu/Identity, bleu/Identity_1)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\git\seq2seq\seq2seq\test\pipeline_test.py", line 148, in test_train_infer
train_script.main([])
File "E:\git\seq2seq\bin\train.py", line 272, in main
schedule=FLAGS.schedule)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_runner.py", line 209, in run
return _execute_schedule(experiment, schedule)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_runner.py", line 46, in _execute_schedule
return task()
File "E:\git\seq2seq\seq2seq\contrib\experiment.py", line 112, in continuous_train_and_eval
hooks=self._eval_hooks)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 296, in new_func
return func(*args, **kwargs)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 546, in evaluate
log_progress=log_progress)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 858, in _evaluate_model
config=self._session_config)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\evaluation.py", line 182, in _evaluate_once
session.run(eval_ops, feed_dict)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 518, in run
run_metadata=run_metadata)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 862, in run
run_metadata=run_metadata)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 818, in run
return self._sess.run(*args, **kwargs)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 972, in run
run_metadata=run_metadata)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 818, in run
return self._sess.run(*args, **kwargs)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 895, in run
run_metadata_ptr)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1124, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
options, run_metadata)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmpaq
6c50c1'
[[Node: bleu/value = PyFunc[Tin=[DT_STRING, DT_STRING], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](b
leu/Identity, bleu/Identity_1)]]
Caused by op 'bleu/value', defined at:
File "C:\Program Files\Anaconda3\lib\runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "C:\Program Files\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Program Files\Anaconda3\lib\unittest\__main__.py", line 18, in <module>
main(module=None)
File "C:\Program Files\Anaconda3\lib\unittest\main.py", line 94, in __init__
self.runTests()
File "C:\Program Files\Anaconda3\lib\unittest\main.py", line 255, in runTests
self.result = testRunner.run(self.test)
File "C:\Program Files\Anaconda3\lib\unittest\runner.py", line 176, in run
test(result)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 122, in run
test(result)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 122, in run
test(result)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "C:\Program Files\Anaconda3\lib\unittest\suite.py", line 122, in run
test(result)
File "C:\Program Files\Anaconda3\lib\unittest\case.py", line 648, in __call__
return self.run(*args, **kwds)
File "C:\Program Files\Anaconda3\lib\unittest\case.py", line 600, in run
testMethod()
File "E:\git\seq2seq\seq2seq\test\pipeline_test.py", line 148, in test_train_infer
train_script.main([])
File "E:\git\seq2seq\bin\train.py", line 272, in main
schedule=FLAGS.schedule)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_runner.py", line 209, in run
return _execute_schedule(experiment, schedule)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_runner.py", line 46, in _execute_schedule
return task()
File "E:\git\seq2seq\seq2seq\contrib\experiment.py", line 112, in continuous_train_and_eval
hooks=self._eval_hooks)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 296, in new_func
return func(*args, **kwargs)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 546, in evaluate
log_progress=log_progress)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 832, in _evaluate_model
model_fn_results = self._get_eval_ops(features, labels, metrics)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1199, in _get_eval_ops
metrics, features, labels, model_fn_ops.predictions))
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 271, in _make_metrics_ops
result[name] = metric.create_metric_ops(features, labels, predictions)
File "E:\git\seq2seq\seq2seq\metrics\metric_specs.py", line 124, in create_metric_ops
name="value")
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\ops\script_ops.py", line 203, in py_func
input=inp, token=token, Tout=Tout, name=name)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_script_ops.py", line 36, in _py_func
name=name)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
op_def=op_def)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1204, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
UnknownError (see above for traceback): PermissionError: [Errno 13] Permission denied: 'C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\tmpaq6c50c1'
[[Node: bleu/value = PyFunc[Tin=[DT_STRING, DT_STRING], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](b
leu/Identity, bleu/Identity_1)]]
----------------------------------------------------------------------
Ran 2 tests in 114.668s
FAILED (errors=1) | open | 2017-08-31T14:23:40Z | 2019-10-26T05:49:32Z | https://github.com/google/seq2seq/issues/295 | [] | fanjingwei | 12 |
matplotlib/mplfinance | matplotlib | 211 | Bug Report: Alpha setting with Panel not working | **Describe the bug**
When Adding a plot mpf.make_addplot((df['SPY']),color='black',alpha=0.3,panel=0) the Alpha Setting is not reflected once Panel number is assigned. The line remains a solid black.
**To Reproduce**
Steps to reproduce the behavior:
1. Add Panel_ratios and num_panels to mpf.plot
2. Add apds = [mpf.make_addplot((df['SPY']),color='black',alpha=0.3,panel=0)]
3. Add addplot = pads to **kwargs
**Expected behavior**
Alpha setting to work
**Screenshots**
None | closed | 2020-07-07T22:34:02Z | 2020-07-08T03:34:14Z | https://github.com/matplotlib/mplfinance/issues/211 | [
"bug"
] | PowerrNuller | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 12,328 | DML RETURNING omits other mapped cols due to bulk insert assumptions |
### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/12327
```py
from __future__ import annotations
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
from sqlalchemy import update
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
class Base(DeclarativeBase):
pass
class A(Base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(primary_key=True)
data: Mapped[str]
bs: Mapped[list[B]] = relationship("B")
class B(Base):
__tablename__ = "b"
id: Mapped[int] = mapped_column(primary_key=True)
a_id: Mapped[int] = mapped_column(ForeignKey("a.id"))
data: Mapped[str]
e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
Base.metadata.create_all(e)
s = Session(e)
s.add(
A(data='a1', bs=[B(data='b2')])
)
s.flush()
result = s.execute(
update(A).values(data='foo').where(A.id == B.a_id).returning(A.data, B.a_id, B.data)
)
print(result.all())
```
renders:
```
UPDATE a SET data=%(data)s FROM b WHERE a.id = b.a_id RETURNING a.id, a.data
```
and fails
```
sqlalchemy.exc.NoSuchColumnError: Could not locate column in row for column 'b.a_id'
``` | closed | 2025-02-09T22:46:29Z | 2025-02-10T20:26:59Z | https://github.com/sqlalchemy/sqlalchemy/issues/12328 | [
"bug",
"orm",
"near-term release",
"law of twos",
"dml"
] | zzzeek | 2 |
sammchardy/python-binance | api | 1,380 | I am getting this type of error | 2023-12-13 11:53:05,058 - crypto_trading_logger - INFO - Starting
bridge
bridge
hourtokeepscouthistory
hourtokeepscouthistory
scout_multiplier
scout_multiplier
scout_sleep_time
scout_sleep_time
api_key
api_key
api_secret_key
api_secret_key
tld
tld
current_coin
current_coin
strategy
strategy
sell_timeout
sell_timeout
buy_timeout
buy_timeout
Traceback (most recent call last):
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 203, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\connection.py", line 60, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\socket.py", line 962, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
socket.gaierror: [Errno 11001] getaddrinfo failed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 790, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 491, in _make_request
raise new_e
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 1092, in _validate_conn
conn.connect()
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 611, in connect
self.sock = sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connection.py", line 210, in _new_conn
raise NameResolutionError(self.host, self, e) from e
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x000001E34914CCD0>: Failed to resolve 'api.binance.'com'' ([Errno 11001] getaddrinfo failed)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 844, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host="api.binance.'com'", port=443): Max retries exceeded with url: /api/v3/ping (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x000001E34914CCD0>: Failed to resolve 'api.binance.'com'' ([Errno 11001] getaddrinfo failed)"))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "D:\binance-trade-bot\binance_trade_bot\__main__.py", line 5, in <module>
main()
File "D:\binance-trade-bot\binance_trade_bot\crypto_trading.py", line 18, in main
manager = BinanceAPIManager(config, db, logger)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\binance-trade-bot\binance_trade_bot\binance_api_manager.py", line 27, in __init__
self.binance_client = Client(
^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\binance\client.py", line 132, in __init__
self.ping()
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\binance\client.py", line 447, in ping
return self._get('ping', version=self.PRIVATE_API_VERSION)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\binance\client.py", line 292, in _get
return self._request_api('get', path, signed, version, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\binance\client.py", line 242, in _request_api
return self._request(method, uri, signed, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\binance\client.py", line 236, in _request
self.response = getattr(self.session, method)(uri, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\srila\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host="api.binance.'com'", port=443): Max retries exceeded with url: /api/v3/ping (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x000001E34914CCD0>: Failed to resolve 'api.binance.'com'' ([Errno 11001] getaddrinfo failed)")) | open | 2023-12-13T06:25:14Z | 2024-04-24T08:55:05Z | https://github.com/sammchardy/python-binance/issues/1380 | [] | Srilakshmi-Dirisala | 1 |
zihangdai/xlnet | tensorflow | 206 | ValueError: Cannot convert a partially known TensorShape to a Tensor: (1, 0, ?) | F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
F:\tensorflow3\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
F:\tensorflow3\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING: Logging before flag parsing goes to stderr.
W0808 21:46:41.206200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\model_utils.py:295: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
W0808 21:46:41.221700 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:858: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.
W0808 21:46:41.225200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:639: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
W0808 21:46:41.225200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:639: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.
W0808 21:46:41.225700 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:647: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.
W0808 21:46:41.255199 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\model_utils.py:27: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
W0808 21:46:41.957199 7844 lazy_loader.py:50]
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
2019-08-08 21:46:41.962200: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
I0808 21:46:41.963700 7844 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:CPU:0
W0808 21:46:41.964200 7844 cross_device_ops.py:1177] Not all devices in `tf.distribute.Strategy` are visible to TensorFlow.
W0808 21:46:41.964200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\model_utils.py:40: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.
I0808 21:46:41.964699 7844 model_utils.py:41] Use MirroredStrategy with 8 devices.
I0808 21:46:41.965199 7844 run_config.py:558] Initializing RunConfig with distribution strategies.
I0808 21:46:41.965199 7844 estimator_training.py:167] Not using Distribute Coordinator.
I0808 21:46:41.965199 7844 estimator.py:209] Using config: {'_model_dir': 'F:/kaggleData/GS_ROOT/exp/imdb/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 500, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
, '_keep_checkpoint_max': 0, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x0000000012010320>, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x000000000D96DB00>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None, '_tpu_config': TPUConfig(iterations_per_loop=500, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None, eval_training_input_configuration=2), '_cluster': None}
W0808 21:46:41.965700 7844 model_fn.py:630] Estimator's model_fn (<function get_model_fn.<locals>.model_fn at 0x0000000011EBC950>) includes params argument, but params are not passed to Estimator.
W0808 21:46:41.966200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:314: The name tf.gfile.ListDirectory is deprecated. Please use tf.io.gfile.listdir instead.
W0808 21:46:41.966200 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:318: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.
I0808 21:46:41.967200 7844 run_classifier.py:730] Num of eval samples: 4
I0808 21:46:41.967700 7844 run_classifier.py:404] Do not overwrite tfrecord F:/kaggleData/GS_ROOT/proc_data/imdb/model.model.len-512.dev.predict.tf_record exists.
W0808 21:46:41.967700 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:452: The name tf.FixedLenFeature is deprecated. Please use tf.io.FixedLenFeature instead.
I0808 21:46:41.967700 7844 run_classifier.py:461] Input tfrecord file F:/kaggleData/GS_ROOT/proc_data/imdb/model.model.len-512.dev.predict.tf_record
F:/kaggleData/prediction\imdb.tsv
<class 'function'>
W0808 21:46:41.994199 7844 deprecation.py:323] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:506: map_and_batch (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.map_and_batch(...)`.
W0808 21:46:41.994199 7844 deprecation.py:323] From F:\tensorflow3\lib\site-packages\tensorflow\contrib\data\python\ops\batching.py:273: map_and_batch (from tensorflow.python.data.experimental.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map(map_func, num_parallel_calls)` followed by `tf.data.Dataset.batch(batch_size, drop_remainder)`. Static tf.data optimizations will take care of using the fused implementation.
W0808 21:46:41.995699 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\run_classifier.py:465: The name tf.parse_single_example is deprecated. Please use tf.io.parse_single_example instead.
I0808 21:46:42.019700 7844 estimator.py:1145] Calling model_fn.
W0808 21:46:42.029199 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\xlnet.py:220: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
W0808 21:46:42.029700 7844 deprecation_wrapper.py:119] From C:\Users\hansaizhou\workspace\XLnet\xlnet.py:220: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.
I0808 21:46:42.029700 7844 modeling.py:453] memory input None
I0808 21:46:42.030200 7844 modeling.py:455] Use float type <dtype: 'float32'>
Traceback (most recent call last):
File "F:\tensorflow3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1877, in zeros
tensor_shape.TensorShape(shape))
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 326, in _tensor_shape_tensor_conversion_function
"Cannot convert a partially known TensorShape to a Tensor: %s" % s)
ValueError: Cannot convert a partially known TensorShape to a Tensor: (1, 0, ?)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\hansaizhou\workspace\XLnet\run_classifier.py", line 858, in <module>
tf.app.run()
File "F:\tensorflow3\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "F:\tensorflow3\lib\site-packages\absl\app.py", line 300, in run
_run_main(main, args)
File "F:\tensorflow3\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "C:\Users\hansaizhou\workspace\XLnet\run_classifier.py", line 827, in main
checkpoint_path=FLAGS.predict_ckpt)):
File "F:\tensorflow3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 619, in predict
features, None, ModeKeys.PREDICT, self.config)
File "F:\tensorflow3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1146, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "C:\Users\hansaizhou\workspace\XLnet\run_classifier.py", line 525, in model_fn
FLAGS, features, n_class, is_training)
File "C:\Users\hansaizhou\workspace\XLnet\function_builder.py", line 152, in get_classification_loss
input_mask=inp_mask)
File "C:\Users\hansaizhou\workspace\XLnet\xlnet.py", line 222, in __init__
) = modeling.transformer_xl(**tfm_args)
File "C:\Users\hansaizhou\workspace\XLnet\modeling.py", line 499, in transformer_xl
dtype=tf_float)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1880, in zeros
shape = ops.convert_to_tensor(shape, dtype=dtypes.int32)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\ops.py", line 1087, in convert_to_tensor
return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\ops.py", line 1145, in convert_to_tensor_v2
as_ref=False)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\ops.py", line 1224, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 305, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 246, in constant
allow_broadcast=True)
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 284, in _constant_impl
allow_broadcast=allow_broadcast))
File "F:\tensorflow3\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 467, in make_tensor_proto
nparray = np.array(values, dtype=np_dt)
TypeError: __int__ returned non-int (type NoneType)
| open | 2019-08-08T13:55:37Z | 2019-08-08T13:55:37Z | https://github.com/zihangdai/xlnet/issues/206 | [] | songruifei | 0 |
psf/requests | python | 5,984 | url "schema" should be "scheme" | https://github.com/psf/requests/blob/590350f8d094c216051510ed1dd18fe871b53b72/requests/models.py#L388-L392
I don't believe the first part of a URL is ever called a "schema." Exceptions and error messages referring to an incorrect schema are confusing, especially in contexts where actual schema errors are possible. If it isn't possible to change the `MissingSchema` exception (it looks like it is slated to be fixed in 3.x) please consider at least changing the error message.
References:
https://www.w3.org/Addressing/URL/url-spec.txt
https://github.com/psf/requests/issues/4495 | closed | 2021-11-23T23:49:06Z | 2022-03-29T02:29:06Z | https://github.com/psf/requests/issues/5984 | [] | DanLipsitt | 2 |
aimhubio/aim | tensorflow | 2,623 | Improve the Structure of the Metrics Table | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Improve the structure of the metrics table by reorganizing the columns/groups.
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
In the metrics explorer, the comparison of different metrics and runs is very difficult due to the grouping by metric. In the following, I illustrated an example with three runs and three metrics grouped by parameter a. `X` stands for some arbitrary value. The evaluation and comparison of the runs is very challenging. And here we actually have a very simple example with just a few runs/metrics.
| Group | Run | Group Config | Metric | Value | | | | Run Params | | | Actions |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Name| hparams.a | Name | Group Min | Mean | Group Max | ... | hparams.b | hparams.c | ... |
| Group 1 | Mixed: 3 Values | 0| loss | X | X | X
| | Run A | 0 | loss | | X | | | | | | S |
| | Run B | 0 | loss | | X | | | | | | S |
| | Run C | 0 | loss | | X | | | | | | S |
| Group 2 | Mixed: 3 Values | 0| acc| X | X | X
| | Run A | 0 | acc | | X | | | | | | S |
| | Run B | 0 | acc | | X | | | | | | S |
| | Run C | 0 | acc | | X | | | | | | S |
| Group 3 | Mixed: 3 Values | 0| val_loss | X | X | X
| | Run A | 0 | val_loss | | X | | | | | | S |
| | Run B | 0 | val_loss | | X | | | | | | S |
| | Run C | 0 | val_loss | | X | | | | | | S |
| Group 4 | Mixed: 3 Values | 1| loss | X | X | X
| | Run D | 1 | loss | | X | | | | | | S |
| | Run E | 1 | loss | | X | | | | | | S |
| | Run F | 1 | loss | | X | | | | | | S |
| Group 5 | Mixed: 3 Values | 1| acc| X | X | X
| | Run D | 1 | acc| | X | | | | | | S |
| | Run E | 1 | acc| | X | | | | | | S |
| | Run F | 1 | acc| | X | | | | | | S |
| Group 6 | Mixed: 3 Values | 1| val_loss | X | X | X
| | Run D | 1 | val_loss | | X | | | | | | S |
| | Run E | 1 | val_loss | | X | | | | | | S |
| | Run F | 1 | val_loss | | X | | | | | | S |
### Pitch
<!-- A clear and concise description of what you want to happen. -->
The structure can be improved by showing the metrics as columns instead of as separate groups (similar to the runs explorer).
Here is an example with the same data as above:
| Group | Run | Group Config | Metrics | | | | Run Params | | | Actions |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Name| hparams.a | loss |acc | val_loss | ... | hparams.b | hparams.c | ...
| Group 1 | Mixed: 3 Values | 0| X±X | X±X | X±X |
| | Run A | 0 | X | X | X | | | | | S |
| | Run B | 0 | X| X | X | | | | | S |
| | Run C | 0 | X| X | X | | | | | S |
| Group 2 | Mixed: 3 Values | 1 | X±X | X±X | X±X |
| | Run D | 1 | X | X | X | | | | | S |
| | Run E | 1 | X| X | X | | | | | S |
| | Run F | 1 | X| X | X | | | | | S |
In this way:
- The table has become much smaller and clearer
- Comparing different runs as well as different metrics is much easier
- Scales better when the number of runs and metrics increases
- The existing managing of the columns can be used
- In the group column, an aggregated value can be shown (e.g. mean±std, mean (min - max), or just the mean. Possibly selectable)
- `S` stands for show/hide. In this way, one could easily show/hide a run from all plots instead of changing all individually
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. --> | open | 2023-03-28T03:56:12Z | 2023-03-28T10:42:29Z | https://github.com/aimhubio/aim/issues/2623 | [
"type / enhancement",
"area / Web-UI"
] | creinders | 2 |
PaddlePaddle/PaddleNLP | nlp | 9,386 | [Bug]: 跑llama3-8b的sft微调时,报错 KeyError: 'eval_accuracy' | ### 软件环境
```Markdown
- paddlepaddle-gpu: 0.0.0.post120
- paddlenlp: 3.0.0b2
```
### 重复问题
- [X] I have searched the existing issues
### 错误描述
```Markdown
跑llama3-8b的sft微调时,报错
Traceback (most recent call last):
File "/home/LAB/huangjx/new/PaddleNLP/llm/run_finetune.py", line 730, in <module>
main()
File "/home/LAB/huangjx/new/PaddleNLP/llm/run_finetune.py", line 570, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/LAB/huangjx/.local/lib/python3.10/site-packages/paddlenlp/trainer/trainer.py", line 829, in train
return self._inner_training_loop(
File "/home/LAB/huangjx/.local/lib/python3.10/site-packages/paddlenlp/trainer/trainer.py", line 1203, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, epoch, ignore_keys_for_eval, inputs=inputs)
File "/home/LAB/huangjx/.local/lib/python3.10/site-packages/paddlenlp/trainer/trainer.py", line 1478, in _maybe_log_save_evaluate
self._save_checkpoint(model, metrics=metrics)
File "/home/LAB/huangjx/.local/lib/python3.10/site-packages/paddlenlp/trainer/trainer.py", line 2460, in _save_checkpoint
metric_value = metrics[metric_to_check]
KeyError: 'eval_accuracy'
但如果我把config中"metric_for_best_model": "accuracy",删除,就不会报错。所以应该是不支持"metric_for_best_model": "accuracy".在这个过程中我开了pp和tp
```
### 稳定复现步骤 & 代码
1. cd PaddleNLP/llm/config/llama
2. cat sft_argument.json
{
"model_name_or_path": "meta-llama/Meta-Llama-3-8B",
"dataset_name_or_path": "./data",
"output_dir": "./checkpoints/llama_sft_ckpts",
"per_device_train_batch_size": 1,
"gradient_accumulation_steps": 1,
"per_device_eval_batch_size": 1,
"eval_accumulation_steps": 1,
"num_train_epochs": 3,
"learning_rate": 3e-05,
"warmup_steps": 30,
"max_steps": 20,
"max_evaluate_steps": 3,
"logging_steps": 1,
"evaluation_strategy": "epoch",
"save_strategy": "epoch",
"src_length": 1024,
"max_length": 200,
"do_train": true,
"do_eval": true,
"disable_tqdm": true,
"load_best_model_at_end": true,
"eval_with_do_generation": false,
"metric_for_best_model": "accuracy",
"recompute": true,
"save_total_limit": 1,
"tensor_parallel_degree": 2,
"pipeline_parallel_degree": 2,
"pipeline_parallel_config": "disable_p2p_cache_shape",
"sharding": "stage2",
"zero_padding": false,
"unified_checkpoint": false,
"use_flash_attention": false
}
3. python3 -u -m paddle.distributed.launch --gpus "0,1,2,3" run_finetune.py ./config/llama/sft_argument.json | closed | 2024-11-07T07:10:03Z | 2025-01-23T00:20:21Z | https://github.com/PaddlePaddle/PaddleNLP/issues/9386 | [
"bug",
"stale"
] | hjx620 | 3 |
inventree/InvenTree | django | 8,997 | [FR] Import CSV to Add Materials in One Click | ### Please verify that this feature request has NOT been suggested before.
- [x] I checked and didn't find a similar feature request
### Problem statement
Hello,
First, thank you for your great work on Inventree!
I would like to know if it is possible to import a CSV file to add materials (or other items) in one click. If this feature is not available, would it be possible to implement it? It would be very useful for bulk additions instead of manually entering each item.
### Suggested solution
In my use case, I would like a feature that allows users to upload a CSV file containing item details (e.g., name, description, quantity, supplier, price, etc.), and have Inventree automatically create the entries.
A possible implementation could include:
A simple UI option under the "Add Material" section to upload a CSV file.
A standardized CSV format with predefined columns.
An option to map CSV columns to Inventree fields if needed.
A preview step before confirming the import.
### Describe alternatives you've considered
Using an API endpoint to bulk-add materials via an external script.
A spreadsheet import feature within the database interface.
However, a built-in CSV import would be the most user-friendly solution.
### Examples of other systems
this feature can be implemented by allowing users to upload a structured CSV file, which is then automatically parsed and added to the inventory. The system also provides error handling and validation before finalizing the import.
### Do you want to develop this?
- [ ] I want to develop this. | closed | 2025-01-30T15:56:41Z | 2025-01-31T20:28:04Z | https://github.com/inventree/InvenTree/issues/8997 | [
"question"
] | ineselhajahmed11 | 0 |
lepture/authlib | flask | 531 | Automatic token refresh when using client_credentials | I'm experimenting with the Httpx Client using `client_credentials` grant. The automatic token refresh does not work as I expected.
`client_id` and `client_secret` and `token_endpoint` are given when the client is created, so all necessry information is available to fetch the token.
When making a request I get a `MissingTokenError` exception because I didn't supply a token:
https://github.com/lepture/authlib/blob/ee4337cf7c825349dd23870822a3cc7df123097f/authlib/integrations/httpx_client/oauth2_client.py#L198-L202
I had expected that the token would be automatically fetched if none is available and it can automatically be fetched. Thats what the call to `ensure_active_token()` does.
I could cheat the system by specifying a dummy token when creating the client (-1 because of #530):
```
token={'expires_at': -1, 'access_token': ''}
```
Is it deliberate that in this situation, when the token can be fetched without user intervention, that an initial token must be supplied (via token keyword or a explicit call to `fetch_token()`)? | open | 2023-02-15T14:16:20Z | 2023-12-10T12:24:42Z | https://github.com/lepture/authlib/issues/531 | [
"bug",
"good first issue"
] | NomAnor | 3 |
miguelgrinberg/python-socketio | asyncio | 393 | Intermittent long delay during connection upgrade | **Summary**
We are using `python-socketio` to push out regular service status updates to browser clients, which are using the `socket.io` JS client library. Most of the time it works fine but we are intermittently seeing a situation where there is a ~30 second delay part-way through the connection upgrade process, and no messages are received during this time. Eventually the connection is closed (although the process looks a bit messy) and a reconnect occurs, after which everything works fine again.
We can reproduce this fairly easily with a unit test that repeatedly navigates to the relevant page and away again, forcing the socket to be recreated and a connection re-established each time. We reliably run into the issue after a few iterations of this.
**Versions**
`python-socketio`: 4.3.1 (have also tried 4.4.0)
`python-engineio`: 3.10.0 (have also tried 3.11.0)
`socket.io JS client`: 2.3.0 (have also tried 2.2.0)
**Code**
This is a simplified version of our server code, which is running inside a Docker container within a Kubernetes pod:
```python
from gevent import monkey, pywsgi
monkey.patch_all()
from geventwebsocket.handler import WebSocketHandler
import logging
import socketio
import time
logging.getLogger('socketio.server').setLevel(logging.INFO)
logging.getLogger('engineio.server').setLevel(logging.INFO)
sio = socketio.Server(
logger=True,
engineio_logger=True,
cors_allowed_origins="*",
async_mode='gevent'
)
app = socketio.WSGIApp(sio)
@sio.on('connect')
def connect(sid, environ):
logger.info(f'Client connected with session id: {sid}')
logger.info(f'Environment is: {environ}')
@sio.on('disconnect')
def disconnect(sid):
logger.info(f'Client disconnected from session id: {sid}')
@sio.on('join')
def join(sid, room):
sio.enter_room(sid, room)
logger.info(f'Client joining room {room} in session {sid}')
def generate_update_message():
# Do some work here to generate the right status message
# ...
def update_loop():
while True:
# Generate and emit an update every second
update = generate_update_message()
sio.emit('update', update, room='admin')
sio.sleep(0.1)
time.sleep(1.0)
def main():
sio.start_background_task(update_loop)
pywsgi.WSGIServer(('', 8080), app, handler_class=WebSocketHandler).serve_forever()
```
The relevant bit of our client code is:
```javascript
function admin_vm() {
const self = this;
self.socket = io.connect({
path: window.config.project_url + 'status/system/socket.io'
});
self.socket.on('connect', function() {
console.log('Socket connected, joining admin room');
self.socket.emit('join', 'admin');
});
self.socket.on('update', function (update) {
update = JSON.parse(update);
const u = update;
console.log(
'Update for system status left server [' + u.timestamp +
'] arrived here [' + new Date().toString() + '] update count [' + update_count++ + ']'
);
// Apply the update to the UI here...
});
}
```
**Logs**
I've captured some detailed logs of both client and server both when the connection works and when it doesn't (see attached).
[bad-client.log](https://github.com/miguelgrinberg/python-socketio/files/3940079/bad-client.log)
[bad-server.log](https://github.com/miguelgrinberg/python-socketio/files/3940080/bad-server.log)
[good-client.log](https://github.com/miguelgrinberg/python-socketio/files/3940081/good-client.log)
[good-server.log](https://github.com/miguelgrinberg/python-socketio/files/3940082/good-server.log)
When the issue occurs, the client log clearly shows the delay while it's probing for the availability of the websocket transport:
```
12:58:46.220 socket.io.js 391:131 "engine.io-client:socket probe transport \"%s\" pong +7ms" "websocket"
12:58:46.220 socket.io.js 391:131 "engine.io-client:socket pausing current transport \"%s\" +1ms" "polling"
12:58:46.220 socket.io.js 391:131 "engine.io-client:polling we are currently polling - waiting to pause +8ms"
12:59:11.356 socket.io.js 391:131 "engine.io-client:socket writing ping packet - expecting pong within %sms +25s" 60000
12:59:17.224 socket.io.js 391:131 "engine.io-client:polling polling got data %s +31s" ArrayBuffer(0)
12:59:17.224 socket.io.js 391:131 "engine.io-client:polling pre-pause polling complete +2ms"
12:59:17.224 socket.io.js 391:131 "engine.io-client:polling paused +1ms"
12:59:17.224 socket.io.js 391:131 "engine.io-client:socket changing transport and sending upgrade packet +6s"
12:59:17.224 socket.io.js 391:131 "engine.io-client:socket setting transport %s +1ms" "websocket"
12:59:17.224 socket.io.js 391:131 "engine.io-client:socket clearing existing transport %s +0ms" "polling"
12:59:17.224 socket.io.js 391:131 "engine.io-client:polling ignoring poll - transport state \"%s\" +4ms" "paused"
12:59:17.224 socket.io.js 391:131 "engine.io-client:socket flushing %d packets in socket +3ms" 1
```
On the server side, we see part of the upgrade process and then it seems to stop (while still emitting update messages)...and eventually the server gives up and closes the socket:
```
12:58:46,184 1 MainProcess DEBUG Attempting to upgrade connection - geventwebsocket.handler
12:58:46,185 1 MainProcess DEBUG WebSocket request accepted, switching protocols - geventwebsocket.handler
12:58:46,185 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Received request to upgrade to websocket - engineio.server
12:58:46,186 1 MainProcess DEBUG Initializing WebSocket - geventwebsocket.handler
12:58:46,187 1 MainProcess DEBUG Validating WebSocket request - geventwebsocket.handler
12:58:46,187 1 MainProcess DEBUG Can only upgrade connection if using GET method. - geventwebsocket.handler
12:58:46,187 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Received packet MESSAGE data 2["join","admin"] - engineio.server
12:58:46,187 1 MainProcess INFO received event "join" from 6294b24868144273b4a9bceaf0e439f9 [/] - socketio.server
12:58:46,188 1 MainProcess INFO ::ffff:10.0.9.139 - - [12:58:46] "POST /socket.io/?EIO=3&transport=polling&t=MxglQ0a&sid=6294b24868144273b4a9bceaf0e439f9 HTTP/1.1" 200 208 0.001325 - geventwebsocket.handler
12:58:46,188 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9 is entering room admin [/] - socketio.server
12:58:46,188 1 MainProcess INFO Client joining room admin in session 6294b24868144273b4a9bceaf0e439f9 - admin.status
12:58:46,190 1 MainProcess DEBUG Initializing WebSocket - geventwebsocket.handler
12:58:46,190 1 MainProcess DEBUG Validating WebSocket request - geventwebsocket.handler
12:58:46,199 1 MainProcess INFO ::ffff:10.0.9.139 - - [12:58:46] "GET /socket.io/?EIO=3&transport=polling&t=MxglQ0c&sid=6294b24868144273b4a9bceaf0e439f9 HTTP/1.1" 200 159 0.009237 - geventwebsocket.handler
12:58:46,215 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
12:58:46,217 1 MainProcess DEBUG Initializing WebSocket - geventwebsocket.handler
12:58:46,217 1 MainProcess DEBUG Validating WebSocket request - geventwebsocket.handler
12:58:47,400 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
12:58:48,576 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
...
12:59:14,619 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
12:59:15,841 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
12:59:17,045 1 MainProcess INFO emitting event "update" to admin [/] - socketio.server
12:59:17,046 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Client is gone, closing socket - engineio.server
12:59:17,046 1 MainProcess INFO Client disconnected from session id: 6294b24868144273b4a9bceaf0e439f9 - admin.status
12:59:17,046 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Client is gone, closing socket - engineio.server
12:59:17,047 1 MainProcess INFO ::ffff:10.0.9.139 - - [12:59:17] "GET /socket.io/?EIO=3&transport=polling&t=MxglQ14&sid=6294b24868144273b4a9bceaf0e439f9 HTTP/1.1" 200 155 30.829922 - geventwebsocket.handler
12:59:17,064 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Upgrade to websocket successful - engineio.server
12:59:17,065 1 MainProcess INFO 6294b24868144273b4a9bceaf0e439f9: Received packet PING data None - engineio.server
12:59:17,065 1 MainProcess INFO Receive error -- socket is closed - engineio.server
12:59:17,068 1 MainProcess DEBUG Closed WebSocket - geventwebsocket.handler
12:59:17,069 1 MainProcess DEBUG Failed to write closing frame -> closing socket - geventwebsocket.handler
12:59:17,069 1 MainProcess DEBUG Closed WebSocket - geventwebsocket.handler
```
As you can see in the full log, a re-connection follows and it works, but we really want to eliminate this 30 second delay as it leads to a bad user experience.
**Workaround**
As a test, we tried using the websocket transport directly instead of starting with long polling (as described at https://socket.io/docs/client-api/#With-websocket-transport-only):
```javascript
self.socket = io.connect({
path: window.config.project_url + 'status/system/socket.io',
transports: ['websocket'] // Default to websocket transport first, only falling back to long-polling on connection failure
});
self.socket.on('reconnect_attempt', () => {
// On reconnection, reset the transports option, as the Websocket connection may
// have failed (caused by proxy, firewall, browser, ...)
self.socket.io.opts.transports = ['polling', 'websocket'];
});
```
This seems to solve the problem - our test case can go through hundreds of iterations without any problem.
| closed | 2019-12-09T14:40:01Z | 2020-03-10T23:39:44Z | https://github.com/miguelgrinberg/python-socketio/issues/393 | [
"investigate"
] | james-tisato-kortical | 5 |
deepset-ai/haystack | nlp | 8,692 | Document ID doesn't updated upon metadata update | **Describe the bug**
If you assign the `meta` field post initialization to a `Document`, the id of the document doesn't get updated.
This is e.g. done in the [PyPDFConverter](https://github.com/deepset-ai/haystack/blob/28ad78c73d6c11c9b77089aba42799508178a2fa/haystack/components/converters/pypdf.py#L225).
Documents having the same ID although they have different metadata leads to issues with document stores and duplicate policy `OVERWRITE` as all documents end up as the same document then and even overwrite each other.
**Error message**
Error that was thrown (if available)
**Expected behavior**
The ID should update itself if the metadata is changed. Same applies to the other properties.
**Additional context**
Ideally we find a solution that the ID is automatically updated but also can be overridden manually?
**To Reproduce**
```python
def test_set_meta_afterwards():
doc = Document()
old_id = doc.id
doc.meta = {"test": 10}
assert doc.meta == {"test": 10}
assert doc.id != old_id
```
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number):
- DocumentStore:
- Reader:
- Retriever:
| closed | 2025-01-09T12:23:59Z | 2025-02-13T09:01:32Z | https://github.com/deepset-ai/haystack/issues/8692 | [
"P3"
] | wochinge | 2 |
jofpin/trape | flask | 20 | No victims are caught. | Basically, when I go to the url that you send to the victims I get nothing. Did I miss a installation step? Becuase I installed all the requirements and followed the simple installation steps and it does not launch. I'm on kali so maybe it doesn't work for kali? Wanted to try this thing out so bad lol. | closed | 2017-11-29T04:48:44Z | 2018-09-19T16:26:02Z | https://github.com/jofpin/trape/issues/20 | [] | Bry-fi | 1 |
MolSSI/cookiecutter-cms | pytest | 6 | Sphinx Theme | Currently the Sphinx theme is Alabaster which I have always found... difficult. Any object to changing this to the RTD theme? | closed | 2018-03-16T19:03:05Z | 2018-10-05T14:33:07Z | https://github.com/MolSSI/cookiecutter-cms/issues/6 | [] | dgasmith | 1 |
the0demiurge/ShadowSocksShare | flask | 41 | 如何修改下链接订阅源导入的文字 | 想修改一下一导入订阅源的网址就显示的名称,“Charles Xu”这样改成其他的名称像坚果面馆这样,不知道是那个源码配置文件;随便再问个问题,close的issue是不是没有通知提示了?因为仓库一直无人问津,所以这方面不是很清楚。
 | closed | 2018-04-25T14:11:26Z | 2018-05-16T14:15:53Z | https://github.com/the0demiurge/ShadowSocksShare/issues/41 | [] | hoochanlon | 1 |
yunjey/pytorch-tutorial | pytorch | 70 | I met this error: | Hi, @jtoy @hunkim @Kongsea @DingKe @JayParks
I met this error:
sgiO2:image_captioning sgi$ python build_vocab.py
loading annotations into memory...
Traceback (most recent call last):
File "build_vocab.py", line 77, in <module>
main(args)
File "build_vocab.py", line 59, in main
threshold=args.threshold)
File "build_vocab.py", line 31, in build_vocab
coco = COCO(json)
File "/Library/Python/2.7/site-packages/pycocotools/coco.py", line 84, in __init__
dataset = json.load(open(annotation_file, 'r'))
IOError: [Errno 2] No such file or directory: '/usr/share/mscoco/annotations/captions_train2014.json'
What's wrong with me?
| closed | 2017-10-07T14:06:55Z | 2018-05-10T08:58:21Z | https://github.com/yunjey/pytorch-tutorial/issues/70 | [] | bemoregt | 1 |
cvat-ai/cvat | computer-vision | 8,276 | Introduction to CVAT and Datumaro | I have a data set uploaded and annotated on CVAT and want to add images to the dataset. It's not obvious how to do this. Create a new project/task etc. ? But how are the images added to the existing data set? | closed | 2024-08-07T23:20:52Z | 2024-08-08T05:55:22Z | https://github.com/cvat-ai/cvat/issues/8276 | [] | palmcorp | 1 |
Yorko/mlcourse.ai | scikit-learn | 429 | Topic 2: typos | In
`mlcourse.ai-master/jupyter_english/topic02_visual_data_analysis/topic2_visual_data_analysis.ipynb`
"ellpise" instead of "ellipse" | closed | 2018-12-02T15:07:11Z | 2018-12-02T15:22:04Z | https://github.com/Yorko/mlcourse.ai/issues/429 | [] | Toundra | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.