repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---
thunlp/OpenPrompt | nlp | 252 | GLM model support | Is it possible to support GLM (https://github.com/THUDM/GLM) as a PLM, which is also available in Hugging Face transformers? | open | 2023-03-17T11:42:35Z | 2023-03-17T11:48:31Z | https://github.com/thunlp/OpenPrompt/issues/252 | [] | yqt | 0 |
thunlp/OpenPrompt | nlp | 28 | Needing Tutorial for Generation Task | Hi, thanks for your excellent work.
I'm trying to apply OpenPrompt to a generation task, but I have no idea how.
If possible, could you provide a tutorial for the generation task, like the classification tutorial in the readme?
Thanks anyway! | closed | 2021-10-26T09:10:28Z | 2021-11-03T18:08:55Z | https://github.com/thunlp/OpenPrompt/issues/28 | [] | RecklessRonan | 2 |
ultralytics/yolov5 | pytorch | 13,459 | Why does macOS throw this error? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)
### Additional
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)
| open | 2024-12-13T06:32:50Z | 2024-12-18T04:41:01Z | https://github.com/ultralytics/yolov5/issues/13459 | [
"question"
] | sj746 | 8 |
kizniche/Mycodo | automation | 1,048 | Unable to access Mycodo live URL | I have installed Mycodo on my Linux instance (Linux Mint). While installing, it showed that Mycodo was installed successfully. When I tried the my_ip/live URL, the page gives a Not Found error. But when I visit my IP, the page returns that my nginx server was installed successfully.
I am unable to find the problem.
(three screenshots attached)
| closed | 2021-07-06T18:46:57Z | 2021-07-07T19:05:09Z | https://github.com/kizniche/Mycodo/issues/1048 | [] | Irfan995 | 1 |
man-group/arctic | pandas | 515 | Prune of previous versions may leave inconsistent data when mongo fails | #### Arctic Version
```
1.61
```
#### Arctic Store
```
VersionStore
```
#### Platform and version
Python 2.7.11
#### Description of problem and/or code sample that reproduces the issue
Since the chunk clean-up stage of _prune_previous_versions is not wrapped by a mongo_retry, that step may raise an exception during a step-down. This could cause the calling method to be retried (e.g. a potentially successful write would be redone because of a failure deleting old chunks). That is not good by itself, but it gets worse. Since at that point the version would already have been deleted, the chunk will be left behind, pointing to an invalid version, until _cleanup_orphaned_chunks is called. | closed | 2018-03-02T09:58:35Z | 2018-03-05T09:13:01Z | https://github.com/man-group/arctic/issues/515 | [] | aflag | 0 |
ets-labs/python-dependency-injector | asyncio | 240 | Add Python 3.8 support | Python 3.8.0 has been available since October 2019, and we need to start supporting it.
Links:
- https://www.python.org/downloads/release/python-380/ | closed | 2020-01-24T02:09:13Z | 2020-01-29T18:33:53Z | https://github.com/ets-labs/python-dependency-injector/issues/240 | [
"enhancement"
] | rmk135 | 0 |
iperov/DeepFaceLab | deep-learning | 5,705 | NATHDEEP | THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS
POST ONLY ISSUES RELATED TO BUGS OR CODE
## Expected behavior
*Describe, in some detail, what you are trying to do and what the output is that you expect from the program.*
## Actual behavior
*Describe, in some detail, what the program does instead. Be sure to include any error message or screenshots.*
## Steps to reproduce
*Describe, in some detail, the steps you tried that resulted in the behavior described above.*
## Other relevant information
- **Command lined used (if not specified in steps to reproduce)**: main.py ...
- **Operating system and version:** Windows, macOS, Linux
- **Python version:** 3.5, 3.6.4, ... (if you are not using prebuilt windows binary) | open | 2023-07-22T19:57:13Z | 2023-07-22T19:57:13Z | https://github.com/iperov/DeepFaceLab/issues/5705 | [] | nathangodluv | 0 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 726 | Problem with the displayed lr during training | Hi, during training the displayed lr only changes during the warm-up phase and stays fixed afterwards, but the lr recorded in TensorBoard is normal. Is this a problem with the return value of the create_lr_schedule function?
| open | 2023-03-15T11:27:47Z | 2023-11-20T14:54:14Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/726 | [] | CJ666-njust | 2 |
cvat-ai/cvat | pytorch | 8,673 | Keybinds in UI allow drawing disabled shape types | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Create a task with 1 label, points type
2. Open Single Shape mode
3. Open Standard mode
4. Press N - bbox drawing will start
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_ | closed | 2024-11-08T17:34:46Z | 2024-11-13T12:44:04Z | https://github.com/cvat-ai/cvat/issues/8673 | [
"bug",
"ui/ux"
] | zhiltsov-max | 0 |
ghtmtt/DataPlotly | plotly | 207 | [Atlas] unprinted and inconsistent graphics with coverage vector | I created a simple project and an atlas.
I noticed some problems:
1. the graphs are randomly not printed when exported to PDF;
(screenshot)
2. the printed graphics do not respect the coverage vector;
(screenshot)
Leafing through the atlas, I notice that the creation of the charts is rather slow.
I attach a PDF exported from the attached project.
---
## OSGeo4W64 Win 10 - QGIS 3.12 București
---
(screenshot)
data and project:
[test_dataplotly35.zip](https://github.com/ghtmtt/DataPlotly/files/4334968/test_dataplotly35.zip)
| closed | 2020-03-15T19:45:17Z | 2024-10-08T16:28:34Z | https://github.com/ghtmtt/DataPlotly/issues/207 | [
"bug"
] | pigreco | 40 |
PokeAPI/pokeapi | api | 628 | certain characteristic (highest IV) endpoints are missing a value | <!--
Thanks for contributing to the PokéAPI project. To make sure we're effective, please check the following:
- Make sure your issue hasn't already been submitted on the issues tab. (It has search functionality!)
- If your issue is one of outdated API data, please note that we get our data from [veekun](https://github.com/veekun/pokedex/). If they are not up to date either, please look for or create an issue there. Otherwise, feel free to create an issue here.
- Provide a clear description of the issue.
- Provide a clear description of the steps to reproduce.
- Provide a clear description of the expected behavior.
Thank you!
-->
There are 30 characteristics, which correspond to a pokemon's highest IV, the stat that IV is in, and the value of that IV mod 5.
https://bulbapedia.bulbagarden.net/wiki/Characteristic
In PokeAPI, these characteristics are indexed numerically from 1-30. If a pokemon's highest IV is 31 in any stat, its characteristic will be 7-12, depending on which stat that 31 IV is in. This is the second row of the Bulbapedia table.
If we request the endpoint for any characteristic, for example characteristic 7, "Takes plenty of siestas", at https://pokeapi.co/api/v2/characteristic/7, the returned object contains a field 'possible_values' with the possible values of that IV. In our example this field should be [1,6,11,16,21,26,31]; however, 31 is missing.
This is not limited to characteristic 7: each characteristic between 7 and 12, inclusive, is missing 31 in its 'possible_values' field.
The likely cause seems to be that range(31) may have been used instead of range(32) when either looping through, or checking membership in the range.
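To illustrate the suspected off-by-one, a hypothetical sketch (not PokeAPI's actual implementation):
```python
# Characteristic 7 corresponds to a highest IV whose value % 5 == 1 (see the expected
# possible_values above); IVs run from 0 to 31, so range(32) is needed to include 31.
gene_modulo = 1
wrong = [iv for iv in range(31) if iv % 5 == gene_modulo]  # [1, 6, 11, 16, 21, 26]  (31 missing)
right = [iv for iv in range(32) if iv % 5 == gene_modulo]  # [1, 6, 11, 16, 21, 26, 31]
```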
| closed | 2021-07-04T17:17:08Z | 2021-07-09T07:17:22Z | https://github.com/PokeAPI/pokeapi/issues/628 | [] | fissionprime | 2 |
coqui-ai/TTS | python | 3,452 | [Why have they put the limit of 400 tokens...?] | ### Describe the bug
I used to generate texts of 32k characters, then cut them up and regenerate the bad parts, which let me work quickly... What is this limitation? I thought that by running it locally without updating, your updates would not affect me...
### To Reproduce
no
### Expected behavior
no
### Logs
```shell
(most recent call last):
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\gradio\queueing.py", line 407, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1550, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1185, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\gradio\utils.py", line 661, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\extensions\coqui_tts\script.py", line 109, in voice_preview
model.tts_to_file(
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\TTS\api.py", line 432, in tts_to_file
wav = self.tts(
^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\TTS\api.py", line 364, in tts
wav = self.synthesizer.tts(
^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\TTS\utils\synthesizer.py", line 383, in tts
outputs = self.tts_model.synthesize(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\TTS\tts\models\xtts.py", line 397, in synthesize
return self.inference_with_config(text, config, ref_audio_path=speaker_wav, language=language, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\TTS\tts\models\xtts.py", line 419, in inference_with_config
return self.full_inference(text, ref_audio_path, language, **settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\TTS\tts\models\xtts.py", line 488, in full_inference
return self.inference(
^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\RVC\COQUI\text-generation-webui\installer_files\env\Lib\site-packages\TTS\tts\models\xtts.py", line 535, in inference
text_tokens.shape[-1] < self.args.gpt_max_text_tokens
AssertionError: ❗ XTTS can only generate text with a maximum of 400 tokens.
```
### Environment
```shell
my config is perfect
```
### Additional context
Is there some way to remove the limit? I haven't programmed in a while... | closed | 2023-12-20T16:34:21Z | 2024-02-04T23:03:57Z | https://github.com/coqui-ai/TTS/issues/3452 | [
"bug",
"wontfix"
] | instak1ll | 1 |
charlesq34/pointnet | tensorflow | 301 | __init__() missing 1 required positional argument: 'dtype' | closed | 2022-09-08T14:46:19Z | 2022-09-08T14:47:31Z | https://github.com/charlesq34/pointnet/issues/301 | [] | rohithsaro | 0 |
|
OpenInterpreter/open-interpreter | python | 1,483 | what is the difference between gptme and open-interpreter |
I am new here and want to use open-interpreter.
I asked search-gpt what the difference between gptme and open-interpreter is.
Is the answer correct?
gptme is a command-line tool that allows interaction with local or remote language models for executing code, browsing the web, managing files, and using computer vision, among other tasks. It mimics many of the capabilities of OpenAI's "Advanced Data Analysis" (formerly Code Interpreter) but runs locally, giving it more flexibility and fewer restrictions regarding file sizes, timeouts, and privacy concerns. Its primary strength is its versatility, with tools for running shell commands, handling files, and even offering a basic web UI.
On the other hand, open-interpreter focuses specifically on providing a natural language interface for running code locally. It is more narrowly tailored for executing tasks like Python, shell, and JavaScript scripts directly from the terminal. Open-interpreter also excels in integrating with various local development environments and offering deeper customization for programming tasks and system-level commands. However, it lacks some of the broader utilities of gptme, such as vision capabilities or a built-in web browser.
| closed | 2024-10-20T13:26:28Z | 2024-11-04T14:39:56Z | https://github.com/OpenInterpreter/open-interpreter/issues/1483 | [] | CoderYiFei | 1 |
MagicStack/asyncpg | asyncio | 1,123 | issue with connection pooling when using QueuePool (via sqlalchemy) | * **asyncpg version**: 0.29.0
* **PostgreSQL version**: 16.2
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: Both with AWS RDS and Local Installation
* **Python version**: 3.12.1
* **Platform**: aarch64-apple-darwin23.2.0
* **Do you use pgbouncer?**:
* **Did you install asyncpg with pip?**: poetry
* **If you built asyncpg locally, which version of Cython did you use?**: NA
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: Didnt Try
<!-- Enter your issue details below this comment. -->
We have a web app using SQLAlchemy. If we use `NullPool` (i.e. disable connection pooling) everything works fine. The moment we enable `QueuePool`, the app gets stuck:
```
time=2024-02-29 14:48:01,669 level=INFO request-id=b1de3f74dac04a9799934a671cf9e0d6 user-id=2319885380 message=Permission datalake/roles/user granted for user 2319885380
time=2024-02-29 14:48:01,669 level=INFO request-id=b1de3f74dac04a9799934a671cf9e0d6 user-id=2319885380 message=creating database session
time=2024-02-29 14:48:01,669 level=INFO request-id=b1de3f74dac04a9799934a671cf9e0d6 user-id=2319885380 message=database session created successfully
time=2024-02-29 14:48:01,669 level=INFO request-id=b1de3f74dac04a9799934a671cf9e0d6 user-id=2319885380 message=Fetching datalake environment
2024-02-29 14:48:01,677 DEBUG sqlalchemy.pool.impl.QueuePool Created new connection <AdaptedConnection <asyncpg.connection.Connection object at 0x114136300>>
2024-02-29 14:48:01,680 DEBUG sqlalchemy.pool.impl.QueuePool Created new connection <AdaptedConnection <asyncpg.connection.Connection object at 0x1141365d0>>
[2024-02-29 14:52:27 +0530] [61517] [CRITICAL] WORKER TIMEOUT (pid:61518)
[2024-02-29 14:52:27 +0530] [61517] [ERROR] Worker (pid:61518) was sent SIGABRT!
[2024-02-29 14:52:27 +0530] [62957] [INFO] Booting worker with pid: 62957
[2024-02-29 14:52:42 +0530] [62957] [INFO] Started server process [62957]
[2024-02-29 14:52:42 +0530] [62957] [INFO] Waiting for application startup.
```
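For context, a sketch of the two configurations described above (assuming SQLAlchemy's asyncio engine API; the DSN and pool sizes are placeholders, not the reporter's actual settings):
```python
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.pool import NullPool

# reported to work: pooling disabled, every checkout opens a fresh connection
engine = create_async_engine("postgresql+asyncpg://user:pass@host/db", poolclass=NullPool)

# reported to hang: connections are kept open and reused by a queue-based pool
pooled_engine = create_async_engine(
    "postgresql+asyncpg://user:pass@host/db", pool_size=5, max_overflow=10
)
```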
| closed | 2024-02-29T09:27:13Z | 2024-03-01T03:13:56Z | https://github.com/MagicStack/asyncpg/issues/1123 | [] | akhilputhiry | 2 |
gradio-app/gradio | data-science | 10,139 | Node.js server cannot be stopped when SSR mode is enabled (macOS) | ### Describe the bug
Once the SSR mode is enabled, a Node.js server is started along with the Gradio server. However, at least on macOS (15.1.1), if I press Ctrl+C to stop the Gradio server, it keeps on waiting indefinitely to terminate the node process (with a message `Stopping Node.js server...`).
An interesting side-effect of this is that while it attempts to stop the Node.js server, SSR mode becomes disabled and Gradio server still remains available. (I know that the SSR mode becomes disabled through the visual confirmation of font loading, as discussed in another issue: https://github.com/gradio-app/gradio/issues/10101)
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as demo:
input_name = gr.Textbox(label="Name")
btn_greet = gr.Button("Greet")
if __name__ == "__main__":
demo.launch(ssr_mode=True)
```
Node.js server is 21.5.0 and macOS is 15.1.1. It may happen on other systems but I have not tested.
### Screenshot
<img width="580" alt="Screenshot 2024-12-06 at 7 34 15" src="https://github.com/user-attachments/assets/ad0e4c9e-1c6e-47cd-b866-ed44097deed0">
and after a few more _impatient_ Ctrl+Cs:
<img width="580" alt="Screenshot 2024-12-06 at 7 38 43" src="https://github.com/user-attachments/assets/7e56e2ef-32ed-4c3b-830b-6c043d0aebcf">
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.7.1
gradio_client version: 1.5.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.4.0
gradio-client==1.5.0 is not installed.
httpx: 0.28.0
huggingface-hub: 0.26.3
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.12
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.10.3
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.8.1
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit==0.12.0 is not installed.
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.28.0
huggingface-hub: 0.26.3
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | open | 2024-12-05T22:41:47Z | 2024-12-09T20:55:53Z | https://github.com/gradio-app/gradio/issues/10139 | [
"bug",
"SSR"
] | anirbanbasu | 5 |
huggingface/pytorch-image-models | pytorch | 1,413 | [FEATURE] Add NextViT | ByteDance has open-sourced a better version of ViT here:
https://github.com/bytedance/next-vit | closed | 2022-08-14T10:59:25Z | 2022-08-29T15:54:46Z | https://github.com/huggingface/pytorch-image-models/issues/1413 | [
"enhancement"
] | MohamedAliRashad | 5 |
yeongpin/cursor-free-vip | automation | 363 | [Bug]: Cannot Read or Write Config File, Please Check File Permissions | ### Pre-submission checklist
- [x] I understand that Issues are for feedback and problem solving, not a place for complaints, and I will provide as much information as possible to help resolve the problem.
- [x] I have checked the pinned issues and searched the existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar problem.
- [x] I have written a short and clear title so that developers can quickly identify the general problem when scanning the issue list, rather than something like "a suggestion" or "stuck".
### Platform
Windows x64
### Version
1.7.17
### Error description
Cannot Read or Write Config File, Please Check File Permissions
The error occurs when running Reset MachineID.
### Relevant log output
```shell
```
### Additional information
_No response_ | open | 2025-03-23T11:58:59Z | 2025-03-24T08:58:15Z | https://github.com/yeongpin/cursor-free-vip/issues/363 | [
"bug"
] | MutualMate | 2 |
trevorstephens/gplearn | scikit-learn | 111 | Drop Python 3.4 whenever scikit-learn does | If scikit-learn drops 3.4, gplearn should do so too, to avoid weird compatibility issues. scikit-learn has indicated they will do so in their first release of 2019. | closed | 2018-12-04T04:09:05Z | 2019-02-24T07:25:40Z | https://github.com/trevorstephens/gplearn/issues/111 | [
"dependencies"
] | trevorstephens | 0 |
mitmproxy/mitmproxy | python | 7,354 | AttributeError: 'cryptography.hazmat.bindings._rust.x509.Certificat' object has no attribute '_cert' | #### Problem Description
Occasionally, while intercepting in wireguard mode with a custom CA set, mitmproxy will throw the following error:
```
error: [07:30:13.105] Addon error: 'cryptography.hazmat.bindings._rust.x509.Certificat' object has no attribute '_cert'
Traceback (most recent call last):
File "/lib/python3.10/site-packages/mitmproxy/addons/tlsconfig.py", line 418, in quic_start_client
tls_start.settings.certificate_chain = [
File "/lib/python3.10/site-packages/mitmproxy/addons/tlsconfig.py", line 419, in <listcomp>
cert._cert for cert in (*entry.chain_certs, *extra_chain_certs)
AttributeError: 'cryptography.hazmat.bindings._rust.x509.Certificat' object has no attribute '_cert'
```
This error originates in the QUIC interception code (`quic_start_client`).
#### Steps to reproduce the behavior:
1. Create a custom CA
2. Start mitmproxy with `--set confdir=<path to certs> --mode wireguard`
3. Browse random HTTPS sites on the device connected to the Wireguard VPN
#### System Information
Mitmproxy: 11.0.1
Python: 3.10.11
OpenSSL: OpenSSL 3.3.2 3 Sep 2024
Platform: macOS-14.6.1-x86_64-i386-64bit
| closed | 2024-11-26T23:36:07Z | 2024-11-27T09:43:50Z | https://github.com/mitmproxy/mitmproxy/issues/7354 | [
"kind/triage"
] | nneonneo | 0 |
healthchecks/healthchecks | django | 1,090 | feature request: prettify JSON in email and/or log viewer | Many services provide their webhook output in JSON, which is typically one long line. It would be cool if healthchecks would run this output through `jq` for display in the web event and for the email notification. The Download Original button could remain to view the single-line output as originally sent. Currently, I often have to paste the events into a terminal to view with `jq` because it's just so hard to read. | open | 2024-11-28T16:12:35Z | 2024-11-28T16:12:46Z | https://github.com/healthchecks/healthchecks/issues/1090 | [] | mmomjian | 0 |
strawberry-graphql/strawberry | django | 3,790 | Incorrect typing for the `type` decorator |
## Describe the Bug
The type function is decorated with
```
@dataclass_transform(
order_default=True, kw_only_default=True, field_specifiers=(field, StrawberryField)
)
```
Therefore mypy treats classes decorated with type as being dataclasses with ordering functions.
In particular, defining `__gt__` on such a class will be treated as an error by mypy.
However, `type` (that is, the underlying `_wrap_dataclass` function) does not do anything to ensure the dataclass actually has ordering functions defined.
I see multiple solutions:
- Removing the `order_default=True` part of the `dataclass_transform` decorating `type` (see the sketch after this list)
- Enforcing the `order=True` in `_wrap_dataclass`
- Allowing the caller to pass dataclass kwargs (as per [my previous issue](https://github.com/strawberry-graphql/strawberry/issues/2688))
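A sketch of the first option (an assumption about the shape of the fix, not the project's actual patch): the decorator quoted above with `order_default` dropped, so type checkers no longer assume `__lt__`/`__gt__` are generated.
```python
@dataclass_transform(
    kw_only_default=True, field_specifiers=(field, StrawberryField)
)
```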
## System Information
- Operating system: Ubuntu 24.04
- Strawberry version (if applicable): 0.256.1
## Additional Context
Code samples to be clear on the issue
```
@strawberry.type
class MyClass:
attr: str
k = MyClass(attr="abc")
j = MyClass(attr="def")
j > k # TypeError: '<' not supported between instances of 'MyClass' and 'MyClass'
```
```
@strawberry.type
class MyClass:
attr: str
def __gt__(self, other):
return self.attr > other.attr
k = MyClass(attr="abc")
j = MyClass(attr="def")
j > k # True
# When running mypy
error: You may not have a custom "__gt__" method when "order" is True [misc]
``` | open | 2025-02-21T11:26:35Z | 2025-02-21T11:28:42Z | https://github.com/strawberry-graphql/strawberry/issues/3790 | [
"bug"
] | Corentin-Bravo | 0 |
lorien/grab | web-scraping | 186 | submit() always sends a named input type="submit" in the POST data if one is present | https://github.com/lorien/grab/blob/master/grab/document.py
In submit():
When a form contains, for example, two submit inputs, one with a name and one without, it is impossible to specify that the click should be simulated on the unnamed button, because the named submit will always be present in the POST data.
Example from the authorization page of a Twitter app:
`<input type="submit" value="Authorize app" class="submit button selected" id="allow">
<input class="submit button" id="cancel" name="cancel" type="submit" value="Cancel">`
Whatever you do, the submit with id="cancel" will be sent in the POST request.
As an option (and this solved my problem), an extra remove_from_post parameter could be added to submit(), where library users would specify which elements to remove from the POST data.
I have prepared a corresponding pull request.
| closed | 2016-05-14T21:43:46Z | 2017-01-10T10:31:15Z | https://github.com/lorien/grab/issues/186 | [] | Vasilesk | 1 |
joerick/pyinstrument | django | 43 | Running modules | I'd like to trace a program requiring execution with `python -m`. How should I do it? I know Python 3.7 has introduced support for it in some of the script executing modules. | closed | 2018-07-17T16:08:56Z | 2018-08-04T15:02:43Z | https://github.com/joerick/pyinstrument/issues/43 | [] | iddan | 1 |
fastapi-users/fastapi-users | asyncio | 248 | `No matching distribution found for fastapi-users==2.0.1 | I don't know if I'm just having a moment, but I can't for the life of me install fastapi-users:
`No matching distribution found for fastapi-users==2.0.1 (from -r /requirements.txt (line 4))`
Thanks
| closed | 2020-07-08T13:16:00Z | 2020-09-06T16:47:02Z | https://github.com/fastapi-users/fastapi-users/issues/248 | [
"question"
] | stodge | 6 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 434 | When using reflect with multiple databases and the same table name, tables are only reflected from the first database | closed | 2016-09-27T02:11:32Z | 2018-02-23T23:19:49Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/434 | [] | kingsj0405 | 0 |
|
xinntao/Real-ESRGAN | pytorch | 820 | Results for only half of the image |
I performed inference using a Python script, but the bottom half of the image is not processed, as shown below.
<img width="335" alt="esrganissue" src="https://github.com/xinntao/Real-ESRGAN/assets/124443419/e3148700-2024-428b-bd48-77c80538707c">
| open | 2024-06-29T09:32:21Z | 2024-06-29T09:32:21Z | https://github.com/xinntao/Real-ESRGAN/issues/820 | [] | westzeroright | 0 |
recommenders-team/recommenders | data-science | 2,126 | [BUG] Cornac BiVAE test failing due to csc_matrix attribute error | ### Description
<!--- Describe your issue/bug/request in detail -->
```
E bivae = cornac.models.BiVAECF(
E k=LATENT_DIM,
E encoder_structure=ENCODER_DIMS,
E act_fn=ACT_FUNC,
E likelihood=LIKELIHOOD,
E n_epochs=NUM_EPOCHS,
E batch_size=BATCH_SIZE,
E learning_rate=LEARNING_RATE,
E seed=SEED,
E use_gpu=torch.cuda.is_available(),
E verbose=True
E )
E
E with Timer() as t:
E bivae.fit(train_set)
E print("Took *** seconds for training.".format(t))
E ------------------
E
E ----- stderr -----
E
0%| | 0/500 [00:00<?, ?it/s]
E ----- stderr -----
E
0%| | 0/500 [00:00<?, ?it/s]
E ----- stderr -----
E
E ------------------
E
E ---------------------------------------------------------------------------
E AttributeError Traceback (most recent call last)
E Cell In[6], line 15
E 1 bivae = cornac.models.BiVAECF(
E 2 k=LATENT_DIM,
E 3 encoder_structure=ENCODER_DIMS,
E (...)
E 11 verbose=True
E 12 )
E 14 with Timer() as t:
E ---> 15 bivae.fit(train_set)
E 16 print("Took *** seconds for training.".format(t))
E
E File /azureml-envs/azureml_adf614c86c43311fb41235e[662](https://github.com/recommenders-team/recommenders/actions/runs/9745905406/job/26897451776#step:3:669)27b9b3/lib/python3.10/site-packages/cornac/models/bivaecf/recom_bivaecf.py:178, in BiVAECF.fit(self, train_set, val_set)
E 166 num_users = train_set.matrix.shape[0]
E 167 self.bivae = BiVAE(
E 168 k=self.k,
E 169 user_encoder_structure=[num_items] + self.encoder_structure,
E (...)
E 175 batch_size=self.batch_size,
E 176 ).to(self.device)
E --> 178 learn(
E 179 self.bivae,
E 180 train_set,
E 181 n_epochs=self.n_epochs,
E 182 batch_size=self.batch_size,
E 183 learn_rate=self.learning_rate,
E 184 beta_kl=self.beta_kl,
E 185 verbose=self.verbose,
E 186 device=self.device,
E 187 )
E 188 elif self.verbose:
E 189 print("%s is trained already (trainable = False)" % (self.name))
E
E File /azureml-envs/azureml_adf614c86c43311fb41235e66227b9b3/lib/python3.10/site-packages/cornac/models/bivaecf/bivae.py:201, in learn(bivae, train_set, n_epochs, batch_size, learn_rate, beta_kl, verbose, device, dtype)
E 199 for i_ids in train_set.item_iter(batch_size, shuffle=False):
E 200 i_batch = tx[i_ids, :]
E --> 201 i_batch = i_batch.A
E 202 i_batch = torch.tensor(i_batch, dtype=dtype, device=device)
E 204 # Reconstructed batch
E
E AttributeError: 'csc_matrix' object has no attribute 'A'
```
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
VM
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
https://github.com/recommenders-team/recommenders/actions/runs/9745905406/job/26897451776
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
| open | 2024-07-07T10:34:09Z | 2024-07-09T03:51:33Z | https://github.com/recommenders-team/recommenders/issues/2126 | [
"bug"
] | miguelgfierro | 3 |
dmlc/gluon-nlp | numpy | 779 | BiLMEncoder fails to initialize if num_layers > 1 | ## Description
`BiLMEncoder` fails during initialization if `num_layers > 1`. Works fine when num_layers = 1, but if it is at least 2, then initialization fails with a weird message. See simplest reproducible example below.
### Error Message
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/gluon/block.py", line 505, in initialize
self.collect_params().initialize(init, ctx, verbose, force_reinit)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/gluon/parameter.py", line 830, in initialize
v.initialize(None, ctx, init, force_reinit=force_reinit)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/gluon/parameter.py", line 400, in initialize
if not shape_is_known(self.shape):
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/gluon/utils.py", line 430, in shape_is_known
assert dim_size > unknown_dim_size, "shape dimension size cannot be less than {}, while " \
TypeError: '>' not supported between instances of 'NoneType' and 'int'
```
## To Reproduce
```python
from gluonnlp.model import BiLMEncoder
encoder = BiLMEncoder(mode='lstm', num_layers=2, input_size=200, hidden_size=100, dropout=0.1, skip_connection=False)
encoder.initialize()
```
## Environment
`pip install gluonnlp --pre` | closed | 2019-06-18T18:44:23Z | 2019-06-24T20:13:39Z | https://github.com/dmlc/gluon-nlp/issues/779 | [
"bug"
] | Ishitori | 1 |
stanfordnlp/stanza | nlp | 547 | Are pretrain.pt files just representations of word embeddings? | I'm training various models (pos, lemmatization, dependency parsing) separately (not as a pipeline) and wonder what the model files which end in "pretrain.pt" actually are. Are they just Stanza-style representations of word embeddings that I pass as input? Are they always the same regardless of the task (i.e. it is the same file for pos and parsing)?
Thanks! | closed | 2020-12-04T11:06:45Z | 2020-12-06T20:39:44Z | https://github.com/stanfordnlp/stanza/issues/547 | [
"question"
] | AleksandrsBerdicevskis | 1 |
robusta-dev/robusta | automation | 773 | Error in runner logs related to playbook reloading | Hi! Not sure this is a bug, but it might be an issue. There are error logs in the runner related to playbook reloading and they might be affecting its execution. To be honest, I was trying to configure Alertmanager to forward alerts to Robusta and it seems like this error might be hampering it.
`RUNNER_VERSION : 0.10.13`
Here are the logs:
`
2023-03-08 07:58:16.604 INFO loading config /etc/robusta/config/active_playbooks.yaml
2023-03-08 07:58:16.646 ERROR unknown error reloading playbooks. will try again when they next change
Traceback (most recent call last):
File "/app/src/robusta/runner/config_loader.py", line 161, in __reload_playbook_packages
runner_config = self.__load_runner_config(self.config_file_path)
File "/app/src/robusta/runner/config_loader.py", line 277, in __load_runner_config
return RunnerConfig(**yaml_content)
File "pydantic/main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 28 validation errors for RunnerConfig
sinks_config -> 0 -> robusta_sink
field required (type=value_error.missing)
sinks_config -> 0 -> slack_sink
none is not an allowed value (type=type_error.none.not_allowed)
sinks_config -> 0 -> jira_sink
field required (type=value_error.missing)
sinks_config -> 1 -> robusta_sink
field required (type=value_error.missing)
sinks_config -> 1 -> slack_sink
field required (type=value_error.missing)
sinks_config -> 1 -> datadog_sink
field required (type=value_error.missing)
sinks_config -> 1 -> kafka_sink
field required (type=value_error.missing)
sinks_config -> 1 -> ms_teams_sink
field required (type=value_error.missing)
sinks_config -> 1 -> opsgenie_sink
field required (type=value_error.missing)
sinks_config -> 1 -> telegram_sink
field required (type=value_error.missing)
sinks_config -> 1 -> webhook_sink
none is not an allowed value (type=type_error.none.not_allowed)
sinks_config -> 1 -> victorops_sink
field required (type=value_error.missing)
sinks_config -> 1 -> pagerduty_sink
field required (type=value_error.missing)
sinks_config -> 1 -> discord_sink
field required (type=value_error.missing)
sinks_config -> 1 -> mattermost_sink
field required (type=value_error.missing)
sinks_config -> 1 -> webex_sink
field required (type=value_error.missing)
sinks_config -> 1 -> jira_sink
field required (type=value_error.missing)
2023-03-08 07:58:16.647 INFO Telemetry set to include error info, Thank you for helping us improve Robusta.
2023-03-08 07:58:16.711 ERROR Sentry error: 'Registry' object has no attribute 'global_config'
Traceback (most recent call last):
File "/app/src/robusta/runner/telemetry_service.py", line 34, in __init__
global_config = self.registry.get_global_config()` | closed | 2023-03-08T13:20:51Z | 2023-03-08T20:16:20Z | https://github.com/robusta-dev/robusta/issues/773 | [] | metheu | 1 |
x-tabdeveloping/topicwizard | dash | 12 | Unable to handle nan output from a topic model. | Hello! I am very impressed with this library as per Marton Kardos's article on Medium.
I attempted to use topicwizard to visualize short-text topic modeling inferences based on a quickly trained tweetopic model. The results of my issues and troubleshooting are located in this [hosted Google Colab notebook](https://drive.google.com/file/d/1KB57dKgNehZCW3AJaVehghlP1h8ZxJyw/view?usp=sharing). Please note you can't run the notebook; I've just published it so you can easily view it via Google Colab.
Information about my Conda environment:
- Python 3.9.16 (Installed via Anaconda)
- ipykernel 6.9.12 and its dependencies (Anaconda)
- Tweetopic 0.3.0 (PyPi)
- Topic-wizard 0.2.5 (PyPi)
- And all other dependencies which ensue from these two libraries.
I can train a topic model in tweetopic with no problems. I can import the topicwizard module with no problem. Once finished training on my tweetopic model, I can infer topic names via `topicwizard.infer_topic_names(pipeline=pipeline)` with no problems.
However, when I attempt to run `topicwizard.visualize(vectorizer=vectorizer, topic_model=dmm, corpus=corpus_cleaned, port=8080)` I receive the following error:
> ValueError:
> Invalid element(s) received for the 'size' property of scatter.marker
> Invalid elements include: [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
>
> The 'size' property is a number and may be specified as:
> - An int or float in the interval [0, inf]
> - A tuple, list, or one-dimensional numpy array of the above
I troubleshooted and found that when I `.transform(...)` my corpus post-training, I found inferences that contain nans. I dropped those rows so that they don't mess with the elaborate computations the /prepare/<...py> files have in place to easily get the Dash app running. Despite cleaning up the nans, when I run the same .visualize() function above with the further cleaned inferences, I receive the following error tracing back to `...tweetopic/lib/site-packages/joblib/parallel.py:288` Further context as to the steps I followed is available on that Google Colab notebook.
> ValueError: cannot assign slice from input of different size
Could any one help me figure out what is preventing me from getting the Dash app working? Thank you! | closed | 2023-06-18T06:24:36Z | 2024-03-02T13:31:10Z | https://github.com/x-tabdeveloping/topicwizard/issues/12 | [
"bug"
] | vshourie-asu | 10 |
holoviz/colorcet | plotly | 56 | Update colorcet to upstream, especially new Gouldian Parula replacement | ## Changes in upstream CSVs:
### New
- C3 - Cyclic: white - red - black - blue
- C6 - "Six colour cyclic with primaries and secondaries matched in lightness"
- C7 - Cyclic Yellow - Magenta - Cyan - Green - Yellow
- L20 - Parula replacement "Gouldian"
- R4 - Rainbow colour map from blue to red
### Changed
- D1-D9, L1-L9 renamed to D01-D09, L01-L09
- Floats have higher precision
There actually look to be even more in colorcet.m, I guess that's the ultimate canonical source?
I'm a bit unsure how to integrate these changes into CET_to_py.py, especially the D01-D09 and L01-09 stuff. Or maybe this project keeps the original CSV names? | closed | 2021-01-15T23:35:41Z | 2021-11-12T02:39:54Z | https://github.com/holoviz/colorcet/issues/56 | [
"enhancement"
] | randallpittman | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 922 | CycleGAN inference | Can I get multiple variants from one trained CycleGAN in inference?
For instance:
I have one picture of a horse and I would like to get 4 different(!!!) stylized pictures from the trained CycleGAN.
Is it possible? | closed | 2020-02-18T10:16:56Z | 2020-02-19T06:39:28Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/922 | [] | Anastasiyabordak | 1 |
serengil/deepface | machine-learning | 495 | Different results on different machines | While this may be slightly unrelated, I couldn't find out why the deepface models are returning different results for the same code:
```
from deepface import DeepFace
from sklearn.datasets import fetch_lfw_pairs
from sklearn.metrics import accuracy_score
X = fetch_lfw_pairs(
subset="test",
funneled=False,
# slice_=(slice(0, 250), slice(0, 250)),
resize=1,
color=True
)
imgs, y = X.pairs, X.target
preds = []
for i in range(len(imgs)):
img = imgs[i]
img1 = img[0]
img2 = img[1]
img1 = img1[:,:,::-1]
img2 = img2[:,:,::-1]
result = DeepFace.verify(
img1,
img2,
model_name="ArcFace", # "DeepFace" # "ArcFace" # "Dlib" # "DeepID" # "OpenFace" # VGG-Face" # "Facenet"
detector_backend="dlib", # "mtcnn", #"ssd" # "opencv", # "retinaface",
enforce_detection=False,
)
# print(f"Actual :", y[i], "Predicted :", result["verified"])
preds.append(result["verified"]) # result
print("Accuracy :", 100*accuracy_score(y, preds))
```
Machine 1 (i.e. Google Colab) is returning 94% accuracy.
Machine 2 is returning 50% accuracy.
The library versions on the two platforms are as follows:
| Library | Machine 1 | Machine 2 |
|---|---|---|
| deepface | 0.0.75 | 0.0.75 |
| scikit-learn | 1.0.2 | 1.1.1 |
| opencv | 4.6.0.66 | 4.6.0.66 |
| tensorflow | 2.8.2+zzzcolab20220527125636 | 2.7.0 |
Could scikit-learn version be the cause? | closed | 2022-06-14T05:43:04Z | 2022-06-24T15:09:49Z | https://github.com/serengil/deepface/issues/495 | [
"dependencies"
] | AnkS4 | 5 |
qubvel-org/segmentation_models.pytorch | computer-vision | 495 | setting class weights | Hi All,
I am struggling with an issue. I am performing binary classification. I am using a DiceLoss and it is working perfectly fine. However, my background pixels greatly outnumber the positive class.
- Is there a way I can apply weights to the positive class?
So far, I am not able to figure out a way or find an example.
Any help will be really appreciated in this regard
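A common workaround sketch, not an official recipe from this repo (the positive-class weight of 10.0 is an illustrative assumption): keep the Dice term and add a BCE term whose `pos_weight` up-weights the rare positive class.
```python
import torch
import segmentation_models_pytorch as smp

dice = smp.losses.DiceLoss(mode="binary")
bce = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor([10.0]))  # weight for the positive class

def combined_loss(logits, target):
    # logits: raw model output; target: binary mask of the same shape
    return dice(logits, target) + bce(logits, target.float())
```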
| closed | 2021-10-03T06:28:35Z | 2022-03-14T01:59:15Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/495 | [
"Stale"
] | nyk2001 | 4 |
STVIR/pysot | computer-vision | 524 | About MobileNet_V2 training config | I want to reproduce the MobileNet_V2 tracking results. I found the pretrained MobileNet_V2 model in torchvision models; is that right? And could you tell me your official training parameter configuration?
Thank you | closed | 2021-04-23T14:26:21Z | 2021-04-27T09:17:55Z | https://github.com/STVIR/pysot/issues/524 | [] | jkboyJohn | 0 |
plotly/dash | data-science | 2,671 | DatePickerRange accepts start_date which is less than min_date_allowed | The following date picker range will accept start_date although it is less than min_date_allowed.
```
dcc.DatePickerRange(
id='picker-range',
min_date_allowed=date(2023, 8, 1),
start_date=date(2023, 7, 1),
)
```
| open | 2023-10-24T08:07:02Z | 2024-08-13T19:41:45Z | https://github.com/plotly/dash/issues/2671 | [
"bug",
"P3"
] | ognjengrubac-tomtom | 1 |
modin-project/modin | pandas | 6,702 | Don't materialize axes when calling `to_numpy` | closed | 2023-11-03T23:39:03Z | 2023-11-06T16:05:21Z | https://github.com/modin-project/modin/issues/6702 | [
"Performance 🚀"
] | anmyachev | 0 |
|
dropbox/PyHive | sqlalchemy | 130 | Please add doc url to readme | Please add http://pythonhosted.org/PyHive/ to the main readme. I had to pull this from pypi. | open | 2017-06-08T15:16:15Z | 2017-07-22T15:20:56Z | https://github.com/dropbox/PyHive/issues/130 | [] | mtdeguzis | 3 |
gto76/python-cheatsheet | python | 136 | Licence file | Hello,
I think adding a licence file would be useful as the actual licence of the project is a bit ambiguous (it is by default not possible legally to use any of the code in the cheatsheet).
Is it on purpose ?
Thank you | open | 2022-12-13T08:06:46Z | 2023-02-27T22:13:30Z | https://github.com/gto76/python-cheatsheet/issues/136 | [] | Ery4z | 5 |
PaddlePaddle/models | nlp | 4,760 | I implemented WGAN-GP following the example, but after a lot of debugging it still has problems | I hope you can help me make a few changes. I don't know exactly where the problem is. I wrote it based on WGAN-GP code from other frameworks, but the results are poor. Since I am still a beginner with the Paddle framework, I could not reproduce it perfectly. I would appreciate any help, thanks. Here is the public project with the problem: https://aistudio.baidu.com/aistudio/projectdetail/632252 | open | 2020-07-19T16:29:39Z | 2024-02-26T05:10:53Z | https://github.com/PaddlePaddle/models/issues/4760 | [] | yxhpy | 7 |
keras-team/keras | python | 20,147 | .keras model with base model trainable has poor performance in TensorFlow 2.17 (Keras 3) | I originally posted this bug in the [TensorFlow github issues section](https://github.com/tensorflow/tensorflow/issues/74170) since I believed it to be due to a higher TF version, but was asked to post here since it may instead be due to Keras 3. I am copying my post below:
I am training an EfficientNet model with a custom head using TensorFlow and Keras, saving the model to a .keras format. If the base model trainable flag is set to False, such that I only train the head, then when I later load the .keras model and evaluate it on a dataset, I get the expected good performance. When I set the trainable flag to True and train a model (which converges well), then when I later load the model and evaluate it on the same dataset the performance has degraded significantly. (I am evaluating the model on the same dataset using the same code both at the end of training, and later on in a separate notebook. It is in this separate notebook where the performance is bad, where again the same dataset is being used and the same code is being used in both evaluation places.)
Saving to a .h5 model does not have this issue, and the performance of the saved model is good. I have spent the day trying different trainable and training flag values in various places to no improvement, thinking originally that it was something to do with the BatchNorm layers in the model. Recompiling the model has not helped.
When I switch back to an older TensorFlow version (2.15.0.post1) with Keras 2 I do not see this issue. Both the trained .keras and .h5 models perform well when later loaded and evaluated on my dataset of interest.
This seems like a bug to me, though I also acknowledge that perhaps I have missed something in the TF/Keras updates. I have searched the TensorFlow API docs for the various methods to no success. If it is the latter I would be very grateful for any advice, thank you. | closed | 2024-08-22T12:45:14Z | 2024-08-28T04:11:55Z | https://github.com/keras-team/keras/issues/20147 | [
"keras-team-review-pending",
"type:Bug"
] | nkinnaird | 4 |
modin-project/modin | pandas | 7,084 | Explicitly check for exceptions in `test_indexing.py` | closed | 2024-03-13T12:11:56Z | 2024-03-13T14:17:21Z | https://github.com/modin-project/modin/issues/7084 | [
"Testing 📈"
] | anmyachev | 0 |
|
fastapi/sqlmodel | pydantic | 166 | Convenience methods to create multiple models + associated API endpoints | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
# A reproducible repository which includes the proposed convenience objects
# (and passes all the tests from the official tutorial):
https://github.com/cisaacstern/sqlmodel-abstraction
# All of the proposed code lives in:
https://github.com/cisaacstern/sqlmodel-abstraction/blob/main/project/abstractions.py
```
### Description
The [multiple models with inheritance](https://sqlmodel.tiangolo.com/tutorial/fastapi/multiple-models/#multiple-models-with-inheritance) design pattern is awesome. I would like to be able to implement it more concisely.
To achieve this, I've drafted a [`MultipleModels`](https://github.com/cisaacstern/sqlmodel-abstraction/blob/0ba1dfea39ec9c50d19774c8ba388b1d2e7cc330/project/abstractions.py#L13) dataclass which takes base and response models as input, and generates the remaining (table, creation, and update) models programmatically; e.g., [here](https://github.com/cisaacstern/sqlmodel-abstraction/blob/0ba1dfea39ec9c50d19774c8ba388b1d2e7cc330/project/main.py#L19). To register API endpoints for a given `MultipleModels` instance, it can be passed to the proposed [`register_endpoints`](https://github.com/cisaacstern/sqlmodel-abstraction/blob/0ba1dfea39ec9c50d19774c8ba388b1d2e7cc330/project/abstractions.py#L185) convenience function; e.g., [here](https://github.com/cisaacstern/sqlmodel-abstraction/blob/0ba1dfea39ec9c50d19774c8ba388b1d2e7cc330/project/main.py#L40).
The [example repo](https://github.com/cisaacstern/sqlmodel-abstraction) for this feature proposal is a fully reproducible example project which [passes all the same tests](https://github.com/cisaacstern/sqlmodel-abstraction#-same-tests-as-tutorial) as the tutorial project in the SQLModel docs.
### Wanted Solution
SQLModel currently provides other convenience methods (e.g., `create_engine`).
Whether it is via some version of the `MutlipleModels` and `register_endpoints` approach I've proposed, or some other methods, I would like to be have convenience methods that abstract away boilerplate code from the process of implementing the multiple models with inheritance design pattern.
### Wanted Code
```python
# This is an abbreviated `main.py` for a SQLModel project that uses the proposed features
# Full file: https://github.com/cisaacstern/sqlmodel-abstraction/blob/main/project/main.py
# Some imports omitted here
from sqlmodel import MultipleModels, register_endpoints
class HeroBase(SQLModel):
name: str
secret_name: str
age: Optional[int] = None
class HeroRead(HeroBase):
id: int
hero_models = MultipleModels(path="/heroes/", base=HeroBase, response=HeroRead)
# `engine` assignment and `get_session` definition omitted here
app = FastAPI()
register_endpoints(app, models=hero_models, get_session=get_session)
```
### Alternatives
If this is out of scope for SQLModel, I would nonetheless greatly appreciate feedback on any pitfalls that may arise if I implement the proposed abstractions in production.
If this is within scope for SQLModel, I would happily adapt my example repo into a PR, if it seems like a good enough start.
Thanks in advance for your consideration.
### Operating System
Linux, macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
```
>= 3.6, < 3.10
```
### Additional Context
As noted in the [example repo](https://github.com/cisaacstern/sqlmodel-abstraction) README, these ideas arose while exploring SQLModel for [a database + API](https://github.com/pangeo-forge/roadmap/pull/31) for [Pangeo Forge](https://pangeo-forge.readthedocs.io/en/latest/). | open | 2021-11-24T22:08:19Z | 2021-11-24T23:35:12Z | https://github.com/fastapi/sqlmodel/issues/166 | [
"feature"
] | cisaacstern | 1 |
python-gino/gino | asyncio | 706 | Usage gino in fastapi/starlette tests | GINO version:1.0.1
Python version: 3.7
asyncpg version: 0.18.3
### Description
Hello! I'm trying to figure out how to work with gino in tests of my fastapi app.
I'm following this tutorial https://python-gino.org/docs/en/1.0/tutorials/fastapi.html#
My goal is to be able to do something like this:
```
from myapp.db import MyModel
@pytest.mark.asyncio
async def test_smth(client):
obj = await MyModel.create(name='test')
response = client.get(
f'api/objects/{obj.id}/'
)
....
```
But for now I can't figure out how to initialize gino correctly to work with the db inside tests.
My client fixture looks like this:
```
@pytest.fixture
def client(app):
    from myapp.app import get_app

    migrate_db()
    with TestClient(get_app()) as client:
        yield client
    unmigrate_db()
```
where `get_app` is
```
def get_app():
    app = FastAPI()
    db.init_app(app)
    return app
```
### What I Did
I tried just running the code above, and received:
```
asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress
```
I tried to create my own pool and create the object using it:
```
from gino.ext.starlette import Gino
from myapp.config import config
@pytest.mark.asyncio
async def test_smth(client):
db = Gino()
async with db.with_bind(bind=config['dsn']) as engine:
obj = await MyModel.create(
name='test'
bind=engine
)
response = client.get(
f'api/objects/{obj.id}/'
)
```
It actually works and the object was created,
but on the line `response = client.get(` I get:
```
requests/requests/sessions.py:543: in get
return self.request('GET', url, **kwargs)
starlette/starlette/testclient.py:429: in request
json=json,
requests/requests/sessions.py:530: in request
resp = self.send(prep, **send_kwargs)
requests/requests/sessions.py:643: in send
r = adapter.send(request, **kwargs)
starlette/starlette/testclient.py:243: in send
raise exc from None
starlette/starlette/testclient.py:240: in send
loop.run_until_complete(self.app(scope, receive, send))
python3/src/Lib/asyncio/base_events.py:563: in run_until_complete
self._check_runnung()
python3/src/Lib/asyncio/base_events.py:523: in _check_runnung
raise RuntimeError('This event loop is already running')
```
Can you give me advice on how to do it correctly?
| closed | 2020-07-09T17:23:42Z | 2020-08-25T16:00:03Z | https://github.com/python-gino/gino/issues/706 | [] | Smosker | 2 |
open-mmlab/mmdetection | pytorch | 11,482 | CO-DETR Demo Error: 'list' object has no attribute 'chunked_size' | **Checklist**
1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has not been fixed in the latest version.
**Describe the bug**
A simple demo with [CO-DETR](https://github.com/open-mmlab/mmdetection/tree/main/projects/CO-DETR) throws the following error.
```
The model and loaded state dict do not match exactly
unexpected key in source state_dict: query_head.label_embedding.weight
missing keys in source state_dict: query_head.dn_generator.label_embedding.weight
/longdata/anurag_storage/2PCNet/LLIE/mmdetection/mmdet/apis/det_inferencer.py:130: UserWarning: dataset_meta or class names are not saved in the checkpoint's meta data, use COCO classes by default.
warnings.warn(
02/19 14:29:41 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "function" registry tree. As a workaround, the current "function" registry in "mmengine" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
/home/aghosh/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the `save_dir` argument.
warnings.warn(f'Failed to add {vis_backend.__class__}, '
Traceback (most recent call last):
File "demo/image_demo.py", line 192, in <module>
main()
File "demo/image_demo.py", line 182, in main
inferencer.model.test_cfg.chunked_size = chunked_size
AttributeError: 'list' object has no attribute 'chunked_size'
```
**Reproduction**
```
python demo/image_demo.py \
demo/demo.jpg \
projects/CO-DETR/configs/codino/co_dino_5scale_swin_l_lsj_16xb1_3x_coco.py \
--weights pretrained/co_dino_5scale_lsj_swin_large_1x_coco-3af73af2.pth
```
| open | 2024-02-19T19:32:52Z | 2024-11-30T04:02:53Z | https://github.com/open-mmlab/mmdetection/issues/11482 | [] | ShenZheng2000 | 3 |
joeyespo/grip | flask | 62 | GFM Link problems | It seems the inline links do not work as on github. Example:
```
### Defined Type: patterndb::update
[update](#defined-type-patterndbupdate)
```
| closed | 2014-06-17T08:08:24Z | 2014-06-29T12:02:53Z | https://github.com/joeyespo/grip/issues/62 | [
"duplicate"
] | faxm0dem | 2 |
plotly/dash-table | dash | 137 | Install trouble on linux. | Running `python setup.py sdist` from the latest version on master, I try to install on a centos docker and I get this error:
```
Running setup.py install for dash-table ... error
Complete output from command /usr/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-7oIJun-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-O4r71B-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-7oIJun-build/setup.py", line 17, in <module>
install_requires=[]
File "/usr/lib64/python2.7/distutils/core.py", line 152, in setup
dist.run_commands()
File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/site-packages/setuptools/command/install.py", line 53, in run
return _install.run(self)
File "/usr/lib64/python2.7/distutils/command/install.py", line 563, in run
self.run_command('build')
File "/usr/lib64/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib64/python2.7/distutils/command/build.py", line 127, in run
self.run_command(cmd_name)
File "/usr/lib64/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/site-packages/setuptools/command/build_py.py", line 89, in run
self.build_packages()
File "/usr/lib64/python2.7/distutils/command/build_py.py", line 372, in build_packages
self.build_module(module, module_file, package)
File "/usr/lib/python2.7/site-packages/setuptools/command/build_py.py", line 106, in build_module
outfile, copied = _build_py.build_module(self, module, module_file, package)
File "/usr/lib64/python2.7/distutils/command/build_py.py", line 333, in build_module
"'package' must be a string (dot-separated), list, or tuple")
TypeError: 'package' must be a string (dot-separated), list, or tuple
----------------------------------------
Command "/usr/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-7oIJun-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-O4r71B-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-7oIJun-build/
```
Changing `setup.py` packages to `['dash_table']` solved the problem.
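For reference, a minimal sketch of what the corrected `packages` argument looks like; everything apart from `packages=['dash_table']` is a placeholder rather than the real dash-table metadata:

```python
from setuptools import setup

setup(
    name="dash_table",            # placeholder project name
    version="0.0.0",              # placeholder version
    packages=["dash_table"],      # a list/tuple of dotted package names, not a bare module object
    include_package_data=True,
    install_requires=[],
)
```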
| closed | 2018-10-12T18:09:40Z | 2018-10-31T18:59:53Z | https://github.com/plotly/dash-table/issues/137 | [] | T4rk1n | 4 |
pyg-team/pytorch_geometric | pytorch | 9,351 | Temporal Data set from TUDataset | ### Discussed in https://github.com/pyg-team/pytorch_geometric/discussions/9323
Originally posted by **xavierallem** on May 16, 2024:
I am trying to use
```python
import os
import os.path as osp
from torch_geometric.datasets import TUDataset

current_dir = os.getcwd()
path = osp.join(current_dir, '../../Data/INFECTIOUS')
dataset = TUDataset(path, name='infectious_ct1', use_node_attr=True, use_edge_attr=True).shuffle()
```
and I get the following error:
```
ValueError: expected sequence of length 4 at dim 1 (got 2)
```
Has this been fixed? I have seen a similar issue dating back to 2020.
| open | 2024-05-22T13:49:06Z | 2024-05-22T13:49:07Z | https://github.com/pyg-team/pytorch_geometric/issues/9351 | [] | rusty1s | 0 |
ultralytics/yolov5 | machine-learning | 13,303 | Error During TensorFlow SavedModel and TFLite Export: TFDetect.__init__() got multiple values for argument 'w' and 'NoneType' object has no attribute 'outputs' | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I encountered errors while attempting to export a YOLOv5 model to TensorFlow SavedModel and TFLite formats. The model is a YOLOv5 with FPN, and the export process fails with the following errors:
`TensorFlow SavedModel: export failure ❌ 1.5s: TFDetect.__init__() got multiple values for argument 'w'`
```
TensorFlow Lite: export failure ❌ 0.0s: 'NoneType' object has no attribute 'call'
Traceback (most recent call last):
  File "/home/ai/Masood/Pipes/yolov5_old/export.py", line 1542, in <module>
    main(opt)
  File "/home/ai/Masood/Pipes/yolov5_old/export.py", line 1537, in main
    run(**vars(opt))
  File "/home/ai/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/ai/Masood/Pipes/yolov5_old/export.py", line 1450, in run
    add_tflite_metadata(f[8] or f[7], metadata, num_outputs=len(s_model.outputs))
AttributeError: 'NoneType' object has no attribute 'outputs'
```
### Additional
```yaml
# yolov5fpn.yaml
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [5, 7, 10, 13, 16, 20] # P2/4
- [57.5, 42.0, 46.99, 36.0, 23.99, 17.5] # P3/8
- [30, 61, 62, 45, 59, 119] # P4/16
- [152, 110, 165, 115, 181, 120] # P5/32
### YOLOv5 v6.0 backbone
backbone:
[
[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 6, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 3, C3, [1024]],
[-1, 1, SPPF, [1024, 5]], # 9
]
### YOLOv5 v6.0 FPN head
head: [
[-1, 3, C3, [1024, False]], # 10 (P5/32-large)
[-1, 1, nn.Upsample, [None, 2, "nearest"]],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 1, Conv, [512, 1, 1]],
[-1, 3, C3, [512, False]], # 14 (P4/16-medium)
[-1, 1, nn.Upsample, [None, 2, "nearest"]],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 1, Conv, [256, 1, 1]],
[-1, 3, C3, [256, False]], # 18 (P3/8-small)
# Add a new layer for P2/4 detection
[-1, 1, nn.Upsample, [None, 2, "nearest"]],
[[-1, 2], 1, Concat, [1]], # cat backbone P2
[-1, 1, Conv, [128, 1, 1]],
[-1, 3, C3, [128, False]], # 22 P2/4-small
# [[18, 14, 10], 1, Detect, [nc, anchors, [128, 256, 512, 1024]]], # Detect(P3, P4, P5)
[[22, 18, 14, 10], 1, Detect, [nc, anchors, [128, 256, 512, 1024]]] # Detect(P2, P3, P4, P5)
]
```
| open | 2024-09-09T06:07:09Z | 2024-10-27T13:30:39Z | https://github.com/ultralytics/yolov5/issues/13303 | [
"question"
] | computerVision3 | 1 |
PaddlePaddle/models | computer-vision | 5,131 | FatalError: A serious error (Segmentation fault) is detected by the operating system. (at /paddle/paddle/fluid/platform/init.cc:303) | Hi everyone, while following [this tutorial](https://github.com/PaddlePaddle/models/blob/release/2.0-beta/PaddleCV/video/application/video_tag/Run.md) to run the video_tag sample code, I ran into the error below. Does anyone know what might be causing it? Thanks!
```
(base) user@user-TUF-Gaming-FX506LU-FX506LU:~/Repo/PaddlePaddle/models/PaddleCV/video/application/video_tag$ python videotag_test.py
Namespace(extractor_config='configs/tsn.yaml', extractor_name='TSN', extractor_weights='weights/tsn', filelist='./data/VideoTag_test.list', label_file='label_3396.txt', predictor_config='configs/attention_lstm.yaml', predictor_name='AttentionLSTM', predictor_weights='weights/attention_lstm', save_dir='data/VideoTag_results', use_gpu=True)
[INFO: videotag_test.py: 240]: Namespace(extractor_config='configs/tsn.yaml', extractor_name='TSN', extractor_weights='weights/tsn', filelist='./data/VideoTag_test.list', label_file='label_3396.txt', predictor_config='configs/attention_lstm.yaml', predictor_name='AttentionLSTM', predictor_weights='weights/attention_lstm', save_dir='data/VideoTag_results', use_gpu=True)
W1222 17:25:32.329594 11924 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 75, Driver API Version: 11.1, Runtime API Version: 10.2
W1222 17:25:32.357901 11924 device_context.cc:346] device: 0, cuDNN Version: 8.0.
[INFO: videotag_test.py: 138]: load extractor weights from weights/tsn
[INFO: tsn.py: 155]: Load pretrain weights from weights/tsn, exclude fc layer.
===pretrain=== weights/tsn
--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0 paddle::framework::SignalHandle(char const*, int)
1 paddle::platform::GetCurrentTraceBackString[abi:cxx11]()
----------------------
Error Message Summary:
----------------------
FatalError: A serious error (Segmentation fault) is detected by the operating system. (at /paddle/paddle/fluid/platform/init.cc:303)
[TimeInfo: *** Aborted at 1608629142 (unix time) try "date -d @1608629142" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x0) received by PID 11924 (TID 0x7f5a6801a740) from PID 0 ***]
段错误 (核心已转储)
```
My system is Ubuntu 18.04 and the PaddlePaddle version is 2.0.0rc0.
| open | 2020-12-22T09:33:11Z | 2024-02-26T05:09:39Z | https://github.com/PaddlePaddle/models/issues/5131 | [] | wwdok | 4 |
huggingface/datasets | deep-learning | 7,066 | One subset per file in repo ? | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
├── animals.jsonl
├── trees.jsonl
└── metadata.jsonl
```
It would be nice to detect those subsets automatically using a simple heuristic. For example we can group files together if their paths names are the same except some digits ? | open | 2024-07-23T12:43:59Z | 2024-07-23T12:43:59Z | https://github.com/huggingface/datasets/issues/7066 | [] | lhoestq | 0 |
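A minimal sketch of the digit-stripping heuristic described above; the function name and regex are illustrative only, not an existing `datasets` API:

```python
import re
from collections import defaultdict

def group_files_into_subsets(filenames):
    """Group files whose names differ only by digits into the same subset."""
    subsets = defaultdict(list)
    for name in filenames:
        # "train0.jsonl" and "train1.jsonl" both map to the key "train.jsonl"
        key = re.sub(r"\d+", "", name)
        subsets[key].append(name)
    return dict(subsets)

print(group_files_into_subsets(["train0.jsonl", "train1.jsonl", "train2.jsonl"]))
# {'train.jsonl': ['train0.jsonl', 'train1.jsonl', 'train2.jsonl']}  -> one subset
print(group_files_into_subsets(["animals.jsonl", "trees.jsonl", "metadata.jsonl"]))
# three distinct keys -> three subsets
```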
dask/dask | pandas | 10,903 | gpuCI failing | `dask/dataframe/tests/test_dataframe.py::test_to_datetime[True]` is failing consistently in our gpuCI build with
```python
10:11:09 ____________________________ test_to_datetime[True] ____________________________
10:11:09 [gw0] linux -- Python 3.10.13 /opt/conda/envs/dask/bin/python3.10
10:11:09
10:11:09 gpu = True
10:11:09
10:11:09 @pytest.mark.parametrize("gpu", [False, pytest.param(True, marks=pytest.mark.gpu)])
10:11:09 def test_to_datetime(gpu):
10:11:09 xd = pd if not gpu else pytest.importorskip("cudf")
10:11:09
10:11:09 # meta dtype is inconsistent for cuDF-backed frames
10:11:09 check_dtype = not gpu
10:11:09
10:11:09 df = xd.DataFrame({"year": [2015, 2016], "month": ["2", "3"], "day": [4, 5]})
10:11:09 df.index.name = "ix"
10:11:09 ddf = dd.from_pandas(df, npartitions=2)
10:11:09
10:11:09 assert_eq(xd.to_datetime(df), dd.to_datetime(ddf), check_dtype=check_dtype)
10:11:09 assert_eq(xd.to_datetime(df), dd.to_datetime(df), check_dtype=check_dtype)
10:11:09
10:11:09 s = xd.Series(
10:11:09 ["3/11/2000", "3/12/2000", "3/13/2000"] * 100,
10:11:09 index=["3/11/2000", "3/12/2000", "3/13/2000"] * 100,
10:11:09 )
10:11:09 ds = dd.from_pandas(s, npartitions=10, sort=False)
10:11:09
10:11:09 # infer_datetime_format is not supported anymore in dask-expr
10:11:09 if not DASK_EXPR_ENABLED:
10:11:09 if PANDAS_GE_200:
10:11:09 ctx = pytest.warns(
10:11:09 UserWarning, match="'infer_datetime_format' is deprecated"
10:11:09 )
10:11:09 else:
10:11:09 ctx = contextlib.nullcontext()
10:11:09
10:11:09 > with ctx:
10:11:09 E Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted.
10:11:09 E Emitted warnings: [].
10:11:09
10:11:09 dask/dataframe/tests/test_dataframe.py:5152: Failed
```
[Here is an example CI build](https://gpuci.gpuopenanalytics.com/job/dask/job/dask/job/prb/job/dask-prb/5620/CUDA_VER=11.8.0,LINUX_VER=ubuntu20.04,PYTHON_VER=3.10,RAPIDS_VER=24.04/console)
cc @charlesbluca @phofl | closed | 2024-02-06T16:28:46Z | 2024-02-07T15:00:34Z | https://github.com/dask/dask/issues/10903 | [
"dataframe",
"tests",
"gpu"
] | jrbourbeau | 2 |
explosion/spaCy | nlp | 13,622 | 3.7.6 does not have wheel for linux aarch64 / arm64 | - https://pypi.org/project/spacy/3.7.6/#files
- https://pypi.org/project/spacy/3.7.5/#files
3.7.5 provides wheels for linux aarch64 for various python versions, but 3.7.6 does not have any wheels for linux aarch64.
Is it intended? Couldn't find related info on changelogs. | open | 2024-09-10T01:39:12Z | 2025-03-19T15:19:02Z | https://github.com/explosion/spaCy/issues/13622 | [] | chulkilee | 5 |
httpie/cli | python | 762 | Package not found on archlinux | Hi,
I can't find the package on Arch Linux with `pacman -S httpie`. Any thoughts on why?
```sh
42sh$ uname -a
Linux toblerone 5.0.3-arch1-1-ARCH #1 SMP PREEMPT Tue Mar 19 13:09:13 UTC 2019 x86_64 GNU/Linux
```
Thank you!
| closed | 2019-03-22T10:31:15Z | 2019-03-22T19:55:44Z | https://github.com/httpie/cli/issues/762 | [] | nilscox | 1 |
AirtestProject/Airtest | automation | 894 | Recognized in snapshot+recognition, but not recognized when the script runs | (Please fill in the sections below as completely as possible; it helps us locate and resolve the problem quickly. Thanks for your cooperation; otherwise the issue will be closed directly.)
**(Important! Issue category)**
* AirtestIDE usage problems in the test/development environment -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, UI-tree structure, or poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition or device-control problems -> follow the steps below
**Describe the bug**
(Concisely and clearly summarize the problem you ran into, or paste the error traceback.)
```
[12:52:59][INFO]<airtest.core.api> Try finding: Template(D:\Desktop\sw.air\tpl1618894192938.png)
[12:53:00][DEBUG]<airtest.core.api> resize: (123, 23)->(1, 1), resolution: (1024, 768)=>(1029, 0)
[12:53:00][DEBUG]<airtest.core.api> try match with SURFMatching
[12:53:00][DEBUG]<airtest.aircv.keypoint_base> find_best_result() run time is 0.00 s.
[12:53:00][DEBUG]<airtest.core.api> try match with TemplateMatching
[12:53:00][DEBUG]<airtest.core.api> 'error: in template match, found im_search bigger than im_source.'
[12:53:00][DEBUG]<airtest.core.api> try match with BRISKMatching
[12:53:00][DEBUG]<airtest.aircv.keypoint_base> find_best_result() run time is 0.00 s.
[12:53:00][DEBUG]<airtest.core.api> match result: None
```
**Steps to reproduce**
**Expected behavior**
I expect the script to click the Windows application automatically at runtime, but it does not.
When I click the code in the IDE, the target is recognized correctly in "snapshot+recognition", but recognition fails when the script actually runs.
target_Image: https://github.com/1999single/ProjectIssue/blob/master/airTest/target.png
source_Image: https://github.com/1999single/ProjectIssue/blob/master/airTest/source.png
snapshot+recognition_Image: https://github.com/1999single/ProjectIssue/blob/master/airTest/Snapshot%2BRecognition.png
result_img: https://github.com/1999single/ProjectIssue/blob/master/airTest/result.png
| open | 2021-04-20T05:08:08Z | 2021-04-21T06:22:34Z | https://github.com/AirtestProject/Airtest/issues/894 | [] | 1999single | 1 |
lanpa/tensorboardX | numpy | 58 | Cat issue in add_embedding with cuda tensor | In line number: https://github.com/lanpa/tensorboard-pytorch/blob/master/tensorboardX/embedding.py#L20
If a CUDA tensor is passed it will throw an error, because torch.randn will generate a CPU tensor and the two won't be the same type. Either throw an explicit warning, or check the type of `label_img` and create the rand tensor on the matching device.
I solved it by converting passed tensor to cpu for now. | closed | 2017-12-14T23:34:43Z | 2017-12-29T12:00:10Z | https://github.com/lanpa/tensorboardX/issues/58 | [] | apsdehal | 1 |
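A small sketch of the two workarounds mentioned in this issue; the variable names and shapes are made up for illustration, and this is not the actual tensorboardX code:

```python
import torch

features = torch.randn(100, 5).cuda()          # example embedding matrix (assumed shape)
label_img = torch.rand(100, 3, 32, 32).cuda()  # example label images (assumed shape)

# Caller-side workaround: move the tensors to CPU before calling add_embedding.
# writer.add_embedding(features.cpu(), label_img=label_img.cpu())

# Library-side fix: create the comparison/random tensor on the same device as
# label_img instead of implicitly on the CPU.
def rand_like_device(label_img: torch.Tensor) -> torch.Tensor:
    return torch.randn(label_img.size(), device=label_img.device)
```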
dunossauro/fastapi-do-zero | pydantic | 328 | Minha colaboração | | Link do projeto | Seu @ no git | Comentário (opcional) |
|-------------|:-------------:|:-------------:|
|[Fast_zero](https://github.com/AndrewGRM/fast_zero) | [@AndrewGRM](https://github.com/AndrewGRM) | Estou começando o projeto, espero aprender muito 💯 | | closed | 2025-02-28T03:00:06Z | 2025-03-01T00:18:28Z | https://github.com/dunossauro/fastapi-do-zero/issues/328 | [] | AndrewGRM | 2 |
streamlit/streamlit | python | 10,448 | Vega-lite selection interval shows "true" tooltip | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
When adding a selection interval to an altair plot, the selection rectangle has a tooltip "true". There should not be a tooltip for the selection mark.
<img width="806" alt="Image" src="https://github.com/user-attachments/assets/109a9cee-7431-4df1-9e1e-b28946279c01" />
### Reproducible Code Example
```Python
import altair as alt
import streamlit as st
from vega_datasets import data
# Load data
cars = data.cars()
# Selection parameter
selection = alt.selection_interval()
# Create scatter plot
scatter_plot = (
alt.Chart(cars)
.mark_circle()
.encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
tooltip=['Name', 'Horsepower', 'Miles_per_Gallon']
)
.add_params(selection)
.properties(title='Horsepower vs. Miles per Gallon')
)
st.altair_chart(scatter_plot)
```
### Steps To Reproduce
1. Click and drag to create selection rectangle
2. Hover over the selection rectangle
### Expected Behavior
The drag cursor shows (and there is no tooltip).
### Current Behavior
The drag cursor shows along with the tooltip "true".
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.1
- Python version: 3.12
- Operating System: MacOS
- Browser: Chrome
### Additional Information
I see that the Streamlit vega-lite theme enables tooltips by default. I was looking at https://github.com/vega/vega-lite/issues/6522, which led me to tinker with the vega-lite config. I think all that is needed is to update
https://github.com/streamlit/streamlit/blob/596657b3d2bd64d33bb8b3076b4cffc965ea4700/frontend/lib/src/components/elements/ArrowVegaLiteChart/CustomTheme.tsx#L124
to
```ts
tooltip: {content: "encoding"}
```
I can reproduce the behavior in the vega editor.
See [example I](https://vega.github.io/editor/#/url/vega-lite/N4Igxg9gdgZglgcxALlAWwIYCcDWLQAuEEANgXAA77jQECmUBKIDkAJnFEgL68A0INhgIZqAVywlmQkQHow2AM4A6AFaLoIbgMy5qBAJ4U6zMHCxgSJ7SyjtOSVDRIQs1eHRJtmAeSyJOEAFDY2YoCDRODCkbIlJyKmQAbVAPL2YAOQw0E2CjE2QQcMioaK0+VLhPb0KACVdFOgoIAHc6NzzQwoBHMQxGOBFyADdrCpA0mpAAWTgrRQB9YywFgHFolyggkBCCkF7+8iG4Ua0AXQEAD3cq9LqGptb27d3mA4Hj05sDG+rmWfmS3aaw2mk6e3eR2EJ2ssUGVmY9SwjWabSwAAJhip0QC6Ip0ct0esSJtthRsNlFCgUkVsntyVhsgsAIwANm2jSsYCYTlehU49CwwzKvAuIAAJIowAALOiYZjSggECiKZCyWSjBAYZQIQbSsQAI2UcAgsilsswGroWoAtCRBnQNQBWZQAJgADMpmWoNFtuEA) with config `{"mark": {"tooltip": {"content": "encoding"}}}` vs. [example II](https://vega.github.io/editor/#/url/vega-lite/N4Igxg9gdgZglgcxALlAWwIYCcDWLQAuEEANgXAA4oFYCuApgL6MA0IAJhgRviLViRQcuGAPRhsAZwB0AK0nQQrEJly8CATwr0hYOFjAkdy+lEjs4UJKnCkIWXvHol2QgPJZElkG03ahUBBolhiCykSk5FTIANqgTi5CAHIYaDq+WjrIIIHBUKFKLPFwzq7ZABL2kvQUEADu9A4Z-tkAjrQYUOTc5ABuxkUgCWUgALJwRpIA+tpYUwDioSSKzVkg7Z3dXHD9SgC6bAAejiWJFVU19Y0+IH5rG11wPTsDIBonpULjkzONC0srW6ZIQPLZ9YzhJ5GISVLDVWoNLAAAl6MiR33okiRsyRixIyygNwo2FSkhQcRyqTWxKwqSmAEYAGw3apGMAEdTA7KWAiNXoFZgHEAAEkkYAAFvRMEJxQQCBRJMhRKJ+ggMNIEE9xbQAEbSOAQURiyWYFX0NUAWhIT3oKoArNIAEwABmk9LkCkJjCAA) with config `{"mark": {"tooltip": true}}`.
Example II has the "true" tooltip, while example I works as expected.
Example II:
<img width="436" alt="Example II" src="https://github.com/user-attachments/assets/25426386-0629-489d-90ce-d92928cd1fd2" />
| closed | 2025-02-19T15:21:41Z | 2025-02-20T23:23:22Z | https://github.com/streamlit/streamlit/issues/10448 | [
"type:bug",
"status:confirmed",
"priority:P3",
"feature:st.altair_chart"
] | foldager | 2 |
pallets/quart | asyncio | 10 | reading from redis queue and sending data to webpage | I got quart websocket example running on Python 3.6.4. Now I want to read data from a redis queue and send it to my webpage.
I also will receive data from the web page and call functions within a class or if easier, put it into an outgoing redis queue.
How can I best implement it in quart server?
I would very much appreciate your help in this matter
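A minimal sketch of one way this could be wired up, assuming a Redis list named "incoming" for messages going to the page and one named "outgoing" for messages coming back; the queue names and the `redis.asyncio` client are assumptions, not part of the original question:

```python
import asyncio

import redis.asyncio as aioredis  # any asyncio-capable Redis client works similarly
from quart import Quart, websocket

app = Quart(__name__)
r = aioredis.Redis()

@app.websocket("/ws")
async def ws():
    async def page_to_redis():
        # messages from the browser are pushed onto an outgoing Redis queue
        while True:
            data = await websocket.receive()
            await r.rpush("outgoing", data)

    async def redis_to_page():
        # items popped from the incoming Redis queue are forwarded to the browser
        while True:
            item = await r.blpop("incoming", timeout=1)
            if item is not None:
                await websocket.send(item[1].decode())

    await asyncio.gather(page_to_redis(), redis_to_page())

if __name__ == "__main__":
    app.run()
```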
| closed | 2018-02-28T13:21:28Z | 2022-07-07T00:22:51Z | https://github.com/pallets/quart/issues/10 | [] | juntiedt2 | 4 |
openapi-generators/openapi-python-client | rest-api | 715 | Support multi-file schemas | Hello,
I'm new to OpenAPI and Schemas world so it could be possible that I'm doing something wrong.
I'm trying to add '#ref' to components in openapi.json to link to another file.
for example:
"components": {
"schemas": {
"User": {
"$ref": "./schemas/User"
}
}
And I'm getting the errors:
Remote references such as ././.. are not supported yet.
and Reference schemas are not supported.
Can I link schema files some other way? I don't want to write all my schemas in the openapi.json file.
Thanks in advance, and thank you for this project.
| closed | 2023-01-01T13:51:46Z | 2023-08-13T00:55:47Z | https://github.com/openapi-generators/openapi-python-client/issues/715 | [
"✨ enhancement",
"🍭 OpenAPI Compliance"
] | axshani | 2 |
coqui-ai/TTS | python | 3,745 | [Bug] Anyway to run this as docker-compose ? | ### Describe the bug
Is there any way to run this with docker-compose?
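A hedged docker-compose sketch of what this might look like; the image name, port, and server command are assumptions based on the project's `docker run` instructions, not an officially supported compose file:

```yaml
# docker-compose.yml (sketch)
services:
  tts:
    image: ghcr.io/coqui-ai/tts-cpu   # assumed published CPU image; swap for the GPU image if needed
    ports:
      - "5002:5002"                   # assumed default port of the demo server
    entrypoint: python3
    command: TTS/server/server.py --model_name tts_models/en/ljspeech/tacotron2-DDC
```

With a file like this, `docker compose up` would take the place of the manual `docker run` invocation.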
### To Reproduce
docker-compose up
### Expected behavior
N/A
### Logs
```shell
N/A
```
### Environment
```shell
N/A
```
### Additional context
N/A | closed | 2024-05-16T23:08:26Z | 2024-07-27T06:33:40Z | https://github.com/coqui-ai/TTS/issues/3745 | [
"bug",
"wontfix"
] | PeterTucker | 4 |
explosion/spaCy | deep-learning | 12,000 | spacy package CLI command accepts list of code_paths, but the others do not | I recently started a new spaCy project and decided not to create a separate Python module with all the custom code. While I can pass a comma-separated list of code paths to the [spacy package](https://spacy.io/api/cli#package) command, other CLI commands such as [spacy train](https://spacy.io/api/cli#train) and [spacy assemble](https://spacy.io/api/cli#assemble) only accept a single value for the `--code` option. This makes it impossible to build a project with more than one code file, even though it's possible to assemble one with multiple files.
It would be really helpful if all the spaCy CLI commands accepted a comma-separated list for the `--code` option. Otherwise, all the code has to be stuffed into a single file.
The `--code` option exists in the following commands:
* [debug](https://spacy.io/api/cli#debug)
* [train](https://spacy.io/api/cli#train)
* [pretrain](https://spacy.io/api/cli#pretrain)
* [evaluate](https://spacy.io/api/cli#evaluate)
* [assemble](https://spacy.io/api/cli#assemble)
* [package](https://spacy.io/api/cli#package) (accepts comma-separated list)
## How to reproduce the behaviour
**OK**:
```sh
spacy package --code file_a.py,file_b.py …
```
The comma-separated value to the `--code` option is split and all the code files are loaded ([package.py#L48](https://github.com/explosion/spaCy/blob/18ffe5bbd6a554920107ff48d1387df34c3f872a/spacy/cli/package.py#L48)).
**Not OK**:
```sh
spacy assemble --code file_a.py,file_b.py …
Path to Python code not found
```
The comma-separated value to the `--code` option is used as the literal path, which fails to load ([assemble.py#L41](https://github.com/explosion/spaCy/blob/18ffe5bbd6a554920107ff48d1387df34c3f872a/spacy/cli/assemble.py#L41)).
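Until the other commands accept a list, one workaround (an assumption on my part, not an official spaCy recommendation) is a single entry-point module that imports the rest, since importing is enough to register custom components; then `--code` only needs one path, e.g. `spacy assemble --code code_entry.py …`:

```python
# code_entry.py, a hypothetical aggregator module for this project's own files
import file_a  # noqa: F401  (side effect: registers its custom factories/components)
import file_b  # noqa: F401
```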
## Your Environment
* Operating System: macOS Ventura 13.1 (22C65)
* Python Version Used: 3.10.9
* spaCy Version Used: 3.4.3
* Environment Information:
| open | 2022-12-19T17:01:23Z | 2022-12-20T08:03:13Z | https://github.com/explosion/spaCy/issues/12000 | [
"enhancement",
"feat / cli",
"feat / ux"
] | kinghuang | 2 |
litestar-org/litestar | asyncio | 3,646 | Bug: order of types in openapi spec is not consistent in json rendering | ### Description
We are seeing the order of types change in openapi generation, which makes comparing golden versions of the openapi spec problematic. I think the specific problem we are seeing comes from https://github.com/litestar-org/litestar/blob/ffaf5616b19f6f0f4128209c8b49dbcb41568aa2/litestar/_openapi/schema_generation/schema.py#L160 where we use the `set` operation to uniquify the list of types. The order doesn't matter to the correctness of the openapi spec, so perhaps the responsibility for ensuring a determistic spec file could also come from the serializer, but either way it would be helpful if we could always render the same openapi spec the same way.
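A minimal sketch of the kind of change being described (illustrative only, not the actual Litestar code): de-duplicate without destroying order, or sort so the output is deterministic.

```python
def unique_preserving_order(types):
    """De-duplicate while keeping first-seen order, so repeated spec generation is stable."""
    seen = set()
    ordered = []
    for t in types:
        if t not in seen:
            seen.add(t)
            ordered.append(t)
    return ordered

# Alternatively, if relative order is irrelevant, sort for determinism:
# sorted(set(types), key=str)
```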
### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.9.1
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-07-25T13:57:31Z | 2025-03-20T15:54:50Z | https://github.com/litestar-org/litestar/issues/3646 | [
"Bug :bug:",
"OpenAPI"
] | atom-andrew | 2 |
ageitgey/face_recognition | python | 751 | Running face_encodings in parallel gives RuntimeError: cudaGetDevice()... reason: initialization error | * face_recognition version: 1.2.3
* Python version: 3.7.2
* Operating System: Scientific Linux 7.6
---
I have dlib installed with GPU support.
```
>>> import dlib
>>> dlib.DLIB_USE_CUDA
True
```
I'm using `batch_face_locations()` to get image locations. Then for each location that was pulled out of the batch, I'm getting the encodings using `face_encodings()`. I time both of these operations, and the time get the encodings is about 3x longer than the time to get the locations. I supposed that I could speed up the time to get the encodings by getting them all in parallel. So I tried something like this:
```
import multiprocessing as mp
import face_recognition
def get_encoding(frame, face_locations, return_queue):
encode = face_recognition.face_encodings(frame, face_locations)
return_queue.put(encode)
all_batch_face_locations = ... # the frames and associated batch_face_locations returned for all images in my dataset
encodings = []
for frames, batch_face_locs in all_batch_face_locations:
# get the encodings for the current batch of images in parallel
procs = []
queues = []
for frame_number_in_batch, face_locations in enumerate(batch_face_locs):
q = mp.Queue()
p = mp.Process(
target=get_encoding,
args=(frames[frame_number_in_batch], face_locations, q))
p.start()
procs.append(p)
queues.append(q)
for p, q in zip(procs, queues):
p.join()
encoding = q.get()
encodings.append(encoding)
```
Yet this gives me an error:
```
...
RuntimeError: Error while calling cudaGetDevice(&the_device_id) in file /tmp/pip-install-2vh9r_rp/dlib/dlib/cuda/gpu_data.cpp:178. code: 3, reason: initialization error
```
Now I can't actually find anywhere that says that `face_recognition.face_encodings()` uses the GPU. Even the [dlib documentation](http://dlib.net/python/index.html#dlib.shape_predictor) for the function that face_recognition eventually calls doesn't mention it. But it seems to be using it nonetheless.
I see references in other issues (#98 #374 #649) to running `face_encodings()` on multiple CPU cores, and I'd at least like to try and experiment with that to see if I can get some improvement. Is there something I'm missing to allow me to run `batch_face_locations` on the GPU and `face_encodings` on the CPU? Or, if not, is there some way to also run the encodings on the GPU in batches?
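One likely culprit for the `cudaGetDevice ... initialization error` (an assumption on my part, not something confirmed in this issue) is that CUDA cannot be re-initialized in child processes created with the default `fork` start method once the parent process has touched the GPU. A hedged sketch of the usual workaround is to use the `spawn` start method so each worker gets a fresh CUDA/dlib context:

```python
import multiprocessing as mp

if __name__ == "__main__":
    # 'spawn' starts fresh interpreters instead of forking the CUDA-initialized parent
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    p = ctx.Process(target=get_encoding, args=(frame, face_locations, q))
    p.start()
    encoding = q.get()
    p.join()
```

Note that `spawn` requires the arguments (the frame and face locations here) to be picklable, and each worker pays the library-loading cost again.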
| open | 2019-02-21T21:00:04Z | 2021-04-05T22:49:55Z | https://github.com/ageitgey/face_recognition/issues/751 | [] | dav-ell | 4 |
qubvel-org/segmentation_models.pytorch | computer-vision | 723 | Pretrained Download Error | hello.!
I am downloading the mit_b3 pretrained model, but the speed is too slow and I keep getting disconnected while downloading it. Is there any solution?
(I checked, and all of the Mix Vision Transformer pretrained models have the same issue.)
I appreciate your time and assistance. Thank you. | closed | 2023-03-03T09:02:33Z | 2023-03-06T13:06:42Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/723 | [] | Junhyuk93 | 2 |
tableau/server-client-python | rest-api | 1,230 | [Help Wanted] TSC ServerInfo call and SignOut call Hanging/Freezing after TSC Version Upgrade | **Describe the bug**
After upgrading TSC from 0.19.0 to 0.22 the below lines of code cause a hanging/freezing behavior
(after experimenting I have determined these are the versions where the behavior changes, I'm actually trying to upgrade from 0.14.1 to 0.24)
I also tested all the way up to 0.25, same issue
I have to upgrade the TSC version due to a seperate issue.
Scenario 1:
server = TSC.Server(serverUrl, use_server_version = True) --- Freezes here
tableau_auth = TSC.TableauAuth(tableauAcct, tableauPwd)
server.auth.sign_in(tableau_auth)
server.auth.sign_out()
Scenario 2:
server = TSC.Server(serverUrl, use_server_version = False)
server.version = "3.17"
tableau_auth = TSC.TableauAuth(tableauAcct, tableauPwd)
server.auth.sign_in(tableau_auth)
server.auth.sign_out() --- Freezes here
**Versions**
Details of your environment, including:
- Tableau Server version (or note if using Tableau Online) API: 3.17 Server: 2022.3.5
- Python version : 3.7
- TSC library version: 0.19.0 is working, 0.22 is not
- Its running in a python flask app in a docker container: Flask version: 2.2.4
I can provide my dockerfile or requirements.txt if you think that would be helpful
**To Reproduce**
Steps to reproduce the behavior. Please include a code snippet where possible.
Run Above lines of code after deploying
**Results**
What are the results or error messages received?
Execution freezes/hangs
After about 5 ish minutes I get this error
('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
**NOTE:** Be careful not to post user names, passwords, auth tokens or any other private or sensitive information.
| closed | 2023-05-02T15:09:12Z | 2023-06-16T23:21:14Z | https://github.com/tableau/server-client-python/issues/1230 | [
"help wanted"
] | caseycourtneycu | 3 |
voxel51/fiftyone | data-science | 5,335 | [BUG] evaluate_detections() is skipping detections | ### Describe the problem
Hi ! Thanks for this awesome project !
I've been using 51 for quite a while now. I recently tried the `evaluate_detections() `method with `fiftyone.core.labels.Detections` and `COCO` evaluation method, which is very powerful.
When I upgraded Fiftyone to 1.1.0 (and also in 1.2.0), I've noticed a strange behavior that I can not explain.
It looks like some predictions are not evaluated: their `eval_id` remains empty, and the detection is evaluated as neither a `fp` nor a `tp`. When I investigated, it seems to affect all the predictions that are false positives on images that also contain a ground-truth object (i.e. non-negative images).
### Code to reproduce issue
This is how I created the evaluation:
```python
dataset = fo.load_dataset('my_dataset')
dataset.evaluate_detections(
pred_field='preds',
gt_field="ground_truth",
eval_key=f"eval_test",
classwise=False,
compute_mAP=False,
iou=0.5,
)
```
### System information
- **OS Platform and Distribution** : Linux Ubuntu 22.04
- **Python version** (`python --version`): Python 3.10.13
- **FiftyOne version** (`fiftyone --version`): 1.2.0
- **FiftyOne installed from** (pip or source): poetry
### Other info/logs
Include any logs or source code that would be helpful to diagnose the problem.
If including tracebacks, please include the full traceback. Large logs and
files should be attached. Please do not use screenshots for sharing text. Code
snippets should be used instead when providing tracebacks, logs, etc.
### Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another
member of your organization be willing to contribute a fix for this bug to the
FiftyOne codebase?
- [X] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community
- [ ] No. I cannot contribute a bug fix at this time
| open | 2025-01-02T15:30:40Z | 2025-02-03T20:39:38Z | https://github.com/voxel51/fiftyone/issues/5335 | [
"bug"
] | AntoninDvl | 4 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 179 | asyncio.run() cannot be called from a running event loop | Hi there,
I'm trying to get SmartScraperGraph running with FastAPI.
```
@app.post("/crawl")
async def crawl(request: Request):
data = await request.json()
url = data.get('url')
try:
smart_scraper_graph = SmartScraperGraph(
prompt="List me all the articles",
# also accepts a string with the already downloaded HTML code
source=url,
config=graph_config
)
result = smart_scraper_graph.run()
print(result)
# Access the URL field
return result
except Exception as e:
print(f"Error in crawl: {e}")
return None
```
Config
```
graph_config = {
"llm": {
"model": "ollama/llama3",
"temperature": 0,
"format": "json", # Ollama needs the format to be specified explicitly
"base_url": "http://localhost:11434", # set ollama URL arbitrarily
},
"embeddings": {
"model": "ollama/nomic-embed-text",
"base_url": "http://localhost:11434", # set ollama URL arbitrarilyURL
}
}
```
Error:
```
Error in crawl: asyncio.run() cannot be called from a running event loop
/Users/konrad/Documents/Projects/product-spider/apps/service/main.py:171: RuntimeWarning: coroutine 'AsyncChromiumLoader.ascrape_playwright' was never awaited
```
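One workaround that is often suggested for this class of error (an assumption here, not something confirmed for Scrapegraph-ai specifically) is to run the blocking `run()` call in a worker thread, so the `asyncio.run()` it performs internally happens outside FastAPI's already-running event loop:

```python
import asyncio

@app.post("/crawl")
async def crawl(request: Request):
    data = await request.json()
    url = data.get("url")
    smart_scraper_graph = SmartScraperGraph(
        prompt="List me all the articles",
        source=url,
        config=graph_config,
    )
    # Run the synchronous graph in a thread so it can create its own event loop.
    result = await asyncio.to_thread(smart_scraper_graph.run)
    return result
```

(`asyncio.to_thread` needs Python 3.9+; on older versions `loop.run_in_executor(None, smart_scraper_graph.run)` is the equivalent.)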
Any idea? Thanks | closed | 2024-05-08T14:03:07Z | 2024-08-22T09:21:53Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/179 | [
"bug"
] | kkarkos | 17 |
onnx/onnx | scikit-learn | 6,018 | When will outstanding security vulnerabilities be fixed? | # Ask a Question
### Question
When are you planning to release the next version of Onnx so that the outstanding security vulnerabilities are patched?
### Further information
- https://vuldb.com/?id.254653 (fixed by https://github.com/onnx/onnx/commit/66b7fb630903fdcf3e83b6b6d56d82e904264a20)
- https://vuldb.com/?id.254654 (fixed by https://github.com/onnx/onnx/commit/08a399ba75a805b7813ab8936b91d0e274b08287)
- Is this issue related to a specific model?
No
| closed | 2024-03-12T12:32:04Z | 2024-03-28T07:58:37Z | https://github.com/onnx/onnx/issues/6018 | [
"question"
] | micahstairs | 2 |
ckan/ckan | api | 7,876 | The documentation should be updated to reflect the actual statistics. |
The current [2.10.x documentation](https://docs.ckan.org/en/2.10/maintaining/tracking.html) indicates that it will provide the following stats:
- Sort datasets by popularity
- Highlight popular datasets and resources
- Show view counts next to datasets and resources
- Show a list of the most popular datasets
- Export page-view data to a CSV file
but instead, the actual provided stats are:
- Total Number of Datasets
- Dataset Revisions per Week
- Most Edited Datasets
- Largest Groups
- Top Tags
- Users Creating Most Datasets
Also, on v2.9.x it provides the following stats:
- Top rated datasets
- Largest group
- Top Tags
- Users who created most datasets. | closed | 2023-10-26T06:53:51Z | 2023-10-26T11:21:42Z | https://github.com/ckan/ckan/issues/7876 | [] | sagargg | 2 |
ray-project/ray | deep-learning | 51,154 | [Serve] On kuberay, vLLM-0.7.2 reports "No CUDA GPUs are available" while vllm-0.6.6.post1 works fine when deploy rayservice | ### What happened + What you expected to happen
### Description
When deploying the Qwen2.5-0.5B model using KubeRay with vLLM 0.7.2, I encounter a "RuntimeError: No CUDA GPUs are available" error. However, the same deployment works fine with vLLM 0.6.6.post1 under identical environment conditions.
### Environment Information
- Container Image: rayproject/ray:2.43.0-py39-cu124
- vLLM:
- Failed version: 0.7.2
- Working version: 0.6.6.post1
- Model: Qwen2.5-0.5B
### Steps to Reproduce
Using KubeRay to deploy a `RayService` with the image `rayproject/ray:2.43.0-py39-cu124`; the RayService manifest is:
```yaml
apiVersion: ray.io/v1
kind: RayService
metadata:
name: qwen2005-0005b-vllm07
spec:
serveConfigV2: |
applications:
- name: llm
route_prefix: /
import_path: latest-serve:model
deployments:
- name: VLLMDeployment
num_replicas: 1
ray_actor_options:
num_cpus: 4
runtime_env:
working_dir: "https://xxx/vllm_script.zip"
pip:
- "vllm==0.7.2"
env_vars:
MODEL_ID: "Qwen/Qwen2.5-0.5B"
TENSOR_PARALLELISM: "1"
PIPELINE_PARALLELISM: "1"
rayClusterConfig:
headGroupSpec:
rayStartParams:
dashboard-host: '0.0.0.0'
template:
spec:
containers:
- name: ray-head
image: rayproject/ray:2.43.0-py39-cu124
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: "8"
memory: "16Gi"
requests:
cpu: "2"
memory: "4Gi"
ports:
- containerPort: 6379
name: gcs-server
- containerPort: 8265
name: dashboard
- containerPort: 10001
name: client
- containerPort: 8000
name: serve
env:
- name: HUGGING_FACE_HUB_TOKEN
valueFrom:
secretKeyRef:
name: hf-secret
key: hf_api_token
workerGroupSpecs:
- replicas: 1
minReplicas: 1
maxReplicas: 2
groupName: gpu-group
rayStartParams: {}
template:
spec:
containers:
- name: llm
image: rayproject/ray:2.43.0-py39-cu124
imagePullPolicy: IfNotPresent
env:
- name: HUGGING_FACE_HUB_TOKEN
valueFrom:
secretKeyRef:
name: hf-secret
key: hf_api_token
resources:
limits:
cpu: "8"
memory: "16Gi"
nvidia.com/gpu: "1"
requests:
cpu: "4"
memory: "8Gi"
nvidia.com/gpu: "1"
tolerations:
- key: "nvidia.com/gpu"
operator: "Exists"
effect: "NoSchedule"
```
and the `latest-serve.py` in `https://xxx/vllm_script.zip` is from: https://github.com/ray-project/ray/blob/master/doc/source/serve/doc_code/vllm_openai_example.py
The exception traceback:
```
[36mray::ServeReplica:llm:VLLMDeployment.initialize_and_get_metadata()[39m (pid=1886, ip=10.58.29.125, actor_id=c3a99f2865a8a727c40545aa01000000, repr=<ray.serve._private.replica.ServeReplica:llm:VLLMDeployment object at 0x7f9966d21550>)
File "/home/ray/anaconda3/lib/python3.9/concurrent/futures/_base.py", line 446, in result
return self.__get_result()
File "/home/ray/anaconda3/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/serve/_private/replica.py", line 965, in initialize_and_get_metadata
await self._replica_impl.initialize(deployment_config)
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/serve/_private/replica.py", line 694, in initialize
raise RuntimeError(traceback.format_exc()) from None
RuntimeError: Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/serve/_private/replica.py", line 671, in initialize
self._user_callable_asgi_app = await asyncio.wrap_future(
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/serve/_private/replica.py", line 1363, in initialize_callable
await self._call_func_or_gen(
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/serve/_private/replica.py", line 1324, in _call_func_or_gen
result = callable(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/serve/api.py", line 221, in __init__
cls.__init__(self, *args, **kwargs)
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/working_dir_files/https_aistudio-ant-mpc_oss-cn-zhangjiakou_aliyuncs_com_pengtuo_kuberay_vllm_script/latest-serve.py", line 57, in __init__
self.engine = AsyncLLMEngine.from_engine_args(engine_args)
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 644, in from_engine_args
engine = cls(
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 594, in __init__
self.engine = self._engine_class(*args, **kwargs)
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 267, in __init__
super().__init__(*args, **kwargs)
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/vllm/engine/llm_engine.py", line 273, in __init__
self.model_executor = executor_class(vllm_config=vllm_config, )
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/vllm/executor/executor_base.py", line 51, in __init__
self._init_executor()
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/vllm/executor/uniproc_executor.py", line 41, in _init_executor
self.collective_rpc("init_device")
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/vllm/executor/uniproc_executor.py", line 51, in collective_rpc
answer = run_method(self.driver_worker, method, args, kwargs)
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/vllm/utils.py", line 2220, in run_method
return func(*args, **kwargs)
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/vllm/worker/worker.py", line 155, in init_device
torch.cuda.set_device(self.device)
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/torch/cuda/__init__.py", line 478, in set_device
torch._C._cuda_setDevice(device)
File "/tmp/ray/session_2025-03-06_22-57-24_631998_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
```
### Related issue
I've been searching for solutions and found two issues that match my symptoms, but the solutions provided in those issues don't work in my case:
https://github.com/vllm-project/vllm/issues/6896
https://github.com/ray-project/ray/issues/50275
### Versions / Dependencies
```
Ray image: rayproject/ray:2.43.0-py39-cu124
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.9.21 | packaged by conda-forge | (main, Dec 5 2024, 13:51:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.10.134-18.al8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10
Nvidia driver version: 550.144.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.2.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.49.0
[pip3] triton==3.1.0
NVIDIA_VISIBLE_DEVICES=0
NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536
NCCL_VERSION=2.21.5-1
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NVIDIA_PRODUCT_NAME=CUDA
CUDA_VERSION=12.4.1
LD_LIBRARY_PATH=/tmp/ray/session_2025-03-06_04-45-27_752822_1/runtime_resources/pip/a425849cda8f3a2d8bc88454de4cdc8455c376c1/virtualenv/lib/python3.9/site-packages/cv2/../../lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
### Reproduction script
the vLLM deployment code:
```python
import os
from typing import Dict, Optional, List
import logging
from fastapi import FastAPI
from starlette.requests import Request
from starlette.responses import StreamingResponse, JSONResponse
from ray import serve
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine
from vllm.entrypoints.openai.cli_args import make_arg_parser
from vllm.entrypoints.openai.protocol import (
ChatCompletionRequest,
ChatCompletionResponse,
ErrorResponse,
)
from vllm.entrypoints.openai.serving_chat import OpenAIServingChat
from vllm.entrypoints.openai.serving_models import (
BaseModelPath,
LoRAModulePath,
PromptAdapterPath,
OpenAIServingModels,
)
from vllm.utils import FlexibleArgumentParser
from vllm.entrypoints.logger import RequestLogger
logger = logging.getLogger("ray.serve")
app = FastAPI()
@serve.deployment(name="VLLMDeployment")
@serve.ingress(app)
class VLLMDeployment:
def __init__(
self,
engine_args: AsyncEngineArgs,
response_role: str,
lora_modules: Optional[List[LoRAModulePath]] = None,
prompt_adapters: Optional[List[PromptAdapterPath]] = None,
request_logger: Optional[RequestLogger] = None,
chat_template: Optional[str] = None,
):
logger.info(f"Starting with engine args: {engine_args}")
self.openai_serving_chat = None
self.engine_args = engine_args
self.response_role = response_role
self.lora_modules = lora_modules
self.prompt_adapters = prompt_adapters
self.request_logger = request_logger
self.chat_template = chat_template
self.engine = AsyncLLMEngine.from_engine_args(engine_args)
@app.post("/v1/chat/completions")
async def create_chat_completion(self, request: ChatCompletionRequest, raw_request: Request):
if not self.openai_serving_chat:
model_config = await self.engine.get_model_config()
models = OpenAIServingModels(
self.engine,
model_config,
[
BaseModelPath(
name=self.engine_args.model, model_path=self.engine_args.model
)
],
lora_modules=self.lora_modules,
prompt_adapters=self.prompt_adapters,
)
self.openai_serving_chat = OpenAIServingChat(
self.engine,
model_config,
models,
self.response_role,
request_logger=self.request_logger,
chat_template=self.chat_template,
chat_template_content_format="auto",
)
logger.info(f"Request: {request}")
generator = await self.openai_serving_chat.create_chat_completion(
request, raw_request
)
if isinstance(generator, ErrorResponse):
return JSONResponse(
content=generator.model_dump(), status_code=generator.code
)
if request.stream:
return StreamingResponse(content=generator, media_type="text/event-stream")
else:
assert isinstance(generator, ChatCompletionResponse)
return JSONResponse(content=generator.model_dump())
def parse_vllm_args(cli_args: Dict[str, str]):
arg_parser = FlexibleArgumentParser(
description="vLLM OpenAI-Compatible RESTful API server."
)
parser = make_arg_parser(arg_parser)
arg_strings = []
for key, value in cli_args.items():
arg_strings.extend([f"--{key}", str(value)])
logger.info(arg_strings)
parsed_args = parser.parse_args(args=arg_strings)
return parsed_args
# serve run latest-serve:build_app model="Qwen/Qwen2.5-0.5B" tensor-parallel-size=1 accelerator="GPU"
def build_app(cli_args: Dict[str, str]) -> serve.Application:
logger.info("*" * 100)
if "accelerator" in cli_args.keys():
accelerator = cli_args.pop("accelerator")
else:
accelerator = "GPU"
parsed_args = parse_vllm_args(cli_args)
engine_args = AsyncEngineArgs.from_cli_args(parsed_args)
engine_args.worker_use_ray = True
tp = engine_args.tensor_parallel_size
logger.info(f"Tensor parallelism = {tp}")
pg_resources = []
pg_resources.append({"CPU": 4}) # for the deployment replica
for i in range(tp):
pg_resources.append({"CPU": 2, accelerator: 1}) # for the vLLM actors
return VLLMDeployment.options(
placement_group_bundles=pg_resources, placement_group_strategy="SPREAD"
).bind(
engine_args,
parsed_args.response_role,
parsed_args.lora_modules,
parsed_args.prompt_adapters,
cli_args.get("request_logger"),
parsed_args.chat_template,
)
model = build_app({
"model": os.environ['MODEL_ID'],
"port": "8080",
"tensor-parallel-size": os.environ['TENSOR_PARALLELISM'],
"pipeline-parallel-size": os.environ['PIPELINE_PARALLELISM'],
"max-model-len": os.environ['MODEL_LEN'],
"gpu-memory-utilization": os.environ['GPU_MEMORY_UTILIZATION'],
"dtype": os.environ['DTYPE'],
"kv-cache-dtype": os.environ['KV_CACHE_DTYPE']
})
```
### Issue Severity
High: It blocks me from completing my task. | open | 2025-03-07T07:28:52Z | 2025-03-11T12:08:05Z | https://github.com/ray-project/ray/issues/51154 | [
"bug",
"triage",
"serve"
] | pteric | 6 |
suitenumerique/docs | django | 437 | Switch from Web Sockets to Long polling | ## Bug Report
**Problematic behavior**
Websockets are not compatible with internal networks and security configuration.
**Expected behavior/code**
Replace websocket with Long Polling.
Which means we need to make changes to hoccus poccus and Y.js | closed | 2024-11-20T18:33:06Z | 2025-01-31T15:11:47Z | https://github.com/suitenumerique/docs/issues/437 | [
"collaboration"
] | virgile-dev | 1 |
MaartenGr/BERTopic | nlp | 2,288 | Changing number of topics | ### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Describe the bug
I have been running BERTopic on a dataset in the Chinese language. Every time I run the model, I get either two topics or around 315 topics. I'm not sure what is causing this drastic change or which result is acceptable.
### Reproduction
```python
import torch
import umap
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
sentence_model = SentenceTransformer('paraphrase-multilingual-MiniLM-L12-v2', device=device)
umap_model = umap.UMAP(n_neighbors=15, n_components=5, metric='cosine')
topic_model = BERTopic(embedding_model=sentence_model, umap_model=umap_model, language="multilingual")
topics, _ = topic_model.fit_transform(texts)
topic_model.get_topic_info()
# either two or around 315 topics
```
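One plausible source of the run-to-run variation (an assumption based on UMAP being stochastic, not a confirmed diagnosis for this dataset) is that no `random_state` is fixed; a hedged sketch of a reproducible configuration:

```python
umap_model = umap.UMAP(
    n_neighbors=15,
    n_components=5,
    metric='cosine',
    random_state=42,  # fix the seed so repeated runs produce the same topic count
)
topic_model = BERTopic(embedding_model=sentence_model, umap_model=umap_model, language="multilingual")
```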
### BERTopic Version
0.16.4 | open | 2025-02-14T22:26:35Z | 2025-02-16T07:26:30Z | https://github.com/MaartenGr/BERTopic/issues/2288 | [
"bug"
] | mitramir55 | 1 |
praw-dev/praw | api | 1,853 | Missing return signatures for mixin methods | ### Describe the Bug
The mixin methods do not have type signatures for what they return.
### Desired Result
They should have type signatures.
### Relevant Logs
_No response_
### Code to reproduce the bug
_No response_
### My code example does not include the `Reddit()` initialization to prevent credential leakage.
Yes
### This code has previously worked as intended.
Yes
### Operating System/Environment
MacOS Big Sur 11.4
### Python Version
Python 3.9
### PRAW Version
7.5.0
### Prawcore Version
2.3.0
### Anything else?
_No response_ | closed | 2022-02-23T20:27:27Z | 2022-05-21T18:53:22Z | https://github.com/praw-dev/praw/issues/1853 | [
"Feature",
"Stale"
] | PythonCoderAS | 7 |
InstaPy/InstaPy | automation | 6,528 | Instapy is not commenting posts? | Hello - i run the following code and likeing the post works fine -
But even though I set
`session.set_do_comment(True, percentage=100)`
to 100 percent, it doesn't make any comments on the posts.
This is my full code:
```
from instapy import InstaPy
session = InstaPy(username=INSTA_USER,
password=INSTA_PW,
headless_browser= True)
session.login()
session.set_comments(listComments)
session.like_by_tags(listTags, amount=likeCount)
session.set_dont_like(listNotTags)
session.set_do_follow(True, percentage=100)
session.set_do_comment(True, percentage=100)
session.end()
```
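A possible explanation (an assumption based on how the quickstart examples order these calls, not a confirmed diagnosis): `like_by_tags` appears to act on whatever settings are active at the moment it is called, so the comment settings may need to be configured before it runs. A hedged re-ordering sketch using the same variables as above:

```python
session.login()
session.set_comments(listComments)
session.set_do_comment(True, percentage=100)
session.set_do_follow(True, percentage=100)
session.set_dont_like(listNotTags)
session.like_by_tags(listTags, amount=likeCount)  # actions run after the settings are in place
session.end()
```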
And this is the final report, where I can see that no comment happened:
```
INFO [2022-03-01 22:01:01] [aunikat_wien] --> Image already liked!
INFO [2022-03-01 22:01:01] [aunikat_wien] Tag: b'nieder\xc3\xb6sterreich'
INFO [2022-03-01 22:01:01] [aunikat_wien] Liked: 14
INFO [2022-03-01 22:01:01] [aunikat_wien] Already Liked: 5
INFO [2022-03-01 22:01:01] [aunikat_wien] Commented: 0
INFO [2022-03-01 22:01:01] [aunikat_wien] Followed: 0
INFO [2022-03-01 22:01:01] [aunikat_wien] Inappropriate: 1
INFO [2022-03-01 22:01:01] [aunikat_wien] Not valid users: 0
INFO [2022-03-01 22:01:05] [aunikat_wien] Sessional Live Report:
|> LIKED 14 images | ALREADY LIKED: 5
|> COMMENTED on 0 images
|> FOLLOWED 0 users | ALREADY FOLLOWED: 0
|> UNFOLLOWED 0 users
|> LIKED 0 comments
|> REPLIED to 0 comments
|> INAPPROPRIATE images: 1
|> NOT VALID users: 0
|> WATCHED 0 story(ies) | WATCHED 0 reel(s)
On session start was FOLLOWING 1 users & had 8 FOLLOWERS
[Session lasted 16.44 minutes]
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-03-01 22:01:05] [aunikat_wien] Session ended!
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
``` | open | 2022-03-01T21:08:21Z | 2022-05-13T13:22:44Z | https://github.com/InstaPy/InstaPy/issues/6528 | [] | Rapid1898-code | 2 |
scikit-hep/awkward | numpy | 2,376 | `ak.to_list` has stopped working on some edge cases between awkward 1.8.0 and 2.1.1 | ### Version of Awkward Array
2.1.1
### Description and code to reproduce
So, this is a somewhat silly edge case, but I'd rather not write exceptions for it.
In awkward 1.8.0 I could do;
```python3
import awkward as ak
assert ak.__version__ == '1.8.0'
ak.to_list(['mystring'])
```
Sometimes I want a function to accept any list-like object, either an awkward array, a numpy array or a list, so it's convenient if `ak.to_list` works the same on all of those.
However, now in awkward 2.1.1
```python3
import awkward as ak
assert ak.__version__ == '2.1.1'
ak.to_list(['mystring'])
```
leads to
```
[... skipping similar frames: <listcomp> at line 80 (1487 times), _impl at line 80 (1487 times)]
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/operations/ak_to_list.py:80, in _impl(array)
77 return {k: _impl(v) for k, v in array.items()}
79 elif isinstance(array, Iterable):
---> 80 return [_impl(x) for x in array]
82 else:
83 return array
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/operations/ak_to_list.py:80, in <listcomp>(.0)
77 return {k: _impl(v) for k, v in array.items()}
79 elif isinstance(array, Iterable):
---> 80 return [_impl(x) for x in array]
82 else:
83 return array
File ~/Programs/anaconda3/envs/tree2/lib/python3.11/site-packages/awkward/operations/ak_to_list.py:49, in _impl(array)
48 def _impl(array):
---> 49 if isinstance(
50 array,
51 (
52 ak.highlevel.Array,
53 ak.highlevel.Record,
54 ak.highlevel.ArrayBuilder,
55 ),
56 ):
57 return array.to_list()
59 elif isinstance(array, (ak.contents.Content, ak.record.Record)):
File <frozen abc>:119, in __instancecheck__(cls, instance)
RecursionError: maximum recursion depth exceeded in comparison
```
The problem is almost certainly that a char (string length 1) is still iterable.
If it is considered reasonable to handle such a silly edge case, I will make a pull request?
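A minimal sketch of the kind of guard being proposed (illustrative only, not the actual `ak.to_list` source):

```python
from collections.abc import Iterable

def _impl(array):
    # str/bytes must be returned as-is: each character of a str is itself an
    # iterable str, so the generic Iterable branch would recurse on it forever
    if isinstance(array, (str, bytes)):
        return array
    elif isinstance(array, Iterable):
        return [_impl(x) for x in array]
    else:
        return array
```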
| closed | 2023-04-08T11:41:40Z | 2023-07-02T17:08:23Z | https://github.com/scikit-hep/awkward/issues/2376 | [
"bug"
] | HenryDayHall | 2 |
quasarstream/python-ffmpeg-video-streaming | dash | 37 | Sites that require a login | Hello Amin
Thanks for the article
I want to download a video from a site that, after logging in, has access to its HLS videos
How do I do the login steps in FFmpeg? | closed | 2020-09-17T05:02:37Z | 2020-09-17T18:20:05Z | https://github.com/quasarstream/python-ffmpeg-video-streaming/issues/37 | [] | alefjim | 1 |
jeffknupp/sandman2 | rest-api | 221 | Where is the package? | Couldn't locate the package
Air ~ % pip install sandman2
Collecting sandman2
Could not fetch URL https://pypi.python.org/simple/sandman2/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590) - skipping
Could not find a version that satisfies the requirement sandman2 (from versions: )
No matching distribution found for sandman2 | open | 2021-06-18T09:45:25Z | 2021-06-18T09:45:25Z | https://github.com/jeffknupp/sandman2/issues/221 | [] | jai2033shankar | 0 |
huggingface/datasets | deep-learning | 6,869 | Download is broken for dict of dicts: FileNotFoundError | It seems there is a bug when downloading a dict of dicts of URLs introduced by:
- #6794
## Steps to reproduce the bug:
```python
from datasets import DownloadManager
dl_manager = DownloadManager()
paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}})
```
Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-7-0e0d76d25b09> in <module>
----> 1 paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}})
.../huggingface/datasets/src/datasets/download/download_manager.py in download(self, url_or_urls)
255 start_time = datetime.now()
256 with stack_multiprocessing_download_progress_bars():
--> 257 downloaded_path_or_paths = map_nested(
258 download_func,
259 url_or_urls,
.../huggingface/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc)
506 batch_size = max(len(iterable) // num_proc + int(len(iterable) % num_proc > 0), 1)
507 iterable = list(iter_batched(iterable, batch_size))
--> 508 mapped = [
509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
.../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0)
507 iterable = list(iter_batched(iterable, batch_size))
508 mapped = [
--> 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
511 ]
.../huggingface/datasets/src/datasets/utils/py_utils.py in _single_map_nested(args)
375 and all(not isinstance(v, types) for v in data_struct)
376 ):
--> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
378
379 # Reduce logging to keep things readable in multiprocessing with tqdm
.../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0)
375 and all(not isinstance(v, types) for v in data_struct)
376 ):
--> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
378
379 # Reduce logging to keep things readable in multiprocessing with tqdm
.../huggingface/datasets/src/datasets/download/download_manager.py in _download_batched(self, url_or_filenames, download_config)
311 )
312 else:
--> 313 return [
314 self._download_single(url_or_filename, download_config=download_config)
315 for url_or_filename in url_or_filenames
.../huggingface/datasets/src/datasets/download/download_manager.py in <listcomp>(.0)
312 else:
313 return [
--> 314 self._download_single(url_or_filename, download_config=download_config)
315 for url_or_filename in url_or_filenames
316 ]
.../huggingface/datasets/src/datasets/download/download_manager.py in _download_single(self, url_or_filename, download_config)
321 # append the relative path to the base_path
322 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 323 out = cached_path(url_or_filename, download_config=download_config)
324 out = tracked_str(out)
325 out.set_origin(url_or_filename)
.../huggingface/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
220 elif is_local_path(url_or_filename):
221 # File, but it doesn't exist.
--> 222 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
223 else:
224 # Something unknown
FileNotFoundError: Local file .../huggingface/datasets/{'frr': 'hf:/datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet'} doesn't exist
```
Related to:
- #6850
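Until this is fixed, a possible workaround (my assumption, not verified, is that the non-nested dict path is unaffected) is to download one nesting level at a time:
```python
from datasets import DownloadManager

dl_manager = DownloadManager()
nested_urls = {
    "train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}
}

# Call download() once per inner mapping instead of passing the dict of dicts.
paths = {split: dl_manager.download(urls) for split, urls in nested_urls.items()}
```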
| closed | 2024-05-06T05:13:36Z | 2024-05-06T09:25:53Z | https://github.com/huggingface/datasets/issues/6869 | [
"bug"
] | albertvillanova | 0 |
benbusby/whoogle-search | flask | 538 | [BUG] Incorrect initial forwarding session | **Describe the bug**
It seems that there is something wrong with the Nov 18th release concerning the session forwarding. The first time one starts a search, the request gets forwarded to a URL that can't be interpreted. I've configured things in such a way that Whoogle is exposed on port 443 on my NAS and then reverse proxied to port 5005 internally.
On the first request, a session URL seems to be generated that can't be reverse proxied. All attempts after that work fine, except for the images, where not all results are shown.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker on Synology NAS
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: all
- Browser FF
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| closed | 2021-11-19T06:46:19Z | 2021-12-20T06:51:28Z | https://github.com/benbusby/whoogle-search/issues/538 | [
"bug"
] | eddydc | 8 |
google/trax | numpy | 1,183 | Reformer imagenet64 gin config - dataset loading failure | ### Description
The Imagenet64 dataset from tensor2tensor used in this gin config:
https://github.com/google/trax/blob/master/trax/supervised/configs/reformer_imagenet64.gin
seems to have some loading issues. I tried to run this config on Google Colab:
https://colab.research.google.com/drive/1ysEQYOaIspHPBVu6S9jOxc7BkE2oDrh0
and ran into:
`tensorflow.python.framework.errors_impl.NotFoundError: /root/tensorflow_datasets/download/train_64x64; No such file or directory` (more detailed stack trace provided below).
For reference, gin configs that use different datasets from t2t, like this most recent one:
https://github.com/google/trax/blob/master/trax/supervised/configs/transformer_lm_cnndailymail.gin
worked correctly in the same colab. Trying a different gin config with imagenet224, also from t2t,
failed in a similar way to this imagenet64 one.
Is this a known issue?
### Environment information
```
OS: google colab
$ pip freeze | grep trax
trax==1.3.6
$ pip freeze | grep tensor
mesh-tensorflow==0.1.17
tensor2tensor==1.15.7
tensorboard==2.3.0
tensorboard-plugin-wit==1.7.0
tensorboardcolab==0.0.22
tensorflow==2.3.0
tensorflow-addons==0.8.3
tensorflow-datasets==4.0.1
tensorflow-estimator==2.3.0
tensorflow-gan==2.0.0
tensorflow-gcs-config==2.3.0
tensorflow-hub==0.9.0
tensorflow-metadata==0.24.0
tensorflow-privacy==0.2.2
tensorflow-probability==0.7.0
tensorflow-text==2.3.0
$ pip freeze | grep jax
jax==0.2.4
jaxlib==0.1.56+cuda101
$ python -V
Python 3.6.9
```
### For bugs: reproduction and error logs
```
# Steps to reproduce (also available in attached colab):
...
python -m trax.trainer --config_file='reformer_imagenet64.gin'
```
```
# Error logs:
...
2020-11-03 08:45:54.260503: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
I1103 08:45:54.260992 140307286194048 trainer_lib.py:733] No --output_dir specified
No --output_dir specified
I1103 08:45:54.261220 140307286194048 trainer_lib.py:733] Using default output_dir: /root/trax/ReformerLM_t2t_image_imagenet64_gen_flat_rev_20201103_0845
Using default output_dir: /root/trax/ReformerLM_t2t_image_imagenet64_gen_flat_rev_20201103_0845
2020-11-03 08:45:54.313886: E external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
W1103 08:45:54.327635 140307286194048 xla_bridge.py:131] No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
I1103 08:45:54.328048 140307286194048 tf_inputs.py:958] No dataset directory provided. Downloading and generating dataset for t2t_image_imagenet64_gen_flat_rev inside data directory /root/tensorflow_datasets/ For large datasets it is better to prepare datasets manually!
I1103 08:45:55.534647 140307286194048 common_layers.py:57] Running in V2 mode, using Keras layers.
I1103 08:45:57.490262 140307286194048 gym_utils.py:358] Entry Point [tensor2tensor.envs.tic_tac_toe_env:TicTacToeEnv] registered with id [T2TEnv-TicTacToeEnv-v0]
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/trax/trainer.py", line 171, in <module>
app.run(main)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "/usr/local/lib/python3.6/dist-packages/trax/trainer.py", line 165, in main
trainer_lib.train(output_dir=output_dir)
File "/usr/local/lib/python3.6/dist-packages/gin/config.py", line 1078, in gin_wrapper
utils.augment_exception_message_and_reraise(e, err_str)
File "/usr/local/lib/python3.6/dist-packages/gin/utils.py", line 49, in augment_exception_message_and_reraise
six.raise_from(proxy.with_traceback(exception.__traceback__), None)
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.6/dist-packages/gin/config.py", line 1055, in gin_wrapper
return fn(*new_args, **new_kwargs)
File "/usr/local/lib/python3.6/dist-packages/trax/supervised/trainer_lib.py", line 561, in train
inputs = inputs()
File "/usr/local/lib/python3.6/dist-packages/gin/config.py", line 1078, in gin_wrapper
utils.augment_exception_message_and_reraise(e, err_str)
File "/usr/local/lib/python3.6/dist-packages/gin/utils.py", line 49, in augment_exception_message_and_reraise
six.raise_from(proxy.with_traceback(exception.__traceback__), None)
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.6/dist-packages/gin/config.py", line 1055, in gin_wrapper
return fn(*new_args, **new_kwargs)
File "/usr/local/lib/python3.6/dist-packages/trax/data/inputs.py", line 538, in batcher
train_stream, eval_stream = data_streams()
File "/usr/local/lib/python3.6/dist-packages/gin/config.py", line 1078, in gin_wrapper
utils.augment_exception_message_and_reraise(e, err_str)
File "/usr/local/lib/python3.6/dist-packages/gin/utils.py", line 49, in augment_exception_message_and_reraise
six.raise_from(proxy.with_traceback(exception.__traceback__), None)
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.6/dist-packages/gin/config.py", line 1055, in gin_wrapper
return fn(*new_args, **new_kwargs)
File "/usr/local/lib/python3.6/dist-packages/trax/data/tf_inputs.py", line 80, in data_streams
data_dir = download_and_prepare(dataset_name, data_dir)
File "/usr/local/lib/python3.6/dist-packages/trax/data/tf_inputs.py", line 965, in download_and_prepare
dataset_name[len('t2t_'):]).generate_data(data_dir, dl_dir)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/data_generators/imagenet.py", line 271, in generate_data
self.dev_filepaths(data_dir, self.dev_shards, shuffled=True))
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/data_generators/generator_utils.py", line 500, in generate_dataset_and_shuffle
generate_files(train_gen, train_paths)
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/data_generators/generator_utils.py", line 174, in generate_files
for case in generator:
File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/data_generators/imagenet.py", line 85, in imagenet_pixelrnn_generator
image_files = tf.gfile.Glob(images_filepath + "/*")
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/lib/io/file_io.py", line 350, in get_matching_files
return get_matching_files_v2(filename)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/lib/io/file_io.py", line 409, in get_matching_files_v2
compat.as_bytes(pattern))
tensorflow.python.framework.errors_impl.NotFoundError: /root/tensorflow_datasets/download/train_64x64; No such file or directory
In call to configurable 'data_streams' (<function data_streams at 0x7f9b1f88fd90>)
In call to configurable 'batcher' (<function batcher at 0x7f9b7388ee18>)
In call to configurable 'train' (<function train at 0x7f9b1f60abf8>)
```
| open | 2020-11-03T09:19:41Z | 2020-11-03T21:34:48Z | https://github.com/google/trax/issues/1183 | [] | syzymon | 1 |
waditu/tushare | pandas | 800 | fund_daily returned wrong data | df = pro.fund_daily(trade_date='20080102')
df.loc[df['ts_code']=='150001!1.SZ']
The commands above returned the following data:
20080102 1.408 ... 0.5722 551447.01 77407.7552
The ts_code value is wrong.
ID:13632771963
| closed | 2018-11-04T12:38:58Z | 2018-11-05T03:17:11Z | https://github.com/waditu/tushare/issues/800 | [] | chenyizhe | 1 |
Sanster/IOPaint | pytorch | 585 | [BUG] Critical CVE File Overwrite | **Model**
Which model are you using?
**Describe the bug**
A clear and concise description of what the bug is.
CVE type: File Overwrite
URL: http://localhost:8080/api/v1/save_image
Poc: curl -X POST "http://localhost:8080/api/v1/save_image" -F "file=@file.mp4;filename=../../etc/passwd;"
The problematic code is here:
```py
def api_save_image(self, file: UploadFile):
filename = file.filename
origin_image_bytes = file.file.read()
with open(self.config.output_dir / filename, "wb") as fw:
fw.write(origin_image_bytes)
```
There is no check on `filename`, so it is fully user-controlled. Normally we can't name our files like "../../something", but that restriction doesn't apply when the request is sent with curl.
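A minimal sketch of the kind of check that would close the hole (my assumptions: the handler is a FastAPI endpoint, as the `UploadFile` hint suggests, and rejecting with an `HTTPException` is acceptable):
```py
from pathlib import Path

from fastapi import HTTPException, UploadFile

def api_save_image(self, file: UploadFile):
    # Keep only the final path component, so "../../etc/passwd" becomes "passwd".
    filename = Path(file.filename).name
    target = (Path(self.config.output_dir) / filename).resolve()
    # Refuse anything that still resolves outside the configured output dir.
    if Path(self.config.output_dir).resolve() not in target.parents:
        raise HTTPException(status_code=400, detail="invalid filename")
    origin_image_bytes = file.file.read()
    with open(target, "wb") as fw:
        fw.write(origin_image_bytes)
```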
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System Info**
Software version used
- iopaint: 1.4.2
- pytorch:
- CUDA:
| closed | 2024-10-21T23:44:24Z | 2024-10-22T09:42:54Z | https://github.com/Sanster/IOPaint/issues/585 | [] | caerry | 1 |
unionai-oss/pandera | pandas | 1,121 | version 0.14.x breaks mypy compatibility | **Describe the bug**
It seems that versions 0.14.x break compatibility with `mypy`.
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandera.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandera.
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
`pyproject.toml`
```toml
[tool.mypy]
plugins = ["pandera.mypy"]
```
`test.py`
```python
import pandas
import pandera
```
```bash
pip install mypy pandera[mypy]==0.14.3 && mypy test.py
```
leads to
```bash
[...]/lib/python3.10/site-packages/pandera/api/pandas/model.py:563: error: INTERNAL ERROR -- Please try using mypy master on GitHub:
https://mypy.readthedocs.io/en/stable/common_issues.html#using-a-development-mypy-build
If this issue continues with mypy master, please report a bug at https://github.com/python/mypy/issues
version: 1.1.1
[...]/lib/python3.10/site-packages/pandera/api/pandas/model.py:563: : note: please use --show-traceback to print a traceback when reporting a bug
```
but
```bash
pip install mypy pandera[mypy]==0.13.4 && mypy test.py
```
is OK
```bash
Success: no issues found in 1 source file
```
#### Expected behavior
Consistent behaviour with version `0.13.4`.
#### Desktop (please complete the following information):
- python version 3.10.9
- conda environment
```bash
# packages in environment at [...]/test-mypy:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
bzip2 1.0.8 h7f98852_4 conda-forge
ca-certificates 2022.12.7 ha878542_0 conda-forge
ld_impl_linux-64 2.40 h41732ed_0 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc-ng 12.2.0 h65d4601_19 conda-forge
libgomp 12.2.0 h65d4601_19 conda-forge
libnsl 2.0.0 h7f98852_0 conda-forge
libsqlite 3.40.0 h753d276_0 conda-forge
libuuid 2.32.1 h7f98852_1000 conda-forge
libzlib 1.2.13 h166bdaf_4 conda-forge
multimethod 1.9.1 pypi_0 pypi
mypy 1.1.1 pypi_0 pypi
mypy-extensions 1.0.0 pypi_0 pypi
ncurses 6.3 h27087fc_1 conda-forge
numpy 1.24.2 pypi_0 pypi
openssl 3.1.0 h0b41bf4_0 conda-forge
packaging 23.0 pypi_0 pypi
pandas 1.5.3 pypi_0 pypi
pandas-stubs 1.4.3.220807 pypi_0 pypi
pandera 0.14.3 pypi_0 pypi
pip 23.0.1 pyhd8ed1ab_0 conda-forge
pydantic 1.10.6 pypi_0 pypi
python 3.10.9 he550d4f_0_cpython conda-forge
python-dateutil 2.8.2 pypi_0 pypi
pytz 2022.7.1 pypi_0 pypi
readline 8.1.2 h0f457ee_0 conda-forge
setuptools 67.6.0 pyhd8ed1ab_0 conda-forge
six 1.16.0 pypi_0 pypi
tk 8.6.12 h27826a3_0 conda-forge
tomli 2.0.1 pypi_0 pypi
types-pytz 2022.7.1.2 pypi_0 pypi
typing-extensions 4.5.0 pypi_0 pypi
typing-inspect 0.8.0 pypi_0 pypi
tzdata 2022g h191b570_0 conda-forge
wheel 0.40.0 pyhd8ed1ab_0 conda-forge
wrapt 1.15.0 pypi_0 pypi
xz 5.2.6 h166bdaf_0 conda-forge
```
| closed | 2023-03-16T17:55:23Z | 2023-03-16T23:19:26Z | https://github.com/unionai-oss/pandera/issues/1121 | [
"bug"
] | PetitLepton | 12 |
0b01001001/spectree | pydantic | 67 | Standard Starlette static mount breaks openapi.json endpoint [BUG] | **Describe the bug**
Adding a static mount to the routes will result in failure to load openapi.json, with an internal server error.
**To Reproduce**
Steps to reproduce the behavior:
Given the provided Starlette example (with the api mount path fixed, since it is given incorrectly in the example):
- Add a static mount to the routes:
```
...
from starlette.staticfiles import StaticFiles
...
app = Starlette(routes=[
    Mount('/api', routes=[
        Route('/user', user_profile, methods=['POST']),
    ]),
    Mount('/static', StaticFiles(directory="static"), name='static'),
])
...
```
- navigate to apidoc/openapi.json
**Expected behavior**
Should be able to load openapi.json, redoc, and swagger, even if a static mount location is specified.
**Error Message**
```
Traceback (most recent call last):
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/uvicorn/protocols/http/h11_impl.py", line 389, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/starlette/applications.py", line 111, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/starlette/routing.py", line 566, in __call__
await route.handle(scope, receive, send)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/starlette/routing.py", line 227, in handle
await self.app(scope, receive, send)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/starlette/routing.py", line 43, in app
response = await run_in_threadpool(func, request)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool
return await loop.run_in_executor(None, func, *args)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/spectree/plugins/starlette_plugin.py", line 30, in <lambda>
lambda request: JSONResponse(self.spectree.spec),
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/spectree/spec.py", line 60, in spec
self._spec = self._generate_spec()
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/spectree/spec.py", line 150, in _generate_spec
for route in self.backend.find_routes():
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/spectree/plugins/starlette_plugin.py", line 129, in find_routes
parse_route(self.app)
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/spectree/plugins/starlette_plugin.py", line 127, in parse_route
parse_route(route, prefix=f'{prefix}{route.path}')
File "/Users/scott/.pyenv/versions/3.7.7/lib/python3.7/site-packages/spectree/plugins/starlette_plugin.py", line 102, in parse_route
for route in app.routes:
TypeError: 'NoneType' object is not iterable
```
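For what it's worth, a minimal sketch (not spectree's actual plugin code) of a route walk that tolerates such mounts — the point being that a `Mount` wrapping `StaticFiles` reports its `routes` as `None`, which is what `parse_route` trips over above:
```python
from starlette.routing import Mount, Route

def iter_routes(app, prefix=""):
    # Starlette apps and routers expose .routes; StaticFiles does not, so the
    # Mount wrapping it reports routes=None. Guard before iterating.
    for route in app.routes or []:
        if isinstance(route, Mount):
            if route.routes is None:
                continue
            yield from iter_routes(route, prefix=f"{prefix}{route.path}")
        elif isinstance(route, Route):
            yield f"{prefix}{route.path}", route
```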
**Desktop (please complete the following information):**
- OS: Mac
- Version Catalina 10.15.5
**Python Information (please complete the following information):**
- Python Version 3.7.7
- Library Version spectree==0.3.8
- Other dependencies starlette==0.13.8
**Additional context**
n/a
| closed | 2020-10-22T18:55:57Z | 2020-10-23T05:56:24Z | https://github.com/0b01001001/spectree/issues/67 | [
"bug"
] | scott2b | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 914 | import past hyperparams and results to start gp.minimize with previous info | Hello, is there a way to basically save results from gp.minimize, and then input those results the next time you want to run the hyperparameter tuning? And for multiple results too. I'm aware that you can feed gp.minimize default params. Thanks! | open | 2020-06-11T13:53:27Z | 2020-07-10T13:35:50Z | https://github.com/scikit-optimize/scikit-optimize/issues/914 | [] | svideloc | 1 |
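For reference, a sketch of one way to do this with skopt's own helpers — `skopt.dump`/`skopt.load` for persistence plus the `x0`/`y0` arguments of `gp_minimize` to feed previous evaluations back in (the objective and search space below are placeholders):
```python
from skopt import dump, gp_minimize, load

def objective(params):
    (x,) = params
    return (x - 0.3) ** 2  # placeholder objective

space = [(-2.0, 2.0)]  # placeholder search space

# First run: optimize, then persist the result to disk.
res = gp_minimize(objective, space, n_calls=20, random_state=0)
dump(res, "skopt_checkpoint.pkl", store_objective=False)

# Later run: reload and warm-start from the previously evaluated points.
prev = load("skopt_checkpoint.pkl")
res2 = gp_minimize(
    objective,
    space,
    n_calls=20,
    x0=prev.x_iters,
    y0=list(prev.func_vals),
    random_state=1,
)
```
Several saved runs could be combined by concatenating their `x_iters`/`func_vals` lists before passing them as `x0`/`y0`.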
scikit-hep/awkward | numpy | 2,822 | GPU Tests Failed | The GPU tests failed for commit e3e48744d5a0ef09068996c1c590ca9bcb25492d with the following pytest output:
```
wer case.\n'
'4. The resulting values are returned directly instead of references.\n'
'\n')
__file__ = '/opt/build-venv/lib/python3.8/site-packages/cupy_backends/cuda/api/runtime.cpython-38-x86_64-linux-gnu.so'
__loader__ = <_frozen_importlib_external.ExtensionFileLoader object at 0x7f657afcf490>
__name__ = 'cupy_backends.cuda.api.runtime'
__package__ = 'cupy_backends.cuda.api'
__pyx_capi__ = {'_deviceEnsurePeerAccess': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfe10>,
'_ensure_context': <capsule object "PyObject *(void)" at 0x7f657afdd990>,
'_is_hip_environment': <capsule object "int" at 0x7f657afcfb70>,
'check_status': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfba0>,
'createSurfaceObject': <capsule object "uintmax_t (intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddb40>,
'createTextureObject': <capsule object "uintmax_t (intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd9c0>,
'destroySurfaceObject': <capsule object "PyObject *(uintmax_t, int __pyx_skip_dispatch)" at 0x7f657afddb70>,
'destroyTextureObject': <capsule object "PyObject *(uintmax_t, int __pyx_skip_dispatch)" at 0x7f657afdd9f0>,
'deviceAttributeComputeCapabilityMajor': <capsule object "int" at 0x7f657afcfb10>,
'deviceAttributeComputeCapabilityMinor': <capsule object "int" at 0x7f657afcfb40>,
'deviceCanAccessPeer': <capsule object "int (int, int, int __pyx_skip_dispatch)" at 0x7f657afcfdb0>,
'deviceEnablePeerAccess': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfde0>,
'deviceGetAttribute': <capsule object "int (int, int, int __pyx_skip_dispatch)" at 0x7f657afcfc60>,
'deviceGetByPCIBusId': <capsule object "int (PyObject *, int __pyx_skip_dispatch)" at 0x7f657afcfc90>,
'deviceGetDefaultMemPool': <capsule object "intptr_t (int, int __pyx_skip_dispatch)" at 0x7f657afdd4b0>,
'deviceGetLimit': <capsule object "size_t (int, int __pyx_skip_dispatch)" at 0x7f657afcfe40>,
'deviceGetMemPool': <capsule object "intptr_t (int, int __pyx_skip_dispatch)" at 0x7f657afdd4e0>,
'deviceGetPCIBusId': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfcc0>,
'deviceSetLimit': <capsule object "PyObject *(int, size_t, int __pyx_skip_dispatch)" at 0x7f657afcfe70>,
'deviceSetMemPool': <capsule object "PyObject *(int, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd510>,
'deviceSynchronize': <capsule object "PyObject *(int __pyx_skip_dispatch)" at 0x7f657afcfd50>,
'driverGetVersion': <capsule object "int (int __pyx_skip_dispatch)" at 0x7f657afcfbd0>,
'errorContextIsDestroyed': <capsule object "int" at 0x7f657afcfab0>,
'errorInvalidResourceHandle': <capsule object "int" at 0x7f657afcfae0>,
'errorInvalidValue': <capsule object "int" at 0x7f657afcfa50>,
'errorMemoryAllocation': <capsule object "int" at 0x7f657afcfa20>,
'errorPeerAccessAlreadyEnabled': <capsule object "int" at 0x7f657afcfa80>,
'eventCreate': <capsule object "intptr_t (int __pyx_skip_dispatch)" at 0x7f657afdd840>,
'eventCreateWithFlags': <capsule object "intptr_t (unsigned int, int __pyx_skip_dispatch)" at 0x7f657afdd870>,
'eventDestroy': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd8a0>,
'eventElapsedTime': <capsule object "float (intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd8d0>,
'eventQuery': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd900>,
'eventRecord': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd930>,
'eventSynchronize': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd960>,
'free': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd090>,
'freeArray': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd0f0>,
'freeAsync': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd120>,
'freeHost': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd0c0>,
'getChannelDesc': <capsule object "cudaChannelFormatDesc (intptr_t)" at 0x7f657afdda20>,
'getDevice': <capsule object "int (int __pyx_skip_dispatch)" at 0x7f657afcfc30>,
'getDeviceCount': <capsule object "int (int __pyx_skip_dispatch)" at 0x7f657afcfcf0>,
'getDeviceProperties': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfd80>,
'getTextureObjectResourceDesc': <capsule object "cudaResourceDesc (uintmax_t)" at 0x7f657afdda50>,
'getTextureObjectTextureDesc': <capsule object "cudaTextureDesc (uintmax_t)" at 0x7f657afdda80>,
'graphDestroy': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddba0>,
'graphExecDestroy': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddbd0>,
'graphInstantiate': <capsule object "intptr_t (intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddc00>,
'graphLaunch': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddc30>,
'graphUpload': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddc60>,
'hostAlloc': <capsule object "intptr_t (size_t, unsigned int, int __pyx_skip_dispatch)" at 0x7f657afcffc0>,
'hostRegister': <capsule object "PyObject *(intptr_t, size_t, unsigned int, int __pyx_skip_dispatch)" at 0x7f657afdd030>,
'hostUnregister': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd060>,
'launchHostFunc': <capsule object "PyObject *(intptr_t, PyObject *, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd720>,
'make_Extent': <capsule object "cudaExtent (size_t, size_t, size_t)" at 0x7f657afddab0>,
'make_PitchedPtr': <capsule object "cudaPitchedPtr (intptr_t, size_t, size_t, size_t)" at 0x7f657afddb10>,
'make_Pos': <capsule object "cudaPos (size_t, size_t, size_t)" at 0x7f657afddae0>,
'malloc': <capsule object "intptr_t (size_t, int __pyx_skip_dispatch)" at 0x7f657afcfea0>,
'malloc3DArray': <capsule object "intptr_t (intptr_t, size_t, size_t, size_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_malloc3DArray *__pyx_optional_args)" at 0x7f657afcff00>,
'mallocArray': <capsule object "intptr_t (intptr_t, size_t, size_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_mallocArray *__pyx_optional_args)" at 0x7f657afcff30>,
'mallocAsync': <capsule object "intptr_t (size_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afcff60>,
'mallocFromPoolAsync': <capsule object "intptr_t (size_t, intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afcff90>,
'mallocManaged': <capsule object "intptr_t (size_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_mallocManaged *__pyx_optional_args)" at 0x7f657afcfed0>,
'memAdvise': <capsule object "PyObject *(intptr_t, size_t, int, int, int __pyx_skip_dispatch)" at 0x7f657afdd450>,
'memGetInfo': <capsule object "PyObject *(int __pyx_skip_dispatch)" at 0x7f657afdd150>,
'memPoolCreate': <capsule object "intptr_t (struct __pyx_obj_13cupy_backends_4cuda_3api_7runtime_MemPoolProps *, int __pyx_skip_dispatch)" at 0x7f657afdd540>,
'memPoolDestroy': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd570>,
'memPoolGetAttribute': <capsule object "PyObject *(intptr_t, int, int __pyx_skip_dispatch)" at 0x7f657afdd5d0>,
'memPoolSetAttribute': <capsule object "PyObject *(intptr_t, int, PyObject *, int __pyx_skip_dispatch)" at 0x7f657afdd600>,
'memPoolTrimTo': <capsule object "PyObject *(intptr_t, size_t, int __pyx_skip_dispatch)" at 0x7f657afdd5a0>,
'memPrefetchAsync': <capsule object "PyObject *(intptr_t, size_t, int, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd420>,
'memcpy': <capsule object "PyObject *(intptr_t, intptr_t, size_t, int, int __pyx_skip_dispatch)" at 0x7f657afdd180>,
'memcpy2D': <capsule object "PyObject *(intptr_t, size_t, intptr_t, size_t, size_t, size_t, cudaMemcpyKind, int __pyx_skip_dispatch)" at 0x7f657afdd240>,
'memcpy2DAsync': <capsule object "PyObject *(intptr_t, size_t, intptr_t, size_t, size_t, size_t, cudaMemcpyKind, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd270>,
'memcpy2DFromArray': <capsule object "PyObject *(intptr_t, size_t, intptr_t, size_t, size_t, size_t, size_t, int, int __pyx_skip_dispatch)" at 0x7f657afdd2a0>,
'memcpy2DFromArrayAsync': <capsule object "PyObject *(intptr_t, size_t, intptr_t, size_t, size_t, size_t, size_t, int, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd2d0>,
'memcpy2DToArray': <capsule object "PyObject *(intptr_t, size_t, size_t, intptr_t, size_t, size_t, size_t, int, int __pyx_skip_dispatch)" at 0x7f657afdd300>,
'memcpy2DToArrayAsync': <capsule object "PyObject *(intptr_t, size_t, size_t, intptr_t, size_t, size_t, size_t, int, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd330>,
'memcpy3D': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd360>,
'memcpy3DAsync': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd390>,
'memcpyAsync': <capsule object "PyObject *(intptr_t, intptr_t, size_t, int, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd1b0>,
'memcpyPeer': <capsule object "PyObject *(intptr_t, int, intptr_t, int, size_t, int __pyx_skip_dispatch)" at 0x7f657afdd1e0>,
'memcpyPeerAsync': <capsule object "PyObject *(intptr_t, int, intptr_t, int, size_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd210>,
'memset': <capsule object "PyObject *(intptr_t, int, size_t, int __pyx_skip_dispatch)" at 0x7f657afdd3c0>,
'memsetAsync': <capsule object "PyObject *(intptr_t, int, size_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd3f0>,
'pointerGetAttributes': <capsule object "struct __pyx_obj_13cupy_backends_4cuda_3api_7runtime_PointerAttributes *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd480>,
'runtimeGetVersion': <capsule object "int (int __pyx_skip_dispatch)" at 0x7f657afcfc00>,
'setDevice': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfd20>,
'streamAddCallback': <capsule object "PyObject *(intptr_t, PyObject *, intptr_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_streamAddCallback *__pyx_optional_args)" at 0x7f657afdd6f0>,
'streamBeginCapture': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_streamBeginCapture *__pyx_optional_args)" at 0x7f657afdd7b0>,
'streamCreate': <capsule object "intptr_t (int __pyx_skip_dispatch)" at 0x7f657afdd630>,
'streamCreateWithFlags': <capsule object "intptr_t (unsigned int, int __pyx_skip_dispatch)" at 0x7f657afdd660>,
'streamDestroy': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd690>,
'streamEndCapture': <capsule object "intptr_t (intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd7e0>,
'streamIsCapturing': <capsule object "int (intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd810>,
'streamQuery': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd750>,
'streamSynchronize': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd6c0>,
'streamWaitEvent': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_streamWaitEvent *__pyx_optional_args)" at 0x7f657afdd780>}
__pyx_unpickle_MemPoolProps = <built-in function __pyx_unpickle_MemPoolProps>
__pyx_unpickle_PointerAttributes = <built-in function __pyx_unpickle_PointerAttributes>
__pyx_unpickle__ThreadLocal = <built-in function __pyx_unpickle__ThreadLocal>
__spec__ = ModuleSpec(name='cupy_backends.cuda.api.runtime', loader=<_frozen_importlib_external.ExtensionFileLoader object at 0x7f657afcf490>, origin='/opt/build-venv/lib/python3.8/site-packages/cupy_backends/cuda/api/runtime.cpython-38-x86_64-linux-gnu.so')
__test__ = {}
_deviceEnsurePeerAccess = <built-in function _deviceEnsurePeerAccess>
_export_enum = <built-in function _export_enum>
_threading = <module 'threading' from '/usr/lib/python3.8/threading.py'>
check_status = <built-in function check_status>
createSurfaceObject = <built-in function createSurfaceObject>
createTextureObject = <built-in function createTextureObject>
cudaAddressModeBorder = 3
cudaAddressModeClamp = 1
cudaAddressModeMirror = 2
cudaAddressModeWrap = 0
cudaArrayDefault = 0
cudaArraySurfaceLoadStore = 2
cudaChannelFormatKindFloat = 2
cudaChannelFormatKindNone = 3
cudaChannelFormatKindSigned = 0
cudaChannelFormatKindUnsigned = 1
cudaDevAttrAsyncEngineCount = 40
cudaDevAttrCanFlushRemoteWrites = 98
cudaDevAttrCanMapHostMemory = 19
cudaDevAttrCanUseHostPointerForRegisteredMem = 91
cudaDevAttrClockRate = 13
cudaDevAttrComputeMode = 20
cudaDevAttrComputePreemptionSupported = 90
cudaDevAttrConcurrentKernels = 31
cudaDevAttrConcurrentManagedAccess = 89
cudaDevAttrCooperativeLaunch = 95
cudaDevAttrCooperativeMultiDeviceLaunch = 96
cudaDevAttrDirectManagedMemAccessFromHost = 101
cudaDevAttrEccEnabled = 32
cudaDevAttrGPUDirectRDMAFlushWritesOptions = 117
cudaDevAttrGPUDirectRDMASupported = 116
cudaDevAttrGPUDirectRDMAWritesOrdering = 118
cudaDevAttrGlobalL1CacheSupported = 79
cudaDevAttrGlobalMemoryBusWidth = 37
cudaDevAttrGpuOverlap = 15
cudaDevAttrHostNativeAtomicSupported = 86
cudaDevAttrHostRegisterReadOnlySupported = 113
cudaDevAttrHostRegisterSupported = 99
cudaDevAttrIntegrated = 18
cudaDevAttrIsMultiGpuBoard = 84
cudaDevAttrKernelExecTimeout = 17
cudaDevAttrL2CacheSize = 38
cudaDevAttrLocalL1CacheSupported = 80
cudaDevAttrManagedMemory = 83
cudaDevAttrMaxBlockDimX = 2
cudaDevAttrMaxBlockDimY = 3
cudaDevAttrMaxBlockDimZ = 4
cudaDevAttrMaxBlocksPerMultiprocessor = 106
cudaDevAttrMaxGridDimX = 5
cudaDevAttrMaxGridDimY = 6
cudaDevAttrMaxGridDimZ = 7
cudaDevAttrMaxPitch = 11
cudaDevAttrMaxRegistersPerBlock = 12
cudaDevAttrMaxRegistersPerMultiprocessor = 82
cudaDevAttrMaxSharedMemoryPerBlock = 8
cudaDevAttrMaxSharedMemoryPerBlockOptin = 97
cudaDevAttrMaxSharedMemoryPerMultiprocessor = 81
cudaDevAttrMaxSurface1DLayeredLayers = 62
cudaDevAttrMaxSurface1DLayeredWidth = 61
cudaDevAttrMaxSurface1DWidth = 55
cudaDevAttrMaxSurface2DHeight = 57
cudaDevAttrMaxSurface2DLayeredHeight = 64
cudaDevAttrMaxSurface2DLayeredLayers = 65
cudaDevAttrMaxSurface2DLayeredWidth = 63
cudaDevAttrMaxSurface2DWidth = 56
cudaDevAttrMaxSurface3DDepth = 60
cudaDevAttrMaxSurface3DHeight = 59
cudaDevAttrMaxSurface3DWidth = 58
cudaDevAttrMaxSurfaceCubemapLayeredLayers = 68
cudaDevAttrMaxSurfaceCubemapLayeredWidth = 67
cudaDevAttrMaxSurfaceCubemapWidth = 66
cudaDevAttrMaxTexture1DLayeredLayers = 43
cudaDevAttrMaxTexture1DLayeredWidth = 42
cudaDevAttrMaxTexture1DLinearWidth = 69
cudaDevAttrMaxTexture1DMipmappedWidth = 77
cudaDevAttrMaxTexture1DWidth = 21
cudaDevAttrMaxTexture2DGatherHeight = 46
cudaDevAttrMaxTexture2DGatherWidth = 45
cudaDevAttrMaxTexture2DHeight = 23
cudaDevAttrMaxTexture2DLayeredHeight = 28
cudaDevAttrMaxTexture2DLayeredLayers = 29
cudaDevAttrMaxTexture2DLayeredWidth = 27
cudaDevAttrMaxTexture2DLinearHeight = 71
cudaDevAttrMaxTexture2DLinearPitch = 72
cudaDevAttrMaxTexture2DLinearWidth = 70
cudaDevAttrMaxTexture2DMipmappedHeight = 74
cudaDevAttrMaxTexture2DMipmappedWidth = 73
cudaDevAttrMaxTexture2DWidth = 22
cudaDevAttrMaxTexture3DDepth = 26
cudaDevAttrMaxTexture3DDepthAlt = 49
cudaDevAttrMaxTexture3DHeight = 25
cudaDevAttrMaxTexture3DHeightAlt = 48
cudaDevAttrMaxTexture3DWidth = 24
cudaDevAttrMaxTexture3DWidthAlt = 47
cudaDevAttrMaxTextureCubemapLayeredLayers = 54
cudaDevAttrMaxTextureCubemapLayeredWidth = 53
cudaDevAttrMaxTextureCubemapWidth = 52
cudaDevAttrMaxThreadsPerBlock = 1
cudaDevAttrMaxThreadsPerMultiProcessor = 39
cudaDevAttrMaxTimelineSemaphoreInteropSupported = 114
cudaDevAttrMemoryClockRate = 36
cudaDevAttrMemoryPoolSupportedHandleTypes = 119
cudaDevAttrMemoryPoolsSupported = 115
cudaDevAttrMultiGpuBoardGroupID = 85
cudaDevAttrMultiProcessorCount = 16
cudaDevAttrPageableMemoryAccess = 88
cudaDevAttrPageableMemoryAccessUsesHostPageTables = 100
cudaDevAttrPciBusId = 33
cudaDevAttrPciDeviceId = 34
cudaDevAttrPciDomainId = 50
cudaDevAttrReserved92 = 92
cudaDevAttrReserved93 = 93
cudaDevAttrReserved94 = 94
cudaDevAttrReservedSharedMemoryPerBlock = 111
cudaDevAttrSingleToDoublePrecisionPerfRatio = 87
cudaDevAttrSparseCudaArraySupported = 112
cudaDevAttrStreamPrioritiesSupported = 78
cudaDevAttrSurfaceAlignment = 30
cudaDevAttrTccDriver = 35
cudaDevAttrTextureAlignment = 14
cudaDevAttrTexturePitchAlignment = 51
cudaDevAttrTotalConstantMemory = 9
cudaDevAttrUnifiedAddressing = 41
cudaDevAttrWarpSize = 10
cudaFilterModeLinear = 1
cudaFilterModePoint = 0
cudaIpcMemLazyEnablePeerAccess = 1
cudaLimitDevRuntimePendingLaunchCount = 4
cudaLimitDevRuntimeSyncDepth = 3
cudaLimitMallocHeapSize = 2
cudaLimitMaxL2FetchGranularity = 5
cudaLimitPrintfFifoSize = 1
cudaLimitStackSize = 0
cudaMemAdviseSetAccessedBy = 5
cudaMemAdviseSetPreferredLocation = 3
cudaMemAdviseSetReadMostly = 1
cudaMemAdviseUnsetAccessedBy = 6
cudaMemAdviseUnsetPreferredLocation = 4
cudaMemAdviseUnsetReadMostly = 2
cudaMemAllocationTypePinned = 1
cudaMemAttachGlobal = 1
cudaMemAttachHost = 2
cudaMemAttachSingle = 4
cudaMemHandleTypeNone = 0
cudaMemHandleTypePosixFileDescriptor = 1
cudaMemLocationTypeDevice = 1
cudaMemPoolAttrReleaseThreshold = 4
cudaMemPoolAttrReservedMemCurrent = 5
cudaMemPoolAttrReservedMemHigh = 6
cudaMemPoolAttrUsedMemCurrent = 7
cudaMemPoolAttrUsedMemHigh = 8
cudaMemPoolReuseAllowInternalDependencies = 3
cudaMemPoolReuseAllowOpportunistic = 2
cudaMemPoolReuseFollowEventDependencies = 1
cudaMemoryTypeDevice = 2
cudaMemoryTypeHost = 1
cudaReadModeElementType = 0
cudaReadModeNormalizedFloat = 1
cudaResourceTypeArray = 0
cudaResourceTypeLinear = 2
cudaResourceTypeMipmappedArray = 1
cudaResourceTypePitch2D = 3
destroySurfaceObject = <built-in function destroySurfaceObject>
destroyTextureObject = <built-in function destroyTextureObject>
deviceCanAccessPeer = <built-in function deviceCanAccessPeer>
deviceDisablePeerAccess = <built-in function deviceDisablePeerAccess>
deviceEnablePeerAccess = <built-in function deviceEnablePeerAccess>
deviceGetAttribute = <built-in function deviceGetAttribute>
deviceGetByPCIBusId = <built-in function deviceGetByPCIBusId>
deviceGetDefaultMemPool = <built-in function deviceGetDefaultMemPool>
deviceGetLimit = <built-in function deviceGetLimit>
deviceGetMemPool = <built-in function deviceGetMemPool>
deviceGetPCIBusId = <built-in function deviceGetPCIBusId>
deviceSetLimit = <built-in function deviceSetLimit>
deviceSetMemPool = <built-in function deviceSetMemPool>
deviceSynchronize = <built-in function deviceSynchronize>
driverGetVersion = <built-in function driverGetVersion>
eventBlockingSync = 1
eventCreate = <built-in function eventCreate>
eventCreateWithFlags = <built-in function eventCreateWithFlags>
eventDefault = 0
eventDestroy = <built-in function eventDestroy>
eventDisableTiming = 2
eventElapsedTime = <built-in function eventElapsedTime>
eventInterprocess = 4
eventQuery = <built-in function eventQuery>
eventRecord = <built-in function eventRecord>
eventSynchronize = <built-in function eventSynchronize>
free = <built-in function free>
freeArray = <built-in function freeArray>
freeAsync = <built-in function freeAsync>
freeHost = <built-in function freeHost>
getDevice = <built-in function getDevice>
getDeviceCount = <built-in function getDeviceCount>
getDeviceProperties = <built-in function getDeviceProperties>
graphDestroy = <built-in function graphDestroy>
graphExecDestroy = <built-in function graphExecDestroy>
graphInstantiate = <built-in function graphInstantiate>
graphLaunch = <built-in function graphLaunch>
graphUpload = <built-in function graphUpload>
hostAlloc = <built-in function hostAlloc>
hostAllocDefault = 0
hostAllocMapped = 2
hostAllocPortable = 1
hostAllocWriteCombined = 4
hostRegister = <built-in function hostRegister>
hostUnregister = <built-in function hostUnregister>
ipcCloseMemHandle = <built-in function ipcCloseMemHandle>
ipcGetEventHandle = <built-in function ipcGetEventHandle>
ipcGetMemHandle = <built-in function ipcGetMemHandle>
ipcOpenEventHandle = <built-in function ipcOpenEventHandle>
ipcOpenMemHandle = <built-in function ipcOpenMemHandle>
is_hip = False
launchHostFunc = <built-in function launchHostFunc>
malloc = <built-in function malloc>
malloc3DArray = <built-in function malloc3DArray>
mallocArray = <built-in function mallocArray>
mallocAsync = <built-in function mallocAsync>
mallocFromPoolAsync = <built-in function mallocFromPoolAsync>
mallocManaged = <built-in function mallocManaged>
memAdvise = <built-in function memAdvise>
memGetInfo = <built-in function memGetInfo>
memPoolCreate = <built-in function memPoolCreate>
memPoolDestroy = <built-in function memPoolDestroy>
memPoolGetAttribute = <built-in function memPoolGetAttribute>
memPoolSetAttribute = <built-in function memPoolSetAttribute>
memPoolTrimTo = <built-in function memPoolTrimTo>
memPrefetchAsync = <built-in function memPrefetchAsync>
memcpy = <built-in function memcpy>
memcpy2D = <built-in function memcpy2D>
memcpy2DAsync = <built-in function memcpy2DAsync>
memcpy2DFromArray = <built-in function memcpy2DFromArray>
memcpy2DFromArrayAsync = <built-in function memcpy2DFromArrayAsync>
memcpy2DToArray = <built-in function memcpy2DToArray>
memcpy2DToArrayAsync = <built-in function memcpy2DToArrayAsync>
memcpy3D = <built-in function memcpy3D>
memcpy3DAsync = <built-in function memcpy3DAsync>
memcpyAsync = <built-in function memcpyAsync>
memcpyDefault = 4
memcpyDeviceToDevice = 3
memcpyDeviceToHost = 2
memcpyHostToDevice = 1
memcpyHostToHost = 0
memcpyPeer = <built-in function memcpyPeer>
memcpyPeerAsync = <built-in function memcpyPeerAsync>
memoryTypeDevice = 2
memoryTypeHost = 1
memoryTypeManaged = 3
memoryTypeUnregistered = 0
memset = <built-in function memset>
memsetAsync = <built-in function memsetAsync>
pointerGetAttributes = <built-in function pointerGetAttributes>
runtimeGetVersion = <built-in function runtimeGetVersion>
setDevice = <built-in function setDevice>
streamAddCallback = <built-in function streamAddCallback>
streamBeginCapture = <built-in function streamBeginCapture>
streamCaptureModeGlobal = 0
streamCaptureModeRelaxed = 2
streamCaptureModeThreadLocal = 1
streamCaptureStatusActive = 1
streamCaptureStatusInvalidated = 2
streamCaptureStatusNone = 0
streamCreate = <built-in function streamCreate>
streamCreateWithFlags = <built-in function streamCreateWithFlags>
streamDefault = 0
streamDestroy = <built-in function streamDestroy>
streamEndCapture = <built-in function streamEndCapture>
streamIsCapturing = <built-in function streamIsCapturing>
streamLegacy = 1
streamNonBlocking = 1
streamPerThread = 2
streamQuery = <built-in function streamQuery>
streamSynchronize = <built-in function streamSynchronize>
streamWaitEvent = <built-in function streamWaitEvent>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E cupy_backends.cuda.api.runtime.CUDARuntimeError: cudaErrorNoDevice: no CUDA-capable device is detected
CUDARuntimeError = <class 'cupy_backends.cuda.api.runtime.CUDARuntimeError'>
CUDA_C_16F = 6
CUDA_C_32F = 4
CUDA_C_64F = 5
CUDA_C_8I = 7
CUDA_C_8U = 9
CUDA_R_16F = 2
CUDA_R_32F = 0
CUDA_R_64F = 1
CUDA_R_8I = 3
CUDA_R_8U = 8
MemPoolProps = <class 'cupy_backends.cuda.api.runtime.MemPoolProps'>
PointerAttributes = <class 'cupy_backends.cuda.api.runtime.PointerAttributes'>
_ThreadLocal = <class 'cupy_backends.cuda.api.runtime._ThreadLocal'>
__builtins__ = <builtins>
__doc__ = ('Thin wrapper of CUDA Runtime API.\n'
'\n'
'There are four differences compared to the original C API.\n'
'\n'
'1. Not all functions are ported.\n'
'2. Errors are translated into CUDARuntimeError exceptions.\n'
"3. The 'cuda' prefix of each API is omitted and the next character is set "
'to\n'
' lower case.\n'
'4. The resulting values are returned directly instead of references.\n'
'\n')
__file__ = '/opt/build-venv/lib/python3.8/site-packages/cupy_backends/cuda/api/runtime.cpython-38-x86_64-linux-gnu.so'
__loader__ = <_frozen_importlib_external.ExtensionFileLoader object at 0x7f657afcf490>
__name__ = 'cupy_backends.cuda.api.runtime'
__package__ = 'cupy_backends.cuda.api'
__pyx_capi__ = {'_deviceEnsurePeerAccess': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfe10>,
'_ensure_context': <capsule object "PyObject *(void)" at 0x7f657afdd990>,
'_is_hip_environment': <capsule object "int" at 0x7f657afcfb70>,
'check_status': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfba0>,
'createSurfaceObject': <capsule object "uintmax_t (intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddb40>,
'createTextureObject': <capsule object "uintmax_t (intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd9c0>,
'destroySurfaceObject': <capsule object "PyObject *(uintmax_t, int __pyx_skip_dispatch)" at 0x7f657afddb70>,
'destroyTextureObject': <capsule object "PyObject *(uintmax_t, int __pyx_skip_dispatch)" at 0x7f657afdd9f0>,
'deviceAttributeComputeCapabilityMajor': <capsule object "int" at 0x7f657afcfb10>,
'deviceAttributeComputeCapabilityMinor': <capsule object "int" at 0x7f657afcfb40>,
'deviceCanAccessPeer': <capsule object "int (int, int, int __pyx_skip_dispatch)" at 0x7f657afcfdb0>,
'deviceEnablePeerAccess': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfde0>,
'deviceGetAttribute': <capsule object "int (int, int, int __pyx_skip_dispatch)" at 0x7f657afcfc60>,
'deviceGetByPCIBusId': <capsule object "int (PyObject *, int __pyx_skip_dispatch)" at 0x7f657afcfc90>,
'deviceGetDefaultMemPool': <capsule object "intptr_t (int, int __pyx_skip_dispatch)" at 0x7f657afdd4b0>,
'deviceGetLimit': <capsule object "size_t (int, int __pyx_skip_dispatch)" at 0x7f657afcfe40>,
'deviceGetMemPool': <capsule object "intptr_t (int, int __pyx_skip_dispatch)" at 0x7f657afdd4e0>,
'deviceGetPCIBusId': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfcc0>,
'deviceSetLimit': <capsule object "PyObject *(int, size_t, int __pyx_skip_dispatch)" at 0x7f657afcfe70>,
'deviceSetMemPool': <capsule object "PyObject *(int, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd510>,
'deviceSynchronize': <capsule object "PyObject *(int __pyx_skip_dispatch)" at 0x7f657afcfd50>,
'driverGetVersion': <capsule object "int (int __pyx_skip_dispatch)" at 0x7f657afcfbd0>,
'errorContextIsDestroyed': <capsule object "int" at 0x7f657afcfab0>,
'errorInvalidResourceHandle': <capsule object "int" at 0x7f657afcfae0>,
'errorInvalidValue': <capsule object "int" at 0x7f657afcfa50>,
'errorMemoryAllocation': <capsule object "int" at 0x7f657afcfa20>,
'errorPeerAccessAlreadyEnabled': <capsule object "int" at 0x7f657afcfa80>,
'eventCreate': <capsule object "intptr_t (int __pyx_skip_dispatch)" at 0x7f657afdd840>,
'eventCreateWithFlags': <capsule object "intptr_t (unsigned int, int __pyx_skip_dispatch)" at 0x7f657afdd870>,
'eventDestroy': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd8a0>,
'eventElapsedTime': <capsule object "float (intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd8d0>,
'eventQuery': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd900>,
'eventRecord': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd930>,
'eventSynchronize': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd960>,
'free': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd090>,
'freeArray': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd0f0>,
'freeAsync': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd120>,
'freeHost': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd0c0>,
'getChannelDesc': <capsule object "cudaChannelFormatDesc (intptr_t)" at 0x7f657afdda20>,
'getDevice': <capsule object "int (int __pyx_skip_dispatch)" at 0x7f657afcfc30>,
'getDeviceCount': <capsule object "int (int __pyx_skip_dispatch)" at 0x7f657afcfcf0>,
'getDeviceProperties': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfd80>,
'getTextureObjectResourceDesc': <capsule object "cudaResourceDesc (uintmax_t)" at 0x7f657afdda50>,
'getTextureObjectTextureDesc': <capsule object "cudaTextureDesc (uintmax_t)" at 0x7f657afdda80>,
'graphDestroy': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddba0>,
'graphExecDestroy': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddbd0>,
'graphInstantiate': <capsule object "intptr_t (intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddc00>,
'graphLaunch': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddc30>,
'graphUpload': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afddc60>,
'hostAlloc': <capsule object "intptr_t (size_t, unsigned int, int __pyx_skip_dispatch)" at 0x7f657afcffc0>,
'hostRegister': <capsule object "PyObject *(intptr_t, size_t, unsigned int, int __pyx_skip_dispatch)" at 0x7f657afdd030>,
'hostUnregister': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd060>,
'launchHostFunc': <capsule object "PyObject *(intptr_t, PyObject *, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd720>,
'make_Extent': <capsule object "cudaExtent (size_t, size_t, size_t)" at 0x7f657afddab0>,
'make_PitchedPtr': <capsule object "cudaPitchedPtr (intptr_t, size_t, size_t, size_t)" at 0x7f657afddb10>,
'make_Pos': <capsule object "cudaPos (size_t, size_t, size_t)" at 0x7f657afddae0>,
'malloc': <capsule object "intptr_t (size_t, int __pyx_skip_dispatch)" at 0x7f657afcfea0>,
'malloc3DArray': <capsule object "intptr_t (intptr_t, size_t, size_t, size_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_malloc3DArray *__pyx_optional_args)" at 0x7f657afcff00>,
'mallocArray': <capsule object "intptr_t (intptr_t, size_t, size_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_mallocArray *__pyx_optional_args)" at 0x7f657afcff30>,
'mallocAsync': <capsule object "intptr_t (size_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afcff60>,
'mallocFromPoolAsync': <capsule object "intptr_t (size_t, intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afcff90>,
'mallocManaged': <capsule object "intptr_t (size_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_mallocManaged *__pyx_optional_args)" at 0x7f657afcfed0>,
'memAdvise': <capsule object "PyObject *(intptr_t, size_t, int, int, int __pyx_skip_dispatch)" at 0x7f657afdd450>,
'memGetInfo': <capsule object "PyObject *(int __pyx_skip_dispatch)" at 0x7f657afdd150>,
'memPoolCreate': <capsule object "intptr_t (struct __pyx_obj_13cupy_backends_4cuda_3api_7runtime_MemPoolProps *, int __pyx_skip_dispatch)" at 0x7f657afdd540>,
'memPoolDestroy': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd570>,
'memPoolGetAttribute': <capsule object "PyObject *(intptr_t, int, int __pyx_skip_dispatch)" at 0x7f657afdd5d0>,
'memPoolSetAttribute': <capsule object "PyObject *(intptr_t, int, PyObject *, int __pyx_skip_dispatch)" at 0x7f657afdd600>,
'memPoolTrimTo': <capsule object "PyObject *(intptr_t, size_t, int __pyx_skip_dispatch)" at 0x7f657afdd5a0>,
'memPrefetchAsync': <capsule object "PyObject *(intptr_t, size_t, int, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd420>,
'memcpy': <capsule object "PyObject *(intptr_t, intptr_t, size_t, int, int __pyx_skip_dispatch)" at 0x7f657afdd180>,
'memcpy2D': <capsule object "PyObject *(intptr_t, size_t, intptr_t, size_t, size_t, size_t, cudaMemcpyKind, int __pyx_skip_dispatch)" at 0x7f657afdd240>,
'memcpy2DAsync': <capsule object "PyObject *(intptr_t, size_t, intptr_t, size_t, size_t, size_t, cudaMemcpyKind, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd270>,
'memcpy2DFromArray': <capsule object "PyObject *(intptr_t, size_t, intptr_t, size_t, size_t, size_t, size_t, int, int __pyx_skip_dispatch)" at 0x7f657afdd2a0>,
'memcpy2DFromArrayAsync': <capsule object "PyObject *(intptr_t, size_t, intptr_t, size_t, size_t, size_t, size_t, int, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd2d0>,
'memcpy2DToArray': <capsule object "PyObject *(intptr_t, size_t, size_t, intptr_t, size_t, size_t, size_t, int, int __pyx_skip_dispatch)" at 0x7f657afdd300>,
'memcpy2DToArrayAsync': <capsule object "PyObject *(intptr_t, size_t, size_t, intptr_t, size_t, size_t, size_t, int, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd330>,
'memcpy3D': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd360>,
'memcpy3DAsync': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd390>,
'memcpyAsync': <capsule object "PyObject *(intptr_t, intptr_t, size_t, int, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd1b0>,
'memcpyPeer': <capsule object "PyObject *(intptr_t, int, intptr_t, int, size_t, int __pyx_skip_dispatch)" at 0x7f657afdd1e0>,
'memcpyPeerAsync': <capsule object "PyObject *(intptr_t, int, intptr_t, int, size_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd210>,
'memset': <capsule object "PyObject *(intptr_t, int, size_t, int __pyx_skip_dispatch)" at 0x7f657afdd3c0>,
'memsetAsync': <capsule object "PyObject *(intptr_t, int, size_t, intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd3f0>,
'pointerGetAttributes': <capsule object "struct __pyx_obj_13cupy_backends_4cuda_3api_7runtime_PointerAttributes *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd480>,
'runtimeGetVersion': <capsule object "int (int __pyx_skip_dispatch)" at 0x7f657afcfc00>,
'setDevice': <capsule object "PyObject *(int, int __pyx_skip_dispatch)" at 0x7f657afcfd20>,
'streamAddCallback': <capsule object "PyObject *(intptr_t, PyObject *, intptr_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_streamAddCallback *__pyx_optional_args)" at 0x7f657afdd6f0>,
'streamBeginCapture': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_streamBeginCapture *__pyx_optional_args)" at 0x7f657afdd7b0>,
'streamCreate': <capsule object "intptr_t (int __pyx_skip_dispatch)" at 0x7f657afdd630>,
'streamCreateWithFlags': <capsule object "intptr_t (unsigned int, int __pyx_skip_dispatch)" at 0x7f657afdd660>,
'streamDestroy': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd690>,
'streamEndCapture': <capsule object "intptr_t (intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd7e0>,
'streamIsCapturing': <capsule object "int (intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd810>,
'streamQuery': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd750>,
'streamSynchronize': <capsule object "PyObject *(intptr_t, int __pyx_skip_dispatch)" at 0x7f657afdd6c0>,
'streamWaitEvent': <capsule object "PyObject *(intptr_t, intptr_t, int __pyx_skip_dispatch, struct __pyx_opt_args_13cupy_backends_4cuda_3api_7runtime_streamWaitEvent *__pyx_optional_args)" at 0x7f657afdd780>}
__pyx_unpickle_MemPoolProps = <built-in function __pyx_unpickle_MemPoolProps>
__pyx_unpickle_PointerAttributes = <built-in function __pyx_unpickle_PointerAttributes>
__pyx_unpickle__ThreadLocal = <built-in function __pyx_unpickle__ThreadLocal>
__spec__ = ModuleSpec(name='cupy_backends.cuda.api.runtime', loader=<_frozen_importlib_external.ExtensionFileLoader object at 0x7f657afcf490>, origin='/opt/build-venv/lib/python3.8/site-packages/cupy_backends/cuda/api/runtime.cpython-38-x86_64-linux-gnu.so')
__test__ = {}
_deviceEnsurePeerAccess = <built-in function _deviceEnsurePeerAccess>
_export_enum = <built-in function _export_enum>
_threading = <module 'threading' from '/usr/lib/python3.8/threading.py'>
check_status = <built-in function check_status>
createSurfaceObject = <built-in function createSurfaceObject>
createTextureObject = <built-in function createTextureObject>
cudaAddressModeBorder = 3
cudaAddressModeClamp = 1
cudaAddressModeMirror = 2
cudaAddressModeWrap = 0
cudaArrayDefault = 0
cudaArraySurfaceLoadStore = 2
cudaChannelFormatKindFloat = 2
cudaChannelFormatKindNone = 3
cudaChannelFormatKindSigned = 0
cudaChannelFormatKindUnsigned = 1
cudaDevAttrAsyncEngineCount = 40
cudaDevAttrCanFlushRemoteWrites = 98
cudaDevAttrCanMapHostMemory = 19
cudaDevAttrCanUseHostPointerForRegisteredMem = 91
cudaDevAttrClockRate = 13
cudaDevAttrComputeMode = 20
cudaDevAttrComputePreemptionSupported = 90
cudaDevAttrConcurrentKernels = 31
cudaDevAttrConcurrentManagedAccess = 89
cudaDevAttrCooperativeLaunch = 95
cudaDevAttrCooperativeMultiDeviceLaunch = 96
cudaDevAttrDirectManagedMemAccessFromHost = 101
cudaDevAttrEccEnabled = 32
cudaDevAttrGPUDirectRDMAFlushWritesOptions = 117
cudaDevAttrGPUDirectRDMASupported = 116
cudaDevAttrGPUDirectRDMAWritesOrdering = 118
cudaDevAttrGlobalL1CacheSupported = 79
cudaDevAttrGlobalMemoryBusWidth = 37
cudaDevAttrGpuOverlap = 15
cudaDevAttrHostNativeAtomicSupported = 86
cudaDevAttrHostRegisterReadOnlySupported = 113
cudaDevAttrHostRegisterSupported = 99
cudaDevAttrIntegrated = 18
cudaDevAttrIsMultiGpuBoard = 84
cudaDevAttrKernelExecTimeout = 17
cudaDevAttrL2CacheSize = 38
cudaDevAttrLocalL1CacheSupported = 80
cudaDevAttrManagedMemory = 83
cudaDevAttrMaxBlockDimX = 2
cudaDevAttrMaxBlockDimY = 3
cudaDevAttrMaxBlockDimZ = 4
cudaDevAttrMaxBlocksPerMultiprocessor = 106
cudaDevAttrMaxGridDimX = 5
cudaDevAttrMaxGridDimY = 6
cudaDevAttrMaxGridDimZ = 7
cudaDevAttrMaxPitch = 11
cudaDevAttrMaxRegistersPerBlock = 12
cudaDevAttrMaxRegistersPerMultiprocessor = 82
cudaDevAttrMaxSharedMemoryPerBlock = 8
cudaDevAttrMaxSharedMemoryPerBlockOptin = 97
cudaDevAttrMaxSharedMemoryPerMultiprocessor = 81
cudaDevAttrMaxSurface1DLayeredLayers = 62
cudaDevAttrMaxSurface1DLayeredWidth = 61
cudaDevAttrMaxSurface1DWidth = 55
cudaDevAttrMaxSurface2DHeight = 57
cudaDevAttrMaxSurface2DLayeredHeight = 64
cudaDevAttrMaxSurface2DLayeredLayers = 65
cudaDevAttrMaxSurface2DLayeredWidth = 63
cudaDevAttrMaxSurface2DWidth = 56
cudaDevAttrMaxSurface3DDepth = 60
cudaDevAttrMaxSurface3DHeight = 59
cudaDevAttrMaxSurface3DWidth = 58
cudaDevAttrMaxSurfaceCubemapLayeredLayers = 68
cudaDevAttrMaxSurfaceCubemapLayeredWidth = 67
cudaDevAttrMaxSurfaceCubemapWidth = 66
cudaDevAttrMaxTexture1DLayeredLayers = 43
cudaDevAttrMaxTexture1DLayeredWidth = 42
cudaDevAttrMaxTexture1DLinearWidth = 69
cudaDevAttrMaxTexture1DMipmappedWidth = 77
cudaDevAttrMaxTexture1DWidth = 21
cudaDevAttrMaxTexture2DGatherHeight = 46
cudaDevAttrMaxTexture2DGatherWidth = 45
cudaDevAttrMaxTexture2DHeight = 23
cudaDevAttrMaxTexture2DLayeredHeight = 28
cudaDevAttrMaxTexture2DLayeredLayers = 29
cudaDevAttrMaxTexture2DLayeredWidth = 27
cudaDevAttrMaxTexture2DLinearHeight = 71
cudaDevAttrMaxTexture2DLinearPitch = 72
cudaDevAttrMaxTexture2DLinearWidth = 70
cudaDevAttrMaxTexture2DMipmappedHeight = 74
cudaDevAttrMaxTexture2DMipmappedWidth = 73
cudaDevAttrMaxTexture2DWidth = 22
cudaDevAttrMaxTexture3DDepth = 26
cudaDevAttrMaxTexture3DDepthAlt = 49
cudaDevAttrMaxTexture3DHeight = 25
cudaDevAttrMaxTexture3DHeightAlt = 48
cudaDevAttrMaxTexture3DWidth = 24
cudaDevAttrMaxTexture3DWidthAlt = 47
cudaDevAttrMaxTextureCubemapLayeredLayers = 54
cudaDevAttrMaxTextureCubemapLayeredWidth = 53
cudaDevAttrMaxTextureCubemapWidth = 52
cudaDevAttrMaxThreadsPerBlock = 1
cudaDevAttrMaxThreadsPerMultiProcessor = 39
cudaDevAttrMaxTimelineSemaphoreInteropSupported = 114
cudaDevAttrMemoryClockRate = 36
cudaDevAttrMemoryPoolSupportedHandleTypes = 119
cudaDevAttrMemoryPoolsSupported = 115
cudaDevAttrMultiGpuBoardGroupID = 85
cudaDevAttrMultiProcessorCount = 16
cudaDevAttrPageableMemoryAccess = 88
cudaDevAttrPageableMemoryAccessUsesHostPageTables = 100
cudaDevAttrPciBusId = 33
cudaDevAttrPciDeviceId = 34
cudaDevAttrPciDomainId = 50
cudaDevAttrReserved92 = 92
cudaDevAttrReserved93 = 93
cudaDevAttrReserved94 = 94
cudaDevAttrReservedSharedMemoryPerBlock = 111
cudaDevAttrSingleToDoublePrecisionPerfRatio = 87
cudaDevAttrSparseCudaArraySupported = 112
cudaDevAttrStreamPrioritiesSupported = 78
cudaDevAttrSurfaceAlignment = 30
cudaDevAttrTccDriver = 35
cudaDevAttrTextureAlignment = 14
cudaDevAttrTexturePitchAlignment = 51
cudaDevAttrTotalConstantMemory = 9
cudaDevAttrUnifiedAddressing = 41
cudaDevAttrWarpSize = 10
cudaFilterModeLinear = 1
cudaFilterModePoint = 0
cudaIpcMemLazyEnablePeerAccess = 1
cudaLimitDevRuntimePendingLaunchCount = 4
cudaLimitDevRuntimeSyncDepth = 3
cudaLimitMallocHeapSize = 2
cudaLimitMaxL2FetchGranularity = 5
cudaLimitPrintfFifoSize = 1
cudaLimitStackSize = 0
cudaMemAdviseSetAccessedBy = 5
cudaMemAdviseSetPreferredLocation = 3
cudaMemAdviseSetReadMostly = 1
cudaMemAdviseUnsetAccessedBy = 6
cudaMemAdviseUnsetPreferredLocation = 4
cudaMemAdviseUnsetReadMostly = 2
cudaMemAllocationTypePinned = 1
cudaMemAttachGlobal = 1
cudaMemAttachHost = 2
cudaMemAttachSingle = 4
cudaMemHandleTypeNone = 0
cudaMemHandleTypePosixFileDescriptor = 1
cudaMemLocationTypeDevice = 1
cudaMemPoolAttrReleaseThreshold = 4
cudaMemPoolAttrReservedMemCurrent = 5
cudaMemPoolAttrReservedMemHigh = 6
cudaMemPoolAttrUsedMemCurrent = 7
cudaMemPoolAttrUsedMemHigh = 8
cudaMemPoolReuseAllowInternalDependencies = 3
cudaMemPoolReuseAllowOpportunistic = 2
cudaMemPoolReuseFollowEventDependencies = 1
cudaMemoryTypeDevice = 2
cudaMemoryTypeHost = 1
cudaReadModeElementType = 0
cudaReadModeNormalizedFloat = 1
cudaResourceTypeArray = 0
cudaResourceTypeLinear = 2
cudaResourceTypeMipmappedArray = 1
cudaResourceTypePitch2D = 3
destroySurfaceObject = <built-in function destroySurfaceObject>
destroyTextureObject = <built-in function destroyTextureObject>
deviceCanAccessPeer = <built-in function deviceCanAccessPeer>
deviceDisablePeerAccess = <built-in function deviceDisablePeerAccess>
deviceEnablePeerAccess = <built-in function deviceEnablePeerAccess>
deviceGetAttribute = <built-in function deviceGetAttribute>
deviceGetByPCIBusId = <built-in function deviceGetByPCIBusId>
deviceGetDefaultMemPool = <built-in function deviceGetDefaultMemPool>
deviceGetLimit = <built-in function deviceGetLimit>
deviceGetMemPool = <built-in function deviceGetMemPool>
deviceGetPCIBusId = <built-in function deviceGetPCIBusId>
deviceSetLimit = <built-in function deviceSetLimit>
deviceSetMemPool = <built-in function deviceSetMemPool>
deviceSynchronize = <built-in function deviceSynchronize>
driverGetVersion = <built-in function driverGetVersion>
eventBlockingSync = 1
eventCreate = <built-in function eventCreate>
eventCreateWithFlags = <built-in function eventCreateWithFlags>
eventDefault = 0
eventDestroy = <built-in function eventDestroy>
eventDisableTiming = 2
eventElapsedTime = <built-in function eventElapsedTime>
eventInterprocess = 4
eventQuery = <built-in function eventQuery>
eventRecord = <built-in function eventRecord>
eventSynchronize = <built-in function eventSynchronize>
free = <built-in function free>
freeArray = <built-in function freeArray>
freeAsync = <built-in function freeAsync>
freeHost = <built-in function freeHost>
getDevice = <built-in function getDevice>
getDeviceCount = <built-in function getDeviceCount>
getDeviceProperties = <built-in function getDeviceProperties>
graphDestroy = <built-in function graphDestroy>
graphExecDestroy = <built-in function graphExecDestroy>
graphInstantiate = <built-in function graphInstantiate>
graphLaunch = <built-in function graphLaunch>
graphUpload = <built-in function graphUpload>
hostAlloc = <built-in function hostAlloc>
hostAllocDefault = 0
hostAllocMapped = 2
hostAllocPortable = 1
hostAllocWriteCombined = 4
hostRegister = <built-in function hostRegister>
hostUnregister = <built-in function hostUnregister>
ipcCloseMemHandle = <built-in function ipcCloseMemHandle>
ipcGetEventHandle = <built-in function ipcGetEventHandle>
ipcGetMemHandle = <built-in function ipcGetMemHandle>
ipcOpenEventHandle = <built-in function ipcOpenEventHandle>
ipcOpenMemHandle = <built-in function ipcOpenMemHandle>
is_hip = False
launchHostFunc = <built-in function launchHostFunc>
malloc = <built-in function malloc>
malloc3DArray = <built-in function malloc3DArray>
mallocArray = <built-in function mallocArray>
mallocAsync = <built-in function mallocAsync>
mallocFromPoolAsync = <built-in function mallocFromPoolAsync>
mallocManaged = <built-in function mallocManaged>
memAdvise = <built-in function memAdvise>
memGetInfo = <built-in function memGetInfo>
memPoolCreate = <built-in function memPoolCreate>
memPoolDestroy = <built-in function memPoolDestroy>
memPoolGetAttribute = <built-in function memPoolGetAttribute>
memPoolSetAttribute = <built-in function memPoolSetAttribute>
memPoolTrimTo = <built-in function memPoolTrimTo>
memPrefetchAsync = <built-in function memPrefetchAsync>
memcpy = <built-in function memcpy>
memcpy2D = <built-in function memcpy2D>
memcpy2DAsync = <built-in function memcpy2DAsync>
memcpy2DFromArray = <built-in function memcpy2DFromArray>
memcpy2DFromArrayAsync = <built-in function memcpy2DFromArrayAsync>
memcpy2DToArray = <built-in function memcpy2DToArray>
memcpy2DToArrayAsync = <built-in function memcpy2DToArrayAsync>
memcpy3D = <built-in function memcpy3D>
memcpy3DAsync = <built-in function memcpy3DAsync>
memcpyAsync = <built-in function memcpyAsync>
memcpyDefault = 4
memcpyDeviceToDevice = 3
memcpyDeviceToHost = 2
memcpyHostToDevice = 1
memcpyHostToHost = 0
memcpyPeer = <built-in function memcpyPeer>
memcpyPeerAsync = <built-in function memcpyPeerAsync>
memoryTypeDevice = 2
memoryTypeHost = 1
memoryTypeManaged = 3
memoryTypeUnregistered = 0
memset = <built-in function memset>
memsetAsync = <built-in function memsetAsync>
pointerGetAttributes = <built-in function pointerGetAttributes>
runtimeGetVersion = <built-in function runtimeGetVersion>
setDevice = <built-in function setDevice>
streamAddCallback = <built-in function streamAddCallback>
streamBeginCapture = <built-in function streamBeginCapture>
streamCaptureModeGlobal = 0
streamCaptureModeRelaxed = 2
streamCaptureModeThreadLocal = 1
streamCaptureStatusActive = 1
streamCaptureStatusInvalidated = 2
streamCaptureStatusNone = 0
streamCreate = <built-in function streamCreate>
streamCreateWithFlags = <built-in function streamCreateWithFlags>
streamDefault = 0
streamDestroy = <built-in function streamDestroy>
streamEndCapture = <built-in function streamEndCapture>
streamIsCapturing = <built-in function streamIsCapturing>
streamLegacy = 1
streamNonBlocking = 1
streamPerThread = 2
streamQuery = <built-in function streamQuery>
streamSynchronize = <built-in function streamSynchronize>
streamWaitEvent = <built-in function streamWaitEvent>
cupy_backends/cuda/api/runtime.pyx:144: CUDARuntimeError
=========================== short test summary info ============================
SKIPPED [1] tests-cuda/test_1276_cuda_num.py:14: too old Numba version
SKIPPED [1] tests-cuda/test_1276_cuda_transfers.py:14: too old Numba version
SKIPPED [1] tests-cuda/test_1276_cupy_interop.py:16: too old Numba version
SKIPPED [1] tests-cuda/test_1276_from_cupy.py:14: too old Numba version
SKIPPED [1] tests-cuda/test_1300_same_for_numba_cuda.py:10: could not import 'numba': No module named 'numba'
SKIPPED [1] tests-cuda/test_1381_check_errors.py:17: too old Numba version
SKIPPED [1] tests-cuda/test_1809_array_cuda_jit.py:10: could not import 'numba': No module named 'numba'
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_ByteMaskedArray_reduce_next_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_ByteMaskedArray_reduce_next_nonlocal_nextshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray32_overlay_mask8_to64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray32_reduce_next_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray32_reduce_next_nonlocal_nextshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray32_reduce_next_nonlocal_nextshifts_fromshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray64_overlay_mask8_to64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray64_reduce_next_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray64_reduce_next_nonlocal_nextshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray64_reduce_next_nonlocal_nextshifts_fromshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArrayU32_overlay_mask8_to64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArrayU32_reduce_next_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArrayU32_reduce_next_nonlocal_nextshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArrayU32_reduce_next_nonlocal_nextshifts_fromshifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_IndexedArray_reduce_next_fix_offsets_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_ListOffsetArray32_rpad_and_clip_axis1_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_ListOffsetArray64_rpad_and_clip_axis1_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_ListOffsetArrayU32_rpad_and_clip_axis1_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_NumpyArray_reduce_adjust_starts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_NumpyArray_reduce_adjust_starts_shifts_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_NumpyArray_reduce_mask_ByteMaskedArray_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmax_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_argmin_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_count_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_bool_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_countnonzero_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_float32_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_float64_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_int16_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_int32_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_int64_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_int8_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_uint16_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_uint32_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_uint64_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_max_uint8_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_float32_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_float64_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_int16_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_int32_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_int64_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_int8_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_uint16_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_uint32_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_uint64_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_min_uint8_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_bool_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_prod_bool_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_bool_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_bool_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_float32_float32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_float64_float64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int32_bool_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int32_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int32_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int32_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int64_bool_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int64_int16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int64_int32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int64_int64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_int64_int8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint32_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint32_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint32_uint8_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint64_uint16_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint64_uint32_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint64_uint64_64.py:20: Unable to generate any tests for kernel
SKIPPED [1] tests-cuda-kernels/test_cudaawkward_reduce_sum_uint64_uint8_64.py:20: Unable to generate any tests for kernel
================= 1229 failed, 120 skipped in 83.29s (0:01:23) =================
``` | closed | 2023-11-15T13:34:23Z | 2023-11-15T13:50:49Z | https://github.com/scikit-hep/awkward/issues/2822 | [] | agoose77 | 0 |
run-llama/rags | streamlit | 65 | Stop response generation in langchain framework |
Python code:
```python
qa_chain = RetrievalQA.from_chain_type(
    llm=turbo_llm,
    chain_type="stuff",
    retriever=compression_retriever,
    return_source_documents=True,
)

response = qa_chain("What is Langchain?")
```
This is the Python code I am using to query a PDF following the RAG approach.
My requirement is: if generating the response takes more than 1 minute, the backend should stop the response generation.
How can I do that? Is there an established Python pattern or architecture for this?
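
(Editor's note, one possible approach rather than an established LangChain feature: the helper below is hypothetical and only bounds how long the caller waits for a blocking chain call, using the standard library.)

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def run_with_timeout(chain, query, timeout_s=60):
    """Call chain(query) but stop waiting for the result after timeout_s seconds."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(chain, query)
    try:
        return future.result(timeout=timeout_s)
    except TimeoutError:
        # Stop waiting; the worker thread may still finish in the background.
        return None
    finally:
        pool.shutdown(wait=False)

# e.g. response = run_with_timeout(qa_chain, "What is Langchain?", timeout_s=60)
```

A hard cancel of the underlying LLM request is a different matter; that would typically mean running the generation in a separate process or using whatever cancellation hooks the serving backend exposes.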
| open | 2024-03-18T09:29:00Z | 2024-03-18T09:29:00Z | https://github.com/run-llama/rags/issues/65 | [] | Aniketparab1999 | 0 |
oegedijk/explainerdashboard | dash | 139 | Exception: Model does not have a known objective or output type! When model_output is not "raw" then we need to know the model's objective or link function. | **I am trying to build an explainer dashboard with a multiclass model, but it throws the following error:**
```
Exception Traceback (most recent call last)
<ipython-input-105-d37d4aaf569b> in <module>
8
9 from explainerdashboard import ClassifierExplainer, ExplainerDashboard, RegressionExplainer, ExplainerHub
---> 10 explainer = ClassifierExplainer(model, X_test, y_test)
~\Anaconda3\lib\site-packages\explainerdashboard\explainers.py in __init__(self, model, X, y, permutation_metric, shap, X_background, model_output, cats, cats_notencoded, idxs, index_name, target, descriptions, n_jobs, permutation_cv, cv, na_fill, precision, labels, pos_label)
2039 self.__class__ = XGBClassifierExplainer
2040
-> 2041 _ = self.shap_explainer
2042
2043
~\Anaconda3\lib\site-packages\explainerdashboard\explainers.py in shap_explainer(self)
2158 "pass model_output='logodds' to get shap values in logodds without the need for "
2159 "a background dataset and also working shap interaction values...")
-> 2160 self._shap_explainer = shap.TreeExplainer(
2161 self.model,
2162 self.X_background if self.X_background is not None else self.X,
~\Anaconda3\lib\site-packages\shap\explainers\_tree.py in __init__(self, model, data, model_output, feature_perturbation, feature_names, **deprecated_options)
161 if self.model.model_output != "raw":
162 if self.model.objective is None and self.model.tree_output is None:
--> 163 raise Exception("Model does not have a known objective or output type! When model_output is " \
164 "not \"raw\" then we need to know the model's objective or link function.")
165
Exception: Model does not have a known objective or output type! When model_output is not "raw" then we need to know the model's objective or link function.
```
**I am simply passing the model and the test data (features + labels) like this:**

```python
explainer = ClassifierExplainer(model, X_test, y_test)
```
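
(Editor's note, not part of the original report: the `XGBClassifierExplainer` branch in the traceback suggests the underlying model is an XGBoost classifier whose objective is unknown to shap. A quick way to check that assumption before constructing the explainer:)

```python
# Diagnostic sketch -- assumes an XGBoost-style classifier, as the traceback suggests.
# shap.TreeExplainer needs to know the model's objective whenever model_output != "raw".
print(type(model))
print(getattr(model, "objective", None))   # expect something like "multi:softprob"
```

If the objective comes back as `None` (for example, a model re-loaded from an old pickle), re-fitting or re-saving it with a current xgboost version is worth trying before experimenting with the `model_output` argument.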
| closed | 2021-08-10T10:00:12Z | 2021-08-17T12:33:03Z | https://github.com/oegedijk/explainerdashboard/issues/139 | [] | muhammad49 | 4 |
numba/numba | numpy | 9,692 | Jitted and non-jitten function give different output | ## Reporting a bug
- [x] I have tried using the latest released version of Numba (most recent is visible in the release notes: https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [x] I have included a self-contained code sample to reproduce the problem, i.e. it's possible to run as 'python bug.py'.
Running the script just below with and without `@jit` results in significant differences in the output (up to 0.01). Unfortunately, running it requires an extra input file, which can be found at this [link](https://drive.google.com/file/d/1k6oXrYpSqXL8MLtNWo3YOsLhz70WNN47) or in the archive attached. It just needs to be loaded with `np.load` (I hope this is allowed; I don't know how else to make this reproducible).
Happy to provide more info / checks if needed. Apologies for the spaghetti code as well.
[Ninja edit] Output of the script:
```
np.allclose: False
np.max(diff): 0.013723357478664322
```
---
```
#!/usr/bin/env python
# coding: utf-8

import numpy as np
from numba import jit
from copy import deepcopy as dc

c_kms = 299792.458
sqrt_pi = np.sqrt(np.pi)


@jit
def voigt(a, x):
    return np.exp(-(x**2)) - (a / sqrt_pi) / (x * x) * (
        np.exp(-(x**2)) ** 2 * (4 * x**2 * x**2 + 7 * x**2 + 4 + 1.5 / x**2)
        - 1.5 / x**2
        - 1
    )


@jit()
def jit_update_sum_of_voigts(wave, tau_lam, c_voigt, a, lambda_z, b, tau_min, tau_max):
    """
    Given arrays of parameters, compute the summed optical depth
    spectrum of absorbers using Voigt profiles.
    Uses the Tepper-Garcia 2006 approximation for the Voigt function.
    """
    u_max = np.clip(np.sqrt(c_voigt * (a / sqrt_pi) / tau_min), 5.0, np.inf)
    # ***assumes constant velocity bin spacings***
    dv = (wave[1] - wave[0]) / (0.5 * (wave[0] + wave[1])) * c_kms
    du = dv / b
    b_norm = b / c_kms
    n_pix = (u_max / du).astype(np.int32)
    w0 = np.searchsorted(wave, lambda_z)
    w0_m_npix = w0 - n_pix
    w0_p_npix = w0 + n_pix
    len_wave = len(wave)
    ### Faster by a factor of about 2x, 2.5x than previous implementation
    i1_i2 = np.empty((len(a), 2), dtype=np.int32)
    for i in range(len(a)):
        i1 = w0_m_npix[i] if w0_m_npix[i] > 0 else 0
        i2 = w0_p_npix[i] if w0_p_npix[i] < len_wave else len_wave
        i1_i2[i] = i1, i2
    i1_i2_sorting_inds = np.argsort(i1_i2[:, 1] - i1_i2[:, 0])
    i1_checked, i2_checked = len_wave, 0
    # now re-run
    for i in i1_i2_sorting_inds:
        i1, i2 = i1_i2[i]
        # if we have already checked this region, just skip this altogether
        if i1 > i1_checked and i2 < i2_checked:
            continue
        region_all_greater_tau_max = True
        for j in range(i1, i2):
            if tau_lam[j] < tau_max:
                # the clip is to prevent division by zero errors
                u = np.clip(
                    np.abs((wave[i1:i2] / lambda_z[i] - 1) / b_norm[i]), 1e-5, np.inf
                )
                tau_lam[i1:i2] += c_voigt[i] * voigt(a[i], u)
                region_all_greater_tau_max = False
                break  # if we find one point, no need to check the rest
        if region_all_greater_tau_max:
            # if this region has no points that need to be updated, extend the checked region
            i1_checked = i1 if i1 < i1_checked else i1_checked
            i2_checked = i2 if i2 > i2_checked else i2_checked
    tau_lam[tau_lam > tau_max] = tau_max + 1e-3


def no_jit_update_sum_of_voigts(
    wave, tau_lam, c_voigt, a, lambda_z, b, tau_min, tau_max
):
    """
    Given arrays of parameters, compute the summed optical depth
    spectrum of absorbers using Voigt profiles.
    Uses the Tepper-Garcia 2006 approximation for the Voigt function.
    """
    u_max = np.clip(np.sqrt(c_voigt * (a / sqrt_pi) / tau_min), 5.0, np.inf)
    # ***assumes constant velocity bin spacings***
    dv = (wave[1] - wave[0]) / (0.5 * (wave[0] + wave[1])) * c_kms
    du = dv / b
    b_norm = b / c_kms
    n_pix = (u_max / du).astype(np.int32)
    w0 = np.searchsorted(wave, lambda_z)
    w0_m_npix = w0 - n_pix
    w0_p_npix = w0 + n_pix
    len_wave = len(wave)
    ### Faster by a factor of about 2x, 2.5x than previous implementation
    i1_i2 = np.empty((len(a), 2), dtype=np.int32)
    for i in range(len(a)):
        i1 = w0_m_npix[i] if w0_m_npix[i] > 0 else 0
        i2 = w0_p_npix[i] if w0_p_npix[i] < len_wave else len_wave
        i1_i2[i] = i1, i2
    i1_i2_sorting_inds = np.argsort(i1_i2[:, 1] - i1_i2[:, 0])
    i1_checked, i2_checked = len_wave, 0
    # now re-run
    for i in i1_i2_sorting_inds:
        i1, i2 = i1_i2[i]
        # if we have already checked this region, just skip this altogether
        if i1 > i1_checked and i2 < i2_checked:
            continue
        region_all_greater_tau_max = True
        for j in range(i1, i2):
            if tau_lam[j] < tau_max:
                # the clip is to prevent division by zero errors
                u = np.clip(
                    np.abs((wave[i1:i2] / lambda_z[i] - 1) / b_norm[i]), 1e-5, np.inf
                )
                tau_lam[i1:i2] += c_voigt[i] * voigt(a[i], u)
                region_all_greater_tau_max = False
                break  # if we find one point, no need to check the rest
        if region_all_greater_tau_max:
            # if this region has no points that need to be updated, extend the checked region
            i1_checked = i1 if i1 < i1_checked else i1_checked
            i2_checked = i2 if i2 > i2_checked else i2_checked
    tau_lam[tau_lam > tau_max] = tau_max + 1e-3


# are the results the same
test_base = dict(np.load("./SimQSO_sum_of_voigts_testInput.npz"))
test_1 = dc(test_base)
test_2 = dc(test_base)
test_1["tau_lam"] = np.zeros_like(test_base["tau_lam"])
test_2["tau_lam"] = np.zeros_like(test_base["tau_lam"])
jit_update_sum_of_voigts(**test_1)
no_jit_update_sum_of_voigts(**test_2)
print("np.allclose: ", np.allclose(test_1["tau_lam"], test_2["tau_lam"]))
print("np.max(diff): ", np.max(test_1["tau_lam"] - test_2["tau_lam"]))
```
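
(Editor's aside, not part of the original report: differences of this size can arise purely from floating-point evaluation order, which compilation may change when operations are vectorized or fused; whether that is the cause here is not established. A standalone illustration of order-dependence in float32:)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)

s_pairwise = np.sum(x)        # NumPy's pairwise summation
s_serial = np.float32(0.0)
for v in x:
    s_serial += v             # strict left-to-right accumulation, different rounding
print(s_pairwise, s_serial, s_pairwise - s_serial)
```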
[report.zip](https://github.com/user-attachments/files/16564446/report.zip) | closed | 2024-08-09T16:36:30Z | 2024-09-20T01:56:47Z | https://github.com/numba/numba/issues/9692 | [
"more info needed",
"stale"
] | G-Francio | 4 |
ckan/ckan | api | 8,420 | Incorporate new theme to current development branch | The new design is taking shape nicely (the [preview site](https://github.com/ckan/ckan/discussions/8399) needs to be updated but will have the new theme soon) and it's time to start bringing it over to the current master branch.
There is a mechanism in `environment.py` to switch the base [public](https://github.com/ckan/ckan/blob/5117fbaeee5a60cf62e42abde2dcef19c747505a/ckan/config/environment.py#L43) and [template](https://github.com/ckan/ckan/blob/5117fbaeee5a60cf62e42abde2dcef19c747505a/ckan/config/environment.py#L191) folders, and for now we can just reuse that. We just need to add the new allowed values:
```diff
diff --git a/ckan/config/environment.py b/ckan/config/environment.py
index 9de9e7cf0..c35840989 100644
--- a/ckan/config/environment.py
+++ b/ckan/config/environment.py
@@ -40,7 +40,7 @@ def load_environment(conf: Union[Config, CKANConfig]):
"""
os.environ['CKAN_CONFIG'] = cast(str, conf['__file__'])
- valid_base_public_folder_names = ['public']
+ valid_base_public_folder_names = ['public', 'public-{suffix}']
static_files = conf.get('ckan.base_public_folder', 'public')
conf['ckan.base_public_folder'] = static_files
@@ -188,7 +188,7 @@ def update_config() -> None:
helpers.load_plugin_helpers()
# Templates and CSS loading from configuration
- valid_base_templates_folder_names = ['templates']
+ valid_base_templates_folder_names = ['templates', 'templates-{suffix}']
templates = config.get('ckan.base_templates_folder')
config['ckan.base_templates_folder'] = templates
```
Then we need to copy the `public` and `templates` folders of the new theme next to the current ones with a different name, e.g.:
```
ckan/
public
public-{suffix}
templates
templates-{suffix}
```
The `-{suffix}` folder should contain the *new* theme for now. When it's ready to go we will switch and make it the default but for now, to use it you will have to set these config options:
```
ckan.base_public_folder = public-{suffix}
ckan.base_templates_folder = templates-{suffix}
```
Now for the hard part, the name to replace `-{suffix}` :) Let's not call it "new", "v2", "3.0" or anything that will eventually get outdated, just a code name. Maybe something related to the color like cobalt, blue jay, ocean or whatever.
So the first pull request should include:
1. The changes in environment.py above
2. The new templates-{suffix} and public-{suffix} folders
3. Whatever initial changes you think make sense to get an initial feeling and validate that the assets are loading fine etc. Can be the homepage, just the header/footer, etc
After that we can move forward with PRs that focus on pages, functionalities etc.
Does this make sense @aleeexgreeen ?
| closed | 2024-09-04T13:54:29Z | 2024-10-14T11:06:02Z | https://github.com/ckan/ckan/issues/8420 | [] | amercader | 1 |
Lightning-AI/pytorch-lightning | data-science | 20,605 | Training crashes when using RichProgressBar with num_sanity_val_steps but no validation dataloader | ### Bug description
When using the `RichProgressBar` callback and setting `num_sanity_val_steps > 0`, but not providing a validation dataloader in the `LightningDataModule`, training crashes. This only happens when `val_dataloader` explicitly returns an empty list.
### What version are you seeing the problem on?
v2.5
### How to reproduce the bug
```python
import lightning as pl
from lightning.pytorch.callbacks import RichProgressBar
from torch.utils.data import DataLoader, Dataset
import torch


class RandomDataset(Dataset):
    def __init__(self, size):
        self.data = torch.randn(size, 10)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], torch.tensor(0)  # Dummy target


class MinimalDataModule(pl.LightningDataModule):
    def train_dataloader(self):
        return DataLoader(RandomDataset(100), batch_size=10)

    # when removing the val_dataloader method completely, the error is not raised
    def val_dataloader(self):
        return []


class MinimalModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(10, 1)

    def forward(self, x):
        return self.linear(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self(x), y.float().unsqueeze(1))
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self(x), y.float().unsqueeze(1))
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.01)


trainer = pl.Trainer(
    max_epochs=1,
    num_sanity_val_steps=1,  # Set this to 0 to avoid the error
    callbacks=[RichProgressBar()]
)
model = MinimalModel()
data = MinimalDataModule()
trainer.fit(model, datamodule=data)
```
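
(Editor's sketch, based on the comments in the reproducer above rather than a confirmed fix: the reproducer itself notes that removing `val_dataloader` entirely, or setting `num_sanity_val_steps=0`, avoids the error.)

```python
# Workaround sketch, reusing pl / DataLoader / RandomDataset from the reproducer above.
class MinimalDataModuleNoVal(pl.LightningDataModule):
    def train_dataloader(self):
        return DataLoader(RandomDataset(100), batch_size=10)
    # no val_dataloader defined -> no sanity validation runs, so the
    # RichProgressBar assertion in on_sanity_check_end is never reached

# Alternatively, keep the original datamodule and skip the sanity check:
# trainer = pl.Trainer(max_epochs=1, num_sanity_val_steps=0, callbacks=[RichProgressBar()])
```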
### Error messages and logs
```
File "C:\Users\tscha\.conda\envs\GRIPSS\lib\site-packages\lightning\pytorch\callbacks\progress\rich_progress.py", line 379, in on_sanity_check_end
    assert self.val_sanity_progress_bar_id is not None
AssertionError
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA GeForce RTX 4080 Laptop GPU
- available: True
- version: 12.6
* Lightning:
- lightning: 2.5.0.post0
- lightning-utilities: 0.12.0
- pytorch-lightning: 2.5.0.post0
- torch: 2.6.0+cu126
- torchaudio: 2.6.0+cu126
- torchmetrics: 1.6.1
- torchvision: 0.21.0+cu126
* Packages:
- aiofiles: 24.1.0
- aiohappyeyeballs: 2.4.0
- aiohttp: 3.10.5
- aiosignal: 1.3.1
- annotated-types: 0.7.0
- antlr4-python3-runtime: 4.9.3
- anyio: 4.4.0
- argon2-cffi: 23.1.0
- argon2-cffi-bindings: 21.2.0
- arrow: 1.3.0
- astroid: 3.2.4
- asttokens: 2.4.1
- async-lru: 2.0.4
- async-timeout: 4.0.3
- attrs: 24.2.0
- autocommand: 2.2.2
- azure-core: 1.31.0
- azure-eventhub: 5.12.1
- azure-identity: 1.17.1
- babel: 2.14.0
- backports.tarfile: 1.2.0
- beautifulsoup4: 4.12.3
- black: 24.8.0
- blackdoc: 0.3.9
- bleach: 6.1.0
- brotli: 1.1.0
- bump2version: 1.0.1
- cached-property: 1.5.2
- cachetools: 5.5.0
- certifi: 2024.8.30
- cffi: 1.17.0
- cfgv: 3.3.1
- chardet: 5.2.0
- charset-normalizer: 3.3.2
- click: 8.1.7
- colorama: 0.4.6
- comm: 0.2.2
- contourpy: 1.2.1
- coverage: 7.6.1
- cryptography: 43.0.1
- cycler: 0.12.1
- dataclasses-json: 0.6.7
- debugpy: 1.8.5
- decorator: 5.1.1
- defusedxml: 0.7.1
- deprecated: 1.2.14
- detox: 0.19
- dill: 0.3.8
- dirtyjson: 1.0.8
- distlib: 0.3.8
- distro: 1.9.0
- dnspython: 2.6.1
- email-validator: 2.2.0
- entrypoints: 0.4
- eventlet: 0.36.1
- exceptiongroup: 1.2.2
- execnet: 2.1.1
- executing: 2.0.1
- fastapi: 0.112.2
- fastapi-cli: 0.0.5
- fastjsonschema: 2.20.0
- filelock: 3.15.4
- flake8: 7.1.1
- fonttools: 4.53.1
- fqdn: 1.5.1
- frozenlist: 1.4.1
- fsspec: 2024.6.1
- greenlet: 3.0.3
- gripss-extraction-service: 0.1.0
- gripss-list-matching: 0.1.2
- gripss-service-matching-api: 0.1.0
- gripss-service-matching-backend: 0.1.0
- gripss-service-matching-helpers: 0.1.0
- h11: 0.14.0
- h2: 4.1.0
- hpack: 4.0.0
- httpcore: 1.0.5
- httptools: 0.6.1
- httpx: 0.27.0
- hydra-core: 1.3.2
- hyperframe: 6.0.1
- identify: 2.6.0
- idna: 3.8
- importlib-metadata: 8.4.0
- importlib-resources: 6.4.4
- inflect: 7.3.1
- iniconfig: 2.0.0
- ipykernel: 6.29.5
- ipython: 8.26.0
- ipywidgets: 8.1.5
- isoduration: 20.11.0
- isort: 5.13.2
- jaraco.context: 5.3.0
- jaraco.functools: 4.0.1
- jaraco.text: 3.12.1
- jedi: 0.19.1
- jinja2: 3.1.4
- jiter: 0.5.0
- joblib: 1.4.2
- json5: 0.9.25
- jsonpatch: 1.33
- jsonpointer: 3.0.0
- jsonschema: 4.23.0
- jsonschema-specifications: 2023.12.1
- jupyter-client: 8.6.2
- jupyter-core: 5.7.2
- jupyter-events: 0.10.0
- jupyter-lsp: 2.2.5
- jupyter-server: 2.14.2
- jupyter-server-terminals: 0.5.3
- jupyterlab: 4.2.4
- jupyterlab-pygments: 0.3.0
- jupyterlab-server: 2.27.3
- jupyterlab-widgets: 3.0.13
- kafka-python-ng: 2.2.2
- kiwisolver: 1.4.5
- langchain: 0.2.14
- langchain-community: 0.2.12
- langchain-core: 0.2.35
- langchain-text-splitters: 0.2.2
- langsmith: 0.1.104
- lightning: 2.5.0.post0
- lightning-utilities: 0.12.0
- llama-index-core: 0.10.56
- llama-index-embeddings-openai: 0.1.11
- llama-index-llms-openai: 0.1.26
- lxml: 5.3.0
- markdown-it-py: 3.0.0
- markupsafe: 2.1.5
- marshmallow: 3.22.0
- matplotlib: 3.9.2
- matplotlib-inline: 0.1.7
- mccabe: 0.7.0
- mdurl: 0.1.2
- mistune: 3.0.2
- mongoengine: 0.28.2
- more-itertools: 10.4.0
- motor: 3.5.1
- mpmath: 1.3.0
- msal: 1.31.0
- msal-extensions: 1.2.0
- multidict: 6.0.5
- munkres: 1.1.4
- mypy-extensions: 1.0.0
- nbclient: 0.10.0
- nbconvert: 7.16.4
- nbformat: 5.10.4
- nest-asyncio: 1.6.0
- networkx: 3.3
- nltk: 3.9.1
- nodeenv: 1.9.1
- notebook-shim: 0.2.4
- numpy: 1.26.4
- omegaconf: 2.3.0
- openai: 1.42.0
- ordered-set: 4.1.0
- orjson: 3.10.7
- overrides: 7.7.0
- packaging: 24.1
- pandas: 2.2.2
- pandocfilters: 1.5.0
- parso: 0.8.4
- pathspec: 0.12.1
- pickleshare: 0.7.5
- pillow: 10.4.0
- pip: 24.2
- pkgutil-resolve-name: 1.3.10
- platformdirs: 4.2.2
- pluggy: 0.13.1
- portalocker: 2.10.1
- pre-commit: 3.8.0
- prometheus-client: 0.20.0
- prompt-toolkit: 3.0.47
- psutil: 6.0.0
- pure-eval: 0.2.3
- py: 1.11.0
- pycodestyle: 2.12.1
- pycparser: 2.22
- pydantic: 2.8.2
- pydantic-core: 2.20.1
- pyflakes: 3.2.0
- pygments: 2.18.0
- pyjwt: 2.9.0
- pylint: 3.2.6
- pymongo: 4.8.0
- pymupdf: 1.24.9
- pymupdfb: 1.24.9
- pyparsing: 3.1.4
- pyproject-api: 1.7.1
- pyside6: 6.7.2
- pysocks: 1.7.1
- pytest: 8.3.2
- pytest-cov: 5.0.0
- pytest-xdist: 3.6.1
- python-dateutil: 2.9.0
- python-docx: 1.1.2
- python-dotenv: 1.0.1
- python-json-logger: 2.0.7
- python-multipart: 0.0.9
- pytorch-lightning: 2.5.0.post0
- pytz: 2024.1
- pywin32: 306
- pywinpty: 2.0.13
- pyyaml: 6.0.2
- pyzmq: 26.2.0
- referencing: 0.35.1
- regex: 2024.7.24
- requests: 2.32.3
- rfc3339-validator: 0.1.4
- rfc3986-validator: 0.1.1
- rich: 13.7.1
- rpds-py: 0.20.0
- send2trash: 1.8.3
- setuptools: 71.0.4
- shellingham: 1.5.4
- shiboken6: 6.7.2
- six: 1.16.0
- sniffio: 1.3.1
- soupsieve: 2.5
- sqlalchemy: 2.0.32
- stack-data: 0.6.2
- starlette: 0.38.2
- sympy: 1.13.1
- tenacity: 8.5.0
- tender-service-apis: 0.1.0
- terminado: 0.18.1
- tiktoken: 0.7.0
- tinycss2: 1.3.0
- toml: 0.10.2
- tomli: 2.0.1
- tomlkit: 0.13.2
- torch: 2.6.0+cu126
- torchaudio: 2.6.0+cu126
- torchmetrics: 1.6.1
- torchvision: 0.21.0+cu126
- tornado: 6.4.1
- tox: 3.6.1
- tqdm: 4.66.5
- traitlets: 5.14.3
- typeguard: 4.3.0
- typer: 0.12.5
- typer-slim: 0.12.5
- types-python-dateutil: 2.9.0.20240821
- typing-extensions: 4.12.2
- typing-inspect: 0.9.0
- typing-utils: 0.1.0
- tzdata: 2024.1
- ukkonen: 1.0.1
- unicodedata2: 15.1.0
- uri-template: 1.3.0
- urllib3: 2.2.2
- uvicorn: 0.30.6
- virtualenv: 20.26.3
- watchfiles: 0.23.0
- wcwidth: 0.2.13
- webcolors: 24.8.0
- webencodings: 0.5.1
- websocket-client: 1.8.0
- websockets: 13.0
- wheel: 0.44.0
- widgetsnbextension: 4.0.13
- win-inet-pton: 1.1.0
- wrapt: 1.16.0
- yarl: 1.9.4
- zipp: 3.20.0
- zstandard: 0.23.0
* System:
- OS: Windows
- architecture:
- 64bit
- WindowsPE
- processor: Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
- python: 3.10.14
- release: 10
- version: 10.0.26100
</details>
### More info
I recreated this issue on a Windows machine and on a Mac. | open | 2025-02-27T06:48:10Z | 2025-02-27T06:48:25Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20605 | [
"bug",
"needs triage",
"ver: 2.5.x"
] | t-schanz | 0 |
flairNLP/flair | nlp | 3,431 | [Question]: Fine-tune transformer model with TransformerWordEmbedding | ### Question
I am using `TransformerWordEmbedding` to obtain contextualized embeddings from RoBERTa pre-trained weights. I was seeking more clarity about what the `fine-tune` parameter does. What is meant by fine-tuneable embeddings in this case? Does this allow for backpropagation through all of the layers in the RoBERTa model? If not, is there a way to achieve this?
I am using these contextualized embeddings as pre-processed inputs to my custom GNN model, and ideally, I would like to backprop back to all of the RoBERTa layers. | closed | 2024-03-20T18:07:27Z | 2024-03-22T10:07:09Z | https://github.com/flairNLP/flair/issues/3431 | [
"question"
] | rohandas14 | 1 |
graphql-python/graphene-django | django | 621 | Received incompatible instance | Hi,
I am trying to run this query: `query { getNeighborhoodsAround(postalCode:2600, countryCode:"AU", distance:2) { name } }`
```
import graphene
import pymongo
from graphene_django.types import DjangoObjectType
from pymongo import MongoClient


class Query(object):

    def resolve_get_neighborhoods_around(self, info, **kwargs):
        countryCode = kwargs.get('countryCode')
        postalCode = kwargs.get('postalCode')
        distance = kwargs.get('distance')

        NeighborhoodResult = Neighborhood.objects.filter(
            postalCode=postalCode,
            countryCode=countryCode,
            city=1
        )

        if NeighborhoodResult is not None:
            obj = NeighborhoodResult.first()
            lat = getattr(obj, 'latitude')
            longitude = getattr(obj, 'longitude')
            metersPerKm = 1000

            client = pymongo.MongoClient("mongodb+srv://randomLetters.mongodb.net/something")
            db = client.NextTown

            return db.ingredients_neighborhood.aggregate([
                { "$geoNear": {
                    "near": {
                        "type": "Point",
                        "coordinates": [longitude, lat]
                    },
                    "maxDistance": distance * metersPerKm,
                    "spherical": True,
                    "distanceField": "distance",
                    "distanceMultiplier": 0.001
                }}])

        return None
```
This method runs a Mongo query that should return all the data. It works in the Mongo shell but not with Graphene.
The response I get is:
```
...
{
"message": "Received incompatible instance \"{'_id': '5c71d81cbbfb2ca091318e04', 'state': 'Australian Capital Territory', 'latitude': -35.2688, 'longitude': 149.1247, 'accuracy': 4, 'countryCode': 'AU', 'postalCode': 2612, 'name': 'Turner', 'countyProvinceName': 'CANBERRA', 'stateCode': 'ACT', 'communityName': '', 'location': {'type': 'Point', 'coordinates': [149.1247, -35.2688]}, 'id': 38538, 'city': 0, 'distance': 1.6653032424960552}\"."
}
],
"data": {
"getNeighborhoodsAround": [
null,
null,
null,
null,
null,
null,
null,
null
]
}
}
```
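
(Editor's sketch, not from the original thread: graphene-django's `DjangoObjectType` rejects values that are not instances of its Django model, which is why the raw pymongo dictionaries above come back as "incompatible instances". Assuming the field is typed with a `Neighborhood`-backed `DjangoObjectType`, one option is to rebuild unsaved model instances from each aggregation document; `run_geonear_aggregation` below is a hypothetical helper wrapping the `aggregate()` call shown earlier.)

```python
def resolve_get_neighborhoods_around(self, info, **kwargs):
    docs = run_geonear_aggregation(**kwargs)  # hypothetical wrapper around the aggregate() call above
    wanted = ("id", "name", "state", "postalCode", "countryCode", "latitude", "longitude")
    # Unsaved Neighborhood instances satisfy DjangoObjectType's instance check.
    return [Neighborhood(**{k: d[k] for k in wanted if k in d}) for d in docs]
```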
Any suggestions? | closed | 2019-04-25T22:49:19Z | 2019-05-06T11:11:09Z | https://github.com/graphql-python/graphene-django/issues/621 | [] | pyjavo | 4 |
Miserlou/Zappa | django | 1,256 | Stage name prepended to request path, using certificate (regression) | I have been successfully using Zappa with Django with a custom domain/certificate.
I just updated to zappa version 0.45.1 and on the next deployment django started receiving the incorrect path which erroneously included the stage name as a prefix.
Reverting to zappa version 0.43.2 and deploying again fixed the issue.
It looks like recent updates have introduced a regression.
(Settings: slim_handler=true, use_precompiled_packages=true) | closed | 2017-11-24T12:00:59Z | 2017-11-24T12:06:12Z | https://github.com/Miserlou/Zappa/issues/1256 | [] | python1981 | 1 |
K3D-tools/K3D-jupyter | jupyter | 223 | 2.8.0 does not display plot | I upgraded k3d to 2.8.0 with conda. With no other changes to my Python environment, 2.7.4 gives expected output

but 2.8.0 does not plot any points

Is this a bug, or did something change in the interface?
| closed | 2020-05-06T13:12:25Z | 2020-05-06T13:33:14Z | https://github.com/K3D-tools/K3D-jupyter/issues/223 | [] | johnomotani | 4 |
plotly/jupyterlab-dash | dash | 7 | Installation npm run build | I am following the installation instructions, but I can't seem to run:
`npm run build`
It gives me the following error:

```
jupyterlab_dash@0.1.0 build /home/hamza/jupyterlab-dash
tsc
/usr/bin/env: ‘node’: No such file or directory
npm ERR! Linux 4.4.0-141-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "run" "build"
npm ERR! node v4.2.6
npm ERR! npm v3.5.2
npm ERR! file sh
npm ERR! code ELIFECYCLE
npm ERR! errno ENOENT
npm ERR! syscall spawn
npm ERR! jupyterlab_dash@0.1.0 build: `tsc`
npm ERR! spawn ENOENT
npm ERR!
npm ERR! Failed at the jupyterlab_dash@0.1.0 build script 'tsc'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the jupyterlab_dash package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! tsc
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs jupyterlab_dash
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls jupyterlab_dash
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /home/hamza/jupyterlab-dash/npm-debug.log
```

How do I fix this? | closed | 2019-01-17T08:16:09Z | 2019-01-17T08:50:34Z | https://github.com/plotly/jupyterlab-dash/issues/7 | [] | hamzaafridi | 1 |
influxdata/influxdb-client-python | jupyter | 333 | Write DataPoint with specific time |
I have sensor data in an MSSQL database, and I want to write it to InfluxDB.
__Steps to reproduce:__
1. Read sensor data line by line.
[**(4893172, 'K-311227', 17, datetime.datetime(2021, 7, 28, 8, 23, 34, 993000), 4, 5, 14, 0, 0.0, 0.0, 62.587481, 0, 22710, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 'SERVER')**]
2. Extract the data into a Point and write it. There is no error, but I **cannot** see the data in "Data Explorer" in the InfluxDB UI.
```
point = Point(name)\
.tag("host", row[25])\
.tag("sender", sender)\
.tag("Heat_No", row[1])\
.field("event_code", row[2])\
.field("step_no", row[4])\
.field("process_no", row[5])\
.field("tap_no", row[6])\
.field("voltage_value", row[7])\
.field("current_value", row[8])\
.field("insert_amount", row[9])\
.field("pwr_factor", row[10])\
.field("temperature", row[11])\
.field("total_power", row[12])\
.time(row[3])
write_api.write(bucket, org, point)
```
3. But when I write with **.time(datetime.utcnow(), WritePrecision.NS)**, it works. I **can** see the data in "Data Explorer".
```
point = Point(name)\
.tag("host", row[25])\
.tag("sender", sender)\
.tag("Heat_No", row[1])\
.field("event_code", row[2])\
.field("step_no", row[4])\
.field("process_no", row[5])\
.field("tap_no", row[6])\
.field("voltage_value", row[7])\
.field("current_value", row[8])\
.field("insert_amount", row[9])\
.field("pwr_factor", row[10])\
.field("temperature", row[11])\
.field("total_power", row[12])\
.time(datetime.utcnow(), WritePrecision.NS)
write_api.write(bucket, org, point)
```
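
(Editor's aside, not from the original report: two things may be worth checking under these symptoms. First, Data Explorer's default window only shows recent data, so points written with past MSSQL timestamps can simply fall outside the selected time range. Second, a naive `datetime` is treated as UTC; if `row[3]` is local time, attaching an explicit timezone makes the written timestamp unambiguous. A sketch, with the offset purely hypothetical:)

```python
from datetime import timezone, timedelta

sensor_tz = timezone(timedelta(hours=7))   # hypothetical: whatever zone the MSSQL timestamps use

ts = row[3].replace(tzinfo=sensor_tz)      # make the naive MSSQL datetime timezone-aware
point = (
    Point(name)
    .tag("host", row[25])
    .field("event_code", row[2])
    .time(ts, WritePrecision.NS)           # same write call as above, just with an aware timestamp
)
write_api.write(bucket, org, point)
```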
__Expected behavior:__
I want each point to be written with the specific timestamp coming from the source data (`row[3]`), not the current time.
__Actual behavior:__
__Specifications:__
- Client Version: Latest
- InfluxDB Version: 2.0.8
- Platform: Anaconda/Python 3.8/Windows 10. IDE: Spyder, Jupyter.
| closed | 2021-09-27T02:49:41Z | 2021-09-29T05:56:17Z | https://github.com/influxdata/influxdb-client-python/issues/333 | [
"question",
"wontfix"
] | ntdgo | 6 |