repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
QuivrHQ/quivr | api | 3,170 | [Backend] KMS upload limits | ## KMS Limits:
### Uploads (Local)
* [x] Single file size? (50 MB / file)
* [x] Single folder total size? No limits
* Concurrent file uploads? (rate limit the upload route; for now whole folders can't be uploaded)
* [x] # Files per folder? -> No limits
* Total storage per tier of users?
  * 500 MB free
  * 100 GB premium
### Knowledge linked to Brains (sync & local)
* Total size for all brains:
  * free: 500 MB
  * premium: 100 GB for all brains (get user sizes in production)
* No limit on # of brains
* Nb of files?
  * correct metric: **# of chunks in brain** -> surface this info back to the user (too many chunks / files in brain).
* File size across syncs + local? No difference
* # of sync files per user to add to a brain?
* Limit on # of chunks in a brain (sketch below)
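A rough sketch of surfacing that limit to the user (the limit value and all names are hypothetical):
```python
MAX_CHUNKS_PER_BRAIN = 100_000  # hypothetical limit, to be tuned from production data

def check_brain_capacity(brain) -> None:
    """Reject new knowledge when a brain already holds too many chunks."""
    if brain.chunk_count >= MAX_CHUNKS_PER_BRAIN:
        raise ValueError("Too many chunks in this brain; remove some files before adding more.")
```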
### KM Processing
* # max concurrent processing of files?
  * max queue size of 10_000 (check); 4 workers
* # max concurrent processing files per user
  * fairness scheduler / per user
* Max time processing per file
  * p99 celery worker
* Max batch size sync
* Randomize across users
* Prioritize refresh for connected users | open | 2024-09-09T08:48:46Z | 2025-01-27T16:06:37Z | https://github.com/QuivrHQ/quivr/issues/3170 | [
"area: backend"
] | linear[bot] | 2 |
supabase/supabase-py | flask | 45 | update insert and filter docs for python | # Improve documentation
## Link
https://supabase.io/docs/guides/client-libraries#managing-data
## Describe the problem
Python examples are missing for filtering, and the insert example has not been updated yet.
## Describe the improvement
Add python examples for filtering and insert section.
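For reference, a sketch of what those examples could look like (using the `supabase-py` client API; the table and values are made up):
```python
from supabase import create_client

supabase = create_client("https://xyzcompany.supabase.co", "public-anon-key")

# insert
supabase.table("countries").insert({"name": "Denmark"}).execute()

# filter
data = supabase.table("countries").select("*").eq("name", "Denmark").execute()
```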
| closed | 2021-09-11T14:16:45Z | 2021-09-11T14:18:27Z | https://github.com/supabase/supabase-py/issues/45 | [
"documentation"
] | sureshdsk | 1 |
encode/apistar | api | 158 | Url encoding on TestClient requests | I'm adding tests and I have some URLs with spaces in them. The tests fail because the URL parameters are not properly decoded by the time they reach the handler function. For example, I have a handler function that receives one parameter, "time":
`routes.py`
```python
Route('/{time}', 'GET', timestamp)
```
`views.py`
```python
def timestamp(time):
    print(time)
    # ... code here
```
`test`
```python
def test_http_request():
    """
    Testing a view, using the test client.
    """
    client = TestClient()
    response = client.get("/December 15, 2015")
```
`time` in `timestamp` function is now `December%2015,%202015` instead of `December 15, 2015`
I've also tried with `client.get("/December%2015,%202015")`
Am I doing something wrong or is this a bug?
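For now I am working around it by decoding manually in the view (a sketch; I am not sure this is the intended usage):
```python
from urllib.parse import unquote

def timestamp(time):
    time = unquote(time)  # "December%2015,%202015" -> "December 15, 2015"
    print(time)
```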
Thanks | closed | 2017-05-21T18:04:11Z | 2017-05-22T23:48:32Z | https://github.com/encode/apistar/issues/158 | [] | marcosfede | 2 |
apify/crawlee-python | automation | 928 | Implement a playwright-based `HttpClient` implementation | There are two advantages to this:
1. `PlaywrightCrawler` could use it for `send_request` which would make those "AJAX calls" look more natural and probably less likely to be blocked
2. We could use it in e.g. `ParselCrawler` as another option to avoid blocking | open | 2025-01-22T12:54:40Z | 2025-03-06T09:44:21Z | https://github.com/apify/crawlee-python/issues/928 | [
"enhancement",
"t-tooling"
] | janbuchar | 0 |
mage-ai/mage-ai | data-science | 4,941 | [BUG] free -t -m command not compatible with Windows shell | ### Mage version
0.9.68
### Describe the bug
When running any pipeline using conda on Windows (no Docker), the below exception keeps popping up:
```
'free' is not recognized as an internal or external command,
operable program or batch file.
Command 'free -t -m' returned non-zero exit status 1.
```
The command seems to get the current memory usage.
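A possible cross-platform replacement (just a sketch, assuming `psutil` would be an acceptable dependency) is to query memory without shelling out:
```python
import psutil

def get_memory_usage_mb() -> dict:
    """Cross-platform stand-in for parsing `free -t -m` output."""
    mem = psutil.virtual_memory()
    return {
        "total": mem.total // (1024 * 1024),
        "used": mem.used // (1024 * 1024),
        "free": mem.available // (1024 * 1024),
    }
```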
### To reproduce
1. Execute `mage init mage` in cmd terminal
2. Execute `mage start mage` in cmd terminal
3. Start running the pipeline `example_pipeline`
4. Check the log in terminal
### Expected behavior
The exception should not be popping up.
### Screenshots

### Operating system
conda virtual environment on Windows 10 Pro
### Additional context
It looks like the command `free -t -m` is not available on Windows.
I did a code search:
https://github.com/mage-ai/mage-ai/blob/ebdd95ff3410602aeb37c05692dc93f864757274/mage_ai/orchestration/utils/resources.py#L22 | closed | 2024-04-15T21:28:43Z | 2024-04-16T09:42:26Z | https://github.com/mage-ai/mage-ai/issues/4941 | [
"bug"
] | alangan17 | 0 |
iperov/DeepFaceLab | deep-learning | 5,273 | Why does an error occur when I run Quick96? Thanks | Running trainer.
Loading model...
Model first run.
Using plaidml.keras.backend backend.
INFO:plaidml:Opening device "opencl_amd_ellesmere.0"
Error: The Keras backend function 'random_binomial' is not yet implemented in Plaid. You can help us prioritize by letting us know if this function is important to you, and as always, contributions are welcome!
Traceback (most recent call last):
File "F:\deep\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\mainscripts\Trainer.py", line 50, in trainerThread
device_args=device_args)
File "F:\deep\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\models\Model_Quick96\Model.py", line 21, in __init__
ask_random_flip=False)
File "F:\deep\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\models\ModelBase.py", line 145, in __init__
self.onInitialize()
File "F:\deep\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\models\Model_Quick96\Model.py", line 161, in onInitialize
self.src_train = K.function ([self.model.warped_src, self.model.target_src, self.model.target_srcm], [src_loss], self.src_dst_opt.get_updates( src_loss, self.model.src_trainable_weights) )
File "F:\deep\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\nnlib\nnlib.py", line 932, in get_updates
lr_rnds = [ K.random_binomial(K.int_shape(p), p=self.lr_dropout, dtype=K.dtype(p)) for p in params ]
File "F:\deep\DeepFaceLab_OpenCL_SSE\_internal\DeepFaceLab\nnlib\nnlib.py", line 932, in <listcomp>
lr_rnds = [ K.random_binomial(K.int_shape(p), p=self.lr_dropout, dtype=K.dtype(p)) for p in params ]
File "F:\deep\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\plaidml\keras\backend.py", line 1070, in random_binomial
_report_unimplemented('random_binomial')
File "F:\deep\DeepFaceLab_OpenCL_SSE\_internal\python-3.6.8\lib\site-packages\plaidml\keras\backend.py", line 103, in _report_unimplemented
raise NotImplementedError(report)
NotImplementedError: The Keras backend function 'random_binomial' is not yet implemented in Plaid. You can help us prioritize by letting us know if this function is important to you, and as always, contributions are welcome! | open | 2021-02-03T13:25:30Z | 2023-06-08T22:21:00Z | https://github.com/iperov/DeepFaceLab/issues/5273 | [] | super18863 | 1 |
pytorch/pytorch | python | 149,048 | multinomial does not preserve dynamic dimension | ### 🐛 Describe the bug
`torch.multinomial` expects a fixed (static) value for the number of samples, so the dimension gets specialized during export. It should be allowed to stay dynamic.
```python
import torch
class Model(torch.nn.Module):
    def forward(self, x, y):
        return torch.multinomial(x, y.shape[0])
model = Model()
inputs = (
    torch.tensor([[4, 5], [6, 7]], dtype=torch.float32),
    torch.tensor([0, 5], dtype=torch.int64),
)
model(*inputs)
DYNAMIC = torch.export.Dim.DYNAMIC
ep = torch.export.export(
    model, inputs, dynamic_shapes={"x": {0: DYNAMIC, 1: DYNAMIC}, "y": {0: DYNAMIC}}
)
print(ep)
```
Raises an error:
```
- Not all values of RelaxedUnspecConstraint(L['y'].size()[0]) are valid because L['y'].size()[0] was inferred to be a constant (2).
```
### Versions
```
PyTorch version: 2.7.0.dev20250311+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] model-explorer-onnx==0.3.4
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.25.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-array-api==0.3.0
[pip3] onnx-extended==0.3.0
[pip3] onnxruntime_extensions==0.13.0
[pip3] onnxruntime-genai-cuda==0.6.0
[pip3] onnxruntime-training==1.21.0+cu126
[pip3] onnxscript==0.3.0.dev20250301
[pip3] optree==0.14.0
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.7.0.dev20250311+cu126
[pip3] torch_geometric==2.4.0
[pip3] torchaudio==2.6.0.dev20250311+cu126
[pip3] torchmetrics==1.6.2
[pip3] torchvision==0.22.0.dev20250311+cu126
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | closed | 2025-03-12T15:03:08Z | 2025-03-19T23:16:55Z | https://github.com/pytorch/pytorch/issues/149048 | [
"oncall: pt2",
"oncall: export"
] | xadupre | 0 |
fastapi/sqlmodel | sqlalchemy | 594 | How to add current date time by default on a table declaration? | ### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
How do I add a field to the table with the current datetime as the default? | closed | 2023-05-15T00:36:07Z | 2025-03-16T12:32:50Z | https://github.com/fastapi/sqlmodel/issues/594 | [] | fraddygil | 17 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 2,861 | Remove old GitHub Releases for this project? | We currently don't rely on [GitHub Releases](https://github.com/jupyterhub/zero-to-jupyterhub-k8s/releases), but instead on git tags to declare new versions etc. If someone relied on GitHub Releases, for example because they use a github api to access the latest GitHub release, they can do so with `helm show chart` from the helm chart repo directly instead.
The situation is that whenever we make a new release by pushing a git tag, we now also end up needing to create a GitHub Release to avoid the GitHub UI showing us an old version.

So, should we simply remove all old github releases?
| closed | 2022-09-07T08:39:35Z | 2022-10-31T20:33:27Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/2861 | [] | consideRatio | 2 |
flasgger/flasgger | flask | 168 | does not support upload | """
test
---
tags:
- test
parameters:
- in: file
  name: formData
  type: file
  required: true
responses:
  200:
    description:
"""
Hello, this is my YAML. It cannot upload files successfully; I saw the request payload does not have "Content-Disposition" on the POST request. Is this a bug? I need your help; can you give me an upload example? Thanks very much. | open | 2017-11-03T08:51:40Z | 2020-04-06T21:38:43Z | https://github.com/flasgger/flasgger/issues/168 | [
"bug",
"help wanted",
"hacktoberfest"
] | yumen | 2 |
httpie/cli | api | 913 | PATCH support | Hi,
Is there a possibility to use other HTTP verbs besides GET, PUT and DELETE? I would like to use PATCH, but it returns an error: unrecognised arguments.
Thanks! | closed | 2020-05-11T14:18:20Z | 2020-05-11T14:48:18Z | https://github.com/httpie/cli/issues/913 | [] | LanderMoerkerke | 1 |
pytest-dev/pytest-django | pytest | 180 | Why do you override settings and set DEBUG=False? | This happens after 2821e4178fa09732cbdaaf4ac369ada0e0b8f558. What was the reason for it?
| closed | 2014-10-13T10:46:20Z | 2014-10-13T14:14:36Z | https://github.com/pytest-dev/pytest-django/issues/180 | [] | andreif | 1 |
adbar/trafilatura | web-scraping | 136 | Tables not removed | I have just installed trafilatura 0.9.3 with Python 3.9.6.
The attached file (apache.txt, which is actually an HTML file) contains tables that are not removed when I run:
$ trafilatura --no-tables <apache.txt
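The same thing happens through the Python API (a sketch of what I ran):
```python
import trafilatura

with open("apache.txt", encoding="utf-8") as f:
    html = f.read()

text = trafilatura.extract(html, include_tables=False)  # tables still appear in the output
```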
[apache.txt](https://github.com/adbar/trafilatura/files/7491052/apache.txt) | closed | 2021-11-06T18:35:11Z | 2021-12-08T17:12:07Z | https://github.com/adbar/trafilatura/issues/136 | [
"bug"
] | pieterhartel | 9 |
lukas-blecher/LaTeX-OCR | pytorch | 98 | Error while trying to run the GUI | Hello,
I appreciate the effort and time you are putting into this project; it is indeed the utility most of us need but didn't know existed.
However, I have an issue.
I have just installed all the requirements and then tried running `python gui.py`, and got the following error:
Traceback (most recent call last):
File "D:\Latex_OCR\LaTeX-OCR-main\LaTeX-OCR-main\gui.py", line 274, in <module>
ex = App(arguments)
File "D:\Latex_OCR\LaTeX-OCR-main\LaTeX-OCR-main\gui.py", line 26, in __init__
self.initModel()
File "D:\Latex_OCR\LaTeX-OCR-main\LaTeX-OCR-main\gui.py", line 33, in initModel
args, *objs = pix2tex.initialize(self.args)
File "D:\Latex_OCR\LaTeX-OCR-main\LaTeX-OCR-main\pix2tex.py", line 55, in initialize
model.load_state_dict(torch.load(args.checkpoint, map_location=args.device))
File "C:\Users\amuka\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1482, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Model:
Missing key(s) in state_dict: "decoder.net.attn_layers.layers.0.0.0.weight", "decoder.net.attn_layers.layers.0.0.0.bias", "decoder.net.attn_layers.layers.1.0.0.weight", "decoder.net.attn_layers.layers.1.0.0.bias", "decoder.net.attn_layers.layers.2.0.0.weight", "decoder.net.attn_layers.layers.2.0.0.bias", "decoder.net.attn_layers.layers.2.1.net.3.weight", "decoder.net.attn_layers.layers.2.1.net.3.bias", "decoder.net.attn_layers.layers.3.0.0.weight", "decoder.net.attn_layers.layers.3.0.0.bias", "decoder.net.attn_layers.layers.4.0.0.weight", "decoder.net.attn_layers.layers.4.0.0.bias", "decoder.net.attn_layers.layers.5.0.0.weight", "decoder.net.attn_layers.layers.5.0.0.bias", "decoder.net.attn_layers.layers.5.1.net.3.weight", "decoder.net.attn_layers.layers.5.1.net.3.bias", "decoder.net.attn_layers.layers.6.0.0.weight", "decoder.net.attn_layers.layers.6.0.0.bias", "decoder.net.attn_layers.layers.7.0.0.weight", "decoder.net.attn_layers.layers.7.0.0.bias", "decoder.net.attn_layers.layers.8.0.0.weight", "decoder.net.attn_layers.layers.8.0.0.bias", "decoder.net.attn_layers.layers.8.1.net.3.weight", "decoder.net.attn_layers.layers.8.1.net.3.bias", "decoder.net.attn_layers.layers.9.0.0.weight", "decoder.net.attn_layers.layers.9.0.0.bias", "decoder.net.attn_layers.layers.10.0.0.weight", "decoder.net.attn_layers.layers.10.0.0.bias", "decoder.net.attn_layers.layers.11.0.0.weight", "decoder.net.attn_layers.layers.11.0.0.bias", "decoder.net.attn_layers.layers.11.1.net.3.weight", "decoder.net.attn_layers.layers.11.1.net.3.bias".
Unexpected key(s) in state_dict: "decoder.net.attn_layers.layers.0.0.weight", "decoder.net.attn_layers.layers.0.0.bias", "decoder.net.attn_layers.layers.1.0.weight", "decoder.net.attn_layers.layers.1.0.bias", "decoder.net.attn_layers.layers.2.0.weight", "decoder.net.attn_layers.layers.2.0.bias", "decoder.net.attn_layers.layers.2.1.net.2.weight", "decoder.net.attn_layers.layers.2.1.net.2.bias", "decoder.net.attn_layers.layers.3.0.weight", "decoder.net.attn_layers.layers.3.0.bias", "decoder.net.attn_layers.layers.4.0.weight", "decoder.net.attn_layers.layers.4.0.bias", "decoder.net.attn_layers.layers.5.0.weight", "decoder.net.attn_layers.layers.5.0.bias", "decoder.net.attn_layers.layers.5.1.net.2.weight", "decoder.net.attn_layers.layers.5.1.net.2.bias", "decoder.net.attn_layers.layers.6.0.weight", "decoder.net.attn_layers.layers.6.0.bias", "decoder.net.attn_layers.layers.7.0.weight", "decoder.net.attn_layers.layers.7.0.bias", "decoder.net.attn_layers.layers.8.0.weight", "decoder.net.attn_layers.layers.8.0.bias", "decoder.net.attn_layers.layers.8.1.net.2.weight", "decoder.net.attn_layers.layers.8.1.net.2.bias", "decoder.net.attn_layers.layers.9.0.weight", "decoder.net.attn_layers.layers.9.0.bias", "decoder.net.attn_layers.layers.10.0.weight", "decoder.net.attn_layers.layers.10.0.bias", "decoder.net.attn_layers.layers.11.0.weight", "decoder.net.attn_layers.layers.11.0.bias", "decoder.net.attn_layers.layers.11.1.net.2.weight", "decoder.net.attn_layers.layers.11.1.net.2.bias".`
Any help or comment would be highly appreciated | closed | 2022-02-09T16:45:36Z | 2022-02-09T16:50:10Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/98 | [
"duplicate"
] | DoozyDoz | 2 |
litestar-org/litestar | pydantic | 3,399 | Bug: Exception not handled correctly | ### Description
I noticed that when I raise an HTTPException, instead of responding with the dedicated HTTP status code, it is automatically returned as a 500 error.
### URL to code causing the issue
_No response_
### MCVE
```python
from litestar import Litestar, get
from litestar.exceptions import HTTPException
@get("/")
def hello_world() -> dict[str, str]:
    """Keeping the tradition alive with hello world."""
    raise HTTPException(410, "Gone")
app = Litestar(route_handlers=[hello_world])
```
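For comparison, a variant using keyword arguments (just a sketch; I have not verified whether this is the intended usage):
```python
raise HTTPException(status_code=410, detail="Gone")
```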
### Steps to reproduce
```bash
Just navigate to the / route and notice that we get a 500 error instead of a 410.
```
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
litestar==2.8.2
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-04-17T20:12:37Z | 2025-03-20T15:54:36Z | https://github.com/litestar-org/litestar/issues/3399 | [
"Question"
] | sorasful | 1 |
python-gino/gino | asyncio | 344 | Cannot connect with Sanic | I'm trying to connect with Sanic using this [doc](https://github.com/fantix/gino/blob/master/docs/sanic.rst), but I can't start the Sanic server.
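For context, this runs under docker-compose; below is a sketch of the bind settings involved (the values are hypothetical, since my point is that the host must be the database service name rather than localhost):
```python
app.config.DB_HOST = "postgres"  # the docker-compose service name, not "localhost"
app.config.DB_PORT = 5432
app.config.DB_USER = "postgres"
app.config.DB_PASSWORD = "postgres"
app.config.DB_DATABASE = "mydb"
```
The full startup log: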
```
web_1 | [2018-09-20 16:10:09 +0600] [1] [INFO] Goin' Fast @ http://0.0.0.0:5000
web_1 | Executing <Task pending coro=<before_server_start() running at /usr/local/lib/python3.6/site-packages/gino/ext/sanic.py:124> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f878f697b88>()] created at /usr/local/lib/python3.6/asyncio/tasks.py:341> cb=[run_until_complete.<locals>.<lambda>()] created at /usr/local/lib/python3.6/site-packages/sanic/server.py:496> took 0.194 seconds
web_1 | [2018-09-20 16:10:09 +0600] [1] [ERROR] Experienced exception while trying to serve
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.6/site-packages/sanic/app.py", line 646, in run
web_1 | serve(**server_settings)
web_1 | File "/usr/local/lib/python3.6/site-packages/sanic/server.py", line 588, in serve
web_1 | trigger_events(before_start, loop)
web_1 | File "/usr/local/lib/python3.6/site-packages/sanic/server.py", line 496, in trigger_events
web_1 | loop.run_until_complete(result)
web_1 | File "uvloop/loop.pyx", line 1448, in uvloop.loop.Loop.run_until_complete
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/ext/sanic.py", line 124, in before_server_start
web_1 | loop=loop,
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/ext/sanic.py", line 142, in set_bind
web_1 | return await super().set_bind(bind, loop=loop, **kwargs)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/api.py", line 386, in set_bind
web_1 | bind = await create_engine(bind, loop=loop, **kwargs)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/strategies.py", line 42, in create
web_1 | pool = await dialect.init_pool(u, loop)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/dialects/asyncpg.py", line 318, in init_pool
web_1 | **self._pool_kwargs)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/dialects/asyncpg.py", line 209, in _init
web_1 | self._pool = await asyncpg.create_pool(**args)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 398, in _async__init__
web_1 | await self._initialize()
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 415, in _initialize
web_1 | await first_ch.connect()
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 125, in connect
web_1 | self._con = await self._pool._get_new_connection()
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 461, in _get_new_connection
web_1 | **self._connect_kwargs)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 1602, in connect
web_1 | max_cacheable_statement_size=max_cacheable_statement_size)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/connect_utils.py", line 434, in _connect
web_1 | raise last_error
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/connect_utils.py", line 426, in _connect
web_1 | connection_class=connection_class)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/connect_utils.py", line 396, in _connect_addr
web_1 | connector, timeout=timeout, loop=loop)
web_1 | File "/usr/local/lib/python3.6/asyncio/tasks.py", line 358, in wait_for
web_1 | return fut.result()
web_1 | File "uvloop/loop.pyx", line 1884, in create_connection
web_1 | OSError: Multiple exceptions: [Errno 111] Connection refused, [Errno 99] Cannot assign requested address
web_1 | Traceback (most recent call last):
web_1 | File "app.py", line 66, in <module>
web_1 | app.run(host="0.0.0.0", port=5000, debug=True)
web_1 | File "/usr/local/lib/python3.6/site-packages/sanic/app.py", line 646, in run
web_1 | serve(**server_settings)
web_1 | File "/usr/local/lib/python3.6/site-packages/sanic/server.py", line 588, in serve
web_1 | trigger_events(before_start, loop)
web_1 | File "/usr/local/lib/python3.6/site-packages/sanic/server.py", line 496, in trigger_events
web_1 | loop.run_until_complete(result)
web_1 | File "uvloop/loop.pyx", line 1448, in uvloop.loop.Loop.run_until_complete
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/ext/sanic.py", line 124, in before_server_start
web_1 | loop=loop,
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/ext/sanic.py", line 142, in set_bind
web_1 | return await super().set_bind(bind, loop=loop, **kwargs)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/api.py", line 386, in set_bind
web_1 | bind = await create_engine(bind, loop=loop, **kwargs)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/strategies.py", line 42, in create
web_1 | pool = await dialect.init_pool(u, loop)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/dialects/asyncpg.py", line 318, in init_pool
web_1 | **self._pool_kwargs)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/gino/dialects/asyncpg.py", line 209, in _init
web_1 | self._pool = await asyncpg.create_pool(**args)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 398, in _async__init__
web_1 | await self._initialize()
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 415, in _initialize
web_1 | await first_ch.connect()
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 125, in connect
web_1 | self._con = await self._pool._get_new_connection()
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/pool.py", line 461, in _get_new_connection
web_1 | **self._connect_kwargs)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/connection.py", line 1602, in connect
web_1 | max_cacheable_statement_size=max_cacheable_statement_size)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/connect_utils.py", line 434, in _connect
web_1 | raise last_error
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/connect_utils.py", line 426, in _connect
web_1 | connection_class=connection_class)
web_1 | File "/usr/local/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
web_1 | return self.gen.send(None)
web_1 | File "/usr/local/lib/python3.6/site-packages/asyncpg/connect_utils.py", line 396, in _connect_addr
web_1 | connector, timeout=timeout, loop=loop)
web_1 | File "/usr/local/lib/python3.6/asyncio/tasks.py", line 358, in wait_for
web_1 | return fut.result()
web_1 | File "uvloop/loop.pyx", line 1884, in create_connection
web_1 | OSError: Multiple exceptions: [Errno 111] Connection refused, [Errno 99] Cannot assign requested address
web_1 | sys:1: RuntimeWarning: coroutine 'Loop.create_server' was never awaited
``` | closed | 2018-09-20T10:18:18Z | 2018-09-20T11:19:37Z | https://github.com/python-gino/gino/issues/344 | [] | rana-ahmed | 4 |
ymcui/Chinese-BERT-wwm | nlp | 221 | The vocabulary is missing the Chinese double quotation marks | I noticed that the vocabulary (vocab.txt) of the whole-word-masking Chinese [Bert/RoBerta](https://huggingface.co/hfl/chinese-roberta-wwm-ext) models downloaded from huggingface does not contain the Chinese double quotation marks `“”`. Why is that?
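For now I am considering adding them manually (a sketch using the standard `transformers` API; since this grows the vocabulary, the embedding matrix has to be resized too):
```python
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
model = BertForMaskedLM.from_pretrained("hfl/chinese-roberta-wwm-ext")

num_added = tokenizer.add_tokens(["“", "”"])  # register the quotation marks as new tokens
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match
```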
My downstream task is text correction, so when Chinese double quotation marks appear in a sentence, the tokenizer turns them into [UNK], which badly affects the correction results. Is there a recommended way to solve this, or is manually adding the marks to the vocabulary (as sketched above) the right approach? | closed | 2022-06-14T07:40:53Z | 2022-06-14T13:03:13Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/221 | [] | Dicer-Zz | 3 |
huggingface/transformers | pytorch | 36,258 | Bug introduced in `from_pretrained` `v4.48.3`..`v4.49.0` | Hi 🤗
Diffusers 🧨 noticed some failing tests starting with `v4.49.0` in `Kolors`, one of our models that uses a custom text encoder.
## Reproduction
This is working on `v4.48.3`.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("hf-internal-testing/tiny-random-chatglm3-6b", trust_remote_code=True)
```
On `v4.49.0`:
```python
TypeError: empty() received an invalid combination of arguments - got (tuple, dtype=str, device=str), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (tuple of ints size, *, torch.memory_format memory_format = None, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
```
The issue seems to be that the config in the [test model](https://huggingface.co/hf-internal-testing/tiny-random-chatglm3-6b/blob/main/config.json) and checkpoints like [`Kwai-Kolors/Kolors-diffusers`](https://huggingface.co/Kwai-Kolors/Kolors-diffusers/blob/main/text_encoder/config.json) contain `torch_dtype` as a string.
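A minimal user-side workaround (a sketch) is to pass an explicit `torch_dtype` instead of relying on the config value:
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "hf-internal-testing/tiny-random-chatglm3-6b",
    trust_remote_code=True,
    torch_dtype=torch.float32,  # explicit dtype sidesteps the string from the config
)
```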
On the Diffusers end, explicitly setting `torch_dtype` when using `ChatGLMModel` and setting a default `torch_dtype` for `from_pretrained` paths is working (https://github.com/huggingface/diffusers/pull/10816). The effects are mainly internal, as `torch_dtype` wasn't passed for some tests; end users should be fine, since they would generally pass `torch_dtype`. | closed | 2025-02-18T12:28:18Z | 2025-03-21T12:27:48Z | https://github.com/huggingface/transformers/issues/36258 | [
"bug"
] | hlky | 3 |
mwaskom/seaborn | data-science | 3,413 | Error thrown upon import with `numpy` versions >= 1.24.0 | Hi all,
When I try to import the latest seaborn (0.12.2), I am getting an AttributeError on numpy:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import seaborn as sns
File /opt/conda/lib/python3.8/site-packages/seaborn/__init__.py:5
3 from .utils import * # noqa: F401,F403
4 from .palettes import * # noqa: F401,F403
----> 5 from .relational import * # noqa: F401,F403
6 from .regression import * # noqa: F401,F403
7 from .categorical import * # noqa: F401,F403
File /opt/conda/lib/python3.8/site-packages/seaborn/relational.py:17
8 from ._oldcore import (
9 VectorPlotter,
10 )
11 from .utils import (
12 locator_to_legend_entries,
13 adjust_legend_subtitles,
14 _default_color,
15 _deprecate_ci,
16 )
---> 17 from ._statistics import EstimateAggregator
18 from .axisgrid import FacetGrid, _facet_docs
19 from ._docstrings import DocstringComponents, _core_docs
File /opt/conda/lib/python3.8/site-packages/seaborn/_statistics.py:31
29 import pandas as pd
30 try:
---> 31 from scipy.stats import gaussian_kde
32 _no_scipy = False
33 except ImportError:
File /opt/conda/lib/python3.8/site-packages/scipy/stats/__init__.py:384
1 """
2 .. _statsrefmanual:
3
(...)
380
381 """
382 from __future__ import division, print_function, absolute_import
--> 384 from .stats import *
385 from .distributions import *
386 from .morestats import *
File /opt/conda/lib/python3.8/site-packages/scipy/stats/stats.py:179
176 from numpy import array, asarray, ma
178 from scipy._lib.six import callable, string_types
--> 179 from scipy.spatial.distance import cdist
180 from scipy.ndimage import measurements
181 from scipy._lib._version import NumpyVersion
File /opt/conda/lib/python3.8/site-packages/scipy/spatial/__init__.py:97
1 """
2 =============================================================
3 Spatial algorithms and data structures (:mod:`scipy.spatial`)
(...)
92
93 """
95 from __future__ import division, print_function, absolute_import
---> 97 from .kdtree import *
98 from .ckdtree import *
99 from .qhull import *
File /opt/conda/lib/python3.8/site-packages/scipy/spatial/kdtree.py:8
6 import numpy as np
7 from heapq import heappush, heappop
----> 8 import scipy.sparse
10 __all__ = ['minkowski_distance_p', 'minkowski_distance',
11 'distance_matrix',
12 'Rectangle', 'KDTree']
15 def minkowski_distance_p(x, y, p=2):
File /opt/conda/lib/python3.8/site-packages/scipy/sparse/__init__.py:229
223 # Original code by Travis Oliphant.
224 # Modified and extended by Ed Schofield, Robert Cimrman,
225 # Nathan Bell, and Jake Vanderplas.
227 import warnings as _warnings
--> 229 from .base import *
230 from .csr import *
231 from .csc import *
File /opt/conda/lib/python3.8/site-packages/scipy/sparse/base.py:8
6 from scipy._lib.six import xrange
7 from scipy._lib._numpy_compat import broadcast_to
----> 8 from .sputils import (isdense, isscalarlike, isintlike,
9 get_sum_dtype, validateaxis, check_reshape_kwargs,
10 check_shape, asmatrix)
12 __all__ = ['spmatrix', 'isspmatrix', 'issparse',
13 'SparseWarning', 'SparseEfficiencyWarning']
16 class SparseWarning(Warning):
File /opt/conda/lib/python3.8/site-packages/scipy/sparse/sputils.py:17
11 __all__ = ['upcast', 'getdtype', 'isscalarlike', 'isintlike',
12 'isshape', 'issequence', 'isdense', 'ismatrix', 'get_sum_dtype']
14 supported_dtypes = ['bool', 'int8', 'uint8', 'short', 'ushort', 'intc',
15 'uintc', 'longlong', 'ulonglong', 'single', 'double',
16 'longdouble', 'csingle', 'cdouble', 'clongdouble']
---> 17 supported_dtypes = [np.typeDict[x] for x in supported_dtypes]
19 _upcast_memo = {}
22 def upcast(*args):
File /opt/conda/lib/python3.8/site-packages/scipy/sparse/sputils.py:17, in <listcomp>(.0)
11 __all__ = ['upcast', 'getdtype', 'isscalarlike', 'isintlike',
12 'isshape', 'issequence', 'isdense', 'ismatrix', 'get_sum_dtype']
14 supported_dtypes = ['bool', 'int8', 'uint8', 'short', 'ushort', 'intc',
15 'uintc', 'longlong', 'ulonglong', 'single', 'double',
16 'longdouble', 'csingle', 'cdouble', 'clongdouble']
---> 17 supported_dtypes = [np.typeDict[x] for x in supported_dtypes]
19 _upcast_memo = {}
22 def upcast(*args):
File /opt/conda/lib/python3.8/site-packages/numpy/__init__.py:320, in __getattr__(attr)
317 from .testing import Tester
318 return Tester
--> 320 raise AttributeError("module {!r} has no attribute "
321 "{!r}".format(__name__, attr))
AttributeError: module 'numpy' has no attribute 'typeDict'
```
I noticed that `pyproject.toml` within `seaborn` has the line `"numpy>=1.20,!=1.24.0"`, and was wondering if this should be changed to `"numpy>=1.20,<=1.24.0"` instead?
I am reproducing my `pip list` below:
```
WARNING: Ignoring invalid distribution -eerml (/opt/conda/lib/python3.8/site-packages)
Package Version
---------------------------------- -------------------
absl-py 0.12.0
aiohttp 3.7.4.post0
alabaster 0.7.12
alembic 1.11.1
anndata 0.9.1
anyio 3.7.0
appdirs 1.4.4
argon2-cffi 20.1.0
arrow 1.2.3
astroid 2.5.6
astropy 4.2.1
asttokens 2.2.1
astunparse 1.6.3
async-generator 1.10
async-timeout 3.0.1
atomicwrites 1.4.0
attrs 21.2.0
Automat 20.2.0
autopep8 1.5.5
Babel 2.9.1
backcall 0.2.0
backports.shutil-get-terminal-size 1.0.0
backports.zoneinfo 0.2.1
batchglm 0.7.4
bcrypt 4.0.1
beautifulsoup4 4.9.3
binaryornot 0.4.4
biopython 1.81
bitarray 2.1.0
bkcharts 0.2
black 21.5b1
blaze 0.10.1
bleach 3.3.0
blessed 1.20.0
bokeh 2.3.2
boto 2.49.0
boto3 1.26.163
botocore 1.29.163
Bottleneck 1.3.2
boxsdk 3.7.2
brewer2mpl 1.4.1
Brotli 1.0.9
brotlipy 0.7.0
bs4 0.0.1
bson 0.5.10
cachetools 4.2.2
cairocffi 1.2.0
certifi 2020.12.5
cffi 1.14.5
cfgv 3.3.1
chardet 3.0.4
charset-normalizer 3.1.0
click 8.0.0
cloudpickle 1.6.0
clyent 1.2.1
cmake 3.26.4
colorama 0.4.4
conda 4.10.1
conda-package-handling 1.7.3
configobj 5.0.6
constantly 15.1.0
contextlib2 0.6.0.post1
contourpy 1.1.0
cookiecutter 1.7.0
coverage 5.5
croniter 1.3.15
cryptography 3.4.7
cx-Oracle 8.1.0
cycler 0.10.0
Cython 0.29.23
cytoolz 0.11.0
dash 2.10.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dask 2021.4.1
databricks-cli 0.17.7
datashape 0.5.2
dateutils 0.6.12
decorator 4.4.2
deepdiff 6.3.0
defusedxml 0.7.1
deprecation 2.1.0
diff-match-patch 20200713
diffxpy 0.7.4
distlib 0.3.6
distributed 2021.4.1
Django 2.2.5
django-research 1.8.0
dnspython 2.3.0
docker 6.1.3
docutils 0.16
entrypoints 0.3
et-xmlfile 1.1.0
exceptiongroup 1.1.1
executing 1.2.0
fastapi 0.88.0
fastcache 1.1.0
fastcluster 1.2.6
filelock 3.12.2
flake8 3.8.4
Flask 2.0.0
Flask-Compress 1.9.0
Flask-Cors 3.0.10
fonttools 4.40.0
fsspec 2023.6.0
future 0.18.2
gast 0.3.3
gevent 21.1.2
ggplot 0.11.5
gitdb 4.0.7
GitPython 3.1.14
glob2 0.7
gmpy2 2.0.8
google-auth 1.30.0
google-auth-oauthlib 0.4.4
google-pasta 0.2.0
gprofiler-official 1.0.0
greenlet 1.1.0
grpcio 1.37.1
gunicorn 20.1.0
h11 0.14.0
h5py 3.8.0
HeapDict 1.0.1
html5lib 1.1
hyperlink 21.0.0
hyperopt 0.2.7
hypothesis 6.12.0
identify 2.5.24
idna 2.10
igraph 0.10.4
imageio 2.9.0
imagesize 1.2.0
importlib-metadata 4.0.1
importlib-resources 5.12.0
incremental 21.3.0
inflection 0.5.1
iniconfig 1.1.1
inquirer 3.1.3
intervaltree 3.1.0
ipdb 0.13.13
ipykernel 5.5.4
ipyparallel 8.6.1
ipython 8.12.2
ipython-genutils 0.2.0
ipywidgets 7.6.3
isort 5.8.0
itsdangerous 2.1.2
JayDeBeApi 1.2.3
jdcal 1.4.1
jedi 0.17.2
jeepney 0.6.0
Jinja2 3.0.0
jinja2-time 0.2.0
jmespath 1.0.1
joblib 1.0.1
JPype1 1.2.1
json5 0.9.5
jsonify 0.5
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-codeserver-proxy 1.0b1
jupyter-console 6.4.0
jupyter-contrib-core 0.4.2
jupyter-contrib-nbextensions 0.7.0
jupyter-core 4.7.1
jupyter-highlight-selected-word 0.2.0
jupyter-nbextensions-configurator 0.6.3
jupyter-server 1.7.0
jupyter-server-mathjax 0.2.2
jupyter-server-proxy 3.0.2
jupyterlab 2.3.2
jupyterlab-git 0.24.0
jupyterlab-pygments 0.1.2
jupyterlab-server 1.2.0
jupyterlab-widgets 1.0.0
kaleido 0.2.1
Keras 2.4.3
Keras-Preprocessing 1.1.2
keyring 23.0.1
kiwisolver 1.3.1
lazy-object-proxy 1.6.0
lightning 2.0.3
lightning-cloud 0.5.36
lightning-utilities 0.8.0
lit 16.0.5.post0
llvmlite 0.40.1
locket 0.2.1
lxml 4.6.3
Mako 1.2.4
Markdown 3.3.4
markdown-it-py 3.0.0
MarkupSafe 2.0.0
matplotlib 3.7.1
matplotlib-inline 0.1.2
matplotlib-venn 0.11.9
mccabe 0.6.1
mdurl 0.1.2
mistune 0.8.4
mlflow 2.4.1
mpltools 0.2.0
mpmath 1.2.1
msgpack 1.0.2
multidict 5.1.0
multipledispatch 0.6.0
mypy-extensions 0.4.3
natsort 8.3.1
nbclassic 0.2.8
nbclient 0.5.3
nbconvert 6.0.7
nbdime 2.1.1
nbformat 5.1.3
nest-asyncio 1.5.1
networkx 2.5.1
nltk 3.6.2
nodeenv 1.8.0
nose 1.3.7
notebook 6.3.0
numba 0.57.1
numexpr 2.7.3
numpy 1.24.4
numpydoc 1.1.0
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-cupti-cu11 11.7.101
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.2.10.91
nvidia-cusolver-cu11 11.4.0.1
nvidia-cusparse-cu11 11.7.4.91
nvidia-nccl-cu11 2.14.3
nvidia-nvtx-cu11 11.7.91
oauthlib 3.1.0
odo 0.5.0
olefile 0.46
openpyxl 3.0.7
opt-einsum 3.3.0
ordered-set 4.1.0
packaging 23.1
pandas 1.2.4
pandocfilters 1.4.3
paramiko 3.2.0
parso 0.7.0
partd 1.2.0
path 15.1.2
path.py 12.5.0
pathlib2 2.3.5
pathspec 0.8.1
patsy 0.5.3
pep8 1.7.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.2.0
pip 23.1.2
pkginfo 1.7.0
platformdirs 3.5.3
plotly 5.15.0
pluggy 0.13.1
ply 3.11
polling2 0.4.6
poyo 0.5.0
pre-commit 3.3.2
prometheus-client 0.10.1
prompt-toolkit 3.0.38
protobuf 3.16.0
psutil 5.8.0
psycopg2-binary 2.8.6
ptyprocess 0.7.0
pure-eval 0.2.2
py 1.10.0
py4j 0.10.9.7
pyarrow 12.0.1
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycodestyle 2.6.0
pycosat 0.6.3
pycparser 2.20
pycrypto 2.6.1
pycurl 7.43.0.6
pydantic 1.10.9
pydocstyle 6.0.0
pyerfa 1.7.3
pyflakes 2.2.0
Pygments 2.15.1
PyJWT 2.7.0
pylint 2.8.2
pyls-black 0.4.6
pyls-spyder 0.3.2
pymongo 4.4.0
PyNaCl 1.5.0
pynndescent 0.5.10
pyodbc 4.0.30
pyopenms 2.7.0
pyOpenSSL 20.0.1
pypandoc 1.5
pyparsing 2.4.7
PyQt5 5.12.3
PyQt5-sip 12.9.0
PyQtWebEngine 5.12.1
pyquickbase 0.0.6
pyro-api 0.1.2
pyro-ppl 1.8.5
pyrsistent 0.17.3
pysam 0.21.0
pysftp 0.2.9
PySocks 1.7.1
pyteomics 4.6
pytest 6.2.4
pytest-arraydiff 0.3
pytest-astropy 0.8.0
pytest-astropy-header 0.1.2
pytest-cov 2.11.1
pytest-doctestplus 0.9.0
pytest-filter-subpackage 0.1.1
pytest-openfiles 0.5.0
pytest-remotedata 0.3.2
python-dateutil 2.8.1
python-domino 1.0.4
python-editor 1.0.4
python-jsonrpc-server 0.4.0
python-language-server 0.36.2
python-multipart 0.0.6
pytorch-lightning 2.0.3
pytz 2021.1
pytz-deprecation-shim 0.1.0.post0
PyWavelets 1.1.1
pyxdg 0.27
PyYAML 5.4.1
pyzmq 22.0.3
QDarkStyle 3.0.2
qmplot 0.3.2
qstylizer 0.2.0
QtAwesome 1.0.2
qtconsole 5.1.0
QtPy 1.9.0
querystring-parser 1.2.4
readchar 4.0.5
regex 2021.4.4
requests 2.31.0
requests-oauthlib 1.3.0
requests-toolbelt 1.0.0
rich 13.4.2
rope 0.19.0
rpy2 3.5.12
rsa 4.7.2
ruamel-yaml-conda 0.15.100
s3fs 0.4.2
s3transfer 0.6.1
scanpy 1.9.3
scikit-image 0.18.1
scikit-learn 0.24.2
scipy 1.4.1
seaborn 0.12.2
SecretStorage 3.3.1
seerbio-library 2.1.1
seerml 0.0.1a1
seerml-source 0.3.0
Send2Trash 1.5.0
service-identity 21.1.0
session-info 1.0.0
setuptools 52.0.0.post20210125
simpervisor 0.4
simplegeneric 0.8.1
singledispatch 3.6.1
six 1.15.0
smart-open 6.3.0
smmap 4.0.0
sniffio 1.2.0
snowballstemmer 2.1.0
sortedcollections 2.1.0
sortedcontainers 2.3.0
soupsieve 2.2.1
sparse 0.14.0
Sphinx 3.5.4
sphinxcontrib-applehelp 1.0.2
sphinxcontrib-devhelp 1.0.2
sphinxcontrib-htmlhelp 1.0.3
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.3
sphinxcontrib-serializinghtml 1.1.4
sphinxcontrib-websupport 1.2.4
spyder 5.0.2
spyder-kernels 2.0.2
SQLAlchemy 1.4.15
sqlparse 0.4.4
stack-data 0.6.2
starlette 0.22.0
starsessions 1.3.0
statsmodels 0.14.0
stdlib-list 0.8.0
sympy 1.8
tables 3.6.1
tabulate 0.9.0
tblib 1.7.0
tenacity 8.2.2
tensorboard 2.5.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.0
tensorflow-estimator 2.3.0
termcolor 1.1.0
terminado 0.9.5
testpath 0.4.4
textdistance 4.2.1
texttable 1.6.7
threadpoolctl 2.1.0
three-merge 0.1.1
tifffile 2021.4.8
tinycss2 1.1.0
toml 0.10.2
tomli 2.0.1
tomli_w 1.0.0
toolz 0.11.1
torch 2.0.1
torchmetrics 0.11.4
torchvision 0.15.2
tornado 6.1
tqdm 4.59.0
traitlets 5.9.0
triton 2.0.0
Twisted 21.2.0
typing_extensions 4.6.3
tzdata 2023.3
tzlocal 4.3
ujson 4.0.2
umap-learn 0.5.3
unicodecsv 0.14.1
UpSetPlot 0.8.0
urllib3 1.26.16
uvicorn 0.22.0
uWSGI 2.0.19.1
venn 0.1.3
virtualenv 20.23.0
watchdog 1.0.2
wcwidth 0.2.5
webencodings 0.5.1
websocket 0.2.1
websocket-client 0.59.0
websockets 11.0.3
Werkzeug 2.0.0
wheel 0.36.2
whichcraft 0.6.1
widgetsnbextension 3.5.1
wrapt 1.12.1
wurlitzer 2.1.0
xgboost 1.7.5
xlrd 2.0.1
XlsxWriter 1.4.2
xlwt 1.3.0
yapf 0.31.0
yarl 1.6.3
zict 2.0.0
zipp 3.4.1
zope.event 4.5.0
zope.interface 5.4.0
zstandard 0.21.0
```
Thanks a bunch. | closed | 2023-06-29T18:28:40Z | 2023-07-01T17:42:20Z | https://github.com/mwaskom/seaborn/issues/3413 | [] | guhanrv | 4 |
miLibris/flask-rest-jsonapi | sqlalchemy | 27 | Decorator wrapping when creating new Resource class | Every time a new Resource class is instantiated, its methods (get, post, patch, put) are wrapped with the check_headers decorator. The wrapping occurs at class level. For example, after creating ten Resource instances, there are ten copies of the same decorator wrapping the get method.
https://github.com/miLibris/flask-rest-jsonapi/blob/master/flask_rest_jsonapi/resource.py#L39
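A sketch of a possible fix: wrap only once per class (the guard attribute name is my own; `check_headers` is the project's existing decorator, import path assumed):
```python
from flask.views import MethodView
from flask_rest_jsonapi.decorators import check_headers  # path assumed

class Resource(MethodView):
    def __init__(self, *args, **kwargs):
        cls = type(self)
        if not getattr(cls, "_headers_checked", False):
            for name in ("get", "post", "patch", "delete"):
                if hasattr(cls, name):
                    setattr(cls, name, check_headers(getattr(cls, name)))
            cls._headers_checked = True  # only wrap the methods once per class
        super().__init__(*args, **kwargs)
```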
Since Flask [creates a new instance](https://github.com/pallets/flask/blob/master/flask/views.py#L83) of the View class (which Resource inherits) with every incoming request, in the end there are many copies of the same decorator wrapping the (get, post, patch, put) methods. That causes a RecursionError at some point. | closed | 2017-04-07T15:53:40Z | 2017-04-11T07:32:06Z | https://github.com/miLibris/flask-rest-jsonapi/issues/27 | [] | kypsibir | 3 |
omar2535/GraphQLer | graphql | 100 | [QOL] Make objects bucket a singleton | It's tedious to pass around the objects bucket through the `fuzzer` to the `fengine` module. Refactoring the objects bucket from a simple dictionary into its singleton class will allow us to contain all objects-bucket-related functions in a single file.
# Deliverable
Create a `objects_bucket.py` in the which has a singleton for the objects bucket. Reference this for all calls to the objects bucket except for the first call in the fuzzer initialization step.
# Extension
Add not only IDs that are seen in the responses, but also other object fields | closed | 2024-08-24T05:17:46Z | 2024-08-29T05:50:55Z | https://github.com/omar2535/GraphQLer/issues/100 | [
"➕enhancement"
] | omar2535 | 1 |
streamlit/streamlit | data-science | 10,407 | Default `set_page_config` through `.streamlit/config.toml` | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Currently, when setting `st.set_page_config()` with `page_title` and `page_icon`, Streamlit still displays the default "Streamlit" title and icon during the initial loading phase. This behavior can be disruptive for larger web applications where branding consistency is important.
It would be useful to define default values for `set_page_config` parameters inside `.streamlit/config.toml`, ensuring that even before `st.set_page_config()` is explicitly called in a script, the title and icon match the expected defaults.
### Why?
I'm always frustrated when the page first loads with the Streamlit default title and icon, even though I have set my own values using `st.set_page_config()`. This creates an unpolished user experience, especially for production-ready applications.
### How?
Introduce the ability to specify default values for `page_title`, `page_icon`, `layout`, and other `st.set_page_config` parameters inside `.streamlit/config.toml`. If a user does not call `st.set_page_config()` in their script, Streamlit should use these defaults from the config file instead of falling back to "Streamlit" and the default icon.
For example, the `.streamlit/config.toml` file could support:
```toml
[theme]
defaultPageTitle = "My App"
defaultPageIcon = "favicon.png"
```
### Additional Context
_No response_ | open | 2025-02-15T13:42:52Z | 2025-02-16T17:39:18Z | https://github.com/streamlit/streamlit/issues/10407 | [
"type:enhancement",
"feature:st.set_page_config",
"feature:config"
] | pallagj | 1 |
BlinkDL/RWKV-LM | pytorch | 250 | Does RWKV only show lower GPU memory occupancy at inference? | I tried to use RWKV (e.g., Vision-RWKV) in CV tasks, but I found that RWKV shows GPU memory occupancy similar to a full-attention Transformer (like ViT) during training. I found that both RWKV and Vision-RWKV only present their inference memory occupancy in the papers.
The high memory consumption is not friendly for my tasks. Do you have any advice?
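For reference, this is how I measure occupancy (a sketch):
```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... run one training (or inference) step here ...
peak_mib = torch.cuda.max_memory_allocated() / 1024**2
print(f"peak GPU memory: {peak_mib:.1f} MiB")
```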
| open | 2024-07-21T10:27:46Z | 2024-09-08T19:40:17Z | https://github.com/BlinkDL/RWKV-LM/issues/250 | [] | thucz | 3 |
Kanaries/pygwalker | plotly | 359 | [DEV-523] [BUG] pygwalker duckdb for large dataset may crash in frontend | - version: pygwalker 0.3.17
Reference article:
https://zenn.dev/aidemy/articles/3aeea1470f1535
<sub>[DEV-523](https://linear.app/kanaries/issue/DEV-523/[bug]-pygwalker-duckdb-for-large-dataset-may-crash-in-frontend)</sub> | closed | 2023-12-19T03:31:10Z | 2024-01-08T08:21:58Z | https://github.com/Kanaries/pygwalker/issues/359 | [
"bug",
"linear"
] | ObservedObserver | 0 |
strawberry-graphql/strawberry-django | graphql | 655 | Returning a Query type in a mutation's payload | I have a `Query` type in my schema that defines some custom resolvers and uses `strawberry_django.field`/`strawberry_django.node` without providing resolver functions.
```python
@strawberry_django.type(SomeModel)
class SomeType:
    id: auto
    name: auto


@strawberry.type(name='Query')
class Query:
    some_type_query: SomeType = strawberry_django.field()

    @strawberry_django.field()
    def custom_resolver(self) -> bool:
        return True
```
I want to return a `Query` along with the mutation result, enabling the user to fetch whatever data they need in a single request. However, when I initialize a Query object, I must provide arguments.
```python
@strawberry.type
class MutationPayload:
    some_type: SomeType

    @strawberry_django.field()
    def query(self) -> Query:
        return Query()


@strawberry.type
class Mutation:
    @strawberry_django.mutation()
    def mutation_resolver(self) -> MutationPayload:
        some_model = SomeModel.objects.create(name="test")
        return MutationPayload(some_type=some_model)
```
```
{
  "data": null,
  "errors": [
    {
      "message": "Query.__init__() missing 1 required keyword-only argument: 'some_type_query'",
      "locations": [
        {
          "line": 7,
          "column": 5
        }
      ],
      "path": [
        "mutationResolver",
        "query"
      ]
    }
  ]
}
```
Is it possible to initialize the `Query` type without providing all the arguments? Or is there an easy way to provide the arguments to the query?
| closed | 2024-11-08T14:51:01Z | 2024-12-14T17:32:31Z | https://github.com/strawberry-graphql/strawberry-django/issues/655 | [] | rcybulski1122012 | 1 |
cvat-ai/cvat | pytorch | 8,227 | backup task and import it in CVAT online website | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
I backed up a task and downloaded it locally.
When I try to create a new task from the backup, it gives me this error:
`AttributeError: 'list' object has no attribute 'get'`
[error cvat backup.docx](https://github.com/user-attachments/files/16414027/error.cvat.backup.docx)
See the attached file.
Can you please advise how to solve it?
I also tried exporting the annotations in CVAT and COCO formats and importing them on their own, but that did not work either.
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_ | closed | 2024-07-29T13:32:43Z | 2024-09-06T11:18:46Z | https://github.com/cvat-ai/cvat/issues/8227 | [
"bug",
"need info"
] | kh-diab | 7 |
axnsan12/drf-yasg | django | 453 | Migrating django-rest-swagger to drf-yasg issue. How to display input parameters ? | I am trying to migrate my API from django-rest-swagger==2.1.1 to drf-yasg
Meaning, trying to use this since I am migrating Django 1.11.7 --> 2.2
djangorestframework==3.9
drf-yasg==1.16.1
In the current code, I've used django-rest-swagger simply by creating the API this way: (https://gist.github.com/axilaris/92d9d2de6bdd2476cdd02fb535d9bde5 - here are more details in url.py, api.py, swagger_schema.py)
in api.py:
@api_view(['POST'])
def get_countries(request):
    # ----- YAML below for Swagger -----
    """
    description: countries list
    parameters:
    - name: token
      type: string
      required: true
      location: form
    """
    # ...
    return Response(countries_list, status=status.HTTP_200_OK)
Having now enabled
djangorestframework==3.9
drf-yasg==1.16.1
I notice the new swagger UI does not show any input form parameters, as in this screenshot:
https://i.imgur.com/YLYihRi.png
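Here is a sketch of the drf-yasg equivalent I have been trying, based on its `swagger_auto_schema` API (I am not sure it is idiomatic):
```python
from drf_yasg import openapi
from drf_yasg.utils import swagger_auto_schema

token_param = openapi.Parameter(
    'token', openapi.IN_FORM, type=openapi.TYPE_STRING, required=True,
)

@swagger_auto_schema(method='post', manual_parameters=[token_param])
@api_view(['POST'])
def get_countries(request):
    # ...
    return Response(countries_list, status=status.HTTP_200_OK)
```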
How can I customize drf-yasg based on my current API? I don't want to break the mobile app, which relies on that kind of documentation; it doesn't matter if it's swagger or redoc. | closed | 2019-09-12T03:45:49Z | 2020-10-26T01:12:00Z | https://github.com/axnsan12/drf-yasg/issues/453 | [] | axilaris | 2 |
tensorflow/tensor2tensor | deep-learning | 1,193 | checkpoint_path AttributeError during exporting | ### Description
I am trying to use t2t-exporter to export a specific checkpoint. I set checkpoint_path, but when I run t2t-exporter, it complains with an AttributeError.
```
t2t-exporter \
--data_dir=$DATA_DIR \
--checkpoint_path=$TRAIN_DIR\/model.ckpt-251001 \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--t2t_usr_dir=$USR_DIR
```
INFO:tensorflow:Importing user module train_mat_adv from path /models
Traceback (most recent call last):
File "/home/ubuntu/tensorflow/bin/t2t-exporter", line 16, in <module>
tf.app.run()
File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "/home/ubuntu/tensorflow/bin/t2t-exporter", line 12, in main
export.main(argv)
File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/tensor2tensor/serving/export.py", line 59, in main
run_config = t2t_trainer.create_run_config(hparams)
File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/tensor2tensor/bin/t2t_trainer.py", line 189, in create_run_config
assert FLAGS.output_dir or FLAGS.checkpoint_path
File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/platform/flags.py", line 85, in __getattr__
return wrapped.__getattr__(name)
File "/home/ubuntu/tensorflow/local/lib/python2.7/site-packages/absl/flags/_flagvalues.py", line 470, in __getattr__
raise AttributeError(name)
AttributeError: checkpoint_path
| open | 2018-11-01T00:39:22Z | 2018-11-15T05:55:52Z | https://github.com/tensorflow/tensor2tensor/issues/1193 | [] | KayShenClarivate | 7 |
adbar/trafilatura | web-scraping | 41 | Importing only the extract utilities | Hello @adbar, thanks for your tremendous work on the library. Do you know if there is a way to install and then import the library so that you will only load the utilities related to raw content extraction from a html string? If not, is there anyway we can discuss this particular topic and see if I could help you implement this in any way? My use case is basically the following: I have a CLI tool that currently relies on dragnet and I would like to jump ship and adopt trafilatura. My issue is that I don't want to install the net-related dependencies you list in your setup.py (notably `requests` and `tldextract`) because they will clash with some of my dependencies and I have my own means of downloading things, dealing with urls etc.
Have a good day, | closed | 2020-12-14T18:07:56Z | 2021-01-04T14:25:18Z | https://github.com/adbar/trafilatura/issues/41 | [] | Yomguithereal | 8 |
huggingface/transformers | tensorflow | 36,341 | suppress_tokens=[] should be legal as some older. whisper models rely on this | ### System Info
- `transformers` version: 4.46.0
- Platform: macOS-15.2-arm64-arm-64bit
- Python version: 3.11.8
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.1 (False)
- Tensorflow version (GPU?): 2.16.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
I'm intending to fix this at once
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Run this code
```python
import torch
from transformers import pipeline
# path to the audio file to be transcribed
audio = "/path/to/audio.format"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-tamil-large-v2", chunk_length_s=30, device=device)
transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
print('Transcription: ', transcribe(audio)["text"])
```
on any machine
### Expected behavior
The model produces a prediction, and no error is thrown
What actually happens is I get
```
/Users/plato/code/translation-station/.venv/lib/python3.11/site-packages/transformers/models/whisper/generation_whisper.py:573: FutureWarning: The input name `inputs` is deprecated. Please make sure to use `input_features` instead.
warnings.warn(
Traceback (most recent call last):
File "/Users/plato/code/translation-station/pad.py", line 11, in <module>
print('Transcription: ', transcribe(audio)["text"])
^^^^^^^^^^^^^^^^^
File "/Users/plato/code/translation-station/.venv/lib/python3.11/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 283, in __call__
return super().__call__(inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/plato/code/translation-station/.venv/lib/python3.11/site-packages/transformers/pipelines/base.py", line 1360, in __call__
return next(
^^^^^
File "/Users/plato/code/translation-station/.venv/lib/python3.11/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__
item = next(self.iterator)
^^^^^^^^^^^^^^^^^^^
File "/Users/plato/code/translation-station/.venv/lib/python3.11/site-packages/transformers/pipelines/pt_utils.py", line 269, in __next__
processed = self.infer(next(self.iterator), **self.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/plato/code/translation-station/.venv/lib/python3.11/site-packages/transformers/pipelines/base.py", line 1275, in forward
model_outputs = self._forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/plato/code/translation-station/.venv/lib/python3.11/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 521, in _forward
tokens = self.model.generate(
^^^^^^^^^^^^^^^^^^^^
File "/Users/plato/code/translation-station/.venv/lib/python3.11/site-packages/transformers/models/whisper/generation_whisper.py", line 739, in generate
decoder_input_ids, kwargs = self._prepare_decoder_input_ids(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/plato/code/translation-station/.venv/lib/python3.11/site-packages/transformers/models/whisper/generation_whisper.py", line 1782, in _prepare_decoder_input_ids
prev_start_of_text = suppress_tokens[-2] if suppress_tokens is not None else None
~~~~~~~~~~~~~~~^^^^
IndexError: index -2 is out of bounds for dimension 0 with size 0
```
this exact same error was noticed on some other models posted on huggingface by https://huggingface.co/vasista22 around 2 years ago, for example https://huggingface.co/vasista22/whisper-tamil-large-v2/discussions/4 and https://huggingface.co/vasista22/whisper-hindi-small/discussions/7 | closed | 2025-02-22T00:45:55Z | 2025-02-28T20:55:24Z | https://github.com/huggingface/transformers/issues/36341 | [
"bug"
] | Lewington-pitsos | 2 |
MagicStack/asyncpg | asyncio | 363 | Got an unexpected number of seconds in datetime from asyncpg | * **asyncpg version**: 0.17.0
* **PostgreSQL version**: 10.3
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: reproduced with a local install
* **Python version**: 3.6.5
* **Platform**: Linux x86_64
* **Do you use pgbouncer?**: no
* **Did you install asyncpg with pip?**: yes
```python
from datetime import datetime
import asyncio
import asyncpg
async def main():
pool = await asyncpg.create_pool(
database='postgres',
user='postgres',
host='127.0.0.1',
password='1234',
port=2000
)
async with pool.acquire() as connection:
dt_in = datetime(1970, 1, 1, 20, 31, 23, 648000)
dt_out = await connection.fetchval("SELECT '%s'::timestamp" % dt_in)
op = '==' if dt_in == dt_out else '!='
print('%s %s %s' % (dt_in, op, dt_out))
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```
Output:
```
1970-01-01 20:31:23.648000 != 1970-01-01 20:31:24.648000
```
But in psql all works fine:
```
postgres=# select '1970-01-01 20:31:23.648000'::timestamp;
timestamp
-------------------------
1970-01-01 20:31:23.648
(1 row)
```
I guess there is a problem with time parsing in asyncpg.
Maybe seconds were rounded somewhere, like this:
```
>>> d
datetime.datetime(1970, 1, 1, 20, 31, 23, 648000)
>>> d.timestamp()
63083.648
>>> int(d.timestamp()) % 60
23
>>> round(d.timestamp()) % 60
24
``` | closed | 2018-09-18T16:21:54Z | 2018-09-18T19:42:16Z | https://github.com/MagicStack/asyncpg/issues/363 | [] | rossnomann | 0 |
google-research/bert | tensorflow | 973 | Question about using function 'tf.squeeze' for selecting 'first_token_tensor' | https://github.com/google-research/bert/blob/cc7051dc592802f501e8a6f71f8fb3cf9de95dc9/modeling.py#L227
I'd like to ask there is a reason to use tf.squeeze rather than below?
```python
first_token_tensor = self.sequence_output[:, 0, :]
```
Is it related with performance issue?
Thanks | open | 2019-12-28T08:08:42Z | 2019-12-28T08:08:42Z | https://github.com/google-research/bert/issues/973 | [] | jihobak | 0 |
PokemonGoF/PokemonGo-Bot | automation | 6,216 | nicodeDecodeError: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128) | ### Steps to Reproduce
Fresh start of the bot after a clean generation of config+auth files
### Output when issue occurred
--------------------Starting bot--------------------
2017-10-10 14:40:02,233 [ cli] [INFO] PokemonGO Bot v1.0
2017-10-10 14:40:02,244 [ cli] [INFO] commit: unknown
2017-10-10 14:40:02,246 [ cli] [INFO] No auth argument specified, checking for C:\Users\birdo\PokemonGo-BootPokemonGo-Bot\pokemongo_bot\..\configs\auth.json
2017-10-10 14:40:02,246 [ cli] [INFO] No config argument specified, checking for C:\Users\birdo\PokemonGo-BootPokemonGo-Bot\pokemongo_bot\..\configs\config.json
2017-10-10 14:40:02,250 [ cli] [INFO] Configuration initialized
2017-10-10 14:40:02,250 [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. For more information:
2017-10-10 14:40:02,252 [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics
2017-10-10 14:40:02,259 [requests.packages.urllib3.connectionpool] [INFO] Starting new HTTP connection (1): www.google-analytics.com
(6532) wsgi starting up on http://127.0.0.1:4000
[2017-10-10 14:40:02] [PokemonGoBot] [INFO] Setting start location.
[2017-10-10 14:40:04] [PokemonGoBot] [INFO] Location found: budapest (47.497912, 19.040235, 0.0)
[2017-10-10 14:40:04] [PokemonGoBot] [INFO] Now at (47.497912, 19.040235, 0.0)
[2017-10-10 14:40:04] [PokemonGoBot] [INFO] Login procedure started.
[2017-10-10 14:40:11] [PokemonGoBot] [INFO] Login successful.
[2017-10-10 14:40:11] [PokemonGoBot] [INFO] Checking for captcha challenge.
[2017-10-10 14:40:11] [PokemonGoBot] [WARNING] No 2captcha token set, executing manual solving
[2017-10-10 14:40:11] [PokemonGoBot] [ERROR] Error with Chromedriver, please ensure it is the latest version.
Exception AttributeError: "'Service' object has no attribute 'process'" in <bound method Service.__del__ of <selenium.webdriver.chrome.service.Service object at 0x0000000008ABC208>> ignored
_inventory was not initialized
_inventory was not initialized
[2017-10-10 14:40:11] [ cli] [INFO]
[2017-10-10 14:40:11] [ cli] [INFO] Ran for 0:00:09
[2017-10-10 14:40:11] [ cli] [INFO] Total XP Earned: 0 Average: 0.00/h
[2017-10-10 14:40:11] [ cli] [INFO] Travelled 0.00km
[2017-10-10 14:40:11] [ cli] [INFO] Visited 0 stops
[2017-10-10 14:40:11] [ cli] [INFO] Encountered 0 pokemon, 0 caught, 0 released, 0 evolved, 0 never seen before ()
[2017-10-10 14:40:11] [ cli] [INFO] Threw 0 pokeballs
[2017-10-10 14:40:11] [ cli] [INFO] Earned 0 Stardust
[2017-10-10 14:40:11] [ cli] [INFO] Hatched eggs 0
[2017-10-10 14:40:11] [ cli] [INFO]
[2017-10-10 14:40:11] [ cli] [INFO] Highest CP Pokemon:
[2017-10-10 14:40:11] [ cli] [INFO] Most Perfect Pokemon:
Traceback (most recent call last):
File "pokecli.py", line 978, in <module>
main()
File "pokecli.py", line 247, in main
bot = start_bot(bot, config)
File "pokecli.py", line 199, in start_bot
bot.start(bot)
File "C:\Users\birdo\PokemonGo-BootPokemonGo-Bot\pokemongo_bot\__init__.py", line 186, in start
self._setup_api()
File "C:\Users\birdo\PokemonGo-BootPokemonGo-Bot\pokemongo_bot\__init__.py", line 1187, in _setup_api
self.login()
File "C:\Users\birdo\PokemonGo-BootPokemonGo-Bot\pokemongo_bot\__init__.py", line 1099, in login
formatted="Login successful."
File "C:\Users\birdo\PokemonGo-BootPokemonGo-Bot\pokemongo_bot\event_manager.py", line 209, in emit
handler.handle_event(event, sender, level, formatted_msg, data)
File "C:\Users\birdo\PokemonGo-BootPokemonGo-Bot\pokemongo_bot\event_handlers\captcha_handler.py", line 98, in handle_event
token = self.get_token(url)
File "C:\Users\birdo\PokemonGo-BootPokemonGo-Bot\pokemongo_bot\event_handlers\captcha_handler.py", line 48, in get_token
sys.exit(1)
NameError: global name 'sys' is not defined
[2017-10-10 14:40:11] [sentry.errors] [ERROR] Sentry responded with an error: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128) (url: https://app.getsentry.com/api/90254/store/)
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\raven\transport\threaded.py", line 174, in send_sync
super(ThreadedHTTPTransport, self).send(data, headers)
File "C:\Python27\lib\site-packages\raven\transport\http.py", line 47, in send
ca_certs=self.ca_certs,
File "C:\Python27\lib\site-packages\raven\utils\http.py", line 66, in urlopen
return opener.open(url, data, timeout)
File "C:\Python27\lib\site-packages\future\backports\urllib\request.py", line 494, in open
response = self._open(req, data)
File "C:\Python27\lib\site-packages\future\backports\urllib\request.py", line 512, in _open
'_open', req)
File "C:\Python27\lib\site-packages\future\backports\urllib\request.py", line 466, in _call_chain
result = func(*args)
File "C:\Python27\lib\site-packages\raven\utils\http.py", line 46, in https_open
return self.do_open(ValidHTTPSConnection, req)
File "C:\Python27\lib\site-packages\future\backports\urllib\request.py", line 1284, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "C:\Python27\lib\httplib.py", line 1057, in request
self._send_request(method, url, body, headers)
File "C:\Python27\lib\httplib.py", line 1097, in _send_request
self.endheaders(body)
File "C:\Python27\lib\httplib.py", line 1053, in endheaders
self._send_output(message_body)
File "C:\Python27\lib\httplib.py", line 895, in _send_output
msg += message_body
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128)
[2017-10-10 14:40:11] [sentry.errors.uncaught] [ERROR] [u"NameError: global name 'sys' is not defined", u' File "pokecli.py", line 978, in <module>', u' File "pokecli.py", line 247, in main', u' File "pokecli.py", line 199, in start_bot', u' File "C:\\Users\\birdo\\PokemonGo-BootPokemonGo-Bot\\pokemongo_bot\\__init__.py", line 186, in start', u' File "C:\\Users\\birdo\\PokemonGo-BootPokemonGo-Bot\\pokemongo_bot\\__init__.py", line 1187, in _setup_api', u' File "C:\\Users\\birdo\\PokemonGo-BootPokemonGo-Bot\\pokemongo_bot\\__init__.py", line 1099, in login', u' File "C:\\Users\\birdo\\PokemonGo-BootPokemonGo-Bot\\pokemongo_bot\\event_manager.py", line 209, in emit', u' File "C:\\Users\\birdo\\PokemonGo-BootPokemonGo-Bot\\pokemongo_bot\\event_handlers\\captcha_handler.py", line 98, in handle_event', u' File "C:\\Users\\birdo\\PokemonGo-BootPokemonGo-Bot\\pokemongo_bot\\event_handlers\\captcha_handler.py", line 48, in get_token']
Something went wrong and the bot needed to be restarted. Please investigate the cause.
Waiting for 56 seconds, press a key to continue ...
2017-10-10 14:40:23,204 [ cli] [INFO] PokemonGO Bot v1.0
2017-10-10 14:44:38,732 [ cli] [INFO] commit: unknown
2017-10-10 14:44:38,733 [ cli] [INFO] No auth argument specified, checking for C:\Users\birdo\PokemonGo-BootPokemonGo-Bot\pokemongo_bot\..\configs\auth.json
2017-10-10 14:44:38,733 [ cli] [INFO] No config argument specified, checking for C:\Users\birdo\PokemonGo-BootPokemonGo-Bot\pokemongo_bot\..\configs\config.json
2017-10-10 14:44:38,739 [ cli] [INFO] Configuration initialized
2017-10-10 14:44:38,740 [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. For more information:
2017-10-10 14:44:38,740 [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics
2017-10-10 14:44:38,759 [requests.packages.urllib3.connectionpool] [INFO] Starting new HTTP connection (1): www.google-analytics.com
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\eventlet\hubs\hub.py", line 457, in fire_timers
timer()
File "C:\Python27\lib\site-packages\eventlet\hubs\timer.py", line 58, in __call__
cb(*args, **kw)
File "C:\Python27\lib\site-packages\eventlet\green\thread.py", line 41, in __thread_body
func(*args, **kwargs)
File "C:\Python27\lib\threading.py", line 774, in __bootstrap
self.__bootstrap_inner()
File "C:\Python27\lib\threading.py", line 814, in __bootstrap_inner
(self.name, _format_exc()))
File "C:\Python27\lib\codecs.py", line 369, in write
data, consumed = self.encode(object, self.errors)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 665: ordinal not in range(128)
---------------------------------------
### Other Information
OS:
Win10
Branch:
master
Git Commit:
6f6ca78eba2a683486e78a8eed2c2f2fd3bb9765'
config.json:valid
Python Version: Python 2.7.12
I am attempting to start the bot with the Windows bat file, but crashes immidiatelly with the above errorlog. | closed | 2017-10-10T13:03:44Z | 2017-10-14T07:44:40Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/6216 | [] | rakuuhwa | 9 |
scikit-tda/kepler-mapper | data-visualization | 5 | How to show the 3D | I am totally new to the topological data analysis. I try to learn some. When I copy the code to the Python Notebook and replace with my own data, The code is ok. But no plot output. Do i need to install other software? Then python I use is Anaconda. | closed | 2017-07-20T23:27:49Z | 2017-07-25T23:30:30Z | https://github.com/scikit-tda/kepler-mapper/issues/5 | [] | LiyuMa | 14 |
pydantic/FastUI | pydantic | 223 | Support for discriminated union component | I have a data structure which is discriminated union like this:
```json
[
{
"type":"foo",
"value":true
},
{
"type":"bar",
"value":3
},
{
"type":"baz",
"upper_bound":4,
"lower_bound":1
}
]
```
would be nice to be a way to make the filter based on the discriminated union,
then I can use it on forms (in modifying/adding data), or in search filter. | open | 2024-02-27T12:11:41Z | 2024-02-27T12:11:41Z | https://github.com/pydantic/FastUI/issues/223 | [] | ManiMozaffar | 0 |
serpapi/google-search-results-python | web-scraping | 19 | Provide a more convenient way to paginate via the Python package | Currently, the way to paginate searches is to get the `serpapi_pagination.current` and increase the `offset` or `start` parameters in the loop. Like with regular HTTP requests to `serpapi.com/search` without an API wrapper.
```python
import os
from serpapi import GoogleSearch
params = {
"engine": "google",
"q": "coffee",
"tbm": "nws",
"api_key": os.getenv("API_KEY"),
}
search = GoogleSearch(params)
results = search.get_dict()
print(f"Current page: {results['serpapi_pagination']['current']}")
for news_result in results["news_results"]:
print(f"Title: {news_result['title']}\nLink: {news_result['link']}\n")
while 'next' in results['serpapi_pagination']:
search.params_dict[
"start"] = results['serpapi_pagination']['current'] * 10
results = search.get_dict()
print(f"Current page: {results['serpapi_pagination']['current']}")
for news_result in results["news_results"]:
print(
f"Title: {news_result['title']}\nLink: {news_result['link']}\n"
)
```
A more convenient way for an official API wrapper would be to provide some function like `search.paginate(callback: Callable)` which will properly calculate offset for the specific search engine and loop through pages until the end.
```python
import os
from serpapi import GoogleSearch
def print_results(results):
print(f"Current page: {results['serpapi_pagination']['current']}")
for news_result in results["news_results"]:
print(f"Title: {news_result['title']}\nLink: {news_result['link']}\n")
params = {
"engine": "google",
"q": "coffee",
"tbm": "nws",
"api_key": os.getenv("API_KEY"),
}
search = GoogleSearch(params)
search.paginate(print_results)
```
@jvmvik @hartator What do you think? | closed | 2021-04-14T09:09:25Z | 2021-06-07T03:18:26Z | https://github.com/serpapi/google-search-results-python/issues/19 | [
"enhancement",
"question"
] | ilyazub | 6 |
vitalik/django-ninja | rest-api | 429 | Delete please | closed | 2022-04-25T13:00:55Z | 2022-04-25T13:14:14Z | https://github.com/vitalik/django-ninja/issues/429 | [] | 6691a | 0 |
|
flaskbb/flaskbb | flask | 522 | typo in editor.js | I'm working on some issues around previewing posts using server-side rendering and so have been digging into the editor.js file. Shouldn't `output` in this line be `msg`?
https://github.com/flaskbb/flaskbb/blob/520fc7a22ffde4203b80dc81646ddc1fc20ee904/flaskbb/themes/aurora/src/js/editor.js#L45 | closed | 2019-04-17T06:35:52Z | 2019-04-25T08:15:41Z | https://github.com/flaskbb/flaskbb/issues/522 | [
"bug"
] | bmjjr | 2 |
pyg-team/pytorch_geometric | pytorch | 9,469 | The default `bias = 0.0` of PGExplainer frequently results in `edge_mask` values containing NaNs. | ### 🐛 Describe the bug
Setting a small bias value prevents this issue. However, the bias hyperparameter is not mentioned in the PyG documentation:
https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.explain.algorithm.PGExplainer.html?highlight=PGExplainer
The occurrence of NaN values can be confusing for users, requiring them to investigate the cause, which can be time-consuming.
The original implementation by the authors of PGExplainer sets `bias = 0.01` as the default:
https://github.com/flyingdoog/PGExplainer/blob/51edc7b3000e980635f68618af027ae24297f6e5/codes/Explainer.py#L122
https://github.com/flyingdoog/PGExplainer/issues/2#issuecomment-733462580
For convenience and to prevent confusion, I suggest setting the default bias to `bias = 0.01`, consistent with the authors' implementation.
# description
```python
def _concrete_sample(self, logits: Tensor,
temperature: float = 1.0) -> Tensor:
bias = self.coeffs['bias']
eps = (1 - 2 * bias) * torch.rand_like(logits) + bias
return (eps.log() - (1 - eps).log() + logits) / temperature
```
The current default bias of PGExplainer is 0.0. Since `torch.rand_like()` can output values close to 0 or 1, setting `bias = 0.0` can cause `eps` to also be close to 0 or 1. This results in `eps.log()` or `(1 - eps).log()` becoming `-inf`, which in turn leads to NaN model parameters after the update.
https://github.com/pyg-team/pytorch_geometric/blob/44007acc55a28089f673aba9f830a00a773726ea/torch_geometric/explain/algorithm/pg_explainer.py#L237
### Versions
PyG version: 2.3.0
PyTorch version: 2.0.1
OS: Ubuntu Server
Python version: 3.10.14
How you installed PyTorch and PyG (conda, pip, source): conda | closed | 2024-06-29T09:16:29Z | 2024-07-02T10:47:08Z | https://github.com/pyg-team/pytorch_geometric/issues/9469 | [
"bug"
] | kano5266 | 0 |
jupyterhub/jupyterhub-deploy-docker | jupyter | 55 | 500 Error when starting Notebook | I have followed the instructions to get setup with JupyterHub and the GitHub Authenticator. However, I am using my own Docker image, built from jupyter/base-notebook. Once I log in and authenticate, I get the following message:
```
500 : Internal Server Error
Redirect loop detected. Notebook has jupyterhub version unknown (likely < 0.8), but the Hub expects 0.8.0. Try installing jupyterhub==0.8.0 in the user environment if you continue to have problems.
You can try restarting your server from the home page.
```
What does this mean exactly? Is this error coming from my individual notebook server?
| closed | 2017-11-09T14:33:14Z | 2017-11-09T15:03:04Z | https://github.com/jupyterhub/jupyterhub-deploy-docker/issues/55 | [] | rwmajor2 | 2 |
graphql-python/graphene | graphql | 1,552 | More documentation for DisableIntrospection? | The docs currently give this example:
```
validation_errors = validate(
schema=schema.graphql_schema,
document_ast=parse('THE QUERY'),
rules=(
DisableIntrospection,
)
)
```
But it's not obvious how to set `document_ast`, and leave it as an exercise to the reader assuming knowledge on Query AST. How should this be setup to disable introspection across all queries?
(Appreciate you might prefer support requests elsewhere, but no hits on my SO post on this https://stackoverflow.com/questions/78100152/how-to-disable-introspection-on-graphene-python/78498422#78498422 and see it was also raised in https://github.com/graphql-python/graphene/issues/1388#issuecomment-1282388489)
| open | 2024-05-21T04:05:35Z | 2024-06-22T09:49:33Z | https://github.com/graphql-python/graphene/issues/1552 | [
"✨ enhancement",
"📖 documentation"
] | dwjp | 0 |
vvbbnn00/WARP-Clash-API | flask | 180 | 请问安卓使用哪个客户端可以正常使用呢? | ClashA 3.0.3订阅后启动会闪退
MetaClash订阅后启动不生效
Sing-Box订阅不了
请问一下安卓哪个客户端可以正常使用? | closed | 2024-04-28T06:14:06Z | 2024-04-28T06:41:29Z | https://github.com/vvbbnn00/WARP-Clash-API/issues/180 | [] | hitsword | 0 |
matplotlib/matplotlib | data-science | 29,704 | Option to add FancyBboxPatch around quiverkey() | ### Bug summary
Pretty self explanatory.
Grok has this to say:
<details><summary>Grok's advice</summary>
<p>
Thank you for sharing the full traceback and your Matplotlib version (3.10). The error AttributeError: PolyCollection.set() got an unexpected keyword argument 'bbox' occurring within QuiverKey._init() suggests that the bbox parameter, while intended for the label of the quiverkey, is being incorrectly passed down to the underlying PolyCollection object (which draws the arrow), where it’s not a valid keyword. This seems like a bug or misunderstanding in how Matplotlib 3.10 handles quiverkey with bbox, especially in combination with Cartopy.
The bbox parameter is supposed to apply to the label text, not the arrow itself, but the traceback shows it’s being propagated incorrectly. Since the first snippet didn’t work, and you’re on a recent version (Matplotlib 3.10), let’s pivot to the alternative approach using axes.text with a bbox, which avoids this issue entirely by separating the label and its box from the quiverkey arrow.
</p>
</details>
### Code for reproduction
```Python
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12, 6))
axes = plt.axes(projection=ccrs.PlateCarree())
axes.coastlines()
# Simple quiver plot
q = axes.quiver([0], [0], [1], [1], transform=ccrs.PlateCarree())
axes.quiverkey(q, X=0.85, Y=0.95, U=1, label='1 unit', labelpos='E',
bbox=dict(facecolor='white', edgecolor='black', boxstyle='round,pad=0.5'))
plt.show()
```
### Actual outcome
AttributeError Traceback (most recent call last)
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/IPython/core/formatters.py:402, in BaseFormatter.__call__(self, obj)
400 pass
401 else:
--> 402 return printer(obj)
403 # Finally look for special method names
404 method = get_real_method(obj, self.print_method)
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/IPython/core/pylabtools.py:170, in print_figure(fig, fmt, bbox_inches, base64, **kwargs)
167 from matplotlib.backend_bases import FigureCanvasBase
168 FigureCanvasBase(fig)
--> 170 fig.canvas.print_figure(bytes_io, **kw)
171 data = bytes_io.getvalue()
172 if fmt == 'svg':
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/backend_bases.py:2155, in FigureCanvasBase.print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, pad_inches, bbox_extra_artists, backend, **kwargs)
2152 # we do this instead of `self.figure.draw_without_rendering`
2153 # so that we can inject the orientation
2154 with getattr(renderer, "_draw_disabled", nullcontext)():
-> 2155 self.figure.draw(renderer)
2156 if bbox_inches:
2157 if bbox_inches == "tight":
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/artist.py:94, in _finalize_rasterization.<locals>.draw_wrapper(artist, renderer, *args, **kwargs)
92 @wraps(draw)
93 def draw_wrapper(artist, renderer, *args, **kwargs):
---> 94 result = draw(artist, renderer, *args, **kwargs)
95 if renderer._rasterizing:
96 renderer.stop_rasterizing()
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/artist.py:71, in allow_rasterization.<locals>.draw_wrapper(artist, renderer)
68 if artist.get_agg_filter() is not None:
69 renderer.start_filter()
---> 71 return draw(artist, renderer)
72 finally:
73 if artist.get_agg_filter() is not None:
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/figure.py:3257, in Figure.draw(self, renderer)
3254 # ValueError can occur when resizing a window.
3256 self.patch.draw(renderer)
-> 3257 mimage._draw_list_compositing_images(
3258 renderer, self, artists, self.suppressComposite)
3260 renderer.close_group('figure')
3261 finally:
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/image.py:134, in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
132 if not_composite or not has_images:
133 for a in artists:
--> 134 a.draw(renderer)
135 else:
136 # Composite any adjacent images together
137 image_group = []
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/artist.py:71, in allow_rasterization.<locals>.draw_wrapper(artist, renderer)
68 if artist.get_agg_filter() is not None:
69 renderer.start_filter()
---> 71 return draw(artist, renderer)
72 finally:
73 if artist.get_agg_filter() is not None:
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/cartopy/mpl/geoaxes.py:524, in GeoAxes.draw(self, renderer, **kwargs)
519 self.imshow(img, extent=extent, origin=origin,
520 transform=factory.crs, *factory_args[1:],
521 **factory_kwargs)
522 self._done_img_factory = True
--> 524 return super().draw(renderer=renderer, **kwargs)
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/artist.py:71, in allow_rasterization.<locals>.draw_wrapper(artist, renderer)
68 if artist.get_agg_filter() is not None:
69 renderer.start_filter()
---> 71 return draw(artist, renderer)
72 finally:
73 if artist.get_agg_filter() is not None:
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/axes/_base.py:3181, in _AxesBase.draw(self, renderer)
3178 if artists_rasterized:
3179 _draw_rasterized(self.get_figure(root=True), artists_rasterized, renderer)
-> 3181 mimage._draw_list_compositing_images(
3182 renderer, self, artists, self.get_figure(root=True).suppressComposite)
3184 renderer.close_group('axes')
3185 self.stale = False
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/image.py:134, in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
132 if not_composite or not has_images:
133 for a in artists:
--> 134 a.draw(renderer)
135 else:
136 # Composite any adjacent images together
137 image_group = []
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/artist.py:71, in allow_rasterization.<locals>.draw_wrapper(artist, renderer)
68 if artist.get_agg_filter() is not None:
69 renderer.start_filter()
---> 71 return draw(artist, renderer)
72 finally:
73 if artist.get_agg_filter() is not None:
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/quiver.py:400, in QuiverKey.draw(self, renderer)
398 @martist.allow_rasterization
399 def draw(self, renderer):
--> 400 self._init()
401 self.vector.draw(renderer)
402 pos = self.get_transform().transform((self.X, self.Y))
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/quiver.py:379, in QuiverKey._init(self)
377 kwargs = self.Q.polykw
378 kwargs.update(self.kw)
--> 379 self.vector = mcollections.PolyCollection(
380 self.verts,
381 offsets=[(self.X, self.Y)],
382 offset_transform=self.get_transform(),
383 **kwargs)
384 if self.color is not None:
385 self.vector.set_color(self.color)
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/collections.py:1210, in PolyCollection.__init__(self, verts, sizes, closed, **kwargs)
1190 def __init__(self, verts, sizes=None, *, closed=True, **kwargs):
1191 """
1192 Parameters
1193 ----------
(...)
1208 Forwarded to `.Collection`.
1209 """
-> 1210 super().__init__(**kwargs)
1211 self.set_sizes(sizes)
1212 self.set_verts(verts, closed)
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/collections.py:209, in Collection.__init__(self, edgecolors, facecolors, linewidths, linestyles, capstyle, joinstyle, antialiaseds, offsets, offset_transform, norm, cmap, colorizer, pickradius, hatch, urls, zorder, **kwargs)
206 self._offset_transform = offset_transform
208 self._path_effects = None
--> 209 self._internal_update(kwargs)
210 self._paths = None
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/artist.py:1233, in Artist._internal_update(self, kwargs)
1226 def _internal_update(self, kwargs):
1227 """
1228 Update artist properties without prenormalizing them, but generating
1229 errors as if calling `set`.
1230
1231 The lack of prenormalization is to maintain backcompatibility.
1232 """
-> 1233 return self._update_props(
1234 kwargs, "{cls.__name__}.set() got an unexpected keyword argument "
1235 "{prop_name!r}")
File ~/Miniforge3/envs/RSL/lib/python3.13/site-packages/matplotlib/artist.py:1206, in Artist._update_props(self, props, errfmt)
1204 func = getattr(self, f"set_{k}", None)
1205 if not callable(func):
-> 1206 raise AttributeError(
1207 errfmt.format(cls=type(self), prop_name=k),
1208 name=k)
1209 ret.append(func(v))
1210 if ret:
AttributeError: PolyCollection.set() got an unexpected keyword argument 'bbox'
### Expected outcome
A box around the text with the specified properties.
### Additional information
_No response_
### Operating system
_No response_
### Matplotlib Version
3.10.0
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
None | open | 2025-03-04T22:49:45Z | 2025-03-08T01:49:19Z | https://github.com/matplotlib/matplotlib/issues/29704 | [
"New feature"
] | bbuzz31 | 7 |
huggingface/transformers | machine-learning | 36,783 | Throw messages in text-generation task with deepseek r1 with PEFTModel | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.49.0
- Platform: Linux-5.15.0-134-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.29.3
- Safetensors version: 0.5.3
- Accelerate version: 1.3.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 0
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- deepspeed_config: {'deepspeed_config_file': '/opt/config/train_config.json', 'zero3_init_flag': True}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: 0.16.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@ArthurZucker @Rocketknight1 @muellerzr
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import pipeline, AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer
from peft import PeftModel
ADAPTER_PATH = "./output/adapter/mnc_adapter"
BASE_PATH = "./output/model"
BNB_CONFG = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
)
# input
text = "Who is a Elon Musk?"
model = AutoModelForCausalLM.from_pretrained(
BASE_PATH,
quantization_config=BNB_CONFG,
torch_dtype=torch.float16,
device_map = 'auto',
)
tokenizer = AutoTokenizer.from_pretrained(BASE_PATH)
lora_model = PeftModel.from_pretrained(
model,
ADAPTER_PATH,
quantization_config=BNB_CONFG,
torch_dtype=torch.float16,
device_map = 'auto',
)
default_generator = pipeline(
task="text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
torch_dtype=torch.float16
)
print(f"this is base model result: {default_generator(text)}")
lora_generator = pipeline(
task="text-generation",
model=lora_model,
tokenizer=tokenizer,
device_map="auto",
torch_dtype=torch.float16
)
print(f"this is lora model result: {lora_generator(text)}")
```
1. execute `lora_generator(text)`
2. output warning messages with followings
3. With my debugging, `transformers/pipelines/base.py` that section was problems
```python
def check_model_type(self, supported_models: Union[List[str], dict]):
"""
Check if the model class is in supported by the pipeline.
Args:
supported_models (`List[str]` or `dict`):
The list of models supported by the pipeline, or a dictionary with model class values.
"""
if not isinstance(supported_models, list): # Create from a model mapping
supported_models_names = []
for _, model_name in supported_models.items():
# Mapping can now contain tuples of models for the same configuration.
if isinstance(model_name, tuple):
supported_models_names.extend(list(model_name))
else:
supported_models_names.append(model_name)
if hasattr(supported_models, "_model_mapping"):
for _, model in supported_models._model_mapping._extra_content.items():
if isinstance(model_name, tuple):
supported_models_names.extend([m.__name__ for m in model])
else:
supported_models_names.append(model.__name__)
supported_models = supported_models_names
if self.model.__class__.__name__ not in supported_models:
logger.error(
f"The model '{self.model.__class__.__name__}' is not supported for {self.task}. Supported models are"
f" {supported_models}."
)
```
### Expected behavior
without unsupported models message.
This error might be occured the deepseek model was not in `supported_models` List
* The pipeline was successfully worked, but I wanna remove this annoying message
```python
python hug_inference.py
/root/workspace/lora_test/.venv/lib/python3.10/site-packages/transformers/quantizers/auto.py:206: UserWarning: You passed `quantization_config` or equivalent parameters to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` from the model will be used.
warnings.warn(warning_msg)
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:07<00:00, 1.12it/s]
Device set to use cuda:0
/root/workspace/lora_test/.venv/lib/python3.10/site-packages/bitsandbytes/nn/modules.py:451: UserWarning: Input type into Linear4bit is torch.float16, but bnb_4bit_compute_dtype=torch.float32 (default). This will lead to slow inference or training speed.
warnings.warn(
this is base model result: [{'generated_text': "Who is a Elon Musk? Well, he's a business magnate, investor, and entrepreneur. He's known for his ambitious"}]
Device set to use cuda:0
The model 'PeftModel' is not supported for text-generation. Supported models are ['AriaTextForCausalLM', 'BambaForCausalLM', 'BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'LlamaForCausalLM', 'CodeGenForCausalLM', 'CohereForCausalLM', 'Cohere2ForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'DbrxForCausalLM', 'DiffLlamaForCausalLM', 'ElectraForCausalLM', 'Emu3ForCausalLM', 'ErnieForCausalLM', 'FalconForCausalLM', 'FalconMambaForCausalLM', 'FuyuForCausalLM', 'GemmaForCausalLM', 'Gemma2ForCausalLM', 'GitForCausalLM', 'GlmForCausalLM', 'GotOcr2ForConditionalGeneration', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'GraniteForCausalLM', 'GraniteMoeForCausalLM', 'GraniteMoeSharedForCausalLM', 'HeliumForCausalLM', 'JambaForCausalLM', 'JetMoeForCausalLM', 'LlamaForCausalLM', 'MambaForCausalLM', 'Mamba2ForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'MllamaForCausalLM', 'MoshiForCausalLM', 'MptForCausalLM', 'MusicgenForCausalLM', 'MusicgenMelodyForCausalLM', 'MvpForCausalLM', 'NemotronForCausalLM', 'OlmoForCausalLM', 'Olmo2ForCausalLM', 'OlmoeForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PersimmonForCausalLM', 'PhiForCausalLM', 'Phi3ForCausalLM', 'PhimoeForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RecurrentGemmaForCausalLM', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'WhisperForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM', 'ZambaForCausalLM', 'Zamba2ForCausalLM'].
this is lora model result: [{'generated_text': "Who is a Elon Musk? I mean, I know he's a business magnate or something, but what has he actually done"}]
``` | open | 2025-03-18T04:54:56Z | 2025-03-21T16:17:18Z | https://github.com/huggingface/transformers/issues/36783 | [
"bug"
] | falconlee236 | 9 |
Farama-Foundation/Gymnasium | api | 361 | [Bug Report] Blackjack-v1 env observation state return type is different from docs | ### Describe the bug
Blackjack-v1 environment docs, observation space return as tuple(int(), int(), int()) https://gymnasium.farama.org/environments/toy_text/blackjack/
but, it return tuple(int(), int(), boolen())
```python
import gymnasium as gym
env = gym.make('Blackjack-v1', natural=False, sab=False)
s, _ = env.reset()
print(s)
print(type(s[0]), type(s[1]), type(s[2]))
```
output is
```
<class 'tuple'>
(11, 6, False)
<class 'int'> <class 'int'> <class 'bool'>
```
### Code example
```shell
import gymnasium as gym
env = gym.make('Blackjack-v1', natural=False, sab=False)
s, _ = env.reset()
print(s)
print(type(s[0]), type(s[1]), type(s[2]))
```
### System info
gymnasium=0.27.1
### Additional context
gymnasium/envs/toy_text/blackjack.py line: 28
```python
return 1 in hand and sum(hand) + 10 <= 21
```
to
```python
return int(1 in hand and sum(hand) + 10 <= 21)
```
can be solution
_No response_
### Checklist
- [x] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-03-06T06:42:30Z | 2023-03-08T11:46:19Z | https://github.com/Farama-Foundation/Gymnasium/issues/361 | [
"bug"
] | devhoodit | 3 |
GibbsConsulting/django-plotly-dash | plotly | 18 | Dash opens its own DB connections to Django | This is not necessarily an issue but I have noticed that Dash must be opening it's own session with the database.
I noticed this when I was accessing a database that had user specific security policies applied. In order to use a Django defined model, I had to send to the database the user id for Dash and Django separately to acquire data from the database. This makes sense considering that Dash is loaded in as an IFrame.
Are there plans on the roadmap to integrate the two more closely? Is this even practical in your opinion or will this need recoding by Plotly before this is possible?
| closed | 2018-07-23T21:45:38Z | 2018-11-03T09:31:48Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/18 | [
"question"
] | eddy-ojb | 4 |
flairNLP/flair | nlp | 2,786 | HIPE-2022: Add all test data | Hi,
test data for the HIPE-2022 shared task is released and should be added to the `NER_HIPE_2022` data loader. | closed | 2022-05-20T08:58:17Z | 2022-06-18T08:03:35Z | https://github.com/flairNLP/flair/issues/2786 | [] | stefan-it | 1 |
SCIR-HI/Huatuo-Llama-Med-Chinese | nlp | 105 | 请问考虑释放基于llama-13B的huatuo模型吗? | 13B相比于7B应该会表现更好,尤其是在细粒度要求更高的医学领域吧。如果能释放模型,感谢 | open | 2024-01-30T07:19:43Z | 2024-01-30T07:19:43Z | https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese/issues/105 | [] | jzssz | 0 |
eriklindernoren/ML-From-Scratch | deep-learning | 9 | Tiene probela | closed | 2017-03-02T18:35:39Z | 2017-03-02T18:37:22Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/9 | [] | Anon23456 | 0 |
|
jupyterlab/jupyter-ai | jupyter | 306 | Custom inference endpoints | <!-- Welcome! Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
<!--
Thanks for thinking of a way to improve JupyterLab. If this solves a problem for you, then it probably solves that problem for lots of people! So the whole community will benefit from this request.
Before creating a new feature request please search the issues for relevant feature requests.
-->
### Problem
<!-- Provide a clear and concise description of what problem this feature will solve. For example:
* I'm always frustrated when [...] because [...]
* I would like it if [...] happened when I [...] because [...]
--> A lot of enterprises are building their own llm models, can we use them instead of chatgpt/hugging face, etc. Sagemaker is one option but I should be able to provide just an inference endpoint to use this for prompts in jupyter lab!
### Proposed Solution
<!-- Provide a clear and concise description of a way to accomplish what you want. For example:
* Add an option so that when [...] [...] will happen
-->A lot of enterprises are building their own llm models, can we use them instead of chatgpt/hugging face, etc. Sagemaker is one option but I should be able to provide just an inference endpoint to use this for prompts in jupyter lab!
### Additional context
<!-- Add any other context or screenshots about the feature request here. You can also include links to examples of other programs that have something similar to your request. For example:
* Another project [...] solved this by [...]
-->
| closed | 2023-07-31T15:55:27Z | 2023-12-29T00:29:56Z | https://github.com/jupyterlab/jupyter-ai/issues/306 | [
"duplicate",
"enhancement"
] | himsgpt | 6 |
koxudaxi/datamodel-code-generator | pydantic | 1,362 | 0.18.0 introduced a breaking change - UnionIntFloat | **Describe the bug**
When upgrading from 0.17.2 to 0.18.0 or any later version, I get the following error on the identical YAML:
```
File "/Users/matt.holden/.cache/pre-commit/repoi7nrpoo1/py_env-3.11/lib/python3.11/site-packages/datamodel_code_generator/model/pydantic/types.py", line 251, in <dictcomp>
kwargs={k: Decimal(v) for k, v in data_type_kwargs.items()},
^^^^^^^^^^
TypeError: conversion from UnionIntFloat to Decimal is not supported
**To Reproduce**
Run my (quite complex) YAML against 0.17.2 or any earlier version, see it work. Run it against 0.18.0 or any newer version, see it fail. I've tried with `target-python-version` of both `3.11` and `3.10`, no change.
Example schema:
**Because there is no traceability in errors (see https://github.com/koxudaxi/datamodel-code-generator/issues/1330) I can't identify where in my huge YAML this is. I've tried ensuring all `number` fields in my OpenAPI document have `type: double`, as this is the only way I could see having a field that didn't know if it was a int or a float, but no change. I'm happy to submit samples if I can get more of an idea where to look for the problem; our YAML is fairly enormous and complex and posting it all would just give everyone a migraine.
Used commandline:
```
#!/usr/bin/env bash
set -e
# clean
rm -vrf app/v1/models/*
cat >app/v1/models/base.py <<EOL
from decimal import Decimal
from pydantic import validator
from pydantic.main import BaseModel as PydanticBaseModel, Extra
class BaseModel(PydanticBaseModel):
class Config:
json_encoders = {Decimal: str}
extra = Extra.forbid
@validator('*', pre=True, each_item=True)
def strict_decimal(cls, v, field):
if field.type_ == Decimal:
assert type(v) in {str, Decimal}, f'must be a string'
return v
EOL
# strict-nullable: https://github.com/koxudaxi/datamodel-code-generator/issues/327#issuecomment-775236310
datamodel-codegen --input descriptions/v1/api.bundled.yaml --output app/v1/models/ \
--use-standard-collections \
--enum-field-as-literal all \
--target-python-version 3.11 \
--disable-timestamp \
--strict-nullable \
--use-subclass-enum \
--base-class app.v1.models.base.BaseModel
```
**Expected behavior**
The same YAML that worked in 0.17.2 works in 0.18.0 and up
**Version:**
- OS: MacOS
- Python version: 3.11.3
- datamodel-code-generator version: [e.g. 18]
**Additional context**
Add any other context about the problem here.
| closed | 2023-06-09T17:35:35Z | 2023-08-09T17:14:34Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1362 | [
"bug"
] | St0rmdrag0n | 0 |
paperless-ngx/paperless-ngx | django | 8,918 | [BUG] When attempting to add more than 25 Document Links, new links become "Not Found" | ### Description
It seems I can only link 25 together in a related doc reference.
The reason I feel it's a bug is that it lets me link as many as I wish (I currently wish to link 36). But after 25, new links turn into 'Not Found" (as shown in the attached screenshot), silently failing.

### Steps to reproduce
1. make a doc
2. Add a custom field "Document Link"
3. add more than 25 other docs
4. Try to link the additional 25+ docs to the original using the new custom field "document link"
5. Go back to the original doc and see if all 26+ documents are linked.
### Webserver logs
```bash
No log entries are generated when adding links.
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.5
### Host OS
Unraid 7 / ghcr.io/paperless-ngx/paperless-ngx
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.14.5",
"server_os": "Linux-6.6.68-Unraid-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 90990558269440,
"available": 56558888435712
},
"database": {
"type": "sqlite",
"url": "/usr/src/paperless/data/db.sqlite3",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "documents.1061_workflowactionwebhook_as_json",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://redis:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2025-01-26T14:32:23.883353-05:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2025-01-26T19:05:01.974489Z",
"classifier_error": null
}
}
```
### Browser
Firefox 134.0.2
### Configuration changes
Only real overrides are
PAPERLESS_FILENAME_FORMAT = {correspondent}/{document_type}/{created_year}-{created_month}-{created_day} - {title}, {tag_list}
PAPERLESS_DATE_ORDER = MDY
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-01-26T19:42:28Z | 2025-02-26T03:09:49Z | https://github.com/paperless-ngx/paperless-ngx/issues/8918 | [
"bug",
"frontend"
] | matthewdavis | 1 |
pbugnion/gmaps | jupyter | 55 | support only about 1500 points. Doesn't work for more points | I have a list of 200,000 lat,lon pairs. The heatmap works fine until 1500 points. When set to 2000, it just show a regular map without heat. Is that there's a limit on how many points can be supported?
The code is running on jupyter in datascientist workbench with spark
import gmaps
longitude_list = sqlContext.sql("SELECT lon From table_15_1")
latitude_list = sqlContext.sql("SELECT lat From table_15_1")
data = []
for lat,lon in zip(latitude_list.collect()[:2000], longitude_list.collect()[:2000]):
point = []
point.append(lat)
point.append(lon)
data.append(point)
gmaps.heatmap(data)
| closed | 2016-03-28T01:03:18Z | 2016-04-02T11:11:46Z | https://github.com/pbugnion/gmaps/issues/55 | [] | beizhao215 | 1 |
taverntesting/tavern | pytest | 542 | Saving response with correct format... trying to use it in other stage... empty | I just recently updated to 1.0.0
I was saving a response in an anchor stage. Then that anchor was used throughout other steps, and the saved response was used in the next stage of the test. It seems that even though I am saving the response it is bringing in an empty string.
I had issues first getting the correct new format, but once I figured that out it seemed to be correct. Now issue is reusing that var. | closed | 2020-04-23T15:40:19Z | 2020-05-01T15:17:22Z | https://github.com/taverntesting/tavern/issues/542 | [] | rebecca-makar | 2 |
miguelgrinberg/python-socketio | asyncio | 80 | Accessing underlying app when using AsyncServer | With the current implementation, the `app` that `socketio.AsyncServer` attaches to is unavailable in `on` method calls.
For example, when using python-socketio with sanic, we're unable to access the request object (which provides access to `request.app`).
```
# in __init__.py file
from myproj.sio import sio
...
sio.attach(app)
# in sio.py
@sio.on('my event')
def test_message(sid, message):
return app.config['my_setting']
```
I've manually resorted to calling `sio.app = app` to access the app instance. But I'm wondering if you had a reason for not providing `AsyncServer.app`? | closed | 2017-03-10T21:02:52Z | 2019-08-04T22:57:52Z | https://github.com/miguelgrinberg/python-socketio/issues/80 | [
"investigate"
] | dfee | 5 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,380 | Cull jupyter-user pods that were started before hub restart | <!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Proposed change
[User pods do not have ownerReferences set](https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/2837) and are not cleaned up when the `hub` pod is restarted.
The hub culling mechanism doesn't work on these "dangling pods"
### Who would use this feature?
This would benefit all z2jh admins.
### (Optional): Suggest a solution
<!-- Describe what you think needs to be done. Doing that is an excellent first step to get the feature implemented. -->
I'm not sure how to implement this-- ownerReferences is the way I know, but that would disrupt user pods during maintenance. Maybe on startup the hub pod can register jupyter-* pods that are already up?
| closed | 2024-03-26T01:52:32Z | 2024-03-26T15:08:51Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3380 | [
"enhancement"
] | asmacdo | 3 |
zappa/Zappa | django | 439 | [Migrated] Async Tasks DynamoDB in VPC? | Originally from: https://github.com/Miserlou/Zappa/issues/1149 by [Erstwild](https://github.com/Erstwild)
Is the DynamoDB instance for async task handling created in the VPC specified in "vpc_config"? Or is that only for the Lambda functions? If not, what would be the best way to go about configuring that? | closed | 2021-02-20T08:32:57Z | 2024-04-13T16:17:55Z | https://github.com/zappa/Zappa/issues/439 | [
"aws",
"feature-request",
"no-activity",
"auto-closed"
] | jneves | 2 |
healthchecks/healthchecks | django | 123 | Telegram integration not working with supergroups | The telegram integration is working great! I can integrate with a single user or even add the bot to a group and it will message me/us everytime a check fails.
But when adding the bot to a supergroup it will not work. I try to `/start`/`/start@bot_name` and it will simply not reply to me. | closed | 2017-05-22T14:44:35Z | 2017-05-22T19:52:24Z | https://github.com/healthchecks/healthchecks/issues/123 | [] | bellini666 | 2 |
dfki-ric/pytransform3d | matplotlib | 163 | How to update the transformation in TransformManager? Should remove the old transformation and add again? | Just like the example, In real robot control, if we change the transformation, such as ee2robot, object2cam, how should we pass the new data to the defined transformation ,tm.add_transform("camera", "robot", cam2robot), should we delete the old one and add again? Moreover, how we plot the whole TransformManager in animation to avoid plot the old one, just keep the newest frame.
<img width="599" alt="bugs---1" src="https://user-images.githubusercontent.com/45029643/133107331-96c0e4bd-01d7-4684-8a88-b74da6578a0d.png">
import numpy as np
import matplotlib.pyplot as plt
from pytransform3d import rotations as pr
from pytransform3d import transformations as pt
from pytransform3d.transform_manager import TransformManager
random_state = np.random.RandomState(0)
ee2robot = pt.transform_from_pq(
np.hstack((np.array([0.4, -0.3, 0.5]),
pr.random_quaternion(random_state))))
cam2robot = pt.transform_from_pq(
np.hstack((np.array([0.0, 0.0, 0.8]), pr.q_id)))
object2cam = pt.transform_from(
pr.active_matrix_from_intrinsic_euler_xyz(np.array([0.0, 0.0, -0.5])),
np.array([0.5, 0.1, 0.1]))
tm = TransformManager()
tm.add_transform("end-effector", "robot", ee2robot)
tm.add_transform("camera", "robot", cam2robot)
tm.add_transform("object", "camera", object2cam)
tm.add_transform("end-effector", "robot", object2cam)
ee2object = tm.get_transform("end-effector", "object")
ax = tm.plot_frames_in("robot", s=0.1)
ax.set_xlim((-0.25, 0.75))
ax.set_ylim((-0.5, 0.5))
ax.set_zlim((0.0, 1.0))
plt.show()
| closed | 2021-09-13T14:58:55Z | 2021-09-13T15:12:59Z | https://github.com/dfki-ric/pytransform3d/issues/163 | [] | caimingxue | 2 |
JaidedAI/EasyOCR | pytorch | 786 | Repository with all models in ONNX format | In this [Drive folder](https://drive.google.com/drive/folders/1n_LOrJHkMVcZhyCgg37PYMAcsJ7_Sxsn?usp=sharing) you can find all EasyOCR models in ONNX format covering all currently available languages. **Look at the text file** in the folder to see which one corresponds to the language you are using.
[ONNX](https://onnx.ai/) is an agnostic and standardized format for storing models. Models in ONNX format can be imported and used in several Runtimes including PyTorch, TensorFlow and OpenVINO. The ONNX team also provides its own runtine, [ONNX Runtime](https://onnxruntime.ai/). This Runtime allows development in several languages, operating systems and acceleration hardware. By owning the EasyOCR models in ONNX it is possible to cover new scenarios where the use of Python and/or PyTorch is not adequate.
In my use case, I am using the EasyOCR and ONNX Runtime models to be able to develop C++ applications. This makes it much easier to bring EasyOCR functionalities into production.
In [this issue](https://github.com/JaidedAI/EasyOCR/issues/746) you can find a general guide on how I modified EasyOCR to export the models, originally developed in PyTorch, to ONNX. Note that the export process explained in the guide requires CUDA acceleration. As not everyone has access to such hardware, I have decided to share a copy of the models already exported.
The models I make available are a **first version**; I may optimize the exported models in the coming weeks. | open | 2022-07-17T02:33:35Z | 2024-08-13T02:24:19Z | https://github.com/JaidedAI/EasyOCR/issues/786 | [] | Kromtar | 19 |
huggingface/datasets | nlp | 6,663 | `write_examples_on_file` and `write_batch` are broken in `ArrowWriter` | ### Describe the bug
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636: the order of the columns is no longer kept in sync with the schema, so these functions fail unless the column order happens to match.
### Steps to reproduce the bug
Try to do `write_batch` with anything that has many columns, and it's likely to break.
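A minimal sketch of what I mean (hypothetical column names; the point is that the batch keys are ordered differently from the schema):

```python
from datasets import Features, Value
from datasets.arrow_writer import ArrowWriter

features = Features({"a": Value("int64"), "b": Value("string")})
writer = ArrowWriter(features=features, path="out.arrow")
writer.write_batch({"b": ["x", "y"], "a": [1, 2]})  # keys not in schema order
writer.finalize()
```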
### Expected behavior
I expect these functions to work, instead of trying to cast a column to the wrong type.
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-5.15.0-1040-aws-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | closed | 2024-02-15T01:43:27Z | 2024-02-16T09:25:00Z | https://github.com/huggingface/datasets/issues/6663 | [] | bryant1410 | 3 |
onnx/onnx | scikit-learn | 6,279 | --- | # Bug Report
### Is the issue related to model conversion?
I don't know; I'm new to this.
### Describe the bug
I am following this tutorial:
https://www.youtube.com/watch?v=_6yP3Gv04-w
and when doing the step at 19:40 I receive the following error:
```
Traceback (most recent call last):
  File "C:\Users\Domin\Desktop\convert_stable_diffusion_checkpoint_to_onnx.py", line 20, in <module>
    import onnx
  File "C:\Users\Domin\miniconda3\envs\sd39new\lib\site-packages\onnx\__init__.py", line 77, in <module>
    from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: A dynamic link library (DLL) initialization routine failed.
```
### System information
- OS: Windows 23H2
- ONNX version: unknown
- Python version: Miniconda 3
### Reproduction instructions
I don't really know what I am doing, so I don't know how to reproduce it beyond the tutorial steps above.
### Expected behavior
I expected it to just install.
### Notes
Sorry, I am just following this tutorial, so I don't know how to properly file a bug report.
| closed | 2024-08-05T12:02:59Z | 2025-01-21T12:45:40Z | https://github.com/onnx/onnx/issues/6279 | [
"bug"
] | Dodonicc | 2 |
aio-libs/aiomysql | sqlalchemy | 413 | When minsize is reached, acquire() returns an existing connection instead of a new one | Code as follows:
```python
import aiomysql
import asyncio
from aiomysql.cursors import DictCursor
class MySQL_Pool(object):
async def init_pool(self):
self.pool = await aiomysql.create_pool(host='10.16.xx.xx',
port=3306, minsize=5, maxsize=50, user='k8sadmin',
password='123456', db='kubernetes_backend')
return self.pool
class MySQL_Conn(object):
def __init__(self, pool):
self.pool = pool
async def get_conn(self):
async with self.pool.acquire() as self.conn:
print(self.conn)
pass
def close_conn(self):
self.pool.release(self.conn)
async def execute(self, sql):
async with self.conn.cursor(DictCursor) as cur:
await cur.execute(sql)
r = await cur.fetchall()
await self.conn.commit()
print(r)
async def query_db(pool):
mysql = MySQL_Conn(pool)
await mysql.get_conn()
k = await mysql.execute("select name from tb_deployment where id=60")
#mysql.close_conn()
async def main():
task_lst = []
for i in range(6):
task_lst.append(
asyncio.create_task(query_db(pool))
)
for i in task_lst:
await i
mysql_loop = asyncio.get_event_loop()
pool = mysql_loop.run_until_complete(MySQL_Pool().init_pool())
mysql_loop.run_until_complete(main())
```
I run fewer than 5 coroutines to select and there is no problem. But when I run more than 5 coroutines, `pool.acquire()` returns an existing connection rather than a new one. I printed the connections; for example, running with 6 coroutines gives the following:
```python
<aiomysql.connection.Connection object at 0x7fde099b5a90>
<aiomysql.connection.Connection object at 0x7fde099b5f60>
<aiomysql.connection.Connection object at 0x7fde099d1240>
<aiomysql.connection.Connection object at 0x7fde099d1470>
<aiomysql.connection.Connection object at 0x7fde099d16a0>
<aiomysql.connection.Connection object at 0x7fde099b5a90>
Traceback (most recent call last):
File "aiomysqltest.py", line 48, in <module>
mysql_loop.run_until_complete(main())
File "/usr/local/Python-3.7.3/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "aiomysqltest.py", line 44, in main
await i
File "aiomysqltest.py", line 35, in query_db
k = await mysql.execute("select name from tb_deployment where id=60")
File "aiomysqltest.py", line 29, in execute
await self.conn.commit()
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/connection.py", line 353, in commit
await self._read_ok_packet()
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/connection.py", line 333, in _read_ok_packet
raise OperationalError(2014, "Command Out of Sync")
pymysql.err.OperationalError: (2014, 'Command Out of Sync')
Task exception was never retrieved
future: <Task finished coro=<query_db() done, defined at aiomysqltest.py:32> exception=RuntimeError('readexactly() called while another coroutine is already waiting for incoming data')>
Traceback (most recent call last):
File "aiomysqltest.py", line 35, in query_db
k = await mysql.execute("select name from tb_deployment where id=60")
File "aiomysqltest.py", line 27, in execute
await cur.execute(sql)
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/cursors.py", line 239, in execute
await self._query(query)
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/cursors.py", line 457, in _query
await conn.query(q)
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/connection.py", line 428, in query
await self._read_query_result(unbuffered=unbuffered)
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/connection.py", line 622, in _read_query_result
await result.read()
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/connection.py", line 1105, in read
first_packet = await self.connection._read_packet()
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/connection.py", line 561, in _read_packet
packet_header = await self._read_bytes(4)
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/connection.py", line 598, in _read_bytes
data = await self._reader.readexactly(num_bytes)
File "/usr/local/Python-3.7.3/lib/python3.7/asyncio/streams.py", line 679, in readexactly
await self._wait_for_data('readexactly')
File "/usr/local/Python-3.7.3/lib/python3.7/asyncio/streams.py", line 460, in _wait_for_data
f'{func_name}() called while another coroutine is '
RuntimeError: readexactly() called while another coroutine is already waiting for incoming data
```
You can see six connections printed, but one is the same as another: `<aiomysql.connection.Connection object at 0x7fde099b5a90>`.

Am I using the wrong method? Also, how should I release the connection? When I use `pool.release(conn)` (uncommenting `mysql.close_conn()`), I get an exception:
```python
Traceback (most recent call last):
File "aiomysqltest.py", line 49, in <module>
mysql_loop.run_until_complete(main())
File "/usr/local/Python-3.7.3/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "aiomysqltest.py", line 45, in main
await i
File "aiomysqltest.py", line 36, in query_db
mysql.close_conn()
File "aiomysqltest.py", line 23, in close_conn
self.pool.release(self.conn)
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/pool.py", line 204, in release
assert conn in self._used, (conn, self._used)
AssertionError: (<aiomysql.connection.Connection object at 0x7f30c3745eb8>, set())
Task exception was never retrieved
future: <Task finished coro=<query_db() done, defined at aiomysqltest.py:32> exception=AssertionError((<aiomysql.connection.Connection object at 0x7f30bf5266d8>, set()))>
Traceback (most recent call last):
File "aiomysqltest.py", line 36, in query_db
mysql.close_conn()
File "aiomysqltest.py", line 23, in close_conn
self.pool.release(self.conn)
File "/usr/local/Python-3.7.3/lib/python3.7/site-packages/aiomysql/pool.py", line 204, in release
assert conn in self._used, (conn, self._used)
AssertionError: (<aiomysql.connection.Connection object at 0x7f30bf5266d8>, set())
```
This last one is not really a problem, though: when the coroutine ends, the connection is released automatically.
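For reference, the per-coroutine pattern I would expect to work is roughly this (a sketch, not fully verified):

```python
async def query_db(pool):
    # hold one connection for the whole unit of work; released on block exit
    async with pool.acquire() as conn:
        async with conn.cursor(DictCursor) as cur:
            await cur.execute("select name from tb_deployment where id=60")
            rows = await cur.fetchall()
            await conn.commit()
            print(rows)
```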
thanks
| closed | 2019-06-11T10:49:36Z | 2019-06-12T03:19:43Z | https://github.com/aio-libs/aiomysql/issues/413 | [] | futianshi1314 | 4 |
ContextLab/hypertools | data-visualization | 95 | support multiindex Pandas DataFrames | open | 2017-04-18T14:49:43Z | 2017-04-28T17:21:59Z | https://github.com/ContextLab/hypertools/issues/95 | [
"enhancement",
"help wanted",
"low priority"
] | andrewheusser | 1 |
|
microsoft/nni | data-science | 5,748 | Only 8 combinations ran each time, then no code was running, but NNI didn't stop | **Describe the issue**:
I ran it with the settings below, but only 8 combinations ran each time; after that, no code was running, yet NNI did not stop.
**Environment**:
- NNI version: 3.0
- Training service (local|remote|pai|aml|etc): remote
- Client OS: mac
- Server OS (for remote mode only): linux
- Python version: 3.8
- PyTorch/TensorFlow version: 1.13
- Is conda/virtualenv/venv used?: conda
**Configuration**:
- Experiment config (remember to remove secrets!):
```yaml
experimentName: sgd
searchSpaceFile: search_space.json
trialGpuNumber: 1
trialConcurrency: 8
max_trial_number: 10000
tuner:
  name: TPE
  classArgs:
    optimize_mode: maximize
trainingService:
  platform: local
  useActiveGpu: True
```
- Search space:
```json
{
  "lr": {"_type": "uniform", "_value": [0.0001, 1.0]},
  "batch_size": {"_type": "choice", "_value": [8, 16, 32, 64, 128]}
}
```
**How to reproduce it?**: Why does this happen, and how can I deal with it? | open | 2024-02-26T14:55:29Z | 2024-02-26T14:55:29Z | https://github.com/microsoft/nni/issues/5748 | [] | CCJing14 | 0 |
AutoGPTQ/AutoGPTQ | nlp | 223 | Apple Silicon cannot install AutoGPTQ | When I run `pip install auto-gptq` following the quick start, it fails because it cannot determine the CUDA version. Is MPS / Apple Silicon supported?
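Possibly relevant: on Apple Silicon there is no CUDA, so `torch.version.cuda` is `None`, which looks like exactly what the setup script trips over (`torch.version.cuda.split(".")` in the output below). A quick check (sketch):

```python
import torch

print(torch.version.cuda)                 # None on Apple Silicon / CPU-only builds
print(torch.backends.mps.is_available())  # True if this torch build supports MPS
```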
```
(base) kenneth@KenStudio AutoGPTQ % pip install auto-gptq
Collecting auto-gptq
Downloading auto_gptq-0.3.2.tar.gz (63 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.5/63.5 kB 131.0 kB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/2c/lk_jmzrs53bdr3np4k5tr9kr0000gn/T/pip-install-cc1gf7m1/auto-gptq_7f3e2b1f14334c129f6bf5d18dea8201/setup.py", line 21, in <module>
CUDA_VERSION = "".join(torch.version.cuda.split("."))
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'split'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
| open | 2023-08-02T03:00:47Z | 2025-01-16T15:29:15Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/223 | [] | canychan | 5 |
Anjok07/ultimatevocalremovergui | pytorch | 1,652 | Asking about the CUDA Toolkit version | Hello, I have CUDA Toolkit 12.4 installed on my system, but I'm unable to run UVR properly: each time I start processing, the application crashes at 5% progress. I suspect this may be due to an incompatibility between the torch version bundled with UVR and the CUDA Toolkit version on my system. Below is the error log. My graphics card is an NVIDIA GeForce GTX 1080 Ti with 11 GB of dedicated GPU memory.
```
Faulting application name: UVR.exe, version: 0.0.0.0, time stamp: 0x652f4a81
Faulting module name: c10_cuda.dll, version: 0.0.0.0, time stamp: 0x6356f675
Exception code: 0xc0000005
Fault offset: 0x000000000002fb9b
Faulting process id: 0x0x4240
Faulting application start time: 0x0x1DB4777CC03154E
Faulting application path: C:\Users\***\AppData\Local\Programs\Ultimate Vocal Remover\UVR.exe
Faulting module path: C:\Users\***\AppData\Local\Programs\Ultimate Vocal Remover\torch\lib\c10_cuda.dll
Report Id: 7946691e-29a9-4c58-9707-7586a6a492d8
Faulting package full name:
Faulting package-relative application ID:
```
I hope someone can tell me which CUDA Toolkit version I should install on my system.
| closed | 2024-12-06T09:22:56Z | 2024-12-07T06:02:13Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1652 | [] | cyrus-arch | 1 |
JaidedAI/EasyOCR | pytorch | 651 | dataset used for english/latin language | Hello,
Thank you for sharing this very good work!
I have a question regarding the Latin/English dataset used for training the CRNN model. Is it trained only on the Synth dataset (https://www.robots.ox.ac.uk/~vgg/data/scenetext/), or are there more datasets than this one?
| closed | 2022-01-24T13:04:12Z | 2022-08-25T10:52:27Z | https://github.com/JaidedAI/EasyOCR/issues/651 | [] | Shiro-LK | 1 |
chrieke/prettymapp | streamlit | 74 | Line width scaling with map size | The width of lines is given as a fixed number of matplotlib "points", which means a line's width stays the same regardless of the map size. In small maps the lines end up looking too thin, and in large maps they end up so thick you can't see anything.
I thought it would be nice to allow configuring line widths in meters. I'm not sure whether doing so is in scope for this project, or whether it would break things for people. If there is interest in having all width configuration in meters, I'd be happy to open a PR to implement that; otherwise I'll just leave the solution I wrote here in case it's useful to anybody else.
This ended up generating reasonable-looking small and large maps. When generating very large (radius > 10 km) maps it works when saving to SVG, but the resolution ends up too small when exporting to JPG or PNG without increasing the DPI.
```
from shapely import Point
from geopandas import GeoDataFrame

# `aoi` is the area-of-interest polygon and `plot` the prettymapp Plot figure;
# STREETS_WIDTH is prettymapp's per-highway-type width mapping (see plotting.py).
(
xmin,
ymin,
xmax,
ymax,
) = aoi.bounds
ymid = (ymin + ymax) / 2
# Width of the figure in matplotlib "points"
width_points = plot.fig.get_size_inches()[0] * 72
left_point = Point(xmin, ymid)
right_point = Point(xmax, ymid)
gdf = GeoDataFrame(geometry=[left_point, right_point], crs="EPSG:4326")
gdf = gdf.to_crs("EPSG:32633")
# This represents the (longitudal) distance (in meters) of the plot
# through the center of the plot.
#
# Should be equal to two times the radius.
width_meters = gdf.geometry.iloc[0].distance(gdf.geometry.iloc[1])
# The width in points of one meter (at the center of the plot)
meter_points = width_points / width_meters
for key in STREETS_WIDTH:
STREETS_WIDTH[key] *= meter_points
```
and also modify plotting.py to multiply by `meter_points` (at https://github.com/chrieke/prettymapp/blob/3a6ef573f343718f001be8ee8038c15b687876f9/prettymapp/plotting.py#L126 and https://github.com/chrieke/prettymapp/blob/3a6ef573f343718f001be8ee8038c15b687876f9/prettymapp/plotting.py#L146) | open | 2024-12-30T15:00:03Z | 2024-12-30T15:00:03Z | https://github.com/chrieke/prettymapp/issues/74 | [] | C4K3 | 0 |
twelvedata/twelvedata-python | matplotlib | 74 | Daily prices are not adjusted for splits | **Describe the bug**
Daily prices are not adjusted for splits.
**To Reproduce**
```python
from twelvedata import TDClient

td = TDClient(apikey="YOUR_API_KEY")  # placeholder key, completing the truncated snippet
ts = td.time_series(
    symbol="EVAV",
    interval="1day",
    outputsize=5000,
    timezone="America/New_York",
)
df = ts.as_pandas()
```
**Expected behavior**
The Twelve Data website states that daily prices are split-adjusted.
**Screenshots**
<img width="503" alt="image" src="https://github.com/twelvedata/twelvedata-python/assets/11630907/1044e2fb-1960-4327-8aac-050a0a900e48">
| closed | 2023-12-27T15:38:22Z | 2024-01-08T15:47:00Z | https://github.com/twelvedata/twelvedata-python/issues/74 | [] | misik | 1 |
suitenumerique/docs | django | 677 | Prerequisites for Multi-Page Search | For the multipage update, the search will evolve to not only (1) be able to search for documents (as it does today), but also **(2) be able to search for sub-pages within their context**. This means that under the sub-page, **the parent document is displayed** (see Figure below). The displayed parent document is the highest page accessible to the user in the tree. The backend must therefore be able to return it.

[Figma Prototype (FR)](https://www.figma.com/design/qdCWR4tTUr7vQSecEjCyqO/DOCS?node-id=6924-19895&t=pXmgIcFel1cMxdaQ-1) | open | 2025-03-04T16:17:03Z | 2025-03-04T16:17:38Z | https://github.com/suitenumerique/docs/issues/677 | [
"backend",
"designed"
] | rl-83 | 0 |
matplotlib/matplotlib | data-visualization | 29,535 | [Doc]: Typo in Solarized Light stylesheet example code | ### Documentation Link
https://matplotlib.org/devdocs/gallery/style_sheets/plot_solarizedlight2.html
### Problem
There is a ` * x` missing in the fourth plot line of the example code. The plot renders fine, but looks a bit odd.
### Suggested improvement
Add ` * x` to the line. | closed | 2025-01-28T17:22:50Z | 2025-01-28T23:23:29Z | https://github.com/matplotlib/matplotlib/issues/29535 | [
"Documentation"
] | johanneskopton | 0 |
horovod/horovod | deep-learning | 3,972 | A question about model parallelism | Hello, are there any examples of model parallelism when using Horovod?
| open | 2023-08-14T06:00:56Z | 2023-08-14T06:00:56Z | https://github.com/horovod/horovod/issues/3972 | [
"enhancement"
] | etoilestar | 0 |
jina-ai/serve | machine-learning | 5,482 | Support more OpenTelemetry export params, such as headers | **Describe the bug**
Jina init parameters:
<img width="636" alt="image" src="https://user-images.githubusercontent.com/114633619/205478332-40854617-136b-4270-ba4c-d0641b5a16f6.png">
OpenTelemetry init parameters:
<img width="655" alt="image" src="https://user-images.githubusercontent.com/114633619/205478341-27d5e35e-b68c-4eeb-a0d6-9dfb9a378e48.png">
**Describe how you solve it**
APM backends like Uptrace (https://github.com/uptrace/uptrace) need OpenTelemetry headers to collect information; currently, Jina does not support passing headers to the tracer and meter.
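For reference, the underlying OTLP exporters already accept headers, so the gap seems to be only that Jina does not expose the parameter (a sketch of the exporter-level call; endpoint and DSN values are placeholders):

```python
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# headers is what an APM like Uptrace needs; Jina currently can't pass it through
exporter = OTLPSpanExporter(
    endpoint="https://otlp.uptrace.dev:4317",  # placeholder endpoint
    headers=(("uptrace-dsn", "<your-dsn>"),),  # placeholder DSN header
)
```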
| closed | 2022-12-04T06:51:15Z | 2022-12-07T16:19:17Z | https://github.com/jina-ai/serve/issues/5482 | [] | big-thousand | 2 |
huggingface/peft | pytorch | 1,562 | How do I easily inherit and register a new method? | Is there an example of inheriting from existing methods and registering my own new method? I think this would help the extensibility of the repository. For example, how can I add a linear layer for LoRA? | closed | 2024-03-14T08:26:53Z | 2024-03-14T10:30:42Z | https://github.com/huggingface/peft/issues/1562 | [] | mrwu-mac | 2 |
akfamily/akshare | data-science | 5,158 | AKShare Stocks - Shenzhen SME Board merged with the Main Board | > Welcome to join the "Data Science in Practice" Knowledge Planet to discuss financial data and quantitative investing.
> For details, see: https://akshare.akfamily.xyz/learn.html
## Prerequisites
If you run into any problem, please first upgrade your AKShare to the **latest version**, which you can do with the following command:
```
pip install akshare --upgrade  # requires Python >= 3.8
```
## How to report an issue
When reporting an issue, please also provide the following information so the problem can be resolved more precisely.
**Issues that do not follow the reporting guidelines will be closed!**
**Detailed problem description**
1. First read the documentation for the corresponding interface carefully: https://akshare.akfamily.xyz
2. Operating system version (only 64-bit operating systems are supported):
Windows 11
3. Python version (only 3.8+ is supported):
3.11
4. AKShare version (please upgrade to the latest version):
Latest version
5. Name of the interface and the corresponding call:
Interface: stock_szse_summary
Target URL: http://www.szse.cn/market/overview/index.html
Description: Shenzhen Stock Exchange - Market Overview - Statistics by security category
Limit: a single call returns the market-overview statistics by security category for the specified date (data for the current trading day is only available after the exchange closes)
6. Screenshot or description of the error:
Because the Main Board and the SME Board have been merged, the content of the target page has changed.
Target URL: http://www.szse.cn/market/overview/index.html
7. Expected correct result:
| closed | 2024-09-01T03:19:54Z | 2024-09-01T08:39:09Z | https://github.com/akfamily/akshare/issues/5158 | [
"bug"
] | raphaelxiao | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 903 | Read and write DCM images | Hello,
I have tried your code to train on DICOM scanner images, but DICOM images are 16-bit grayscale, which cannot be handled by a PIL Image object. Do you have any idea how to overcome this problem?
Can we use mode 'I' (32-bit int) for data input?
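For example, something along these lines (a sketch using pydicom; the file name is a placeholder):

```python
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("scan.dcm")        # placeholder file name
arr = ds.pixel_array.astype(np.int32)   # widen 16-bit pixels to 32-bit int
img = Image.fromarray(arr, mode="I")    # PIL mode 'I' = 32-bit signed integers
```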
Thank you | open | 2020-01-19T21:14:47Z | 2022-10-30T06:22:13Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/903 | [] | qiuLimoges | 5 |
facebookresearch/fairseq | pytorch | 4,919 | TypeError: Can't instantiate abstract class TrainModule with abstract methods requirements | ## ❓ Questions and Help
### Before asking:
1. search the issues. [x]
2. search the docs. [x]
#### Trying to train NLLB but got an error?
```
TypeError: Can't instantiate abstract class TrainModule with abstract methods requirements
```
### Complete Error
```
examples/nllb/modeling/train/train_script.py:287: UserWarning:
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_path="conf", config_name="base_config")
/home/sn/miniconda3/envs/nllb/lib/python3.8/site-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/next/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
ret = run_job(
Error executing job with overrides: ['cfg=bilingual', 'cfg.fairseq_root=/home/sn/phd/experiments/2022-2023/nllb/fairseq', 'cfg.output_dir=/home/sn/outdir/nllb', 'cfg.dropout=0.1']
Traceback (most recent call last):
File "examples/nllb/modeling/train/train_script.py", line 294, in <module>
main()
File "/home/sn/miniconda3/envs/nllb/lib/python3.8/site-packages/hydra/main.py", line 90, in decorated_main
_run_hydra(
File "/home/sn/miniconda3/envs/nllb/lib/python3.8/site-packages/hydra/_internal/utils.py", line 389, in _run_hydra
_run_app(
File "/home/sn/miniconda3/envs/nllb/lib/python3.8/site-packages/hydra/_internal/utils.py", line 452, in _run_app
run_and_report(
File "/home/sn/miniconda3/envs/nllb/lib/python3.8/site-packages/hydra/_internal/utils.py", line 216, in run_and_report
raise ex
File "/home/sn/miniconda3/envs/nllb/lib/python3.8/site-packages/hydra/_internal/utils.py", line 213, in run_and_report
return func()
File "/home/sn/miniconda3/envs/nllb/lib/python3.8/site-packages/hydra/_internal/utils.py", line 453, in <lambda>
lambda: hydra.run(
File "/home/sn/miniconda3/envs/nllb/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 132, in run
_ = ret.return_value
File "/home/sn/miniconda3/envs/nllb/lib/python3.8/site-packages/hydra/core/utils.py", line 260, in return_value
raise self._return_value
File "/home/sn/miniconda3/envs/nllb/lib/python3.8/site-packages/hydra/core/utils.py", line 186, in run_job
ret.return_value = task_function(task_cfg)
File "examples/nllb/modeling/train/train_script.py", line 289, in main
train_module = TrainModule(config)
TypeError: Can't instantiate abstract class TrainModule with abstract methods requirements
```
#### Code
```
export OUTPUT_DIR=/tmp/nllb
export DROP=0.1
python examples/nllb/modeling/train/train_script.py \
cfg=bilingual \
cfg.fairseq_root=$(pwd) \
cfg.output_dir=$OUTPUT_DIR \
cfg.dropout=$DROP
```
#### What's your environment?
- fairseq Version (e.g., nllb): nllb
- PyTorch Version (e.g., 1.10): 1.10.1+cu113
- OS (e.g., Linux): Ubuntu 20.04
- How you installed fairseq (`pip`,`source`):
- Build command you used (if compiling from source): `pip install --editable .`
- Python version: 3.8
- CUDA/cuDNN version: 11.3
- GPU models and configuration: Tesla V100
- Any other relevant information:
| open | 2022-12-23T04:38:54Z | 2023-03-07T10:47:30Z | https://github.com/facebookresearch/fairseq/issues/4919 | [
"question",
"needs triage"
] | sanjibnarzary | 3 |
django-import-export/django-import-export | django | 1,133 | Is there a way to export two different tables (which are linked by ForeignKey in TabularInline) in one Excel file? | For example, `models.py`:
```
from django.db import models

# income_choices is defined elsewhere in the project
class Person(models.Model):
    name = models.CharField(max_length=50)
    age = models.IntegerField()
    income = models.CharField(choices=income_choices, max_length=50)

class DetailsOfEducationQualification(models.Model):
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
    course_class = models.CharField(max_length=50, blank=True)
    satisfaction_of_study = models.BooleanField(default=False, blank=True)
    scholarship_received = models.BooleanField(default=False, blank=True)
    # Specify the scholarship Left
    drop_out = models.BooleanField(default=False, blank=True)
    # Reason Of DropOut Left
    drop_out_reason = models.CharField(max_length=1000, blank=True)

class ScholarshipNames(models.Model):
    details = models.ForeignKey(Person, on_delete=models.CASCADE)
    name = models.CharField(max_length=200, blank=True)
```
In `admin.py`:
```
from django.contrib import admin

from .models import Person, DetailsOfEducationQualification, ScholarshipNames

class DetailsOfEducationQualificationInline(admin.TabularInline):
    model = DetailsOfEducationQualification
    extra = 1

class ScholarshipNamesInline(admin.TabularInline):
    model = ScholarshipNames
    extra = 1

class PersonAdmin(admin.ModelAdmin):
    fieldsets = [
        (
            'Personal Information', {
                'fields': [
                    'name', 'age', 'education_level', 'occupation_status', 'income',
                ]
            }
        ),
    ]
    inlines = [
        DetailsOfEducationQualificationInline, ScholarshipNamesInline,
    ]
```
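To make the goal concrete, this is roughly the kind of flattened resource I imagine (a hypothetical sketch using `dehydrate_` hooks; the flattened field and the reverse accessor name are guesses):

```python
from import_export import fields, resources

class PersonResource(resources.ModelResource):
    course_class = fields.Field()  # hypothetical flattened column

    class Meta:
        model = Person
        fields = ("name", "age", "income", "course_class")

    def dehydrate_course_class(self, person):
        detail = person.detailsofeducationqualification_set.first()
        return detail.course_class if detail else ""
```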
My question: how do I export these three related models into one CSV file? | closed | 2020-05-09T13:11:23Z | 2023-09-15T21:14:04Z | https://github.com/django-import-export/django-import-export/issues/1133 | [
"question"
] | wan0004 | 3 |
matterport/Mask_RCNN | tensorflow | 2,293 | Has anyone tried training only the RPN from scratch? | I am getting very high scores for all the anchors | open | 2020-07-28T08:41:11Z | 2020-07-30T11:43:00Z | https://github.com/matterport/Mask_RCNN/issues/2293 | [] | nnnetizen | 1 |
flairNLP/flair | nlp | 3,049 | Training of TARS model based on xlm-roberta-base not supported? | In the [tutorial](https://github.com/flairNLP/flair/blob/master/resources/docs/TUTORIAL_10_TRAINING_ZERO_SHOT_MODEL.md) to train a TARS model, I replaced the bert-base-uncased embeddings with xlm-roberta-base embeddings.
```
# 5a: alternatively, comment out previous line and comment in next line to train a new TARS model from scratch instead
# tars = TARSClassifier(embeddings="bert-base-uncased")
tars = TARSClassifier(embeddings="xlm-roberta-base")
```
No error or warning message shows up during training, but loss is not really decreasing, and predictions seem to be empty, which results in 0.0 evaluation scores.
see loss:
```
EPOCH TIMESTAMP BAD_EPOCHS LEARNING_RATE TRAIN_LOSS DEV_LOSS DEV_PRECISION DEV_RECALL DEV_F1 DEV_ACCURACY
1 13:34:20 0 0.0200 0.6781343320539873 0.5536978244781494 0.0 0.0 0.0 0.0
2 13:36:27 0 0.0200 0.6408121450934816 0.5375534296035767 0.0 0.0 0.0 0.0
3 13:38:33 0 0.0200 0.6377008462209723 0.5173110365867615 0.0 0.0 0.0 0.0
4 13:40:40 1 0.0200 0.6371011527674117 0.5206502676010132 0.0 0.0 0.0 0.0
5 13:42:46 2 0.0200 0.6370353295454698 0.5219637155532837 0.0 0.0 0.0 0.0
```
see predictions:
```
How far is it from Denver to Aspen ?
- Gold: question about number
- Pred:
-> MISMATCH!
What county is Modesto , California in ?
- Gold: question about location
- Pred:
-> MISMATCH!
```
Are other TransformerEmbeddings not supported for TARS models? | closed | 2023-01-09T15:04:08Z | 2023-01-11T14:03:21Z | https://github.com/flairNLP/flair/issues/3049 | [
"question"
] | ann-so | 2 |
ultralytics/yolov5 | pytorch | 12,993 | Saving Augmented Images | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello friends,
As you know, data augmentation slows down the training process, so I decided to run the augmentations once and save the new images. For a later training run I then only use these saved images and disable augmentation during training. In the `__getitem__` method of the `LoadImagesAndLabels` class, I saved the images and their corresponding labels; to do this I ran one epoch with a batch size of one so the augmentations were applied to each image. After the process finished, I checked the images and labels, and everything looks fine. However, the problem is that I can't reach the same accuracy as before with this new (pre-augmented) dataset.
### Additional
_No response_ | closed | 2024-05-09T14:34:19Z | 2024-06-20T00:22:02Z | https://github.com/ultralytics/yolov5/issues/12993 | [
"question",
"Stale"
] | BehdadSDP | 2 |
frappe/frappe | rest-api | 29,994 | node not found when saving website theme | ## Description of the issue
When creating a new website theme and then saving it, it fails with an error saying that the `node` executable is not found.
When printing the environment of the web application process, it does run as the frappe user. However, since the master process is spawned by root, I believe the supervisor worker processes (such as gunicorn) inherit root's environment. Hence the failure to find the node executable: the nvm-managed node is on the frappe user's $PATH in their own shell, but not on root's $PATH.
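A quick way to see this from inside the worker (roughly what I did):

```python
import os
import shutil

print(os.environ.get("PATH"))  # shows root's PATH inside the gunicorn worker
print(shutil.which("node"))    # None when node is not on that PATH
```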
## Context information (for bug reports)
**Output of `bench version`**
```
erpnext 15.49.1
frappe 15.53.0
hrms 15.38.0
```
## Steps to reproduce the issue
1. Use bare-metal installation method
2. Create new site
3. Create website theme
4. Save the website theme; an error then pops up saying node is not found.
### Observed result
Node is not found, because $PATH comes from root, not from the frappe user.
### Expected result
Website theme saved successfully
### Stacktrace / full error message
```
Traceback (most recent call last):
File "apps/frappe/frappe/app.py", line 114, in application
response = frappe.api.handle(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/api/__init__.py", line 49, in handle
data = endpoint(**arguments)
^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/api/v1.py", line 36, in handle_rpc_call
return frappe.handler.handle()
^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/handler.py", line 50, in handle
data = execute_cmd(cmd)
^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/handler.py", line 86, in execute_cmd
return frappe.call(method, **frappe.form_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/__init__.py", line 1726, in call
return fn(*args, **newargs)
^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/utils/typing_validations.py", line 31, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/desk/form/save.py", line 39, in savedocs
doc.save()
File "apps/frappe/frappe/model/document.py", line 342, in save
return self._save(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/model/document.py", line 364, in _save
return self.insert()
^^^^^^^^^^^^^
File "apps/frappe/frappe/model/document.py", line 295, in insert
self.run_before_save_methods()
File "apps/frappe/frappe/model/document.py", line 1103, in run_before_save_methods
self.run_method("validate")
File "apps/frappe/frappe/model/document.py", line 974, in run_method
out = Document.hook(fn)(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/model/document.py", line 1334, in composer
return composed(self, method, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/model/document.py", line 1316, in runner
add_to_return_value(self, fn(self, *args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/model/document.py", line 971, in fn
return method_object(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/website/doctype/website_theme/website_theme.py", line 50, in validate
self.generate_bootstrap_theme()
File "apps/frappe/frappe/website/doctype/website_theme/website_theme.py", line 110, in generate_bootstrap_theme
process = Popen(command, cwd=frappe.get_app_source_path("frappe"), stdout=PIPE, stderr=PIPE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/frappe/.pyenv/versions/3.12.8/lib/python3.12/subprocess.py", line 1026, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/home/frappe/.pyenv/versions/3.12.8/lib/python3.12/subprocess.py", line 1955, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'node'
```
## Additional information
Bare metal installation method using Ubuntu 24.04.1
Environment of the gunicorn process:

The link below is the same issue raised on the forums instead, because for the last two days I was unable to open an issue here:
https://discuss.frappe.io/t/node-not-found-when-saving-website-theme/141009/1
| open | 2025-01-23T03:11:25Z | 2025-01-23T03:16:08Z | https://github.com/frappe/frappe/issues/29994 | [
"bug"
] | ismxilxrif | 0 |
rthalley/dnspython | asyncio | 811 | TXT records are parsed incorrectly if they contain spaces | **Describe the bug**
For TXT records that contain spaces, like "localhost localhost6 suchhost,verylocal", the result of `rdata.from_text()` is the ambiguous string `'"localhost" "localhost6" "suchhost,verylocal"'`. The latter, according to [rfc7208](https://www.rfc-editor.org/rfc/rfc7208#section-3.3), must be treated as if there were no spaces between the parts, which is not what the user wants.
**To Reproduce**
```python
In [1]: from dns import name, rdata, rdatatype
In [2]: origin = name.from_text("example.com")
In [3]: rd = rdata.from_text(1, rdatatype.TXT, "localhost localhost6 suchhost,verylocal", origin=origin)
In [4]: rd.to_text()
Out[4]: '"localhost" "localhost6" "suchhost,verylocal"'
In [5]: rd
Out[5]: <DNS IN TXT rdata: "localhost" "localhost6" "suchhost,verylocal">
```
**Context (please complete the following information):**
- dnspython version 2.2.1
- Python version 3.7.0
- OS: macOS Monterey
| closed | 2022-05-26T10:20:50Z | 2022-05-26T11:44:50Z | https://github.com/rthalley/dnspython/issues/811 | [] | anpenik | 1 |
praw-dev/praw | api | 1,819 | Comment stream providing comments out of order | ### Describe the Bug
When using a comment stream on a specific subreddit, if a high volume of comments is pushed through the stream, some are not processed until the stream has caught up.
For example, we tested by rapidly posting many comments replying to our own comments with a command that triggers the bot to reply with the current UTC time. All the replies should come back in the order in which the comments were placed. Here is how it actually came out:

### Desired Result
Comments come through the reddit stream oldest to newest
https://praw.readthedocs.io/en/stable/code_overview/other/subredditstream.html#praw.models.reddit.subreddit.SubredditStream.comments
### Relevant Logs
_No response_
### Code to reproduce the bug
```python
import datetime

for comment in reddit.subreddit(subreddit).stream.comments():
    comment.reply(datetime.datetime.now())
```
### My code example does not include the `Reddit()` initialization to prevent credential leakage.
Yes
### This code has previously worked as intended.
No
### Operating System/Environment
Windows 10 Version 21H1, CentOS 7
### Python Version
Python 3.9
### PRAW Version
7.3.0
### Prawcore Version
2.2.0
### Anything else?
This issue was discovered via /r/KickOpenTheDoor, an RPG inside of Reddit. When bosses in this game get low, the races (4 teams) rally their members and attack the boss with as many people as possible to try to land the final hit and kill it. We noticed this bug happens most when comments are spammed quickly: other players' newer comments get processed before a given attack. These cases are rare, but they mean a player does not actually get to hit the boss until the bot has caught up to real time, by which point the boss is already dead. The bot then goes back to the comment it skipped and processes it.
This is possibly a general API issue with how Reddit delivers the comments. | closed | 2021-11-20T03:41:12Z | 2021-11-20T17:57:07Z | https://github.com/praw-dev/praw/issues/1819 | [] | ZorudaRinku | 4 |
litestar-org/polyfactory | pydantic | 617 | Bug: Imperative Factory Creation doesn't respect pydantic Field constraints | ### Description
I have this pydantic model:

I have this base factory:

When I create a factory, it does not respect the pydantic Field constraints:

### Release Version
2.18.1
| open | 2024-12-04T22:15:51Z | 2024-12-05T13:07:19Z | https://github.com/litestar-org/polyfactory/issues/617 | [
"bug"
] | idan-rahamim-lendbuzz | 2 |
skypilot-org/skypilot | data-science | 4,179 | [Core] Missing oci installation aborts `sky launch` | If OCI is not enabled and the local Python environment does not have the `oci` package installed, it breaks the provisioning process. We should fix this before the 0.7.0 release.

Steps to reproduce:
1. Start a fresh container using `docker run -it --rm --name skypilot-debug berkeleyskypilot/skypilot-debug /bin/bash`.
2. Install SkyPilot with AWS support: `cd skypilot; pip install -e '.[aws]'`
3. `aws configure` and input tokens
4. `sky launch` gives the following output:
```bash
(base) root@6d62005c658d:/sky_repo/skypilot# sky launch
D 10-25 20:45:49 common.py:231] Updated AWS catalog aws/vms.csv.
D 10-25 20:45:49 optimizer.py:292] #### Task<name=sky-cmd>(run=<empty>)
D 10-25 20:45:49 optimizer.py:292] resources: default instances ####
D 10-25 20:45:49 optimizer.py:317] Defaulting the task's estimated time to 1 hour.
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.4
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
D 10-25 20:45:49 optimizer.py:339] resources: AWS(m6i.2xlarge)
D 10-25 20:45:49 optimizer.py:358] estimated_runtime: 3600 s (1.0 hr)
D 10-25 20:45:49 optimizer.py:362] estimated_cost (not incl. egress): $0.5
I 10-25 20:45:50 optimizer.py:881] Considered resources (1 node):
I 10-25 20:45:50 optimizer.py:951] ------------------------------------------------------------------------------------------
I 10-25 20:45:50 optimizer.py:951] CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
I 10-25 20:45:50 optimizer.py:951] ------------------------------------------------------------------------------------------
I 10-25 20:45:50 optimizer.py:951] AWS m6i.2xlarge 8 32 - us-east-1 0.38 ✔
I 10-25 20:45:50 optimizer.py:951] ------------------------------------------------------------------------------------------
Launching a new cluster 'sky-9922-root'. Proceed? [Y/n]:
D 10-25 20:45:50 cloud_vm_ray_backend.py:4408] cluster_ever_up: False
D 10-25 20:45:50 cloud_vm_ray_backend.py:4409] record: None
D 10-25 20:45:52 common.py:231] Updated AWS catalog aws/instance_quota_mapping.csv.
D 10-25 20:45:52 common.py:231] Updated AWS catalog aws/images.csv.
Traceback (most recent call last):
File "/sky_repo/skypilot/sky/adaptors/common.py", line 31, in load_module
self._module = importlib.import_module(self._module_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'oci'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/sky_repo/skypilot/sky/clouds/oci.py", line 466, in get_credential_file_mounts
oci_cfg = oci_adaptor.get_oci_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/adaptors/oci.py", line 26, in get_oci_config
oci_config = oci.config.from_file(file_location=conf_file_path,
^^^^^^^^^^
File "/sky_repo/skypilot/sky/adaptors/common.py", line 46, in __getattr__
return getattr(self.load_module(), name)
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/adaptors/common.py", line 36, in load_module
raise ImportError(self._import_error_message) from e
ImportError: Failed to import dependencies for OCI. Try running: pip install "skypilot[oci]"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/sky_repo/skypilot/sky/adaptors/common.py", line 31, in load_module
self._module = importlib.import_module(self._module_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'oci'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/bin/sky", line 8, in <module>
sys.exit(cli())
^^^^^
File "/opt/conda/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/utils/common_utils.py", line 366, in _record
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/cli.py", line 818, in invoke
return super().invoke(ctx)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/cli.py", line 1131, in launch
_launch_with_confirm(task,
File "/sky_repo/skypilot/sky/cli.py", line 609, in _launch_with_confirm
sky.launch(
File "/sky_repo/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/execution.py", line 454, in launch
return _execute(
^^^^^^^^^
File "/sky_repo/skypilot/sky/execution.py", line 280, in _execute
handle = backend.provision(task,
^^^^^^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/utils/common_utils.py", line 366, in _record
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/backends/backend.py", line 60, in provision
return self._provision(task, to_provision, dryrun, stream_logs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/backends/cloud_vm_ray_backend.py", line 2810, in _provision
config_dict = retry_provisioner.provision_with_retries(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/backends/cloud_vm_ray_backend.py", line 1988, in provision_with_retries
config_dict = self._retry_zones(
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/backends/cloud_vm_ray_backend.py", line 1399, in _retry_zones
config_dict = backend_utils.write_cluster_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/utils/common_utils.py", line 386, in _record
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/backends/backend_utils.py", line 832, in write_cluster_config
credentials = sky_check.get_cloud_credential_file_mounts(excluded_clouds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/check.py", line 213, in get_cloud_credential_file_mounts
cloud_file_mounts = cloud.get_credential_file_mounts()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/clouds/oci.py", line 471, in get_credential_file_mounts
except (ImportError, oci_adaptor.oci.exceptions.ConfigFileNotFound):
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/adaptors/common.py", line 46, in __getattr__
return getattr(self.load_module(), name)
^^^^^^^^^^^^^^^^^^
File "/sky_repo/skypilot/sky/adaptors/common.py", line 36, in load_module
raise ImportError(self._import_error_message) from e
ImportError: Failed to import dependencies for OCI. Try running: pip install "skypilot[oci]"
```
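From the last frames, the failure looks like the `except` clause itself touching the lazy-loaded module: evaluating `oci_adaptor.oci.exceptions.ConfigFileNotFound` in the exception tuple re-triggers the `oci` import, so the `ImportError` escapes instead of being caught. One possible fix (a sketch, not tested; the `{}` return value is an assumption):

```python
# sky/clouds/oci.py (sketch): catch ImportError before touching the lazy module
try:
    oci_cfg = oci_adaptor.get_oci_config(...)  # same arguments as today
except ImportError:
    # oci is not installed; skip OCI credential file mounts entirely
    return {}
except oci_adaptor.oci.exceptions.ConfigFileNotFound:
    return {}
```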
_Version & Commit info:_
* `sky -v`: skypilot, version 1.0.0-dev0
* `sky -c`: skypilot, commit ee708e7aad508fab1b48b1f6f297ebe6fb5dbe63
| closed | 2024-10-25T20:50:36Z | 2024-12-19T09:31:38Z | https://github.com/skypilot-org/skypilot/issues/4179 | [
"P0"
] | cblmemo | 1 |
sammchardy/python-binance | api | 1,148 | Websocket interval is not changed, how do I fix it? | ```python
# depth cache manager using threads
# (`twm` is assumed to be a ThreadedWebsocketManager started earlier)
dcm = ThreadedDepthCacheManager()
dcm.start()

def handle_socket_message(msg):
    print(f"message type: {msg['e']}")
    print(msg)

def handle_dcm_message(depth_cache):
    print(f"symbol {depth_cache.symbol}")
    print("top 5 bids")
    print(depth_cache.get_bids()[:5])
    print("top 5 asks")
    print(depth_cache.get_asks()[:5])
    print("last update time {}".format(depth_cache.update_time))

twm.start_kline_socket(callback=handle_socket_message, symbol='BNBBTC')
dcm.start_depth_cache(callback=handle_dcm_message, symbol='ETHBTC')

# replace with a current options symbol
options_symbol = 'BTC-210430-36000-C'
dcm.start_options_depth_cache(callback=handle_dcm_message, symbol=options_symbol)

# join the threaded managers to the main thread
twm.join()
dcm.join()
```
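For context, `start_kline_socket` in python-binance also accepts an `interval` argument with `Client.KLINE_INTERVAL_*` constants; a sketch of what I mean, though I am not sure it is the right knob here:

```python
from binance.client import Client

twm.start_kline_socket(callback=handle_socket_message, symbol='BNBBTC',
                       interval=Client.KLINE_INTERVAL_1MINUTE)
```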
I used this code, but it does not change the interval (right now the interval is 1 second, and I want live data). How do I fix it to get live streaming? Thank you. | open | 2022-02-22T06:34:43Z | 2022-03-05T17:33:16Z | https://github.com/sammchardy/python-binance/issues/1148 | [] | tjddnjs124 | 1 |
open-mmlab/mmdetection | pytorch | 11,880 | Training speed 2x slower when running two models on two GPUs | I use one computer with 2 GPUs and train one model per GPU. When only one GPU is used, training speed is normal, but when I train two models on the two GPUs (one model per GPU), training is 2x slower.

| closed | 2024-07-26T05:15:43Z | 2024-07-26T05:16:41Z | https://github.com/open-mmlab/mmdetection/issues/11880 | [] | zhaohai1314 | 0 |
jstrieb/github-stats | asyncio | 25 | Still not so convenient | It still requires adding these files to the repo (in fact, to every repo) for the stats to be updated.
But it's possible to check out another user's repo in a GitHub Action... so why not just check out this repo and use the Python source files inside? | closed | 2021-02-23T16:27:05Z | 2021-02-26T10:56:06Z | https://github.com/jstrieb/github-stats/issues/25 | [] | PassionPenguin | 8 |
huggingface/datasets | computer-vision | 7,467 | load_dataset with streaming hangs on parquet datasets | ### Describe the bug
When I try to load a dataset with parquet files (e.g. "bigcode/the-stack"), the dataset loads, but the Python interpreter can't exit and hangs.
### Steps to reproduce the bug
```python
import datasets
print('Start')
dataset = datasets.load_dataset("bigcode/the-stack", data_dir="data/yaml", streaming=True, split="train")
it = iter(dataset)
next(it)
print('Finish')
```
The program prints "Finish" but doesn't exit; it hangs indefinitely.
I tried this on two different machines and several datasets.
### Expected behavior
The program exits successfully
### Environment info
datasets==3.4.1
Python 3.12.9.
MacOS and Ubuntu Linux | open | 2025-03-18T23:33:54Z | 2025-03-18T23:33:54Z | https://github.com/huggingface/datasets/issues/7467 | [] | The0nix | 0 |
kymatio/kymatio | numpy | 785 | Reviewing `core` JTFS | I recommend focusing on the "standard" variant:
- `average=True; average_fr=True/False; aligned=True; out_3D=False; sampling_filters_fr='resample'; out_exclude=None`
All other code can be ignored (e.g. `if out_3D: ...`). This should cut code to under 25% of full size. The rest can be validated via `test_jtfs.py`.
### Pseudocode
Up to a spinned pair:
```python
# first order time scattering
U1 = []
for p1t in psi1_t:
U1.append(abs(conv(x, p1t)))
# joint scattering
U2 = []
for p2t in psi2_t:
# second order time scattering
U2_tm = []
for u1 in U1:
U2_tm.append(conv(u1, p2t))
U2_tm = array(U2_tm) # shape == (freq, time)
# frequential scattering
for p1f in psi1_fr_up:
U2.append(abs(conv(U2_tm, p1f, axis=-2)))
```
JTFS is exactly this, but with
1. padding, unpadding
2. lowpass filtering
3. subsampling
4. other joint pairs
Most of the complexity goes into 3; the implementation subsamples as much as possible, near-losslessly, which is very complicated in light of all the variants.
### Complete validation
The most complicated method, by far, is `_get_stride` in `timefrequency_scattering1d.py`; a 300+ line docstring is dedicated to explaining it. Validating it requires familiarity with the entire implementation and accounting for every possible kwarg combination.
The rest can be traversed in an exploratory fashion: picking a test signal, and tracking the output back to input - breakpointing into various stages, and printing all pertinent attributes and plotting intermediate outputs.
It's also helpful to familiarize with `compute_meta_jtfs`, which encodes the entire computational graph into a dictionary.
Lastly, majority of features are explained in depth in [Discussions](https://github.com/kymatio/kymatio/discussions) via my posts and discussions with @lostanlen. For those without an IDE preference, I recommend [Spyder](https://github.com/kymatio/kymatio/discussions/782). | closed | 2021-10-10T16:12:06Z | 2022-05-30T15:18:10Z | https://github.com/kymatio/kymatio/issues/785 | [] | OverLordGoldDragon | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 12,341 | Add dialect-specific support for Oracle's FETCH EXACT/APPROXIMATE |
### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/12337
support for FETCH EXACT/APPROXIMATE feature described at https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/SELECT.html#GUID-CFA006CA-6FF1-4972-821E-6996142A51C6__BABBADDD | open | 2025-02-12T20:49:45Z | 2025-02-13T13:49:25Z | https://github.com/sqlalchemy/sqlalchemy/issues/12341 | [
"oracle",
"use case"
] | CaselIT | 2 |
TracecatHQ/tracecat | automation | 95 | Clerk migration | # Motivation
1. Supabase auth isn't robust enough for our usage. Move to Clerk.
2. Supabase auth primitives are quite low-level and a lot of the authentication logic has to be managed directly by us
3. Handling sessions and middleware is cumbersome
# Implementation
- Leverage middleware to decouple auth logic from application logic
- This means we should only use redirects where necessary
- Use Axios interceptors to apply authentication for all backend requests
- Offload onboarding and new user flow responsibilities to custom Clerk `publicMetadata` for each user, and handled within a `/onboarding` component
# Tasks
- [x] Migrate sign in/up page
- [x] Replaced these with Clerk builtins
- [x] Migrate `middleware.ts`
- [x] Migrate all `supabase.auth` usage to Clerk alternative
- [x] Update onboarding flow (creating first workflow) to use Clerk onboarding middleware
- [x] Update FastAPI backend
- [x] Update FastAPI auth dependencies
- [x] Decide how to handle authenticating sessions
- [x] [v0] No need to replace Supabase as a DB provider, but decouple Supabase dependency
- [ ] [v1] Replace supabase with generic postgres
- [x] Allow passing of `TRACECAT__DISABLE_AUTH` build flag to disable auth
- [x] This works nicely because we've centralized (almost) all auth logic into the Axios interceptor. We can conditionally apply the authenticated interceptor at application startup.
- [x] Update backend to change authentication strategy based on `TRACECAT__DISABLE_AUTH` flag | closed | 2024-04-23T20:30:19Z | 2024-04-28T20:04:59Z | https://github.com/TracecatHQ/tracecat/issues/95 | ["tracker"] | daryllimyt | 0
neuml/txtai | nlp | 829 | Persist embeddings components to specified schema | Currently, all the embeddings components that support persistence with relational databases store content in the default schema for the session.
This change will allow setting the default schema for the following components.
- [PGVector ANN](https://neuml.github.io/txtai/embeddings/configuration/ann/#pgvector)
- [RDBMS Database Storage](https://neuml.github.io/txtai/embeddings/configuration/database/#content)
- [Graph RDBMS](https://neuml.github.io/txtai/embeddings/configuration/graph/#rdbms)
- [PGText Scoring](https://neuml.github.io/txtai/embeddings/configuration/scoring/#method)
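A hypothetical configuration sketch for pointing these components at a non-default schema; the option names, especially `schema`, are assumptions rather than confirmed keys:

```python
from txtai import Embeddings

# Hypothetical sketch: a per-component `schema` setting for Postgres-backed
# persistence; exact key names may differ from the final implementation.
embeddings = Embeddings(
    content="postgresql+psycopg2://user:pass@localhost/db",  # RDBMS content storage
    backend="pgvector",                                      # pgvector ANN
    pgvector={
        "url": "postgresql+psycopg2://user:pass@localhost/db",
        "schema": "txtai",                                   # assumed new option
    },
)
```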
This is set on a per Embeddings database instance. | closed | 2024-12-04T19:53:31Z | 2024-12-04T20:19:44Z | https://github.com/neuml/txtai/issues/829 | [] | davidmezzetti | 0 |
akfamily/akshare | data-science | 5,025 | [Nanhua Futures - Nanhua Index list of all products] response error | Call example: `ak.futures_return_index_nh('NHCI')`;
Cause of the error: the Nanhua Futures data URL `https://www.nanhua.net/ianalysis/plate-variety.json` used inside the interface function `futures_index_symbol_table_nh` returns no response data. | closed | 2024-07-07T11:07:22Z | 2024-07-08T05:59:51Z | https://github.com/akfamily/akshare/issues/5025 | [] | hugsk | 1
plotly/plotly.py | plotly | 4,583 | zorder doesn't work on version 5.21.0 | The `zorder` attribute was added recently. However, the following code does not work as expected (exact code as [here](https://github.com/plotly/plotly.py/issues/2345#issuecomment-2064818826)):
```python
import plotly
import plotly.graph_objects as go
fig = go.Figure(
    go.Bar(
        x=[0, 1, 2],
        y=[1, 2, 3]
    ),
)
fig.add_trace(
    go.Scatter(
        x=[0, 1, 2],
        y=[3, 2, 1],
        zorder=-2
    )
)
fig.update_layout(
    title=dict(text=f"{plotly.__version__ = }")
)
fig.show()
```
Expected behavior: the bar trace has the higher z-order (the scatter is at `zorder=-2`), so the bars should render in front.
Instead, the code produces the following:

| closed | 2024-04-21T00:59:42Z | 2024-04-22T16:39:28Z | https://github.com/plotly/plotly.py/issues/4583 | [] | khisr0w | 5 |
SYSTRAN/faster-whisper | deep-learning | 951 | Cannot start faster-whisper | I attempted to use faster-whisper in PyCharm: I created a new project with a dedicated venv and followed the installation instructions given in this repo. However, when I tried to run the example script, I got the following error message:
`Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory`
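A quick check that the library really is missing from the dynamic loader's search path (diagnostic sketch only, not a fix):

```python
# Try loading the cuDNN library directly; this reproduces the loader lookup
# that fails inside faster-whisper.
import ctypes

try:
    ctypes.CDLL("libcudnn_ops_infer.so.8")
    print("libcudnn_ops_infer.so.8 found")
except OSError as exc:
    print(f"not found: {exc}")
```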
OS: Linux Mint | open | 2024-08-06T12:47:45Z | 2024-09-08T16:51:28Z | https://github.com/SYSTRAN/faster-whisper/issues/951 | [] | Denis-Kazakov | 8 |
google-research/bert | tensorflow | 1,302 | Broken BookCorpus link in README.md | In README.md, search for BookCorpus and click the link. It redirects to [this page](https://yknzhu.wixsite.com/mbweb), where you will see that BookCorpus is no longer available from the original authors.
<img width="674" alt="image" src="https://user-images.githubusercontent.com/56106630/158722360-596527d4-f0b6-415a-ad1a-65611acc14b2.png">
| open | 2022-03-17T02:15:08Z | 2022-03-26T16:14:30Z | https://github.com/google-research/bert/issues/1302 | [] | teohsinyee | 1 |
man-group/arctic | pandas | 195 | Top level tickstore write with list of dicts fails on timezones | Each dict contains a key 'index' that is a datetime. Whether the datetime is timezone-aware or naive, the write fails, either in the top-level slicing or in the low-level write. The current top-level write tests cover only pandas DataFrame data. I propose converting the top-level date ranges to UTC when naive, and adding an integration test that writes a list of dicts to a top-level library.
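A minimal reproduction sketch, assuming a local MongoDB. It uses the plain `TICK_STORE` library type for brevity, while the reported failure is in the top-level tickstore's date-range slicing of the same list-of-dicts input:

```python
# Write a list of dicts (each with an 'index' datetime) to a tickstore,
# covering both the timezone-aware and the naive case from the report.
from datetime import datetime

import pytz
from arctic import Arctic, TICK_STORE

store = Arctic("localhost")
store.initialize_library("test.ticks", lib_type=TICK_STORE)
lib = store["test.ticks"]

aware = [{"index": datetime(2016, 8, 11, 9, 0, tzinfo=pytz.utc), "price": 1.0}]
naive = [{"index": datetime(2016, 8, 11, 9, 0), "price": 1.0}]

lib.write("SYM_AWARE", aware)  # timezone-aware 'index'
lib.write("SYM_NAIVE", naive)  # naive 'index' -- the case the proposed fix normalizes to UTC
```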
| closed | 2016-08-11T09:50:48Z | 2016-08-15T14:14:14Z | https://github.com/man-group/arctic/issues/195 | [] | reasto | 1 |