repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
vllm-project/vllm | pytorch | 15,393 | [Bug]: Batch embedding inference is inconsistent with hf | Below is the minimal reproduction script; you may first set up an embedding server for 'intfloat/multilingual-e5-large-instruct' on port 8000.
[batch_embedding.txt](https://github.com/user-attachments/files/19429471/batch_embedding.txt)
### Your current environment
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.9
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40
GPU 2: NVIDIA A40
GPU 3: NVIDIA A40
GPU 4: NVIDIA A40
GPU 5: NVIDIA A40
GPU 6: NVIDIA A40
GPU 7: NVIDIA A40
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 86
On-line CPU(s) list: 0-85
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 43
Socket(s): 1
Stepping: 6
BogoMIPS: 5187.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid md_clear arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.7 MiB (86 instances)
L1i cache: 2.7 MiB (86 instances)
L2 cache: 172 MiB (43 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-85
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-dali-cuda120==1.32.0
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvidia-pyindex==1.0.9
[pip3] onnx==1.15.0rc2
[pip3] optree==0.10.0
[pip3] pynvml==11.4.1
[pip3] pytorch-quantization==2.1.2
[pip3] pyzmq==25.1.2
[pip3] sentence-transformers==3.2.1
[pip3] torch==2.5.1
[pip3] torch-tensorrt==2.2.0a0
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.20.1
[pip3] transformers==4.49.0
[pip3] triton==3.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.4.post2.dev240+g7c4f9883.d20250321
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PHB PHB PHB PHB PHB PHB PHB 0-85 0 N/A
GPU1 PHB X PHB PHB PHB PHB PHB PHB 0-85 0 N/A
GPU2 PHB PHB X PHB PHB PHB PHB PHB 0-85 0 N/A
GPU3 PHB PHB PHB X PHB PHB PHB PHB 0-85 0 N/A
GPU4 PHB PHB PHB PHB X PHB PHB PHB 0-85 0 N/A
GPU5 PHB PHB PHB PHB PHB X PHB PHB 0-85 0 N/A
GPU6 PHB PHB PHB PHB PHB PHB X PHB 0-85 0 N/A
GPU7 PHB PHB PHB PHB PHB PHB PHB X 0-85 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NVIDIA_VISIBLE_DEVICES=all
CUBLAS_VERSION=12.3.4.1
NVIDIA_REQUIRE_CUDA=cuda>=9.0
CUDA_CACHE_DISABLE=1
TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX
NCCL_VERSION=2.19.3
NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
NVIDIA_PRODUCT_NAME=PyTorch
CUDA_VERSION=12.3.2.001
PYTORCH_VERSION=2.2.0a0+81ea7a4
PYTORCH_BUILD_NUMBER=0
CUDNN_VERSION=8.9.7.29+cuda12.2
PYTORCH_HOME=/opt/pytorch/pytorch
LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/cv2/../../lib64:/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NVIDIA_BUILD_ID=76438008
CUDA_DRIVER_VERSION=545.23.08
PYTORCH_BUILD_VERSION=2.2.0a0+81ea7a4
CUDA_HOME=/usr/local/cuda
CUDA_MODULE_LOADING=LAZY
NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
NVIDIA_PYTORCH_VERSION=23.12
TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
### 🐛 Describe the bug
When I use vLLM to create embeddings, the behavior is inconsistent between batching and sending requests one by one.
My model is "intfloat/e5-mistral-7b-instruct", and my test data is a list of 100 strings.
When I set max-num-seqs=1, I can pass the test in https://github.com/vllm-project/vllm/commits/main/tests/models/embedding/language/test_embedding.py .
But when I use batch inference, the result is inconsistent with huggingface or sentence-transformers: only the first 20 embeddings stay consistent with hf, while the others diverge with a cosine similarity of 0.98 or lower. Do you have any ideas to solve this batch inference problem? Thanks
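For reference, a minimal sketch of the kind of consistency check described above (the server URL, model name, and test sentences here are placeholders of mine, not taken from the attached script):

```python
# Sketch: compare batched vLLM server embeddings against a local
# sentence-transformers reference. Assumes an OpenAI-compatible vLLM
# server on localhost:8000 (hypothetical setup).
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

MODEL = "intfloat/e5-mistral-7b-instruct"
texts = [f"query: sentence number {i}" for i in range(100)]  # placeholder data

# Local reference embeddings.
ref = SentenceTransformer(MODEL).encode(texts, normalize_embeddings=True)

# Batched embeddings from the server's /v1/embeddings endpoint.
resp = requests.post(
    "http://localhost:8000/v1/embeddings",
    json={"model": MODEL, "input": texts},
).json()
emb = np.array([d["embedding"] for d in resp["data"]])
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)

# Per-sentence cosine similarity; values well below 1.0 indicate divergence.
print([round(c, 4) for c in (ref * emb).sum(axis=1)])
```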

### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-24T12:09:09Z | 2025-03-24T12:48:32Z | https://github.com/vllm-project/vllm/issues/15393 | [
"bug"
] | ehuaa | 1 |
man-group/arctic | pandas | 86 | append not working as expected | I have a dataframe stored in mongodb using arctic and I would like to append to the existing dataframe, e.g. updating daily prices.
I've tried using the version store and the append() function; however, it gives me a "not implemented for handler" error:
" File "C:\Anaconda\lib\site-packages\arctic\store\version_store.py", line 496, in append
raise Exception("Append not implemented for handler %s" % handler)
Exception: Append not implemented for handler <arctic.store._pickle_store.PickleStore object at 0x09274AB0>"
I've tried register_library_type('dataframestore', PandasDataFrameStore) but received some other error.
Do you have an example of how to update existing dataframe/series data, or is there a rule of thumb?
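For context, a minimal sketch of the write-then-append workflow with a plain VersionStore library (my guess at the intended usage, not a confirmed fix for the PickleStore error above):

```python
# Sketch: append daily prices to a time-indexed DataFrame in Arctic.
import pandas as pd
from arctic import Arctic

store = Arctic('localhost')
store.initialize_library('prices')  # defaults to a VersionStore library
lib = store['prices']

idx = pd.date_range('2016-01-01', periods=3, freq='D')
lib.write('AAPL', pd.DataFrame({'close': [1.0, 2.0, 3.0]}, index=idx))

# Append rows whose index continues past the stored data.
new_idx = pd.date_range('2016-01-04', periods=1, freq='D')
lib.append('AAPL', pd.DataFrame({'close': [4.0]}, index=new_idx))

print(lib.read('AAPL').data)
```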
| closed | 2016-01-05T19:59:38Z | 2016-01-06T21:55:39Z | https://github.com/man-group/arctic/issues/86 | [] | b3yang | 6 |
Avaiga/taipy | data-visualization | 2,215 | [🐛 BUG] Filtering on string in tables has a bad icon | ### What went wrong? 🤔
Since [PR#2087](https://github.com/Avaiga/taipy/pull/2087), addressing #426, there's an icon in the string field indicating whether or not the filtering should take into account the casing.
This icon is ugly:

### Expected Behavior
A more explicit icon (which should not be difficult to find) is visible.
### Version of Taipy
'develop' branch at the time of creating this issue.
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit test.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-11-06T09:06:32Z | 2024-11-06T15:13:42Z | https://github.com/Avaiga/taipy/issues/2215 | [
"💥Malfunction",
"🟨 Priority: Medium",
"GUI: Front-End"
] | FabienLelaquais | 0 |
pydata/xarray | pandas | 9,142 | mfdataset - ds.encoding["source"] to retrieve filename not valid key | ### What happened?
Looking at the doc https://docs.xarray.dev/en/stable/generated/xarray.open_mfdataset.html
> preprocess ([callable()](https://docs.python.org/3/library/functions.html#callable), optional) – If provided, call this function on each dataset prior to concatenation. You can find the file-name from which each dataset was loaded in ds.encoding["source"].
I expected to be able to use ds.encoding["source"] in my preprocess function to retrieve the filename. However, I get a KeyError instead (see the log output below).
### What did you expect to happen?
I expected the doc to be correct? unless I missed something trivial.
### Minimal Complete Verifiable Example
```Python
def preprocess_xarray_no_class(ds):
    filename = ds.encoding["source"]
    ds = ds.assign(
        filename=("time", [filename])
    )  # add new filename variable with time dimension
    return ds
ds = xr.open_mfdataset(
fileset,
preprocess=preprocess_xarray_no_class,
engine='h5netcdf',
concat_characters=True,
mask_and_scale=True,
decode_cf=True,
decode_times=True,
use_cftime=True,
parallel=True,
decode_coords=True,
compat="equals",
)
```
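For what it's worth, a small probe of my own (a sketch, not part of the original report) to see what each per-file dataset's encoding actually contains inside preprocess:

```python
# Sketch: list the encoding keys seen by preprocess for each file in fileset.
import xarray as xr

def preprocess_debug(ds):
    print(sorted(ds.encoding))  # per the docs, "source" should appear here
    return ds

ds = xr.open_mfdataset(fileset, preprocess=preprocess_debug, parallel=True)
```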
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
- [ ] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
...
      1 def preprocess_xarray_no_class(ds):
----> 2     filename = ds.encoding["source"]
      3     ds = ds.assign(
      4         filename=("time", [filename])
      5     )  # add new filename variable with time dimension
KeyError: 'source'
```
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0]
python-bits: 64
OS: Linux
OS-release: 6.5.0-1023-oem
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: C.UTF-8
LANG: en_IE.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.2
libnetcdf: 4.9.3-development
xarray: 2024.6.0
pandas: 2.2.2
numpy: 1.26.4
scipy: 1.13.1
netCDF4: 1.7.1
pydap: None
h5netcdf: 1.3.0
h5py: 3.11.0
zarr: 2.18.2
cftime: 1.6.4
nc_time_axis: 1.4.1
iris: None
bottleneck: 1.3.8
dask: 2024.6.0
distributed: 2024.6.0
matplotlib: 3.9.0
cartopy: None
seaborn: 0.13.2
numbagg: 0.8.1
fsspec: 2024.6.0
cupy: None
pint: None
sparse: None
flox: 0.9.7
numpy_groupies: 0.11.1
setuptools: 70.0.0
pip: 24.0
conda: None
pytest: 8.2.2
mypy: 1.10.0
IPython: 7.34.0
sphinx: None
</details>
| closed | 2024-06-20T04:09:51Z | 2024-06-30T14:03:47Z | https://github.com/pydata/xarray/issues/9142 | [
"bug"
] | lbesnard | 3 |
alpacahq/alpaca-trade-api-python | rest-api | 320 | Is polygon.historic_agg_v2 adjusted for dividends as well? | I know that the polygon.historic_agg_v2 is adjusted for splits, but is it adjusted for dividends as well? If not, what is a good way to adjust both dividends and splits for historical prices? | closed | 2020-11-09T19:14:23Z | 2020-12-23T21:29:06Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/320 | [] | zhangz73 | 4 |
aiortc/aiortc | asyncio | 793 | Forwarding remote tracks received from one peer to all other peers . | Hi @jlaine ,
First of all thank you for this amazing frame work
I'm facing an issue with forwarding tracks received from one peer to all other peers, which are running in separate threads:
1. I'm receiving remote tracks from one peer connection and adding them to a global set
2. When a new peer joins, I'm adding all the tracks to that peer with addTrack
3. The tracks are received on the remote side but do not play and receive no frames, except for one user, i.e. the first user to receive the remote track, for whom playback works; the other peer connections receive no frames and the track stays muted, never unmuting
Thanks a lot in advance
| closed | 2022-11-08T06:07:50Z | 2023-01-30T22:50:07Z | https://github.com/aiortc/aiortc/issues/793 | [] | manoj-thamizhan | 1 |
piccolo-orm/piccolo | fastapi | 1,117 | Are there any suggestions for one to many queries? | Are there any suggestions for one to many queries? | open | 2024-10-24T11:05:23Z | 2024-10-24T13:33:04Z | https://github.com/piccolo-orm/piccolo/issues/1117 | [] | sarvesh-deserve | 2 |
jupyter/nbviewer | jupyter | 137 | 404 error on a valid URL with the '/url' API | Hi,
A few weeks ago, I uploaded a ipynb file to a web site available at http://www.logilab.org/file/187482/raw/quandl-data-with-pandas.ipynb and could see the notebook thanks to a simple http://nbviewer.ipython.org/url/www.logilab.org/file/187482/raw/quandl-data-with-pandas.ipynb
Unfortunately, when I wanted to show the notebook to someone, I was surprised to get a 404 error, even though the URL is still valid and the file has not changed.
Is there a recent change about the '/url' API or is there a problem with my ipynb file?
Thanks,
Damien G.
| closed | 2013-12-06T07:57:52Z | 2013-12-10T18:46:35Z | https://github.com/jupyter/nbviewer/issues/137 | [] | garaud | 7 |
biolab/orange3 | data-visualization | 6,188 | Portable Orange3-3.33.0. could not find pythonw.exe | win10 64bit
Portable Orange
[Orange3-3.33.0.zip](https://download.biolab.si/download/files/Orange3-3.33.0.zip)
No installation needed. Just extract the archive and open the shortcut in the extracted folder.
Double-clicking the shortcut "Orange" (its target: %COMSPEC% /C start Orange\pythonw.exe -m Orange.canvas) fails with a message that pythonw.exe could not be found, asking to check whether the name is correct.
I checked pythonw.exe: it is in "G:\Orange3-3.33.0\Orange" the whole time.


| closed | 2022-10-29T03:41:16Z | 2022-11-04T10:52:30Z | https://github.com/biolab/orange3/issues/6188 | [
"bug report"
] | huangliang0828 | 2 |
pydata/pandas-datareader | pandas | 659 | Create Daily Sentiment Reader for IEX? | Hi - all - an additional reader for IEX for the link below would be great. I tried creating something based on pandas_datareader.iex.daily.IEXDailyReader but couldn't get it to work. Ideally getting a data frame back with daily sentiment between two dates (start, end) would be extremely useful.
Here is the API I'm trying to hit....
https://iexcloud.io/docs/api/#social-sentiment.
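For reference, a rough sketch of what such a reader's fetch loop might look like (the endpoint path, date format, and token handling are my assumptions from the linked docs, not a working pandas-datareader implementation):

```python
# Sketch: pull IEX Cloud daily social sentiment into a DataFrame.
import os
import pandas as pd
import requests

def iex_daily_sentiment(symbol, start, end, token=None):
    token = token or os.getenv("IEX_API_KEY")
    rows = []
    for date in pd.date_range(start, end, freq="D"):
        url = (f"https://cloud.iexapis.com/stable/stock/{symbol}"
               f"/sentiment/daily/{date:%Y%m%d}")  # assumed endpoint shape
        data = requests.get(url, params={"token": token}).json()
        rows.append({"date": date, **data})
    return pd.DataFrame(rows).set_index("date")

# df = iex_daily_sentiment("AAPL", "2019-07-01", "2019-07-05")
```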
Anyone have any suggestions? | closed | 2019-08-03T16:00:54Z | 2019-08-19T03:40:20Z | https://github.com/pydata/pandas-datareader/issues/659 | [] | scottstables | 0 |
kizniche/Mycodo | automation | 491 | Custom colors Graph dashboard doesn't work | ## Mycodo Issue Report:
- Specific Mycodo Version: 6.1.2
- Chromium Version 60.0.3112.89 (Developer Build) Built on Ubuntu 14.04, running on Raspbian 9.4 (32-bit)
#### Problem Description
When enabling custom colors on a Graph dashboard element, the color pickers won't show, so no colors can be selected.
### Errors
- No errors
### Steps to Reproduce the issue:
How can this issue be reproduced?
1. Make graph -> save
2. collapse graph -> tick custom colors -> save
3. collapse graph -> no colorpicker or input fields
Not a biggie but I thought I bring it to your attention. | closed | 2018-06-07T08:47:26Z | 2020-05-01T23:49:15Z | https://github.com/kizniche/Mycodo/issues/491 | [] | Gossen1 | 17 |
zihangdai/xlnet | tensorflow | 149 | How can i load pretrained model? | I have pretrained XLNET model in Georgian lagnuage. Training has generated this files:
.
Now i want to load pretrained XLNET model and for one sentence get sentence_embedding vector.
Can you help me ? | open | 2019-07-10T11:28:01Z | 2019-07-11T03:11:02Z | https://github.com/zihangdai/xlnet/issues/149 | [] | Bagdu | 1 |
tiangolo/uvicorn-gunicorn-fastapi-docker | pydantic | 56 | docker-compose and gunicorn_conf.py file preparation? | Hi,
I want to pass custom settings to Gunicorn and Uvicorn for the `workers` configuration. I have followed this [file](https://github.com/tiangolo/uvicorn-gunicorn-docker/blob/622470ec9aedb5da2cd2235bbca3f9e8e6256cdb/docker-images/gunicorn_conf.py#L21).
So I have added a `gunicorn_conf.py` file in my `/app/` folder. The directory structure is as follows:
```
fastapi
|- app
|  |- main.py
|  |- gunicorn_conf.py
|- docker-compose.yml
|- Dockerfile
```
The content of `gunicorn_conf.py`
```
import json
import multiprocessing
import os
workers_per_core_str = os.getenv("WORKERS_PER_CORE", "10")
max_workers_str = os.getenv("MAX_WORKERS")
use_max_workers = None
if max_workers_str:
use_max_workers = int(max_workers_str)
web_concurrency_str = os.getenv("WEB_CONCURRENCY", None)
host = os.getenv("HOST", "0.0.0.0")
port = os.getenv("PORT", "80")
bind_env = os.getenv("BIND", None)
use_loglevel = os.getenv("LOG_LEVEL", "info")
if bind_env:
use_bind = bind_env
else:
use_bind = f"{host}:{port}"
cores = multiprocessing.cpu_count()
workers_per_core = float(workers_per_core_str)
default_web_concurrency = workers_per_core * cores
if web_concurrency_str:
web_concurrency = int(web_concurrency_str)
assert web_concurrency > 0
else:
web_concurrency = max(int(default_web_concurrency), 2)
if use_max_workers:
web_concurrency = min(web_concurrency, use_max_workers)
accesslog_var = os.getenv("ACCESS_LOG", "-")
use_accesslog = accesslog_var or None
errorlog_var = os.getenv("ERROR_LOG", "-")
use_errorlog = errorlog_var or None
graceful_timeout_str = os.getenv("GRACEFUL_TIMEOUT", "120")
timeout_str = os.getenv("TIMEOUT", "120")
keepalive_str = os.getenv("KEEP_ALIVE", "5")
# Gunicorn config variables
loglevel = use_loglevel
workers = web_concurrency
bind = use_bind
errorlog = use_errorlog
worker_tmp_dir = "/dev/shm"
accesslog = use_accesslog
graceful_timeout = int(graceful_timeout_str)
timeout = int(timeout_str)
keepalive = int(keepalive_str)
# For debugging and testing
log_data = {
"loglevel": loglevel,
"workers": workers,
"bind": bind,
"graceful_timeout": graceful_timeout,
"timeout": timeout,
"keepalive": keepalive,
"errorlog": errorlog,
"accesslog": accesslog,
# Additional, non-gunicorn variables
"workers_per_core": workers_per_core,
"use_max_workers": use_max_workers,
"host": host,
"port": port,
}
print(json.dumps(log_data))
```
And content of `docker-compose.yml`
```
version: '3'
services:
web:
build:
context: .
volumes:
- ./app:/app
ports:
- "80:80"
#environment:
command: bash -c "uvicorn main:app --reload --host 0.0.0.0 --port 80"
# Infinite loop, to keep it alive, for debugging
# command: bash -c "while true; do echo 'sleeping...' && sleep 10; done"
```
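One thing worth noting (my reading, stated as an assumption): the `command:` above launches Uvicorn directly, so Gunicorn - and therefore `gunicorn_conf.py` - is never executed in this setup. As a standalone sanity check, the worker count the config file would compute can be reproduced like this:

```python
# Sketch: compute the worker count the same way gunicorn_conf.py does.
import multiprocessing
import os

workers_per_core = float(os.getenv("WORKERS_PER_CORE", "10"))
cores = multiprocessing.cpu_count()
web_concurrency = max(int(workers_per_core * cores), 2)
print(f"{cores} cores -> {web_concurrency} workers")
```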
My server is not picking up the parameters from `gunicorn_conf.py`.
Am I missing something here?
| closed | 2020-09-14T20:06:39Z | 2020-12-27T20:31:11Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/56 | [] | laxmimerit | 3 |
mwaskom/seaborn | matplotlib | 3,437 | Seaborn Heatmap Documentation | Hi all I was looking at your [heatmap example](https://seaborn.pydata.org/examples/spreadsheet_heatmap.html) and found that the `pandasDataframe.pivot()` function does not work locally as called.
I had to change
`flights = flights_long.pivot("month", "year", "passengers")`
to
`flights = flights_long.pivot(index="month", columns="year", values="passengers")`, specifying the kwargs.
I'm working through this because I am making an Advanced Visualization Cookbook for [Project Pythia](https://projectpythia.org/) and trying to provide an overview of all the different plotting libraries scientific python programmers have ever asked me about during plotting tutorials. If you'd like input or feedback on how your project is summarized or if you'd like a workflow to be featured in our interactive plotting chapter please let me know. | closed | 2023-08-09T22:19:49Z | 2023-08-10T00:05:39Z | https://github.com/mwaskom/seaborn/issues/3437 | [] | jukent | 1 |
tfranzel/drf-spectacular | rest-api | 692 | Document endpoint supporting both many=True and many=False | I have a viewset that currently supports creation of a single or multiple items at once. It looks something like this:
```python
class FooViewSet(viewsets.ModelViewSet):
def create(self, request, *args, **kwargs):
if not isinstance(request.data, list):
return super().create(request, *args, **kwargs)
else:
serializer = self.get_serializer(data=request.data, many=True)
serializer.is_valid(raise_exception=True)
self.perform_bulk_create(serializer)
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
```
There are two ways this could be documented. Either by reusing the component schema with something like this:
```yaml
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/Foo'
- type: array
items:
$ref: '#/components/schemas/Foo'
```
<details>
<summary>schema.yaml reusing component schemas</summary>
```yaml
openapi: 3.0.3
info:
title: ''
version: 0.0.0
paths:
/api/foos/foo/:
post:
operationId: foo_foo_create
description: ''
requestBody:
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/FooRequest'
- type: array
items:
$ref: '#/components/schemas/FooRequest'
required: true
responses:
'201':
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/Foo'
- type: array
items:
$ref: '#/components/schemas/Foo'
description: ''
components:
schemas:
Foo:
type: object
properties:
id:
type: integer
some_field:
type: integer
required:
- id
- some_field
FooRequest:
type: object
properties:
some_field:
type: integer
required:
- some_field
```
</details>
Or perhaps one could define multiple component schemas:
```yaml
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/Foo'
- $ref: '#/components/schemas/FooList'
```
<details>
<summary>schema.yaml with multiple component schemas</summary>
```yaml
openapi: 3.0.3
info:
title: ''
version: 0.0.0
paths:
/api/foos/foo/:
post:
operationId: foo_foo_create
description: ''
requestBody:
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/FooRequest'
- $ref: '#/components/schemas/FooRequestList'
required: true
responses:
'201':
content:
application/json:
schema:
anyOf:
- $ref: '#/components/schemas/Foo'
- $ref: '#/components/schemas/FooList'
description: ''
components:
schemas:
Foo:
type: object
properties:
id:
type: integer
some_field:
type: integer
required:
- id
- some_field
FooList:
type: array
items:
$ref: '#/components/schemas/Foo'
FooRequest:
type: object
properties:
some_field:
type: integer
required:
- some_field
FooRequestList:
type: array
items:
$ref: '#/components/schemas/FooRequest'
```
</details>
I've tried the PolymorphicProxySerializer but that doesn't seem to work here.
This just generates an empty request:
```python
@extend_schema(
request=PolymorphicProxySerializer(
"DifferentRequests",
serializers=[
FooSerializer,
inline_serializer("ListSerializer", fields={"items": FooSerializer()}),
],
resource_type_field_name=None,
)
)
```
This just gives an error `'child' is a required argument`:
```python
@extend_schema(
request=PolymorphicProxySerializer(
"DifferentRequests",
serializers=[
FooSerializer, # FooSerializer.Meta.list_serializer_class == FooListSerializer
FooListSerializer, # FooListSerializer isinstance of ListSerializer
],
resource_type_field_name=None,
)
)
```
This just fails:
```python
@extend_schema(
request=PolymorphicProxySerializer(
"DifferentRequests",
serializers=[
FooSerializer,
FooSerializer(many=True), # extend_schema expecting type, not an instance
],
resource_type_field_name=None,
)
)
```
How can I have drf-spectacular generate the correct documentation for me in this case? I want to have an API that supports both single objects and lists of objects.
"bug",
"enhancement",
"fix confirmation pending"
] | CelestialGuru | 9 |
tableau/server-client-python | rest-api | 710 | Version releases according to semantic versioning | Hello,
The release version numbers of `tableauserverclient` follow `<major.minor>`. Would it be possible to use `<major.minor.patch>`, as advised by semantic versioning (https://semver.org/)?
Following your current releases, it should be easy to simply use 0 as the patch number, and keep the rest unchanged.
Thank you! | closed | 2020-10-23T11:32:03Z | 2020-11-11T22:38:58Z | https://github.com/tableau/server-client-python/issues/710 | [] | matthieucan | 4 |
klen/mixer | sqlalchemy | 30 | Handle Django multi-table inheritance | I try to blend instance of child model with `commit=False`. The model is inherited from `auth.User` model, using multi-table inheritance. I get following error:
```
Cannot generate a unique value for user_ptr
```
| closed | 2014-11-26T14:03:00Z | 2014-12-08T14:42:43Z | https://github.com/klen/mixer/issues/30 | [] | DXist | 2 |
koaning/scikit-lego | scikit-learn | 414 | [FEATURE] More time parameters on make_simpleseries | Good morning,
Currently make_simpleseries only generates daily data, which does not fit my use case, where I need a higher granularity. Given that it uses pd.date_range to generate the dates, I want to expose those parameters in the generation of the time dimension.
Also, the possibility to set the time as an index, or even to create the object as a pd.Series, would be useful (see the sketch below).
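To make the ask concrete, a sketch of the kind of signature I have in mind (the start, freq, and as_index parameters are hypothetical, not part of the current API):

```python
# Sketch of the proposed time-dimension handling (hypothetical parameters).
import numpy as np
import pandas as pd

def make_simpleseries_sketch(n_samples=365, start="2020-01-01", freq="H",
                             as_index=True, seed=42):
    rng = np.random.default_rng(seed)
    values = np.sin(np.arange(n_samples) / 10) + rng.normal(0, 0.1, n_samples)
    dates = pd.date_range(start=start, periods=n_samples, freq=freq)
    if as_index:
        return pd.Series(values, index=dates, name="yt")
    return pd.DataFrame({"date": dates, "yt": values})

print(make_simpleseries_sketch(freq="15min").head())
```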
Thank you,
Gonxo | open | 2020-09-29T09:37:47Z | 2020-10-26T08:22:11Z | https://github.com/koaning/scikit-lego/issues/414 | [] | GonxoMR | 1 |
d2l-ai/d2l-en | pytorch | 2,523 | pip install d2l==1.0.0b0 Fails to Install on Linux Mint/Ubuntu 22.04 | Error Message:
Collecting d2l==1.0.0b0
Using cached d2l-1.0.0b0-py3-none-any.whl (141 kB)
Collecting jupyter (from d2l==1.0.0b0)
Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Requirement already satisfied: numpy in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (1.24.3)
Requirement already satisfied: matplotlib in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (3.7.1)
Requirement already satisfied: matplotlib-inline in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (0.1.6)
Requirement already satisfied: requests in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (2.31.0)
Requirement already satisfied: pandas in /home/remote/miniconda3/envs/pt/lib/python3.10/site-packages (from d2l==1.0.0b0) (1.5.3)
Collecting gym==0.21.0 (from d2l==1.0.0b0)
Using cached gym-0.21.0.tar.gz (1.5 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
error in gym setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Thank you! | closed | 2023-07-01T17:56:41Z | 2023-07-01T18:12:09Z | https://github.com/d2l-ai/d2l-en/issues/2523 | [] | k7e7n7t | 1 |
QuivrHQ/quivr | api | 3,453 | Create workflow management systems (WMS) | We should industrialise our workflow management by developing a system following this [scheme](https://www.figma.com/board/s8D352vFbVGXMERi6XOPV6/Workflow-Management?node-id=0-1&node-type=canvas&t=pSjkemXnvBGdf2wO-0) | closed | 2024-11-04T16:10:30Z | 2025-02-07T20:06:32Z | https://github.com/QuivrHQ/quivr/issues/3453 | [
"Stale",
"area: backend",
"area: frontend"
] | jacopo-chevallard | 2 |
miguelgrinberg/Flask-SocketIO | flask | 1,506 | Can you do a start_background_task on eventlet async mode and never join/kill it? | Before anything else, I would like to apologize in advance for the grammar and english mistakes I'm probably going to make. It's not my first language.
So i'm working on a project and so far everything is working perfectly. Thank you for this amazing extension. I have a quick question. In my application, a user can place a sort of "bet". When they do, the server has to wait 30 seconds, while checking if there are any new players. If there are, the timer stops and the game begins. If not, the game is cancelled.
My question is, if I use the `socketio.start_background_task()` function, on the eventlet server, can I never actually do a `join()` on the thread? Will there be any memory leaks or will I lose performance because of "sleeping" threads?
I'll include some code to further illustrate my question. (The `self.get_state()` is simply a way to check (using redis) if any other process, such as another server when the app scales, has changed the state). I'm storing pretty much everything to do with the game's state in a redis DB.
```
# On a route file.
game = Game()
@socket.on("bet-jackpot")
def bet_jackpot(amount):
sid = request.sid
# Omitting some database work and checks.
try:
game.handle_bet(user_id, sid, user_name, user_avatar, amount)
except RuntimeError:
emit("jck-error", "Can't bet now", room=sid)
return
# On the models file.
class Game:
# Also omitting some logic.
def handle_bet(self, id, sid, username, avatar, amount):
if self.get_state() == "R":
raise RuntimeError("Cant bet now")
if self.get_state() == "W":
self._add_player_to_game(id, sid, username, avatar, amount)
socket.start_background_task(target=self.start_game_loop)
elif self.get_state() == "O" or self.get_state() == "L":
self._add_player_to_game(id, sid, username, avatar, amount)
def start_game_loop(self):
self.set_state("O")
socket.emit("jck-state", "O")
socket.emit("jck-timer", 30.00)
self._start_one_player_timer(30.00) # Timer function that loops until there are 2 players or the 30 seconds end.
if self.get_player_amount() <= 1: # Cancelling game
socket.emit("jck-state", "W")
self.reset()
return
self.set_state("L")
socket.emit("jck-state", "L")
socket.emit("jck-timer", 25.00)
self._start_main_timer(25.00) # Same as above, but no checking for players.
# After, we do some simple db work.
```
If this function gets called every time a game is played, will the server eventually lag and lose performance because of the threads that are never joined? Or will the eventlet web server know how to "stop" a finished thread?
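For what it's worth, a tiny standalone probe along the lines of my question (a sketch using eventlet directly rather than the real app; the numbers are made up):

```python
# Sketch: spawn many short-lived green threads without ever joining them,
# then check that they all ran to completion.
import eventlet

done = []

def short_task(i):
    eventlet.sleep(0.01)  # stand-in for the 30-second game timer
    done.append(i)

for i in range(10_000):
    eventlet.spawn(short_task, i)  # never joined, like start_background_task

eventlet.sleep(1)  # give the hub time to run everything
print(len(done))   # if all tasks finished, nothing is left "sleeping"
```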
Thank you and sorry if it seems like a "noob" question, I'm not very experienced in multithreading and these types of apps.
| closed | 2021-03-27T12:21:21Z | 2021-03-27T17:26:48Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1506 | [
"question"
] | andremsilva03 | 2 |
horovod/horovod | tensorflow | 4,073 | Failed to install Horovod | **Environment:**
1. **Framework**: TensorFlow, PyTorch
2. **Framework version**:
- TensorFlow: 2.18.0
- PyTorch: 2.4.1
3. **Horovod version**: Attempting to install latest via pip
4. **MPI version**: Microsoft MPI (attempted with version 10)
5. **CUDA version**: 11.8
6. **NCCL version**: None
7. **Python version**: 3.11.9
8. **Spark / PySpark version**: N/A
9. **Ray version**: N/A
10. **OS and version**: Windows 10
11. **GCC version**: Not installed (Windows environment)
12. **CMake version**: 3.30
---
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? Yes
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? N/A
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
---
**Bug report:**
I'm encountering issues installing Horovod on Windows 10 with TensorFlow and PyTorch frameworks. Here’s a summary of the setup and error details:
**Steps Taken**:
1. Set environment variables:
```cmd
set HOROVOD_WITH_MPI=1
set HOROVOD_WITH_CUDA=1
set HOROVOD_CUDA_HOME=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
set MPI_HOME="E:/Program Files/Microsoft MPI"
set MPIEXEC_EXECUTABLE="E:\Program Files\Microsoft MPI\Bin\mpiexec.exe"
```
2. Attempted installation command:
```bash
pip install horovod --no-cache-dir
```
**Error**:
```plaintext
Could NOT find MPI (missing: MPI_CXX_FOUND)
CMake Error: The source directory "E:/Projects" does not appear to contain CMakeLists.txt.
```
**Additional Details**:
- Running `mpiexec --version` did not provide the expected version output.
- Verified CUDA 11.8 installation via `nvcc --version`.
- Using Microsoft MPI, but suspect compatibility issues with Horovod on Windows.
| open | 2024-10-25T16:53:16Z | 2025-03-14T21:28:20Z | https://github.com/horovod/horovod/issues/4073 | [
"bug"
] | adityakm24 | 3 |
hzwer/ECCV2022-RIFE | computer-vision | 314 | [Question] Does RIFE interpolate using only the two adjacent frames, or is there something more in Video Interpolation? | There is no discussion panel, so I'll just ask a question here.
Does RIFE interpolate using only the two frames next to each other (like in **Image Interpolation**)?
Or is there something more in **Video Interpolation**?
------
TL;DR
Actually, I just have an image sequence, and I want to know if there will be a difference between:
- Converting the image sequence to a video and then using **RIFE Image Interpolation**
- Using **RIFE Image Interpolation** and then converting the output image sequence into a video
(I already wrote a shell script for iterating over the image sequence in a folder, so this won't be a problem) | closed | 2023-05-29T08:24:38Z | 2023-05-31T03:12:11Z | https://github.com/hzwer/ECCV2022-RIFE/issues/314 | [] | catscarlet | 2 |
recommenders-team/recommenders | deep-learning | 1,962 | [ASK] Question on ndcg_at_k calculation | ### Description
<!--- Describe your general ask in detail -->
Why are we using rank('first') to get the order of the ideal ranking instead of rank('min') or rank('average')?
https://github.com/microsoft/recommenders/blob/main/recommenders/evaluation/python_evaluation.py#L687
line 597
df_idcg["irank"] = df_idcg.groupby(col_user, as_index=False, sort=False)[
col_rating
].rank("first", ascending=False)
In this case, if there is a tie in the ratings - for example, items A, B, C, D with ratings 1, 0, 0, 0 - using rank('first') gives irank = 1, 2, 3, 4. But should we take the tie into consideration, which would mean irank = 1, 2, 2, 2?
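A quick pandas check of the difference in question (my own illustration, not code from the library):

```python
# Sketch: how method="first" vs method="min" rank the tied ratings 1, 0, 0, 0.
import pandas as pd

ratings = pd.Series([1, 0, 0, 0], index=["A", "B", "C", "D"])
print(ratings.rank(method="first", ascending=False))  # A=1, B=2, C=3, D=4
print(ratings.rank(method="min", ascending=False))    # A=1, B=2, C=2, D=2
```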
### Other Comments
| open | 2023-07-24T17:49:36Z | 2023-07-24T17:54:29Z | https://github.com/recommenders-team/recommenders/issues/1962 | [
"help wanted"
] | Lulu20220 | 0 |
LAION-AI/Open-Assistant | machine-learning | 2,781 | Clipped outputs | For several of the large output results, the output is clipped. Can the max output length be increased.

| closed | 2023-04-20T15:16:31Z | 2023-04-27T09:19:42Z | https://github.com/LAION-AI/Open-Assistant/issues/2781 | [] | ishgirwan | 1 |
kubeflow/katib | scikit-learn | 2,270 | Update experiment instance status failed: the object has been modified | /kind bug
**What steps did you take and what happened:**
I got error when update experiment status in experiment controller.
```
{"level":"info","ts":"2024-03-04T01:39:38Z","logger":"experiment-controller","msg":"Update experiment instance status failed, reconciler requeued","Experiment":{"name":"a10702550312415232282375","namespace":"heros-user"},"err":"Operation cannot be fulfilled on experiments.kubeflow.org \"a10702550312415232282375\": the object has been modified; please apply your changes to the latest version and try again"}
```
**What did you expect to happen:**
The code for the experiment status update is shown below. It's not supposed to raise an error, because it only updates the status, even if the experiment object has been modified. I'm not sure whether my understanding is correct.
https://github.com/kubeflow/katib/blob/master/pkg/controller.v1beta1/experiment/experiment_controller.go#L237
```go
if !equality.Semantic.DeepEqual(original.Status, instance.Status) {
// assuming that only status change
err = r.updateStatusHandler(instance)
if err != nil {
logger.Info("Update experiment instance status failed, reconciler requeued", "err", err)
return reconcile.Result{
Requeue: true,
}, nil
}
}
```
**Environment:**
- Katib version: v0.16
- Kubernetes version: v1.25.13
- OS: Linux 5.15.47-1.el7.x86_64 x86_64
---
Impacted by this bug? Give it a 👍 We prioritize the issues with the most 👍
| closed | 2024-03-04T08:32:37Z | 2024-06-22T15:04:52Z | https://github.com/kubeflow/katib/issues/2270 | [
"kind/bug",
"lifecycle/stale"
] | Antsypc | 3 |
RobertCraigie/prisma-client-py | pydantic | 858 | Pydantic >2.0 makes `prisma generate` crash | Thank you for the awesome work on this project.
## Bug description
Prisma Generate fails when using Pydantic >2.0 because of a warning
## How to reproduce
* Step 1. In a project with an existing prisma.schema, install Prisma as well as Pydantic > 2.0.
* Step 2. Run `prisma generate`
Generation fails with the following error, and no Prisma classes are generated.
```
(.venv) monarch@Monarch-Legion:~/workspace/startedup/backend$ prisma generate
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Error:
Traceback (most recent call last):
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/prisma/generator/generator.py", line 112, in run
self._on_request(request)
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/prisma/generator/generator.py", line 170, in _on_request
self.generate(data)
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/prisma/generator/generator.py", line 268, in generate
render_template(rootdir, name, params)
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/prisma/generator/generator.py", line 309, in render_template
output = template.render(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/prisma/generator/templates/client.py.jinja", line 42, in top-level template code
BINARY_PATHS = model_parse(BinaryPaths, {{ binary_paths.dict(by_alias=True) }})
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/monarch/workspace/startedup/backend/.venv/lib/python3.12/site-packages/typing_extensions.py", line 2498, in wrapper
warnings.warn(msg, category=category, stacklevel=stacklevel + 1)
pydantic.warnings.PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
```
## Expected behavior
Should generate Prisma classes and not print error
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
```prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
provider = "prisma-client-py"
interface = "asyncio"
recursive_type_depth = 5
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
model User {
id String @id @default(uuid())
is_admin Boolean @default(false)
email String @unique
password String @unique
created_at DateTime @default(now())
updated_at DateTime @updatedAt
GeneratedContent GeneratedContent[]
}
model GeneratedContent {
id String @id @default(uuid())
content String
user User @relation(fields: [user_id], references: [id])
user_id String
created_at DateTime @default(now())
updated_at DateTime @updatedAt
}
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: WSL on Windows
- Database: PostgreSQL
- Python version: Tested with 3.11.4 and 3.12
- Prisma version:
<!--[Run `prisma py version` to see your Prisma version and paste it between the ´´´]-->
```
prisma : 5.4.2
prisma client python : 0.11.0
platform : debian-openssl-1.1.x
expected engine version : ac9d7041ed77bcc8a8dbd2ab6616b39013829574
```
| closed | 2023-12-19T05:08:53Z | 2024-02-15T23:08:12Z | https://github.com/RobertCraigie/prisma-client-py/issues/858 | [
"bug/2-confirmed",
"kind/bug",
"priority/high",
"level/unknown",
"topic: crash"
] | monarchwadia | 2 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 127 | Question About Long Running Mutations And Asynchronous Tasks | Hello,
We're using Flask, Graphene and SQLAlchemy on our project. The API currently runs on our server using uWSGI and Nginx. Some of the mutations we have created trigger long-running jobs (like 2 to 5 minutes). We realized that:
- When one of the long-running jobs is triggered/running, no other HTTP request is handled by our Flask application. Is this a known limitation of Graphene/SQLAlchemy? Or are we doing something wrong?
- What would you say is the best way to manage this kind of long-running request from the end user's point of view? I'm thinking about immediately returning a message saying the job is triggered and then letting the job run in the background, but I'm not really sure how to manage such an asynchronous task in Python (see the rough sketch after this list).
Alexis
| closed | 2018-04-30T11:12:17Z | 2023-02-24T14:55:54Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/127 | [] | alexisrolland | 2 |
dynaconf/dynaconf | fastapi | 493 | [RFC] Support for Vault authentication through EC2 metadata service | **Is your feature request related to a problem? Please describe.**
I'm currently running an app in an EC2 instance, where I'd like to integrate dynaconf with Vault for my configurations. However, it seems dynaconf [currently only supports AWS authentication through a boto3 session](https://github.com/rochacbruno/dynaconf/blob/master/dynaconf/loaders/vault_loader.py), but not through the [EC2 metadata service](https://hvac.readthedocs.io/en/stable/usage/auth_methods/aws.html#ec2-metadata-service) that I'm using. It would be nice if we could add support for it.
**Describe the solution you'd like**
Providing optional configurations to accept EC2 roles to authenticate through EC2 metadata service.
**Describe alternatives you've considered**
I'm currently using HVAC client directly to walk around the issue.
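For reference, roughly what that workaround looks like with hvac's EC2 login (a sketch; the Vault address and role name are placeholders):

```python
# Sketch: authenticate to Vault from EC2 via the instance metadata service.
import hvac
import requests

client = hvac.Client(url="https://vault.example.com:8200")  # placeholder URL

# PKCS7 instance-identity document from the EC2 metadata service.
pkcs7 = requests.get(
    "http://169.254.169.254/latest/dynamic/instance-identity/pkcs7"
).text.replace("\n", "")

client.auth.aws.ec2_login(pkcs7=pkcs7, role="my-ec2-role")  # placeholder role
print(client.is_authenticated())
```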
**Additional context**
An unrelated observation: even for boto3 session authentication, it seems to me that we need to pass `header_value` in the call to `client.auth.aws.iam_login()` as well; otherwise I receive the error `hvac.exceptions.InvalidRequest: error validating X-Vault-AWS-IAM-Server-ID header: missing header "X-Vault-AWS-IAM-Server-ID", on post`
| closed | 2020-12-19T00:17:58Z | 2022-07-02T20:12:35Z | https://github.com/dynaconf/dynaconf/issues/493 | [
"wontfix",
"Not a Bug",
"RFC"
] | SuperStevenZ | 2 |
JaidedAI/EasyOCR | pytorch | 382 | Vertical recognition | Hi there,
I think there are few issues with vertical words.
Shouldn't this function https://github.com/JaidedAI/EasyOCR/blob/master/easyocr/easyocr.py#L344 output `max_width` as well, in order to update `max_width = max(max_width, imgH)` in the next line? It seems that if there is a long vertical word, then it's capped by imgH and the recognition is usually wrong.
Also, I realized that images are cropped and resized in here https://github.com/JaidedAI/EasyOCR/blob/master/easyocr/easyocr.py#L341 based on their ratio which makes long image crops, that is h >> w, very small (their width is squeezed a lot). Then, these resized images are rotated (90, 180 and 270) in https://github.com/JaidedAI/EasyOCR/blob/master/easyocr/easyocr.py#L344. I think the images should be rotated before they get resized. | closed | 2021-02-25T05:01:08Z | 2021-08-07T05:37:00Z | https://github.com/JaidedAI/EasyOCR/issues/382 | [] | miliadis | 2 |
plotly/dash-table | dash | 641 | Table loading-state behaves incorrectly | Using the same example as defined in the server test https://github.com/plotly/dash-table/blob/dev/tests/cypress/dash/v_data_loading.py, typing into the input causes the focus to be moved back to the table's cell in `dash>=1.3.0`.
The table should not steal away the focus from the input and yet refresh/renderer itself correctly and implying focus correctly if the table is selected, when the `loading_state` switches. | closed | 2019-11-12T22:26:38Z | 2019-11-14T15:44:46Z | https://github.com/plotly/dash-table/issues/641 | [
"dash-type-bug",
"size: 0.5"
] | Marc-Andre-Rivet | 0 |
modelscope/data-juicer | streamlit | 126 | [Bug]: When using some filter operators, I got a datasets.builder.DatasetGenerationError: An error occurred while generating the dataset; I'd like to know the cause, thanks. | ### Before Reporting
- [X] I have pulled the latest code of main branch to run again and the bug still existed.
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you can ask a question using the Question template)
### Search before reporting
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs.
### OS
ubuntu
### Installation Method
source
### Data-Juicer Version
v0.1.2
### Python Version
3.10.11
### Describe the bug


### To Reproduce
I only edited the analyser.yaml file; also, the input data is a folder (containing JSON files as well as .txt and .sh files).
### Configs
_No response_
### Logs
_No response_
### Screenshots
_No response_
### Additional
_No response_ | closed | 2023-12-08T08:11:18Z | 2023-12-26T08:06:27Z | https://github.com/modelscope/data-juicer/issues/126 | [
"bug"
] | hitszxs | 7 |
graphql-python/graphql-core | graphql | 78 | Field Directive Is Not "Inherited" From Interface | I was adding query complexity analysis into [graphql-utilities](https://github.com/melvinkcx/graphql-utilities) while I came across this strange behavior.
In the following schema, the `@cost` directive of `createdAt` in `TimestampedType` is not found in `Announcement -> createdAt`.
```
interface TimestampedType {
createdAt: String @cost(complexity: 2)
updatedAt: String @cost(complexity: 2)
}
type Announcement implements TimestampedType {
createdAt: String
updatedAt: String
announcementId: String! @cost(complexity: 4)
title: String
text: String
}
```
This is the screenshots of my debugger:
1. `<AnnouncementField> -> ast_node -> fields -> createdAt`:

2. `<AnnouncementField> -> interfaces[0] -> ast_node -> fields -> createdAt`:

As I couldn't find any relevant answer from the spec, I'm not certain if the directive is supposed to be "inherited" from the interface. However, from what I observed in `graphql-js`, inheriting directive seems to be the correct behavior.
I appreciate any answer or help, thanks! | closed | 2020-02-10T11:53:37Z | 2020-02-11T06:04:36Z | https://github.com/graphql-python/graphql-core/issues/78 | [] | melvinkcx | 2 |
GibbsConsulting/django-plotly-dash | plotly | 49 | Update template tag documentation | There are undocumented template tags, as noted in #48
Ideally the [documentation](https://django-plotly-dash.readthedocs.io/en/latest/template_tags.html) should be extended to cover them.
| closed | 2018-09-21T15:41:06Z | 2018-10-19T21:59:59Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/49 | [] | GibbsConsulting | 0 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 203 | Character error rate is 100% and the loss decreases very slowly | Hi everyone, I'm running python train_mspeech.py on Google Colab and keep getting a 100% single-character error rate on both the train and dev sets; also, once the loss reaches around 210 it decreases very slowly. Is this normal?
```
*[Test result] Speech recognition dev set single-character error rate: 100.0%
[message] epoch 0. Have train datas 11000+
Epoch 1/1
500/500 [==============================] - 145s 291ms/step - loss: 209.9455
Test progress: 0/4
*[Test result] Speech recognition train set single-character error rate: 100.0%
Test progress: 0/4
*[Test result] Speech recognition dev set single-character error rate: 100.0%
[message] epoch 0. Have train datas 11500+
Epoch 1/1
500/500 [==============================] - 144s 288ms/step - loss: 210.5319
Test progress: 0/4
*[Test result] Speech recognition train set single-character error rate: 100.0%
Test progress: 0/4
*[Test result] Speech recognition dev set single-character error rate: 100.0%
[message] epoch 0. Have train datas 12000+
Epoch 1/1
500/500 [==============================] - 144s 288ms/step - loss: 209.1676
Test progress: 0/4
*[Test result] Speech recognition train set single-character error rate: 100.0%
Test progress: 0/4
*[Test result] Speech recognition dev set single-character error rate: 100.0%
[message] epoch 0. Have train datas 12500+
Epoch 1/1
500/500 [==============================] - 143s 285ms/step - loss: 209.7521
Test progress: 0/4
*[Test result] Speech recognition train set single-character error rate: 100.0%
Test progress: 0/4
*[Test result] Speech recognition dev set single-character error rate: 100.0%
[message] epoch 0. Have train datas 13000+
Epoch 1/1
227/500
8-1.2065
``` | open | 2020-07-11T15:03:04Z | 2021-07-26T08:21:23Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/203 | [] | wxhiff | 4 |
Lightning-AI/pytorch-lightning | data-science | 19,838 | Torchmetrics Accuracy issue when test data is not shuffled | ### Bug description
I am creating a CNN model to recognize dogs and cats. I trained it, and when I evaluate its accuracy by hand it reaches 80-85% on unseen data.
But when I try to use torchmetrics.Accuracy to calculate the accuracy, for some reason I get wrong values. Let me explain:
The code of the model (I use Python, torch, and lightning to optimize the model and code):
```
import lightning as L
import torch
import torchmetrics
import torchvision
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision import transforms, datasets
from torchvision.transforms import ToTensor
from CustomDataset import CustomDataset
class Model(L.LightningModule):
def __init__(self, batch_size, learning_rate, num_classes):
super(Model, self).__init__()
self.save_hyperparameters()
## HERE GOES MODEL LAYERS CRITERION etc
self.accuracy = torchmetrics.Accuracy(num_classes=2, average='macro', task='multiclass')
self.test_transform = transforms.Compose([
transforms.Resize((200, 200)), # Resize images to 256x256
transforms.ToTensor(), # Convert images to PyTorch tensors
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) # Normalize images
])
self.transform = transforms.Compose([
transforms.RandomResizedCrop(200), # Randomly crops and resizes images to 224x224
transforms.RandomHorizontalFlip(p=0.5), # Randomly flips images horizontally
transforms.RandomRotation(15), # Resize images to 256x256
transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
transforms.ToTensor(), # Convert images to PyTorch tensors
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) # Normalize images
])
def forward(self, image):
image = F.relu(self.conv1(image))
image = self.pool(image)
image = F.relu(self.conv2(image))
image = self.pool(image)
image = F.relu(self.conv3(image))
image = self.pool(image) # Output is now (128, 25, 25)
image = torch.flatten(image, 1) # Flatten the output
image = F.relu(self.fc1(image))
image = self.fc2(image)
return image
def training_step(self, batch, batch_idx):
images, labels = batch
predictions = self(images) # Forward pass
loss = self.criterion(predictions, labels) # Compute the loss
predicted_classes = torch.argmax(F.softmax(predictions, dim=1), dim=1)
predictions_softmax = F.softmax(predictions, dim=1)
acc = self.accuracy(predictions_softmax, labels)
self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
self.log('train_acc', acc, on_step=True, on_epoch=True, prog_bar=True)
return loss # Returning the loss for backpropagation
def validation_step(self, batch, batch_idx):
images, labels = batch
predictions = self(images)
loss = self.criterion(predictions, labels)
predicted_classes = torch.argmax(F.softmax(predictions, dim=1), dim=1)
predictions_softmax = F.softmax(predictions, dim=1)
acc = self.accuracy(predictions_softmax, labels)
self.log('val_loss', loss, prog_bar=True)
self.log('val_acc', acc, prog_bar=True)
return loss
def test_step(self, batch, batch_idx):
images, labels = batch
predictions = self(images) # Forward pass
loss = self.criterion(predictions, labels) # Compute the loss
predicted_classes = torch.argmax(F.softmax(predictions, dim=1), dim=1)
predictions_softmax = F.softmax(predictions, dim=1)
acc = self.accuracy(predictions_softmax, labels)
self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True)
self.log('train_acc', acc, on_step=True, on_epoch=True, prog_bar=True)
return loss # Returning the loss for backpropagation
# images, labels = batch
# predictions = self(images)
# loss = self.criterion(predictions, labels)
# predicted_classes = torch.argmax(F.softmax(predictions, dim=1), dim=1)
# predictions_softmax = F.softmax(predictions, dim=1)
# acc = self.accuracy(predictions_softmax, labels)
# real_step_acc = (labels == predicted_classes).sum() / self.batch_size
# self.log('test_loss', loss, prog_bar=True)
# self.log('real_test_acc', real_step_acc, prog_bar=True)
# self.log('test_acc', acc, prog_bar=True)
# return loss
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.parameters(), lr=self.learning_rate, momentum=0.9)
return optimizer
def train_dataloader(self):
# Set up and return the training DataLoader
filepath_train = "dataset/test/"
train_dataset = datasets.ImageFolder(root=filepath_train, transform=self.transform)
train_loader = DataLoader(train_dataset, batch_size=self.batch_size, shuffle=False, num_workers=16)
return train_loader
def test_dataloader(self):
# Set up and return the training DataLoader
filepath_train = "dataset/test/"
test_dataset = datasets.ImageFolder(root=filepath_train, transform=self.transform)
test_loader = DataLoader(test_dataset, batch_size=self.batch_size, shuffle=True, num_workers=16)
return test_loader
def val_dataloader(self):
# Set up and return the validation DataLoader
filepath_train = "dataset/val/"
val_dataset = datasets.ImageFolder(root=filepath_train, transform=self.test_transform)
val_loader = DataLoader(val_dataset, batch_size=self.batch_size, shuffle=False, num_workers=16)
return val_loader
```
Output is like this:
```
train_acc_epoch 0.7635096907615662
real_test_acc   0.7901701927185059
test_acc        0.39825108647346497
```
Real test accuracy I compute like this:
```
predictions_softmax = F.softmax(predictions, dim=1)
acc = self.accuracy(predictions_softmax, labels)
real_step_acc = (labels == predicted_classes).sum() / self.batch_size
```
So the problem is:
When I run testing, the test accuracy reported inside the `test_step` method is 40%, but the real test accuracy I compute myself is 80-85%. What I tried: when I enable shuffling on the test data (I know it is bad practice, but it was part of the debugging), the `torchmetrics` accuracy becomes correct! It outputs 80-85% accuracy.
So why does shuffling change things? I think it might be some kind of bug, or maybe I have an issue somewhere.
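To isolate the metric from the model, here is a minimal sketch (assuming a torchmetrics >= 0.11 API) that feeds identical predictions both to `torchmetrics.Accuracy` and to a manual computation, over class-sorted batches like an unshuffled `ImageFolder` yields them:
```python
import torch
import torchmetrics

num_classes = 10
metric = torchmetrics.Accuracy(task="multiclass", num_classes=num_classes)

# class-sorted labels, as ImageFolder yields them without shuffling
labels = torch.arange(num_classes).repeat_interleave(100)
logits = torch.randn(len(labels), num_classes)
logits[torch.arange(len(labels)), labels] += 2.0  # make most predictions correct

manual_correct = 0
for i in range(0, len(labels), 32):  # batch loop mimicking test_step
    batch_logits, batch_labels = logits[i:i + 32], labels[i:i + 32]
    metric.update(batch_logits.softmax(dim=1), batch_labels)
    manual_correct += (batch_logits.argmax(dim=1) == batch_labels).sum().item()

print("torchmetrics:", metric.compute().item())
print("manual:", manual_correct / len(labels))
```
If the two numbers agree here but diverge in the LightningModule, the discrepancy would be in how the metric is logged and reset across the epoch rather than in the metric itself.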
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
```
### Environment
<details>
<summary>Current environment</summary>
```
```
</details>
### More info
_No response_ | closed | 2024-05-01T20:00:33Z | 2024-05-01T22:09:08Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19838 | [
"bug",
"needs triage"
] | DimaNarepeha | 1 |
youfou/wxpy | api | 233 | How to send_location with a custom POI | Is it possible to use send_location from a bot? | open | 2017-11-25T06:03:46Z | 2017-11-25T06:03:46Z | https://github.com/youfou/wxpy/issues/233 | [] | lecheel | 0 |
trevorstephens/gplearn | scikit-learn | 151 | Remove dependency on scikit-learn's six | We don't support Python 2 any more so can remove this anyhow. | closed | 2019-04-22T05:53:27Z | 2019-04-23T03:47:23Z | https://github.com/trevorstephens/gplearn/issues/151 | [
"dependencies"
] | trevorstephens | 2 |
chatanywhere/GPT_API_free | api | 173 | What the heck, even the demo throws errors |
| closed | 2024-01-12T07:20:49Z | 2024-01-12T11:17:58Z | https://github.com/chatanywhere/GPT_API_free/issues/173 | [] | jqsl2012 | 1 |
biolab/orange3 | numpy | 6,391 | Violin plot graph export after new widget-base | After biolab/orange-widget-base/pull/208 was merged Violin plot graph export does not work.
The problem should be looked at.
Also, we should add a widget test that executes graph saving for all widgets which allow that. Just to see if anything crashes... | closed | 2023-04-03T10:23:23Z | 2023-04-14T08:33:31Z | https://github.com/biolab/orange3/issues/6391 | [
"bug"
] | markotoplak | 2 |
tableau/server-client-python | rest-api | 1,373 | [Type 1] Implement Tableau Cloud-specific requests for the Subscriptions endpoint | ## Description:
The Subscriptions endpoint works somewhat differently on Tableau Cloud and Tableau Server: for Tableau Cloud the subscription schedule must be defined as part of the request, whereas Tableau Server expects a schedule id. As of now, TSC only supports the Server request format. This feature would implement the Tableau Cloud request format alongside the Tableau Server one. Subscriptions REST API documentation: https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref_subscriptions.htm#tableau-cloud-request
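For illustration, a hypothetical sketch of the two request styles (the inline Cloud `schedule` below does not exist in TSC today; names are invented for this proposal, and the `SubscriptionItem` signature is abbreviated):
```python
import tableauserverclient as TSC

target = TSC.Target("view-luid", "view")

# Tableau Server today: reference an existing schedule by id
server_sub = TSC.SubscriptionItem("Weekly KPI digest", "schedule-luid", "user-luid", target)

# Tableau Cloud (proposed): define the schedule inline instead
cloud_sub = TSC.SubscriptionItem("Weekly KPI digest", None, "user-luid", target)
cloud_sub.schedule = {  # hypothetical field for this proposal
    "frequency": "Weekly",
    "frequencyDetails": {
        "start": "07:00:00",
        "intervals": [{"weekDay": "Monday"}],
    },
}
```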
A "quick-and-dirty" implementation could allow the user to specify in the SubscriptionItem definition that instead of schedule_id, they'd like to set all the Tableau Cloud-specific fields. However, if it is expected that more API methods will have Tableau Server & Cloud versions, it could be beneficial to automatically detect Tableau Cloud vs Tableau Server during the construction of the Server object and pick the correct endpoint specs accordingly. TSC doesn't currently seem to have a way to distinguish between requests made to Tableau Cloud & Tableau Server, so this would need to be added first, potentially by checking the server URL for (online.tableau.com). | open | 2024-05-15T11:29:48Z | 2024-12-14T19:48:24Z | https://github.com/tableau/server-client-python/issues/1373 | [
"enhancement",
"needs investigation"
] | zozi0406 | 1 |
littlecodersh/ItChat | api | 237 | 1205 | When I send an image to a group chat, why do I keep getting error 1205? | closed | 2017-02-21T01:58:46Z | 2017-06-14T05:51:23Z | https://github.com/littlecodersh/ItChat/issues/237 | [
"invalid"
] | sunfanteng | 3 |
ultralytics/ultralytics | python | 18,792 | About the problem of falling mAP when learning YOLOV8 | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
With the help of answers to previous questions, I finished training the YOLOv8x.pt model with batch=8 and imgsz=1920. Looking at the training results, the mAP started at 80, rose to 87.2, and then finally dropped to 56. Should I conclude that the training went poorly in this case?
### Additional
_No response_ | open | 2025-01-21T07:36:55Z | 2025-01-21T15:37:41Z | https://github.com/ultralytics/ultralytics/issues/18792 | [
"question",
"detect"
] | B1ackPrince | 2 |
zappa/Zappa | flask | 1,101 | Unable to deploy a project with Werkzeug >= 2.0 |
## Context
After creating a new virtual environment and installing my project dependencies, including Zappa 0.54.1, I am no longer able to deploy my project.
My Django project does not use Werkzeug, but Werkzeug 2.0.2 gets installed by Zappa. After downgrading to Werkzeug<2.0.0, I am able to deploy my project again.
Updating Zappa to 0.54.1 from an older version that installed Werkzeug 1.0.1 still works because the version of Werkzeug is left unchanged.
I have confirmed this behavior with both Python 3.6 and Python 3.8 and with MacOS 10.15.7 and MacOS 12.1.
## Expected Behavior
Your updated Zappa deployment is live!:
## Actual Behavior
Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 502 response code.
Digging into the Cloud Watch logs, I see the error as described in #1087.
## Possible Fix
Specify a specific version of Werkzeug in Zappa dependencies. Werkzeug 1.0.1 works for me.
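Until then, the pin I am using in my own requirements file as a workaround:
```
# requirements.txt (workaround sketch): keep pip on a 1.x Werkzeug release
Werkzeug<2.0
```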
## Steps to Reproduce
Install Zappa 0.54.1 into a new virtual environment. Attempt to deploy your project.
| closed | 2022-01-06T20:43:58Z | 2022-01-27T19:36:28Z | https://github.com/zappa/Zappa/issues/1101 | [] | rjschave | 2 |
jupyter/nbgrader | jupyter | 1,846 | Validate button shows incomprehensible SyntaxError when the solution trips the timeout limit | ### Operating system
Arch Linux
### `nbgrader --version`
```
Python version 3.11.5 (main, Sep 2 2023, 14:16:33) [GCC 13.2.1 20230801]
nbgrader version 0.9.1
```
### `jupyterhub --version` (if used with JupyterHub)
4.0.2
### `jupyter notebook --version`
7.0.6
### Expected behavior
Nbgrader should not display a SyntaxError when some internal operation fails.
### Actual behavior
When a student tries to validate an assignment that gets stuck due to an infinite loop, nbgrader shows this error:
```
Validation failed
Cannot validate: SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data
```

### Steps to reproduce the behavior
Create an assignment with an infinite loop and click on "Validate":
```
while True:
pass
``` | open | 2023-11-22T11:12:33Z | 2024-04-09T07:21:58Z | https://github.com/jupyter/nbgrader/issues/1846 | [
"bug",
"needs info"
] | lahwaacz | 1 |
gevent/gevent | asyncio | 1,647 | Requests to the pywsgi WSGIServer are still queued, not handled in parallel | * gevent version: 20.6.2
* Python version: Please be as specific as possible: "pyenv Python3.7.5 downloaded from python.org"
* Operating System: Please be as specific as possible: "Centos7.4"
### Description:
Requests to the pywsgi WSGIServer are still processed one at a time instead of in parallel.
1. I use WSGIServer to wrap a flask web server, and I put this server in a Process.
2. I create this Process in a thread.
3. As you recommend, I put this code before I import everything, but requests are still processed one by one. How can I solve this?
```
from gevent import monkey
monkey.patch_all()
```
The main code for this web server is below:
```python
from multiprocessing import Process  # imports added for completeness

from gevent.pywsgi import WSGIServer


class JudgeHTTPProxy(Process):
    def create_flask_app(self):
        try:
            from flask import Flask, request
            from flask_compress import Compress
            from flask_cors import CORS
            from flask_json import FlaskJSON, as_json, JsonError
        except ImportError:
            raise ImportError('Flask or its dependencies are not fully installed, '
                              'they are required for serving HTTP requests. '
                              'Please use "pip install -U flask flask-compress flask-cors flask-json" to install it.')

        client = ConcurrentJudgeClient(self.args)  # author's own client class
        app = Flask(__name__)
        CORS(app)
        FlaskJSON(app)

        @app.route(self.args.url, methods=['POST'])
        def _judge():
            # ... request-handling logic elided in the original report ...
            pass

        return app

    def run(self):
        app = self.create_flask_app()
        server = WSGIServer(('0.0.0.0', self.args.http_port), app, log=None)
        server.serve_forever()
```
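For reference, a minimal self-contained sketch (assuming only gevent and flask are installed) that I would expect to serve overlapping requests; if two parallel requests to `/slow` still serialize, the problem reproduces even without the Process/thread wrapping:
```python
from gevent import monkey
monkey.patch_all()

import time

from flask import Flask
from gevent.pywsgi import WSGIServer

app = Flask(__name__)

@app.route("/slow")
def slow():
    time.sleep(5)  # cooperative sleep after monkey-patching
    return "done"

if __name__ == "__main__":
    WSGIServer(("0.0.0.0", 8000), app, log=None).serve_forever()
```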
### What I've run:
In `main.py` I create a thread, and the thread creates this Process. I run it with:
```bash
python main.py
```
| closed | 2020-06-19T06:14:05Z | 2020-06-19T08:12:23Z | https://github.com/gevent/gevent/issues/1647 | [] | xiongma | 1 |
recommenders-team/recommenders | machine-learning | 1,342 | Cannot replicate LSTUR results for MIND large test | Hello, I cannot replicate the results of the LSTUR model on the MIND test set. I used the provided scripts to generate `embedding.npy`, `word_dict.pkl` and `uid2index.pkl` for the test set, because they are not provided with MINDlarge_utils.zip.
I use the last lines of code in lstur_MIND.ipynb to make predictions on the test set, but the metric results on validation and test are very different.
For example, I obtained
`group_auc: 0.65, mean_mrr: 0.31, ndcg@5: 0.34, ndcg@10: 0.40` in validation and `auc: 0.5075, mrr: 0.2259, ndcg@5: 0.2309, nDCG@10: 0.2868` in test set, with the model trained for 10 epochs. | closed | 2021-03-11T21:03:05Z | 2021-04-19T09:07:02Z | https://github.com/recommenders-team/recommenders/issues/1342 | [] | albertobezzon | 3 |
unit8co/darts | data-science | 2,675 | [QUESTION] How can I set num_workers in the underlying torch module? | When running the `score` function from a `ForecastingAnomalyModel` I am getting this warning:
```
[python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:425): The 'predict_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=15` in the `DataLoader` to improve performance.
```
It seems linked to PyTorch Lightning; is there any way I can pass the `num_workers` argument?
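For reference, a sketch of what I am looking for (assumption: darts exposes a `num_loader_workers` argument on TorchForecastingModel `fit`/`predict` that maps to the DataLoader's `num_workers`; I could not find the equivalent for `ForecastingAnomalyModel.score`):
```python
import numpy as np
from darts import TimeSeries
from darts.models import NBEATSModel

series = TimeSeries.from_values(np.random.randn(200).astype(np.float32))
model = NBEATSModel(input_chunk_length=24, output_chunk_length=12, n_epochs=1)
model.fit(series, num_loader_workers=4)               # training DataLoader workers
forecast = model.predict(n=12, num_loader_workers=4)  # prediction DataLoader workers
```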
| closed | 2025-02-14T14:27:51Z | 2025-02-17T12:32:21Z | https://github.com/unit8co/darts/issues/2675 | [
"question"
] | mcapuccini | 1 |
biolab/orange3 | pandas | 6,010 | Dragging a data file to canvas should retain file history in the File widget | **What's wrong?**
A nice feature of Orange is the shortcut for opening data files by dragging a file onto the Orange canvas. This places a File widget on the canvas and sets the file name accordingly. The only problem with this feature is that it empties all the file history that the File widget keeps, including the initial history with the files that ship with Orange. Especially when using Orange in hands-on workshops, losing the preloaded file history does not help.
**How can we reproduce the problem?**
Open Orange and drag any excel file to the Canvas.
**Proposal for solution**
File widget should open the dragged file, but also keep the file history.
**Comment**
Perhaps this is not the bug, but rather an implementational feature, and if, treat this issue as feature request. | closed | 2022-06-09T04:01:52Z | 2022-09-16T12:46:23Z | https://github.com/biolab/orange3/issues/6010 | [
"bug"
] | BlazZupan | 0 |
0b01001001/spectree | pydantic | 63 | [BUG] description for query parameters does not show in Swagger UI | Hi, when I add a description for a schema used in a query, it does not show in Swagger UI but does show in Redoc:
```py
@HELLO.route('/', methods=['GET'])
@api.validate(query=HelloForm)
def hello():
    """
    hello comment
    :return:
    """
    return 'ok'


class HelloForm(BaseModel):
    """
    hello form
    """
    user: str  # user name
    msg: str = Field(description='msg test', example='aa')
    index: int
    data: HelloGetListForm
    list: List[HelloListForm]
```


| closed | 2020-10-12T11:55:44Z | 2020-10-12T13:31:48Z | https://github.com/0b01001001/spectree/issues/63 | [
"bug"
] | csy18 | 1 |
lexiforest/curl_cffi | web-scraping | 85 | Would it be possible to port this approach to Java's HttpClient? | closed | 2023-07-19T07:02:07Z | 2023-07-20T03:07:59Z | https://github.com/lexiforest/curl_cffi/issues/85 | [] | WTFWITHTHISNAMESHIT | 6 |
|
dynaconf/dynaconf | fastapi | 261 | [RFC] Method or property listing all defined environments | **Is your feature request related to a problem? Please describe.**
I'm trying to build an argparse argument that offers the available environments as choices. But I don't see any way to get this list at the moment.
**Describe the solution you'd like**
I am proposing 2 features closely related to help with environment choices as a list and to validate that the environment was defined (not just that it is used with defaults or globals).
The first would be a way to get a list of defined environments minus `default` and global. This would make it easy to add to argparse as an argument to choices. I imagine a method or property such as `settings.available_environments` or `settings.defined_environments`.
The second feature would be a method to check if the environment is defined in settings. This could be used for checks in cases you don't use argparse or want to avoid selecting a non-existent environment. Maybe `settings.is_defined_environment('qa')` or similar.
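A hypothetical usage sketch of the two proposed names (neither `available_environments` nor `is_defined_environment` exists in Dynaconf today):
```python
import argparse

from dynaconf import Dynaconf

settings = Dynaconf(settings_files=["settings.toml"], environments=True)

parser = argparse.ArgumentParser()
parser.add_argument("--env", choices=settings.available_environments)  # proposed
args = parser.parse_args()

assert settings.is_defined_environment(args.env)  # proposed
```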
**Describe alternatives you've considered**
I'm currently parsing my settings file keys outside of Dynaconf and discarding `default` and `global`. But this feels hacky.
**Additional context**
Since the environment is lazy loaded I wonder if this would be considered too expensive to do at load time. Maybe it makes sense as a utility outside of the `settings` object? Maybe there is a good way to do this without the feature? Maybe I shouldn't be doing this at all? :thinking:
| open | 2019-11-14T05:46:51Z | 2024-02-05T21:17:08Z | https://github.com/dynaconf/dynaconf/issues/261 | [
"hacktoberfest",
"Not a Bug",
"RFC"
] | andyshinn | 4 |
yzhao062/pyod | data-science | 470 | a universal feature importance analysis | I wanted to conduct feature importance analysis, but found that many models do not provide feature importance methods, except IForest and XGBOD. | open | 2023-01-08T07:42:30Z | 2023-02-07T07:52:45Z | https://github.com/yzhao062/pyod/issues/470 | [] | YangR14ustc | 2 |
desec-io/desec-stack | rest-api | 198 | empty ${DESECSTACK_API_PSL_RESOLVER} breaks POSTing domains | Setting `${DESECSTACK_API_PSL_RESOLVER}` to empty (or not setting it at all) in `.env` will result in a 30s delay when posting to `api/v1/domains` endpoint, then raise a timeout exception, which results in a 500 error.
Call stack:
```
api_1 | Internal Server Error: /api/v1/domains/
api_1 | Traceback (most recent call last):
api_1 | File "/usr/local/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
api_1 | response = get_response(request)
api_1 | File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
api_1 | response = self.process_exception_by_middleware(e, request)
api_1 | File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
api_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)
api_1 | File "/usr/local/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
api_1 | return view_func(*args, **kwargs)
api_1 | File "/usr/local/lib/python3.7/site-packages/django/views/generic/base.py", line 71, in view
api_1 | return self.dispatch(request, *args, **kwargs)
api_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 495, in dispatch
api_1 | response = self.handle_exception(exc)
api_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 455, in handle_exception
api_1 | self.raise_uncaught_exception(exc)
api_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 492, in dispatch
api_1 | response = handler(request, *args, **kwargs)
api_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/generics.py", line 244, in post
api_1 | return self.create(request, *args, **kwargs)
api_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/mixins.py", line 21, in create
api_1 | self.perform_create(serializer)
api_1 | File "./desecapi/views.py", line 119, in perform_create
api_1 | public_suffix = self.psl.get_public_suffix(domain_name)
api_1 | File "/usr/local/lib/python3.7/site-packages/psl_dns/querier.py", line 42, in get_public_suffix
api_1 | public_suffix = self._get_public_suffix_raw(domain)
api_1 | File "/usr/local/lib/python3.7/site-packages/psl_dns/querier.py", line 30, in _get_public_suffix_raw
api_1 | answer = self.query(domain, dns.rdatatype.PTR)
api_1 | File "/usr/local/lib/python3.7/site-packages/psl_dns/querier.py", line 93, in query
api_1 | answer = self.resolver.query(qname, rdatatype, lifetime=self.timeout)
api_1 | File "/usr/local/lib/python3.7/site-packages/dns/resolver.py", line 992, in query
api_1 | timeout = self._compute_timeout(start, lifetime)
api_1 | File "/usr/local/lib/python3.7/site-packages/dns/resolver.py", line 799, in _compute_timeout
api_1 | raise Timeout(timeout=duration)
api_1 | dns.exception.Timeout: The DNS operation timed out after 30.001466035842896 seconds
api_1 | [pid: 250|app: 0|req: 1/1] 172.16.0.1 () {44 vars in 629 bytes} [Thu May 30 17:31:09 2019] POST /api/v1/domains/ => generated 14294 bytes in 30219 msecs (HTTP/1.1 500) 2 headers in 102 bytes (1 switches on core 0)
```
Expected behavior: according to the README, use the system's resolver. (I confirmed that the resolver works in my setup; however, Wireshark did not show any outgoing DNS query after trying to post a domain.)
Steps to reproduce: clean master, clean builds, empty database, unset psl resolver (obviously). Then post to the domains endpoint.
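As a stop-gap, the one-line `.env` setting for the workaround mentioned below:
```
DESECSTACK_API_PSL_RESOLVER=9.9.9.9
```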
Workaround: set it to 9.9.9.9 or competitors. | closed | 2019-05-30T17:38:03Z | 2020-02-25T18:05:08Z | https://github.com/desec-io/desec-stack/issues/198 | [
"bug",
"api",
"prio: low"
] | nils-wisiol | 2 |
vanna-ai/vanna | data-visualization | 548 | Add support for additional options when connecting to a database. | **Is your feature request related to a problem? Please describe.**
Unable to pass driver parameters through `connect_to_<database>` (e.g. the psycopg2/Postgres `connect_timeout`).
**Describe the solution you'd like**
Add support for all parameters a database may support.
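A hypothetical sketch of the call shape this would enable (`vn` is a configured Vanna instance; the extra keywords below are not forwarded by `connect_to_postgres` today):
```python
# Proposed: any additional kwargs are forwarded verbatim to psycopg2.connect()
vn.connect_to_postgres(
    host="localhost",
    dbname="mydb",
    user="me",
    password="secret",
    port=5432,
    connect_timeout=10,  # proposed pass-through (libpq keyword)
    sslmode="require",   # proposed pass-through (libpq keyword)
)
```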
**Describe alternatives you've considered**
None that I can think of.
**Related**
https://github.com/vanna-ai/vanna/issues/541
https://github.com/vanna-ai/vanna/issues/542
https://github.com/vanna-ai/vanna/issues/475
https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
| closed | 2024-07-11T09:44:30Z | 2024-07-25T18:56:11Z | https://github.com/vanna-ai/vanna/issues/548 | [] | pygeek | 0 |
plotly/dash-table | plotly | 299 | show page numbers for pagination |
| closed | 2018-12-13T16:11:36Z | 2019-10-04T18:51:06Z | https://github.com/plotly/dash-table/issues/299 | [
"dash-type-enhancement",
"size: 2"
] | bwang2453 | 5 |
proplot-dev/proplot | data-visualization | 452 | Migrate proplot repo to be housed under another open-source development group? | I'm wondering if the `proplot` repo here could be moved to another organization, e.g. https://github.com/matplotlib or https://github.com/pangeo-data or elsewhere that it would fit.
This wonderful package now has > 1,000 stars and a lot of passionate users, but no releases or commits have been posted in 9-12 months. This is causing incompatibility issues with latest versions of core packages. I think there's a lot of eager folks submitting issues and PRs that would help to maintain a community-based version of this package! I certainly don't want to rewrite my stack to exclude `proplot`, as it has been immensely helpful in my work.
I know @lukelbd is busy with a postdoc. I'm wondering if you're open to this idea! | open | 2024-03-05T21:02:18Z | 2024-08-18T16:53:03Z | https://github.com/proplot-dev/proplot/issues/452 | [] | riley-brady | 6 |
dask/dask | scikit-learn | 10,887 | max number of tasks per dask worker |
I am using `SGECluster` to submit thousands of tasks to dask workers. I would like to request a feature to specify the maximum number of tasks per worker, to improve cluster usage. For example, if it takes 4 hours to process a task and the wall-time limit for a worker is set to 5 hours (so that a single task can run to completion, and a misbehaving compute node times out after 5 hours), then with the current dask configuration each worker wastes an hour starting a second task, which eventually gets killed and resubmitted to another worker. This wastes cluster resources. So, is it possible to specify a maximum number of tasks `X` handled by each dask worker? Once a worker finishes handling `X` tasks (whatever their final status), the worker (SGE job) would automatically be shut down so we don't waste computing resources.
I wish for a similar feature for SLURMCluster as well, and would appreciate alternative workarounds.
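One time-based alternative I am aware of (a hedged sketch: distributed workers accept a `--lifetime` flag after which they retire cleanly, and the kwarg forwarding extra worker arguments has been renamed across dask-jobqueue versions, e.g. `extra` vs. `worker_extra_args`):
```python
from dask_jobqueue import SGECluster

cluster = SGECluster(
    cores=4,
    memory="16GB",
    walltime="05:00:00",
    # retire each worker before the 5-hour wall time so no task is cut off
    worker_extra_args=["--lifetime", "4h30m", "--lifetime-stagger", "5m"],
)
```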
| closed | 2024-02-05T03:27:14Z | 2024-02-05T03:35:16Z | https://github.com/dask/dask/issues/10887 | [
"needs triage"
] | llodds | 1 |
flavors/django-graphql-jwt | graphql | 124 | Circular dependency of settings and graphql_jwt | graphql_jwt requires the settings SECRET_KEY. But because of a circular dependency, the secret key is not set. If graphql_jwt is imported after SECRET_KEY in settings.py, everything works fine. | closed | 2019-08-16T08:10:04Z | 2019-08-16T08:21:59Z | https://github.com/flavors/django-graphql-jwt/issues/124 | [] | a-c-sreedhar-reddy | 0 |
kornia/kornia | computer-vision | 2,301 | `NotImplementedError` for elastic transformation with probability p < 1 | ### Describe the bug
With the newest kornia release (0.6.11), the random elastic transformation fails if it is not applied to every image in the batch.
The problem is that the `apply_non_transform_mask()` method in `_AugmentationBase` per default raises an `NotImplementedError` and since this method is not overwritten in `RandomElasticTransform`, the error is raised. I see that for the other `apply_non*` methods the default is to just return the input.
I see two different solutions:
1. Change the default for `apply_non_transform_mask` to return the input in `_AugmentationBase`.
2. Overwrite the method in `RandomElasticTransform` and just return the input there.
There might be good reasons to keep the `NotImplementedError` in the base class, therefore I wanted to ask first what solution you prefer. I could make a PR for this.
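For concreteness, a minimal sketch of solution 2 (the hook signature is assumed to match the other `apply_non_*` methods in kornia 0.6.x and may differ across versions):
```python
from kornia.augmentation import RandomElasticTransform

class PatchedRandomElasticTransform(RandomElasticTransform):
    def apply_non_transform_mask(self, input, params, flags, transform=None):
        # masks of non-transformed batch elements pass through unchanged
        return input
```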
### Reproduction steps
```python
import torch
import kornia.augmentation as K
features = torch.rand(5, 100, 480, 640, dtype=torch.float32, device="cuda")
labels = torch.randint(0, 10, (5, 1, 480, 640), dtype=torch.int64, device="cuda")
torch.manual_seed(0)
aug = K.AugmentationSequential(
K.RandomElasticTransform(alpha=(0.7, 0.7), sigma=(16, 16), padding_mode="reflection", p=0.2)
)
features_transformed, labels_transformed = aug(features, labels.float(), data_keys=["input", "mask"])
```
### Expected behavior
No `NotImplementedError`.
### Environment
```shell
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0): 2.0
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.10.9
- CUDA/cuDNN version: 11.8
- GPU models and configuration: 3090
- Any other relevant information:
"help wanted"
] | JanSellner | 1 |
pytorch/vision | machine-learning | 8,909 | Setting a list of one or two `float` values to `kernel_size` argument of `GaussianBlur()` gets an indirect error message | ### 🐛 Describe the bug
Setting a list of one or two `float` values to `kernel_size` argument of `GaussianBlur()` gets the indirect error message as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import GaussianBlur
my_data1 = OxfordIIITPet(
root="data", # ↓↓↓↓↓
transform=GaussianBlur(kernel_size=[3.4])
)
my_data2 = OxfordIIITPet(
root="data", # ↓↓↓↓↓↓↓↓↓↓
transform=GaussianBlur(kernel_size=[3.4, 3.4])
)
my_data1[0] # Error
my_data2[0] # Error
```
```
TypeError: linspace() received an invalid combination of arguments - got (float, float, steps=float, device=torch.device, dtype=torch.dtype), but expected one of:
* (Tensor start, Tensor end, int steps, *, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (Number start, Tensor end, int steps, *, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (Tensor start, Number end, int steps, *, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (Number start, Number end, int steps, *, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
```
So the error message should be something direct like below:
> TypeError: `kernel_size` argument must be `int`
In addition, setting a `float` value to `kernel_size` argument of `GaussianBlur()` works as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import GaussianBlur
my_data = OxfordIIITPet(
root="data", # ↓↓↓
transform=GaussianBlur(kernel_size=3.4)
)
my_data[0]
# (<PIL.Image.Image image mode=RGB size=394x500>, 0)
```
### Versions
```python
import torchvision
torchvision.__version__ # '0.20.1'
``` | closed | 2025-02-16T12:03:21Z | 2025-02-19T13:46:31Z | https://github.com/pytorch/vision/issues/8909 | [] | hyperkai | 1 |
milesmcc/shynet | django | 274 | Missing Docker image for version 0.13.0 | Hi, I just wanted to upgrade to the new Shynet version which was released a couple of days ago. On Docker Hub, this version is missing. The only tag that was updated is the `edge` one, but `latest` is still the version from 2 years ago.
I am not sure what the `edge` version is, but I am afraid to change my production environment to it without any information. | closed | 2023-07-23T07:54:23Z | 2023-07-28T07:44:03Z | https://github.com/milesmcc/shynet/issues/274 | [] | Kovah | 5 |
gee-community/geemap | streamlit | 1,226 | There is shift in X and Y direction of 1 pixel while downloading data using geemap.download_ee_image() | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
Please run the following code on your computer and share the output with us so that we can better debug your issue:
```python
import geemap
geemap.Report()
```
### Description
I am trying to download NASADEM data in EPSG:4326 coordinate system using geemap.download_ee_image(), but the downloaded data has pixel shift both in X and Y direction. The reason of error is due to the absence of crs transformation parameter.
The geemap.ee_export_image() gives correct output, but has a limitation on downloadable data. I am looking for a solution to download large image as 1 tile.
### What I Did
```
#!/usr/bin/env python
# coding: utf-8
# In[14]:
import ee,geemap,os
ee.Initialize()
# In[15]:
# NASADEM Digital Elevation 30m - version 001
elevdata=ee.Image("NASA/NASADEM_HGT/001").select('elevation')
# In[16]:
spatial_resolution_m=elevdata.projection().nominalScale().getInfo()
print(spatial_resolution_m)
# In[17]:
Map = geemap.Map()
Map
# In[23]:
# Draw any shape on the map using the Drawing tools before executing this code block
AOI=Map.user_roi
# In[21]:
print(elevdata.projection().getInfo())
# In[29]:
# geemap.ee_export_image(
# elevdata,
# r'C:\Users\rbapna\Downloads\nasadem_ee_export_image4.tif',
# scale=spatial_resolution_m,
# crs=elevdata.projection().getInfo()['crs'],
# crs_transform=elevdata.projection().getInfo()['transform'],
# region=AOI,
# dimensions=None,
# file_per_band=False,
# format='ZIPPED_GEO_TIFF',
# timeout=300,
# proxies=None,
# )
geemap.download_ee_image(
elevdata,
r'C:\Users\rbapna\Downloads\nasadem5.tif',
region=AOI,
crs=elevdata.projection().getInfo()['crs'],
scale=spatial_resolution_m,
resampling=None,
dtype='int16',
overwrite=True,
num_threads=None
)
```
| closed | 2022-08-26T11:55:52Z | 2022-08-30T16:02:18Z | https://github.com/gee-community/geemap/issues/1226 | [
"bug"
] | ravishbapna | 9 |
pytest-dev/pytest-xdist | pytest | 943 | Question: How to get collected tests by worker | I use `loadgroup`, `-n=8` and add mark `xdist_group("groupname")`. Can I just collect tests by workers? I want to see how pytest-xdist distribute tests by group. | open | 2023-08-24T05:31:59Z | 2023-08-24T05:31:59Z | https://github.com/pytest-dev/pytest-xdist/issues/943 | [] | alexterent | 0 |
LAION-AI/Open-Assistant | python | 3,268 | Planning OA v1.0 | This is a call for all OA collaborators to participate in planning the work of the next 8-12 weeks with the goal to release Open-Assistant v1.0.
Mission: Deliver a great open-source assistant model together with stand-alone installable inference infrastructure.
Release date (tentative): Aug 2023
## Organization
- [x] schedule call to collect collaborator feedback and ask for developer participation/commitment
- [ ] update vision & roadmap for v 1.0
- [x] schedule weekly developer meeting
## Feature set proposal (preliminary)
### Model
- fine-tune best available base LLMs (currently LLaMA 65B & Falcon 40B) ([QLoRA](https://arxiv.org/abs/2305.14314))
- implement long context (10k+), candidates: QLoRA+MQA+flash-attn, [BPT](https://arxiv.org/abs/2305.19370), [Landmark Attention](https://arxiv.org/abs/2305.16300)
- add retrieval/tool-use, candidate: [Toolformer](https://arxiv.org/abs/2302.04761)
### Inference system
- prompt preset + prompt database
- sharing of conversations via URL
- support for long-context & tool use
- stand-alone installation (without feedback collection system)
- allow editing of assistant results and message-tree submission as synthetic example for dataset for human labeling and ranking
### Classic human feedback collection
- editing messages for moderators, submit edit-proposals for users
- entering prompt + reply pairs
- collecting relevant links in a separate input field
- improve labeling: review, more guidelines, addition of further labels (e.g. robotic), labels no longer optional
### Experiments
- Analyze whether additional fine-tuning on (synthetic) instruction datasets (Alpaca, Vicuna) is beneficial or harmful: Only OA top-1 threads (Guanaco) vs. synthetic instruction-tuning + OA top-1, potentially with system-prompt for "mode" selection to distinguish between chat and instruction following, e.g. to use instruction mode for plugin processing
## Perspective strategy (brain-storming)
- Sunsetting of classic data collection after OASST2 release and transitioning towards semi-automated inference based data collection
- Extending data collection to new domains, give users more freedom in task selection, e.g. for Code: describing code, refactoring, writing unit tests, etc.
Please add further proposals for high-priority features and try to make a case for why they are important and should become part of v1.0. If you are a developer who wants to support OA: Let us know on what you would like to work (also if it is not yet part of the above list).
| closed | 2023-05-31T15:07:09Z | 2024-01-10T12:16:20Z | https://github.com/LAION-AI/Open-Assistant/issues/3268 | [] | andreaskoepf | 15 |
ageitgey/face_recognition | machine-learning | 774 | Missing Argument "IMAGE_TO_CHECK" | * face_recognition version:
* Python version: 3.4
* Operating System: WINDOWS 10
### Description
### What I Did
```
```
| open | 2019-03-15T07:12:35Z | 2019-08-07T12:40:15Z | https://github.com/ageitgey/face_recognition/issues/774 | [] | jainna | 2 |
holoviz/panel | plotly | 6,923 | FileInput default to higher websocket_max_message_size? | Currently, the default is 20 MBs, but this is pretty small for most use cases.
If it exceeds the 20 MBs, it silently disconnects the websocket (at least in notebook; when serving, it does show `2024-06-14 11:39:36,766 WebSocket connection closed: code=None, reason=None`). This leaves the user confused as to why nothing is happening (perhaps a separate issue).
Is there a good reason why the default is 20 MBs, or can we make it larger?
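For reference, the workaround I am aware of today (assumption: `pn.serve` forwards `websocket_max_message_size` to the underlying Bokeh/Tornado server):
```python
import panel as pn

file_input = pn.widgets.FileInput()
pn.serve(file_input, websocket_max_message_size=100 * 1024 * 1024)  # 100 MB
```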
For reference:
https://discourse.holoviz.org/t/file-upload-is-uploading-the-file-but-the-value-is-always-none/7268/7 | closed | 2024-06-14T18:59:54Z | 2024-06-25T11:23:18Z | https://github.com/holoviz/panel/issues/6923 | [
"wontfix",
"type: discussion"
] | ahuang11 | 1 |
pyjanitor-devs/pyjanitor | pandas | 1,045 | Deprecate functions ? | Central point to discuss functions to deprecate, if any?
- [x] `process_text` - `transform_columns` covers this very well
- [x] `impute` vs `fill_empty` - `impute` has the advantage of extra statistics functions (mean, mode, ...)
- [x] `rename_columns` - use pandas `rename`
- [x] `rename_column` - use `pd.rename`
- [x] `remove_columns` - use `pd.drop` or `select`
- [x] `filter_on` - use `query` or `select`
- [x] `fill_direction` - use `transform_columns` or `pd.DataFrame.assign`
- [x] `groupby_agg` - use `transform_columns` - once `by` is implemented
- [x] `then` - use `pd.DataFrame.pipe`
- [x] `to_datetime` - use `jn.transform_columns`
- [x] `pivot_wider` - use `pd.DataFrame.pivot` | open | 2022-03-17T23:20:07Z | 2024-04-21T14:36:28Z | https://github.com/pyjanitor-devs/pyjanitor/issues/1045 | [] | samukweku | 7 |
aiortc/aiortc | asyncio | 587 | [INFO] Python bindings for libwebrtc and C++ library with signaling server | Hi,
I would like to let you know that we have implemented Python bindings for libwebrtc in the opentera-webrtc project on GitHub. We have also implemented a C++ client library, a Javascript library and a compatible signaling server.
I thought this might be useful to share some implementation and ideas, so here is the link:
[https://github.com/introlab/opentera-webrtc](https://github.com/introlab/opentera-webrtc)
Thanks for your project!
Best regards,
Dominic Letourneau (@doumdi)
IntRoLab - Intelligent / Interactive / Integrated / Interdisciplinary Robot Lab @ Université de Sherbrooke, Québec, Canada | closed | 2021-11-18T20:57:57Z | 2021-12-02T21:39:58Z | https://github.com/aiortc/aiortc/issues/587 | [] | doumdi | 1 |
gunthercox/ChatterBot | machine-learning | 1864 | Problem installing ChatterBot | Hi everyone,
I need your help: I'm having a problem installing ChatterBot.
I'm getting this error:
```
7\murmurhash":
running install
running build
running build_py
creating build
creating build\lib.win32-3.7
creating build\lib.win32-3.7\murmurhash
copying murmurhash\about.py -> build\lib.win32-3.7\murmurhash
copying murmurhash\__init__.py -> build\lib.win32-3.7\murmurhash
creating build\lib.win32-3.7\murmurhash\tests
copying murmurhash\tests\test_against_mmh3.py -> build\lib.win32-3.7\murmurhash\tests
copying murmurhash\tests\test_import.py -> build\lib.win32-3.7\murmurhash\tests
copying murmurhash\tests\__init__.py -> build\lib.win32-3.7\murmurhash\tests
copying murmurhash\mrmr.pyx -> build\lib.win32-3.7\murmurhash
copying murmurhash\mrmr.pxd -> build\lib.win32-3.7\murmurhash
copying murmurhash\__init__.pxd -> build\lib.win32-3.7\murmurhash
creating build\lib.win32-3.7\murmurhash\include
creating build\lib.win32-3.7\murmurhash\include\murmurhash
copying murmurhash\include\murmurhash\MurmurHash2.h -> build\lib.win32-3.7\murmurhash\include\murmurhash
copying murmurhash\include\murmurhash\MurmurHash3.h -> build\lib.win32-3.7\murmurhash\include\murmurhash
running build_ext
building 'murmurhash.mrmr' extension
creating build\temp.win32-3.7
creating build\temp.win32-3.7\Release
creating build\temp.win32-3.7\Release\murmurhash
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.23.28105\bin\HostX86\x86\cl.exe /c /nologo /Ox /W
3 /GL /DNDEBUG /MT "-IC:\Users\SEAN JONES\AppData\Local\Programs\Python\Python37-32\include" -IC:\Users\SEANJO~1\AppData\Local\Temp\pip
-install-fnip5dny\murmurhash\murmurhash\include "-IC:\Users\SEAN JONES\PycharmProjects\untitled1\venv\include" "-IC:\Users\SEAN JONES\A
ppData\Local\Programs\Python\Python37-32\include" "-IC:\Users\SEAN JONES\AppData\Local\Programs\Python\Python37-32\include" "-IC:\Progr
am Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.23.28105\include" /EHsc /Tpmurmurhash/mrmr.cpp /Fobuild\temp.wi
n32-3.7\Release\murmurhash/mrmr.obj /Ox /EHsc
mrmr.cpp
C:\Users\SEAN JONES\AppData\Local\Programs\Python\Python37-32\include\pyconfig.h(59): fatal error C1083: Cannot open include file
: 'io.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.23.28105\\bin\\HostX86\\x
86\\cl.exe' failed with exit status 2
----------------------------------------
Command ""C:\Users\SEAN JONES\PycharmProjects\untitled1\venv\Scripts\python.exe" -u -c "import setuptools, tokenize;__file__='C:\\Use
rs\\SEANJO~1\\AppData\\Local\\Temp\\pip-install-fnip5dny\\murmurhash\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read
().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\SEANJO~1\AppData\Local\Temp\pip-rec
ord-g0rpfhzu\install-record.txt --single-version-externally-managed --prefix C:\Users\SEANJO~1\AppData\Local\Temp\pip-build-env-7vv1qnz
f\overlay --compile --install-headers "C:\Users\SEAN JONES\PycharmProjects\untitled1\venv\include\site\python3.7\murmurhash"" failed wi
th error code 1 in C:\Users\SEANJO~1\AppData\Local\Temp\pip-install-fnip5dny\murmurhash\
----------------------------------------
Command ""C:\Users\SEAN JONES\PycharmProjects\untitled1\venv\Scripts\python.exe" "C:\Users\SEAN JONES\PycharmProjects\untitled1\venv\li
b\site-packages\pip-19.0.3-py3.7.egg\pip" install --ignore-installed --no-user --prefix C:\Users\SEANJO~1\AppData\Local\Temp\pip-build-
env-7vv1qnzf\overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel>0.32.0,<0.33.0 Cython cymem>=2.0.2,<2.1.0 preshed>=2
.0.1,<2.1.0 murmurhash>=0.28.0,<1.1.0 thinc>=7.0.8,<7.1.0" failed with error code 1 in None
```
Please help!! | closed | 2019-11-11T01:40:21Z | 2020-08-22T19:18:56Z | https://github.com/gunthercox/ChatterBot/issues/1864 | [] | Seanjones98 | 1 |
ivy-llc/ivy | pytorch | 28,366 | fix `lint` error in `adaptive_max_pool3d` | closed | 2024-02-21T10:34:25Z | 2024-02-21T14:22:14Z | https://github.com/ivy-llc/ivy/issues/28366 | [
"Sub Task"
] | samthakur587 | 0 |
|
iperov/DeepFaceLab | machine-learning | 5,476 | Excluding an XSeg obstruction requires marking the face again, even when the face is detected? | I just wanted to mark obstructions so training would ignore them. The faces are detected properly, so why should I mark the face again manually? This is very counterproductive. Can you change it so the automatically generated mask isn't discarded when I only add an obstruction mark to a properly detected image, with a face and something in front of the jaw that I marked?
Manual mode should be complementary to generic mode; they should not exclude one another like they currently do. Most of the time, generic works fine.
A manual fix/realignment step for source, like the manual fix we have for destination, would be nice as well.
The tools are nice, but they're quite cumbersome to use because of the odd masking workflow. You have a great auto mode, but you cripple it with a manual mode that's very basic; they should work together.
The major focus should be on the best possible masking/obstruction workflow; the rest is comparatively easy.
The best approach now would be to mark the obstruction in manual mode with a vector mask, then run generic face auto-detection again so it checks the obstruction vector masks and excludes those areas from training.
Also, sometimes only half of the face is detected by generic mode, so an inclusion vector mask could fix this if done properly, rerunning generic auto after marking the missed areas of the face.
But right now, manual and auto modes exclude each other for no reason.
| closed | 2022-02-12T00:17:08Z | 2022-02-17T22:19:04Z | https://github.com/iperov/DeepFaceLab/issues/5476 | [] | 2blackbar | 1 |
psf/requests | python | 6,378 | How to setup local dev environment and run the tests? | As I have not seen any details about it (beyond the cloning of the repo) in the README I put together a short blog posts on [Development environment for the Python requests package](https://dev.to/szabgab/development-environment-for-the-python-requests-package-eae) If you are interested, I'd be glad to send a PR for the README file to include some similar information. | closed | 2023-03-11T17:22:01Z | 2024-06-01T00:04:04Z | https://github.com/psf/requests/issues/6378 | [] | szabgab | 1 |
huggingface/datasets | tensorflow | 7,377 | Support for sparse arrays with the Arrow Sparse Tensor format? | ### Feature request
AI in biology is becoming a big thing. One thing that would be a huge benefit to the field that Huggingface Datasets doesn't currently have is native support for **sparse arrays**.
Arrow has support for sparse tensors.
https://arrow.apache.org/docs/format/Other.html#sparse-tensor
It would be a big deal if Hugging Face Datasets supported sparse tensors as a feature type, natively.
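For reference, a sketch of the Arrow-side support (assumption: your pyarrow build exposes the `SparseCOOTensor` API, including `from_scipy`):
```python
import numpy as np
import pyarrow as pa
from scipy import sparse

# a typical single-cell expression matrix: ~99% zeros
expr = sparse.random(10_000, 20_000, density=0.01, format="coo", dtype=np.float32)
arrow_tensor = pa.SparseCOOTensor.from_scipy(expr)
print(arrow_tensor.shape, arrow_tensor.non_zero_length)
```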
### Motivation
This is important, for example, in the field of transcriptomics (modeling and understanding gene expression), because a large fraction of the genes are not expressed (zero). More generally, in science, sparse arrays are very common, so adding support for them would be very beneficial; it would make using Hugging Face Dataset objects a lot more straightforward and clean.
### Your contribution
We can discuss this further once the team comments on what they think about the feature, whether there were previous attempts at making it work, and how hard they evaluate it to be. My intuition is that it should be fairly straightforward, as the Arrow backend already supports it. | open | 2025-01-21T20:14:35Z | 2025-01-30T14:06:45Z | https://github.com/huggingface/datasets/issues/7377 | [
"enhancement"
] | JulesGM | 1 |
milesmcc/shynet | django | 223 | pushState based routing | Currently, it seems that `pushState` based client side routing is not supported. For example, NextJS is using this to allow fast client-side navigation.
Like other solutions such as Plausible, Shynet should track these page changes and treat each one as a page view. | closed | 2022-08-21T11:03:39Z | 2023-03-18T09:50:34Z | https://github.com/milesmcc/shynet/issues/223 | [] | Empty2k12 | 6 |
xlwings/xlwings | automation | 1,661 | Issue with writing lists to Excel | #### OS (e.g. Windows 10)
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
I have a data frame `df` in Python with the following structure and similar data:
| | |
| --- | --- |
| rowdata1 | 2.33 |
| rowdata2 | 4.55 |
| rowdata3 | [1,2,3] |
| rowdata4 | [] |
I'm using the following code to write to excel
```python
outputs_sheet.range('A1').options(pd.DataFrame).value = df
```
This works for the single-value entries in the dataframe but doesn't write the list elements to the Excel sheet. Any thoughts on why this is occurring and how to fix it?
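One workaround sketch (assumption: the goal is just to see the lists in Excel, so converting them to strings first makes every cell a scalar):
```python
df_out = df.applymap(lambda v: str(v) if isinstance(v, list) else v)
outputs_sheet.range('A1').options(pd.DataFrame).value = df_out
```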
| closed | 2021-07-19T15:50:02Z | 2022-02-09T00:41:44Z | https://github.com/xlwings/xlwings/issues/1661 | [] | g-dixit | 2 |
miguelgrinberg/Flask-Migrate | flask | 309 | App with custom data models doesn't import the app package | Version 2.5.2
Noticed that when trying to upgrade using a migration that adds a custom data type (something that subclasses `TypeDecorator`) the migration script that gets created correctly generates the data model (e.g. `sa.Column('mytype', app.models.CustomType())`); however, it fails to import `app` at the top of the script, and thus raises `NameError: name 'app' is not defined` when you run it.
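For reference, the shape of the manual fix (assuming the custom type lives in `app.models`, as in the generated column above): add the import at the top of the generated revision file:
```python
# at the top of the generated migration script
import app.models
```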
A simple solution is to import the app at the top of the generated script. | closed | 2019-12-27T21:06:04Z | 2019-12-27T22:54:42Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/309 | [
"invalid"
] | fubuloubu | 2 |
pallets/flask | flask | 4,948 | [docs] clarify that Blueprint.before_request is not for all requests | # Summary
[The documentation for `Blueprint.before_request`](https://flask.palletsprojects.com/en/2.2.x/api/?highlight=before_request#flask.Blueprint.before_request) says:
> Register a function to run before each request.
This is not quite true. This decorator will only register a function to be run before each request *for this blueprint's views*.
The documentation today made it seem to me like `before_request` does what `before_app_request` does.
I think the docs should be amended to qualify when the registered functions get run, and link/compare to `before_app_request`.
I know it seems like overkill, and you're probably wondering why I didn't notice the documentation for `before_app_request` right above this. I'd clicked on an anchor from search results, so `before_app_request` was off-screen. Since `before_app_request` doesn't exist on a `Flask` object, and since the documentation for `before_request` sounded like what I wanted, it didn't occur to me to scroll up.
# MWE
Just to clarify the example:
This code fails with `before_request`, and succeeds with `before_app_request`:
```python
from flask import Blueprint, Flask

simple_page = Blueprint('simple_page', __name__)

@simple_page.route('/')
def show():
    return ("Hello world", 200)

hook_bp = Blueprint('decorator', __name__)

# global var to be mutated
count = {'count': 0}

@hook_bp.before_request
def before_request():
    print("before_request hook called")
    count['count'] += 1

app = Flask(__name__)
app.register_blueprint(simple_page)
app.register_blueprint(hook_bp)

r = app.test_client().get('/')
assert r.status_code == 200
assert r.text == "Hello world"
assert count['count'] == 1
```
| closed | 2023-01-20T00:45:25Z | 2023-02-25T00:06:20Z | https://github.com/pallets/flask/issues/4948 | [
"docs"
] | mdavis-xyz | 0 |
CatchTheTornado/text-extract-api | api | 23 | Use Local Ollama Instance Instead of Docker-Compose Instance | Hi,
I have already hosted Ollama on my local machine and would like to use that instance instead of the one created through the Docker Compose setup.
Could you please guide me on how I can configure the system to point to my local Ollama instance rather than using the Docker Compose-created instance?
Details:
I have Ollama running locally and accessible at `localhost:11434`.
Currently, Docker Compose creates a separate instance, and I would prefer to use my local instance for efficiency.
What I've tried so far:
I have checked the Docker Compose configuration, but I'm unsure where to modify the settings to switch to my local instance.
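For reference, the kind of override I was expecting to write (a hypothetical sketch: the service and variable names depend on this project's docker-compose.yml, while `host.docker.internal`/`host-gateway` are standard Docker):
```yaml
# docker-compose.override.yml
services:
  app:
    environment:
      - OLLAMA_HOST=http://host.docker.internal:11434
    extra_hosts:
      - "host.docker.internal:host-gateway"
```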
Any guidance would be much appreciated!
Thanks in advance! | closed | 2024-11-07T08:09:05Z | 2024-11-07T17:11:59Z | https://github.com/CatchTheTornado/text-extract-api/issues/23 | [
"question"
] | madhankumar2211 | 4 |
zama-ai/concrete-ml | scikit-learn | 862 | Torch.where is not correctly supported | ## Summary
I think there is an issue with the support of "torch.where" within "compile_torch_model".
Torch.where is expecting a bool tensor for the "condition" parameter, while "compile_torch_model" is expecting a float tensor (maybe related to the discrepancy between the supported type of torch.where and numpy.where for the "condition" parameter).
It is not possible to compile a torch model using torch.where because:
- to compute the trace, torch requires a bool tensor;
- to quantize the model, Concrete ML expects a float tensor.
## Description
- versions affected: concrete-ml 1.6.1
- python version: 3.9
- workaround:
I was able to make it work with a (very bad) workaround:
in _process_initializer of PostTrainingAffineQuantization (concrete.ml.quantization.post_training), recast "values" variable to numpy.float if array of bool.
(unfortunately overriding "_check_distribution_is_symmetric_around_zero" is not enough..)
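For reference, the shape of that recast (a sketch only; the exact code inside `_process_initializer` is version-dependent, and `numpy` is assumed to be imported at module level):
```python
# inside PostTrainingAffineQuantization._process_initializer (sketch)
if isinstance(values, numpy.ndarray) and values.dtype == numpy.bool_:
    values = values.astype(numpy.float64)  # let the quantizer see floats
```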
<details><summary>minimal POC to trigger the bug</summary>
<p>
```python
import torch
from concrete.ml.torch.compile import compile_torch_model


class PTQSimpleNet(torch.nn.Module):
    def __init__(self, n_hidden):
        super().__init__()
        self.n_hidden = n_hidden
        self.fc_tot = torch.rand(1, n_hidden) > 0.5

    def forward(self, x):
        y = torch.where(self.fc_tot, x, 0.)
        return y


N_FEAT = 32
torch_input = torch.randn(1, N_FEAT)
torch_model = PTQSimpleNet(N_FEAT)
quantized_module = compile_torch_model(
    torch_model,
    torch_input
)
```
</p>
</details>
| closed | 2024-09-06T09:00:43Z | 2024-10-28T15:24:26Z | https://github.com/zama-ai/concrete-ml/issues/862 | [
"bug"
] | theostos | 1 |
neuml/txtai | nlp | 447 | Allow task action arguments to be dictionaries in addition to tuples | Currently, task action arguments are expected to be tuples. This is problematic when wanting to only set a single argument, especially in a longer list.
Keyword arguments should also be supported via parameters being passed as dictionaries. | closed | 2023-03-02T22:30:53Z | 2023-03-02T22:36:10Z | https://github.com/neuml/txtai/issues/447 | [] | davidmezzetti | 0 |
iperov/DeepFaceLab | machine-learning | 663 | Issue in train |
## Expected behavior
*i'm following Basic workflow video,but in the train it don't work*
## Actual behavior
*Model first run.
Enable autobackup? (y/n ?:help skip:n) : y
Write preview history? (y/n ?:help skip:n) : y
Choose image for the preview history? (y/n skip:n) : y
Target iteration (skip:unlimited/default) :
0
Batch_size (?:help skip:0) : 8
Flip faces randomly? (y/n ?:help skip:y) :
y
Use lightweight autoencoder? (y/n, ?:help skip:n) : y
Use pixel loss? (y/n, ?:help skip: n/default ) :
n
Using plaidml.keras.backend backend.
INFO:plaidml:Opening device "opencl_amd_ellesmere.0"
Loading: 100%|####################################################################| 7011/7011 [00:09<00:00, 732.12it/s]
*
Then, Program stuck.
## Steps to reproduce
The data_dst and data_src extract faces work.I try to convert H64 without train,it works.
Is it the cause of the GPU?
##Other relevant information
my rig:
Asus Motar B360
AMD 2600
sapphire rx580 2048sp 8g
win 10
I`m sorry for my poor english
| closed | 2020-03-20T00:15:37Z | 2020-03-20T05:27:42Z | https://github.com/iperov/DeepFaceLab/issues/663 | [] | hyz3203 | 1 |
ClimbsRocks/auto_ml | scikit-learn | 112 | run CustomSparseScaler on the subpredictor_predictions | closed | 2016-10-10T17:16:15Z | 2017-03-12T01:07:39Z | https://github.com/ClimbsRocks/auto_ml/issues/112 | [] | ClimbsRocks | 1 |
|
Lightning-AI/pytorch-lightning | machine-learning | 20,306 | NCCL backend fails during multi-node, multi-GPU training | ### Bug description
I set up training on a Slurm cluster, specifying 2 nodes with 4 GPUs each. During initialization, I observed [Unexpected behavior (times out) of all_gather_into_tensor with subgroups](https://github.com/pytorch/pytorch/issues/134006#top) (a PyTorch issue).
Apparently, this issue has not been solved on the Pytorch or NCCL level, but there is a workaround (described in [this post](https://github.com/pytorch/pytorch/issues/134006#issuecomment-2300041017) on that same issue).
How/where could this workaround be implemented in Pytorch Lightning, if outright solving the underlying problem is not possible?
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
I'm working on a Slurm cluster with 2 headnodes (no GPUs), 6 computenodes (configuration see below) and NFS-mounted data storage.
```
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA RTX A6000
- NVIDIA RTX A6000
- NVIDIA RTX A6000
- NVIDIA RTX A6000
- NVIDIA RTX A6000
- NVIDIA RTX A6000
- NVIDIA RTX A6000
- NVIDIA RTX A6000
- available: True
- version: 12.1
* Lightning:
- lightning-utilities: 0.11.7
- pytorch-lightning: 2.4.0
- torch: 2.4.1+cu121
- torchmetrics: 1.4.2
- torchvision: 0.19.1+cu121
* Packages:
- absl-py: 2.1.0
- aiohappyeyeballs: 2.4.0
- aiohttp: 3.10.5
- aiosignal: 1.3.1
- albucore: 0.0.16
- albumentations: 1.4.15
- annotated-types: 0.7.0
- async-timeout: 4.0.3
- attrs: 24.2.0
- certifi: 2024.8.30
- charset-normalizer: 3.3.2
- contourpy: 1.3.0
- cycler: 0.12.1
- eval-type-backport: 0.2.0
- filelock: 3.13.1
- fonttools: 4.53.1
- frozenlist: 1.4.1
- fsspec: 2024.2.0
- future: 1.0.0
- geopandas: 1.0.1
- grpcio: 1.66.1
- huggingface-hub: 0.25.0
- idna: 3.10
- imageio: 2.35.1
- imgaug: 0.4.0
- jinja2: 3.1.3
- joblib: 1.4.2
- kiwisolver: 1.4.7
- lazy-loader: 0.4
- lightning-utilities: 0.11.7
- markdown: 3.7
- matplotlib: 3.9.2
- mpmath: 1.3.0
- msgpack: 1.1.0
- multidict: 6.1.0
- networkx: 3.2.1
- numpy: 1.26.3
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 9.1.0.70
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.20.5
- nvidia-nvjitlink-cu12: 12.1.105
- nvidia-nvtx-cu12: 12.1.105
- opencv-python: 4.10.0.84
- opencv-python-headless: 4.10.0.84
- packaging: 24.1
- pandas: 2.2.2
- pillow: 10.2.0
- pip: 22.3.1
- protobuf: 5.28.1
- pydantic: 2.9.2
- pydantic-core: 2.23.4
- pyogrio: 0.9.0
- pyparsing: 3.1.4
- pyproj: 3.6.1
- python-dateutil: 2.9.0.post0
- pytorch-lightning: 2.4.0
- pytz: 2024.2
- pyyaml: 6.0.2
- requests: 2.32.3
- s2sphere: 0.2.5
- safetensors: 0.4.5
- scikit-image: 0.24.0
- scikit-learn: 1.5.2
- scipy: 1.14.1
- setuptools: 65.5.0
- shapely: 2.0.6
- six: 1.16.0
- sympy: 1.12
- tensorboard: 2.17.1
- tensorboard-data-server: 0.7.2
- threadpoolctl: 3.5.0
- tifffile: 2024.8.30
- timm: 1.0.9
- torch: 2.4.1+cu121
- torchmetrics: 1.4.2
- torchvision: 0.19.1+cu121
- tqdm: 4.66.5
- triton: 3.0.0
- typing-extensions: 4.9.0
- tzdata: 2024.1
- urllib3: 2.2.3
- werkzeug: 3.0.4
- yarl: 1.11.1
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.9
- release: 5.15.0-50-generic
- version: #56~20.04.1-Ubuntu SMP Tue Sep 27 15:51:29 UTC 2022
</details>
```
### More info
_No response_ | open | 2024-09-26T16:09:22Z | 2024-09-26T16:09:35Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20306 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | raketenolli | 0 |
sammchardy/python-binance | api | 743 | Async implementation | Are there any plans to implement an async interface? | open | 2021-03-25T16:34:46Z | 2021-04-28T11:50:12Z | https://github.com/sammchardy/python-binance/issues/743 | [] | Kyzegs | 5
jupyter-book/jupyter-book | jupyter | 2,186 | Configure theme (e.g. primary color?) | Hi folks,
Loving jupyter-book (migrating here from quarto) but I am struggling to customize the theme, e.g. by setting the primary color. I've tried various ways I've seen suggested for doing this:
- [custom css variables](https://sphinx-design.readthedocs.io/en/latest/css_variables.html)
- adding a custom `_sass/theme.scss` that redefines `$primary`
but haven't had any luck overriding this.
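For reference, this is the kind of CSS-variable override I attempted (a sketch: `--pst-color-primary` is my guess at the variable exposed by the underlying sphinx-book/pydata theme, and `_static/custom.css` is where jupyter-book picks up extra CSS):

```css
/* _static/custom.css (automatically included by jupyter-book) */
html {
    --pst-color-primary: #2e7d32; /* example primary color */
}
```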
It seems that some Sphinx themes provide a mechanism to set colors in `conf.py`; it would be great to be able to do something similar in the jupyter-book configuration YAML or with a custom Sass file (compare [quarto theming](https://quarto.org/docs/output-formats/html-themes.html#theme-options)). I'm only familiar with how other static site generators handle this; I'm not experienced enough in CSS, Sass, or Sphinx to figure out how to alter the behavior here!
| open | 2024-08-08T19:04:30Z | 2024-08-08T19:04:30Z | https://github.com/jupyter-book/jupyter-book/issues/2186 | [] | cboettig | 0 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 96 | Do you have a trained model dump? | closed | 2019-03-08T21:44:08Z | 2019-12-08T09:57:48Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/96 | [] | gauravlath07 | 1
|
pytest-dev/pytest-xdist | pytest | 256 | logging not captured with pytest 3.3 and xdist | Consider this file:
```python
import logging
logger = logging.getLogger(__name__)
def test():
logger.warn('Some warning')
```
When executing `pytest foo.py -n2`, the warning is printed to the console:
```
============================= test session starts =============================
platform win32 -- Python 3.5.0, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: C:\Users\bruno, inifile:
plugins: xdist-1.20.1, forked-0.2
gw0 [1] / gw1 [1]
scheduling tests via LoadScheduling
foo.py 6 WARNING Some warning
. [100%]
========================== 1 passed in 0.65 seconds ===========================
```
When executing `pytest` normally, without the `-n2` flag, the message is not printed.
Using `pytest 3.3.1` and `xdist 1.20.1`. | closed | 2017-12-06T19:12:50Z | 2017-12-07T11:27:42Z | https://github.com/pytest-dev/pytest-xdist/issues/256 | [
"bug"
] | nicoddemus | 1 |
pywinauto/pywinauto | automation | 1,136 | Way to get the vertical scroll bar percentage | ## Expected Behavior
I expect to be able to read the vertical scroll bar position as a percentage.
## Actual Behavior
I am able to scroll down, but I am unable to get the vertical scroll bar percentage, so I cannot determine when the scroll bar is 100% scrolled down.
## Steps to Reproduce the Problem
1.
2.
3.
## Short Example of Code to Demonstrate the Problem
Currently using get_propeties() method but it doesn`t have info about it
## Specifications
- Pywinauto version:0.6.8
- Python version and bitness:3.7.8
- Platform and OS: uia n

Windows
| closed | 2021-10-19T09:03:19Z | 2021-10-20T05:47:52Z | https://github.com/pywinauto/pywinauto/issues/1136 | [] | YenikeRaghuRam | 5 |
strawberry-graphql/strawberry | fastapi | 2,923 | relay: returning a strawberry object with node: strawberry.relay.Node = strawberry.relay.node() breaks |
After the latest strawberry / strawberry django updates, the code
````python
from typing import Optional

import strawberry
import strawberry_django
from strawberry.types import Info

# AuthList is an application-specific type (definition not shown)


@strawberry.type
class SecretgraphObject:
    node: strawberry.relay.Node = strawberry.relay.node()


@strawberry.type
class Query:
    @strawberry_django.field
    @staticmethod
    def secretgraph(
        info: Info, authorization: Optional[AuthList] = None
    ) -> SecretgraphObject:
        return SecretgraphObject
````
doesn't work anymore.
## Describe the Bug
## System Information
- Operating system: linux
- Strawberry version (if applicable): 193.1
## Additional Context
````
GraphQL request:2:3
1 | query serverSecretgraphConfigQuery {
2 | secretgraph {
| ^
3 | config {
Traceback (most recent call last):
File "/home/alex/git/secretgraph/.venv/lib/python3.11/site-packages/graphql/execution/execute.py", line 528, in await_result
return_type, field_nodes, info, path, await result
^^^^^^^^^^^^
File "/home/alex/git/secretgraph/.venv/lib/python3.11/site-packages/asgiref/sync.py", line 479, in __call__
ret: _R = await loop.run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/git/secretgraph/.venv/lib/python3.11/site-packages/asgiref/sync.py", line 538, in thread_handler
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/git/secretgraph/.venv/lib/python3.11/site-packages/strawberry_django/resolvers.py", line 91, in async_resolver
return sync_resolver(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/git/secretgraph/.venv/lib/python3.11/site-packages/strawberry_django/resolvers.py", line 77, in sync_resolver
retval = retval()
^^^^^^^^
TypeError: SecretgraphObject.__init__() missing 1 required keyword-only argument: 'node'
```` | closed | 2023-07-05T21:36:29Z | 2025-03-20T15:56:17Z | https://github.com/strawberry-graphql/strawberry/issues/2923 | [
"bug"
] | devkral | 2 |
streamlit/streamlit | data-science | 10,814 | Add a checkbox group widget | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Add a new command to make it easy to create a group of checkboxes:
<img width="129" alt="Image" src="https://github.com/user-attachments/assets/60eef5f6-9b42-4dc4-9b44-430916ca59e2" />
### Why?
Simplify creating a group of checkboxes in a vertical or horizontal layout.
### How?
This could be supported by an API very similar to `st.radio` and `st.multiselect`:
```python
selected_options = st.checkbox_group(label, options, default=None, format_func=str, key=None, help=None, on_change=None, args=None, kwargs=None, *, max_selections=None, placeholder="Choose an option", disabled=False, label_visibility="visible", horizontal=False)
```
The `horizontal` parameter would allow orienting the checkbox group horizontally instead of vertically (same as `st.radio`).
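For comparison, this is roughly what emulating a checkbox group looks like today with individual `st.checkbox` calls (a sketch; the `checkbox_group` helper is mine, not a Streamlit API):

```python
import streamlit as st


def checkbox_group(label, options, horizontal=False):
    """Emulate a checkbox group with individual st.checkbox widgets."""
    st.write(label)
    containers = st.columns(len(options)) if horizontal else [st] * len(options)
    return [
        opt
        for opt, box in zip(options, containers)
        if box.checkbox(opt, key=f"{label}-{opt}")
    ]


selected = checkbox_group("Options", ["Option 1", "Option 2", "Option 3"])
```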
### Additional Context
_No response_ | open | 2025-03-18T10:44:36Z | 2025-03-18T10:46:05Z | https://github.com/streamlit/streamlit/issues/10814 | [
"type:enhancement",
"feature:st.checkbox"
] | lukasmasuch | 1 |
widgetti/solara | jupyter | 281 | Autoreload for subpackages | When you have an application with the following structure:
- `my_application/app` (a multipage Solara app: a directory whose `__init__.py` defines the `Page` component)
- `my_application/components` (a module with Solara components used by the app)

then, when running `solara run my_application.app` and making changes in `components`, autoreload is triggered, but the change is not reflected in the reloaded application.
The desired behavior is that changes anywhere in the package are reloaded, not only those in the app subpackage.
A workaround for testing/development is to create an entry-point file higher in the directory hierarchy and run from there. | closed | 2023-09-07T12:14:52Z | 2023-09-18T06:52:39Z | https://github.com/widgetti/solara/issues/281 | [
"bug"
] | Jhsmit | 0 |
recommenders-team/recommenders | data-science | 1,453 | Improvements on diversity metrics | I am thinking that it looks a bit as if we suggest random as a valid algorithm. I may rewrite a bit to emphasize the trade off i.e. one doesn't want maximum diversity when doing recommendations.
_Originally posted by @anargyri in https://github.com/microsoft/recommenders/pull/1416#r652624011_ | closed | 2021-06-16T13:21:41Z | 2021-07-07T12:33:43Z | https://github.com/recommenders-team/recommenders/issues/1453 | [] | anargyri | 2 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,009 | Detected by https://www.coolbet.com | Chromium Version 109.0.5414.87
UC 3.2.1
Running on Manjaro 22.0.1

It worked until 2 days ago (01/26) | closed | 2023-01-26T22:19:59Z | 2023-02-04T21:29:39Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1009 | [] | JohnPortella | 2 |
keras-team/keras | deep-learning | 20,463 | BackupAndRestore callback sometimes can't load checkpoint | When training is interrupted, the model sometimes cannot restore its weights with the `BackupAndRestore` callback.
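As a stopgap, I am considering clearing a corrupt backup before calling `fit`. A minimal sketch, assuming the backup file was truncated by the interrupt mid-write; the directory and the `latest.weights.h5` filename are my assumptions about the callback's on-disk layout:

```python
import os

import h5py
import keras

backup_dir = "/tmp/train_backup"  # example path
weights_path = os.path.join(backup_dir, "latest.weights.h5")  # assumed filename

# Drop a truncated/corrupt HDF5 backup so BackupAndRestore starts fresh
# instead of crashing in load_weights at the start of training.
if os.path.exists(weights_path):
    try:
        with h5py.File(weights_path, "r"):
            pass  # file opens cleanly; keep it
    except OSError:
        os.remove(weights_path)

callbacks = [keras.callbacks.BackupAndRestore(backup_dir=backup_dir)]
```

The traceback from the failed restore: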
```text
Traceback (most recent call last):
File "/home/alex/jupyter/lab/model_fba.py", line 150, in <module>
model.fit(train_dataset, callbacks=callbacks, epochs=NUM_EPOCHS, steps_per_epoch=STEPS_PER_EPOCH, verbose=2)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 113, in error_handler
return fn(*args, **kwargs)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 311, in fit
callbacks.on_train_begin()
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/callbacks/callback_list.py", line 218, in on_train_begin
callback.on_train_begin(logs)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/callbacks/backup_and_restore.py", line 116, in on_train_begin
self.model.load_weights(self._weights_path)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 113, in error_handler
return fn(*args, **kwargs)
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/models/model.py", line 353, in load_weights
saving_api.load_weights(
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/saving/saving_api.py", line 251, in load_weights
saving_lib.load_weights_only(
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/saving/saving_lib.py", line 550, in load_weights_only
weights_store = H5IOStore(filepath, mode="r")
File "/home/alex/.local/lib/python3.10/site-packages/keras/src/saving/saving_lib.py", line 931, in __init__
self.h5_file = h5py.File(root_path, mode=self.mode)
File "/home/alex/.local/lib/python3.10/site-packages/h5py/_hl/files.py", line 561, in __init__
fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
File "/home/alex/.local/lib/python3.10/site-packages/h5py/_hl/files.py", line 235, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 102, in h5py.h5f.open
OSError: Unable to synchronously open file (bad object header version number)
``` | closed | 2024-11-07T05:57:29Z | 2024-11-11T16:49:36Z | https://github.com/keras-team/keras/issues/20463 | [
"type:Bug"
] | shkarupa-alex | 1 |
neuml/txtai | nlp | 404 | Allow searching for images | At the moment the `similar` clause only allows searching for text. It would be useful to extend this to images also.
@davidmezzetti on Slack suggested using something like `similar(image:///PATH)`.
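With that syntax, a filtered image query could then look like this (purely hypothetical: the `similar(image:///...)` clause does not exist yet, and `embeddings` is an index like the one built in the snippet further down):

```python
# hypothetical: similar(image:///...) is the proposed clause, not a real txtai feature
embeddings.search(
    "select id, text, score from txtai "
    "where similar(image:///path/to/query.jpg) and text like '%cat%' limit 2"
)
```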
As a workaround for anyone else wanting to search by images, I did notice you can do it right now, but you can't use the SQL syntax.
That is, you can search the whole index for the closest entry, but can't filter entries out.
This functionality isn't documented on `txtai`, it just works as a side-effect of CLIP.
You can also search for embeddings directly.
For example:
```python
import requests
from sentence_transformers import SentenceTransformer
from PIL import Image
from txtai.embeddings import Embeddings
texts = ["a picture of a cat", "a painting of a dog"]
texts_index = [(i, t, None) for i, t in enumerate(texts)]
embeddings = Embeddings({"method": "sentence-transformers", "path": "sentence-transformers/clip-ViT-B-32", "content": True})
embeddings.index(texts_index)
url = "https://cataas.com/cat"
r = requests.get(url, stream=True)
im = Image.open(r.raw).convert("RGB")
# search image directly
print(embeddings.search(im, 2))
# search embeddings
model = SentenceTransformer('clip-ViT-B-32')
im_emb = model.encode(im)
print(embeddings.search(im_emb, 2))
```
This outputs:
```text
[{'id': '0', 'text': 'a picture of a cat', 'score': 0.25348278880119324}, {'id': '1', 'text': 'a painting of a dog', 'score': 0.18208511173725128}]
[{'id': '0', 'text': 'a picture of a cat', 'score': 0.25348278880119324}, {'id': '1', 'text': 'a painting of a dog', 'score': 0.18208511173725128}]
``` | closed | 2023-01-09T17:52:27Z | 2023-11-07T16:03:58Z | https://github.com/neuml/txtai/issues/404 | [] | dcferreira | 6 |
plotly/dash-core-components | dash | 255 | feature suggestion: Slider should have value printed next to it | The Slider should have an option to display its current value, like ipywidgets sliders do.
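As a workaround today, the value can be mirrored with a callback (a sketch; component ids are examples):

```python
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Slider(id="my-slider", min=0, max=10, step=1, value=5),
    html.Div(id="slider-value"),  # mirrors the current slider value
])


@app.callback(Output("slider-value", "children"), [Input("my-slider", "value")])
def show_value(value):
    return "Value: {}".format(value)


if __name__ == "__main__":
    app.run_server()
```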
| open | 2018-08-07T15:09:57Z | 2020-11-05T09:10:43Z | https://github.com/plotly/dash-core-components/issues/255 | [] | arsenovic | 7 |
home-assistant/core | asyncio | 140,818 | Setup failed for 'panasonic_viera': Unable to import component: No module named 'Crypto.Cipher._mode_ctr' | ### The problem
Setup failed for 'panasonic_viera': Unable to import component: No module named 'Crypto.Cipher._mode_ctr'
```text
Logger: homeassistant.setup
Source: setup.py:340
First occurred: 15:18:55 (1 occurrences)
Last logged: 15:18:55
Setup failed for 'panasonic_viera': Unable to import component: No module named 'Crypto.Cipher._mode_ctr'
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/setup.py", line 340, in _async_setup_component
component = await integration.async_get_component()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/loader.py", line 1034, in async_get_component
self._component_future.result()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/usr/src/homeassistant/homeassistant/loader.py", line 1014, in async_get_component
comp = await self.hass.async_add_import_executor_job(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
self._get_component, True
^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/homeassistant/homeassistant/loader.py", line 1074, in _get_component
ComponentProtocol, importlib.import_module(self.pkg_path)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/util/loop.py", line 201, in protected_loop_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.13/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 1026, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/usr/src/homeassistant/homeassistant/components/panasonic_viera/__init__.py", line 9, in <module>
from panasonic_viera import EncryptionRequired, Keys, RemoteControl, SOAPError
File "/usr/local/lib/python3.13/site-packages/panasonic_viera/__init__.py", line 16, in <module>
from Crypto.Cipher import AES
File "/usr/local/lib/python3.13/site-packages/Crypto/Cipher/__init__.py", line 31, in <module>
ModuleNotFoundError: No module named 'Crypto.Cipher._mode_ctr'
```
### What version of Home Assistant Core has the issue?
2025.3.3
### What was the last working version of Home Assistant Core?
2025.3.3
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
15.0
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/panasonic_viera/
### Diagnostics information
(Same log as shown under "The problem" above.)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
This happened after updating to OS 15.0.
"integration: panasonic_viera"
] | vlad36N | 1 |
Yorko/mlcourse.ai | matplotlib | 712 | Latex math displays incorrectly in topic-4 | [First arcticle](https://mlcourse.ai/book/topic04/topic4_linear_models_part1_mse_likelihood_bias_variance.html) in the topic4 does not show some math. Math under toggle button with "Small CheatSheet on matrix derivatives" looks like this:
<img width="732" alt="image" src="https://user-images.githubusercontent.com/17138883/188671293-ba1dbe47-c5e6-491b-9191-3e48847dac09.png">
| closed | 2022-09-06T15:19:09Z | 2022-09-07T10:40:12Z | https://github.com/Yorko/mlcourse.ai/issues/712 | [] | aulasau | 1 |
gradio-app/gradio | deep-learning | 9,912 | Gradio.File throws "Invalid file type" error for files with long names (200+ characters) | ### Describe the bug
`gradio.exceptions.Error: "Invalid file type. Please upload a file that is one of these formats: ['.***']"`
When using the `gradio.File` component, files with names that exceed 200 characters (including the suffix) fail to be proceed. Even though the file is with the correct suffix), Gradio raises an error indicating that the file type is invalid.
Similar to #2681
Workaround: Rename the file
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import pandas as pd
def analyze_pdfs(pdf_files):
# Simply return filenames without any processing
results = [{"Filename": pdf_file.name} for pdf_file in pdf_files]
df_output = pd.DataFrame(results)
return df_output
with gr.Blocks() as demo:
pdf_files = gr.File(label="Upload PDFs", file_count="multiple", file_types=[".pdf"], type="filepath")
analyze_button = gr.Button("Analyze")
output_df = gr.Dataframe(headers=["Filename"], interactive=False)
analyze_button.click(
analyze_pdfs,
inputs=[pdf_files],
outputs=[output_df],
)
if __name__ == "__main__":
demo.launch()
```
**Steps to Reproduce:**
1. Create or rename a PDF file with a filename of 200+ characters (e.g., very_long_filename_over_200_characters_long_example_document... .pdf).
2. Upload the file using the `gradio.File` component.
3. Click Analyze
4. There it is
### Screenshot

### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.5.0
gradio_client version: 1.4.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.4.2 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.1
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.7.2
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.12.5
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | closed | 2024-11-07T10:01:18Z | 2024-11-07T18:52:21Z | https://github.com/gradio-app/gradio/issues/9912 | [
"bug"
] | TakaSoap | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.