repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
netbox-community/netbox | django | 18,496 | Broken Link For Custom Links Documentation | ### Change Type
Correction
### Area
Features
### Proposed Changes
The [Custom Links](https://demo.netbox.dev/static/docs/customization/custom-links/) page has an external URL for [Jinja2 Template Code](https://jinja2docs.readthedocs.io/en/stable/), which returns “404 Documentation page not found
[jinja2docs.readthedocs.io](https://jinja2docs.readthedocs.io/)
The documentation page you requested does not exist or may have been removed.”
That URL needs to be changed to the current Jinja documentation URL (https://jinja.palletsprojects.com/). | closed | 2025-01-27T12:03:44Z | 2025-02-03T15:12:37Z | https://github.com/netbox-community/netbox/issues/18496 | [
"type: documentation",
"status: accepted"
] | mr1716 | 0 |
pandas-dev/pandas | data-science | 60,284 | BUG?: using `None` as replacement value in `replace()` typically upcasts to object dtype | I noticed that in certain cases, when replacing a value with `None`, we always cast to object dtype, regardless of whether the dtype of the calling series can actually hold None (at least, when considering `None` just as a generic "missing value" indicator).
For example, a float Series can hold `None` in the sense of holding missing values, which is how `None` is treated in setitem:
```python
>>> ser = pd.Series([1, 2, 3], dtype="float")
>>> ser[1] = None
>>> ser
0 1.0
1 NaN
2 3.0
dtype: float64
```
However, when using `replace()` to change the value 2.0 with None, it depends on the exact way to specify the to_replace/value combo, but typically it will upcast to object:
```python
# with list
>>> ser.replace([1, 2], [10, None])
0 10.0
1 None
2 3.0
dtype: object
# with Series -> here it gives NaN but that is because the Series constructor already coerces the None
>>> ser.replace(pd.Series({1: 10, 2: None}))
0 10.0
1 NaN
2 3.0
dtype: float64
# with scalar replacements
>>> ser.replace(1, 10).replace(2, None)
0 10.0
1 None
2 3.0
dtype: object
```
In all the above cases, when replacing `None` with `np.nan`, it of course just results in a float Series with NaN.
The reason for this is two-fold. First, in `Block._replace_coerce` there is a check specifically for `value is None` and in that case we always cast to object dtype:
https://github.com/pandas-dev/pandas/blob/5f23aced2f97f2ed481deda4eaeeb049d6c7debe/pandas/core/internals/blocks.py#L906-L910
The above is used when replacing with a list of values. But for the scalar case, we also cast to object dtype because in this case we check for `if self._can_hold_element(value)` to do the replacement with a simple setitem (and if not cast to object dtype first before trying again). But it seems that `can_hold_element(np.array([], dtype=float), None)` gives False ..
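For illustration, the check can be reproduced directly (a small sketch; `can_hold_element` is internal, so the exact import path may move between pandas versions):
```python
import numpy as np
from pandas.core.dtypes.cast import can_hold_element

arr = np.array([], dtype=float)
print(can_hold_element(arr, np.nan))  # True  -> simple setitem, dtype stays float64
print(can_hold_element(arr, None))    # False -> the block is cast to object first
```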
---
Everything is tested with current main (3.0.0.dev), but I see the same behaviour on older releases (2.0 and 1.5)
---
Somewhat related issue:
* https://github.com/pandas-dev/pandas/issues/29024 | open | 2024-11-12T10:45:46Z | 2024-11-12T12:45:38Z | https://github.com/pandas-dev/pandas/issues/60284 | [
"Bug",
"Missing-data",
"replace",
"API - Consistency"
] | jorisvandenbossche | 2 |
LibrePhotos/librephotos | django | 968 | Add Folders as an Photos view | **Describe the enhancement you'd like**
Right now you can choose to see
* With timestamp
* Without time stamp
* Recently added
* Hidden
* Favorites
* My Public Photos
I would like to add a
* Folders
view here also. It would look similar to "With timestamp", except that the sorting should be on folders and the folder path should be shown in bold to the right instead of the dates.
**Describe why this will benefit the LibrePhotos**
It's similar to #129, but this might be easier to implement and would in some cases make more sense.
It's possible that the folder structure you have is not 100% suited to making albums, but by sorting by the folder structure you could more easily create valid albums.
**Additional context**
Add any other context or screenshots about the enhancement request here.
| open | 2023-07-24T22:26:59Z | 2024-05-06T16:05:53Z | https://github.com/LibrePhotos/librephotos/issues/968 | [
"enhancement"
] | sterys | 2 |
aleju/imgaug | deep-learning | 858 | NP_****_TYPES error | Hi, imgaug developers,
I was having the following error:
tests/test_postProcessor.py:4: in <module>
from openood.evaluation_api import Evaluator
.conda/lib/python3.10/site-packages/openood/evaluation_api/__init__.py:1: in <module>
from .evaluator import Evaluator
.conda/lib/python3.10/site-packages/openood/evaluation_api/evaluator.py:11: in <module>
from openood.evaluators.metrics import compute_all_metrics
.conda/lib/python3.10/site-packages/openood/evaluators/__init__.py:1: in <module>
from .utils import get_evaluator
.conda/lib/python3.10/site-packages/openood/evaluators/utils.py:1: in <module>
from openood.evaluators.mos_evaluator import MOSEvaluator
.conda/lib/python3.10/site-packages/openood/evaluators/mos_evaluator.py:13: in <module>
from openood.postprocessors import BasePostprocessor
.conda/lib/python3.10/site-packages/openood/postprocessors/__init__.py:14: in <module>
from .godin_postprocessor import GodinPostprocessor
.conda/lib/python3.10/site-packages/openood/postprocessors/godin_postprocessor.py:7: in <module>
from openood.preprocessors.transform import normalization_dict
.conda/lib/python3.10/site-packages/openood/preprocessors/__init__.py:3: in <module>
from .draem_preprocessor import DRAEMPreprocessor
.conda/lib/python3.10/site-packages/openood/preprocessors/draem_preprocessor.py:6: in <module>
import imgaug.augmenters as iaa
.conda/lib/python3.10/site-packages/imgaug/__init__.py:7: in <module>
from imgaug.imgaug import * # pylint: disable=redefined-builtin
.conda/lib/python3.10/site-packages/imgaug/imgaug.py:45: in <module>
NP_FLOAT_TYPES = set(np.sctypes["float"])
.conda/lib/python3.10/site-packages/numpy/__init__.py:400: in __getattr__
raise AttributeError(
E AttributeError: `np.sctypes` was removed in the NumPy 2.0 release. Access dtypes explicitly instead.
As the last line of the traceback says, `np.sctypes` was removed in NumPy 2.0, so the following lines no longer work:
```
NP_FLOAT_TYPES = set(np.sctypes["float"])
NP_INT_TYPES = set(np.sctypes["int"])
NP_UINT_TYPES = set(np.sctypes["uint"])
```
[https://github.com/aleju/imgaug/blob/0101108d4fed06bc5056c4a03e2bcb0216dac326/imgaug/imgaug.py#L39C1-L41C40](https://github.com/aleju/imgaug/blob/0101108d4fed06bc5056c4a03e2bcb0216dac326/imgaug/imgaug.py#L39C1-L41C40)
I changed those lines to:
```
# np.sctypes was removed in NumPy 2.0, so the dtype sets are enumerated explicitly
NP_FLOAT_TYPES = {np.float16, np.float32, np.float64}
NP_INT_TYPES = {np.int8, np.int16, np.int32, np.int64}
NP_UINT_TYPES = {np.uint8, np.uint16, np.uint32, np.uint64}
```
and it worked for me. Is it okay to change those lines for the whole repo?
Thank you for your answer
| open | 2024-10-16T10:21:40Z | 2024-10-26T19:21:59Z | https://github.com/aleju/imgaug/issues/858 | [] | isega24 | 0 |
ageitgey/face_recognition | python | 1,155 | Face percent match | Hello, how do I display the "percent match" of a face in live webcam video? | closed | 2020-06-06T17:15:49Z | 2023-06-05T08:40:16Z | https://github.com/ageitgey/face_recognition/issues/1155 | [] | lavrenkov-sketch | 1 |
flaskbb/flaskbb | flask | 667 | flaskbb is not installed; where does the `flaskbb` in this command come from? | devconfig:dependencies ## Generates a development config
flaskbb makeconfig -d
| open | 2024-04-25T06:53:35Z | 2024-07-13T20:15:45Z | https://github.com/flaskbb/flaskbb/issues/667 | [] | windy003 | 1 |
ResidentMario/missingno | data-visualization | 169 | [Feature Request] Java based MissingNo library? | This is an awesome feature for any data scientist starting a data exploration activity, and we would like to include it in our [data oculus](https://dataoculus.app/) product, where we do detailed profiling of data. However, missingno is currently a standalone tool (data processing + logic + charts all in one) that cannot be integrated directly, and it is written in Python.
We would love to contribute if anyone is considering a Java version of this same functionality. | open | 2024-02-28T22:56:03Z | 2024-07-27T22:12:51Z | https://github.com/ResidentMario/missingno/issues/169 | [] | dataoculus | 1 |
vitalik/django-ninja | django | 1,343 | docs: extend documentation how to send files | **Is your feature request related to a problem? Please describe.**
I noticed there is a page about file upload (the receiving part), and there could be a little chapter about the other part: how to send files (as a blob).
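For reference, this is the kind of snippet such a chapter could include. It is only a minimal sketch with assumed names (a `Document` model with a `FileField` called `file`), and it relies on Django's `FileResponse` being passed through by django-ninja, which should be double-checked against current behavior:
```python
from django.http import FileResponse
from ninja import Router

from myapp.models import Document  # hypothetical app and model with a FileField named `file`

router = Router()

@router.get("/documents/{doc_id}/download")
def download_document(request, doc_id: int):
    doc = Document.objects.get(pk=doc_id)
    # FileResponse streams the file and sets Content-Disposition for a download
    return FileResponse(doc.file.open("rb"), as_attachment=True, filename=doc.file.name)
```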
**Describe the solution you'd like**
A little chapter on best practices for sending files (fields defined with `models.FileField`) | open | 2024-11-21T18:24:39Z | 2024-11-22T14:43:19Z | https://github.com/vitalik/django-ninja/issues/1343 | [
"documentation"
] | Zerotask | 0 |
Farama-Foundation/Gymnasium | api | 446 | [Question] Setting a custom camera distance when recording videos of a MuJoCo environment using RecordVideo | ### Question
Using `gymnasium.wrappers.RecordVideo` to record a video of a MuJoCo environment (e.g., Ant-v4) seems to ignore the camera distance specified by the underlying MuJoCo environment.
When I initialize the environment using `render_mode="human"`, the value of `default_camera_config` passed to `MujocoEnv.__init__` in `AntEnv` correctly affects the rendering displayed by the MuJoCo engine. However, using `render_mode="rgb_array"`, the video file generated by `RecordVideo` ignores this value and generates frames using some default value (possibly 4.0).
I validated that `env.unwrapped.mujoco_renderer.viewer.cam.distance` is set to the new value (e.g., 10.0) before and after the wrapping, but the resulting video frames are recorded using the default value and are unaffected. | closed | 2023-04-11T19:53:59Z | 2023-04-28T18:43:31Z | https://github.com/Farama-Foundation/Gymnasium/issues/446 | [
"question"
] | Omer1Yuval1 | 7 |
horovod/horovod | machine-learning | 3,785 | Performance did not improve when I increased the number of processes | I want to know the difference in performance between using Horovod and not using it, so I used a PyTorch version of the MNIST demo with different settings. Here are the results:
1. without hovorod, batch_size = 32
Test set: Average loss: 0.0466, Accuracy: 98.62%
duration: 0:03:23.520224
2. with horovod, np = 1, batch_size = 32
Test set: Average loss: 0.0425, Accuracy: 98.64%
duration: 0:03:09.118766
3. with horovod, np = 2, batch_size = 32 * 2
Test set: Average loss: 0.0535, Accuracy: 98.30%
duration: 0:02:13.352439
4. with horovod, np = 4, batch_size = 32 * 4
Test set: Average loss: 0.0717, Accuracy: 97.84%
duration: 0:02:42.104210
All the other settings are the same. From the results above, the training speed increases when using Horovod compared with not using it.
But what puzzles me is that when I increase the number of processes from 2 to 4, the performance becomes worse. Is that normal? Or maybe there are some other settings that I am ignoring? | closed | 2022-11-29T08:30:47Z | 2023-02-19T13:19:23Z | https://github.com/horovod/horovod/issues/3785 | [
"wontfix"
] | zly9844 | 1 |
slackapi/bolt-python | fastapi | 747 | SQLAlchemy example for async_find_bot out of date | https://github.com/slackapi/bolt-python/blob/71c89cca62fe28b37bb3eba6263fa91451a6601b/examples/sqlalchemy/async_oauth_app.py#L66
The function is missing `is_enterprise_install` as a parameter. Submitting a PR. | closed | 2022-10-23T23:19:03Z | 2022-11-04T06:07:20Z | https://github.com/slackapi/bolt-python/issues/747 | [
"docs",
"area:async",
"area:examples"
] | ntarora | 0 |
JoeanAmier/XHS-Downloader | api | 21 | Running the Python 3 source code fails | 
| closed | 2023-12-14T09:14:20Z | 2023-12-14T10:55:50Z | https://github.com/JoeanAmier/XHS-Downloader/issues/21 | [] | shirebella | 0 |
ultralytics/ultralytics | deep-learning | 19,375 | OOM Error for MultiGPU | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Train
### Bug
NOTE: I have found other similar reports, none of which had satisfying answers ([here](https://github.com/ultralytics/ultralytics/issues/1971) and [here](https://github.com/ultralytics/ultralytics/issues/7030)).
I get a CUDA out of memory error when training on multiple GPUs:
- This only seems to occur after training, when the final validation step is performed.
- This is not an issue for validation performed between epochs.
- This is also not an issue when I perform the exact same training on a single GPU.
So it has to be something specifically related to the way the final validation step is done with multiple GPUs.
Can you think of anything about how `ultralytics` handles the final validation step for multiple GPUs that would be any different from the validation run between epochs?
I found one discussion suggesting that the final training batch is not properly deallocated by `ultralytics`. However, given how small my batches are, this shouldn't be a problem, even if it were true. How are the batch sizes determined for the final validation step? Are there options for the CLI call to `yolo` that would allow me to modify the batch sizes for the final validation step?
Here is the error:
> [rank1]: Traceback (most recent call last):
[rank1]: File "/root/.config/Ultralytics/DDP/_temp_52nj83zv140051013153552.py", line 13, in <module>
[rank1]: results = trainer.train()
[rank1]: File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 208, in train
[rank1]: self._do_train(world_size)
[rank1]: File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 323, in _do_train
[rank1]: self._setup_train(world_size)
[rank1]: File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 268, in _setup_train
[rank1]: dist.broadcast(self.amp, src=0) # broadcast the tensor from rank 0 to all other ranks (returns None)
[rank1]: File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2727, in broadcast
[rank1]: work = group.broadcast([tensor], opts)
[rank1]: torch.distributed.DistBackendError: NCCL error in: /pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:3384, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.25.1
[rank1]: ncclUnhandledCudaError: Call to CUDA function failed.
[rank1]: Last error:
[rank1]: Cuda failure 2 'out of memory'
W0222 11:04:45.741000 5868 site-packages/torch/distributed/elastic/multiprocessing/api.py:898] Sending process 5888 closing signal SIGTERM
E0222 11:04:45.905000 5868 site-packages/torch/distributed/elastic/multiprocessing/api.py:870] failed (exitcode: 1) local_rank: 1 (pid: 5889) of binary: SOME_DIR/anaconda3/envs/cuda_test/bin/python
Traceback (most recent call last):
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/run.py", line 893, in <module>
main()
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 354, in wrapper
return f(*args, **kwargs)
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/run.py", line 889, in main
run(args)
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/run.py", line 880, in run
elastic_launch(
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
/root/.config/Ultralytics/DDP/_temp_52nj83zv140051013153552.py FAILED
Failures:
<NO_OTHER_FAILURES>
Root Cause (first observed failure):
[0]:
time : 2025-02-22_11:04:45
host : DESKTOP-32K0UQE.
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 5889)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Traceback (most recent call last):
File "SOME_DIR/anaconda3/envs/cuda_test/bin/yolo", line 8, in <module>
sys.exit(entrypoint())
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/cfg/__init__.py", line 986, in entrypoint
getattr(model, mode)(**overrides) # default args from model
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/model.py", line 810, in train
self.trainer.train()
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 203, in train
raise e
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 201, in train
subprocess.run(cmd, check=True)
File "SOME_DIR/anaconda3/envs/cuda_test/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['SOME_DIR/anaconda3/envs/cuda_test/bin/python', '-m', 'torch.distributed.run', '--nproc_per_node', '2', '--master_port', '57061', '/root/.config/Ultralytics/DDP/_temp_52nj83zv140051013153552.py']' returned non-zero exit status 1.
Here are the args:
> Ultralytics 8.3.78 🚀 Python-3.10.16 torch-2.7.0.dev20250221+cu128 CUDA:0 (NVIDIA GeForce RTX 5070 Ti, 16303MiB)
CUDA:1 (NVIDIA TITAN RTX, 24576MiB)
engine/trainer: task=classify, mode=train, model=yolov8s-cls.pt, data=<SOME DATA>, epochs=1, time=None, patience=100, batch=4, imgsz=224, save=True, save_period=-1, cache=False, device=(0, 1), workers=8, project=None, name=train8, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=None, workspace=None, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, copy_paste_mode=flip, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=runs/classify/train8
### Environment
`Ultralytics 8.3.78 🚀 Python-3.10.16 torch-2.7.0.dev20250221+cu128 CUDA:0 (NVIDIA GeForce RTX 5070 Ti, 16303MiB)
Setup complete ✅ (24 CPUs, 31.3 GB RAM, 100.9/250.9 GB disk)
OS Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Environment Linux
Python 3.10.16
Install pip
RAM 31.31 GB
Disk 100.9/250.9 GB
CPU AMD Ryzen 9 5900X 12-Core Processor
CPU count 24
GPU NVIDIA GeForce RTX 5070 Ti, 16303MiB
GPU count 2
CUDA 12.8
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.0.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.7.0.dev20250221+cu128>=1.8.0
torch ✅ 2.7.0.dev20250221+cu128!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.22.0.dev20250221+cu128>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 7.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0`
### Minimal Reproducible Example
```
#!/usr/bin/env bash
root_dir=SOME_DIR
dataset=SOME_DATASET
dataset_dir="${root_dir}/data/${dataset}"
task="classify"
mode="train"
model="yolov8s-cls.pt"
epochs=1
imgsz=224
$yolo_path/yolo task=$task mode=$mode model=$model data=$dataset_dir epochs=$epochs imgsz=$imgsz device=0,1 batch=4
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-02-22T16:28:23Z | 2025-02-23T02:08:23Z | https://github.com/ultralytics/ultralytics/issues/19375 | [
"bug",
"classify"
] | jeffrey-cochran | 3 |
InstaPy/InstaPy | automation | 6,787 | Hhvh | open | 2024-01-25T03:18:22Z | 2024-01-25T03:18:22Z | https://github.com/InstaPy/InstaPy/issues/6787 | [] | Gabrypp | 0 |
|
matplotlib/mplfinance | matplotlib | 46 | Add option for plot function to return figure and not display the figure itself. | I desire to use mplfinance with tkinter.
To do this most straightforwardly, I desire an option to return the figure.
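For context, this is roughly how the returned figure would be used with tkinter (a sketch that assumes the proposed returnfig option exists and that df is an OHLC DataFrame that is already loaded):
```python
import tkinter as tk
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import mplfinance as mpf

fig = mpf.plot(df, type='candle', returnfig=True)  # hypothetical option returning the Figure

root = tk.Tk()
canvas = FigureCanvasTkAgg(fig, master=root)  # embed the Matplotlib figure in a Tk window
canvas.draw()
canvas.get_tk_widget().pack(fill=tk.BOTH, expand=True)
root.mainloop()
```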
So adding an option returnfig, using code something like the below:
Adding to vkwargs:
'returnfig': {'Default': False,'Validator': lambda value: isinstance(value, bool)},
And extending the "savefig" code with something like the below:
if config['savefig'] is not None:
    save = config['savefig']
    if isinstance(save, dict):
        plt.savefig(**save)
    else:
        plt.savefig(save)
else:
    if config['returnfig'] is not None:
        save = config['returnfig']
        if save:
            return fig
    # https://stackoverflow.com/a/13361748/1639359 suggests plt.show(block=False)
    plt.show(block=config['block']) | closed | 2020-03-07T18:23:31Z | 2020-05-06T01:33:27Z | https://github.com/matplotlib/mplfinance/issues/46 | [
"enhancement",
"released"
] | MikeAMCloud | 6 |
ultralytics/yolov5 | deep-learning | 12,833 | Let's say we have trained a custom model with YOLOv8 on low-resolution images; if I give that model a high-resolution image for inference, how will the model handle it in the background, and is there any information loss in the scaling process? | Let's say we have trained a custom model with YOLOv8 on low-resolution images. If I give that model a high-resolution image for inference, how will the model handle it in the background, and is there any information loss in the scaling process?
_Originally posted by @dharakpatel in https://github.com/ultralytics/yolov5/issues/2660#issuecomment-2011680043_
| closed | 2024-03-21T09:02:41Z | 2024-10-20T19:41:52Z | https://github.com/ultralytics/yolov5/issues/12833 | [
"Stale"
] | dharakpatel | 3 |
allenai/allennlp | data-science | 5,050 | How can we support text_to_instance running in parallel? | Though we have a multi-process data loader, we use for loops in the [predictor](https://github.com/allenai/allennlp/blob/main/allennlp/predictors/predictor.py#L299).
This makes the process slow when we have many inputs (especially when this predictor runs as a server).
I mean, can we use or add some method to make this call (`text_to_instance`) support multi-processing?
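For what it's worth, this is a rough sketch of the kind of user-side workaround I mean (assuming the predictor can be shared with worker processes, which may not hold for large models; `predictor` and `inputs` are placeholders):
```python
from multiprocessing import Pool

def make_instance(json_dict):
    # _json_to_instance calls text_to_instance under the hood
    return predictor._json_to_instance(json_dict)

with Pool(processes=4) as pool:
    instances = pool.map(make_instance, inputs)
```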
@epwalsh @dirkgr maybe you could help ?
| closed | 2021-03-11T11:14:18Z | 2021-12-30T06:29:36Z | https://github.com/allenai/allennlp/issues/5050 | [
"Feature request"
] | wlhgtc | 21 |
JoeanAmier/TikTokDownloader | api | 213 | Could the naming of downloaded works include like, comment, and repost counts, for example a file name like id + description + tags + favorite count + repost count? | open | 2024-05-08T15:46:08Z | 2024-05-09T13:51:44Z | https://github.com/JoeanAmier/TikTokDownloader/issues/213 | [] | Fanwaiwai | 1 |
|
python-security/pyt | flask | 18 | github-search performance | Something seems slow as of today! | closed | 2016-12-13T21:25:09Z | 2018-04-28T02:07:35Z | https://github.com/python-security/pyt/issues/18 | [
"github scanner"
] | StefanMich | 1 |
slackapi/bolt-python | fastapi | 349 | Re: External Data Selection in Select Menu | I posted an issue: https://github.com/slackapi/bolt-python/issues/336 where I'm unable to access external data from different blocks within the same form.
Is there a workaround to this? I think there must be some way to use the action block (we're able to access the input data from the first block here) to somehow update the view with a list of options for the next block? A solution similar to this is mentioned [here](https://stackoverflow.com/questions/61789719/slack-dialog-external-data-source-filtering) but some input here would be helpful.
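For context, a rough sketch of the `views_update` idea (the action ID and the view-building helper are placeholders, not working code from the docs):
```python
from slack_bolt import App

app = App(token="xoxb-...", signing_secret="...")  # placeholders

@app.action("first_select_action")  # action_id of the first block's select menu
def update_dependent_options(ack, body, client):
    ack()
    selected = body["actions"][0]["selected_option"]["value"]
    new_view = build_modal_with_options(selected)  # hypothetical helper returning the updated view dict
    client.views_update(
        view_id=body["view"]["id"],
        hash=body["view"]["hash"],
        view=new_view,
    )
```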
Thank you!
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2021-05-24T16:27:21Z | 2021-05-26T21:40:14Z | https://github.com/slackapi/bolt-python/issues/349 | [
"question"
] | mariebarrramsey | 14 |
Gozargah/Marzban | api | 1,460 | Is there a way to deploy Marzban in "UI Only" mode or delete Master node in an existing deployment? | **Is your feature request related to a problem? Please describe.**
Is there a way to deploy Marzban in "UI Only" mode or delete Master node in an existing deployment?
**Describe the solution you'd like**
I want to split the nodes away from the web UI
| closed | 2024-11-27T09:27:25Z | 2024-11-27T12:33:44Z | https://github.com/Gozargah/Marzban/issues/1460 | [
"Question"
] | YR1044 | 1 |
marshmallow-code/flask-marshmallow | rest-api | 7 | Request parsing example? | Is this only for serializing responses, or can it be used for parsing requests as well? Can you add an example of this?
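Roughly what I have in mind, as a sketch only (written with modern marshmallow semantics where `load` raises `ValidationError`; the schema and route names are made up):
```python
from flask import Flask, request, jsonify
from flask_marshmallow import Marshmallow
from marshmallow import fields, ValidationError

app = Flask(__name__)
ma = Marshmallow(app)

class UserSchema(ma.Schema):
    name = fields.Str(required=True)
    email = fields.Email(required=True)

@app.route("/users", methods=["POST"])
def create_user():
    try:
        data = UserSchema().load(request.get_json())  # parse and validate the request body
    except ValidationError as err:
        return jsonify(err.messages), 422
    return jsonify(data), 201
```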
| closed | 2014-12-04T03:37:23Z | 2014-12-22T14:39:23Z | https://github.com/marshmallow-code/flask-marshmallow/issues/7 | [
"question"
] | nickretallack | 5 |
ultralytics/yolov5 | pytorch | 12,873 | YOLOv5 multi camera real-time recognition based on IP address | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
How can I recognize multiple video streams with the detect.py file in the yolov5 master version, or is there a tutorial on recognizing multiple video streams? For example, I have two RTSP addresses: rtsp://admin:admin @192.168.43.210.8554/live and rtsp://admin:admin @192.168.43.16:8554/live. How can I make both of them run recognition in real time simultaneously? Please answer. Thank you
### Additional
_No response_ | closed | 2024-04-02T14:36:54Z | 2024-10-20T19:42:47Z | https://github.com/ultralytics/yolov5/issues/12873 | [
"question",
"Stale"
] | fsamekl | 6 |
pallets/flask | python | 5,269 | FYI, bad documentation link | I was looking at the github release page, which links to https://flask.palletsprojects.com/en/3.0.x/changes/#version-3-0-0
This page gives me a page not found error. | closed | 2023-09-30T15:29:18Z | 2023-10-15T00:06:13Z | https://github.com/pallets/flask/issues/5269 | [] | havedill | 1 |
benbusby/whoogle-search | flask | 635 | Instance redirecting to startpage | It's in the [list](https://github.com/benbusby/whoogle-search/blob/main/misc/instances.txt): https://search.exonip.de | closed | 2022-02-01T10:57:10Z | 2022-02-01T17:12:53Z | https://github.com/benbusby/whoogle-search/issues/635 | [
"bug"
] | ManeraKai | 3 |
miguelgrinberg/Flask-SocketIO | flask | 1,344 | SameSite attribute in Flask-SocketIO | After a recent Chrome update, I can see a warning about the SameSite attribute in the Issues bar. Two warnings, actually:
- Indicate whether to send a cookie in a cross-site request by specifying its SameSite attribute
- Indicate whether a cookie is intended to be set in cross-site context by specifying its SameSite attribute
I don't know how 'SameSite=None' and 'Secure' should be set in the configuration. If I set 'cookie' to None, I get rid of the second warning, but the first still persists. Can you help with that? Cheers in advance ;)
| closed | 2020-07-30T07:52:24Z | 2020-08-03T23:27:34Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1344 | [
"bug"
] | vonsky104 | 5 |
plotly/dash | data-visualization | 2,600 | allow explicitly picking which props to render in React components | Throughout our React components, we use `ramda.omit` to filter out props that we wish to exclude from being rendered.
We should invert this by using `ramda.pick` to include the props that we _do_ wish to render.
This will prevent us from accidentally sending props to be rendered in HTML.
| open | 2023-07-13T21:34:03Z | 2024-08-13T19:36:03Z | https://github.com/plotly/dash/issues/2600 | [
"feature",
"P3"
] | KoolADE85 | 0 |
desec-io/desec-stack | rest-api | 558 | api: add database trigger to exclude other records underneath a DNAME | open | 2021-07-03T13:12:22Z | 2021-07-03T13:12:22Z | https://github.com/desec-io/desec-stack/issues/558 | [
"bug",
"api"
] | peterthomassen | 0 |
|
graphistry/pygraphistry | jupyter | 385 | [FEA] anonymize graph | **Is your feature request related to a problem? Please describe.**
When sharing graphs with others, especially via going from private server / private account -> public hub, such as for publicizing or debugging, it'd help to have a way to quickly anonymize a graph
Sample use cases to make fast:
* show topology-only
* with and without renaming topology identifiers
* with and without renaming all cols
* including/dropping specific columns
* with/without preserving topology (prevent decloaking)
* with/without preserving value distributions
* as needed, opt in/out for particular columns
Perf:
* fast for graphs < 10M nodes, edges
* path to bigger graphs: if pandas, stick to vector ops, ...
**Describe the solution you'd like**
Something declarative and configurable like:
```python
g2 = g.anonymize(
    node_policy={
        'include': ['col1', ...],  # safelist of columns to include
        'preserve': ['col1', ...],  # opt-in columns not to anonymize,
        'rename': ['col1', ...] | True,
        'sample_drop': 0.2  # % nodes to drop; 0 (default) means preserve all
        'sample_add': 0.2  # % nodes to add; 0 (default) means add none
    },
    edge_policy={
        'drop': ['col2', ...]  # switch to opt-out via columns to exclude
    },
    sample_keep=..,
    sample_add=...
)
g2.plot()
g_orig = g2.deanonymize(g2._anon_remapping)
```
Sample transforms:
* rename columns
* remap categoricals, including both range values & distribution, but preserve type
* resample edges, both removing/adding
* ... and shift topology distributions & supernode locations
----
If there is a popular tabular or graph centric library here that is well-maintained, we should consider using
... but not if it looks like maintenance or security risks
**Additional context**
Ultimately it'd be good to push this to the UI via some sort of safe mode: role-specific masking, ...
| open | 2022-07-29T21:01:18Z | 2022-09-09T20:08:45Z | https://github.com/graphistry/pygraphistry/issues/385 | [
"enhancement",
"help wanted",
"good-first-issue"
] | lmeyerov | 3 |
sammchardy/python-binance | api | 605 | Got HTTPS warning | Data returned successfully but got https warnings.
```
data = get_historical_klines('BTCUSDT', Client.KLINE_INTERVAL_4HOUR, 'October 15, 2020', 'October 20, 2020')
/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py:981: InsecureRequestWarning: Unverified HTTPS request is being made to host 'api.binance.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
```
| closed | 2020-10-20T02:45:44Z | 2020-11-01T03:22:19Z | https://github.com/sammchardy/python-binance/issues/605 | [] | jleei | 2 |
MagicStack/asyncpg | asyncio | 524 | Connection._cleanup not being called when the connection is dropped | This issue is continued from comments on #421 and #283.
## Problem steps
Run the following code:
```py
#!/usr/bin/env python3
import asyncpg
import asyncio
class MyConnection(asyncpg.Connection):
    def _cleanup(self):
        print(1)
        return super()._cleanup()

async def amain():
    conn = await asyncpg.connect(connection_class=MyConnection)
    await asyncio.sleep(float('inf'))

if __name__ == '__main__':
    asyncio.run(amain())
```
Then restart postgres.
## Expected results
"1" is printed.
## Actual results
Nothing is printed.
## System info
* **asyncpg version**: commit 851d58651deb10593a31a289b735c180f7895e3e
* **PostgreSQL version**: 12.1
* **Python version**: CPython 3.8.1
* **Platform**: Arch Linux
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: No
* **If you built asyncpg locally, which version of Cython did you use?**: 0.29.14
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: Yes
| closed | 2020-01-10T01:39:36Z | 2020-01-10T04:05:05Z | https://github.com/MagicStack/asyncpg/issues/524 | [] | ioistired | 7 |
ContextLab/hypertools | data-visualization | 235 | animate=True does not work in Google Colab | animate=False renders static plots, but animate=True renders static empty boxes.
https://colab.research.google.com/drive/10LoEodWC7PeMYfMnEf85eK97rXNtS7lh#scrollTo=vpUphrib4qGs | open | 2020-02-26T07:59:34Z | 2020-02-26T07:59:34Z | https://github.com/ContextLab/hypertools/issues/235 | [] | mhlr | 0 |
MilesCranmer/PySR | scikit-learn | 367 | [Feature]: SymPy.jl integration | ### Feature Request
Rather than making the user define custom `extra_sympy_mappings`, we could potentially take advantage of [SymPy.jl](https://github.com/JuliaPy/SymPy.jl) and automatically generate the sympy operators.
Furthermore, and this would be more work, we could look at transferring the expressions directly from SymbolicRegression.jl, rather than using a .csv file as a medium. | open | 2023-07-03T01:58:49Z | 2023-07-03T01:58:58Z | https://github.com/MilesCranmer/PySR/issues/367 | [
"enhancement",
"priority: low"
] | MilesCranmer | 0 |
healthchecks/healthchecks | django | 623 | DB_NAME for sqlite different than /tmp/hc.sqlite | Hi, when I use a different location for DB_NAME, like `DB_NAME=/db/hc.sqlite`,
the error says it cannot find the location:
```
healthchecks-hc-1 | sendreports is now running
healthchecks-hc-1 | Traceback (most recent call last):
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
healthchecks-hc-1 | self.connect()
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/utils/asyncio.py", line 26, in inner
healthchecks-hc-1 | return func(*args, **kwargs)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 225, in connect
healthchecks-hc-1 | self.connection = self.get_new_connection(conn_params)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/utils/asyncio.py", line 26, in inner
healthchecks-hc-1 | return func(*args, **kwargs)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/sqlite3/base.py", line 206, in get_new_connection
healthchecks-hc-1 | conn = Database.connect(**conn_params)
healthchecks-hc-1 | sqlite3.OperationalError: unable to open database file
healthchecks-hc-1 |
healthchecks-hc-1 | The above exception was the direct cause of the following exception:
healthchecks-hc-1 |
healthchecks-hc-1 | Traceback (most recent call last):
healthchecks-hc-1 | File "/opt/healthchecks/./manage.py", line 10, in <module>
healthchecks-hc-1 | execute_from_command_line(sys.argv)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
healthchecks-hc-1 | utility.execute()
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/core/management/__init__.py", line 440, in execute
healthchecks-hc-1 | self.fetch_command(subcommand).run_from_argv(self.argv)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/core/management/base.py", line 414, in run_from_argv
healthchecks-hc-1 | self.execute(*args, **cmd_options)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/core/management/base.py", line 460, in execute
healthchecks-hc-1 | output = self.handle(*args, **options)
healthchecks-hc-1 | File "/opt/healthchecks/hc/api/management/commands/sendreports.py", line 105, in handle
healthchecks-hc-1 | while not self.sigterm and self.handle_one_report():
healthchecks-hc-1 | File "/opt/healthchecks/hc/api/management/commands/sendreports.py", line 39, in handle_one_report
healthchecks-hc-1 | profile = q.first()
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/query.py", line 753, in first
healthchecks-hc-1 | for obj in (self if self.ordered else self.order_by("pk"))[:1]:
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/query.py", line 320, in __iter__
healthchecks-hc-1 | self._fetch_all()
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/query.py", line 1507, in _fetch_all
healthchecks-hc-1 | self._result_cache = list(self._iterable_class(self))
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/query.py", line 57, in __iter__
healthchecks-hc-1 | results = compiler.execute_sql(
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 1359, in execute_sql
healthchecks-hc-1 | cursor = self.connection.cursor()
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/utils/asyncio.py", line 26, in inner
healthchecks-hc-1 | return func(*args, **kwargs)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 284, in cursor
healthchecks-hc-1 | return self._cursor()
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 260, in _cursor
healthchecks-hc-1 | self.ensure_connection()
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/utils/asyncio.py", line 26, in inner
healthchecks-hc-1 | return func(*args, **kwargs)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 243, in ensure_connection
healthchecks-hc-1 | with self.wrap_database_errors:
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/utils.py", line 91, in __exit__
healthchecks-hc-1 | raise dj_exc_value.with_traceback(traceback) from exc_value
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
healthchecks-hc-1 | self.connect()
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/utils/asyncio.py", line 26, in inner
healthchecks-hc-1 | return func(*args, **kwargs)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 225, in connect
healthchecks-hc-1 | self.connection = self.get_new_connection(conn_params)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/utils/asyncio.py", line 26, in inner
healthchecks-hc-1 | return func(*args, **kwargs)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/sqlite3/base.py", line 206, in get_new_connection
healthchecks-hc-1 | conn = Database.connect(**conn_params)
healthchecks-hc-1 | django.db.utils.OperationalError: unable to open database file
healthchecks-hc-1 | sendalerts is now running
healthchecks-hc-1 | Traceback (most recent call last):
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
healthchecks-hc-1 | self.connect()
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/utils/asyncio.py", line 26, in inner
healthchecks-hc-1 | return func(*args, **kwargs)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/base/base.py", line 225, in connect
healthchecks-hc-1 | self.connection = self.get_new_connection(conn_params)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/utils/asyncio.py", line 26, in inner
healthchecks-hc-1 | return func(*args, **kwargs)
healthchecks-hc-1 | File "/usr/local/lib/python3.10/site-packages/django/db/backends/sqlite3/base.py", line 206, in get_new_connection
healthchecks-hc-1 | conn = Database.connect(**conn_params)
healthchecks-hc-1 | sqlite3.OperationalError: unable to open database file
``` | closed | 2022-03-24T08:56:14Z | 2022-03-26T12:20:33Z | https://github.com/healthchecks/healthchecks/issues/623 | [] | alyvusal | 2 |
bigscience-workshop/petals | nlp | 6 | [RESEARCH] LM API merit system | Why:
- contributors who support the swarm over long time should feel that they are appreciated
- one client should not be able to DDOS the entire swarm - it should be prioritized according to some pre-defined system
Optional:
- client that has higher point total may end up prioritized on the processing queue
- contributors who run their GPUs should be motivated to use the model for something of theirs (free coupon effect)
__Demo constraint:__ the first public version must not have a mechanism for converting internal points into anything except for priority usage and participant self-worth (e.g. via leaderboards).
| open | 2022-05-31T19:14:14Z | 2023-01-27T06:33:21Z | https://github.com/bigscience-workshop/petals/issues/6 | [
"research"
] | justheuristic | 3 |
biolab/orange3 | pandas | 6,861 | Edit Domain: no way to get rid of warning "categories mapping for [variable] does not apply to current input" after change in upstream Formula | **What's wrong?**
When changing the mapping of categories of a categorical variable in Edit Domain, Edit Domain displays a warning "categories mapping for [variable] does not apply to current input" once changes in an upstream Formula produce different or additional categories. Even when adapting the category mappings to the new inputs, the warning persists.
**How can we reproduce the problem?**
[edit domain category mappings.ows.zip](https://github.com/user-attachments/files/16359530/edit.domain.category.mappings.ows.zip)
In the Formula in the attached workflow, change 'America' in the if statement to 'North America'. Edit Domain will show the warning, although in its dialog box the category name has already been updated. Even when explicitly defining a new mapping based on the new category names, the warning doesn't go away.
**What's your environment?**
- Operating system: Mac OS 14.5
- Orange version: 3.37
- How you installed Orange: from DMG followed by updates using the internal installer within Orange
| closed | 2024-07-24T08:55:41Z | 2024-10-03T17:15:31Z | https://github.com/biolab/orange3/issues/6861 | [
"bug"
] | wvdvegte | 1 |
django-oscar/django-oscar | django | 3,688 | Error running 0005_regenerate_user_address_hashes migration | Found a bug? Please fill out the sections below.
### Issue Summary
I have a legacy project with django-oscar==1.5.2 and I'm trying to update it to 2.0. I'm getting the following error when trying to run the migration `0005_regenerate_user_address_hashes` from the `Address` app:
```
django.db.utils.IntegrityError: duplicate key value violates unique constraint "address_useraddress_user_id_4e104dbf168846a9_uniq"
DETAIL: Key (user_id, hash)=(445, 4102561442) already exists.
```
This error is only happening with a specific user who has two addresses; one of their addresses already has the new hash that the migration is generating, so I figured out the error is due to the `Address` app having the `unique_together = ('user', 'hash')` attribute in its Meta class.
So, what is the best way to fix this? Should I fork the `Address` app and modify that migration?
### Technical details
* Python version: 3.6.10
* Django version: 2.0.13
* Oscar version: 2.0
| closed | 2021-03-31T22:11:41Z | 2021-04-01T04:10:00Z | https://github.com/django-oscar/django-oscar/issues/3688 | [] | Jeanluis019 | 1 |
iusztinpaul/energy-forecasting | streamlit | 7 | lesson 3 poetry Installing training-pipeline (0.1.0): Failed | So I configured pypi credentials:
```
sudo apt install -y apache2-utils
pip install passlib
mkdir ~/.htpasswd
htpasswd -sc ~/.htpasswd/htpasswd.txt energy-forecasting
poetry config repositories.my-pypi http://localhost
poetry config http-basic.my-pypi energy-forecasting <password>
```
That gave me this; the password seems to be missing, but I'm not sure whether that is expected here or not:
```
❯ cat ~/.config/pypoetry/auth.toml
[http-basic.my-pypi]
username = "energy-forecasting"
```
set up the pypi server separately:
```
docker run -p 80:8080 -v ~/.htpasswd:/data/.htpasswd pypiserver/pypiserver:latest run -P .htpasswd/htpasswd.txt --overwrite
```
Built and published in `training-pipeline`
But `poetry install` in `batch-predictions-pipeline` failed:
```
Using python3 (3.9.10)
Installing dependencies from lock file
Package operations: 1 install, 0 updates, 0 removals
• Installing training-pipeline (0.1.0): Failed
RuntimeError
Retrieved digest for link training_pipeline-0.1.0.tar.gz(md5:ddbaa...) not in poetry.lock metadata {'sha256:acf4ba6c...', 'sha256:b5db651b07...'}
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/installation/chooser.py:117 in _get_links
113│
114│ selected_links.append(link)
115│
116│ if links and not selected_links:
→ 117│ raise RuntimeError(
118│ f"Retrieved digest for link {link.filename}({h}) not in poetry.lock"
119│ f" metadata {hashes}"
120│ )
121│
``` | closed | 2023-06-27T14:07:30Z | 2023-07-08T14:51:12Z | https://github.com/iusztinpaul/energy-forecasting/issues/7 | [] | hududed | 14 |
2noise/ChatTTS | python | 66 | few shot generate | It's really amazing and stunning, but the noise is a bit noticeable and there are obvious pause errors.
Does it support cloning a new voice with few-shot samples? | closed | 2024-05-30T01:29:41Z | 2024-07-12T05:11:31Z | https://github.com/2noise/ChatTTS/issues/66 | [
"stale"
] | ben-8878 | 0 |
coqui-ai/TTS | pytorch | 3,111 | MAX token limit for XTTS | ### Describe the bug
I would like to know the max token limit for XTTS: while I was passing text of 130 tokens, it said that it has a limit of 400 tokens for a single input prompt. Kindly explain how you calculate tokens.
### To Reproduce
Just give it a long text
### Expected behavior
AssertionError: ❗ XTTS can only generate text with a maximum of 400 tokens.
### Logs
```shell
INFO: 2023-10-27 06:03:52,381: app.main model loaded in 133.42864871025085 seconds
INFO: 2023-10-27 06:03:53,914: app.main I: Generating new audio...
Traceback (most recent call last):
File "/pkg/modal/_container_entrypoint.py", line 374, in handle_input_exception
yield
File "/pkg/modal/_container_entrypoint.py", line 465, in run_inputs
res = imp_fun.fun(*args, **kwargs)
File "/root/app/main.py", line 181, in clone_voice
out = model.inference(
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 620, in inference
text_tokens.shape[-1] < self.args.gpt_max_text_tokens
AssertionError: ❗ XTTS can only generate text with a maximum of 400 tokens.
Traceback (most recent call last):
File "/pkg/modal/_container_entrypoint.py", line 374, in handle_input_exception
yield
File "/pkg/modal/_container_entrypoint.py", line 465, in run_inputs
res = imp_fun.fun(*args, **kwargs)
File "/root/app/main.py", line 226, in process_clone_job
clone_voice.remote(
File "/pkg/synchronicity/synchronizer.py", line 497, in proxy_method
return wrapped_method(instance, *args, **kwargs)
File "/pkg/synchronicity/combined_types.py", line 26, in __call__
raise uc_exc.exc from None
File "<ta-x9hXLFyDMwQoOVUWS36QVU>:/root/app/main.py", line 181, in clone_voice
File "<ta-x9hXLFyDMwQoOVUWS36QVU>:/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
File "<ta-x9hXLFyDMwQoOVUWS36QVU>:/opt/conda/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 620, in inference
AssertionError: ❗ XTTS can only generate text with a maximum of 400 tokens.
```
### Environment
```shell
no
```
### Additional context
no | closed | 2023-10-27T06:04:42Z | 2024-03-09T16:07:42Z | https://github.com/coqui-ai/TTS/issues/3111 | [
"bug"
] | haiderasad | 3 |
Farama-Foundation/PettingZoo | api | 548 | Question: what action mask should I use for multidiscrete action space? | I'm using `apitest()` to verify my customized environment.
The error I got is as follows:

The error shows that I'm indexing a scalar, which should not be the case since my action space is `spaces.MultiDiscrete` and is supposed to be a list.
```
self.action_spaces = {
    i: self._init_attacker_acts() if "attacker" in i else self._init_defender_acts() for i in self.agents
}

def _init_defender_acts(self):
    acts = [self.graph.num_bins for _ in range(self.graph.num_vertices)]
    acts.append(self.graph.num_edges)
    return spaces.MultiDiscrete(acts)
```
I'm also using an action mask.
```
self.observation_spaces = {
    i: spaces.Dict({'observation': self._init_attacker_obs(),
                    'action_mask': spaces.Box(low=0, high=2, shape=(self.graph.num_edges,))})
    if "attacker" in i else
    spaces.Dict({'observation': self._init_defender_obs(),
                 'action_mask': spaces.Box(low=0, high=2, shape=(self.graph.num_vertices+1,))})
    for i in self.agents
}
```
Why is the API test inputting an int-type action to my agent, which requires a multi-discrete action?
Thanks!
| closed | 2021-11-24T20:38:08Z | 2021-11-25T19:42:51Z | https://github.com/Farama-Foundation/PettingZoo/issues/548 | [] | aduyinuo | 0 |
graphql-python/graphql-core | graphql | 149 | Modify query programmatically | Sent here from https://github.com/graphql-python/gql/issues/272
There is a way in `gql` to construct queries programmatically with DSL module or by parsing a string into AST with `gql.gql()` and then using `print_ast` from `graphql` to get back the string.
```python
import gql
dg = gql.gql("""
query getContinents {
continents {
code
name
}
}
""")
from graphql import print_ast
print(print_ast(dg))
```
What is not clear is how to actually find nodes in AST and edit or expand them. For example, finding a parent of `node` in the query below (`continents`) and adding an attribute to it (`(code:"AF")`).
```graphql
query getContinents {
continents {
code
name
}
}
```
So that the query becomes:
```
query getContinents {
continents (code:"AF") {
code
name
}
}
```
I looked into the docs, but they don't actually explain:
* [ ] How to find AST node that needs modification?
* [ ] How to modify it (upsert attributes)?
The documentation contains a chapter about schemas (https://graphql-core-3.readthedocs.io/en/latest/usage/extension.html?highlight=modify%20ast#extending-a-schema), and as I am new to GraphQL I am not yet sure if schema and query are the same things.
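For illustration, here is a rough sketch of one way I imagine this could be done with the visitor API (I have not verified that this is the idiomatic approach; the node and field names are taken from graphql-core's AST classes):
```python
from graphql import parse, print_ast, visit, Visitor
from graphql.language import ArgumentNode, FieldNode, NameNode, StringValueNode

class AddCodeArgument(Visitor):
    def enter_field(self, node: FieldNode, *args):
        if node.name.value == "continents":
            # append a (code: "AF") argument to the field in place
            node.arguments = (*node.arguments,
                              ArgumentNode(name=NameNode(value="code"),
                                           value=StringValueNode(value="AF")))

ast = parse("query getContinents { continents { code name } }")
visit(ast, AddCodeArgument())
print(print_ast(ast))  # continents(code: "AF") { code name }
```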
# Feature requests
I am not sure [GraphQL.js](https://github.com/graphql/graphql-js) includes this too and not sure that fundamentally changes the way GraphQL works.
| closed | 2021-12-04T07:19:50Z | 2021-12-27T18:26:43Z | https://github.com/graphql-python/graphql-core/issues/149 | [] | abitrolly | 7 |
chaos-genius/chaos_genius | data-visualization | 390 | Inconsistent analytics occurs between hourly panel metrics, DeepDrill & anomaly data | closed | 2021-11-11T16:18:49Z | 2021-11-12T10:40:04Z | https://github.com/chaos-genius/chaos_genius/issues/390 | [
"🐛 bug",
"🧮 algorithms",
"P1"
] | suranah | 1 |
|
pallets-eco/flask-sqlalchemy | flask | 464 | How about making the table name prefix configurable? | Add a configuration option, like `SQLALCHEMY_TABLENAME_PREFIX`. | closed | 2017-01-19T03:24:49Z | 2020-12-05T21:18:10Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/464 | [] | protream | 3 |
InstaPy/InstaPy | automation | 6,133 | If it is possible to consider all tags for like_by_tags action | By looking at the `like_by_tags` action, you'll find that it takes a number of tags as its parameters.
And when you run it without any tags like this:
> session.like_by_tags(amount=10)
It returns a runtime error.
The question is: what should I do if I would like to consider **all posts** for liking, regardless of tags? Is there any way? | closed | 2021-03-27T16:20:10Z | 2021-07-21T05:18:41Z | https://github.com/InstaPy/InstaPy/issues/6133 | [
"wontfix"
] | sasan007 | 1 |
521xueweihan/HelloGitHub | python | 2,071 | [Open-source self-recommendation] PHP project: a high-performance, lightweight framework for incremental ES document updates based on Canal | ## Project recommendation
- Project URL: https://github.com/WGrape/esupdater
- Category: PHP
- Follow-up update plan: routine project maintenance; provide out-of-the-box Canal and ES services so the project can be picked up and used almost instantly
- Project description:
    - Overview: esupdater is a high-performance, lightweight framework that implements incremental ES document updates based on Canal; it is easy to get started with, easy to integrate into a business, and can implement business requirements elegantly
    - Scenarios: commonly used for search businesses; for example, when incremental MySQL data needs to be synced to ES in real time, esupdater can be used
    - Reasons to recommend: high performance, containerized, lightweight, event-driven, easily extensible, complete documentation
    - Screenshot:

| closed | 2022-01-11T13:03:11Z | 2022-01-28T02:24:25Z | https://github.com/521xueweihan/HelloGitHub/issues/2071 | [
"已发布",
"PHP 项目"
] | WGrape | 2 |
Gozargah/Marzban | api | 976 | creating vless users via cli | Since the site gets filtered after installation, we don't have web access to create users. Is it possible to define a user via the command line, for example one that connects with the vless protocol and ws network? If there is a way, I would appreciate some guidance. | closed | 2024-05-09T05:47:18Z | 2024-05-13T05:29:47Z | https://github.com/Gozargah/Marzban/issues/976 | [
"Invalid"
] | wolfheartman | 7 |
tqdm/tqdm | pandas | 1,323 | downloading file using requests.get caused a bug | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
- [ ] I have visited the [source website], and in particular
read the [known issues]
- [ ] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable:
# here is the version:
```
4.62.3 3.7.5 (default, Jul 5 2021, 10:38:47)
[GCC 7.5.0] linux
```
# the bug I encountered:
- I used the example code to download file using requests.
```python
import pdb
import requests, os
from tqdm import tqdm
# this link is available.
eg_link = "https://github.com/dongrixinyu/jiojio/releases/download/v1.1.4/default_pos_model.zip"
response = requests.get(eg_link, stream=True)
with tqdm.wrapattr(open('default_pos_model.zip', "wb"), "write",
                   miniters=1, desc=eg_link.split('/')[-1],
                   total=int(response.headers.get('content-length', 0))) as fout:
    for chunk in response.iter_content(chunk_size=4096):
        fout.write(chunk)
    # while the process executes to this line, check the filesize of default_pos_model.zip by `ls -lSt`, you can see the file size is 92508160
    pdb.set_trace()
# while the process finished, check the filesize of default_pos_model.zip by `ls -lSt`, you can see the file size is 92510542
```
- the file size of `default_pos_model.zip` changed because the chunk_size 4096 can not be exactly divided. The file descriptor did not close until the process finished.
- So, I assume this bug is caused by tqdm or requests.
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
| open | 2022-05-05T06:14:29Z | 2023-05-03T21:29:50Z | https://github.com/tqdm/tqdm/issues/1323 | [
"invalid ⛔",
"question/docs ‽"
] | dongrixinyu | 2 |
ray-project/ray | python | 50,818 | CI test linux://python/ray/air:test_object_extension is flaky | CI test **linux://python/ray/air:test_object_extension** is flaky. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8495#01952b30-22c6-4a0f-9857-59a7988f67d8
- https://buildkite.com/ray-project/postmerge/builds/8491#01952b00-e020-4d4e-b46a-209c0b3dbf5b
- https://buildkite.com/ray-project/postmerge/builds/8491#01952ad9-1225-449b-84d0-29cfcc6a048c
DataCaseName-linux://python/ray/air:test_object_extension-END
Managed by OSS Test Policy | closed | 2025-02-22T01:46:49Z | 2025-03-04T06:34:57Z | https://github.com/ray-project/ray/issues/50818 | [
"bug",
"triage",
"data",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 36 |
reloadware/reloadium | django | 166 | WSL not supported | When I use a WSL conda environment to run a Python file, I get an error. We hope this can be fixed soon. | closed | 2023-09-18T05:56:37Z | 2023-11-07T16:09:10Z | https://github.com/reloadware/reloadium/issues/166 | [] | mapengsen | 3 |
PokeAPI/pokeapi | api | 573 | Pokédex entries | Hi !
I'd like to suggest a feature request: Pokédex entries grouped by generation.
Thanks to everyone!
| closed | 2021-02-17T17:27:21Z | 2021-03-02T17:45:57Z | https://github.com/PokeAPI/pokeapi/issues/573 | [] | raphaelwilker | 4 |
neuml/txtai | nlp | 373 | Integration with Argilla for Data Exploration and Annotation | > Argilla is a production-ready Python framework for exploring, annotating, and managing data in NLP projects.
This seems to be a (or the) leading tool for annotation. Perhaps it would be a good candidate for some sort of integration with txtai? It seems like something like this - a GUI for exploring, annotating, enriching, monitoring etc. - could be a basis for txtai v6? It's the one main thing that Haystack (and, presumably, Jina) offers that txtai doesn't, though the free Annotation Tool in Haystack is very basic in comparison to Rubrix. I imagine that Haystack Cloud (their commercially focused tool) is a similar tool, and will be what drives the growth of Deepset.
https://github.com/argilla-io/argilla
Edit: Rubrix was changed to Argilla | closed | 2022-10-22T02:20:32Z | 2023-08-30T18:08:07Z | https://github.com/neuml/txtai/issues/373 | [] | nickchomey | 6 |
OFA-Sys/Chinese-CLIP | nlp | 369 | Insecure call to torch.load in the load_from_name function | In the load_from_name function, torch.load is called, which implicitly uses the pickle module and is known to be insecure. Please fix it.
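One possible mitigation (an assumption on my part, not the project's official fix) is to pass `weights_only=True`, available in PyTorch >= 1.13, which refuses to unpickle arbitrary objects. A minimal sketch; the original function is quoted below for reference:
```python
import torch

def load_checkpoint_safely(path: str):
    # weights_only=True restricts unpickling to tensors and primitive containers,
    # so a crafted checkpoint cannot execute code while being loaded.
    with open(path, "rb") as opened_file:
        return torch.load(opened_file, map_location="cpu", weights_only=True)
```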
``` python
def load_from_name(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu",
download_root: str = None, vision_model_name: str = None, text_model_name: str = None, input_resolution: int = None):
if name in _MODELS:
model_path = _download(_MODELS[name], download_root or os.path.expanduser("~/.cache/clip"))
model_name, model_input_resolution = _MODEL_INFO[name]['struct'], _MODEL_INFO[name]['input_resolution']
elif os.path.isfile(name):
assert vision_model_name and text_model_name and input_resolution, "Please specify specific 'vision_model_name', 'text_model_name', and 'input_resolution'"
model_path = name
model_name, model_input_resolution = f'{vision_model_name}@{text_model_name}', input_resolution
else:
raise RuntimeError(f"Model {name} not found; available models = {available_models()}")
with open(model_path, 'rb') as opened_file:
# loading saved checkpoint
checkpoint = torch.load(opened_file, map_location="cpu")
model = create_model(model_name, checkpoint)
if str(device) == "cpu":
model.float()
else:
model.to(device)
return model, image_transform(model_input_resolution)
``` | open | 2024-11-20T06:19:40Z | 2024-11-20T06:19:40Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/369 | [] | alpha731 | 0 |
GibbsConsulting/django-plotly-dash | plotly | 463 | Pa11y CI failures because `django-plotly-dash` `iframe` has no `title` | I'm part of a small team using `django-plotly-dash` (to great effect!) on a Django-based web project to embed some visualisations. So far I've just been using the
```
{% load plotly_dash %}
{% plotly_app name="example" %}
```
idiom, which creates an `iframe` on the page and embeds the Plotly Dash there.
We recently added [Pa11y CI](https://github.com/pa11y/pa11y-ci) accessibility tests to our continuous integration and the `iframe` environment produces three errors, two of which I think should be resolved in `django-plotly-dash`. Formatting the CI output, the errors are
1. [Frames must have an accessible name](https://dequeuniversity.com/rules/axe/4.7/frame-title?application=axeAPI)
2. [Frames should be tested with axe-core](https://dequeuniversity.com/rules/axe/4.7/frame-tested?application=axeAPI)
3. Iframe element requires a non-empty title attribute that identifies the frame.
I think error 2 is something preventing Pa11y from injecting JavaScript for testing into the `iframe` and I don't think is an upstream issue. Errors 1 and 3 can be resolved by providing any value to the `title` property of the `iframe`. e.g. I tried adding some text (e.g. "example") in the `iframe` in the `django-plotly-dash` templates:
https://github.com/GibbsConsulting/django-plotly-dash/blob/e06bb465c55cb81ec100391031f6c7d259b779b8/django_plotly_dash/templates/django_plotly_dash/plotly_app.html#L1-L3
and errors 1 and 3 disappear.
Though any text will do, my understanding (from the links above) is that screen readers will use the `title` information, so something descriptive (perhaps a user argument?) will help users navigate the page. | open | 2023-06-05T13:19:06Z | 2023-06-06T15:09:11Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/463 | [] | warrickball | 1 |
PaddlePaddle/models | computer-vision | 5,544 | What does dim_ref in train_val_kitti.yaml mean? | What does dim_ref in train_val_kitti.yaml mean, and how were its values obtained? If switching to another dataset, how should the values be filled in? | open | 2022-09-25T00:15:59Z | 2024-02-26T05:07:54Z | https://github.com/PaddlePaddle/models/issues/5544 | [] | Bingoang | 0 |
autokey/autokey | automation | 255 | FIX: not all UTF characters are supported | ## Classification: Bug
## Reproducibility: Always
try adding a keyboard paste phrase that is either:
⎡⎦
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
## expected result:
those characters are inserted when the phrase is triggered
## actual result:
the trigger word is deleted and nothing is inserted; when hitting backspace, the number of characters in the phrase is deleted from the text preceding where the trigger word was used
| open | 2019-02-09T02:17:26Z | 2020-01-08T19:37:40Z | https://github.com/autokey/autokey/issues/255 | [
"bug",
"duplicate",
"phrase expansion",
"scripting"
] | allanlaal | 21 |
hyperspy/hyperspy | data-visualization | 2,638 | Improve load function example in user guide | There should be an example in the user guide showing that the `load` function returns a list of signals instead of a single signal when several datasets are present in the file. | closed | 2021-02-04T11:23:07Z | 2021-03-09T22:00:20Z | https://github.com/hyperspy/hyperspy/issues/2638 | [
"affects: documentation",
"good first PR"
] | ericpre | 1 |
streamlit/streamlit | python | 10,880 | `st.dataframe` displays wrong indices for pivoted dataframe | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Under some conditions streamlit will display the wrong indices in pivoted / multi-indexed dataframes.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-10880)
```Python
import streamlit as st
import pandas as pd
df = pd.DataFrame(
{"Index": ["X", "Y", "Z"], "A": [1, 2, 3], "B": [6, 5, 4], "C": [9, 7, 8]}
)
df = df.set_index("Index")
st.dataframe(df)
st.dataframe(df.T.corr())
st.dataframe(df.T.corr().unstack())
print(df.T.corr().unstack())
```
### Steps To Reproduce
1. `streamlit run` the provided code.
2. Look at the result of the last `st.dataframe()` call.
### Expected Behavior
Inner index should be correct.
### Current Behavior
The provided code renders the following tables:

The first two tables are correct, while the last one displays a duplicate of the first index instead of the second one.
In comparison, this is the correct output from the `print()` statement:
```
Index Index
X X 1.000000
Y 0.999597
Z 0.888459
Y X 0.999597
Y 1.000000
Z 0.901127
Z X 0.888459
Y 0.901127
Z 1.000000
dtype: float64
```
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.2
- Python version: 3.12.9
- Operating System: Linux
- Browser: Google Chrome / Firefox
### Additional Information
The problem does not occur, when the default index is used.
```python
import streamlit as st
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3], "B": [6, 5, 4], "C": [9, 7, 8]})
st.dataframe(df.T.corr().unstack())
```
This renders the correct dataframe:

---
This issue is possibly related to https://github.com/streamlit/streamlit/issues/3696 (parsing column names and handling their types) | open | 2025-03-23T15:50:44Z | 2025-03-24T13:49:35Z | https://github.com/streamlit/streamlit/issues/10880 | [
"type:bug",
"feature:st.dataframe",
"status:confirmed",
"priority:P3",
"feature:st.data_editor"
] | punsii2 | 2 |
home-assistant/core | python | 140,708 | After upgrading to HA core 2025.3.3 Tailscale addon could not resolve DNS | ### The problem
I know this is a Tailscale-specific issue, but due to the nature of the Tailscale add-on on HA, this broke DNS resolution for my whole network, with no possible fix from the Tailscale admin panel's DNS settings (I tried it all). Only after some time did I correlate this with the HA upgrade, and when I reverted to an older version of HA Core the problem was resolved.
Does anybody have an idea what could have gone wrong, or what a working configuration for HA and/or Tailscale would be?
### What version of Home Assistant Core has the issue?
2025.3.3
### What was the last working version of Home Assistant Core?
2025.3.2
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Tailscale
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-16T09:29:26Z | 2025-03-18T12:21:01Z | https://github.com/home-assistant/core/issues/140708 | [
"integration: tailscale"
] | mrupnikm | 2 |
plotly/dash | data-science | 2,438 | [BUG] WinError 10038 during hot reload | Hi all,
I am experiencing an exception that occurs during hot reloading. First, I thought this occurred because of my upgrade to Python 3.10, but after downgrading back to Python 3.9, the issue still occurs. However, the hot reloading itself seems to work fine.
```
dash 2.8.1
dash-bootstrap-components 1.3.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
My OS version is Windows
```
2023-03-01 12:00:54 INFO Dash is running on http://localhost:8050/dashboard/
Dash is running on http://localhost:8050/dashboard/
* Serving Flask app 'main'
* Debug mode: on
Exception in thread Thread-2 (serve_forever):
Traceback (most recent call last):
File "C:\Users\NiekLeeuwenvan\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\NiekLeeuwenvan\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\NiekLeeuwenvan\Code\pk-dashboard\venv\lib\site-packages\werkzeug\serving.py", line 766, in serve_forever
super().serve_forever(poll_interval=poll_interval)
File "C:\Users\NiekLeeuwenvan\AppData\Local\Programs\Python\Python310\lib\socketserver.py", line 232, in serve_forever
ready = selector.select(poll_interval)
File "C:\Users\NiekLeeuwenvan\AppData\Local\Programs\Python\Python310\lib\selectors.py", line 324, in select
r, w, _ = self._select(self._readers, self._writers, [], timeout)
File "C:\Users\NiekLeeuwenvan\AppData\Local\Programs\Python\Python310\lib\selectors.py", line 315, in _select
r, w, x = select.select(r, w, w, timeout)
OSError: [WinError 10038] An operation was attempted on something that is not a socket
```
I am running the Dash application using the default example:
```
if __name__ == '__main__':
app.run_server(debug=True, host='localhost')
```
Please let me know if this issue is better suited for the Werkzeug issue tracker.
| closed | 2023-03-01T11:06:35Z | 2024-10-13T13:10:16Z | https://github.com/plotly/dash/issues/2438 | [] | niekvleeuwen | 3 |
huggingface/datasets | machine-learning | 6,778 | Dataset.to_csv() missing commas in columns with lists | ### Describe the bug
The `to_csv()` method does not output commas in lists, so when the Dataset is loaded back in, the data structure of the column containing a list is not correct.
Here's an example:
Obviously, it's not as trivial as inserting commas in the list, since its a comma-separated file. But hopefully there's a way to export the list in a way that it'll be imported by `load_dataset()` correctly.
### Steps to reproduce the bug
Here's some code to reproduce the bug:
```python
from datasets import Dataset
ds = Dataset.from_dict(
{
"pokemon": ["bulbasaur", "squirtle"],
"type": ["grass", "water"]
}
)
def ascii_to_hex(text):
return [ord(c) for c in text]
ds = ds.map(lambda x: {"int": ascii_to_hex(x['pokemon'])})
ds.to_csv('../output/temp.csv')
```
temp.csv then contains the actual output shown below.
### Expected behavior
ACTUAL OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[ 98 117 108 98 97 115 97 117 114]
squirtle,water,[115 113 117 105 114 116 108 101]
```
EXPECTED OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[98, 117, 108, 98, 97, 115, 97, 117, 114]
squirtle,water,[115, 113, 117, 105, 114, 116, 108, 101]
```
or probably something more like this since it's a CSV file:
```
pokemon,type,int
bulbasaur,grass,"[98, 117, 108, 98, 97, 115, 97, 117, 114]"
squirtle,water,"[115, 113, 117, 105, 114, 116, 108, 101]"
```
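A possible workaround (my own suggestion, not necessarily the recommended fix) is to serialize list columns to JSON strings before exporting, so the commas survive the CSV round-trip:
```python
import json
from datasets import Dataset

ds = Dataset.from_dict(
    {"pokemon": ["bulbasaur", "squirtle"], "type": ["grass", "water"]}
)
ds = ds.map(lambda x: {"int": [ord(c) for c in x["pokemon"]]})

# Store the list column as a JSON string so it can be parsed back unambiguously
ds = ds.map(lambda x: {"int": json.dumps(x["int"])})
ds.to_csv("temp.csv")
# When loading the CSV again, json.loads(x["int"]) restores the original list.
```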
### Environment info
### Package Version
Name: datasets
Version: 2.16.1
### Python
version: 3.10.12
### OS Info
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
...
UBUNTU_CODENAME=jammy | open | 2024-04-04T16:46:13Z | 2024-04-08T15:24:41Z | https://github.com/huggingface/datasets/issues/6778 | [] | mpickard-dataprof | 1 |
miguelgrinberg/python-socketio | asyncio | 565 | timestampParam in client as as optional | Hello,
I would like to know if there is an option to disable the addition of the `t` parameter (timestampParam) in the client.
In our WSS application we use the `t` parameter in the URL for a different purpose.
Thanks. | closed | 2020-11-17T08:20:24Z | 2020-11-17T10:20:55Z | https://github.com/miguelgrinberg/python-socketio/issues/565 | [
"question"
] | rondered | 4 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 412 | Issue with cover letter upload | Hi the gpt is providing cover letter but it's is not properly lined, here is what I mean
[tmpfftsuua6.pdf](https://github.com/user-attachments/files/17080224/tmpfftsuua6.pdf)
<img width="630" alt="image" src="https://github.com/user-attachments/assets/960c2d84-050e-40f0-bc7f-09231d808315">
| closed | 2024-09-20T19:49:25Z | 2024-09-25T14:12:07Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/412 | [] | ParthSoni-CS | 3 |
seleniumbase/SeleniumBase | pytest | 3,278 | Run CDP Mode in GitHub Actions | I am trying to get a Python script using SeleniumBase CDP Mode to work using GitHub Actions. I know I am supposed to be using Xvfb, but I am not sure how to configure it for this application.
I get an error when it tries to run the python code.
Please provide a simple working example of how to run in CDP mode using github actions.
Here is my YAML file:
```yaml
# This is a basic workflow to help you get started with Actions
name: Python Script

# Controls when the workflow will run
on:
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: |
          sudo apt install xvfb
          python -m pip install --upgrade pip wheel setuptools
          pip install seleniumbase
          pip install pyvirtualdisplay
      - name: Install Chrome
        run: |
          sudo apt install google-chrome-stable
      # Runs a single command using the runners shell
      - name: Run python script
        run: python .github/workflows/example.py
```
And here is my example.py:
```python
"""Example of using CDP Mode with WebDriver"""
from seleniumbase import SB
import pyvirtualdisplay


def main():
    print("Hello World")
    display = pyvirtualdisplay.Display()
    display.start()
    with SB(uc=True, test=True, locale_code="en", xvfb=True) as sb:
        url = "https://www.priceline.com/"
        sb.activate_cdp_mode(url)
        print(sb.get_title())


if __name__ == '__main__':
    main()
``` | closed | 2024-11-21T06:02:13Z | 2024-11-21T13:06:23Z | https://github.com/seleniumbase/SeleniumBase/issues/3278 | [
"self-resolved",
"UC Mode / CDP Mode"
] | cohnhead66 | 1 |
pydata/bottleneck | numpy | 178 | Error when passing in a default to bn.push | Bottleneck doesn't seem to be well behaved if you _pass in_ default values for `bn.push`:
The default is `n=None` (and it works when no value is passed):
```python
In [1]: import bottleneck as bn
In [4]: import numpy as np
In [5]: a=np.array([1,2,3,4])
In [7]: bn.push(a)
Out[7]: array([1, 2, 3, 4])
In [8]: bn.push?
Docstring:
push(a, n=None, axis=-1)
...
n : {int, None}, optional
```
but when you pass in `None`, it raises:
```
In [9]: bn.push(a, n=None)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-9-d7ea6fc52366> in <module>()
----> 1 bn.push(a, n=None)
TypeError: `n` must be an integer
```
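For now, a downstream wrapper has to avoid forwarding `None` explicitly; a minimal sketch of what that looks like (my own code, not part of bottleneck):
```python
import bottleneck as bn
import numpy as np

def push(a, n=None, axis=-1):
    # Only forward n when the caller actually supplied a value,
    # because bn.push rejects an explicit n=None.
    if n is None:
        return bn.push(a, axis=axis)
    return bn.push(a, n, axis=axis)

a = np.array([1.0, np.nan, np.nan, 4.0])
print(push(a))       # works, n omitted
print(push(a, n=1))  # works, n forwarded
```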
This is only really impactful when building a function that uses `bottleneck` (which we're increasingly doing in `xarray`) | closed | 2017-11-07T21:44:25Z | 2017-11-08T01:33:17Z | https://github.com/pydata/bottleneck/issues/178 | [
"good first issue"
] | max-sixty | 4 |
SCIR-HI/Huatuo-Llama-Med-Chinese | nlp | 3 | Has a comparison with Med-ChatGLM been done after fine-tuning? | Has a comparison with Med-ChatGLM been done after fine-tuning? Which one is more capable? | closed | 2023-04-18T06:39:17Z | 2023-04-18T12:20:21Z | https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese/issues/3 | [] | danger-dream | 1 |
napari/napari | numpy | 7,705 | [Windows] Look into frequent seg faults on test_vispy_camera.py test | ## 🧰 Task
These are popping up across CI. Rerunning once or twice fixes it.
e.g. most recent comprehensive
https://github.com/napari/napari/actions/runs/13867324742/job/38811775118
Need to look at skipping if we can't find a cause or solution. | open | 2025-03-15T14:46:08Z | 2025-03-17T01:48:26Z | https://github.com/napari/napari/issues/7705 | [
"task",
"ci"
] | psobolewskiPhD | 1 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 328 | Modify relationship query runtime | Hello,
How would I modify a relationship query at runtime? I'd like to limit the dataset based on a context variable at runtime.
Any guidance on where I should look?
Thanks | closed | 2022-01-14T19:24:30Z | 2023-08-15T00:35:45Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/328 | [
"question",
":eyes: more info needed"
] | fernandoflorez | 5 |
glumpy/glumpy | numpy | 319 | Glumpy not matching the shape of the VBO correctly | I am creating a very basic simulation of bubbles using glumpy and I'm facing an issue defining a uniform array of vec3. It seems that glumpy fails to evaluate the shape of the object correctly.
Here is the complete fragment shader:
```glsl
#version 330 core
const int num_circles = 2;
uniform vec3 circlePt[num_circles];
uniform vec2 u_resolution;
out vec4 fragColor;
float circle(vec2 px, vec3 cp) {
vec2 dist = px - cp.xy;
float r = cp.z;
return 1 - smoothstep(r - (r * 0.01), r + (r * 0.01), dot(dist, dist) * 4.0);
}
void main() {
vec2 px = gl_FragCoord.xy / u_resolution.xy;
float c = 1;
for(int i = 0; i < num_circles; i++) {
c = min(c, circle(px, circlePt[i]));
}
fragColor = vec4(c, c, c, 1);
}
```
And here is the python part (check run method)
```python
class Simulation:
"""Simulation of bubbles"""
def __init__(self) -> None:
"""Initialize the simulation"""
self.width = 1000
self.height = 1000
self.version = "330"
self.init_windows()
def init_windows(self) -> None:
"""Initialize the windows"""
app.use(
backend="glfw",
api="GL",
major=int(self.version[0]),
minor=int(self.version[1]),
profile="core",
)
self.loading_shaders()
self.window = app.Window()
def loading_shaders(self) -> None:
"""Load shaders"""
with open("./shaders/vertex.glsl", "r") as f:
self.vertex = f.read()
with open("./shaders/fragment.glsl", "r") as f:
self.fragment = f.read()
def run(self) -> None:
"""Run the simulation"""
quad = gloo.Program(
self.vertex,
self.fragment,
count=4,
version=self.version,
)
# Define buffers
dtype = [
("vertices", np.float32, 2),
("u_resolution", np.float32, 2),
("circlePt", np.float32, 3),
]
quad_arrays = np.zeros(4, dtype).view(gloo.VertexArray)
quad_arrays["vertices"] = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
quad_arrays["u_resolution"] = (self.width, self.height)
quad["circlePt"] = [(0.25, 0.25, 0.1), (0.0, 0.25, 0.05)]
quad.bind(quad_arrays)
@self.window.event
def on_draw(dt):
self.window.clear()
quad.draw(gl.GL_TRIANGLE_STRIP)
app.run()
```
However it failed with the following error
```bash
$ python3 main.py
[i] Using GLFW (GL 3.3)
Traceback (most recent call last):
File "main.py", line 112, in <module>
sim.run()
File "main.py", line 52, in run
quad["circlePt"] = [(0.25, 0.25, 0.1), (0.0, 0.25, 0.05)]
File "venv/lib/python3.10/site-packages/glumpy/gloo/program.py", line 353, in __setitem__
self._uniforms[name].set_data(data)
File "venv/lib/python3.10/site-packages/glumpy/gloo/variable.py", line 261, in set_data
self._data[...] = np.array(data,copy=False).ravel()
ValueError: could not broadcast input array from shape (6,) into shape (3,)
```
I don't know if I'm not using the library correctly or if glumpy fails to match the correct shape.
| closed | 2023-10-05T15:23:32Z | 2023-10-13T19:41:07Z | https://github.com/glumpy/glumpy/issues/319 | [] | VictorGoubet | 2 |
anselal/antminer-monitor | dash | 51 | Can you add ebit e9+? | Hi! Great app! can you add ebit e9+ support? | open | 2018-01-11T15:05:44Z | 2018-04-12T05:08:54Z | https://github.com/anselal/antminer-monitor/issues/51 | [
":pick: miner_support"
] | andrucha4004 | 5 |
ipython/ipython | data-science | 14,013 | %page stops working due to OInfo change | `import pandas as pd
df = pd.DataFrame(data={'a':[1,2,3],'b':list('abc')})
%page df
File lib/python3.11/site-packages/IPython/core/interactiveshell.py:2414, in InteractiveShell.run_line_magic(self, magic_name, line, _stack_depth)
2412 kwargs['local_ns'] = self.get_local_scope(stack_depth)
2413 with self.builtin_trap:
-> 2414 result = fn(*args, **kwargs)
2416 # The code below prevents the output from being displayed
2417 # when using magics with decodator @output_can_be_silenced
2418 # when the last Python token in the expression is a ';'.
2419 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
File lib/python3.11/site-packages/IPython/core/magics/basic.py:300, in BasicMagics.page(self, parameter_s)
298 info = self.shell._ofind(oname)
299 print(info)
--> 300 if info['found']:
301 if raw:
302 txt = str(info["obj"])
TypeError: 'OInfo' object is not subscriptable`
| closed | 2023-04-13T04:49:23Z | 2023-04-13T07:06:39Z | https://github.com/ipython/ipython/issues/14013 | [] | ghost | 0 |
Avaiga/taipy | data-visualization | 2,285 | [DOCS] Improve decimator for datetime axis or improve documentation | ### Issue Description
The documentation should explain how to make the decimator work with datetime if using datetime is possible. This is either documentation or an issue to improve Taipy.
Right now, the decimation is behaving in a strange way without any good feedback for the user.
```python
from datetime import datetime
import numpy as np
import pandas as pd
from taipy.gui import Gui
import taipy.gui.builder as tgb
from taipy.gui.data.decimator import MinMaxDecimator
x_values = np.linspace(1, 100, 50000)
y_values = np.log(x_values) * np.sin(x_values / 5)
noise_mask = np.random.rand(*y_values.shape) < 0.01
noise_values = np.random.uniform(-0.5, 0.5, size=np.sum(noise_mask))
y_values_noise = np.copy(y_values)
y_values_noise[noise_mask] += noise_values
data = pd.DataFrame({"X": x_values, "Y": y_values_noise})
data["X"] = pd.to_timedelta(data["X"], unit="s") + datetime.now()
decimator = MinMaxDecimator(200)
if __name__ == "__main__":
with tgb.Page() as page:
tgb.chart("{data}", type="markers", x="X", y="Y", decimator=decimator)
gui = Gui(page=page)
gui.run()
```
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
### Acceptance Criteria
- [ ] The documentation set as been generated without error.
- [ ] The new text has been passed to a grammatical tool for review.
- [ ] The 100 character length limit has been respected as much as possible.
- [ ] The links and cross-references in the documentation are working.
- [ ] If applicable, screenshots of the new documentation are added to the pull request. | open | 2024-11-27T08:57:18Z | 2024-12-02T08:31:14Z | https://github.com/Avaiga/taipy/issues/2285 | [
"📈 Improvement",
"📄 Documentation",
"🖰 GUI",
"🟨 Priority: Medium"
] | FlorianJacta | 2 |
inducer/pudb | pytest | 414 | Provide a simple way to install pudb's tab-completion | Before #350, tab completion was provided natively through Python's readline module.
New versions rely on a third-party dependency, which adds an extra setup step and requires users to install an extra package that has no direct relation to pudb.
It would be great if users could directly install pudb and its tab-completion support in a single `pip install`:
- Either add a fallback on Python's `readline` module if `jedi` can't be found;
- Or add an optional extra that pulls in a compatible version of `jedi`, e.g `pip install pudb[completion]`
If one of those options interests you, I'll be glad to provide a pull-request accordingly ;)
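A minimal sketch of what the second option could look like in the packaging metadata (hypothetical; pudb's actual setup script may be organized differently, and the jedi version bound is just an assumption):
```python
# setup.py (excerpt) -- hypothetical sketch, not pudb's actual setup script
from setuptools import setup

setup(
    name="pudb",
    # ...
    extras_require={
        # `pip install pudb[completion]` would then pull in a compatible jedi
        "completion": ["jedi>=0.17,<1"],
    },
)
```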
And thanks again for pudb, it's an amazing program 😉 | closed | 2020-12-23T15:07:49Z | 2021-03-04T21:48:45Z | https://github.com/inducer/pudb/issues/414 | [] | rbarrois | 5 |
strawberry-graphql/strawberry | django | 3,680 | `MaskErrors` does not mask errors for subscriptions | The `MaskErrors` schema extension does not seem to mask errors for subscriptions. I am running on the latest version of `strawberry`. Example included below:
```python
from typing import AsyncIterator
import strawberry
from strawberry.extensions import MaskErrors
@strawberry.type
class Query:
@strawberry.field
def hello(self) -> str:
raise Exception("boom")
return "world"
@strawberry.type
class Subscription:
@strawberry.subscription
async def stream(self) -> AsyncIterator[str]:
yield "hello"
yield "world"
raise Exception("boom")
schema = strawberry.Schema(
query=Query, subscription=Subscription, extensions=[MaskErrors()]
)
```
The errors are masked for queries:

But not for subscriptions:

## Describe the Bug
I expect the `MaskErrors` extension to mask my errors for subscriptions as well as queries and mutations, but it does not appear to do so.
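As a stopgap I can mask the error manually inside the subscription resolver (my own workaround sketch, not how the extension is documented to work):
```python
from typing import AsyncIterator

import strawberry


@strawberry.type
class Subscription:
    @strawberry.subscription
    async def stream(self) -> AsyncIterator[str]:
        try:
            yield "hello"
            yield "world"
            raise Exception("boom")
        except Exception:
            # Re-raise a generic message so the original error text is not leaked
            raise Exception("Unexpected error.") from None
```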
## System Information
- Operating system: MacOS
- Strawberry version (if applicable): 0.247.0 | open | 2024-10-25T02:43:04Z | 2025-03-20T15:56:54Z | https://github.com/strawberry-graphql/strawberry/issues/3680 | [
"bug"
] | axiomofjoy | 1 |
Josh-XT/AGiXT | automation | 913 | libnvidia-ml.so.1: cannot open shared object file: no such file or directory | ### Description
This appears when I try to launch on RHEL 8, Nvidia docker 2 installed.
```
ERROR: for text-generation-webui Cannot start service text-generation-webui: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown
ERROR: Encountered errors while bringing up the project.
```
### Steps to Reproduce the Bug
1. Run the script
2. Select text-generation-webui
3. Wait for updates to run
4. Observe the error
### Expected Behavior
Script runs text-generation-webui successfully.
### Operating System
- [X] Linux
- [ ] Microsoft Windows
- [ ] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [X] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [X] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [X] Using docker compose
- [ ] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | closed | 2023-08-13T17:46:32Z | 2023-08-13T18:15:37Z | https://github.com/Josh-XT/AGiXT/issues/913 | [
"type | report | bug",
"needs triage"
] | eXactPlay | 2 |
jonra1993/fastapi-alembic-sqlmodel-async | sqlalchemy | 71 | Calling postgres from celery | Right now we can use syncify with the CRUD functions inside celery, but aysncpg is still in charge of the session. This seems suboptimal but switching into psychopg seems like a lot of duplicated work? Is using asyncpg fine for this use-case? | open | 2023-05-09T17:27:46Z | 2023-05-13T00:16:58Z | https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/issues/71 | [] | bazylhorsey | 1 |
ivy-llc/ivy | numpy | 28,684 | Fix Frontend Failing Test: paddle - activations.tensorflow.keras.activations.deserialize | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-03-25T21:47:32Z | 2024-03-28T10:56:53Z | https://github.com/ivy-llc/ivy/issues/28684 | [
"Sub Task"
] | ZJay07 | 0 |
alpacahq/alpaca-trade-api-python | rest-api | 380 | pip install fails on Apple M1 Silicon Big Sur 11.1 | [output.txt](https://github.com/alpacahq/alpaca-trade-api-python/files/5955844/output.txt)
seems to be because of numpy version | closed | 2021-02-10T04:33:58Z | 2021-02-10T07:13:16Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/380 | [] | seltzerfish | 2 |
tqdm/tqdm | pandas | 917 | Tqdm tests fail on PowerPC (ppc64le) | I am trying to build tqdm on RHEL 7.6 ppc64le. The build passes; however, the tests fail.
Please find below the detailed log :
TOX_SKIP_ENV=perf tox --skip-missing-interpreters -p all
GLOB sdist-make: /tqdm/setup.py
| [3] py27 | py36 | tf-no-kerasERROR: invocation failed (exit code 1), logfile: /tqdm/.tox/tf-no-keras/log/tf-no-keras-9.log
============================================================================== log start ==============================================================================
tf-no-keras create: /tqdm/.tox/tf-no-keras
tf-no-keras installdeps: nose, coverage, coveralls, nose-timer, codecov, tensorflow
ERROR: invocation failed (exit code 1), logfile: /tqdm/.tox/tf-no-keras/log/tf-no-keras-11.log
================================== log start ===================================
Collecting nose
Using cached nose-1.3.7-py3-none-any.whl (154 kB)
Collecting coverage
Using cached coverage-5.0.4.tar.gz (680 kB)
Collecting coveralls
Using cached coveralls-1.11.1-py2.py3-none-any.whl (12 kB)
Collecting nose-timer
Using cached nose-timer-0.7.6.tar.gz (8.6 kB)
Collecting codecov
Using cached codecov-2.0.16-py2.py3-none-any.whl (14 kB)
ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
ERROR: No matching distribution found for tensorflow
=================================== log end ====================================
ERROR: could not install deps [nose, coverage, coveralls, nose-timer, codecov, tensorflow]; v = InvocationError('/tqdm/.tox/tf-no-keras/bin/python -m pip install nose coverage coveralls nose-timer codecov tensorflow', 1)
=============================================================================== log end ===============================================================================
x [2] py27 | py36ERROR: invocation failed (exit code 1), logfile: /tqdm/.tox/py27/log/py27-9.log
============================================================================== log start ==============================================================================
py27 create: /tqdm/.tox/py27
py27 installdeps: nose, coverage, coveralls, nose-timer, codecov, cython, numpy, pandas, tensorflow, keras
ERROR: invocation failed (exit code 1), logfile: /tqdm/.tox/py27/log/py27-11.log
================================== log start ===================================
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Collecting nose
Using cached nose-1.3.7-py2-none-any.whl (154 kB)
Collecting coverage
Using cached coverage-5.0.4.tar.gz (680 kB)
Collecting coveralls
Using cached coveralls-1.11.1-py2.py3-none-any.whl (12 kB)
Collecting nose-timer
Using cached nose-timer-0.7.6.tar.gz (8.6 kB)
Collecting codecov
Using cached codecov-2.0.16-py2.py3-none-any.whl (14 kB)
Collecting cython
Using cached Cython-0.29.15-py2.py3-none-any.whl (968 kB)
Collecting numpy
Using cached numpy-1.16.6.zip (5.1 MB)
Collecting pandas
Using cached pandas-0.24.2.tar.gz (11.8 MB)
ERROR: Command errored out with exit status 1:
command: /tqdm/.tox/py27/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-44OFN5/pandas/setup.py'"'"'; __file__='"'"'/tmp/pip-install-44OFN5/pandas/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-44OFN5/pandas/pip-egg-info
cwd: /tmp/pip-install-44OFN5/pandas/
Complete output (491 lines):
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
ERROR: Command errored out with exit status 1:
command: /tqdm/.tox/py27/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-wheel-R3MUB_/numpy/setup.py'"'"'; __file__='"'"'/tmp/pip-wheel-R3MUB_/numpy/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-nbRiKt
cwd: /tmp/pip-wheel-R3MUB_/numpy/
Complete output (446 lines):
Running from numpy source directory.
blas_opt_info:
blas_mkl_info:
customize UnixCCompiler
libraries mkl_rt not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
blis_info:
customize UnixCCompiler
libraries blis not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
openblas_info:
customize UnixCCompiler
customize UnixCCompiler
libraries openblas not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
atlas_3_10_blas_threads_info:
Setting PTATLAS=ATLAS
customize UnixCCompiler
libraries tatlas not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64/atlas', '/usr/lib64/sse2', '/usr/lib64', '/usr/lib/sse2', '/usr/lib', '/usr/lib/sse2', '/usr/lib/']
NOT AVAILABLE
atlas_3_10_blas_info:
customize UnixCCompiler
libraries satlas not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64/atlas', '/usr/lib64/sse2', '/usr/lib64', '/usr/lib/sse2', '/usr/lib', '/usr/lib/sse2', '/usr/lib/']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64/atlas', '/usr/lib64/sse2', '/usr/lib64', '/usr/lib/sse2', '/usr/lib', '/usr/lib/sse2', '/usr/lib/']
NOT AVAILABLE
atlas_blas_info:
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64/atlas', '/usr/lib64/sse2', '/usr/lib64', '/usr/lib/sse2', '/usr/lib', '/usr/lib/sse2', '/usr/lib/']
NOT AVAILABLE
accelerate_info:
NOT AVAILABLE
/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/system_info.py:639: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
self.calc_info()
blas_info:
customize UnixCCompiler
libraries blas not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/system_info.py:639: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
self.calc_info()
blas_src_info:
NOT AVAILABLE
/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/system_info.py:639: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
self.calc_info()
NOT AVAILABLE
/bin/sh: svnversion: command not found
non-existing path in 'numpy/distutils': 'site.cfg'
lapack_opt_info:
lapack_mkl_info:
customize UnixCCompiler
libraries mkl_rt not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
openblas_lapack_info:
customize UnixCCompiler
customize UnixCCompiler
libraries openblas not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
openblas_clapack_info:
customize UnixCCompiler
customize UnixCCompiler
libraries openblas,lapack not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
customize UnixCCompiler
libraries lapack_atlas not found in /tqdm/.tox/py27/lib
customize UnixCCompiler
libraries tatlas,tatlas not found in /tqdm/.tox/py27/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib64
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/local/lib64
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/local/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64/atlas
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/lib64/atlas
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64/sse2
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/lib64/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/lib64
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/
customize UnixCCompiler
libraries tatlas,tatlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
customize UnixCCompiler
libraries lapack_atlas not found in /tqdm/.tox/py27/lib
customize UnixCCompiler
libraries satlas,satlas not found in /tqdm/.tox/py27/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib64
customize UnixCCompiler
libraries satlas,satlas not found in /usr/local/lib64
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib
customize UnixCCompiler
libraries satlas,satlas not found in /usr/local/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64/atlas
customize UnixCCompiler
libraries satlas,satlas not found in /usr/lib64/atlas
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64/sse2
customize UnixCCompiler
libraries satlas,satlas not found in /usr/lib64/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64
customize UnixCCompiler
libraries satlas,satlas not found in /usr/lib64
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries satlas,satlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib
customize UnixCCompiler
libraries satlas,satlas not found in /usr/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries satlas,satlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/
customize UnixCCompiler
libraries satlas,satlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
customize UnixCCompiler
libraries lapack_atlas not found in /tqdm/.tox/py27/lib
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /tqdm/.tox/py27/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib64
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib64
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64/atlas
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/atlas
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64/sse2
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/lib64/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/lib64
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/
customize UnixCCompiler
libraries ptf77blas,ptcblas,atlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
customize UnixCCompiler
libraries lapack_atlas not found in /tqdm/.tox/py27/lib
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /tqdm/.tox/py27/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib64
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/local/lib64
customize UnixCCompiler
libraries lapack_atlas not found in /usr/local/lib
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/local/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64/atlas
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/lib64/atlas
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64/sse2
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/lib64/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib64
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/lib64
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/lib
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/lib/sse2
customize UnixCCompiler
libraries lapack_atlas not found in /usr/lib/
customize UnixCCompiler
libraries f77blas,cblas,atlas not found in /usr/lib/
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
lapack_info:
customize UnixCCompiler
libraries lapack not found in ['/tqdm/.tox/py27/lib', '/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
NOT AVAILABLE
/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/system_info.py:639: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
self.calc_info()
lapack_src_info:
NOT AVAILABLE
/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/system_info.py:639: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
self.calc_info()
NOT AVAILABLE
/usr/lib64/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
running bdist_wheel
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build/src.linux-ppc64le-2.7
creating build/src.linux-ppc64le-2.7/numpy
creating build/src.linux-ppc64le-2.7/numpy/distutils
building library "npymath" sources
get_default_fcompiler: matching types: '['gnu95', 'intel', 'lahey', 'pg', 'absoft', 'nag', 'vast', 'compaq', 'intele', 'intelem', 'gnu', 'g95', 'pathf95', 'nagfor']'
customize Gnu95FCompiler
Could not locate executable gfortran
Could not locate executable f95
customize IntelFCompiler
Could not locate executable ifort
Could not locate executable ifc
customize LaheyFCompiler
Could not locate executable lf95
customize PGroupFCompiler
Could not locate executable pgfortran
customize AbsoftFCompiler
Could not locate executable f90
Could not locate executable f77
customize NAGFCompiler
customize VastFCompiler
customize CompaqFCompiler
Could not locate executable fort
customize IntelItaniumFCompiler
Could not locate executable efort
Could not locate executable efc
customize IntelEM64TFCompiler
customize GnuFCompiler
Could not locate executable g77
customize G95FCompiler
Could not locate executable g95
customize PathScaleFCompiler
Could not locate executable pathf95
customize NAGFORCompiler
Could not locate executable nagfor
don't know how to compile Fortran code on platform 'posix'
C compiler: gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mcpu=power8 -mtune=power8 -D_GNU_SOURCE -fPIC -fwrapv -O3 -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mcpu=power8 -mtune=power8 -D_GNU_SOURCE -fPIC -fwrapv -O3 -fPIC
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/tqdm/.tox/py27/include/python2.7 -c'
gcc: _configtest.c
gcc -pthread _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mcpu=power8 -mtune=power8 -D_GNU_SOURCE -fPIC -fwrapv -O3 -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mcpu=power8 -mtune=power8 -D_GNU_SOURCE -fPIC -fwrapv -O3 -fPIC
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/tqdm/.tox/py27/include/python2.7 -c'
gcc: _configtest.c
_configtest.c:1:5: warning: conflicting types for built-in function 'exp' [enabled by default]
int exp (void);
^
gcc -pthread _configtest.o -o _configtest
_configtest.o: In function `main':
/tmp/pip-wheel-R3MUB_/numpy/_configtest.c:6: undefined reference to `exp'
collect2: error: ld returned 1 exit status
failure.
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mcpu=power8 -mtune=power8 -D_GNU_SOURCE -fPIC -fwrapv -O3 -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mcpu=power8 -mtune=power8 -D_GNU_SOURCE -fPIC -fwrapv -O3 -fPIC
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/tqdm/.tox/py27/include/python2.7 -c'
gcc: _configtest.c
_configtest.c:1:5: warning: conflicting types for built-in function 'exp' [enabled by default]
int exp (void);
^
gcc -pthread _configtest.o -lm -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
creating build/src.linux-ppc64le-2.7/numpy/core
creating build/src.linux-ppc64le-2.7/numpy/core/src
creating build/src.linux-ppc64le-2.7/numpy/core/src/npymath
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/npymath/npy_math_internal.h
adding 'build/src.linux-ppc64le-2.7/numpy/core/src/npymath' to include_dirs.
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/npymath/ieee754.c
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/npymath/npy_math_complex.c
None - nothing done with h_files = ['build/src.linux-ppc64le-2.7/numpy/core/src/npymath/npy_math_internal.h']
building library "npysort" sources
creating build/src.linux-ppc64le-2.7/numpy/core/src/common
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/common/npy_sort.h
adding 'build/src.linux-ppc64le-2.7/numpy/core/src/common' to include_dirs.
creating build/src.linux-ppc64le-2.7/numpy/core/src/npysort
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/npysort/quicksort.c
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/npysort/mergesort.c
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/npysort/heapsort.c
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/common/npy_partition.h
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/npysort/selection.c
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/common/npy_binsearch.h
conv_template:> build/src.linux-ppc64le-2.7/numpy/core/src/npysort/binsearch.c
None - nothing done with h_files = ['build/src.linux-ppc64le-2.7/numpy/core/src/common/npy_sort.h', 'build/src.linux-ppc64le-2.7/numpy/core/src/common/npy_partition.h', 'build/src.linux-ppc64le-2.7/numpy/core/src/common/npy_binsearch.h']
building extension "numpy.core._dummy" sources
Generating build/src.linux-ppc64le-2.7/numpy/core/include/numpy/config.h
C compiler: gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mcpu=power8 -mtune=power8 -D_GNU_SOURCE -fPIC -fwrapv -O3 -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mcpu=power8 -mtune=power8 -D_GNU_SOURCE -fPIC -fwrapv -O3 -fPIC
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/tqdm/.tox/py27/include/python2.7 -c'
gcc: _configtest.c
_configtest.c:1:20: fatal error: Python.h: No such file or directory
#include <Python.h>
^
compilation terminated.
failure.
removing: _configtest.c _configtest.o
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-wheel-R3MUB_/numpy/setup.py", line 419, in <module>
setup_package()
File "/tmp/pip-wheel-R3MUB_/numpy/setup.py", line 411, in setup_package
setup(**metadata)
File "/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/core.py", line 171, in setup
return old_setup(**new_attr)
File "/tqdm/.tox/py27/lib/python2.7/site-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/usr/lib64/python2.7/distutils/core.py", line 152, in setup
dist.run_commands()
File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tqdm/.tox/py27/lib/python2.7/site-packages/wheel/bdist_wheel.py", line 223, in run
self.run_command('build')
File "/usr/lib64/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/command/build.py", line 47, in run
old_build.run(self)
File "/usr/lib64/python2.7/distutils/command/build.py", line 127, in run
self.run_command(cmd_name)
File "/usr/lib64/python2.7/distutils/cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/command/build_src.py", line 148, in run
self.build_sources()
File "/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/command/build_src.py", line 165, in build_sources
self.build_extension_sources(ext)
File "/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/command/build_src.py", line 322, in build_extension_sources
sources = self.generate_sources(sources, ext)
File "/tmp/pip-wheel-R3MUB_/numpy/numpy/distutils/command/build_src.py", line 375, in generate_sources
source = func(extension, build_dir)
File "numpy/core/setup.py", line 423, in generate_config_h
moredefs, ignored = cocache.check_types(config_cmd, ext, build_dir)
File "numpy/core/setup.py", line 47, in check_types
out = check_types(*a, **kw)
File "numpy/core/setup.py", line 281, in check_types
"install {0}-dev|{0}-devel.".format(python))
SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel.
----------------------------------------
ERROR: Failed building wheel for numpy
ERROR: Command errored out with exit status 1:
command: /tqdm/.tox/py27/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-wheel-R3MUB_/numpy/setup.py'"'"'; __file__='"'"'/tmp/pip-wheel-R3MUB_/numpy/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' clean --all
cwd: /tmp/pip-wheel-R3MUB_/numpy
Complete output (10 lines):
Running from numpy source directory.
`setup.py clean` is not supported, use one of the following instead:
- `git clean -xdf` (cleans all files)
- `git clean -Xdf` (cleans all versioned files, doesn't touch
files that aren't checked into the git repo)
Add `--force` to your command to use it anyway if you must (unsupported).
----------------------------------------
ERROR: Failed cleaning build dir for numpy
ERROR: Failed to build one or more wheels
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-44OFN5/pandas/setup.py", line 746, in <module>
**setuptools_kwargs)
File "/tqdm/.tox/py27/lib/python2.7/site-packages/setuptools/__init__.py", line 144, in setup
_install_setup_requires(attrs)
File "/tqdm/.tox/py27/lib/python2.7/site-packages/setuptools/__init__.py", line 139, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "/tqdm/.tox/py27/lib/python2.7/site-packages/setuptools/dist.py", line 721, in fetch_build_eggs
replace_conflicting=True,
File "/tqdm/.tox/py27/lib/python2.7/site-packages/pkg_resources/__init__.py", line 782, in resolve
replace_conflicting=replace_conflicting
File "/tqdm/.tox/py27/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1065, in best_match
return self.obtain(req, installer)
File "/tqdm/.tox/py27/lib/python2.7/site-packages/pkg_resources/__init__.py", line 1077, in obtain
return installer(requirement)
File "/tqdm/.tox/py27/lib/python2.7/site-packages/setuptools/dist.py", line 777, in fetch_build_egg
return fetch_build_egg(self, req)
File "/tqdm/.tox/py27/lib/python2.7/site-packages/setuptools/installer.py", line 130, in fetch_build_egg
raise DistutilsError(str(e))
distutils.errors.DistutilsError: Command '['/tqdm/.tox/py27/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpDU6PBR', '--quiet', 'numpy>=1.12.0']' returned non-zero exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
=================================== log end ====================================
ERROR: could not install deps [nose, coverage, coveralls, nose-timer, codecov, cython, numpy, pandas, tensorflow, keras]; v = InvocationError('/tqdm/.tox/py27/bin/python -m pip install nose coverage coveralls nose-timer codecov cython numpy pandas tensorflow keras', 1)
=============================================================================== log end ===============================================================================
+ [1] py36ERROR: invocation failed (exit code 1), logfile: /tqdm/.tox/py36/log/py36-9.log
============================================================================== log start ==============================================================================
py36 create: /tqdm/.tox/py36
py36 installdeps: nose, coverage, coveralls, nose-timer, codecov, cython, numpy, pandas, tensorflow, keras
ERROR: invocation failed (exit code 1), logfile: /tqdm/.tox/py36/log/py36-11.log
================================== log start ===================================
Collecting nose
Using cached nose-1.3.7-py3-none-any.whl (154 kB)
Collecting coverage
Using cached coverage-5.0.4.tar.gz (680 kB)
Collecting coveralls
Using cached coveralls-1.11.1-py2.py3-none-any.whl (12 kB)
Collecting nose-timer
Using cached nose-timer-0.7.6.tar.gz (8.6 kB)
Collecting codecov
Using cached codecov-2.0.16-py2.py3-none-any.whl (14 kB)
Collecting cython
Using cached Cython-0.29.15-py2.py3-none-any.whl (968 kB)
Collecting numpy
Using cached numpy-1.18.1.zip (5.4 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing wheel metadata: started
Preparing wheel metadata: still running...
Preparing wheel metadata: finished with status 'done'
Collecting pandas
Using cached pandas-1.0.2.tar.gz (5.0 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: still running...
Getting requirements to build wheel: finished with status 'done'
Preparing wheel metadata: started
Preparing wheel metadata: finished with status 'done'
ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
ERROR: No matching distribution found for tensorflow
=================================== log end ====================================
ERROR: could not install deps [nose, coverage, coveralls, nose-timer, codecov, cython, numpy, pandas, tensorflow, keras]; v = InvocationError('/tqdm/.tox/py36/bin/python -m pip install nose coverage coveralls nose-timer codecov cython numpy pandas tensorflow keras', 1)
=============================================================================== log end ===============================================================================
_______________________________________________________________________________ summary _______________________________________________________________________________
py26: commands succeeded
ERROR: py27: parallel child exit code 1
py33: commands succeeded
py34: commands succeeded
py35: commands succeeded
ERROR: py36: parallel child exit code 1
py37: commands succeeded
pypy: commands succeeded
pypy3: commands succeeded
ERROR: tf-no-keras: parallel child exit code 1
flake8: commands succeeded
setup.py: commands succeeded
make: *** [test] Error 1
`Python version: Python 3.6.3`
`Environment: Rhel 7.6 ppc64le`
`Pip: pip 20.0.2 from /opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/pip (python 3.6)`
I would like some help with this issue. I am running it on a high-end VM with good connectivity. | open | 2020-03-17T06:25:27Z | 2020-03-26T17:54:16Z | https://github.com/tqdm/tqdm/issues/917 | [
"invalid ⛔",
"need-feedback 📢"
] | aishwaryabk | 8 |
thtrieu/darkflow | tensorflow | 1,128 | Training complete, but can't test the neural net #1127 | The checkpoint is a new file, but for testing you have to use the .meta file and the .pb file, not the checkpoint.
When you train the model in the terminal, add `--savepb` to your command so that the .pb file is saved in your `built_graph` folder.
Example command:
`python flow --model cfg/yolo-1c.cfg --load bin/yolo.weights --train --annotation new_model_data/annotations --dataset new_model_data/images --epoch 40 --savepb`
After training, two new files (with .meta and .pb extensions) are created in the `built_graph` folder.
Use these two files (.meta and .pb) in the testing code of your YOLO project.
See this notebook for an example of how to use them:
https://github.com/ankitAMD/Darkflow-object-detection/blob/master/Custom%20Automated%20Testing%20images%20of%20Solar_Panel%20(using%20tiny-yolo.cfg%20and%20weights).ipynb
Your new weight file is the .meta file and your new configuration file is the .pb file; a minimal loading sketch is shown below.
I think that clears up your issue. Please close both issues once it is solved.
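For reference, loading the saved graph for prediction looks roughly like this (a minimal sketch; the file paths, image path, and threshold below are assumptions you should adapt to your own `built_graph` output):
```python
import cv2
from darkflow.net.build import TFNet

# Point darkflow at the saved protobuf graph and its matching .meta file
options = {
    "pbLoad": "built_graph/yolo-1c.pb",      # assumed output name from --savepb
    "metaLoad": "built_graph/yolo-1c.meta",  # assumed matching metadata file
    "threshold": 0.4,                        # assumed confidence threshold
}
tfnet = TFNet(options)

img = cv2.imread("sample_img/test.jpg")      # hypothetical test image path
predictions = tfnet.return_predict(img)      # list of dicts: label, confidence, box
print(predictions)
```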
_Originally posted by @ankitAMD in https://github.com/thtrieu/darkflow/issues/1127#issuecomment-575050229_ | closed | 2020-01-16T08:58:08Z | 2020-01-16T08:59:00Z | https://github.com/thtrieu/darkflow/issues/1128 | [] | ankitAMD | 0 |
graphql-python/graphene | graphql | 1,320 | Please Add Further Support for FastAPI or Pydantic ObjectTypes | I want to avoid repeating code fragments between the mutations and types with graphene in a FastAPI app.
Not a very polished example but please see below sample scenario.
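What I am hoping for is something like the sketch below, where the ObjectType is derived from the Pydantic model instead of repeating every field by hand (the `PydanticObjectType` helper here is hypothetical; the third-party graphene-pydantic package offers something along these lines):
```python
import graphene
# Hypothetical import, mirroring what the third-party graphene-pydantic package provides
from graphene_pydantic import PydanticObjectType

from models.customers import CustomerLogin_Pydantic


class CustomerLoginType(PydanticObjectType):
    class Meta:
        # All fields are generated from the Pydantic model, so nothing is duplicated by hand
        model = CustomerLogin_Pydantic


class Query(graphene.ObjectType):
    single_customer_login = graphene.Field(CustomerLoginType, id=graphene.Int())
```
For comparison, my current (duplicated) setup is below.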
**graphql\mutations.py**
```python
import graphene
from models.customers import CustomerLogin, CustomerLogin_PydanticIn
from graphql.types import CustomerLoginType


class CreateUserMutation(graphene.Mutation):
    class Arguments:
        id = graphene.Int()
        shard_id = graphene.Decimal()
        seq_num = graphene.Decimal()
        event_timedate = graphene.DateTime()
        user_id = graphene.UUID()
        device_token = graphene.UUID()
        user_ip = graphene.String()
        user_agent = graphene.String()
        client_id = graphene.String()
        process = graphene.String()
        auth_result = graphene.String()
        auth_fail_cause = graphene.String()
        plain_email = graphene.String()

    user = graphene.Field(CustomerLoginType)

    @staticmethod
    async def mutate(parent, info, user: CustomerLogin_PydanticIn):
        user = await CustomerLogin.create(**user.dict())
        return CreateUserMutation(user=user)
```
**graphql\types.py**
```python
import graphene


# Fields should coincide with model.CustomerLogin
class CustomerLoginType(graphene.ObjectType):
    id = graphene.Int()
    shard_id = graphene.Decimal()
    seq_num = graphene.Decimal()
    event_timedate = graphene.DateTime()
    user_id = graphene.UUID()
    device_token = graphene.UUID()
    user_ip = graphene.String()
    user_agent = graphene.String()
    client_id = graphene.String()
    process = graphene.String()
    auth_result = graphene.String()
    auth_fail_cause = graphene.String()
    plain_email = graphene.String()
```
**graphql\queries.py**
```python
import graphene
from models.customers import CustomerLogin
from graphql.types import CustomerLoginType


class Query(graphene.ObjectType):
    all_customer_logins = graphene.List(CustomerLoginType)
    single_customer_login = graphene.Field(CustomerLoginType, id=graphene.Int())

    @staticmethod
    async def resolve_all_customer_logins(parent, info):
        users = await CustomerLogin.all()
        return users

    @staticmethod
    async def resolve_single_customer_login(parent, info, id):
        user = await CustomerLogin.get(id=id)
        return user
```
Then the ORM model using TortoiseORM
**models\customers.py**
```python
from tortoise import fields
from tortoise.contrib.pydantic import pydantic_model_creator
from tortoise.models import Model
from models.events import ShardID, SequenceNum


class CustomerLogin(Model):
    shard_id: fields.ForeignKeyNullableRelation[ShardID] = \
        fields.ForeignKeyField(model_name='models.ShardID',
                               related_name='customerlogin_shardid',
                               to_field='shard_id',
                               on_delete=fields.RESTRICT,
                               null=True)
    seq_num: fields.ForeignKeyNullableRelation[SequenceNum] = \
        fields.ForeignKeyField(model_name='models.SequenceNum',
                               related_name='customerlogin_seqnum',
                               to_field='seq_num',
                               on_delete=fields.RESTRICT,
                               null=True)
    event_timedate = fields.DatetimeField(null=True)
    user_id = fields.UUIDField(null=True)
    device_token = fields.UUIDField(null=True)
    user_ip = fields.CharField(256, null=True)
    user_agent = fields.CharField(1024, null=True)
    client_id = fields.CharField(256, null=True)
    process = fields.CharField(256, null=True)
    auth_result = fields.CharField(256, null=True)
    auth_fail_cause = fields.CharField(256, null=True)
    plain_email = fields.CharField(256, null=True)


CustomerLogin_Pydantic = pydantic_model_creator(CustomerLogin, name='CustomerLogin')
CustomerLogin_PydanticIn = pydantic_model_creator(CustomerLogin, name='CustomerLoginIn', exclude_readonly=True)
``` | open | 2021-04-05T05:51:19Z | 2021-04-11T03:00:50Z | https://github.com/graphql-python/graphene/issues/1320 | [
"✨ enhancement"
] | jcf-dev | 1 |
aio-libs-abandoned/aioredis-py | asyncio | 1,103 | Redis.close() doesn't close connection pool created in __init__ | This results in delayed cleanup via `Connection.__del__` that causes various errors. Circular references in aioredis make it harder to debug. The problem is especially clearly seen when testing with pytest-asyncio that uses a separate event loop. Just upgrading from aioredis 1.3.1 to 2.0.0 leads to the following errors in setup for random test after the test that creates aioredis client:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/aioredis/connection.py", line 759, in disconnect
self._writer.close()
File "/usr/local/lib/python3.8/asyncio/streams.py", line 353, in close
return self._transport.close()
File "/usr/local/lib/python3.8/asyncio/selector_events.py", line 690, in close
self._loop.call_soon(self._call_connection_lost, None)
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 719, in call_soon
self._check_closed()
File "/usr/local/lib/python3.8/asyncio/base_events.py", line 508, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
```
Here the client is created with one loop and creates transport bound to it, then `__del__` is called when some other loop is running causing this transport to close. But this transport uses the loop stored when it's created, which is already closed.
With `gc.collect()` after each test it becomes much simpler:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/_pytest/runner.py", line 311, in from_call
result: Optional[TResult] = func()
File "/usr/local/lib/python3.8/site-packages/_pytest/runner.py", line 255, in <lambda>
lambda: ihook(item=item, **kwds), when=when, reraise=reraise
File "/usr/local/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/usr/local/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/usr/local/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
File "/usr/local/lib/python3.8/site-packages/pluggy/callers.py", line 203, in _multicall
gen.send(outcome)
File "/usr/local/lib/python3.8/site-packages/_pytest/unraisableexception.py", line 93, in pytest_runtest_teardown
yield from unraisable_exception_runtest_hook()
File "/usr/local/lib/python3.8/site-packages/_pytest/unraisableexception.py", line 78, in unraisable_exception_runtest_hook
warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
pytest.PytestUnraisableExceptionWarning: Exception ignored in: <socket.socket fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>
Traceback (most recent call last):
File "tests/fixtures.py", line 99, in event_loop
gc.collect()
ResourceWarning: unclosed <socket.socket fd=34, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('172.22.0.5', 51708), raddr=('172.22.0.3', 6379)>
```
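For reference, a full teardown in a pytest-asyncio fixture looks roughly like this (a sketch; the exact fixture decorator and client construction depend on your pytest-asyncio setup):
```python
import aioredis
import pytest


@pytest.fixture
async def redis_client():
    # aioredis 2.0: from_url creates a Redis client backed by its own ConnectionPool
    client = aioredis.from_url("redis://localhost")
    try:
        yield client
    finally:
        await client.close()
        # Explicitly disconnect the pool so no Connection is left for __del__ to clean up
        await client.connection_pool.disconnect()
```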
The workaround is simple:
```
await client.close()
+ await client.connection_pool.disconnect()
``` | closed | 2021-08-11T16:01:08Z | 2022-02-07T08:44:36Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/1103 | [] | ods | 12 |
opengeos/leafmap | streamlit | 916 | Reorder layers interactively | From my perspective, it would be highly beneficial to have the ability to reorder raster layers interactively. In my case, I often visualize multiple raster layers on a map but don’t always know the ideal layer order beforehand. Being able to adjust the layer arrangement on the fly would provide valuable insights. For example, plotting tiles that contain houses on top of a larger background image would be much easier with this interactive functionality.
What is your opinion on that matter?
Thank you in advance. | closed | 2024-10-11T10:57:09Z | 2024-10-14T08:08:26Z | https://github.com/opengeos/leafmap/issues/916 | [
"Feature Request"
] | karantai | 1 |
writer/writer-framework | data-visualization | 506 | Collapsable navigation bar | I'm a huge fan of the collapsible left bar. Is it possible to add a similar one that is horizontal (top and/or bottom)? Maybe the positioning on the page could be an option, like the contents of columns are.
Or, alternatively, a section-like element that can be collapsed and expanded. | closed | 2024-08-04T21:03:46Z | 2024-10-26T09:33:58Z | https://github.com/writer/writer-framework/issues/506 | [
"enhancement",
"wip"
] | hallvardnmbu | 3 |
itamarst/eliot | numpy | 359 | Hypothesis tests have issues (running too slowly? Flaky?) | E.g. https://travis-ci.org/itamarst/eliot/jobs/490759458 | closed | 2019-02-10T15:15:03Z | 2019-02-10T22:22:17Z | https://github.com/itamarst/eliot/issues/359 | [] | itamarst | 0 |
man-group/arctic | pandas | 494 | 'utf-8' codec can't decode byte 0xf3 during Setup | #### Arctic Version
The latest GitHub version at the moment of writing (also 1.58 on PyPI)
#### Platform and version
Python 3.6 with Anaconda x64 under Win7 x64
#### Description of problem and/or code sample that reproduces the issue
I do:
`python -m pip install arctic-master.zip`
I get:
```
Exception:
Traceback (most recent call last):
File "C:\Anaconda3\lib\site-packages\pip\compat\__init__.py", line 73, in console_to_str
return s.decode(sys.__stdout__.encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf3 in position 116: invalid continuation byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Anaconda3\lib\site-packages\pip\basecommand.py", line 215, in main
status = self.run(options, args)
File "C:\Anaconda3\lib\site-packages\pip\commands\install.py", line 335, in run
wb.build(autobuilding=True)
File "C:\Anaconda3\lib\site-packages\pip\wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
File "C:\Anaconda3\lib\site-packages\pip\req\req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "C:\Anaconda3\lib\site-packages\pip\req\req_set.py", line 634, in _prepare_file
abstract_dist.prep_for_dist()
File "C:\Anaconda3\lib\site-packages\pip\req\req_set.py", line 129, in prep_for_dist
self.req_to_install.run_egg_info()
File "C:\Anaconda3\lib\site-packages\pip\req\req_install.py", line 439, in run_egg_info
command_desc='python setup.py egg_info')
File "C:\Anaconda3\lib\site-packages\pip\utils\__init__.py", line 676, in call_subprocess
line = console_to_str(proc.stdout.readline())
File "C:\Anaconda3\lib\site-packages\pip\compat\__init__.py", line 75, in console_to_str
return s.decode('utf_8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf3 in position 116: invalid continuation byte
```
| closed | 2018-01-22T13:07:16Z | 2018-01-22T14:23:42Z | https://github.com/man-group/arctic/issues/494 | [] | SanPen | 1 |
abhiTronix/vidgear | dash | 99 | Announcement: Dropped support for Python 3.5 and below legacies. | Hello everyone,
VidGear has always had an inclination towards inventing something new and innovative with its APIs. VidGear is now moving into its next phase with the new [WebGear API](https://github.com/abhiTronix/vidgear/wiki/WebGear#webgear-api), which is built on a powerful ASGI library called [Starlette](https://www.starlette.io/). The [Starlette](https://www.starlette.io/) library strictly supports Python 3.6+ legacies only, since asyncio was substantially reworked in Python 3.6 _(especially the support for asynchronous generators, [**PEP 0525**](https://www.python.org/dev/peps/pep-0525/))_.
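For illustration, here is the kind of construct this depends on, which is valid syntax only from Python 3.6 onwards (a minimal, self-contained sketch, not actual vidgear code):
```python
import asyncio


async def frame_producer():
    # Asynchronous generator (PEP 525): requires Python 3.6+
    for i in range(3):
        await asyncio.sleep(0.01)  # pretend we are waiting on a camera frame
        yield "frame-{}".format(i).encode()


async def consume():
    async for frame in frame_producer():
        print(frame)


asyncio.get_event_loop().run_until_complete(consume())
```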
Thereby, keeping vidgear's future enhancements and its incompatibility with Python 3.5 and below legacies in mind, **I'm dropping support for Python 3.5 and below legacies in the next upcoming version (i.e. `v0.1.7`). Kindly make adjustments to your applications accordingly. Thank you.** | closed | 2020-01-18T09:43:43Z | 2020-04-30T14:34:10Z | https://github.com/abhiTronix/vidgear/issues/99 | [
"ENHANCEMENT :zap:",
"SOLVED :checkered_flag:",
"ANNOUNCEMENT :loudspeaker:"
] | abhiTronix | 3 |
explosion/spaCy | deep-learning | 12,383 | Training transformer model goes from score 0.97 to ZERO | ### Discussed in https://github.com/explosion/spaCy/discussions/12301
<div type='discussions-op-text'>
<sup>Originally posted by **mbrunecky** February 18, 2023</sup>
I am training NER using a transformer model.
On one of my data sets, during epoch 2, the score reaches 0.97 and then (after a huge loss) drops to ZERO, where it stays until the process dies with an out-of-memory error.
What should I be looking for as the reason for this behavior?
```
02/18-02:52:32.282 ============================= Training pipeline =============================[0m
02/18-02:52:32.282 [i] Pipeline: ['transformer', 'ner', 'doc_cleaner']
02/18-02:52:32.282 [i] Initial learn rate: 0.0
02/18-02:52:32.282 E # LOSS TRANS... LOSS NER ENTS_F ENTS_P ENTS_R SCORE
02/18-02:52:32.282 --- ------ ------------- -------- ------ ------ ------ ------
02/18-02:53:26.942 0 0 741.03 842.20 0.83 0.44 6.68 0.03
02/18-03:00:53.389 0 800 35387.67 131378.27 92.45 91.63 93.28 0.93
02/18-03:08:21.388 0 1600 846.64 93264.55 92.85 92.78 92.91 0.93
02/18-03:15:56.981 0 2400 5107.06 68810.17 94.86 95.75 93.99 0.95
02/18-03:23:40.199 0 3200 23586.03 35748.45 95.69 96.39 95.01 0.96
02/18-03:31:42.270 0 4000 3324.74 10904.08 95.27 95.47 95.08 0.95
02/18-03:40:10.199 1 4800 69579.98 3293.41 95.71 95.29 96.13 0.96
02/18-03:49:08.304 1 5600 15203.48 1351.42 96.14 96.01 96.27 0.96
02/18-03:58:35.240 1 6400 5012.19 1022.37 96.19 96.33 96.06 0.96
02/18-04:08:44.572 1 7200 2621.33 943.09 95.85 95.30 96.40 0.96
02/18-04:19:21.697 1 8000 2262.92 829.70 96.75 97.13 96.37 0.97
02/18-04:31:10.735 1 8800 10229.21 982.74 95.90 97.48 94.37 0.96
02/18-04:43:10.557 2 9600 29553.29 1354.11 96.03 95.29 96.78 0.96
02/18-04:56:31.975 2 10400 3775.07 824.47 96.61 97.12 96.10 0.97
02/18-05:10:22.435 2 11200 2795971.49 12601.45 0.00 0.00 0.00 0.00
02/18-05:25:14.185 2 12000 513981.72 22502.53 0.00 0.00 0.00 0.00
02/18-05:40:56.915 2 12800 40347.06 18249.37 0.00 0.00 0.00 0.00
02/18-05:59:26.751 2 13600 34795.68 18328.94 0.00 0.00 0.00 0.00
02/18-06:18:05.600 3 14400 32507.22 19082.38 0.00 0.00 0.00 0.00
02/18-06:37:15.405 3 15200 27791.56 18447.91 0.00 0.00 0.00 0.00
02/18-06:57:16.382 3 16000 25837.16 18390.90 0.00 0.00 0.00 0.00
02/18-06:57:26.490 [+] Saved pipeline to output directory
02/18-06:59:28.779 Invoked train_run_004:: process finished, exit value=-1073741571 (0xc00000fd)
```
Configuration:
```
[paths]
train = "L:\\training\\CA\\PLACER\\FEB23\\DMOD\\train"
dev = "L:\\training\\CA\\PLACER\\FEB23\\DMOD\\tval"
vectors = null
init_tok2vec = null
[system]
gpu_allocator = "pytorch"
seed = 0
[nlp]
lang = "en"
pipeline = ["transformer","ner","doc_cleaner"]
batch_size = 80
disabled = []
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
[nlp.before_creation]
@callbacks = "adjust_stop_words"
add_stop_words = []
rem_stop_words = ["amount","and","as","at","between","by","eight","eleven","each","except","fifteen","fifty","first","five","for","formerly","forty","four","hereby","herein","nine","of","six","sixty","ten","third","three","to","twelve","twenty","two"]
debug = true
[components]
[components.doc_cleaner]
factory = "doc_cleaner"
silent = true
[components.doc_cleaner.attrs]
tensor = null
_.trf_data = null
[components.ner]
factory = "ner"
incorrect_spans_key = null
moves = null
scorer = {"@scorers":"spacy.ner_scorer.v1"}
update_with_oracle_cut_size = 128
[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 80
maxout_pieces = 2
use_upper = false
nO = null
[components.ner.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"
[components.transformer]
factory = "transformer"
max_batch_items = 2048
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}
[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "roberta-base"
mixed_precision = true
[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 80
[components.transformer.model.grad_scaler_config]
[components.transformer.model.tokenizer_config]
use_fast = true
[components.transformer.model.transformer_config]
[corpora]
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = true
limit = 0
augmenter = null
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
gold_preproc = true
limit = 0
augmenter = null
[training]
accumulate_gradient = 3
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
patience = 8000
max_epochs = 0
max_steps = 32000
eval_frequency = 800
frozen_components = []
before_to_disk = null
annotating_components = []
[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 1536
buffer = 256
get_length = null
[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 32000
initial_rate = 0.00005
[training.score_weights]
ents_f = 0.5
ents_p = 0.2
ents_r = 0.3
ents_per_type = null
[pretraining]
[initialize]
vectors = null
init_tok2vec = null
vocab_data = null
lookups = null
before_init = null
after_init = null
[initialize.components]
[initialize.tokenizer]
```</div> | open | 2023-03-08T08:55:36Z | 2023-03-08T08:55:36Z | https://github.com/explosion/spaCy/issues/12383 | [
"bug",
"feat / ner",
"perf / memory",
"feat / training",
"feat / transformer"
] | svlandeg | 0 |
python-gitlab/python-gitlab | api | 2,700 | Are the parent commits of a commit accessible? | ## Description of the problem, including code/CLI snippet
I was looking for the ids of the parent commits for a specific commit. I know that GitLab returns them as `parent_ids` when calling `GET /projects/:id/repository/commits/:sha`. But I wasn't able to find them in the `ProjectCommit` class.
Did I overlook them or do we have to add them to `ProjectCommit` as they are missing at the moment?
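For context, this is roughly how I checked (a sketch with placeholder host/token/SHA values, assuming the object simply mirrors whatever the API response contains):
```python
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="<token>")  # placeholders
project = gl.projects.get("group/project")
commit = project.commits.get("abc123def")  # hypothetical commit SHA

# Inspect the raw attributes the server returned for this commit
print(sorted(commit.attributes.keys()))

# If parent_ids is part of the response, it should be reachable directly
print(commit.parent_ids)
```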
## Specifications
- python-gitlab version: v4.0.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): not relevant
| closed | 2023-10-20T07:12:22Z | 2023-10-26T14:53:04Z | https://github.com/python-gitlab/python-gitlab/issues/2700 | [
"support"
] | kayman-mk | 2 |
Lightning-AI/pytorch-lightning | deep-learning | 20,646 | Model with dropout probability=1 keeps learning (PyTorch version doesn't) | ### Bug description
Setting dropout doesn't work as expected in a model that uses a pretrained transformer from Hugging Face (BertForSequenceClassification).
Expected result: Setting the dropout probability to 1 should prevent the model from learning.
Actual result: The model keeps learning, i.e. loss decreases each epoch.
Why it looks like a bug in Lightning:
- After running the PyTorch Lightning code, printing the model shows `Dropout(p=1, inplace=False)`, so the config setting is made correctly.
- The PyTorch code produces the expected result, where the loss doesn't change from epoch to epoch.
### What version are you seeing the problem on?
v2.5
### How to reproduce the bug
PyTorch Lightning code:
```python
import torch
from pytorch_lightning import LightningModule, Trainer
from transformers import BertTokenizer, BertConfig, BertForSequenceClassification
from torch.utils.data import DataLoader, TensorDataset
class BertClassifier(LightningModule):
    def __init__(self, dropout_prob=1):
        super().__init__()
        self.config = BertConfig.from_pretrained('bert-base-uncased')
        self.config.hidden_dropout_prob = dropout_prob
        self.model = BertForSequenceClassification.from_pretrained('bert-base-uncased', config=self.config)

    def forward(self, **inputs):
        return self.model(**inputs)

    def training_step(self, batch, batch_idx):
        # Unpack the batch into proper format for the model
        input_ids, attention_mask, labels = batch
        # Create the inputs dictionary expected by the BERT model
        inputs = {
            'input_ids': input_ids,
            'attention_mask': attention_mask,
            'labels': labels
        }
        outputs = self.model(**inputs)
        loss = outputs.loss
        self.log('train_loss', loss, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-5)
# Training data
texts = ["red", "green", "blue", "hot", "warm", "cold"]
labels = [0, 0, 0, 1, 1, 1]
# Prepare data
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
encoded_inputs = tokenizer(texts, return_tensors="pt", padding=True)
input_ids = encoded_inputs['input_ids']
attention_mask = encoded_inputs['attention_mask']
labels_tensor = torch.tensor(labels)
# Create dataset and dataloader
dataset = TensorDataset(input_ids, attention_mask, labels_tensor)
dataloader = DataLoader(dataset, batch_size=6)
# Training
model = BertClassifier(dropout_prob=1)
trainer = Trainer(max_epochs=5)
trainer.fit(model, dataloader)
```
PyTorch code:
```python
import torch
from transformers import BertTokenizer, BertConfig, BertForSequenceClassification
# Training data
texts = ["red", "green", "blue", "hot", "warm", "cold"]
labels = [0, 0, 0, 1, 1, 1]
# PyTorch implementation
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
config = BertConfig.from_pretrained('bert-base-uncased')
config.hidden_dropout_prob = 1
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', config=config)
inputs = tokenizer(texts, return_tensors="pt")
inputs["labels"] = torch.tensor(labels)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(5):
    outputs = model(**inputs)
    loss = outputs.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"Epoch: {epoch+1}, Loss: {loss.item()}")
```
### Error messages and logs
Console output (progress bar) from PyTorch Lightning:
```
Epoch 0: 100%|████████| 1/1 [00:00<00:00, 1.44it/s, v_num=8, train_loss=0.713]
Epoch 1: 100%|████████| 1/1 [00:00<00:00, 1.94it/s, v_num=8, train_loss=0.618]
Epoch 2: 100%|████████| 1/1 [00:00<00:00, 2.04it/s, v_num=8, train_loss=0.581]
Epoch 3: 100%|████████| 1/1 [00:00<00:00, 1.91it/s, v_num=8, train_loss=0.556]
Epoch 4: 100%|████████| 1/1 [00:00<00:00, 2.03it/s, v_num=8, train_loss=0.528]
```
Console output from PyTorch:
```
Epoch: 1, Loss: 0.6931471824645996
Epoch: 2, Loss: 0.6931471824645996
Epoch: 3, Loss: 0.6931471824645996
Epoch: 4, Loss: 0.6931471824645996
Epoch: 5, Loss: 0.6931471824645996
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version: 2.5.0.post0
#- PyTorch Version: 2.6.0+cpu
#- Python version: 3.12.9
#- OS: Linux
#- CUDA/cuDNN version: NA
#- GPU models and configuration: NA
#- How you installed Lightning: pip
```
</details>
### More info
_No response_ | closed | 2025-03-15T14:35:58Z | 2025-03-16T02:42:00Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20646 | [
"bug",
"needs triage",
"ver: 2.5.x"
] | jedick | 1 |
jpadilla/django-rest-framework-jwt | django | 166 | Question: behavior as remote wsgi app vs. local runserver | Hi,
I have been using this lib for our JWT and it has worked great with `python manage.py runserver`, generating and authenticating tokens against `localhost:8000`, but I am having a problem with the exact same config running as a WSGI app under `mod_wsgi` on Apache 2.2.
It basically behaves as if the authentication hook is not there. The config is all the same (and uses the required settings per the djangorestframework-jwt docs in config.py); the only difference is that remotely my Django app runs in a sub-directory `/django` vs. `/` locally.
On the remote server, I can POST to my url set with `url(r'^app/login','extensions.rest_framework_jwt.views.obtain_jwt_token')` and get a token issued, but when I use the same token I got back ($TOKEN) to try to access a protected resource ($MY_AUTH_REQUIRING_URL) like so:
```
curl -X GET -H "Content-Type: application/json" \
-H "Authorization: JWT $TOKEN" $MY_AUTH_REQUIRING_URL
```
I get:
```
{
"detail": "Authentication credentials were not provided."
}
```
Not sure what I might be missing or how I might debug this...
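One thing I still plan to check (an assumption on my part, not verified yet): as far as I know, Apache/mod_wsgi does not pass the `Authorization` header through to the WSGI application by default, so the token might never reach the authentication class. The directive that is supposed to enable it would go in the Apache/vhost configuration, something like:
```apache
# Allow the Authorization header to reach the Django/WSGI application
WSGIPassAuthorization On
```
Does that sound like a plausible cause, or is there something else I should look at first?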
| closed | 2015-09-29T16:27:42Z | 2017-10-09T22:05:40Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/166 | [] | cerdman | 2 |
deepset-ai/haystack | pytorch | 8,931 | Remove tags 2.x, etc. in Haystack website | closed | 2025-02-25T10:56:29Z | 2025-03-11T11:02:42Z | https://github.com/deepset-ai/haystack/issues/8931 | [
"P2"
] | julian-risch | 0 |
InstaPy/InstaPy | automation | 6,497 | No interact | Doesn't interact with users
**Code:**
```
session.set_do_like(enabled=True,
                    percentage=100)
session.set_user_interact(amount=1,
                          randomize=False,
                          percentage=100)
session.interact_user_likers(usernames=["_tomato_333_"],
                             posts_grab_amount=1,
                             interact_likers_per_post=20,
                             randomize=True)
```
**Log:**
```
Traceback (most recent call last):
File "C:\Users\Серв\Desktop\Bot\inst_like.py", line 32, in <module>
session.interact_user_likers(usernames=["user"],
File "C:\Users\Серв\AppData\Local\Programs\Python\Python310\lib\site-packages\instapy\instapy.py", line 3332, in interact_user_likers
post_urls = get_photo_urls_from_profile(
File "C:\Users\Серв\AppData\Local\Programs\Python\Python310\lib\site-packages\instapy\commenters_util.py", line 449, in get_photo_urls_from_profile
photos_a_elems = browser.find_elements(
File "C:\Users\Серв\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 1279, in find_elements
return self.execute(Command.FIND_ELEMENTS, {
File "C:\Users\Серв\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 424, in execute
self.error_handler.check_response(response)
File "C:\Users\Серв\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 247, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.JavascriptException: Message: Cyclic object value
Stacktrace:
WebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:183:5
JavaScriptError@chrome://remote/content/shared/webdriver/Errors.jsm:362:5
evaluate.assertAcyclic@chrome://remote/content/marionette/evaluate.js:52:11
evaluate.toJSON@chrome://remote/content/marionette/evaluate.js:323:14
receiveMessage@chrome://remote/content/marionette/actors/MarionetteCommandsChild.jsm:177:31
``` | closed | 2022-02-11T10:27:23Z | 2022-03-04T15:59:19Z | https://github.com/InstaPy/InstaPy/issues/6497 | [] | alexpzpz | 3 |
explosion/spaCy | data-science | 12,933 | ValueError: [E949] Unable to align tokens for the predicted and reference docs. Windows, spacy 3.6.0, Python 3.8.10 | ### Discussed in https://github.com/explosion/spaCy/discussions/12932
<div type='discussions-op-text'>
<sup>Originally posted by **PeachDew** August 24, 2023</sup>
Hi! I referred to spacy's custom tokenization doc here: https://spacy.io/usage/linguistic-features#custom-tokenizer-training
and tried using a custom-trained tokenizer in my NER project.
Here is my functions.py file:
<details>
```
from tokenizers import Tokenizer
from spacy.tokens import Doc
import spacy
import pickle
TK_PATH = "./tokenizers/WPC-trained.json"
tokenizer = Tokenizer.from_file(TK_PATH)
class CustomTokenizer:
    def __init__(self, vocab):
        self.vocab = vocab
        self._tokenizer = tokenizer

    def __call__(self, text):
        tokens = self._tokenizer.encode(text)
        words = []
        spaces = []
        for i, (text, (start, end)) in enumerate(zip(tokens.tokens, tokens.offsets)):
            words.append(text)
            if i < len(tokens.tokens) - 1:
                # If next start != current end we assume a space in between
                next_start, next_end = tokens.offsets[i + 1]
                spaces.append(next_start > end)
            else:
                spaces.append(True)
        return Doc(self.vocab, words=words, spaces=spaces)

    def to_bytes(self):
        return pickle.dumps(self.__dict__)

    def from_bytes(self, data):
        self.__dict__.update(pickle.loads(data))

    def to_disk(self, path, **kwargs):
        with open(path, 'wb') as file_:
            file_.write(self.to_bytes())

    def from_disk(self, path, **kwargs):
        with open(path, 'rb') as file_:
            self.from_bytes(file_.read())


@spacy.registry.tokenizers("custom_tokenizer")
def create_whitespace_tokenizer():
    def create_tokenizer(nlp):
        return CustomTokenizer(nlp.vocab)

    return create_tokenizer
```
</details>
and in my config.cfg:
```
[nlp.tokenizer]
@tokenizers = "custom_tokenizer"
```
I trained different tokenizers, and the BPE one worked without any hiccups but when training using the WordLevel tokenizer:
```
ValueError: [E949] Unable to align tokens for the predicted and reference docs.
It is only possible to align the docs when both texts are the same except for whitespace and capitalization.
The predicted tokens start with: ['AAA', 'BBB', ':', '0']. The reference tokens start with: ['AAA', 'BBB:0.999', '"', '\r']
```
It seems that spacy is not using my custom tokenizer for prediction. Or is it an issue with an additional alignment step I have to include in the config?
I used https://huggingface.co/docs/tokenizers/quicktour to train my custom tokenizers.
</div> | closed | 2023-08-24T04:59:43Z | 2023-09-30T00:02:08Z | https://github.com/explosion/spaCy/issues/12933 | [
"duplicate"
] | PeachDew | 2 |
httpie/cli | python | 839 | Multi-connection download for --download mode | `aria2` command-line download utility and `pySmartDL` Python library can download files using multiple connections. It would be nice if `httpie` had this feature too. | closed | 2020-01-19T13:13:42Z | 2020-05-23T19:05:26Z | https://github.com/httpie/cli/issues/839 | [] | rominf | 1 |
paperless-ngx/paperless-ngx | django | 7,319 | [BUG] Starting multiple consuming threads on the same file leads to File not found error | ### Description
Thank you for the great tool!!!
I copied a pdf file via SMB from my desktop into the consume folder. The file task starts to work. Before finishing the task futher tasks on the same file have been added to the queue. After the first task is finished, next task has started, but the file is not there any more, because the first task has moved the file.
Seems to be the same issue as described here: https://github.com/paperless-ngx/paperless-ngx/discussions/2682
It was suggested in https://github.com/paperless-ngx/paperless-ngx/issues/167 that https://github.com/paperless-ngx/paperless-ngx/pull/483 might fix the issue but setting `PAPERLESS_CONSUMER_INOTIFY_DELAY=5.0` did not help.
It only appears, when I set `PAPERLESS_CONSUMER_RECURSIVE=true`, so I think there is a problem with this confiig.
**Edit: I found this, so the issue could be closed**
> Some file systems such as NFS network shares don't support file system notifications with inotify. When storing the consumption directory on such a file system, paperless will not pick up new files with the default configuration. You will need to use [PAPERLESS_CONSUMER_POLLING](https://docs.paperless-ngx.com/configuration/#PAPERLESS_CONSUMER_POLLING), which will disable inotify. See [here](https://docs.paperless-ngx.com/configuration/#polling).
### Steps to reproduce
1. set PAPERLESS_CONSUMER_RECURSIVE=true
2. store new document (which is a liitle bit larger) in the consume directory (via SMB on a synology NAS)
### Webserver logs
```bash
[2024-07-25 09:59:09,726] [INFO] [paperless.management.consumer] Using inotify to watch directory for changes: /usr/src/paperless/consume
[2024-07-25 09:59:16,782] [INFO] [paperless.management.consumer] Adding /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf to the task queue.
[2024-07-25 09:59:17,002] [INFO] [paperless.management.consumer] Using inotify to watch directory for changes: /usr/src/paperless/consume
[2024-07-25 09:59:17,479] [DEBUG] [paperless.tasks] Skipping plugin CollatePlugin
[2024-07-25 09:59:17,479] [DEBUG] [paperless.tasks] Skipping plugin BarcodePlugin
[2024-07-25 09:59:17,480] [DEBUG] [paperless.tasks] Executing plugin WorkflowTriggerPlugin
[2024-07-25 09:59:17,485] [INFO] [paperless.tasks] WorkflowTriggerPlugin completed with:
[2024-07-25 09:59:17,486] [DEBUG] [paperless.tasks] Executing plugin ConsumeTaskPlugin
[2024-07-25 09:59:17,505] [INFO] [paperless.consumer] Consuming 2024-02-03_#HomeInstead#Pflegedienst.pdf
[2024-07-25 09:59:17,512] [DEBUG] [paperless.consumer] Detected mime type: application/pdf
[2024-07-25 09:59:17,534] [DEBUG] [paperless.consumer] Parser: RasterisedDocumentParser
[2024-07-25 09:59:17,541] [DEBUG] [paperless.consumer] Parsing 2024-02-03_#HomeInstead#Pflegedienst.pdf...
[2024-07-25 09:59:17,936] [INFO] [paperless.parsing.tesseract] pdftotext exited 0
[2024-07-25 09:59:18,892] [DEBUG] [paperless.parsing.tesseract] Calling OCRmyPDF with args: {'input_file': PosixPath('/tmp/paperless/paperless-ngx1abf6zbv/2024-02-03_#HomeInstead#Pflegedienst.pdf'), 'output_file': PosixPath('/tmp/paperless/paperless-8hr6ls0m/archive.pdf'), 'use_threads': True, 'jobs': '1', 'language': 'deu', 'output_type': 'pdfa', 'progress_bar': False, 'color_conversion_strategy': 'RGB', 'skip_text': True, 'clean': True, 'deskew': True, 'rotate_pages': True, 'rotate_pages_threshold': 12.0, 'sidecar': PosixPath('/tmp/paperless/paperless-8hr6ls0m/sidecar.txt')}
[2024-07-25 09:59:21,662] [INFO] [ocrmypdf._pipeline] skipping all processing on this page
[2024-07-25 09:59:21,663] [INFO] [ocrmypdf._pipeline] skipping all processing on this page
[2024-07-25 09:59:21,664] [INFO] [ocrmypdf._pipeline] skipping all processing on this page
[2024-07-25 09:59:21,666] [INFO] [ocrmypdf._pipeline] skipping all processing on this page
[2024-07-25 09:59:21,667] [INFO] [ocrmypdf._pipeline] skipping all processing on this page
[2024-07-25 09:59:21,680] [INFO] [ocrmypdf._pipelines.ocr] Postprocessing...
[2024-07-25 09:59:23,534] [INFO] [paperless.management.consumer] Adding /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf to the task queue.
[2024-07-25 09:59:23,815] [INFO] [paperless.management.consumer] Using inotify to watch directory for changes: /usr/src/paperless/consume
[2024-07-25 09:59:24,230] [WARNING] [ocrmypdf._metadata] Some input metadata could not be copied because it is not permitted in PDF/A. You may wish to examine the output PDF's XMP metadata.
[2024-07-25 09:59:27,969] [INFO] [ocrmypdf._pipeline] Image optimization ratio: 1.14 savings: 12.5%
[2024-07-25 09:59:27,970] [INFO] [ocrmypdf._pipeline] Total file size ratio: 0.93 savings: -7.8%
[2024-07-25 09:59:27,981] [INFO] [ocrmypdf._pipelines._common] Output file is a PDF/A-2B (as expected)
[2024-07-25 09:59:30,850] [DEBUG] [paperless.parsing.tesseract] Incomplete sidecar file: discarding.
[2024-07-25 09:59:31,220] [INFO] [paperless.management.consumer] Adding /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf to the task queue.
[2024-07-25 09:59:31,435] [INFO] [paperless.management.consumer] Using inotify to watch directory for changes: /usr/src/paperless/consume
[2024-07-25 09:59:31,724] [INFO] [paperless.parsing.tesseract] pdftotext exited 0
[2024-07-25 09:59:31,729] [DEBUG] [paperless.consumer] Generating thumbnail for 2024-02-03_#HomeInstead#Pflegedienst.pdf...
[2024-07-25 09:59:31,735] [DEBUG] [paperless.parsing] Execute: convert -density 300 -scale 500x5000> -alpha remove -strip -auto-orient -define pdf:use-cropbox=true /tmp/paperless/paperless-8hr6ls0m/archive.pdf[0] /tmp/paperless/paperless-8hr6ls0m/convert.webp
[2024-07-25 09:59:34,601] [INFO] [paperless.parsing] convert exited 0
[2024-07-25 09:59:36,862] [DEBUG] [paperless.consumer] Saving record to database
[2024-07-25 09:59:36,863] [DEBUG] [paperless.consumer] Creation date from parse_date: 2022-03-01 00:00:00+01:00
[2024-07-25 09:59:38,750] [INFO] [paperless.management.consumer] Adding /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf to the task queue.
[2024-07-25 09:59:38,998] [INFO] [paperless.management.consumer] Using inotify to watch directory for changes: /usr/src/paperless/consume
[2024-07-25 09:59:38,999] [INFO] [paperless.handlers] Assigning correspondent VOLKSWOHL BUND Sachversicherung AG to 2022-03-01 2024-02-03_#HomeInstead#Pflegedienst
[2024-07-25 09:59:39,055] [INFO] [paperless.handlers] Assigning document type Versicherungsschein to 2022-03-01 VOLKSWOHL BUND Sachversicherung AG 2024-02-03_#HomeInstead#Pflegedienst
[2024-07-25 09:59:39,107] [INFO] [paperless.handlers] Tagging "2022-03-01 VOLKSWOHL BUND Sachversicherung AG 2024-02-03_#HomeInstead#Pflegedienst" with "scan"
[2024-07-25 09:59:39,617] [DEBUG] [paperless.consumer] Deleting file /tmp/paperless/paperless-ngx1abf6zbv/2024-02-03_#HomeInstead#Pflegedienst.pdf
[2024-07-25 09:59:39,889] [DEBUG] [paperless.parsing.tesseract] Deleting directory /tmp/paperless/paperless-8hr6ls0m
[2024-07-25 09:59:39,891] [INFO] [paperless.consumer] Document 2022-03-01 VOLKSWOHL BUND Sachversicherung AG 2024-02-03_#HomeInstead#Pflegedienst consumption finished
[2024-07-25 09:59:39,902] [INFO] [paperless.tasks] ConsumeTaskPlugin completed with: Success. New document id 16 created
[2024-07-25 09:59:41,176] [DEBUG] [paperless.tasks] Skipping plugin CollatePlugin
[2024-07-25 09:59:41,177] [DEBUG] [paperless.tasks] Skipping plugin BarcodePlugin
[2024-07-25 09:59:41,178] [DEBUG] [paperless.tasks] Executing plugin WorkflowTriggerPlugin
[2024-07-25 09:59:41,183] [INFO] [paperless.tasks] WorkflowTriggerPlugin completed with:
[2024-07-25 09:59:41,184] [DEBUG] [paperless.tasks] Executing plugin ConsumeTaskPlugin
[2024-07-25 09:59:41,199] [ERROR] [paperless.consumer] Cannot consume /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf: File not found.
[2024-07-25 09:59:41,200] [ERROR] [paperless.tasks] ConsumeTaskPlugin failed: 2024-02-03_#HomeInstead#Pflegedienst.pdf: Cannot consume /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf: File not found.
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/tasks.py", line 151, in consume_file
msg = plugin.run()
^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/consumer.py", line 505, in run
self.pre_check_file_exists()
File "/usr/src/paperless/src/documents/consumer.py", line 309, in pre_check_file_exists
self._fail(
File "/usr/src/paperless/src/documents/consumer.py", line 302, in _fail
raise ConsumerError(f"{self.filename}: {log_message or message}") from exception
documents.consumer.ConsumerError: 2024-02-03_#HomeInstead#Pflegedienst.pdf: Cannot consume /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf: File not found.
[2024-07-25 09:59:42,881] [DEBUG] [paperless.tasks] Skipping plugin CollatePlugin
[2024-07-25 09:59:42,882] [DEBUG] [paperless.tasks] Skipping plugin BarcodePlugin
[2024-07-25 09:59:42,882] [DEBUG] [paperless.tasks] Executing plugin WorkflowTriggerPlugin
[2024-07-25 09:59:42,887] [INFO] [paperless.tasks] WorkflowTriggerPlugin completed with:
[2024-07-25 09:59:42,888] [DEBUG] [paperless.tasks] Executing plugin ConsumeTaskPlugin
[2024-07-25 09:59:42,903] [ERROR] [paperless.consumer] Cannot consume /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf: File not found.
[2024-07-25 09:59:42,904] [ERROR] [paperless.tasks] ConsumeTaskPlugin failed: 2024-02-03_#HomeInstead#Pflegedienst.pdf: Cannot consume /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf: File not found.
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/tasks.py", line 151, in consume_file
msg = plugin.run()
^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/consumer.py", line 505, in run
self.pre_check_file_exists()
File "/usr/src/paperless/src/documents/consumer.py", line 309, in pre_check_file_exists
self._fail(
File "/usr/src/paperless/src/documents/consumer.py", line 302, in _fail
raise ConsumerError(f"{self.filename}: {log_message or message}") from exception
documents.consumer.ConsumerError: 2024-02-03_#HomeInstead#Pflegedienst.pdf: Cannot consume /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf: File not found.
[2024-07-25 09:59:44,327] [DEBUG] [paperless.tasks] Skipping plugin CollatePlugin
[2024-07-25 09:59:44,328] [DEBUG] [paperless.tasks] Skipping plugin BarcodePlugin
[2024-07-25 09:59:44,328] [DEBUG] [paperless.tasks] Executing plugin WorkflowTriggerPlugin
[2024-07-25 09:59:44,334] [INFO] [paperless.tasks] WorkflowTriggerPlugin completed with:
[2024-07-25 09:59:44,335] [DEBUG] [paperless.tasks] Executing plugin ConsumeTaskPlugin
[2024-07-25 09:59:44,349] [ERROR] [paperless.consumer] Cannot consume /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf: File not found.
[2024-07-25 09:59:44,350] [ERROR] [paperless.tasks] ConsumeTaskPlugin failed: 2024-02-03_#HomeInstead#Pflegedienst.pdf: Cannot consume /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf: File not found.
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/tasks.py", line 151, in consume_file
msg = plugin.run()
^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/consumer.py", line 505, in run
self.pre_check_file_exists()
File "/usr/src/paperless/src/documents/consumer.py", line 309, in pre_check_file_exists
self._fail(
File "/usr/src/paperless/src/documents/consumer.py", line 302, in _fail
raise ConsumerError(f"{self.filename}: {log_message or message}") from exception
documents.consumer.ConsumerError: 2024-02-03_#HomeInstead#Pflegedienst.pdf: Cannot consume /usr/src/paperless/consume/scan/2024-02-03_#HomeInstead#Pflegedienst.pdf: File not found.
[2024-07-25 09:59:45,510] [INFO] [paperless.management.consumer] Using inotify to watch directory for changes: /usr/src/paperless/consume
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.1
### Host OS
Synology DS 918+, DSM 7.2
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-07-25T08:32:04Z | 2024-08-25T03:03:42Z | https://github.com/paperless-ngx/paperless-ngx/issues/7319 | [
"not a bug"
] | stemForge | 1 |
marimo-team/marimo | data-visualization | 3,502 | Lazy download buttons (well, the data should be lazy, not the button) | ### Description
Can the data for a download button be "computed when needed", i.e., only once the user clicks the button? Currently I have
```python
mo.download(
    data=expensive_function_generating_the_data(),
)
```
which means `expensive_function_generating_the_data` is called already when the button is created. Instead, I'd prefer something like
```python
mo.download(
    data=expensive_function_generating_the_data,
)
```
where the function would only generate/provide the data when the button is actually clicked, because the function handle `expensive_function_generating_the_data` is passed to the data keyword instead of its output `expensive_function_generating_the_data()`.
(Feature request follows advice by @mscolnick in response to my question on [discord](https://discord.com/channels/1059888774789730424/1330473664272928918/1330473664272928918))
### Suggested solution
Not sure, but @mscolnick said it might be possible.
### Alternative
_No response_
### Additional context
_No response_ | closed | 2025-01-19T17:02:35Z | 2025-01-20T03:57:01Z | https://github.com/marimo-team/marimo/issues/3502 | [
"enhancement"
] | bjoseru | 1 |
pydantic/logfire | fastapi | 136 | Expose TLS/Insecure params via Logfire config | ### Description
Add support for sending data to a URL that uses a self-signed cert and also support for specifying TLS cert/key/ca.
I believe the HTTP exporter from OpenTelemetry has an `insecure` param for doing this, but it is not exposed as part of LogfireConfig. There are also params for specifying the cert/key/CA.
https://opentelemetry.io/docs/specs/otel/protocol/exporter/
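For illustration, this is roughly what I would like to be able to write (the keyword arguments below are hypothetical, sketching the desired API rather than the current one):
```python
import logfire

# Hypothetical keyword arguments -- sketching the desired API, not the current one
logfire.configure(
    base_url="https://logfire.internal.example.com",   # self-hosted endpoint
    insecure=False,                                     # allow self-signed / plain connections when True
    certificate_file="/etc/certs/ca.pem",               # custom CA bundle
    client_certificate="/etc/certs/client.crt",         # mTLS client certificate
    client_key="/etc/certs/client.key",                 # mTLS client key
)
```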
These params are mostly needed for sending data to self-hosted endpoints or a self-hosted Logfire in the future. | open | 2024-05-06T13:56:31Z | 2024-08-01T13:09:28Z | https://github.com/pydantic/logfire/issues/136 | [
"Feature Request"
] | gaby | 9 |
graphql-python/graphene | graphql | 671 | Caching/Memoization of resolvers? | I've tried adding memoization to a resolver and cache the result in redis using flask-caching, see http://pythonhosted.org/Flask-Caching/#memoization
However, it does not work: the result is still recalculated on subsequent requests (with the same args, of course).
Is this because of some resolver "magic" happening? How can I make this work? I really need to be able to use memoization on some queries for graphene to be feasible.
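For reference, this is roughly what I tried (a sketch with assumed app/Redis settings and a placeholder data function); the decorated resolver still gets re-executed on every request:
```python
import graphene
from flask import Flask
from flask_caching import Cache

app = Flask(__name__)
cache = Cache(app, config={"CACHE_TYPE": "redis", "CACHE_REDIS_URL": "redis://localhost:6379/0"})


def fetch_expensive_data(arg):
    # Placeholder for the real, slow lookup
    return "result-for-{}".format(arg)


class Query(graphene.ObjectType):
    expensive_field = graphene.String(arg=graphene.String())

    @cache.memoize(timeout=300)  # this decoration appears to have no effect
    def resolve_expensive_field(self, info, arg=None):
        return fetch_expensive_data(arg)
```
Would memoizing a plain data function like `fetch_expensive_data` (and calling it from the resolver) be the recommended approach instead, given that the resolver also receives root/info arguments?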
Any other ideas how to achieve this type of caching? | closed | 2018-02-13T11:33:34Z | 2020-06-26T17:31:04Z | https://github.com/graphql-python/graphene/issues/671 | [] | HeyHugo | 2 |
mars-project/mars | pandas | 2,814 | [BUG] release_free_slot got wrong slot_id | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use: 8f322b496df58022f6b7dd3839f1ce6b2d119c73
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
```
2022-03-14 11:57:44,895 ERROR api.py:115 -- Got unhandled error when handling message ('release_free_slot', 0, (0, ('t7rQ0udN4qHUWr9x3qg1baTI', 'XPRUxfx1ld4Dzz797R5KZeGW')), {}) in actor b'numa-0_band_slot_manager' at 127.0.0.1:14887
Traceback (most recent call last):
File "mars/oscar/core.pyx", line 478, in mars.oscar.core._BaseActor.__on_receive__
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 481, in mars.oscar.core._BaseActor.__on_receive__
async with self._lock:
File "mars/oscar/core.pyx", line 482, in mars.oscar.core._BaseActor.__on_receive__
result = func(*args, **kwargs)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/workerslot.py", line 169, in release_free_slot
assert acquired_slot_id == slot_id, f"acquired_slot_id {acquired_slot_id} != slot_id {slot_id}"
AssertionError: acquired_slot_id 1 != slot_id 0
2022-03-14 11:57:44,897 ERROR execution.py:120 -- Failed to run subtask XPRUxfx1ld4Dzz797R5KZeGW on band numa-0
Traceback (most recent call last):
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 331, in internal_run_subtask
subtask_info.result = await self._retry_run_subtask(
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 420, in _retry_run_subtask
return await _retry_run(subtask, subtask_info, _run_subtask_once)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 107, in _retry_run
raise ex
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 67, in _retry_run
return await target_async_func(*args)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 412, in _run_subtask_once
await slot_manager_ref.release_free_slot(
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/context.py", line 189, in send
return self._process_result_message(result)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/context.py", line 70, in _process_result_message
raise message.as_instanceof_cause()
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/pool.py", line 542, in send
result = await self._run_coro(message.message_id, coro)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/pool.py", line 333, in _run_coro
return await coro
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/api.py", line 115, in __on_receive__
return await super().__on_receive__(message)
File "mars/oscar/core.pyx", line 506, in __on_receive__
raise ex
File "mars/oscar/core.pyx", line 478, in mars.oscar.core._BaseActor.__on_receive__
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 481, in mars.oscar.core._BaseActor.__on_receive__
async with self._lock:
File "mars/oscar/core.pyx", line 482, in mars.oscar.core._BaseActor.__on_receive__
result = func(*args, **kwargs)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/workerslot.py", line 169, in release_free_slot
assert acquired_slot_id == slot_id, f"acquired_slot_id {acquired_slot_id} != slot_id {slot_id}"
AssertionError: [address=127.0.0.1:14887, pid=42643] acquired_slot_id 1 != slot_id 0
2022-03-14 11:57:44,900 INFO processor.py:508 -- Time consuming to execute a subtask is 0.023977041244506836s with session_id t7rQ0udN4qHUWr9x3qg1baTI, subtask_id XPRUxfx1ld4Dzz797R5KZeGW
2022-03-14 11:57:44,903 ERROR api.py:115 -- Got unhandled error when handling message ('release_free_slot', 0, (1, ('t7rQ0udN4qHUWr9x3qg1baTI', 'XPRUxfx1ld4Dzz797R5KZeGW')), {}) in actor b'numa-0_band_slot_manager' at 127.0.0.1:14887
Traceback (most recent call last):
File "mars/oscar/core.pyx", line 478, in mars.oscar.core._BaseActor.__on_receive__
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 481, in mars.oscar.core._BaseActor.__on_receive__
async with self._lock:
File "mars/oscar/core.pyx", line 482, in mars.oscar.core._BaseActor.__on_receive__
result = func(*args, **kwargs)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/workerslot.py", line 168, in release_free_slot
acquired_slot_id = self._session_stid_to_slot.pop(acquired_session_stid)
KeyError: ('t7rQ0udN4qHUWr9x3qg1baTI', 'XPRUxfx1ld4Dzz797R5KZeGW')
2022-03-14 11:57:44,904 ERROR execution.py:120 -- Failed to run subtask XPRUxfx1ld4Dzz797R5KZeGW on band numa-0
Traceback (most recent call last):
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 331, in internal_run_subtask
subtask_info.result = await self._retry_run_subtask(
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 420, in _retry_run_subtask
return await _retry_run(subtask, subtask_info, _run_subtask_once)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 107, in _retry_run
raise ex
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 67, in _retry_run
return await target_async_func(*args)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 412, in _run_subtask_once
await slot_manager_ref.release_free_slot(
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/context.py", line 189, in send
return self._process_result_message(result)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/context.py", line 70, in _process_result_message
raise message.as_instanceof_cause()
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/pool.py", line 542, in send
result = await self._run_coro(message.message_id, coro)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/pool.py", line 333, in _run_coro
return await coro
```
6. Minimized code to reproduce the error.
` pytest --log-level=INFO -v -s mars/dataframe/indexing/tests/test_indexing_execution.py::test_series_getitem` | closed | 2022-03-14T03:59:14Z | 2022-03-15T03:17:29Z | https://github.com/mars-project/mars/issues/2814 | [
"type: bug",
"mod: scheduling service"
] | chaokunyang | 2 |