repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452)
---|---|---|---|---|---|---|---|---|---|---|---
autogluon/autogluon | computer-vision | 4,370 | [BUG] I run the hyper parameter optimization with these methods, so I always get this error | I run the hyperparameter optimization with these methods:
```
# Instantiate the TabularPredictor with the custom metric
predictor = TabularPredictor(label='target', problem_type='regression', eval_metric=ag_mean_squared_error_custom_scorer)
# Fit the model with hyperparameter tuning
predictor.fit(
train_data=train_data,
time_limit=3600,
presets='good_quality',
)
```
so I always get this error:
```
AutoGluon will fit 2 stack levels (L1 to L2) ...
Fitting 9 L1 models ...
Fitting model: LightGBMXT_BAG_L1 ... Training model for up to 2395.88s of the 3594.71s of remaining time.
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (8 workers, per: cpus=2, gpus=0, memory=0.41%)
Warning: Exception caused LightGBMXT_BAG_L1 to fail during training... Skipping this model.
ray::_ray_fit() (pid=13858, ip=10.233.115.226)
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/ensemble/fold_fitting_strategy.py", line 412, in _ray_fit
save_path = fold_model.save()
^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/abstract/abstract_model.py", line 1096, in save
save_pkl.save(path=file_path, object=self, verbose=verbose)
File "/opt/conda/lib/python3.11/site-packages/autogluon/common/savers/save_pkl.py", line 27, in save
save_with_fn(validated_path, object, pickle_fn, format=format, verbose=verbose, compression_fn=compression_fn, compression_fn_kwargs=compression_fn_kwargs)
File "/opt/conda/lib/python3.11/site-packages/autogluon/common/savers/save_pkl.py", line 47, in save_with_fn
pickle_fn(object, fout)
File "/opt/conda/lib/python3.11/site-packages/autogluon/common/savers/save_pkl.py", line 25, in pickle_fn
return pickle.dump(o, buffer, protocol=4)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_pickle.PicklingError: Can't pickle <function metric_1 at 0x7fd2d6747ce0>: attribute lookup metric_1 on __main__ failed
Detailed Traceback:
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 1904, in _train_and_save
model = self._train_single(X, y, model, X_val, y_val, total_resources=total_resources, **model_fit_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py", line 1844, in _train_single
model = model.fit(X=X, y=y, X_val=X_val, y_val=y_val, total_resources=total_resources, **model_fit_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/abstract/abstract_model.py", line 856, in fit
out = self._fit(**kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/ensemble/stacker_ensemble_model.py", line 165, in _fit
return super()._fit(X=X, y=y, time_limit=time_limit, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/ensemble/bagged_ensemble_model.py", line 288, in _fit
self._fit_folds(
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/ensemble/bagged_ensemble_model.py", line 714, in _fit_folds
fold_fitting_strategy.after_all_folds_scheduled()
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/ensemble/fold_fitting_strategy.py", line 668, in after_all_folds_scheduled
self._run_parallel(X, y, X_pseudo, y_pseudo, model_base_ref, time_limit_fold, head_node_id)
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/ensemble/fold_fitting_strategy.py", line 610, in _run_parallel
self._process_fold_results(finished, unfinished, fold_ctx)
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/ensemble/fold_fitting_strategy.py", line 572, in _process_fold_results
raise processed_exception
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/ensemble/fold_fitting_strategy.py", line 537, in _process_fold_results
fold_model, pred_proba, time_start_fit, time_end_fit, predict_time, predict_1_time, predict_n_size = self.ray.get(finished)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/ray/_private/worker.py", line 2667, in get
values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/ray/_private/worker.py", line 864, in get_objects
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(PicklingError): ray::_ray_fit() (pid=13858, ip=10.233.115.226)
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/ensemble/fold_fitting_strategy.py", line 412, in _ray_fit
save_path = fold_model.save()
^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/abstract/abstract_model.py", line 1096, in save
save_pkl.save(path=file_path, object=self, verbose=verbose)
File "/opt/conda/lib/python3.11/site-packages/autogluon/common/savers/save_pkl.py", line 27, in save
save_with_fn(validated_path, object, pickle_fn, format=format, verbose=verbose, compression_fn=compression_fn, compression_fn_kwargs=compression_fn_kwargs)
File "/opt/conda/lib/python3.11/site-packages/autogluon/common/savers/save_pkl.py", line 47, in save_with_fn
pickle_fn(object, fout)
File "/opt/conda/lib/python3.11/site-packages/autogluon/common/savers/save_pkl.py", line 25, in pickle_fn
return pickle.dump(o, buffer, protocol=4)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_pickle.PicklingError: Can't pickle <function metric_1 at 0x7fd2d6747ce0>: attribute lookup metric_1 on __main__ failed
Fitting model: LightGBM_BAG_L1 ... Training model for up to 2377.7s of the 3576.54s of remaining time.
Fitting 8 child models (S1F1 - S1F8) | Fitting with ParallelLocalFoldFittingStrategy (8 workers, per: cpus=2, gpus=0, memory=0.43%)
2024-08-07 08:39:39,345 ERROR worker.py:406 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): The worker died unexpectedly while executing this task. Check python-core-worker-*.log files for more information.
2024-08-07 08:39:39,356 ERROR worker.py:406 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): The worker died unexpectedly while executing this task. Check python-core-worker-*.log files for more information.
2024-08-07 08:39:39,358 ERROR worker.py:406 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): The worker died unexpectedly while executing this task. Check python-core-worker-*.log files for more information.
2024-08-07 08:39:39,358 ERROR worker.py:406 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): The worker died unexpectedly while executing this task. Check python-core-worker-*.log files for more information.
2024-08-07 08:39:39,359 ERROR worker.py:406 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): The worker died unexpectedly while executing this task. Check python-core-worker-*.log files for more information.
2024-08-07 08:39:39,359 ERROR worker.py:406 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): The worker died unexpectedly while executing this task. Check python-core-worker-*.log files for more information.
2024-08-07 08:39:39,360 ERROR worker.py:406 -- Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): The worker died unexpectedly while executing this task. Check python-core-worker-*.log files for more information.
Warning: Exception caused LightGBM_BAG_L1 to fail during training... Skipping this model.
ray::_ray_fit() (pid=14813, ip=10.233.115.226)
File "/opt/conda/lib/python3.11/site-packages/autogluon/core/models/ensemble/fold_fitting_strategy.py", line 412, in _ray_f
```
I want to ask what the code should look like in order for some normal hyperparameter tuning to take place, taking a maximum of a few days. The data is small, about 500 features and 24,000 rows, so I don't understand why this piece of code has been running for five days and hasn't finished yet.
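A likely workaround sketch for the pickling failure: the traceback shows the custom metric function (`metric_1`) lives in `__main__`, which the ray worker processes cannot re-import by name when saving fold models. Defining the scorer in a separate importable module usually avoids this; the module name and the exact `make_scorer` arguments below are assumptions, not tested against this setup:
```python
# my_metrics.py -- module-level definitions are picklable by qualified name,
# so ray fold workers can resolve them when saving fold models.
from sklearn.metrics import mean_squared_error
from autogluon.core.metrics import make_scorer

def metric_1(y_true, y_pred):
    return mean_squared_error(y_true, y_pred)

ag_mean_squared_error_custom_scorer = make_scorer(
    name="mean_squared_error_custom",
    score_func=metric_1,
    optimum=0,
    greater_is_better=False,
)
```
The training script would then do `from my_metrics import ag_mean_squared_error_custom_scorer` instead of defining the function inline in the notebook.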
[log_autogluon.txt](https://github.com/user-attachments/files/16521006/log_autogluon.txt)
**Screenshots / Logs**
```python
# Replace this code with the output of the following:
from autogluon.core.utils import show_versions
show_versions()
```
INSTALLED VERSIONS
------------------
date : 2024-08-07
time : 09:03:48.396743
python : 3.11.6.final.0
OS : Linux
OS-release : 5.15.0-79-generic
Version : #86-Ubuntu SMP Mon Jul 10 16:07:21 UTC 2023
machine : x86_64
processor : x86_64
num_cores : 16
cpu_ram_mb : 127944.47265625
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 581903
accelerate : 0.21.0
autogluon : 1.1.1
autogluon.common : 1.1.1
autogluon.core : 1.1.1
autogluon.features : 1.1.1
autogluon.multimodal : 1.1.1
autogluon.tabular : 1.1.1
autogluon.timeseries : 1.1.1
boto3 : 1.34.154
catboost : 1.2.5
defusedxml : 0.7.1
evaluate : 0.4.2
fastai : 2.7.16
gluonts : 0.15.1
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.2
joblib : 1.3.2
jsonschema : 4.19.1
lightgbm : 4.3.0
lightning : 2.3.3
matplotlib : 3.8.0
mlforecast : 0.10.0
networkx : 3.1
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.24.4
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
optimum : 1.18.1
optimum-intel : None
orjson : 3.10.6
pandas : 2.1.1
pdf2image : 1.17.0
Pillow : 10.0.1
psutil : 5.9.5
pytesseract : 0.3.10
pytorch-lightning : 2.3.3
pytorch-metric-learning: 2.3.0
ray : 2.10.0
requests : 2.32.3
scikit-image : 0.20.0
scikit-learn : 1.3.1
scikit-learn-intelex : None
scipy : 1.11.3
seqeval : 1.2.2
setuptools : 68.2.2
skl2onnx : None
statsforecast : 1.4.0
tabpfn : None
tensorboard : 2.17.0
text-unidecode : 1.3
timm : 0.9.16
torch : 2.3.1
torchmetrics : 1.2.1
torchvision : 0.18.1
tqdm : 4.66.5
transformers : 4.39.3
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
| open | 2024-08-07T09:04:31Z | 2024-08-12T23:29:39Z | https://github.com/autogluon/autogluon/issues/4370 | [
"module: tabular",
"bug: unconfirmed",
"Needs Triage"
] | lukaspistelak | 2 |
seleniumbase/SeleniumBase | pytest | 3,521 | gui_press_keys is not writing At sign "@" | Hi all,
I'm trying to write an email address with sb.cdp.gui_press_keys (in order to write slowly and avoid detection), but
it is not writing the at sign `@`. So instead of writing `myemail@email.com` it writes `myemailemail.com`.
As an alternative I was trying to send "Alt + 64", but I don't know how to write that, since `sb.cdp.gui_press_keys("myemail" + "Alt +64" + "email.com")` doesn't work.
What's going on and how do I fix it? Thanks in advance.
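A hedged workaround sketch: CDP Mode also exposes element-level typing via `press_keys(selector, text)`, which types character-by-character into the page instead of going through OS-level keyboard emulation (the layer that drops `@` on some keyboard layouts). The URL and CSS selector below are placeholder assumptions:
```python
from seleniumbase import SB

with SB(uc=True, test=True, locale_code="en") as sb:
    sb.activate_cdp_mode("https://example.com/login")  # hypothetical URL
    # Types slowly into the field, but inside the page, so "@" is preserved.
    sb.cdp.press_keys('input[type="email"]', "myemail@email.com")
```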
```
with SB(uc=True, test=True, locale_code="en") as sb:
sb.cdp.gui_press_keys("myemail@email.com")
``` | closed | 2025-02-14T07:34:23Z | 2025-02-14T09:22:52Z | https://github.com/seleniumbase/SeleniumBase/issues/3521 | [
"external",
"workaround exists",
"can't reproduce",
"UC Mode / CDP Mode"
] | RasecMalkic | 1 |
jofpin/trape | flask | 109 | Cannot see any of the credentials or cookies | I have successfully installed and configured this tool using a Kali VM; it works sometimes.
But I cannot see any user credentials or cookies from the target's tabs or apps. Am I missing something, or do I have to run some other payload in order to do that?
All the Amazon, Google, FB, TW and similar sessions seem off, even if they are open, in that browser and also in the app.
Can someone tell me what to do ? | closed | 2018-11-30T14:32:55Z | 2018-12-06T14:20:55Z | https://github.com/jofpin/trape/issues/109 | [] | Sara123984 | 3 |
jmcnamara/XlsxWriter | pandas | 676 | Chart x_offset positioning not working when chart is put in a hidden column | I've found a difference in how XlsxWriter behaves when inserting charts into hidden columns. In XlsxWriter version 1.1.5, the chart x_offsets are honoured, but starting in version 1.1.6 they aren't and the charts are drawn on top of each other if the column the charts is being inserted into is hidden.
I am using Python 2.7.11
Here is some code that demonstrates the problem:
```python
import xlsxwriter
workbook = xlsxwriter.Workbook('chart.xlsx')
worksheet = workbook.add_worksheet()
# Write some data to add to plot on the chart.
data = [
[1, 2, 3, 4, 5],
[2, 4, 6, 8, 10],
[3, 6, 9, 12, 15],
]
worksheet.write_column('A1', data[0])
worksheet.write_column('B1', data[1])
worksheet.write_column('C1', data[2])
# Hide column A
worksheet.set_column(0, 0, 10, options={'hidden': True})
# Add a chart
chart = workbook.add_chart({'type': 'bar'})
chart.add_series({'values': '=Sheet1!$B$1:$B$5',
'categories': '=Sheet1!$A$1:$A$5',
'data_labels': {'value': True},
'gap': 10})
chart.set_title({'name': 'Chart 1'})
chart.set_size({'width': 300})
# Insert the chart into the worksheet.
worksheet.insert_chart('A7', chart, {'x_offset': 20, 'y_offset': 5})
chart = workbook.add_chart({'type': 'bar'})
chart.add_series({'values': '=Sheet1!$C$1:$C$5',
'categories': '=Sheet1!$A$1:$A$5',
'data_labels': {'value': True},
'gap': 10})
chart.set_title({'name': 'Chart 2'})
chart.set_size({'width': 300})
worksheet.insert_chart('A7', chart, {'x_offset': 400, 'y_offset': 5})
workbook.close()
```
Attached are the output files generated with XlsxWriter 1.1.5 and 1.1.6. The behaviour of 1.2.6 is the same as 1.1.6.
[chart-1.1.5.xlsx](https://github.com/jmcnamara/XlsxWriter/files/3879635/chart-1.1.5.xlsx)
[chart-1.1.6.xlsx](https://github.com/jmcnamara/XlsxWriter/files/3879637/chart-1.1.6.xlsx)
| closed | 2019-11-22T13:04:14Z | 2020-01-20T14:07:15Z | https://github.com/jmcnamara/XlsxWriter/issues/676 | [
"bug",
"ready to close"
] | mrenters | 4 |
huggingface/diffusers | deep-learning | 10,374 | Is there any plan to support TeaCache for training-free acceleration? | TeaCache is a training-free inference acceleration method for visual generation. TeaCache currently supports HunyuanVideo, CogVideoX, Open-Sora, Open-Sora-Plan and Latte. TeaCache can speed up HunyuanVideo 2x without much visual quality degradation. For example, inference for a 720p, 129-frame video takes around 50 minutes on a single A800 GPU, while TeaCache can speed this up to about 23 minutes. Thanks for your efforts!
https://github.com/LiewFeng/TeaCache.
| open | 2024-12-25T05:00:23Z | 2025-01-27T01:28:53Z | https://github.com/huggingface/diffusers/issues/10374 | [
"wip"
] | LiewFeng | 4 |
voila-dashboards/voila | jupyter | 746 | Enhancing error message when rendering with voila | When rendering a notebook with voila if an error exists in one of the cells an error appears such as:
`There was an error when executing cell [17]. Please run Voilà with --debug to see the error message.`
Taking into account that, once the corresponding extension is installed in JupyterLab, Voilà is often (most of the time) not run from the terminal but from a) the button in JupyterLab, or b) any other way of rendering from a URL, I would suggest enhancing that message for those who do not often use the terminal:
```
There was an error when executing cell [17]. Please run Voilà in the terminal with --debug to see the error message.
voila app_name.ipynb --debug
``` | open | 2020-10-23T15:16:19Z | 2020-10-23T15:16:19Z | https://github.com/voila-dashboards/voila/issues/746 | [] | joseberlines | 0 |
graphdeco-inria/gaussian-splatting | computer-vision | 296 | Skip bundle adjustment | Is it possible to skip bundle adjustment? It's by far the longest step in the COLMAP pipeline because the official implementation of COLMAP does not use the GPU very much for it. See https://github.com/colmap/colmap/issues/1530 | closed | 2023-10-09T19:32:09Z | 2023-10-10T19:31:09Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/296 | [] | bmikaili | 3 |
proplot-dev/proplot | matplotlib | 401 | Have you put the project on hold? | Hi, currently I can use proplot with matplotlib==3.4.3 and numpy==1.21.0.
But matplotlib and numpy have since been updated a lot. For example, if I keep matplotlib at 3.4.3, I will encounter the "numpy has no attribute int" problem when I use a numpy version greater than 1.21.0.
I think proplot is an awesome package, and it helped me to publish two papers.
Will you continue to update it?
| closed | 2022-12-20T16:33:52Z | 2023-03-29T09:12:51Z | https://github.com/proplot-dev/proplot/issues/401 | [
"dependencies"
] | Mickychen00 | 5 |
electricitymaps/electricitymaps-contrib | data-visualization | 7,593 | Electric net exchange chart is only presented for 24h and 72h - not for 30d, 12mo, all | ## Bug description
Data (`totalExport` and `totalImport`) is available for an electric net exchange chart with 30d+ views, but it's only presented for the 24h and 72h views.
There is no data about `totalCo2Export` or `totalCo2Import`, so this issue is down-scoped to the electric net exchange chart, to avoid addressing separate underlying problems. I've opened #7596 to address that.
<details><summary>Click to expand electric net exchange chart screenshots</summary>
72h view | all view (with bug fixed)
-|-
|
</details>
## Analysis
The reason for 30d+ views not presenting the net exchange chart stems from a detail in a low-level helper function called `getNetExchange`. Specifically, it's the conditional `Object.keys(zoneData.exchange).length === 0` from this code block:
https://github.com/electricitymaps/electricitymaps-contrib/blob/c9a3a8f6cd4824b80d80ba4c513f61a3ba7f9f4b/web/src/utils/helpers.ts#L190-L201
Removing it resolves the bug.
## Why it's a bug
I consider it a bug currently because...
1. The function's name, `getNetExchange`, is a low-level helper function for getting a net exchange value from `totalImport` and `totalExport` (and CO2 equivalent values). Thus, it is surprising that it requires data about individual exchanges not used in its calculation.
2. The function `getNetExchange` is only used by the net exchange chart UI, so changing it affects nothing else. The only outcome of the if statement is that the 30d+ views won't present the electric net exchange chart.
<details><summary>Click to expand a screenshot from a repo search for the function name</summary>

</details>
| closed | 2024-12-20T14:59:39Z | 2024-12-23T14:15:28Z | https://github.com/electricitymaps/electricitymaps-contrib/issues/7593 | [] | consideRatio | 0 |
flairNLP/flair | pytorch | 3,167 | [Bug]: Training a Model Results in an OSError Related to Model Loading | ### Describe the bug
When I am creating a few shot learning model by finetuning tars-base, the model crashes after training without saving to my local drive like it's supposed to.
### To Reproduce
```python
# 1. what label do you want to predict?
label_type = 'label'
# 2. make a label dictionary
label_dict = corpus.make_label_dictionary(label_type=label_type)
# 3. start from our existing TARS base model for English
tars = TARSClassifier.load("tars-base")
# 4. switch to a new task (TARS can do multiple tasks so you must define one)
tars.add_and_switch_to_new_task(task_name="classification",
label_dictionary=label_dict,
label_type=label_type,
)
# 5. initialize the text classifier trainer
trainer = ModelTrainer(tars, corpus)
# 6. start the training
trainer.train(base_path='../example_data/models/few_shot_model_flair', # path to store the model artifacts
learning_rate=0.02, # use very small learning rate
mini_batch_size=1,
max_epochs=20, # terminate after 20 epochs
patience=1
)
```
### Expected behaivor
I would expect the model to save to the folder.
### Logs and Stack traces
```stacktrace
HTTPError Traceback (most recent call last)
File ~/Documents/env/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:213, in hf_raise_for_status(response, endpoint_name)
212 try:
--> 213 response.raise_for_status()
214 except HTTPError as e:
File ~/Documents/env/lib/python3.9/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/tokenizer_config.json
The above exception was the direct cause of the following exception:
RepositoryNotFoundError Traceback (most recent call last)
File ~/Documents/env/lib/python3.9/site-packages/transformers/utils/hub.py:409, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
407 try:
408 # Load from URL or cache if already cached
--> 409 resolved_file = hf_hub_download(
410 path_or_repo_id,
411 filename,
412 subfolder=None if len(subfolder) == 0 else subfolder,
413 revision=revision,
414 cache_dir=cache_dir,
...
434 f"'https://huggingface.co/{path_or_repo_id}' for available revisions."
435 )
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
### Screenshots
_No response_
### Additional Context
The training completed all epochs before crashing.
The code I used was from your tutorial page. It has worked in the past.
### Environment
#### Versions:
##### Flair
0.12.1
##### Pytorch
1.13.1
##### Transformers
4.25.1
#### GPU
False | closed | 2023-03-28T09:34:18Z | 2023-04-21T15:41:56Z | https://github.com/flairNLP/flair/issues/3167 | [
"bug"
] | ynusinovich | 7 |
fastapi-users/fastapi-users | asyncio | 722 | Add @stephane as contributor | See https://github.com/fastapi-users/fastapi-users-db-sqlalchemy/pull/3 | closed | 2021-09-12T09:59:33Z | 2021-09-12T10:01:33Z | https://github.com/fastapi-users/fastapi-users/issues/722 | [] | frankie567 | 2 |
tensorflow/tensor2tensor | deep-learning | 1,923 | Potential bug in timing embedding | Hi,
There might be a small bug here:
https://github.com/tensorflow/tensor2tensor/blob/ef1fccebe8d2c0cf482f41f9d940e2938c816c78/tensor2tensor/layers/common_attention.py#L445-L449
I think in the last line the `exp` should be divided by `min_timescale` rather than multiplied, since it's inverse timescales. Usually `min_timescale` is 1 so it doesn't matter. But e.g. if you fix `max_timescale` and change `min_timescale`, the resulting inverse timescale corresponding to `max_timescale` changes.
A simpler implementation could be roughly something like this:
```
inv_timescales = exp(-linspace(log(min_timescale), log(max_timescale), num_timescales))
```
and from this one you can derive the current implementation, except with division instead of multiplication. It can be even simpler with logspace but tf seems to have this function only as experimental.
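A small NumPy sanity check of the claim (it mirrors only the quoted lines, not the full t2t function):
```python
import numpy as np

def inv_timescales_current(min_ts, max_ts, n):
    # Mirrors the quoted code: multiply by min_timescale.
    inc = np.log(max_ts / min_ts) / max(n - 1, 1)
    return min_ts * np.exp(np.arange(n) * -inc)

def inv_timescales_fixed(min_ts, max_ts, n):
    # Proposed fix: divide by min_timescale instead.
    inc = np.log(max_ts / min_ts) / max(n - 1, 1)
    return np.exp(np.arange(n) * -inc) / min_ts

def inv_timescales_linspace(min_ts, max_ts, n):
    # The simpler formulation suggested above.
    return np.exp(-np.linspace(np.log(min_ts), np.log(max_ts), n))

print(inv_timescales_current(0.5, 1e4, 4))   # starts at 0.5, not 1/0.5
print(inv_timescales_fixed(0.5, 1e4, 4))     # starts at 2.0 == 1/min_ts
print(inv_timescales_linspace(0.5, 1e4, 4))  # matches the fixed version
```
With `min_timescale=1` all three agree, which is why the difference usually goes unnoticed.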
Let me know if this makes sense.
Thanks a lot!
| open | 2023-01-19T09:44:16Z | 2023-01-19T09:44:16Z | https://github.com/tensorflow/tensor2tensor/issues/1923 | [] | addtt | 0 |
graphdeco-inria/gaussian-splatting | computer-vision | 542 | Render without background | I am trying to figure out how to return an RGBA image without combining with the solid background colour inside the rasterizer.
I've adjusted the out_color variable to have an extra channel, and in the forward pass (forward.cu) it is simple enough to make the change:
```c++
if (inside)
{
final_T[pix_id] = T;
n_contrib[pix_id] = last_contributor;
for (int ch = 0; ch < CHANNELS; ch++)
out_color[ch * H * W + pix_id] = C[ch]; // + T * bg_color[ch];
out_color[CHANNELS * H * W + pix_id] = 1-T;
}
```
But I am unsure about how to make the change in the backward pass. Any ideas how this might be implemented?
| open | 2023-12-11T16:49:38Z | 2024-12-14T12:53:07Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/542 | [] | LewisBridgeman | 11 |
kizniche/Mycodo | automation | 484 | PID Max on time and min off time not working | ## Mycodo Issue Report:
- Specific Mycodo Version: 6.1.1
#### Problem Description
PID max on time and min off time no longer work. The compressor stays on regardless.
New install (because of sensors not reading right after upgrade). GPIO17 wired to SSR that controls compressor and fan. DS18B20 sensor. Created new PID with default values. Kp gain of 1 didn't work well, but 5 or higher started to work. Unfortunately I don't want to have the compressor on for more than 10 minutes or restart unless 2 minutes has elapsed.
### Errors
No errors, but graphs show that the compressor runs all the time.
| closed | 2018-05-22T01:43:46Z | 2018-06-18T22:47:54Z | https://github.com/kizniche/Mycodo/issues/484 | [] | frodus17 | 13 |
ultralytics/ultralytics | deep-learning | 19,740 | Help Needed: Step-by-Step Implementation of ECA in YOLOv11 | I am working on modifying YOLOv11 by integrating the Efficient Channel Attention (ECA) module. My goal is to enhance feature representation and detection accuracy by adding ECA in the backbone network of yolov11. I need guidance on implementing this step-by-step from the beginning. | open | 2025-03-17T07:07:31Z | 2025-03-17T23:46:44Z | https://github.com/ultralytics/ultralytics/issues/19740 | [
"enhancement",
"question",
"detect"
] | marwa290 | 2 |
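For the Ultralytics ECA request above, here is a minimal PyTorch sketch of the ECA block itself. Wiring it into YOLOv11 (registering the module in the Ultralytics code base and referencing it in the model YAML) is a separate framework-specific step, and the kernel-size default below comes from the ECA-Net paper, not from anything YOLO-specific:
```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: channel attention via a single 1D conv
    over the pooled channel descriptor, with no dimensionality reduction."""

    def __init__(self, k_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=(k_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> pooled channel descriptor (B, C, 1, 1)
        y = self.avg_pool(x)
        # Treat channels as a 1D sequence: (B, C, 1, 1) -> (B, 1, C)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))
        # Back to (B, C, 1, 1) gating weights in [0, 1]
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y.expand_as(x)

# Quick shape check
out = ECA()(torch.randn(2, 64, 32, 32))
assert out.shape == (2, 64, 32, 32)
```
ECA's design point is replacing SE-style channel MLPs with one tiny 1D convolution, so it adds attention with almost no extra parameters.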
CorentinJ/Real-Time-Voice-Cloning | python | 766 | Training on custom data | I rewrote the code and trained on my custom data, but the audio generated by the model is the same for any input audio sample.
I think I made a mistake in my custom dataset.
My custom dataset is merged together without speaker identities; does that make the model fail to generate new audio?
Should I split the data into one folder per speaker for the encoder, synthesizer and vocoder?
Sorry for the questions; I'm a newbie in this field. | closed | 2021-06-04T01:31:43Z | 2021-06-06T15:50:56Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/766 | [] | tranmanhdat | 2 |
litestar-org/litestar | api | 3,679 | Bug: Using DTOData with the codegen backend decodes data incorrectly | ### Description
I had some code working before upgrading litestar. The newer versions, where the codegen backend is enabled by default, broke some tests. Here I'm copying the code from the docs and making small adjustments to match my case; I'm assuming it will behave the same, though I didn't try it yet.
```python
from __future__ import annotations
from dataclasses import dataclass, field
from uuid import UUID, uuid4
from litestar import Litestar, post
from litestar.dto import DataclassDTO, DTOConfig
@dataclass
class User:
name: str
email: str
age: int
nested_1: NestedUserAttr
id: UUID = field(default_factory=uuid4)  # defaulted field must come last
@dataclass
class NestedUserAttr:
key: int
nested_2: AnotherNestedUserAttr
@dataclass
class AnotherNestedUserAttr:
a_nested_key: int | None = None
class UserWriteDTO(DataclassDTO[User]):
"""Don't allow client to set the id."""
config = DTOConfig(exclude={"id"}, max_nested_depth=4)
@post("/users", dto=UserWriteDTO, return_dto=None, sync_to_thread=False)
def create_user(data: DTOData[User]) -> User:
"""Create an user."""
d = data.create_instance()
return d
app = Litestar(route_handlers=[create_user])
```
Here is the issue: sending properly JSON-encoded data to the endpoint results in `a_nested_key = None`. I'm not yet sure whether it is because of a) the default value or b) the depth of nesting.
The behaviour isn't the same with `experimental_codegen_backend=False`.
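Until the root cause is fixed, a hedged interim workaround is to opt this DTO out of the codegen backend, using the flag already mentioned in this report (flag name taken from the report above, not independently verified):
```python
class UserWriteDTO(DataclassDTO[User]):
    """Don't allow client to set the id."""

    # Opt out of the experimental codegen backend for this DTO only.
    config = DTOConfig(
        exclude={"id"},
        max_nested_depth=4,
        experimental_codegen_backend=False,
    )
```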
### URL to code causing the issue
_No response_
### MCVE

_No response_

### Steps to reproduce

_No response_

### Screenshots

_No response_
### Logs
_No response_
### Litestar Version
main branch
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-08-20T20:28:54Z | 2025-03-20T15:54:52Z | https://github.com/litestar-org/litestar/issues/3679 | [
"Bug :bug:"
] | abdulhaq-e | 3 |
huggingface/transformers | pytorch | 36,222 | Tensor Parallel performance is worse than eager mode. | ### System Info
```
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.48.3
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: 5,6
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.6.0a0+ecf3bae40a.nv25.01 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100 80GB PCIe
```
docker image: `nvcr.io/nvidia/pytorch:25.01-py3`
Hardware: Nvidia A100
### Who can help?
@SunMarc @ArthurZucker @kwen2501
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
CMD: `CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc-per-node 4 run_tp_hf.py`
```python
import os
import torch
import time
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "meta-llama/Llama-3.1-8B-Instruct"
# Initialize distributed, only TP model needed.
rank = int(os.environ["RANK"])
device = torch.device(f"cuda:{rank}")
print(rank)
print(device)
torch.distributed.init_process_group("nccl", device_id=device)
# Retrieve tensor parallel model
model = AutoModelForCausalLM.from_pretrained(
model_id,
tp_plan="auto",
# device_map="cuda:0",
torch_dtype=torch.float16
)
print(model.dtype)
# Prepare input tokens
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Can I help" * 200
inputs = tokenizer(prompt, return_tensors="pt", max_length=512).input_ids.to(model.device)
print(f"inpu shape is {inputs.shape}")
model = torch.compile(model)
# warm-up
for i in range(100):
outputs = model(inputs)
torch.cuda.synchronize(device)
# Distributed run
for i in range(50):
start = time.time()
torch.cuda.synchronize(device)
outputs = model(inputs)
torch.cuda.synchronize(device)
end = time.time()
print(f"time cost {(end-start)*1000} ms")
```
### Expected behavior
Latency Performance (ms):
tp_size is world_size
```
| tp_size | latency | memory per device |
| 1 | 47 ms | 21.5 G |
| 2 | 49 ms | 27 G |
| 4 | 45 ms | 27 G |
```
The speed-up is not as expected, given what the [doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_multi.md) claims.
Related PR: [34184](https://github.com/huggingface/transformers/pull/34184) | closed | 2025-02-17T04:49:44Z | 2025-02-25T04:59:52Z | https://github.com/huggingface/transformers/issues/36222 | [
"bug"
] | jiqing-feng | 12 |
davidsandberg/facenet | tensorflow | 297 | Run Facenet with WEBCAM | Hi, I want to run Facenet using a rudimentary webcam, or the camera that my MacBook Pro has.
Can I use this library with a webcam to recognize a face, the name of the person, and emotions?
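In principle yes for face recognition (facenet itself provides face embeddings; emotion recognition is out of its scope). A minimal OpenCV capture-loop sketch follows; the detection/embedding calls are placeholders, since the exact facenet API depends on how the model is loaded:
```python
import cv2

cap = cv2.VideoCapture(0)  # 0 = default camera (e.g. the built-in MacBook cam)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Placeholder: detect faces (e.g. with MTCNN) and run the facenet
    # embedding on each crop here, then compare against known embeddings.
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```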
I appreciate your help | closed | 2017-05-28T08:37:00Z | 2019-05-11T07:42:04Z | https://github.com/davidsandberg/facenet/issues/297 | [] | eacosta1976 | 6 |
dpgaspar/Flask-AppBuilder | flask | 1,870 | Feature/Question - Non-unique email | Currently the email field on the user model is set to be unique:
https://github.com/dpgaspar/Flask-AppBuilder/blob/e0e94acbfcea23866560454ce12fe7204472496d/flask_appbuilder/security/sqla/models.py#L102
This poses an issue for us as we are developing for a multi-tenant environment where one user can have multiple accounts with the same email split over different tenants.
I am aware it is possible to set your own user model; however, this new model needs to inherit from the initial user model for the database relationships to hold true. This makes it tricky to override existing fields.
Is there perhaps a way to edit this unique constraint?
| open | 2022-06-29T11:55:09Z | 2022-06-29T11:55:09Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1870 | [] | sholtoarmstrong-iot | 0 |
Kitware/trame | data-visualization | 7 | How do I...? | - [ ] Observe the shared state while debugging?
I could log the value of a particular key, or I could use the browser network tab to see the diffs, but is there a way to debug Trame apps that lends itself to the "Shared state" mindset?
One idea would be to have a "debug" flag in the "start()" function. If true, this flag would pretty-print the state whenever changes settle. This lets me not worry about the differences between changes from the server or from the client. Perhaps an optional argument would also be a list of keys to not print, since their output would be very large. I think a deny list would be better than an allow list here because it would change less as I debug a growing app.
"documentation"
] | DrewLazzeriKitware | 0 |
vanna-ai/vanna | data-visualization | 636 | flask app bug for rewriting | **Describe the bug**
When using the Flask app, the rewrite function is not called when asking a question for the first time; it is only called from the second time onward.
**environment**
- OS: Centos 7.9
- Python: 3.9
- Vanna: 0.7.1
| closed | 2024-09-13T03:53:05Z | 2024-09-18T05:04:27Z | https://github.com/vanna-ai/vanna/issues/636 | [
"bug"
] | ben-8878 | 0 |
microsoft/hummingbird | scikit-learn | 496 | onnx->torch bug when no input passed | When we convert from onnx to torch, the converter calculates the expected number of inputs wrong.
Ex:
```
convert(onnx_model, 'torch')
```
We need to force the user to pass some test input along with the conversion to prevent this issue.
Ex:
```
convert(onnx_model, 'torch', data)
```
should work.
For now we can add an `assert` with a helpful error message | closed | 2021-04-19T22:01:06Z | 2021-05-19T02:30:22Z | https://github.com/microsoft/hummingbird/issues/496 | [] | ksaur | 2 |
gee-community/geemap | jupyter | 2,002 | Unable to Render geemap Maps Correctly in Google Colab | Hello,
I encountered an issue while using Google Colab to visualize data from Google Earth Engine (GEE) with geemap. Despite no errors in the code and ensuring all libraries are up to date, the map does not render and only displays a blank output.

Environment Information:
- Google Colab
- Python version: 3.7
- geemap version: latest
Browser: Latest version of Chrome
Please help diagnose this issue to determine whether it is a potential bug in geemap or if I might have missed some configuration. Thank you! | closed | 2024-05-01T02:32:45Z | 2024-05-02T08:49:20Z | https://github.com/gee-community/geemap/issues/2002 | [
"bug",
"duplicate"
] | CristinaMarsh | 2 |
rio-labs/rio | data-visualization | 59 | Allow creating `rio.Text` without passing a TextStyle | Styling `rio.Text` always requires instantiating a separate `rio.TextStyle`. This gets annoying real fast. Add an overload so that one can either pass a `TextStyle` or pass the styling values directly.
While we're at it, consider improving text styles in general:
- Add shortcuts for `bold` & `italic`
- Allow specifying colors as strings, e.g. `danger` in the `TextStyle` class | closed | 2024-06-10T18:32:31Z | 2025-02-22T20:53:06Z | https://github.com/rio-labs/rio/issues/59 | [
"enhancement"
] | mad-moo | 1 |
d2l-ai/d2l-en | pytorch | 1,764 | Potentially confusing statement in "4.1.1.2. Incorporating Hidden Layers" | > We can overcome these limitations of linear models and handle a more general class of functions by incorporating one or more hidden layers.
Does this statement assume non-linear activations in the hidden layer(s) introduced in 4.1.1.3? If it does not, could the authors please explain why and how adding more layers adds expressive power? And if non-linear activations are assumed, could you please make it explicit? | closed | 2021-05-26T07:39:45Z | 2021-06-07T20:32:05Z | https://github.com/d2l-ai/d2l-en/issues/1764 | [] | adyomin | 2 |
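For the d2l question above, a small NumPy check of the underlying point: without a nonlinearity, two stacked affine layers collapse exactly into one, so hidden layers alone add no expressive power:
```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # hidden layer
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)  # output layer
x = rng.normal(size=3)

two_layers = W2 @ (W1 @ x + b1) + b2
collapsed = (W2 @ W1) @ x + (W2 @ b1 + b2)  # one equivalent affine layer
assert np.allclose(two_layers, collapsed)
```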
3b1b/manim | python | 1,915 | closed | closed | closed | 2022-11-23T09:12:20Z | 2022-11-23T14:34:02Z | https://github.com/3b1b/manim/issues/1915 | [] | barakasamsara | 0 |
apify/crawlee-python | web-scraping | 97 | BasicCrawler statistics | - Statistics shall be collected during the crawler run
- `BasicCrawler.run` should return a (non-empty) statistics object
- statistics should be logged periodically | closed | 2024-04-09T10:48:08Z | 2024-05-21T10:40:06Z | https://github.com/apify/crawlee-python/issues/97 | [
"t-tooling"
] | janbuchar | 0 |
statsmodels/statsmodels | data-science | 9,215 | BUG/DOC: unavailable datasets for docs notebooks |
anova salary.table
https://github.com/statsmodels/statsmodels/issues/9209#issuecomment-2062175160
but
https://github.com/statsmodels/statsmodels/actions/runs/8698891475/job/23856579300?pr=9210
shows many problems.
Some could also be temporary connection problems.
Maybe finally activate https://github.com/statsmodels/smdatasets instead of relying on many personal websites
| open | 2024-04-17T20:32:49Z | 2024-04-17T20:32:49Z | https://github.com/statsmodels/statsmodels/issues/9215 | [
"type-bug",
"comp-docs"
] | josef-pkt | 0 |
slackapi/bolt-python | fastapi | 537 | Operation Timeout error in actions even after using ack() | I have implemented the slash command and also actions for handling the interaction, but my app sometimes displays an "Operation Timed Out" error when triggering the actions. I have used the ack() call inside my handler, but I still sometimes get the timeout error. I am not sure if it is because of network latency; can you guide me on this?
```
@app.action("my_statements")
def handle_my_statements(ack):
ack()
.....
......
```
#### The `slack_bolt` version
slack-bolt==1.9.1
slack-sdk==3.11.1
| closed | 2021-12-07T11:17:16Z | 2022-01-31T00:06:59Z | https://github.com/slackapi/bolt-python/issues/537 | [
"question",
"need info",
"auto-triage-stale"
] | mohan-raheja | 9 |
sktime/sktime | data-science | 7,897 | [DOC] Document or Fix Local ReadTheDocs Build Process | #### Describe the issue linked to the documentation
The process for building documentation locally is unclear. @fkiraly mentioned that there used to be a local build process; however, whether it still works is unclear. I think it would be useful to have one, since ReadTheDocs builds sometimes fail due to timeouts.
Also, it would be good to be able to render individual docstrings locally.
#### Suggest a potential alternative/fix
- The local documentation build process could be clearly documented.
- If it’s broken, fix any issues preventing local builds.
| open | 2025-02-25T17:23:18Z | 2025-02-25T17:23:18Z | https://github.com/sktime/sktime/issues/7897 | [
"documentation"
] | Ankit-1204 | 0 |
xlwings/xlwings | automation | 1,884 | VS Code Error if open xw.Book() again | Hello Felix Sir,
I am trying to write Python code in VS Code where I take the data from file 1 and then open another file:
wb1 = xw.Book('file2'), and I am able to successfully paste the data into file 2,
and then I close it via
wb1.close()
Then, within the same Python script, I open file 3.
Let's say wb2 = wb.Book('file3.xlsb'); then my code breaks.
A few observations: if I trigger the same code in a Jupyter Notebook, it works smoothly.
And if I trigger the code in VS Code again, from the point where it breaks, then it works fine.
What am I missing? Why won't VS Code let me open another workbook via xlwings? | closed | 2022-03-30T06:00:54Z | 2022-05-21T17:27:46Z | https://github.com/xlwings/xlwings/issues/1884 | [] | ActuarySense | 1 |
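For the xlwings report above, a hedged restructuring sketch: managing one explicit Excel instance via `xw.App` (usable as a context manager in recent xlwings versions) is often more robust across editors than letting each `xw.Book(...)` call attach to whatever instance it finds; file names are placeholders:
```python
import xlwings as xw

with xw.App(visible=False) as app:       # one Excel process for everything
    wb1 = app.books.open("file2.xlsx")
    # ... paste the data from file 1 into wb1 here ...
    wb1.save()
    wb1.close()
    wb2 = app.books.open("file3.xlsb")
    # ... continue working with wb2 ...
    wb2.close()
```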
donnemartin/data-science-ipython-notebooks | numpy | 4 | Add notebook for Bokeh | "[Bokeh](http://bokeh.pydata.org/en/latest/) is a Python interactive visualization library that targets modern web browsers for presentation. Its goal is to provide elegant, concise construction of novel graphics in the style of D3.js, but also deliver this capability with high-performance interactivity over very large or streaming datasets. Bokeh can help anyone who would like to quickly and easily create interactive plots, dashboards, and data applications."
Bokeh seems like a good candidate for feeding data from Spark streaming and sharing results with stakeholders who don't use visualization tools like Tableau.
[Bokeh at Pycon](https://www.youtube.com/watch?v=O5OvOLK-xqQ)
| open | 2015-07-01T10:35:08Z | 2016-05-18T01:55:53Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/4 | [
"help wanted",
"customer-feedback-wanted",
"feature-request"
] | donnemartin | 1 |
developmentseed/lonboard | jupyter | 614 | Do not run checks on draft PRs | It would be nice if we could avoid running checks on draft PRs (like https://github.com/developmentseed/lonboard/pull/613) and trigger only when the PR is ready for review.
cc @kylebarron | open | 2024-08-27T12:21:57Z | 2024-09-27T19:57:02Z | https://github.com/developmentseed/lonboard/issues/614 | [] | vgeorge | 1 |
marimo-team/marimo | data-science | 4,093 | Opened a "shield" and got an error | ### Describe the bug
I'm not super clear on what a shield is, but I clicked the link on this page just to see what it is: https://docs.marimo.io/community/
Got this page:

### Environment
<details>
```
On Linux (Pop!OS) using Firefox.
```
</details>
### Code to reproduce
N/A | closed | 2025-03-13T20:12:32Z | 2025-03-13T20:36:28Z | https://github.com/marimo-team/marimo/issues/4093 | [
"bug"
] | axiomtutor | 3 |
JaidedAI/EasyOCR | pytorch | 444 | Export model to ONNX | Any plans to export models to ONNX? | closed | 2021-06-01T06:09:14Z | 2022-03-02T09:25:00Z | https://github.com/JaidedAI/EasyOCR/issues/444 | [] | luozhouyang | 3 |
liangliangyy/DjangoBlog | django | 184 | How do I insert images into an article? | ```python
body = models.TextField('正文')
```
Can the article body use UEditorField?
Also, how do I insert images? Can images only be external links? | closed | 2018-11-08T04:16:47Z | 2018-11-16T05:40:27Z | https://github.com/liangliangyy/DjangoBlog/issues/184 | [
"question"
] | onsunsl | 3 |
praw-dev/praw | api | 1,977 | 404 for submission.mod.undistinguish() | ### Describe the Bug
Just started getting a 404 error for my script using submission.mod.undistinguish(). I am running PRAW 7.7.1; it was working fine until I ran my script today.
### Desired Result
submission.mod.undistinguish() returns a successful result.
### Code to reproduce the bug
```Python
submission = r.submission(//submission id//)
try:
submission.mod.undistinguish()
except Exception as e:
print (e)
```
### The `Reddit()` initialization in my code example does not include the following parameters to prevent credential leakage:
`client_secret`, `password`, or `refresh_token`.
- [X] Yes
### Relevant Logs
```Shell
Error: received 404 HTTP response
```
### This code has previously worked as intended.
Yes
### Operating System/Environment
Raspberry Pi OS 11
### Python Version
3.11.1
### PRAW Version
7.7.1
### Prawcore Version
2.3.0
### Anything else?
_No response_ | closed | 2023-09-25T02:37:15Z | 2023-09-25T03:55:01Z | https://github.com/praw-dev/praw/issues/1977 | [] | martygriffin | 2 |
polakowo/vectorbt | data-visualization | 353 | AttributeError: module 'vectorbt.utils' has no attribute 'image' | When I executed the demo code, it raised:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-62-10a25213349e> in <module>
17 return img_np
18
---> 19 vbt.save_animation(
20 gif_fname,
21 ohlcv.index,
/Users/Shared/anaconda3/lib/python3.8/site-packages/vectorbt/utils/image_.py in save_animation(fname, index, plot_func, delta, step, fps, writer_kwargs, show_progress, tqdm_kwargs, to_image_kwargs, *args, **kwargs)
74 with imageio.get_writer(fname, fps=fps, **writer_kwargs) as writer:
75 for i in tqdm(range(0, len(index) - delta, step), disable=not show_progress, **tqdm_kwargs):
---> 76 fig = plot_func(index[i:i + delta], *args, **kwargs)
77 if isinstance(fig, (go.Figure, go.FigureWidget)):
78 fig = fig.to_image(format="png", **to_image_kwargs)
<ipython-input-62-10a25213349e> in plot_func(index)
13 histogram_np = imageio.imread(histogram.fig.to_image(format="png"))
14 heatmap_np = imageio.imread(heatmap.fig.to_image(format="png"))
---> 15 img_np = vbt.utils.image.vstack_image_arrays(
16 vbt.utils.image.vstack_image_arrays(ts_np, histogram_np), heatmap_np)
17 return img_np
AttributeError: module 'vectorbt.utils' has no attribute 'image'
```
Here is the code:
```
gif_date_delta = 365
gif_step = 4
gif_fps = 5
gif_fname = 'dmac_heatmap.gif'
histogram.fig.update_xaxes(range=[-1, 5])
def plot_func(index):
# Update figures
update_figs(index[0], index[-1])
# Convert them to png and then to numpy arrays
ts_np = imageio.imread(ts_fig.to_image(format="png"))
histogram_np = imageio.imread(histogram.fig.to_image(format="png"))
heatmap_np = imageio.imread(heatmap.fig.to_image(format="png"))
img_np = vbt.utils.image.vstack_image_arrays(
vbt.utils.image.vstack_image_arrays(ts_np, histogram_np), heatmap_np)
return img_np
vbt.save_animation(
gif_fname,
ohlcv.index,
plot_func,
delta=gif_date_delta,
step=gif_step,
fps=gif_fps
)
```
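A hedged note on the error itself: the traceback above resolves `save_animation` from `vectorbt/utils/image_.py`, so the image helpers appear to live in the underscored module rather than `vectorbt.utils.image`. A one-line change along these lines may fix it (inferred from the traceback path, not from vectorbt docs):
```python
# Import the stacking helper from the underscored module seen in the traceback.
from vectorbt.utils.image_ import vstack_image_arrays

img_np = vstack_image_arrays(vstack_image_arrays(ts_np, histogram_np), heatmap_np)
```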
| closed | 2022-01-23T14:45:57Z | 2024-03-16T09:40:21Z | https://github.com/polakowo/vectorbt/issues/353 | [] | mikolaje | 1 |
mwaskom/seaborn | data-visualization | 3,804 | PendingDeprecationWarning: vert: bool will be deprecated in a future version with box plot | Use of `boxplot` is producing the following warning when combined with `matplotlib==3.10.0`
```
PendingDeprecationWarning: vert: bool will be deprecated in a future version. Use orientation: {'vertical', 'horizontal'} instead.
``` | closed | 2024-12-17T17:12:19Z | 2025-01-26T15:17:22Z | https://github.com/mwaskom/seaborn/issues/3804 | [] | bpkroth | 1 |
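For the seaborn warning above, a minimal self-contained repro sketch (assumed environment: seaborn with matplotlib 3.10.0 installed):
```python
import warnings
import seaborn as sns

warnings.simplefilter("always", PendingDeprecationWarning)
# Any boxplot call goes through matplotlib's bxp machinery, which emits the
# "vert: bool will be deprecated" warning on matplotlib 3.10.
sns.boxplot(x=[1, 2, 2, 3, 3, 3, 4, 5])
```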
qwj/python-proxy | asyncio | 73 | tunneling to local port | Hi devs!
I've tried to build the following scheme, but it seems hard to do with the provided features, as my goal is not a "proxy" but actually a reverse tunnel. Does any pproxy scheme do the needed reverse tunnel?
Thx!

| closed | 2020-03-22T18:50:47Z | 2020-04-01T10:24:18Z | https://github.com/qwj/python-proxy/issues/73 | [] | Geks0n34 | 1 |
hankcs/HanLP | nlp | 1,802 | Request an option for tok to preserve spaces, so that text can be restored after tokenization | **Describe the feature and the current behavior/state.**
Spaces in the text (both full-width and half-width) are discarded by tok.
**Will this change the current api? How?**
I don't know.
**Who will benefit with this feature?**
People who use Simplified-Traditional Chinese conversion.
**Are you willing to contribute it (Yes/No):**
No, it is beyond my ability.
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Arch Linux
- Python version: 3.10.9
- HanLP version: 2.1.0b45, installed with `pip install hanlp`
**Any other info**
I mainly want to use HanLP for Simplified-Traditional Chinese text conversion,
because OpenCC's conversion sometimes has problems (for example, the conversion between `只` and `隻`).
In the discussion at its GitHub [#224 (comment)](https://github.com/BYVoid/OpenCC/issues/224#issuecomment-283668276), I saw someone use HanLP to segment the text before passing it to OpenCC,
so I tried it for a whole day and it works well.
But because tok does not preserve spaces, the text could not be fully restored.
Example:
```python
import hanlp
tok = hanlp.load(hanlp.pretrained.tok.COARSE_ELECTRA_SMALL_ZH)
print(tok(['2021年HanLPv2.1为生产环境带来次世代最先进的多语种Neuro-linguistic programming技术。', '阿婆主来到北京立方庭参观自然语义科技公司。']))
```
The output is:
```python
[['2021年', 'HanLPv2.1', '为', '生产', '环境', '带来', '次世代', '最', '先进', '的', '多', '语种', 'Neuro-linguistic', 'programming', '技术', '。'], ['阿婆', '主', '来到', '北京立方庭', '参观', '自然语义科技公司', '。']]
```
The space between the two words in `Neuro-linguistic programming` has disappeared.
After passing this output to OpenCC and restoring the text,
it becomes `Neuro-linguisticprogramming`.
Because my programming ability is extremely limited,
right now I just use Python to read a txt file,
tokenize it with HanLP's tok as above,
dump it into the terminal with json.dumps,
convert it in the terminal with `opencc`,
and then restore the text with tools like `jq` and `sed`.
Or is there a more effective way to do segmentation-based Simplified-Traditional conversion?
Thanks!
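Until such an option exists, a pure-Python workaround sketch: since every token is a substring of the original text, the tokens can be re-aligned against the source to recover the whitespace that tok dropped (this assumes tokens appear in order, which holds for tok output; the helper name is hypothetical):
```python
def rebuild_with_spaces(text, tokens):
    # Walk the original text, locating each token in order; whatever lies
    # between consecutive tokens (half- or full-width spaces) is re-inserted.
    out, pos = [], 0
    for token in tokens:
        start = text.index(token, pos)
        out.append(text[pos:start])  # the dropped whitespace, if any
        out.append(token)
        pos = start + len(token)
    out.append(text[pos:])
    return "".join(out)

text = "多语种Neuro-linguistic programming技术。"
tokens = ["多", "语种", "Neuro-linguistic", "programming", "技术", "。"]
assert rebuild_with_spaces(text, tokens) == text
```
The same span bookkeeping lets converted tokens (after OpenCC) be written back into their original positions.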
* [x] I've carefully completed this form. | open | 2023-01-22T11:28:16Z | 2024-02-24T00:48:27Z | https://github.com/hankcs/HanLP/issues/1802 | [
"feature request"
] | amalgame21 | 2 |
huggingface/transformers | machine-learning | 36,904 | PixtralVisionModel does not support Flash Attention 2.0 yet | ### Feature request
Flash Attention 2.0 support for Mistral-small3.1
### Motivation
https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503
Mistral-small3.1 is a powerful small LM.
### Your contribution
No. | open | 2025-03-22T15:48:58Z | 2025-03-22T15:48:58Z | https://github.com/huggingface/transformers/issues/36904 | [
"Feature request"
] | xihuai18 | 0 |
dfki-ric/pytransform3d | matplotlib | 68 | Document conventions of other tools | ...
| closed | 2020-11-18T10:27:14Z | 2020-12-28T14:57:47Z | https://github.com/dfki-ric/pytransform3d/issues/68 | [] | AlexanderFabisch | 0 |
oegedijk/explainerdashboard | dash | 20 | Addition: SimplifiedRegressionDashboard | The default dashboard can be very overwhelming with lots of tabs, toggles and dropdowns. It would be nice to offer a simplified version. This can be built as a custom ExplainerComponent and included in custom, so that you could e.g.:
```python
from explainerdashboard import RegressionExplainer, ExplainerDashboard
from explainerdashboard.custom import SimplifiedRegressionDashboard
explainer = RegressionExplainer(model, X, y)
ExplainerDashboard(explainer, SimplifiedRegressionDashboard).run()
```
It should probably include at least:
- predicted vs actual plot
- SHAP importances
- SHAP dependence
- SHAP contributions graph
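A hedged sketch of such a composite (the component class names like `PredictedVsActualComponent` are assumptions based on explainerdashboard's component naming; treat this as a layout illustration rather than a drop-in implementation):
```python
import dash_bootstrap_components as dbc
from explainerdashboard.custom import (
    ExplainerComponent, PredictedVsActualComponent, ShapSummaryComponent,
    ShapDependenceComponent, ShapContributionsGraphComponent)

class SimplifiedRegressionDashboard(ExplainerComponent):
    def __init__(self, explainer, title="Simplified dashboard", **kwargs):
        super().__init__(explainer, title=title)
        self.pred_vs_actual = PredictedVsActualComponent(explainer)
        self.importances = ShapSummaryComponent(explainer)
        self.dependence = ShapDependenceComponent(explainer)
        self.contributions = ShapContributionsGraphComponent(explainer)

    def layout(self):
        # Two rows of two plots each, kept deliberately minimal.
        return dbc.Container([
            dbc.Row([dbc.Col(self.pred_vs_actual.layout()),
                     dbc.Col(self.importances.layout())]),
            dbc.Row([dbc.Col(self.dependence.layout()),
                     dbc.Col(self.contributions.layout())]),
        ])
```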
And ideally would add in some dash_bootstrap_components sugar to make it look extra nice, plus perhaps some extra information on how to interpret the various graphs. | closed | 2020-11-17T20:03:47Z | 2021-05-05T14:47:56Z | https://github.com/oegedijk/explainerdashboard/issues/20 | [
"good first issue"
] | oegedijk | 1 |
joeyespo/grip | flask | 233 | Table is not rendered like github's renderer | Well, as the title suggests, a table is not rendered at all. I am using the following file as input, which also shows how I expect the output to be. https://github.com/EngineerCoding/BPCogs-2016-2017/tree/69cac6874c7116564b43a2039f03c8b3b80b570d/README.md
My output was like the attached image. I was running the program on Windows with grip version 4.3.2 (which is the latest on pip).

| open | 2017-03-26T12:59:01Z | 2020-08-10T02:32:07Z | https://github.com/joeyespo/grip/issues/233 | [] | EngineerCoding | 11 |
Allen7D/mini-shop-server | sqlalchemy | 28 | What software was used to draw this system block diagram? | 
| closed | 2019-06-10T03:58:54Z | 2019-07-02T06:11:35Z | https://github.com/Allen7D/mini-shop-server/issues/28 | [] | Valuebai | 2 |
apache/airflow | data-science | 47,874 | Getting 'UnmappableXComTypePushed' for taskmap DAG | ### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
ERROR - Task failed with exception source="task" error_detail=[{"exc_type":"UnmappableXComTypePushed","exc_value":"unmappable return type 'str'","exc_notes":[],"syntax_error":null,"is_cause":false,"frames":[{"filename":"/opt/airflow/task-sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":610,"name":"run"},{"filename":"/opt/airflow/task-sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":771,"name":"_push_xcom_if_needed"}]}]
### What you think should happen instead?
_No response_
### How to reproduce
Run the below DAG with logical date:
```python
from datetime import datetime, timedelta
from time import sleep
from airflow import DAG
from airflow.decorators import task
from airflow.models.taskinstance import TaskInstance
from airflow.providers.standard.operators.python import PythonOperator
from airflow.providers.standard.sensors.date_time import DateTimeSensor, DateTimeSensorAsync
from airflow.providers.standard.sensors.time_delta import TimeDeltaSensor, TimeDeltaSensorAsync
delays = [30, 60, 90]
@task
def get_delays():
return delays
@task
def get_wakes(delay, **context):
"Wake {delay} seconds after the task starts"
ti: TaskInstance = context["ti"]
return (ti.start_date + timedelta(seconds=delay)).isoformat()
with DAG(
dag_id="datetime_mapped",
start_date=datetime(1970, 1, 1),
schedule=None,
tags=["taskmap"]
) as dag:
wake_times = get_wakes.expand(delay=get_delays())
DateTimeSensor.partial(task_id="expanded_datetime").expand(target_time=wake_times)
TimeDeltaSensor.partial(task_id="expanded_timedelta").expand(
delta=list(map(lambda x: timedelta(seconds=x), [30, 60, 90]))
)
DateTimeSensorAsync.partial(task_id="expanded_datetime_async").expand(
target_time=wake_times
)
TimeDeltaSensorAsync.partial(task_id="expanded_timedelta_async").expand(
delta=list(map(lambda x: timedelta(seconds=x), [30, 60, 90]))
)
TimeDeltaSensor(task_id="static_timedelta", delta=timedelta(seconds=90))
DateTimeSensor(
task_id="static_datetime",
target_time="{{macros.datetime.now() + macros.timedelta(seconds=90)}}",
)
PythonOperator(task_id="op_sleep_90", python_callable=lambda: sleep(90))
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-17T17:21:20Z | 2025-03-18T04:59:44Z | https://github.com/apache/airflow/issues/47874 | [
"kind:bug",
"priority:medium",
"area:core",
"area:dynamic-task-mapping",
"affected_version:3.0.0beta"
] | atul-astronomer | 0 |
fastapi/sqlmodel | sqlalchemy | 252 | how to auto generate created_at, updated_at, deleted_at... field with SQLModel | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
> I need some sample codes, thank you
```
### Description
I want to have, let's say, three extra columns for created_time, updated_time, and deleted_time, whose values are set during different operations, just as the column names suggest.
I'm new to ORMs, and SQLAlchemy seems to support this feature.
How can I achieve this using SQLModel?
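A hedged sample sketch: `created_at` can come from a database server default and `updated_at` from SQLAlchemy's `onupdate` hook, both attached through SQLModel's `sa_column` escape hatch; `deleted_at` has no such hook (soft deletes are application-level), so it is set manually. Column types and defaults here are one reasonable choice, not the only one:
```python
from datetime import datetime
from typing import Optional

from sqlalchemy import Column, DateTime, func
from sqlmodel import Field, SQLModel

class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    # Set by the database when the row is inserted.
    created_at: Optional[datetime] = Field(
        sa_column=Column(DateTime(), server_default=func.now()))
    # Refreshed by SQLAlchemy whenever the row is updated.
    updated_at: Optional[datetime] = Field(
        sa_column=Column(DateTime(), onupdate=func.now()))
    deleted_at: Optional[datetime] = None  # set in app code on soft delete
```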
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
Python 3.9.9
### Additional Context
_No response_ | closed | 2022-02-26T08:38:00Z | 2025-02-09T10:37:21Z | https://github.com/fastapi/sqlmodel/issues/252 | [
"question"
] | mr-m0nst3r | 17 |
huggingface/transformers | python | 36,205 | Request to add DINO object detector | ### Model description
DINO (not to be confused with the DINO image encoder from Meta) is a SOTA DETR-like object detector, improving denoising training, query initialization, and box prediction. It is based on a combination of the enhancements brought by DN-DETR, DAB-DETR, and Deformable DETR.
As it is used as a backbone for many other DETR architectures (e.g. Co-DETR, which is SOTA on COCO test-dev: https://paperswithcode.com/sota/object-detection-on-coco), it would be nice to have it in transformers.
Additionally, a slightly improved version of DINO, called Stable-DINO, also exists, and should be easy to add on top of DINO (only a few lines of code).
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Paper : https://arxiv.org/abs/2203.03605
Code : https://github.com/IDEA-Research/DINO
Code for Stable-DINO : https://github.com/IDEA-Research/Stable-DINO | open | 2025-02-14T19:46:23Z | 2025-03-14T06:39:32Z | https://github.com/huggingface/transformers/issues/36205 | [
"New model",
"Vision",
"contributions-welcome"
] | tcourat | 8 |
K3D-tools/K3D-jupyter | jupyter | 171 | Disabling colorLegend programmatically | I have a scene with several objects. I merged them all into a single mesh because displaying them separately can be quite slow. To display different colors on this one mesh, I use a colormap, but I would like to disable the colormap's colorLegend from my plot function in Python, so that the legend does not appear automatically when I plot the scene. | closed | 2019-07-02T09:47:24Z | 2019-10-22T17:00:39Z | https://github.com/K3D-tools/K3D-jupyter/issues/171 | [] | bbrument | 1 |
tatsu-lab/stanford_alpaca | deep-learning | 105 | Alpaca problem solving team - QQ chat group | Hi all friends,
welcome to join the QQ chat group and discuss any problems and experiences. The QQ chat group number is: 397447632 | open | 2023-03-20T10:14:00Z | 2023-03-25T18:19:32Z | https://github.com/tatsu-lab/stanford_alpaca/issues/105 | [] | ZeyuTeng96 | 1 |
explosion/spacy-course | jupyter | 71 | Phrase Matcher fails on custom tokens | Currently, my functionality depends on the PhraseMatcher. I create a custom PhraseMatcher and add my custom tokens:
`self.matcher = PhraseMatcher(nlp.vocab, attr="LEMMA")`
`text = 'thermoplastic'`
`patterns = [nlp(text.lower())]`
`self.matcher.add(matcher_object['type'], None, *patterns)`
It works when I try to find single words like 'thermoplastic' or 'thermoplastics', but when I try multi-word phrases such as
'islamid thermoplastics' it fails.
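For reference, a self-contained version of what I'm doing (a sketch; the model name and the sample sentence are made up):
```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab, attr="LEMMA")

# the patterns are run through the same pipeline so their lemmas are set
terms = ["thermoplastic", "islamid thermoplastics"]
patterns = [nlp(term.lower()) for term in terms]
matcher.add("MATERIAL", None, *patterns)  # v2-style signature, as in my snippet above

doc = nlp("We use islamid thermoplastics in our products.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)
```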
Any clue what I am doing wrong? | closed | 2020-06-23T11:54:19Z | 2020-06-25T10:12:49Z | https://github.com/explosion/spacy-course/issues/71 | [] | himesh-gosvami | 1 |
modoboa/modoboa | django | 2,228 | Recent created domain fails DNS checks | # Impacted versions
* OS Type: Ubuntu
* OS Version: 20.94.2 LTS
* Database Type: PostgreSQL
* Database version: 12.6
* Modoboa: 1.17
* installer used: Yes
* Webserver: Nginx
# Steps to reproduce
Add a new domain via the Web UI, with no DNS records yet created.
You should get the message below from Modoboa:

# Current behavior
After adding a new domain whose DNS records have not been created yet, I keep getting the `No DNS record found` error message, and my domain is stuck in the `Awaiting checks` status.
# Expected behavior
Modoboa should have a way to trigger the DNS checks again. There is no way to manually check the DNS again after adding the domain.
Issue https://github.com/modoboa/modoboa/issues/1023 would fix this.
| closed | 2021-04-22T04:43:52Z | 2021-05-10T14:07:40Z | https://github.com/modoboa/modoboa/issues/2228 | [] | lpossamai | 1 |
gradio-app/gradio | deep-learning | 10,487 | Session not found whenever the request is routed to another instance | Normally, whenever I interact with a Gradio app, I see (in the browser Network tab) a POST request to
"/gradio_api/queue/join?" with `session_hash: "u26gd43ah8"` in the request payload, and right after that a GET request to /gradio_api/queue/data?session_hash=u26gd43ah8.
But if I'm running the app in a stateless environment (with multiple instances), the requests are distributed across different instances, and the in-memory session state on one instance won't be accessible to another. Therefore I only see the "/gradio_api/queue/join?" request with `session_hash: "u26gd43ah8"` in the Network tab, but nothing after that, as I get a Session Not Found error.
**Describe the solution you'd like**
Does Gradio compare the session hash from the request payload against some session_hash in the server memory for it to continue? If so, I would like Gradio to get the session hash from a database instead of from memory, so it would persist and be stateless.
**Additional context**
I hope that made sense.
Also, I don't want to use Redis or any other expensive solution. I just need to know how to access and update whatever is (missing) in memory and causes the Session not found error, so I can fix it with a middleware.
Thank you. | open | 2025-02-02T21:27:16Z | 2025-02-28T17:53:23Z | https://github.com/gradio-app/gradio/issues/10487 | [
"bug",
"cloud"
] | peeter2 | 6 |
tfranzel/drf-spectacular | rest-api | 1,066 | Hi, How are the fields of the uploaded file represented on the interface document | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
It would be most helpful to provide a small snippet to see how the bug was provoked.
**Expected behavior**
A clear and concise description of what you expected to happen.
| closed | 2023-08-31T03:37:22Z | 2023-09-18T10:29:46Z | https://github.com/tfranzel/drf-spectacular/issues/1066 | [] | Cloverxue | 10 |
ets-labs/python-dependency-injector | flask | 299 | Question about typing | Hello o/
So I'm working with dependency-injector (4.0.0) and the latest version of PyCharm,
and I'm trying to understand why auto-completion does not work as expected.
Here are the examples I tried; only the last one gives me autocompletion:
first try:
```
from dependency_injector import providers
from mongo import MongoRepository
test = providers.Singleton(MongoRepository)
test().[expect_autocompletion_from_ide_here]
```
second try:
```
from dependency_injector import providers
from mongo import MongoRepository
test: providers.Provider[MongoRepository] = providers.Singleton(MongoRepository)
test().[expect_autocompletion_from_ide_here]
```
third try (working):
```
from dependency_injector import providers
from mongo import MongoRepository
from typing import Callable
test: Callable[[], MongoRepository] = providers.Singleton(MongoRepository)
test().[here_we_have_autocompletion]
```
Am I doing something wrong?
Thanks in advance.
| closed | 2020-10-14T22:45:21Z | 2020-10-15T09:07:33Z | https://github.com/ets-labs/python-dependency-injector/issues/299 | [
"question"
] | izinihau | 2 |
PokeAPI/pokeapi | graphql | 887 | Mimikyu Data for api/v2/pokemon missing | The data of mimikyu for /api/v2/pokemon is missing:
https://pokeapi.co/api/v2/pokemon/mimikyu

| closed | 2023-06-07T21:05:40Z | 2023-06-08T09:18:34Z | https://github.com/PokeAPI/pokeapi/issues/887 | [] | GuikiPT | 3 |
s3rius/FastAPI-template | fastapi | 190 | Alembic support | Is Alembic included alongside the SQLAlchemy database option? I wasn't able to find this option and had to initialize Alembic on my own, but I think it's a necessary feature. Is it going to be available in the near future? | closed | 2023-10-02T05:22:02Z | 2023-10-02T13:32:44Z | https://github.com/s3rius/FastAPI-template/issues/190 | [] | MishaVyb | 3 |
robusta-dev/robusta | automation | 1,139 | How can this tool be integrated with ECS clusters, or can it only be used with EKS? | | closed | 2023-10-30T09:15:49Z | 2023-10-31T15:25:00Z | https://github.com/robusta-dev/robusta/issues/1139 | [] | Raghav-1078 | 2 |
ultralytics/ultralytics | computer-vision | 18,676 | Source of YOLOv10 pretrained weights | ### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
I have a question regarding the YOLOv10 pretrained weights ([performance table](https://docs.ultralytics.com/models/yolov10/#performance)): do you train your own YOLOv10 models, or do you use the pretrained weights provided in the YOLOv10 repository?
### Additional
_No response_ | closed | 2025-01-14T08:57:59Z | 2025-01-16T05:49:54Z | https://github.com/ultralytics/ultralytics/issues/18676 | [
"question"
] | piupiuisland | 4 |
Miserlou/Zappa | flask | 1,484 | Internal Server Error on deployed app | I have managed to deploy a simple hello-world app with Zappa; however, when I visit the URL the app is deployed to, all I get is:
> {"message": "Internal server error"}
When I tried to run `zappa tail`, I received the error:
> botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the DescribeLogStreams operation: The specified log group does not exist.
## Context
I am using a manually created AWS role to handle the zappa application.
My app.py looks like:
```
import logging
from flask import Flask
app = Flask(__name__)
logging.basicConfig()
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
@app.route('/', methods=['GET'])
def lambdahandler(event=None, context=None):
logger.info('Lambda function invoked index()')
return 'hello from Flask!'
if __name__ == '__main__':
app.run()
```
## Expected Behavior
I expect the API to return the text "hello from Flask!".
## Actual Behavior
The API returns
> {"message": "Internal server error"}
## Steps to Reproduce
The app is live at https://hapcnbby7h.execute-api.us-west-2.amazonaws.com/production
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.45.1
* Operating System and Python version: Windows 10, python 3.6
* The output of `pip freeze`:
```
argcomplete==1.9.2
base58==0.2.4
boto3==1.7.5
botocore==1.10.5
certifi==2018.4.16
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
docutils==0.14
durationpy==0.5
Flask==0.12.2
future==0.16.0
hjson==3.0.1
idna==2.6
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.19.0
MarkupSafe==1.0
placebo==0.8.1
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==3.12
requests==2.18.4
s3transfer==0.1.13
six==1.11.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.2.1
Unidecode==1.0.22
urllib3==1.22
virtualenv==15.2.0
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
zappa==0.45.1
```
* Your `zappa_settings.py`:
```json
{
    "production": {
        "app_function": "app.app",
        "aws_region": "us-west-2",
        "profile_name": "default",
        "project_name": "zappa-test",
        "runtime": "python3.6",
        "s3_bucket": "zappa-ds-app-0000",
        "manage_roles": false,
        "role_name": "zappa-datascience",
        "keep_warm": false
    }
}
```
| open | 2018-04-20T11:33:19Z | 2018-04-25T12:49:59Z | https://github.com/Miserlou/Zappa/issues/1484 | [] | INRIX-Joshua-Kidd | 3 |
Nekmo/amazon-dash | dash | 117 | Trouble installing on os x | Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with amazon-dash)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
### Guideline for bug reports
You can delete this section if your report is not a bug
* amazon-dash version: ~
* Python version: 2.7.10
* Pip & Setuptools version: 18.1, 18.5
* Operating System: Mac high sierra 10.13.6
How to get your version:
```
amazon-dash --version
python --version
pip --version
easy_install --version
```
- [x] The `pip install` or `setup install` command has been completed without errors
- [ ] The `python -m amazon_dash.install` command has been completed without errors
- [ ] The `amazon-dash discovery` command works without errors
- [ ] I have created/edited the configuration file
- [ ] *Amazon-dash service* or `amazon-dash --debug run` works
#### Description
I'm having trouble installing; maybe someone has gotten past this. When running _sudo python -m amazon_dash.install_, things fail. Immediately I get a `ps: illegal option` error:
> Executing all install scripts for Amazon-Dash
> [OK] config has been installed successfully
> ps: illegal option -- -
> usage: ps [-AaCcEefhjlMmrSTvwXx] [-O fmt | -o fmt] [-G gid[,gid...]]
> [-g grp[,grp...]] [-u [uid,uid...]]
> [-p pid[,pid...]] [-t tty[,tty...]] [-U user[,user...]]
> ps [-L]
> Traceback (most recent call last):
> File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
> File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
> File "/Users/jason/Library/Python/2.7/lib/python/site-packages/amazon_dash/install/__main__.py", line 3, in <module>
> catch(cli)()
> File "/Users/jason/Library/Python/2.7/lib/python/site-packages/amazon_dash/install/__init__.py", line 47, in wrap
> return fn(*args, **kwargs)
> File "/Library/Python/2.7/site-packages/click/core.py", line 764, in __call__
> return self.main(*args, **kwargs)
> File "/Library/Python/2.7/site-packages/click/core.py", line 717, in main
> rv = self.invoke(ctx)
> File "/Library/Python/2.7/site-packages/click/core.py", line 1137, in invoke
> return _process_result(sub_ctx.command.invoke(sub_ctx))
> File "/Library/Python/2.7/site-packages/click/core.py", line 956, in invoke
> return ctx.invoke(self.callback, **ctx.params)
> File "/Library/Python/2.7/site-packages/click/core.py", line 555, in invoke
> return callback(*args, **kwargs)
> File "/Users/jason/Library/Python/2.7/lib/python/site-packages/amazon_dash/install/__init__.py", line 152, in all
> has_service = has_service or (service().install() and
> File "/Users/jason/Library/Python/2.7/lib/python/site-packages/amazon_dash/install/__init__.py", line 71, in install
> self.is_installable()
> File "/Users/jason/Library/Python/2.7/lib/python/site-packages/amazon_dash/install/__init__.py", line 107, in is_installable
> if get_init_system() != 'systemd' or not get_systemd_services_path():
> File "/Users/jason/Library/Python/2.7/lib/python/site-packages/amazon_dash/install/__init__.py", line 30, in get_init_system
> return check_output(['ps', '--no-headers', '-o', 'comm', '1']).strip(b'\n ').decode('utf-8')
> File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 573, in check_output
> raise CalledProcessError(retcode, cmd, output=output)
> subprocess.CalledProcessError: Command '['ps', '--no-headers', '-o', 'comm', '1']' returned non-zero exit status 1
>
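If it helps, the failure looks like GNU-style `ps` flags being passed to the BSD `ps` shipped with macOS. A sketch of the difference (the portable variant is my untested assumption):
```python
import subprocess

# What the installer runs - GNU ps long options, which only work on Linux:
# subprocess.check_output(['ps', '--no-headers', '-o', 'comm', '1'])

# A BSD-compatible equivalent ('comm=' with an empty header string
# suppresses the header row, and '-p 1' selects PID 1):
init_name = subprocess.check_output(['ps', '-o', 'comm=', '-p', '1'])
print(init_name.strip().decode('utf-8'))
```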
| closed | 2019-01-18T00:36:42Z | 2019-03-22T17:15:50Z | https://github.com/Nekmo/amazon-dash/issues/117 | [
"bug"
] | Vernal | 2 |
litestar-org/litestar | pydantic | 3,968 | Enhancement: Provide resolved handler routes for testing | ### Summary
Hej,
as I find it tedious and error-prone to enter the correct (layered) paths for my routes in tests, I created a little function to resolve the full path from my handler function object. I wasn't able to find anything on this in the docs.
Would you be open to accept a PR adding this to the e.g. the AsyncTestClient to make developer ergonomics a little nicer?
### Basic Example
Definition:
```python
def get_route_path(
client: AsyncTestClient[Litestar],
route_handler: HTTPRouteHandler,
**path_parameters: Any, # noqa: ANN401
) -> str:
app = cast(Litestar, client.app)
name = f"{route_handler._fn.__module__}.{route_handler._fn.__qualname__}" # noqa: SLF001
return app.route_reverse(name=name, **path_parameters)
```
Usage:
```python
manual_path = f"neighbourhood/house/things/view/{uuid4().hex}" # ugh, tedious
new_and_improved_path = get_route_path(
test_client, ThingController.get_this_thing, id=uuid4()
) # passing the function object enables better IDE support
response = await test_client.get(new_and_improved_path)
```
Proposed new method on AsyncTestClient:
```python
class AsyncTestClient(AsyncClient, BaseTestClient, Generic[T]): # type: ignore[misc]
...
url_for_handler( # or something like that
self,
route_handler: HTTPRouteHandler,
**path_parameters: Any, # noqa: ANN401
) -> str:
name = f"{route_handler._fn.__module__}.{route_handler._fn.__qualname__}" # noqa: SLF001
return self.app.route_reverse(name=name, **path_parameters)
```
### Drawbacks and Impact
Drawbacks: A slightly larger API surface to maintain.
Impact: Reduced chance of typing related errors in tests.
### Unresolved questions
1. Is there a better approach? Accessing a private field (_fn) like that always feels icky.
2. Somewhat related: Would you be open to allowing UUIDs to be passed as strings into route_reverse? (Works fine locally when adding UUID to `allow_str_instead = {datetime, date, time, timedelta, float, Path, UUID}` in `route_reverse`) | closed | 2025-01-23T10:00:00Z | 2025-01-24T10:23:20Z | https://github.com/litestar-org/litestar/issues/3968 | [
"Enhancement"
] | aedify-swi | 6 |
electricitymaps/electricitymaps-contrib | data-visualization | 7,891 | [Data Issue]: consumption data about Italy increased strangely | ### When did this happen?
Starting February 16th, 2025 and still occurs
### What zones are affected?
Italy
### What is the problem?
I created a bot for X that returns the consumption values for Italy using your APIs.
On Feb 16th between 17:30 and 20:30 something strange happened: the object "powerConsumptionBreakdown" obtained from `v3/power-breakdown/latest?zone=IT` began to return a huge value for gas consumption. This gave odd consumption totals that Italy has never had in the past (see the screenshot from 17 Feb).
16 Feb
<img width="620" alt="Image" src="https://github.com/user-attachments/assets/360b3849-e627-41aa-9c4a-a9cdde86c7c8" />
17 Feb
<img width="499" alt="Image" src="https://github.com/user-attachments/assets/51d2c08c-cee4-438f-acc3-822f5ecf7cfe" />
This is the "powerConsumptionBreakdown" obtained a few minutes ago; it gives a total consumption greater than 50 GWh when, before that date, it was usually ~25 GWh at the same time of day.
```json
{
  "zone": "IT",
  "datetime": "2025-03-06T08:00:00.000Z",
  "updatedAt": "2025-03-06T07:50:09.699Z",
  "createdAt": "2025-03-03T08:43:18.776Z",
  "powerConsumptionBreakdown": {
    "nuclear": 2149,
    "geothermal": 194,
    "biomass": 927,
    "coal": 934,
    "wind": 1157,
    "solar": 1914,
    "hydro": 2705,
    "gas": 44969,
    "oil": 157,
    "unknown": 587,
    "hydro discharge": 393,
    "battery discharge": 1
  },
  ...
}
```
Thank you | open | 2025-03-06T08:15:53Z | 2025-03-22T10:21:12Z | https://github.com/electricitymaps/electricitymaps-contrib/issues/7891 | [
"data",
"needs triage"
] | fcalderan | 2 |
healthchecks/healthchecks | django | 198 | LDAP Auth Option | I'm sorry if this is the wrong place for this but after looking over all the open/closed issues for this project I have yet to see a request for an LDAP authentication option.
Would be fantastic for those of us that are plagued with the requirement to integrate with AD.
A great LDAP option for django would be [django-auth-ldap](https://django-auth-ldap.readthedocs.io/en/latest/index.html)
**Local Settings Updates:**
```
AUTHENTICATION_BACKENDS = [
'django_auth_ldap.backend.LDAPBackend',
'django.contrib.auth.backends.ModelBackend',
]
```
**Example Configuration:**
```
import ldap
from django_auth_ldap.config import LDAPSearch, GroupOfNamesType
# Baseline configuration.
AUTH_LDAP_SERVER_URI = 'ldap://ldap.example.com'
AUTH_LDAP_BIND_DN = 'cn=django-agent,dc=example,dc=com'
AUTH_LDAP_BIND_PASSWORD = 'phlebotinum'
AUTH_LDAP_USER_SEARCH = LDAPSearch(
'ou=users,dc=example,dc=com',
ldap.SCOPE_SUBTREE,
'(uid=%(user)s)',
)
# Or:
# AUTH_LDAP_USER_DN_TEMPLATE = 'uid=%(user)s,ou=users,dc=example,dc=com'
# Set up the basic group parameters.
AUTH_LDAP_GROUP_SEARCH = LDAPSearch(
'ou=django,ou=groups,dc=example,dc=com',
ldap.SCOPE_SUBTREE,
'(objectClass=groupOfNames)',
)
AUTH_LDAP_GROUP_TYPE = GroupOfNamesType(name_attr='cn')
# Simple group restrictions
AUTH_LDAP_REQUIRE_GROUP = 'cn=enabled,ou=django,ou=groups,dc=example,dc=com'
AUTH_LDAP_DENY_GROUP = 'cn=disabled,ou=django,ou=groups,dc=example,dc=com'
# Populate the Django user from the LDAP directory.
AUTH_LDAP_USER_ATTR_MAP = {
'first_name': 'givenName',
'last_name': 'sn',
'email': 'mail',
}
AUTH_LDAP_USER_FLAGS_BY_GROUP = {
'is_active': 'cn=active,ou=django,ou=groups,dc=example,dc=com',
'is_staff': 'cn=staff,ou=django,ou=groups,dc=example,dc=com',
'is_superuser': 'cn=superuser,ou=django,ou=groups,dc=example,dc=com',
}
# This is the default, but I like to be explicit.
AUTH_LDAP_ALWAYS_UPDATE_USER = True
# Use LDAP group membership to calculate group permissions.
AUTH_LDAP_FIND_GROUP_PERMS = True
# Cache distinguised names and group memberships for an hour to minimize
# LDAP traffic.
AUTH_LDAP_CACHE_TIMEOUT = 3600
# Keep ModelBackend around for per-user permissions and maybe a local
# superuser.
AUTHENTICATION_BACKENDS = (
'django_auth_ldap.backend.LDAPBackend',
'django.contrib.auth.backends.ModelBackend',
)
``` | closed | 2018-11-06T01:57:50Z | 2022-12-16T10:13:32Z | https://github.com/healthchecks/healthchecks/issues/198 | [
"feature"
] | smacktrace | 6 |
dfki-ric/pytransform3d | matplotlib | 246 | New logo | Hi thank you so much for this awesome library!
I saw last week that you wanted help with the logo https://github.com/dfki-ric/pytransform3d/issues/241
I made this small logo with my renderer in case you want to use it. I can send you the script in case you want to change something!

| closed | 2023-04-23T10:00:07Z | 2023-05-24T12:39:59Z | https://github.com/dfki-ric/pytransform3d/issues/246 | [] | oarriaga | 2 |
graphql-python/graphene-django | django | 533 | How to use Django models which have no "name" attribute? | **Does my Django model have to have a "name" attribute??? Can I override it somehow?** I have some Django models which don't have a "name" attribute. They don't work :( Only those which have a "name" attribute work, and I can query them with GraphiQL. :/
> ImportError at /graphql
> Could not import 'myproject.schema.schema' for Graphene setting 'SCHEMA'. AttributeError: type object 'MyModel' has no attribute 'name'. | closed | 2018-10-12T14:47:38Z | 2018-10-12T15:14:26Z | https://github.com/graphql-python/graphene-django/issues/533 | [] | ghost | 1 |
postmanlabs/httpbin | api | 617 | /redirect-to returns 404 | All the `/redirect-to` endpoints are returning 404s.
```console
$ curl -v -X GET "http://httpbin.org/redirect-to?url=http://httpbin.org/get"
* Trying 34.235.192.52...
* TCP_NODELAY set
* Connected to httpbin.org (34.235.192.52) port 80 (#0)
> GET /redirect-to?url=http://httpbin.org/get HTTP/1.1
> Host: httpbin.org
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: awselb/2.0
< Date: Sat, 20 Jun 2020 06:48:23 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 0
< Connection: keep-alive
<
* Connection #0 to host httpbin.org left intact
* Closing connection 0
```
| closed | 2020-06-20T06:50:16Z | 2022-04-06T01:48:31Z | https://github.com/postmanlabs/httpbin/issues/617 | [
"bug"
] | codenirvana | 21 |
sunscrapers/djoser | rest-api | 833 | "User Delete" endpoint expects DRF token despite `rest_framework_simplejwt` auth backend being set | As in the title, I've got a simple Django app where I use `rest_framework_simplejwt`. Other flows (e.g. user creation) work flawlessly, but I've encountered an issue with the `DELETE /users/me/` endpoint, which responds with:
```
AttributeError at /auth/users/me/
type object 'Token' has no attribute 'objects'
(...)
```
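This seems to be the `Token` model from DRF's token-based authentication, which I'd expect to be unused with a `rest_framework_simplejwt`-only setup. The one workaround I've found so far is telling djoser not to use a token model at all - a sketch (that `TOKEN_MODEL = None` is the intended setting for JWT-only setups is my assumption):
```python
# settings.py - sketch of the workaround; TOKEN_MODEL = None being the
# JWT-only configuration is my assumption, not verified against the docs
DJOSER = {
    "TOKEN_MODEL": None,
}
```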
Is this expected behaviour, or should the user-delete flow skip the DRF token cleanup when no token model is in use? | open | 2024-06-17T13:19:46Z | 2025-01-15T15:19:12Z | https://github.com/sunscrapers/djoser/issues/833 | [
"bug",
"help wanted"
] | lukaszsi | 6 |
lepture/authlib | flask | 698 | authlib.integrations.requests_client.OAuth2Session creates a reference cycle that requires a deep garbage collection cycle to cleanup | **Describe the bug**
`authlib.integrations.requests_client.OAuth2Session` holds a reference to itself (through `self.session`) and references each other with `Oauth2Auth` (through `TokenAuth.client`). Those two references prevent the unused session objects from being freed until the garbage collector runs a deep cleanup cycle (`generation=2`).
**To Reproduce**
1. Disable garbage collection temporarily to make sure we are the ones who catch it
2. Set garbage collector's debug level to `DEBUG_LEAK`
3. Create and delete an `OAuth2Session` object
4. Force a garbage collection run to confirm that the problem exists (the output will list all hard to free objects)
```python
import gc
from authlib.integrations.requests_client import OAuth2Session
session = OAuth2Session()
gc.collect() # make sure there is no lingering garbage
gc.disable()
gc.set_debug(gc.DEBUG_LEAK)
del session
gc.collect()
gc.set_debug(0)
```
**Expected behavior**
The memory should be freed as soon as the session becomes unused.
**Environment:**
- OS: MacOS and Linux
- Python Version: 3.12
- Authlib Version: 1.4.0
**Additional context**
Adding the following finalizers to `authlib` breaks up the cycles and results in the garbage collector finding no garbage:
```python
class OAuth2Session(OAuth2Client, Session):
...
def __del__(self):
del self.session
```
```python
class TokenAuth:
...
def __del__(self):
del self.client
del self.hooks
```
| open | 2025-01-24T14:37:51Z | 2025-02-20T09:27:03Z | https://github.com/lepture/authlib/issues/698 | [
"bug",
"client"
] | patrys | 1 |
tqdm/tqdm | jupyter | 892 | Non-blocking output? | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [x] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
>>> import tqdm, sys
>>> print(tqdm.__version__, sys.version, sys.platform)
4.36.1 3.7.5 (default, Oct 25 2019, 15:51:11)
[GCC 7.3.0] linux
```
When using [EternalTerminal](https://github.com/MisterTea/EternalTerminal), running a Python program with a tqdm progress bar 'suspends' (makes no progress) if there is no client viewing the program's output. Is there any way to circumvent that?
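One workaround I have been considering is routing the bar away from the pty entirely, so a blocked terminal can't stall writes - an untested sketch:
```python
from time import sleep

from tqdm import tqdm

# send progress to a plain file; `tail -f progress.log` shows it when needed
with open("progress.log", "w") as fout:
    for _ in tqdm(range(100), file=fout, mininterval=5):
        sleep(0.1)
```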
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| open | 2020-02-12T13:46:07Z | 2020-03-31T18:51:36Z | https://github.com/tqdm/tqdm/issues/892 | [
"help wanted 🙏",
"invalid ⛔",
"question/docs ‽"
] | tsoernes | 2 |
plotly/dash-cytoscape | plotly | 55 | Edge Attributes/Labels (future work) | Is it possible to show edge properties/attributes (relationship description) in cytoscape?
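For context, something like the stylesheet rule below is what I have in mind (a sketch; assuming each edge carries a `label` field in its `data` dict):
```python
import dash_cytoscape as cyto

elements = [
    {"data": {"id": "a", "label": "Node A"}},
    {"data": {"id": "b", "label": "Node B"}},
    {"data": {"source": "a", "target": "b", "label": "relates to"}},
]

stylesheet = [
    {"selector": "node", "style": {"label": "data(label)"}},
    # render each edge's data(label) alongside the edge, like node labels
    {"selector": "edge", "style": {"label": "data(label)", "font-size": "10px"}},
]

graph = cyto.Cytoscape(id="graph", elements=elements, stylesheet=stylesheet)
```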
In Dash there is a way to show edge properties on hover, but it would be good if they could simply be shown like node labels. | open | 2019-04-12T03:55:06Z | 2022-11-05T00:45:11Z | https://github.com/plotly/dash-cytoscape/issues/55 | [
"suggestion"
] | realboa | 4 |
modin-project/modin | pandas | 7,170 | BUG: Calling df._repartition(axis=1) on updated df will raise IndexError | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest released version of Modin.
- [ ] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
import time
import modin.pandas as pd
import modin.config as cfg
import numpy as np
import ray
from modin.distributed.dataframe.pandas import unwrap_partitions, from_partitions
from sklearn.preprocessing import RobustScaler
from sklearn.tree import DecisionTreeClassifier
ray.init()
# Config modin to partition dataframe into 5 partitions and not to partition against columns
cfg.MinPartitionSize.put(102)
cfg.NPartitions.put(5)
# Generate samples
data = np.random.rand(10000, 100)
label = [i for i in range(1, 9)] * 1250
features = ['feature' + str(i) for i in range(1, 101)]
df = pd.DataFrame(data=data, columns=features)
df['label'] = label
# Scale samples
scaler = RobustScaler()
res = scaler.fit_transform(df[[column for column in df.columns if column != 'label']].to_numpy())
frame = pd.DataFrame(res, columns=[column for column in df.columns if column != 'label'])
# Update dataframe
df.update(frame)
# Repartition to make dataframe contain only 1 partition against columns
# This will work
partitions = unwrap_partitions(df, axis=0)
df = from_partitions(partitions, axis=0)
# This will raise an error
# df = df._repartition(axis=1)
# Fit a DTC model of sklearn
clf = DecisionTreeClassifier()
features = df[df.columns.drop(['label'])].to_numpy()
clf.fit(features, label)
```
### Issue Description
I created a dataframe whose shape is (10000,101).
In order to make the df contain only 1 partition against columns, I followed the instruction from @YarShev that setting MinPartitionSize would achieve this.
Then I scaled the df with RobustScaler from sklearn and tried to fit a DTC model.
Yet I found that the updated df was partitioned against columns again, which made the fitting take about twice as long.
So I tried repartitioning the df only against columns by calling `df = df._repartition(axis=1)`. Yet I got an IndexError.
But I managed to solve the problem by calling `unwrap_partitions` and `from_partitions`.
### Expected Behavior
`df._repartition(axis=1)` will make the updated df contain only 1 partition against columns, and the repartitioned df can be fed into the DTC.
### Error Logs
<details>
```python-traceback
Traceback (most recent call last):
File "D:\Work\Python\RayDemo3.8\aaaa.py", line 41, in <module>
features = df[df.columns.drop(['label'])].to_numpy()
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\logging\logger_decorator.py", line 128, in run_and_log
return obj(*args, **kwargs)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\pandas\base.py", line 3138, in to_numpy
return self._to_bare_numpy(
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\logging\logger_decorator.py", line 128, in run_and_log
return obj(*args, **kwargs)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\pandas\base.py", line 3119, in _to_bare_numpy
return self._query_compiler.to_numpy(
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\logging\logger_decorator.py", line 128, in run_and_log
return obj(*args, **kwargs)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\core\storage_formats\pandas\query_compiler.py", line 376, in to_numpy
arr = self._modin_frame.to_numpy(**kwargs)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\logging\logger_decorator.py", line 128, in run_and_log
return obj(*args, **kwargs)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\core\dataframe\pandas\dataframe\dataframe.py", line 3882, in to_numpy
return self._partition_mgr_cls.to_numpy(self._partitions, **kwargs)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\logging\logger_decorator.py", line 128, in run_and_log
return obj(*args, **kwargs)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\core\execution\ray\generic\partitioning\partition_manager.py", line 43, in to_numpy
parts = RayWrapper.materialize(
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\core\execution\ray\common\engine_wrapper.py", line 92, in materialize
return ray.get(obj_id)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\ray\_private\auto_init_hook.py", line 21, in auto_init_wrapper
return fn(*args, **kwargs)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\ray\_private\client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\ray\_private\worker.py", line 2667, in get
values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\ray\_private\worker.py", line 864, in get_objects
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(IndexError): ray::_apply_list_of_funcs() (pid=10084, ip=127.0.0.1)
File "python\ray\_raylet.pyx", line 1889, in ray._raylet.execute_task
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\core\execution\ray\implementations\pandas_on_ray\partitioning\partition.py", line 440, in _apply_list_of_funcs
partition = func(partition, *args, **kwargs)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\modin\core\dataframe\pandas\partitioning\partition.py", line 217, in _iloc
return df.iloc[row_labels, col_labels]
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\pandas\core\indexing.py", line 1097, in __getitem__
return self._getitem_tuple(key)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\pandas\core\indexing.py", line 1594, in _getitem_tuple
tup = self._validate_tuple_indexer(tup)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\pandas\core\indexing.py", line 904, in _validate_tuple_indexer
self._validate_key(k, i)
File "D:\Work\Python\RayDemo3.8\venv\lib\site-packages\pandas\core\indexing.py", line 1516, in _validate_key
raise IndexError("positional indexers are out-of-bounds")
IndexError: positional indexers are out-of-bounds
```
</details>
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0c3746baeecf2ff3a0f5f7a049dcb22d3e6eab43
python : 3.8.10.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.22000
machine : AMD64
processor : Intel64 Family 6 Model 151 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : Chinese (Simplified)_China.936
Modin dependencies
------------------
modin : 0.23.1.post0
ray : 2.10.0
dask : 2023.5.0
distributed : None
hdk : None
pandas dependencies
-------------------
pandas : 2.0.3
numpy : 1.24.4
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.0
pip : 24.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : 1.4.6
psycopg2 : None
jinja2 : 3.1.2
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
brotli : None
fastparquet : None
fsspec : 2023.10.0
gcsfs : None
matplotlib : 3.7.4
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 15.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
snappy : None
sqlalchemy : 2.0.25
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
None
</details>
| closed | 2024-04-11T01:28:34Z | 2024-04-15T10:26:33Z | https://github.com/modin-project/modin/issues/7170 | [
"bug 🦗",
"External"
] | Taurus-Le | 4 |
sktime/sktime | scikit-learn | 7,022 | [ENH] `RotationForest` is not an `sklearn` compliant classifier | The classification module has `RotationForest`, which purports to be an `sklearn`-compliant classifier. However, it is not compliant: it does not inherit from `ClassifierMixin`, and it is not tested against `parametrize_with_checks`.
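A test along the following lines would catch both gaps - a sketch (the `sktime.classification.sklearn` import path is my assumption):
```python
from sklearn.base import ClassifierMixin
from sklearn.utils.estimator_checks import parametrize_with_checks

from sktime.classification.sklearn import RotationForest


@parametrize_with_checks([RotationForest()])
def test_sklearn_compatible_estimator(estimator, check):
    check(estimator)


def test_inherits_classifier_mixin():
    assert issubclass(RotationForest, ClassifierMixin)  # currently fails
```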
Adding the mixin and wiring up this compliance test should close the gap. | closed | 2024-08-23T12:52:32Z | 2025-01-17T11:36:57Z | https://github.com/sktime/sktime/issues/7022 | [
"module:classification",
"bugfix",
"enhancement"
] | fkiraly | 0 |
coqui-ai/TTS | deep-learning | 3,463 | [Bug] Memory Explosion with xtts HifiganGenerator | ### Describe the bug
When running xttsv2 on an RTX 3090 under WSL2 Ubuntu 22.04 on Windows 11, I would intermittently get memory explosions when doing inference. It seems to happen when I have a Hugging Face transformer LLM loaded at the same time as XTTS. I traced it to the forward pass of HifiganGenerator, specifically `o = self.conv_pre(x)`, where `self.conv_pre` is just `weight_norm(Conv1d(in_channels, upsample_initial_channel, 7, 1, padding=3))`. I couldn't dig any further into what was going on, but for some reason calling this uses all available GPU memory. Before hitting this line the system is using 8 GB of VRAM; as soon as it hits it, usage jumps to 23.7+ GB of VRAM and the system starts to freeze.
Any help would be awesome, but it is a weird bug.
### To Reproduce
I'm not able to reproduce it on any of the leased machines I have. It only happens on my RTX 3090, but the steps seem to be:
1. Load the XTTS model
2. Load a Hugging Face LLM
3. Run inference via `inference_stream`
### Expected behavior
Memory pressure may fluctuate a bit but not 16+GB worth of fluxuation
### Logs
_No response_
### Environment
```shell
Windows 11
WSL2 Ubuntu 22.04
Tried multiple versions of Python and PyTorch and multiple versions of CUDA
Reproduced with PyTorch builds for CUDA 11.8 and 12.2
```
### Additional context
_No response_ | closed | 2023-12-25T09:11:33Z | 2023-12-26T01:04:55Z | https://github.com/coqui-ai/TTS/issues/3463 | [
"bug"
] | chaseaucoin | 1 |
agronholm/anyio | asyncio | 668 | Don't wrap exceptions in `ExceptionGroup` if only one exception is raised on Python<3.11 | Not sure whether it's a feature or bug/regression.
I'm in the process of upgrading from anyio 3 to anyio 4.
So far we've explicitly designed our code in a way that no more than 1 exception will be raised in a task group. (By making sure only one code path will result in an exception.) This worked great, and prevented us from having to deal with exception groups. We still support Python 3.8 so, that's important to us.
However, after upgrading to anyio 4, even if there is only one exception raised as part of a task group, it will be wrapped in an `ExceptionGroup`. This means, we have to use the `with catch()` syntax which is quite cumbersome, and everywhere. That's a huge amount of work, and boilerplate to add. :'(.
It would be nice if the code could be modified so that on Python<3.11, if there is only one exception, it won't be wrapped. The change is very simple, and should be backward compatible.
```diff
--- a/src/anyio/_backends/_asyncio.py
+++ b/src/anyio/_backends/_asyncio.py
@@ -675,6 +675,8 @@ class TaskGroup(abc.TaskGroup):
self._active = False
if self._exceptions:
+ if len(self._exceptions) == 1 and sys.version_info < (3, 11):
+ raise self._exceptions[0]
raise BaseExceptionGroup(
"unhandled errors in a TaskGroup", self._exceptions
)
``` | closed | 2024-01-11T22:41:37Z | 2025-01-02T12:09:29Z | https://github.com/agronholm/anyio/issues/668 | [
"enhancement"
] | jonathanslenders | 10 |
sammchardy/python-binance | api | 1,283 | How to perform asynchronous futures Depth Cache? | 
My code doesn't produce any output; can you give a correct example? Thanks.
```python
from binance import ThreadedWebsocketManager


def main():
    dcm = ThreadedWebsocketManager()
    # start is required to initialise its internal loop
    dcm.start()

    def handle_depth_cache(depth_cache):
        print(f"symbol {depth_cache.symbol}")
        print("top 5 bids")
        print(depth_cache.get_bids()[:5])
        print("top 5 asks")
        print(depth_cache.get_asks()[:5])
        print("last update time {}".format(depth_cache.update_time))

    dcm_name = dcm.start_futures_depth_socket(handle_depth_cache, symbol='BNBBTC')
    dcm.join()


if __name__ == "__main__":
    main()
```
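This is the direction I think a fully asynchronous version should take - a sketch (whether `FuturesDepthCacheManager` is importable from `binance.depthcache` in my installed version is an assumption on my part):
```python
import asyncio

from binance import AsyncClient
from binance.depthcache import FuturesDepthCacheManager


async def main():
    client = await AsyncClient.create()
    dcm = FuturesDepthCacheManager(client, symbol='BNBUSDT')
    async with dcm as dcm_socket:
        for _ in range(5):  # grab a few snapshots, then stop
            depth_cache = await dcm_socket.recv()
            print(depth_cache.symbol, depth_cache.get_bids()[:5])
    await client.close_connection()


if __name__ == "__main__":
    asyncio.run(main())
```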
| open | 2023-02-03T17:35:41Z | 2023-03-26T17:41:39Z | https://github.com/sammchardy/python-binance/issues/1283 | [] | 1163849662 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,115 | How to keep the size of the output image the same as the input image? | I trained the CycleGAN model on my dataset and I want to keep the size of the output image the same as the input image, so I set `--preprocess=none` when testing. But the result looks very smooth and distorted. How can I fix it? Many thanks. | open | 2020-08-04T04:44:27Z | 2020-08-04T05:04:44Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1115 | [] | GuoLanqing | 2 |
deepset-ai/haystack | nlp | 8,335 | Ability to set max_seq_len to the SentenceTransformers components | **Is your feature request related to a problem? Please describe.**
The SentenceTransformer models have a [max_seq_len attribute](https://github.com/UKPLab/sentence-transformers/blob/0a32ec8445ef46b2b5d4f81af4931e293d42623f/sentence_transformers/SentenceTransformer.py#L1635).
In theory, we could set it with the `model_max_length` in tokenizer_kwargs which then eventually should set that [attribute here](https://github.com/UKPLab/sentence-transformers/blob/0a32ec8445ef46b2b5d4f81af4931e293d42623f/sentence_transformers/models/Transformer.py#L67).
However, it seems to be unreliable. We saw it with bge-m3, but it could also be the case for other models. ([Colab](https://colab.research.google.com/drive/1iw5s9JzQ6bck1AxXgLleuvm6TMika2xE?usp=sharing))
This can result in OOM error when embedding.
**Update**: found the "issue". The `max_seq_length` is read into the kwargs from [this config json](https://huggingface.co/BAAI/bge-m3/blob/5617a9f61b028005a4858fdac845db406aefb181/sentence_bert_config.json#L2) at [this point in _load_sbert_model](https://github.com/UKPLab/sentence-transformers/blob/0a32ec8445ef46b2b5d4f81af4931e293d42623f/sentence_transformers/SentenceTransformer.py#L1531) and thus the max_seq_length is not None [here](https://github.com/UKPLab/sentence-transformers/blob/0a32ec8445ef46b2b5d4f81af4931e293d42623f/sentence_transformers/models/Transformer.py#L67) and so it doesn't use `model_max_length` from the tokenizer_kwargs.
**Describe the solution you'd like**
Would be good to have the ability to set the `max_seq_len` in the (currently three) SentenceTransformers components [as in v1](https://github.com/deepset-ai/haystack/blob/a7005f6cd9ea7528ca93535ee181a7f792d134e0/haystack/nodes/retriever/_embedding_encoder.py#L144).
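In the meantime, the workaround we use is to cap the attribute on the wrapped model after warm-up - a sketch that reaches into private internals (`embedding_backend` is not public API, and the attribute path is my assumption):
```python
from haystack.components.embedders import SentenceTransformersDocumentEmbedder

embedder = SentenceTransformersDocumentEmbedder(model="BAAI/bge-m3")
embedder.warm_up()
# cap the sequence length on the underlying SentenceTransformer
embedder.embedding_backend.model.max_seq_length = 512
```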
We could possibly also intercept the tokenizer_kwargs and use `model_max_length` from it if it's set. | closed | 2024-09-05T15:27:47Z | 2024-09-06T09:37:58Z | https://github.com/deepset-ai/haystack/issues/8335 | [
"2.x"
] | bglearning | 0 |
donBarbos/telegram-bot-template | pydantic | 130 | Feature: add migrations (alembic) | closed | 2024-01-16T14:23:13Z | 2024-01-23T18:13:53Z | https://github.com/donBarbos/telegram-bot-template/issues/130 | [] | donBarbos | 1 |
|
mirumee/ariadne-codegen | graphql | 15 | Fix generating types from mutation | Example schema file
```gql
schema {
query: Query
mutation: Mutation
}
type Query {
testQuery: Int!
}
type Mutation {
testMutation(num: Int!): ResultType
}
type ResultType {
number: Int!
}
```
and queries file:
```gql
mutation CustomMutation($num: Int!) {
testMutation(num: $num) {
number
}
}
```
Given the files above, the package should generate the correct types into the `custom_mutation.py` file, but currently a `KeyError: 'testMutation'` exception is raised from [this line](https://github.com/mirumee/graphql-sdk-gen/blob/main/graphql_sdk_gen/generators/query_types.py#L54) | closed | 2022-10-20T10:20:44Z | 2022-10-24T10:07:02Z | https://github.com/mirumee/ariadne-codegen/issues/15 | [
"bug"
] | mat-sop | 0 |
nerfstudio-project/nerfstudio | computer-vision | 3,318 | Issue rendering splatfacto with ns-render, assertion error | Hi, I've been getting this assertion error when running ns-render: `assert isinstance(data_manager_config, (VanillaDataManagerConfig, FullImageDatamanagerConfig))` raises `AssertionError`.
I found a similar issue in #2913; it was supposedly resolved with the addition of FullImageDatamanagerConfig, but it's still not working for me. It works when I comment out the lines with the assertion, as discussed in #2913, but I'm sure that affects things. I'm also not getting the same number of images in the render as in the original dataset when using `ns-render dataset`.
I'm pretty new to this, so any help would be appreciated. Thanks! | open | 2024-07-18T23:12:13Z | 2024-07-18T23:56:22Z | https://github.com/nerfstudio-project/nerfstudio/issues/3318 | [] | mzlchou | 0 |
ultralytics/ultralytics | python | 19,157 | Brush labels to YOLO format | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello! Recently I used the SAM model in Label Studio to assist with my labeling task, and it works pretty nicely. However, I ran into problems exporting the dataset: it can't be exported in COCO or YOLO format since SAM creates brush labels. Could you tell me how to convert my brush labels to a YOLO-format dataset?
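In case it helps to explain what I mean: once each brush region is exported as a binary mask, I believe the conversion boils down to tracing the contour and normalising it. A sketch (the helper name and the one-polygon-per-mask assumption are mine):
```python
import cv2
import numpy as np


def mask_to_yolo_seg(mask: np.ndarray, class_id: int) -> str:
    # turn one binary mask (H, W) into a YOLO segmentation label line
    h, w = mask.shape
    contours, _ = cv2.findContours(
        mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    polygon = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    polygon[:, 0] /= w  # normalise x to [0, 1]
    polygon[:, 1] /= h  # normalise y to [0, 1]
    coords = " ".join(f"{x:.6f} {y:.6f}" for x, y in polygon)
    return f"{class_id} {coords}"
```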
### Additional
_No response_ | open | 2025-02-10T08:59:18Z | 2025-02-10T10:48:35Z | https://github.com/ultralytics/ultralytics/issues/19157 | [
"question",
"segment"
] | underagetaikonaut | 2 |
gee-community/geemap | streamlit | 1,481 | file_per_band not working | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
Sun Mar 26 21:17:46 2023 Eastern Daylight Time
OS Windows CPU(s) 20 Machine AMD64
Architecture 64bit RAM 63.9 GiB Environment Jupyter
Python 3.8.16 (default, Jan 17 2023, 22:25:28) [MSC v.1916 64 bit (AMD64)]
geemap 0.20.1 ee 0.1.339 ipyleaflet 0.17.2
folium 0.13.0 jupyterlab 3.5.3 notebook 6.5.2
ipyevents 2.0.1
Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
### Description
Running through tutorial #11 to output a subset of an image: no matter what I do, `file_per_band` delivers a single file whether it's set to True or False. I'm trying to subset a single Sentinel image by importing an image collection and using `collection.first()` to work with only the first image in the collection.
```
geemap.ee_export_image(img, filename = filename, scale=90, region = studyarea, file_per_band=True)
```
| closed | 2023-03-27T01:22:38Z | 2023-03-27T13:45:07Z | https://github.com/gee-community/geemap/issues/1481 | [
"bug"
] | jportolese | 1 |
widgetti/solara | jupyter | 691 | Issue with Chatbox Avatar Display | 
I've encountered a display issue with the chatbox avatar in our application that I'm hoping you can help with. There appear to be consistent blank spaces on the left and top sides of the avatar image within the chatbox. | open | 2024-06-23T03:46:11Z | 2024-06-24T07:15:18Z | https://github.com/widgetti/solara/issues/691 | [
"bug"
] | OtokoNoIzumi | 1 |
tensorflow/tensor2tensor | deep-learning | 1,608 | what is the difference between learning_rate_warmup_steps and learning_rate_decay_step? | ### Description
Hi everybody,
I'm trying to build a Transformer model with optimal hyperparameters, but I'm having some trouble understanding these two terms. I feel they may affect the training process and either improve or reduce translation quality. Can anyone explain them in detail?
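For context, my current reading of the warmup part is the 'noam' schedule from the Transformer paper - the learning rate ramps up linearly for `learning_rate_warmup_steps` and only afterwards decays (please correct me if tensor2tensor does something different):
```python
def noam_lr(step, d_model=512, warmup_steps=4000):
    # linear ramp for the first warmup_steps steps,
    # then decay proportional to step ** -0.5
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```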
Thank you very much!
...
### Environment information
```
OS: <your answer here>
$ pip freeze | grep tensor
# your output here
$ python -V
# your output here
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
...
```
```
# Error logs:
...
```
| open | 2019-06-19T18:04:22Z | 2019-06-19T18:04:22Z | https://github.com/tensorflow/tensor2tensor/issues/1608 | [] | EthannyDing | 0 |
X-PLUG/MobileAgent | automation | 85 | Mobile-Agent-E is a bit slow | Great work! While reproducing it, I found it rather slow. Are there any areas where speed could be improved without affecting accuracy too much? | open | 2025-01-24T09:21:32Z | 2025-02-18T14:07:59Z | https://github.com/X-PLUG/MobileAgent/issues/85 | [] | cbigeyes | 2 |
LAION-AI/Open-Assistant | python | 2,897 | Models not found | Hello. I tried to deploy the project with Docker on my own server. I tried all the models listed in model_configs.py, and for almost all of them I get 'is not a folder' or 'cannot be found on Hugging Face' errors, so my worker process always shuts down quickly. I can only run the default distilgpt2 model, which gives nonsense answers. Does anyone know any other working models? | closed | 2023-04-25T10:55:17Z | 2023-04-29T17:22:34Z | https://github.com/LAION-AI/Open-Assistant/issues/2897 | [
"question"
] | 136William136 | 6 |
deepfakes/faceswap | machine-learning | 808 | I found this issue during extracting |
```
Loading...
07/23/2019 20:24:11 INFO Log level set to: INFO
Traceback (most recent call last):
File "C:\Users\lpmc_user\faceswap\faceswap.py", line 36, in <module>
ARGUMENTS.func(ARGUMENTS)
File "C:\Users\lpmc_user\faceswap\lib\cli.py", line 115, in execute_script
plaidml_found = self.setup_amd(arguments.loglevel)
File "C:\Users\lpmc_user\faceswap\lib\cli.py", line 148, in setup_amd
import plaidml # noqa pylint:disable=unused-import
File "C:\Users\lpmc_user\AppData\Roaming\Python\Python37\site-packages\plaidml\__init__.py", line 50, in <module>
import plaidml.settings
File "C:\Users\lpmc_user\AppData\Roaming\Python\Python37\site-packages\plaidml\settings.py", line 33, in <module>
_setup_config('PLAIDML_EXPERIMENTAL_CONFIG', 'experimental.json')
File "C:\Users\lpmc_user\AppData\Roaming\Python\Python37\site-packages\plaidml\settings.py", line 30, in _setup_config
'Could not find PlaidML configuration file: "{}".'.format(filename))
plaidml.exceptions.PlaidMLError: Could not find PlaidML configuration file: "experimental.json".
Process exited.
```
| closed | 2019-07-23T18:30:55Z | 2019-08-19T01:19:20Z | https://github.com/deepfakes/faceswap/issues/808 | [] | ZakariaMHTX | 31 |
mwaskom/seaborn | data-visualization | 3,506 | AttributeError: 'numpy.bool_' object has no attribute 'startswith' #3505 | I have all the latest versions; please find them below:
matplotlib=>3.8.0
numpy=>1.26.0
pandas=>2.1.1
seaborn=>0.12.2
```
(ai_ml_training) PS C:\Users\sarsasid> pip install matplotlib --upgrade
Requirement already satisfied: matplotlib in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (3.8.0)
Requirement already satisfied: contourpy>=1.0.1 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib) (1.1.1)
Requirement already satisfied: cycler>=0.10 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib) (0.11.0)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib) (4.42.1)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib) (1.4.5)
Requirement already satisfied: numpy<2,>=1.21 in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (from matplotlib) (1.26.0)
Requirement already satisfied: packaging>=20.0 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib) (23.1)
Requirement already satisfied: pillow>=6.2.0 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib) (10.0.1)
Requirement already satisfied: pyparsing>=2.3.1 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib) (3.1.1)
Requirement already satisfied: python-dateutil>=2.7 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib) (2.8.2)
Requirement already satisfied: six>=1.5 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from python-dateutil>=2.7->matplotlib) (1.16.0)
(ai_ml_training) PS C:\Users\sarsasid> pip install numpy --upgrade
Requirement already satisfied: numpy in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (1.26.0)
(ai_ml_training) PS C:\Users\sarsasid> pip install pandas --upgrade
Requirement already satisfied: pandas in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (2.1.1)
Requirement already satisfied: numpy>=1.23.2 in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (from pandas) (1.26.0)
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from pandas) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (from pandas) (2023.3.post1)
Requirement already satisfied: tzdata>=2022.1 in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (from pandas) (2023.3)
Requirement already satisfied: six>=1.5 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from python-dateutil>=2.8.2->pandas) (1.16.0)
(ai_ml_training) PS C:\Users\sarsasid> pip install matplotlib_inline --upgrade
Requirement already satisfied: matplotlib_inline in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (0.1.6)
Requirement already satisfied: traitlets in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib_inline) (5.9.0)
(ai_ml_training) PS C:\Users\sarsasid> pip install seaborn --upgrade
Requirement already satisfied: seaborn in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (0.12.2)
Requirement already satisfied: numpy!=1.24.0,>=1.17 in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (from seaborn) (1.26.0)
Requirement already satisfied: pandas>=0.25 in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (from seaborn) (2.1.1)
Requirement already satisfied: matplotlib!=3.6.1,>=3.1 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from seaborn) (3.8.0)
Requirement already satisfied: contourpy>=1.0.1 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib!=3.6.1,>=3.1->seaborn) (1.1.1)
Requirement already satisfied: cycler>=0.10 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib!=3.6.1,>=3.1->seaborn) (0.11.0)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib!=3.6.1,>=3.1->seaborn) (4.42.1)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib!=3.6.1,>=3.1->seaborn) (1.4.5)
Requirement already satisfied: packaging>=20.0 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib!=3.6.1,>=3.1->seaborn) (23.1)
Requirement already satisfied: pillow>=6.2.0 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib!=3.6.1,>=3.1->seaborn) (10.0.1)
Requirement already satisfied: pyparsing>=2.3.1 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib!=3.6.1,>=3.1->seaborn) (3.1.1)
Requirement already satisfied: python-dateutil>=2.7 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from matplotlib!=3.6.1,>=3.1->seaborn) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (from pandas>=0.25->seaborn) (2023.3.post1)
Requirement already satisfied: tzdata>=2022.1 in c:\users\sarsasid\appdata\local\anaconda3\envs\ai_ml_training\lib\site-packages (from pandas>=0.25->seaborn) (2023.3)
Requirement already satisfied: six>=1.5 in c:\users\sarsasid\appdata\roaming\python\python311\site-packages (from python-dateutil>=2.7->matplotlib!=3.6.1,>=3.1->seaborn) (1.16.0)
(ai_ml_training) PS C:\Users\sarsasid> python
Python 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import seaborn as sns
>>> df = sns.load_dataset('titanic')
>>> df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 survived 891 non-null int64
1 pclass 891 non-null int64
2 sex 891 non-null object
3 age 714 non-null float64
4 sibsp 891 non-null int64
5 parch 891 non-null int64
6 fare 891 non-null float64
7 embarked 889 non-null object
8 class 891 non-null category
9 who 891 non-null object
10 adult_male 891 non-null bool
11 deck 203 non-null category
12 embark_town 889 non-null object
13 alive 891 non-null object
14 alone 891 non-null bool
dtypes: bool(2), category(2), float64(2), int64(4), object(5)
memory usage: 80.7+ KB
>>>
>>> sns.set(style="whitegrid", color_codes=True)
>>> sns.countplot(x="sex", hue= "alone", data=df)
C:\Users\sarsasid\AppData\Local\anaconda3\envs\ai_ml_training\Lib\site-packages\seaborn\_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
if pd.api.types.is_categorical_dtype(vector):
C:\Users\sarsasid\AppData\Local\anaconda3\envs\ai_ml_training\Lib\site-packages\seaborn\_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
if pd.api.types.is_categorical_dtype(vector):
C:\Users\sarsasid\AppData\Local\anaconda3\envs\ai_ml_training\Lib\site-packages\seaborn\_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
if pd.api.types.is_categorical_dtype(vector):
C:\Users\sarsasid\AppData\Local\anaconda3\envs\ai_ml_training\Lib\site-packages\seaborn\_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
if pd.api.types.is_categorical_dtype(vector):
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\sarsasid\AppData\Local\anaconda3\envs\ai_ml_training\Lib\site-packages\seaborn\categorical.py", line 2955, in countplot
plotter.plot(ax, kwargs)
File "C:\Users\sarsasid\AppData\Local\anaconda3\envs\ai_ml_training\Lib\site-packages\seaborn\categorical.py", line 1587, in plot
self.annotate_axes(ax)
File "C:\Users\sarsasid\AppData\Local\anaconda3\envs\ai_ml_training\Lib\site-packages\seaborn\categorical.py", line 767, in annotate_axes
ax.legend(loc="best", title=self.hue_title)
File "C:\Users\sarsasid\AppData\Roaming\Python\Python311\site-packages\matplotlib\axes\_axes.py", line 322, in legend
handles, labels, kwargs = mlegend._parse_legend_args([self], *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sarsasid\AppData\Roaming\Python\Python311\site-packages\matplotlib\legend.py", line 1361, in _parse_legend_args
handles, labels = _get_legend_handles_labels(axs, handlers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sarsasid\AppData\Roaming\Python\Python311\site-packages\matplotlib\legend.py", line 1291, in _get_legend_handles_labels
if label and not label.startswith('_'):
^^^^^^^^^^^^^^^^
AttributeError: 'numpy.bool_' object has no attribute 'startswith'
>>>
```
_Originally posted by @sarath-mec in https://github.com/mwaskom/seaborn/issues/3505#issuecomment-1740168646_
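The failure is matplotlib's legend handler receiving raw `numpy.bool_` labels from the boolean `alone` column. A minimal workaround sketch, assuming this seaborn/matplotlib pairing, is to cast the hue column to strings before plotting:

```python
import seaborn as sns

df = sns.load_dataset("titanic")
df["alone"] = df["alone"].astype(str)  # "True"/"False" strings pass the startswith check
sns.set(style="whitegrid", color_codes=True)
sns.countplot(x="sex", hue="alone", data=df)
```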
| closed | 2023-09-29T00:55:00Z | 2023-09-29T00:59:47Z | https://github.com/mwaskom/seaborn/issues/3506 | [] | sarath-mec | 1 |
| pywinauto/pywinauto | automation | 804 | Error importing pywinauto | If I import it like this:
`from pywinauto import application`
I get the following error:
```
PS Z:\> python.exe D:\temp\policies\pywinauto.py
Traceback (most recent call last):
File "D:\temp\policies\pywinauto.py", line 1, in <module>
from pywinauto import application
File "D:\temp\policies\pywinauto.py", line 1, in <module>
from pywinauto import application
ImportError: cannot import name 'application' from 'pywinauto' (D:\temp\policies\pywinauto.py)
```
If I import it like this instead:
`from pywinauto.application import Application`
then I get this:
```
PS Z:\> python.exe D:\temp\policies\pywinauto.py
Traceback (most recent call last):
File "D:\temp\policies\pywinauto.py", line 2, in <module>
from pywinauto.application import Application
File "D:\temp\policies\pywinauto.py", line 2, in <module>
from pywinauto.application import Application
ModuleNotFoundError: No module named 'pywinauto.application'; 'pywinauto' is not a package
```
And if I import it like this:
`import pywinauto`
then I get this:
```
PS Z:\> python.exe D:\temp\policies\pywinauto.py
Traceback (most recent call last):
File "D:\temp\policies\pywinauto.py", line 4, in <module>
import pywinauto
File "D:\temp\policies\pywinauto.py", line 10, in <module>
pywinauto.application.Application().start(f"mmc.exe {mmc_loc}")
AttributeError: module 'pywinauto' has no attribute 'application'
```
```
My code:
```
import time
import pywinauto
mmc_loc = "mmc_file_location"
managed_group_path = "managed group\path"
groups = ["test"]
pywinauto.application.Application().start(f"mmc.exe {mmc_loc}")
time.sleep(10)
app = pywinauto.application.Application().connect(path="mmc.exe")
tree = app.mmc_main_frame.tree_view
for group in groups:
folder = tree.get_item(f"{managed_group_path}{group}")
folder.click()
items = app.mmc_main_frame.list_view.items()
for i in range(0, len(items), 2):
print(items[i].text())
```
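Note the path in every traceback: each import fails inside `D:\temp\policies\pywinauto.py`, meaning the script itself is named `pywinauto.py` and shadows the installed package. A minimal sketch of the likely fix (my reading of the tracebacks, not a confirmed resolution): rename the script and import normally.

```python
# Saved under any name other than pywinauto.py, e.g. D:\temp\policies\mmc_groups.py;
# also remove any leftover pywinauto.pyc or __pycache__ entry beside it.
from pywinauto.application import Application

# "mmc_file_location" below is the placeholder from the original script.
app = Application().start("mmc.exe mmc_file_location")
```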
Contents of site-packages folder:
PS Z:\> ls C:\python3_32\Lib\site-packages\
Directory: C:\python3_32\Lib\site-packages
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 03.09.2019 10:23 adodbapi
d----- 03.09.2019 10:23 comtypes
d----- 03.09.2019 10:23 comtypes-1.1.7-py3.7.egg-info
d----- 03.09.2019 10:23 isapi
d----- 03.09.2019 10:22 pip
d----- 03.09.2019 10:22 pip-19.2.3.dist-info
d----- 03.09.2019 10:18 pkg_resources
d----- 03.09.2019 10:23 pythonwin
d----- 03.09.2019 10:23 pywin32-224.dist-info
d----- 03.09.2019 10:23 pywin32_system32
d----- 03.09.2019 10:23 pywinauto
d----- 03.09.2019 10:23 pywinauto-0.6.7.dist-info
d----- 03.09.2019 10:18 setuptools
d----- 03.09.2019 10:18 setuptools-40.8.0.dist-info
d----- 03.09.2019 10:23 win32
d----- 03.09.2019 10:23 win32com
d----- 03.09.2019 10:23 win32comext
d----- 03.09.2019 10:23 __pycache__
-a---- 03.09.2019 10:18 126 easy_install.py
-a---- 03.09.2019 10:22 138 pythoncom.py
-a---- 03.09.2019 10:22 2650084 PyWin32.chm
-a---- 03.09.2019 10:22 395 pywin32.pth
-a---- 03.09.2019 10:22 5 pywin32.version.txt
-a---- 08.07.2019 19:24 121 README.txt
Windows 10 x64
Python version 3.7.4 x86 | closed | 2019-09-03T08:12:37Z | 2019-09-03T09:04:10Z | https://github.com/pywinauto/pywinauto/issues/804 | [
"invalid",
"question"
] | xqzts | 1 |
| PokeAPI/pokeapi | api | 1,026 | Error in the data model regarding attributes of species/pokémon | The current model has some relationships between entities that held in earlier generations but have become inaccurate, at least since the introduction of regional variants (or earlier, if you count Burmy/Shellos/Deerling/Flabébé/Pumpkaboo). Specifically, some attributes that should be linked to each specific pokémon (or maybe even to each cosmetic form) are still tied to the whole pokémon species.
The first one that comes to mind is the evolution chain. For example, Kantonian Meowth, Alolan Meowth and Galarian Meowth each have a separate evolution chain: Alolan Meowth can't evolve into Kantonian Persian and vice versa. If I want to get, say, all moves each evolved pokémon can learn, including its pre-evolution moves (#897), there's no way to tell which moves come from which possible base form in a species with more than one evolution chain (other than handling these cases separately). This has been brought up before in #966, #844 and #655, for example.
`has_gender_differences` should also be an attribute of each individual pokémon: Kantonian Rattata/Raticate have gender differentiation, whereas Alolan Rattata/Raticate don't.
Same goes for `pokemon_color_id`. Alolan Sandshrew isn't yellow.
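A minimal sketch of the problem against the live REST API (`requests` assumed installed): both fields live on the species resource, so every regional variant gets the same values.

```python
import requests

species = requests.get("https://pokeapi.co/api/v2/pokemon-species/meowth").json()

# One evolution chain shared by Kantonian, Alolan and Galarian Meowth alike:
print(species["evolution_chain"]["url"])

# One species-wide flag, even though the regional forms differ:
print(species["has_gender_differences"])
```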
There might be more; these are just the most obvious ones from a glance at the pokemon_v2_pokemonspecies table. | open | 2024-01-31T13:50:51Z | 2024-02-06T15:44:35Z | https://github.com/PokeAPI/pokeapi/issues/1026 | [] | ivanlonel | 4 |
| tensorpack/tensorpack | tensorflow | 1,196 | Multiple calls to BNReLU can not exist within a single scope | ### 1. What you did:
I put two BNReLU(...) calls side by side:
```
net = BNReLU(input)
net = BNReLU(net, name='PlEaSeDoNtFaIl')
```
The same issue appears whenever I put two calls within a single scope.
### 2. What you observed:
In short, a "variable already exists" message:
```
....
File "/home/eugene/ves/tf113/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 848, in _get_single_variable
traceback.format_list(tb))))
ValueError: Variable res0.0/bn/gamma already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at:
```
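The collision is on `res0.0/bn/gamma`: `BNReLU` builds its BatchNorm under a fixed inner name, so two calls in the same variable scope try to create the same variables twice. A workaround sketch (TF1-style scoping assumed; not an official tensorpack recommendation) gives each call its own scope:

```python
import tensorflow as tf
from tensorpack.models import BNReLU

def double_bnrelu(x):
    # Distinct scopes keep the inner bn/gamma, bn/beta variables separate.
    with tf.variable_scope('bnrelu0'):
        x = BNReLU(x)
    with tf.variable_scope('bnrelu1'):
        x = BNReLU(x)
    return x
```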
### 4. Your environment:
-------------------- -------------------------------------------------------------------
Python 3.6.5 (default, Apr 1 2018, 05:46:30) [GCC 7.3.0]
Tensorpack v0.9.4-0-gf947192
TensorFlow 1.13.1/b'v1.13.1-0-g6612da8951'
TF Compiler Version 4.8.5
TF CUDA support True
TF MKL support False
Nvidia Driver /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.418.56
CUDA /usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudart.so.10.0.130
CUDNN /usr/local/cuda-9.0/lib64/libcudnn.so.7
NCCL
CUDA_VISIBLE_DEVICES None
GPU 0 GeForce GTX 1080 Ti
cv2 4.1.0
msgpack 0.6.1
python-prctl False
-------------------- -------------------------------------------------------------------
| closed | 2019-05-19T02:45:49Z | 2019-05-19T05:08:31Z | https://github.com/tensorpack/tensorpack/issues/1196 | [
"usage"
] | yselivonchyk | 3 |
ets-labs/python-dependency-injector | asyncio | 472 | sonarqube "Module 'dependency_injector.containers' has no 'DeclarativeContainer' member" |
Hi,
I'm getting a SonarQube warning on the container declaration:
`class Container(containers.DeclarativeContainer):`
```
Module 'dependency_injector.containers' has no 'DeclarativeContainer' member, but source is unavailable. Consider adding this module to extension-pkg-allow-list if you want to perform analysis based on run-time introspection of living objects.
```
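For reference, the hint in the message corresponds to this pylint option (a sketch, assuming the SonarQube scan delegates to pylint's C-extension check, which fires because `dependency-injector` ships Cython-compiled modules):

```ini
# .pylintrc (hypothetical project config)
[MASTER]
extension-pkg-allow-list=dependency_injector
```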
Would you recommend applying the hint, or do you know of a better way? | closed | 2021-07-19T07:42:44Z | 2021-07-20T09:30:19Z | https://github.com/ets-labs/python-dependency-injector/issues/472 | [
"question"
] | mxab | 2 |
| Sanster/IOPaint | pytorch | 13 | OpenCL version? | Is it going to have an OpenCL version (i.e., AMD GPU support)? | closed | 2022-02-11T14:39:11Z | 2022-03-20T14:20:02Z | https://github.com/Sanster/IOPaint/issues/13 | [] | ca5ua1 | 2 |
| dgtlmoon/changedetection.io | web-scraping | 2,639 | Subscription not checking | Hi, I have a paid subscription to changedetection.io, but my checks have not been running. Is there anything I can do to reset the server? | closed | 2024-09-17T03:56:59Z | 2024-09-25T06:34:47Z | https://github.com/dgtlmoon/changedetection.io/issues/2639 | [] | blankfruit | 3 |
NullArray/AutoSploit | automation | 601 | Unhandled Exception (6d3b540be) | Autosploit version: `3.0`
OS information: `Linux-4.19.0-kali3-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/home/SecTools/Autosploit/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/home/SecTools/Autosploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
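The root cause is visible in the last frame: `lib/jsonize.py` line 61 reads `except Except:`, and `Except` is not a defined name, so the handler itself raises `NameError`. A sketch of the presumed intent (`Exception` instead of `Except`; the helper below is illustrative, not the project's exact code):

```python
import json

def load_exploit_file(path):
    """Illustrative loader mirroring the failing structure in lib/jsonize.py."""
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:  # original code: `except Except:`, which is a NameError
        return None
```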
| closed | 2019-03-27T06:41:11Z | 2019-03-27T13:26:25Z | https://github.com/NullArray/AutoSploit/issues/601 | [] | AutosploitReporter | 0 |
| junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,149 | can the cyclegan model be applied to paired images | I have some paired images, so I want to know whether the CycleGAN model has a paired-image mode. Can it be applied to paired images? Thank you very much! | closed | 2020-09-14T11:54:32Z | 2020-09-18T07:22:27Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1149 | [] | yianzhongguo | 2 |
coqui-ai/TTS | python | 3,276 | [Bug] Multiple speaker requests? | ### Describe the bug
The [TTS API](https://tts.readthedocs.io/en/latest/models/xtts.html) states that `speaker_wav` can be a list of file paths for multiple speaker references, but the signature of `tts_to_file(...)` types `speaker_wav` as a single string.
### To Reproduce
```python
from TTS.api import TTS

# Setup sketch: the original report omitted it; XTTS v2 assumed per the linked docs.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="Some test",
    file_path="output.wav",
    speaker_wav=["training/1.wav"],  # a list, as the XTTS docs describe
    language="en",
)
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.1",
"TTS": "0.20.6",
"numpy": "1.26.2"
},
"System": {
"OS": "Darwin",
"architecture": [
"64bit",
""
],
"processor": "arm",
"python": "3.11.6",
"version": "Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:20 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6000"
}
}
```
### Additional context
_No response_ | closed | 2023-11-20T18:52:29Z | 2024-01-10T21:59:41Z | https://github.com/coqui-ai/TTS/issues/3276 | [
"bug",
"wontfix"
] | mukundt | 4 |