repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---|
mlfoundations/open_clip | computer-vision | 1,026 | @rwightman Could you clarify the FLOPs calculation for the EVA models? | Hi @rwightman,
I have a question regarding the FLOPs calculation for EVA-based models in OpenCLIP.
As far as I know, the image width (hidden size) for EVA-01 ViT-G is 1408. However, in the model_profile.csv file, the image width for all EVA models is listed as 768.
Could you confirm whether this is a typo? Also, was the FLOPs calculation performed using the correct hidden size (e.g., 1408 for ViT-G)?
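For context, a rough sanity check I use for ViT image-tower compute — a simplified sketch that ignores the patch embedding and assumes standard MHA with a 4x MLP (the constants are approximations, not the profiler's exact method):
```
def vit_macs(width, depth, tokens, mlp_ratio=4):
    # Per token, per layer: 4*w^2 MACs for the Q/K/V/O projections,
    # 2*mlp_ratio*w^2 for the MLP, and ~2*tokens*w for the attention maps.
    per_token = depth * ((4 + 2 * mlp_ratio) * width**2 + 2 * tokens * width)
    return tokens * per_token  # multiply by 2 for FLOPs

# width 768 vs 1408 alone changes the quadratic terms by ~(1408/768)^2 = 3.4x
```
Even at this level of approximation, a 768-vs-1408 mix-up should be clearly visible in the reported numbers, which is why the csv value looks suspicious to me.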
Thanks for your time, and I appreciate your help! | closed | 2025-02-07T03:21:10Z | 2025-02-07T08:15:58Z | https://github.com/mlfoundations/open_clip/issues/1026 | [] | ghost | 3 |
davidteather/TikTok-Api | api | 235 | [BUG] - RuntimeError: This event loop is already running | This is a known bug and I am working to resolve it.
The package works; it just spits out a ton of unwanted text.
Problematic code in browser.py:
```
fut.result()  # runs the async part of the functions
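# Hedged note: "This event loop is already running" typically means this
# synchronous wait happens inside an already-running loop (e.g. Jupyter).
# One workaround I have seen suggested, assuming the nested loop is the cause:
#   import nest_asyncio
#   nest_asyncio.apply()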
``` | closed | 2020-08-24T21:25:21Z | 2020-11-11T01:00:18Z | https://github.com/davidteather/TikTok-Api/issues/235 | [
"bug",
"Hacktoberfest"
] | davidteather | 10 |
piccolo-orm/piccolo | fastapi | 752 | Error in pip install commands on Windows | Errors:
```
C:\Users\Max>pip install 'piccolo[postgres]'
ERROR: Invalid requirement: "'piccolo[postgres]'"
C:\Users\Max>
C:\Users\Max>pip install 'piccolo[all]'
ERROR: Invalid requirement: "'piccolo[all]'"
```
But I installed the package in the following way:
```
C:\Users\Max>pip install piccolo
Collecting piccolo
Downloading piccolo-0.105.0-py3-none-any.whl (336 kB)
---------------------------------------- 336.9/336.9 kB 135.8 kB/s eta 0:00:00
Requirement already satisfied: pydantic[email]>=1.6 in c:\users\max\appdata\local\programs\python\python310\lib\site-packages (from piccolo) (1.10.4)
Collecting targ>=0.3.7
Downloading targ-0.3.7-py3-none-any.whl (7.2 kB)
Requirement already satisfied: typing-extensions>=4.3.0 in c:\users\max\appdata\local\programs\python\python310\lib\site-packages (from piccolo) (4.4.0)
Requirement already satisfied: inflection>=0.5.1 in c:\users\max\appdata\local\programs\python\python310\lib\site-packages (from piccolo) (0.5.1)
Collecting black
Downloading black-22.12.0-cp310-cp310-win_amd64.whl (1.2 MB)
---------------------------------------- 1.2/1.2 MB 447.4 kB/s eta 0:00:00
Requirement already satisfied: Jinja2>=2.11.0 in c:\users\max\appdata\local\programs\python\python310\lib\site-packages (from piccolo) (3.1.2)
Requirement already satisfied: colorama>=0.4.0 in c:\users\max\appdata\local\programs\python\python310\lib\site-packages (from piccolo) (0.4.5)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\max\appdata\local\programs\python\python310\lib\site-packages (from Jinja2>=2.11.0->piccolo) (2.1.1)
Collecting email-validator>=1.0.3
Downloading email_validator-1.3.1-py2.py3-none-any.whl (22 kB)
Collecting docstring-parser==0.12
Downloading docstring_parser-0.12.tar.gz (23 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting mypy-extensions>=0.4.3
Downloading mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB)
Collecting pathspec>=0.9.0
Downloading pathspec-0.11.0-py3-none-any.whl (29 kB)
Requirement already satisfied: tomli>=1.1.0 in c:\users\max\appdata\local\programs\python\python310\lib\site-packages (from black->piccolo) (2.0.1)
Requirement already satisfied: platformdirs>=2 in c:\users\max\appdata\local\programs\python\python310\lib\site-packages (from black->piccolo) (2.5.3)
Requirement already satisfied: click>=8.0.0 in c:\users\max\appdata\local\programs\python\python310\lib\site-packages (from black->piccolo) (8.1.3)
Collecting dnspython>=1.15.0
Downloading dnspython-2.3.0-py3-none-any.whl (283 kB)
---------------------------------------- 283.7/283.7 kB 499.9 kB/s eta 0:00:00
Requirement already satisfied: idna>=2.0.0 in c:\users\max\appdata\local\programs\python\python310\lib\site-packages (from email-validator>=1.0.3->pydantic[email]>=1.6->piccolo) (2.10)
Building wheels for collected packages: docstring-parser
Building wheel for docstring-parser (pyproject.toml) ... done
Created wheel for docstring-parser: filename=docstring_parser-0.12-py3-none-any.whl size=31770 sha256=60cee92f6e6510b451033afad491d70ed5d89bc7aa362ccc106c467bfd357c1e
Stored in directory: c:\users\max\appdata\local\pip\cache\wheels\96\a5\94\45395285735d7713cd816d5051a797e3c13231d0aa833c8d64
Successfully built docstring-parser
Installing collected packages: mypy-extensions, pathspec, docstring-parser, dnspython, targ, email-validator, black, piccolo
Successfully installed black-22.12.0 dnspython-2.3.0 docstring-parser-0.12 email-validator-1.3.1 mypy-extensions-0.4.3 pathspec-0.11.0 piccolo-0.105.0 targ-0.3.7
```
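For what it's worth, this looks like a shell quoting difference rather than a packaging problem: Windows `cmd.exe` passes single quotes through literally, so pip sees `'piccolo[postgres]'` (quotes included) as the requirement. Dropping the quotes, or using double quotes, should work:
```
C:\Users\Max>pip install piccolo[postgres]
C:\Users\Max>pip install "piccolo[all]"
```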
| closed | 2023-01-29T19:00:21Z | 2023-01-29T23:57:27Z | https://github.com/piccolo-orm/piccolo/issues/752 | [] | BaseMax | 3 |
localstack/localstack | python | 11,698 | feature request: IoT core rules with "http" action | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
Support IoT core rules with "http" action.
https://docs.aws.amazon.com/iot/latest/developerguide/https-rule-action.html
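For reference, the rule shape this would need to cover looks roughly like this (a sketch of the `TopicRulePayload`; the topic and URL are placeholders):
```
{
  "sql": "SELECT * FROM 'some/topic'",
  "actions": [
    { "http": { "url": "https://example.com/iot-endpoint" } }
  ]
}
```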

### 🧑💻 Implementation
_No response_
### Anything else?
_No response_ | closed | 2024-10-16T08:03:17Z | 2025-01-06T11:41:25Z | https://github.com/localstack/localstack/issues/11698 | [
"type: feature",
"aws:iot",
"status: backlog"
] | MartynasAndr | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,239 | Error OutOfMemoryError | . | closed | 2023-08-07T18:10:20Z | 2023-08-07T18:14:30Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1239 | [] | Soosaaas | 0 |
robusta-dev/robusta | automation | 1,452 | Robusta fetching data but not giving recommendations | I have configured my own Prometheus with Robusta and it is fetching data, but the Efficiency tab shows no data for any deployment's recommendations.
<img width="1723" alt="Screenshot 2024-06-11 at 12 51 29 PM" src="https://github.com/robusta-dev/robusta/assets/169645780/2af02dcd-0b5f-4cd3-a328-133ef5af15a8">
| open | 2024-06-11T07:21:45Z | 2024-06-14T07:16:47Z | https://github.com/robusta-dev/robusta/issues/1452 | [] | manishbitscrunch | 2 |
MaartenGr/BERTopic | nlp | 1,551 | BERTopic (Can't retrieve unregistered extension attribute 'trf_data'. Did you forget to call the set_extension method?) | Good morning, this is my code, obtained from the following page: https://spacy.io/universe/project/bertopic. After running it I get the following error: `Can't retrieve unregistered extension attribute 'trf_data'. Did you forget to call the set_extension method?`
How can I solve this error?
```
# Install the required libraries
!pip install spacy
!pip install bertopic
!pip install scikit-learn

# Download the English spaCy model (medium)
!python -m spacy download en_core_web_md

# Load the libraries and the model
import spacy
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

# Load the documents from the 20 Newsgroups dataset
docs = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))['data']

# Load the English spaCy model (medium), excluding unnecessary components
nlp = spacy.load('en_core_web_md', exclude=['tagger', 'parser', 'ner', 'attribute_ruler', 'lemmatizer'])

# Create the BERTopic model with spaCy
topic_model = BERTopic(embedding_model=nlp)
topics, probs = topic_model.fit_transform(docs)
```
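One workaround I am considering, in case the spaCy embedding backend is what triggers the `trf_data` lookup: compute the document vectors myself with the md model (which ships word vectors) and pass them to BERTopic via the `embeddings` argument of `fit_transform`. A sketch of what I mean:
```
import numpy as np

# Precompute document vectors and bypass BERTopic's spaCy backend
embeddings = np.array([doc.vector for doc in nlp.pipe(docs)])

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs, embeddings=embeddings)
```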
I have tried changing the spaCy version to one between 3.3.0 and 3.4.0, but I still get the same error with all of the spaCy models (sm, md, lg, trf). | open | 2023-09-29T09:35:49Z | 2023-10-03T11:50:34Z | https://github.com/MaartenGr/BERTopic/issues/1551 | [] | FranValero97 | 1
deepspeedai/DeepSpeed | pytorch | 6,654 | Command '['ninja', '-v']' returned non-zero exit status 1 - Unsupported NVHPC compiler found | I encountered multiple issues while trying to perform full fine-tuning of the LLaMA 3 8B model with DeepSpeed on two A100-80GB GPUs.
As a result, I decided to follow the DeepSpeed tutorial on Huggingface.
Below is the command I used, which closely follows the example in the tutorial:
```
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed ds_config_zero3.json \
--model_name_or_path t5-small --per_device_train_batch_size 1 \
--output_dir output_dir --overwrite_output_dir --fp16 \
--do_train --max_train_samples 500 --num_train_epochs 1 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro
```
And this is ds_config_zero3.json
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "none",
"pin_memory": true
},
"offload_param": {
"device": "none",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
Then I got this error:
```
[2024-10-23 11:22:51,662] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-23 11:22:53,732] [WARNING] [runner.py:202:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
Detected CUDA_VISIBLE_DEVICES=0,1: setting --include=localhost:0,1
[2024-10-23 11:22:53,732] [INFO] [runner.py:568:main] cmd = /home/qmin2/anaconda3/envs/biicae/bin/python3.9 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None run_translation.py --deepspeed ds_config.json --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro
[2024-10-23 11:22:56,867] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-23 11:22:58,128] [INFO] [launch.py:145:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2024-10-23 11:22:58,128] [INFO] [launch.py:151:main] nnodes=1, num_local_procs=2, node_rank=0
[2024-10-23 11:22:58,128] [INFO] [launch.py:162:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2024-10-23 11:22:58,128] [INFO] [launch.py:163:main] dist_world_size=2
[2024-10-23 11:22:58,128] [INFO] [launch.py:165:main] Setting CUDA_VISIBLE_DEVICES=0,1
[2024-10-23 11:22:58,138] [INFO] [launch.py:253:main] process 3332850 spawned with command: ['/home/qmin2/anaconda3/envs/biicae/bin/python3.9', '-u', 'run_translation.py', '--local_rank=0', '--deepspeed', 'ds_config.json', '--model_name_or_path', 't5-small', '--per_device_train_batch_size', '1', '--output_dir', 'output_dir', '--overwrite_output_dir', '--fp16', '--do_train', '--max_train_samples', '500', '--num_train_epochs', '1', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_lang', 'en', '--target_lang', 'ro']
[2024-10-23 11:22:58,154] [INFO] [launch.py:253:main] process 3332851 spawned with command: ['/home/qmin2/anaconda3/envs/biicae/bin/python3.9', '-u', 'run_translation.py', '--local_rank=1', '--deepspeed', 'ds_config.json', '--model_name_or_path', 't5-small', '--per_device_train_batch_size', '1', '--output_dir', 'output_dir', '--overwrite_output_dir', '--fp16', '--do_train', '--max_train_samples', '500', '--num_train_epochs', '1', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_lang', 'en', '--target_lang', 'ro']
[2024-10-23 11:23:03,294] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-23 11:23:03,580] [INFO] [comm.py:637:init_distributed] cdb=None
[2024-10-23 11:23:03,580] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2024-10-23 11:23:03,619] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-23 11:23:03,874] [INFO] [comm.py:637:init_distributed] cdb=None
10/23/2024 11:23:05 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: True
10/23/2024 11:23:05 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
_n_gpu=1,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
batch_eval_metrics=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=ds_config.json,
disable_tqdm=False,
dispatch_batches=None,
do_eval=False,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_on_start=False,
eval_steps=None,
eval_strategy=no,
eval_use_gather_object=False,
evaluation_strategy=None,
fp16=True,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
generation_config=None,
generation_max_length=None,
generation_num_beams=None,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_for_metrics=[],
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=output_dir/runs/Oct23_11-23-02_n57.gasi-cluster,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=steps,
lr_scheduler_kwargs={},
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_train_epochs=1.0,
optim=adamw_torch,
optim_args=None,
optim_target_modules=None,
output_dir=output_dir,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=1,
predict_with_generate=False,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['wandb'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=output_dir,
save_on_each_node=False,
save_only_model=False,
save_safetensors=True,
save_steps=500,
save_strategy=steps,
save_total_limit=None,
seed=42,
skip_memory_metrics=True,
sortish_sampler=False,
split_batches=None,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torch_empty_cache_steps=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_liger_kernel=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
)
10/23/2024 11:23:06 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, 16-bits training: True
Overwrite dataset info from restored data version if exists.
10/23/2024 11:23:16 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
Loading Dataset info from /home/qmin2/.cache/huggingface/datasets/wmt16/ro-en/0.0.0/41d8a4013aa1489f28fea60ec0932af246086482
10/23/2024 11:23:16 - INFO - datasets.info - Loading Dataset info from /home/qmin2/.cache/huggingface/datasets/wmt16/ro-en/0.0.0/41d8a4013aa1489f28fea60ec0932af246086482
Found cached dataset wmt16 (/home/qmin2/.cache/huggingface/datasets/wmt16/ro-en/0.0.0/41d8a4013aa1489f28fea60ec0932af246086482)
10/23/2024 11:23:16 - INFO - datasets.builder - Found cached dataset wmt16 (/home/qmin2/.cache/huggingface/datasets/wmt16/ro-en/0.0.0/41d8a4013aa1489f28fea60ec0932af246086482)
Loading Dataset info from /home/qmin2/.cache/huggingface/datasets/wmt16/ro-en/0.0.0/41d8a4013aa1489f28fea60ec0932af246086482
10/23/2024 11:23:16 - INFO - datasets.info - Loading Dataset info from /home/qmin2/.cache/huggingface/datasets/wmt16/ro-en/0.0.0/41d8a4013aa1489f28fea60ec0932af246086482
[INFO|configuration_utils.py:679] 2024-10-23 11:23:16,977 >> loading configuration file config.json from cache at /home/qmin2/.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json
[INFO|configuration_utils.py:746] 2024-10-23 11:23:16,984 >> Model config T5Config {
"_name_or_path": "t5-small",
"architectures": [
"T5ForConditionalGeneration"
],
"classifier_dropout": 0.0,
"d_ff": 2048,
"d_kv": 64,
"d_model": 512,
"decoder_start_token_id": 0,
"dense_act_fn": "relu",
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "relu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"is_gated_act": false,
"layer_norm_epsilon": 1e-06,
"model_type": "t5",
"n_positions": 512,
"num_decoder_layers": 6,
"num_heads": 8,
"num_layers": 6,
"output_past": true,
"pad_token_id": 0,
"relative_attention_max_distance": 128,
"relative_attention_num_buckets": 32,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
},
"transformers_version": "4.46.0.dev0",
"use_cache": true,
"vocab_size": 32128
}
[INFO|tokenization_utils_base.py:2211] 2024-10-23 11:23:17,206 >> loading file spiece.model from cache at /home/qmin2/.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/spiece.model
[INFO|tokenization_utils_base.py:2211] 2024-10-23 11:23:17,206 >> loading file tokenizer.json from cache at /home/qmin2/.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer.json
[INFO|tokenization_utils_base.py:2211] 2024-10-23 11:23:17,206 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2211] 2024-10-23 11:23:17,206 >> loading file special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:2211] 2024-10-23 11:23:17,206 >> loading file tokenizer_config.json from cache at /home/qmin2/.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json
[INFO|modeling_utils.py:3936] 2024-10-23 11:23:17,446 >> loading weights file model.safetensors from cache at /home/qmin2/.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/model.safetensors
[INFO|modeling_utils.py:4079] 2024-10-23 11:23:17,453 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model
[INFO|configuration_utils.py:1099] 2024-10-23 11:23:17,458 >> Generate config GenerationConfig {
"decoder_start_token_id": 0,
"eos_token_id": 1,
"pad_token_id": 0
}
[2024-10-23 11:23:18,839] [INFO] [partition_parameters.py:343:__exit__] finished initializing model - num_params = 132, num_elems = 0.08B
[INFO|modeling_utils.py:4799] 2024-10-23 11:23:19,120 >> All model checkpoint weights were used when initializing T5ForConditionalGeneration.
[INFO|modeling_utils.py:4807] 2024-10-23 11:23:19,120 >> All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-small.
If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
[INFO|configuration_utils.py:1054] 2024-10-23 11:23:19,341 >> loading configuration file generation_config.json from cache at /home/qmin2/.cache/huggingface/hub/models--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json
[INFO|configuration_utils.py:1099] 2024-10-23 11:23:19,341 >> Generate config GenerationConfig {
"decoder_start_token_id": 0,
"eos_token_id": 1,
"pad_token_id": 0
}
[INFO|modeling_utils.py:2230] 2024-10-23 11:23:19,361 >> You are resizing the embedding layer without providing a `pad_to_multiple_of` parameter. This means that the new embedding dimension will be 32100. This might induce some performance reduction as *Tensor Cores* will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc
Loading cached processed dataset at /home/qmin2/.cache/huggingface/datasets/wmt16/ro-en/0.0.0/41d8a4013aa1489f28fea60ec0932af246086482/cache-8862cd207eac132f.arrow
10/23/2024 11:23:19 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/qmin2/.cache/huggingface/datasets/wmt16/ro-en/0.0.0/41d8a4013aa1489f28fea60ec0932af246086482/cache-8862cd207eac132f.arrow
10/23/2024 11:23:20 - WARNING - accelerate.utils.other - Detected kernel version 4.18.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
[INFO|trainer.py:688] 2024-10-23 11:23:21,317 >> Using auto half precision backend
[2024-10-23 11:23:21,487] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.13.2, git-hash=unknown, git-branch=unknown
[2024-10-23 11:23:21,493] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
Using /home/qmin2/.cache/torch_extensions/py39_cu121 as PyTorch extensions root...
/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py:362: UserWarning:
!! WARNING !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (/opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using /opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc++, and then you can also use
/opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc++ to compile your extension.
See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! WARNING !!
warnings.warn(WRONG_COMPILER_WARNING.format(
Detected CUDA files, patching ldflags
Emitting ninja build file /home/qmin2/.cache/torch_extensions/py39_cu121/fused_adam/build.ninja...
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] /opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/TH -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/THC -isystem /opt/ohpc/pub/apps/cuda/11.8/include -isystem /home/qmin2/anaconda3/envs/biicae/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -DBF16_AVAILABLE -c /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o
FAILED: fused_adam_frontend.o
/opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/TH -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/THC -isystem /opt/ohpc/pub/apps/cuda/11.8/include -isystem /home/qmin2/anaconda3/envs/biicae/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -DBF16_AVAILABLE -c /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o
nvc++-Error-Unknown switch: -Wno-reorder
[2/3] /opt/ohpc/pub/apps/cuda/11.8/bin/nvcc -ccbin /opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/TH -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/THC -isystem /opt/ohpc/pub/apps/cuda/11.8/include -isystem /home/qmin2/anaconda3/envs/biicae/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -std=c++17 -c /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o
FAILED: multi_tensor_adam.cuda.o
/opt/ohpc/pub/apps/cuda/11.8/bin/nvcc -ccbin /opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/TH -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/THC -isystem /opt/ohpc/pub/apps/cuda/11.8/include -isystem /home/qmin2/anaconda3/envs/biicae/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -std=c++17 -c /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o
nvcc fatal : Unsupported NVHPC compiler found. nvc++ is the only NVHPC compiler that is supported.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2100, in _run_ninja_build
subprocess.run(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/qmin2/3rd_semester_research/qmin2_infini_attention/run_translation.py", line 699, in <module>
main()
File "/home/qmin2/3rd_semester_research/qmin2_infini_attention/run_translation.py", line 614, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/transformers/trainer.py", line 2112, in train
return inner_training_loop(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/transformers/trainer.py", line 2267, in _inner_training_loop
model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/accelerate/accelerator.py", line 1219, in prepare
result = self._prepare_deepspeed(*args)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/accelerate/accelerator.py", line 1604, in _prepare_deepspeed
engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/__init__.py", line 176, in initialize
engine = DeepSpeedEngine(args=args,
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 307, in __init__
self._configure_optimizer(optimizer, model_parameters)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1231, in _configure_optimizer
basic_optimizer = self._configure_basic_optimizer(model_parameters)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1308, in _configure_basic_optimizer
optimizer = FusedAdam(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/adam/fused_adam.py", line 94, in __init__
fused_adam_cuda = FusedAdamBuilder().load()
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 478, in load
return self.jit_load(verbose)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 522, in jit_load
op_module = load(name=self.name,
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1308, in load
return _jit_compile(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1710, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1823, in _write_ninja_file_and_build_library
_run_ninja_build(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2116, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'fused_adam'
Using /home/qmin2/.cache/torch_extensions/py39_cu121 as PyTorch extensions root...
/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py:362: UserWarning:
!! WARNING !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (/opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using /opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc++, and then you can also use
/opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc++ to compile your extension.
See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! WARNING !!
warnings.warn(WRONG_COMPILER_WARNING.format(
Detected CUDA files, patching ldflags
Emitting ninja build file /home/qmin2/.cache/torch_extensions/py39_cu121/fused_adam/build.ninja...
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] /opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/TH -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/THC -isystem /opt/ohpc/pub/apps/cuda/11.8/include -isystem /home/qmin2/anaconda3/envs/biicae/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -DBF16_AVAILABLE -c /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o
FAILED: fused_adam_frontend.o
/opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/TH -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/THC -isystem /opt/ohpc/pub/apps/cuda/11.8/include -isystem /home/qmin2/anaconda3/envs/biicae/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -DBF16_AVAILABLE -c /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o
nvc++-Error-Unknown switch: -Wno-reorder
[2/3] /opt/ohpc/pub/apps/cuda/11.8/bin/nvcc -ccbin /opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/TH -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/THC -isystem /opt/ohpc/pub/apps/cuda/11.8/include -isystem /home/qmin2/anaconda3/envs/biicae/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -std=c++17 -c /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o
FAILED: multi_tensor_adam.cuda.o
/opt/ohpc/pub/apps/cuda/11.8/bin/nvcc -ccbin /opt/ohpc/pub/apps/nvidia/hpc_sdk/Linux_x86_64/22.2/compilers/bin/nvc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/TH -isystem /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/include/THC -isystem /opt/ohpc/pub/apps/cuda/11.8/include -isystem /home/qmin2/anaconda3/envs/biicae/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -std=c++17 -c /home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o
nvcc fatal : Unsupported NVHPC compiler found. nvc++ is the only NVHPC compiler that is supported.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2100, in _run_ninja_build
subprocess.run(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/qmin2/3rd_semester_research/qmin2_infini_attention/run_translation.py", line 699, in <module>
main()
File "/home/qmin2/3rd_semester_research/qmin2_infini_attention/run_translation.py", line 614, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/transformers/trainer.py", line 2112, in train
return inner_training_loop(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/transformers/trainer.py", line 2267, in _inner_training_loop
model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/accelerate/accelerator.py", line 1219, in prepare
result = self._prepare_deepspeed(*args)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/accelerate/accelerator.py", line 1604, in _prepare_deepspeed
engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/__init__.py", line 176, in initialize
engine = DeepSpeedEngine(args=args,
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 307, in __init__
self._configure_optimizer(optimizer, model_parameters)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1231, in _configure_optimizer
basic_optimizer = self._configure_basic_optimizer(model_parameters)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1308, in _configure_basic_optimizer
optimizer = FusedAdam(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/adam/fused_adam.py", line 94, in __init__
fused_adam_cuda = FusedAdamBuilder().load()
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 478, in load
return self.jit_load(verbose)
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 522, in jit_load
op_module = load(name=self.name,
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1308, in load
return _jit_compile(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1710, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1823, in _write_ninja_file_and_build_library
_run_ninja_build(
File "/home/qmin2/anaconda3/envs/biicae/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2116, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'fused_adam'
[2024-10-23 11:23:24,181] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 3332850
[2024-10-23 11:23:24,181] [INFO] [launch.py:316:sigkill_handler] Killing subprocess 3332851
[2024-10-23 11:23:24,525] [ERROR] [launch.py:322:sigkill_handler] ['/home/qmin2/anaconda3/envs/biicae/bin/python3.9', '-u', 'run_translation.py', '--local_rank=1', '--deepspeed', 'ds_config.json', '--model_name_or_path', 't5-small', '--per_device_train_batch_size', '1', '--output_dir', 'output_dir', '--overwrite_output_dir', '--fp16', '--do_train', '--max_train_samples', '500', '--num_train_epochs', '1', '--dataset_name', 'wmt16', '--dataset_config', 'ro-en', '--source_lang', 'en', '--target_lang', 'ro'] exits with return code = 1
```
For your information:
I'm using a Slurm cluster in interactive mode.
GPU: A100-80GB x 2
gcc --version : 12.2.0
nvcc --version : 11.8
nvc++ --version: nvc++ 22.2-0 64-bit target on x86-64 Linux -tp zen3
nvidia-smi shows
NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1
This is the output of `pip list`:
```
pip list
Package Version
------------------------ ------------
accelerate 1.0.1
aiohappyeyeballs 2.4.3
aiohttp 3.10.10
aiosignal 1.3.1
annotated-types 0.7.0
async-timeout 4.0.3
attrs 24.2.0
certifi 2024.8.30
charset-normalizer 3.4.0
colorama 0.4.6
datasets 3.0.2
deepspeed 0.15.3
dill 0.3.8
evaluate 0.4.3
filelock 3.13.1
frozenlist 1.4.1
fsspec 2024.2.0
hjson 3.1.0
huggingface-hub 0.26.1
idna 3.10
Jinja2 3.1.3
lxml 5.3.0
MarkupSafe 2.1.5
mpmath 1.3.0
msgpack 1.1.0
multidict 6.1.0
multiprocess 0.70.16
networkx 3.2.1
ninja 1.11.1.1
numpy 1.26.3
nvidia-cublas-cu11 11.11.3.6
nvidia-cuda-cupti-cu11 11.8.87
nvidia-cuda-nvrtc-cu11 11.8.89
nvidia-cuda-runtime-cu11 11.8.89
nvidia-cudnn-cu11 9.1.0.70
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.3.0.86
nvidia-cusolver-cu11 11.4.1.48
nvidia-cusparse-cu11 11.7.5.86
nvidia-nccl-cu11 2.21.5
nvidia-nvtx-cu11 11.8.86
packaging 24.1
pandas 2.2.3
pillow 10.2.0
pip 24.2
portalocker 2.10.1
propcache 0.2.0
psutil 6.1.0
py-cpuinfo 9.0.0
pyarrow 17.0.0
pydantic 2.9.2
pydantic_core 2.23.4
pynvml 11.5.3
python-dateutil 2.9.0.post0
pytz 2024.2
PyYAML 6.0.2
regex 2024.9.11
requests 2.32.3
sacrebleu 2.4.3
safetensors 0.4.5
setuptools 75.1.0
six 1.16.0
sympy 1.13.1
tabulate 0.9.0
tokenizers 0.20.1
torch 2.5.0+cu118
torchaudio 2.5.0+cu118
torchvision 0.20.0+cu118
tqdm 4.66.5
transformers 4.46.0.dev0
triton 3.1.0
typing_extensions 4.9.0
tzdata 2024.2
urllib3 2.2.3
wheel 0.44.0
xxhash 3.5.0
yarl 1.16.0
```
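From the log, the root cause appears to be that the JIT extension build picks up the NVHPC compiler (`nvc++`) instead of g++ — `nvc++` rejects `-Wno-reorder`, and nvcc refuses `nvc` as its host compiler. One thing I have not yet tried, assuming torch's `cpp_extension` honors the standard `CC`/`CXX` variables, is forcing the build to use GCC and clearing the failed build cache first:
```
export CC=gcc
export CXX=g++
rm -rf ~/.cache/torch_extensions
deepspeed run_translation.py --deepspeed ds_config_zero3.json ...
```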
I spent lots of time handling this issue...
Is there any solution for this? | closed | 2024-10-23T02:33:37Z | 2024-10-30T19:07:17Z | https://github.com/deepspeedai/DeepSpeed/issues/6654 | [
"build"
] | qmin2 | 2 |
horovod/horovod | machine-learning | 3,882 | When running trainer script of transformers with some changes, throwing error | ```
4/07/2023 15:13:14 - INFO - __main__ - Grouping texts into single entries
[INFO|trainer.py:568] 2023-04-07 15:13:16,718 >> Using cuda_amp half precision backend
/home/user/.local/lib/python3.8/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
Traceback (most recent call last):
File "run_clm.py", line 555, in <module>
main()
File "run_clm.py", line 518, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/user/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1572, in train
return inner_training_loop(
File "/home/user/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1650, in _inner_training_loop
self.create_optimizer_and_scheduler(num_training_steps=max_steps)
File "/home/user/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1021, in create_optimizer_and_scheduler
self.create_optimizer()
File "/home/user/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1085, in create_optimizer
hvd.broadcast_parameters(self.model.state_dict(), root_rank=0) #hvd_18
File "/usr/local/lib/python3.8/site-packages/horovod/torch/functions.py", line 54, in broadcast_parameters
handle = broadcast_async_(p, root_rank, name)
File "/usr/local/lib/python3.8/site-packages/horovod/torch/mpi_ops.py", line 880, in broadcast_async_
return _broadcast_async(tensor, tensor, root_rank, name, process_set)
File "/usr/local/lib/python3.8/site-packages/horovod/torch/mpi_ops.py", line 777, in _broadcast_async
function = _check_function(_broadcast_function_factory, tensor)
File "/usr/local/lib/python3.8/site-packages/horovod/torch/mpi_ops.py", line 100, in _check_function
raise ValueError('Tensor type %s is not supported.' % tensor.type())
```
 | closed | 2023-04-07T06:35:46Z | 2023-04-21T14:18:23Z | https://github.com/horovod/horovod/issues/3882 | [] | 22Mukesh22 | 0
sktime/sktime | data-science | 7,496 | [BUG] Loaded model from a saved sktime model failing to forecast on new data | I recently saved a deep neural network model (LSTFDLinear) after fitting it on a large dataset. After saving it, I loaded it and wanted to update it so it would make new forecasts based on the latest data, but it keeps giving the results of the last fit and does not change no matter what I do. Any help on how I can fix that would be appreciated. Thank you | open | 2024-12-08T16:12:30Z | 2024-12-10T06:53:03Z | https://github.com/sktime/sktime/issues/7496 | [
"bug"
] | jsteve677 | 2 |
jazzband/django-oauth-toolkit | django | 745 | Apps aren't loaded yet | Hello,
I extended the classes "Server", "AbstractAccessToken" and "AbstractRefreshToken" in my models.py file. Then tells `OAUTH2_PROVIDER` to use it.
But I always get the exception :
```
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
```
Apparently, this error comes from the import:
```
from oauth2_provider.models import AbstractAccessToken, AbstractRefreshToken
```
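For reference, the setup I am trying to reach — extending the abstract models and pointing the swappable settings at them as dotted strings (the app name `myapp` here is hypothetical):
```
# myapp/models.py -- "myapp" is a hypothetical app name
from oauth2_provider.models import AbstractAccessToken, AbstractRefreshToken

class AccessToken(AbstractAccessToken):
    pass

class RefreshToken(AbstractRefreshToken):
    pass

# settings.py -- point the swappable models at the custom classes as strings
OAUTH2_PROVIDER_ACCESS_TOKEN_MODEL = "myapp.AccessToken"
OAUTH2_PROVIDER_REFRESH_TOKEN_MODEL = "myapp.RefreshToken"
```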
I'm lost.
Thanks for your help. | closed | 2019-10-10T10:13:19Z | 2021-10-23T01:19:55Z | https://github.com/jazzband/django-oauth-toolkit/issues/745 | [] | Mysteriosis | 0 |
plotly/plotly.py | plotly | 5,087 | add swarm plot | I can't find a swarm plot in plotly, but this approach can be used to plot one:
```
import plotly.express as px
import numpy as np
import pandas as pd

np.random.seed(1)
y0 = np.random.randn(50) - 1
mm = pd.DataFrame({"org": y0})

# bin each value to one decimal place; points in the same bin get spread out
mm["cut"] = [np.floor(k * 10) / 10 for k in y0]

cc = pd.DataFrame(columns=["x", "y"])
width = 0.08
for x, group in mm.groupby("cut"):
    ls = len(group)
    org = -int(ls / 2)
    if ls % 2 == 0:
        org = org + 0.5  # center an even number of points around zero
    for i in range(ls):
        cc.loc[len(cc)] = [x, (i + org) * width]

fig = px.scatter(cc, x='x', y='y', range_y=[-2, 2])
# fig = px.strip(cc, x='x', y='y', range_y=[-2, 2])
fig.show()
```
This is an example of how to prepare and transform the data for a swarm plot.
 | open | 2025-03-14T07:50:26Z | 2025-03-17T18:25:29Z | https://github.com/plotly/plotly.py/issues/5087 | [
"feature",
"P3"
] | suterberg | 1 |
flairNLP/fundus | web-scraping | 263 | Unify `Pipeline` and `Crawler`. | With some upcoming features, especially an async API for the crawler as suggested by @dobbersc and formulated in #260, the problem of documentation duplication became very obvious.
To get rid of this problem, @dobbersc and I came up with the idea of building an inheritance schema like the following:
```python
class BaseCrawler:  # <- Former Pipeline. May as well be called Pipeline again
    # this class holds the entire logic
    def __init__(self, *scrapers: Scraper):
        self.scrapers = scrapers

    def crawl(self, ...) -> Iterator[Article]:
        ...
        for article in self.scrapers:
            ...


class Crawler(BaseCrawler):
    # this class works as an alternative constructor to utilize PublisherCollection
    # for usability reasons.
    def __init__(
        self,
        *publishers: Union[PublisherEnum, Type[PublisherEnum]],
        restrict_sources_to: Optional[List[Type[URLSource]]] = None,
    ) -> None:
        # create scrapers
        ...
        self.scraper = ...


crawler = Crawler(...)
for article in crawler.crawl():
    ...
```
mwaskom/seaborn | data-science | 3,114 | Feature request: Parallel coordinates plots | When visualizing high-dimensional datasets, parallel coordinates plots are sometimes very useful. I would love for Seaborn to have a built-in function to do this!
**Resources**
- Wikipedia: [Parallel coordinates](https://en.wikipedia.org/wiki/Parallel_coordinates)
- Python Graph Gallery: [Parallel coordinate plot](https://www.python-graph-gallery.com/parallel-plot/)
- plotly: [Parallel Coordinates Plot in Python](https://plotly.com/python/parallel-coordinates-plot/)
- Pandas docs: [pandas.plotting.parallel_coordinates](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.plotting.parallel_coordinates.html) | closed | 2022-10-27T13:14:14Z | 2022-11-04T10:37:24Z | https://github.com/mwaskom/seaborn/issues/3114 | [
"wishlist"
] | EwoutH | 11 |
aio-libs/aiopg | sqlalchemy | 65 | Closing connection object throws exception | Hi,
I am using aiopg with a Pool object that is shared among coroutines. Every once in a while, seemingly at random, I see the following error:
psycopg2.ProgrammingError: close cannot be used while an asynchronous query is underway
I can reproduce the issue with the following:
```
pool = yield from aiopg.create_pool(dsn)
with (yield from pool.cursor(timeout=1.0)) as cur:
    yield from cur.execute("SELECT pg_sleep(2)")
```
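What I would expect from the 1.0s cursor timeout is an asyncio-level error that callers can handle, something like this sketch:
```
import asyncio

try:
    yield from cur.execute("SELECT pg_sleep(2)")
except asyncio.TimeoutError:
    pass  # the timeout cancelled the query
```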
It seems valid to raise TimeoutError rather than ProgrammingError, I believe. What do you think?
Thanks,
| closed | 2015-07-10T09:46:19Z | 2016-07-16T16:22:44Z | https://github.com/aio-libs/aiopg/issues/65 | [] | sumerc | 8 |
kensho-technologies/graphql-compiler | graphql | 829 | Add a timezone-aware datetime GraphQL scalar type: DateTimeTz | Since #827 merged, we don't have a way to represent timezone-aware datetimes. For this, I propose adding a new scalar type: `scalar DateTimeTz`, Python name `GraphQLDateTimeTz`.
A few guidelines I'd like to propose:
- Outputting a field of type `DateTimeTz` is guaranteed to produce a timezone-aware result.
- Runtime arguments of type `DateTimeTz` are required to contain timezone information.
- Its serialization as a string must always contain either an explicit `+HH:mm` suffix or the simple suffix `Z` which is equivalent to `+00:00`. Other suffixes, such as `+HH`, `+H`, `America/New_York` etc. are explicitly not permitted and unsupported.
- When auto-generating schemas, if a datetime type is explicitly known to carry timezone information (for example, `TIMESTAMPTZ` in many SQL flavors), it must be represented as `DateTimeTz` in the auto-generated schema.
- When auto-generating schemas, if it is uncertain whether a datetime type carries timezone information or not, it must be represented as `DateTime` (i.e. timezone-naive) and a warning must be emitted about the uncertainty. This warning must provide sufficient information to the user so that they are able to manually resolve the issue by explicitly configuring the field in question to appear as timezone-aware or naive. | open | 2020-05-19T16:08:54Z | 2020-05-19T21:26:34Z | https://github.com/kensho-technologies/graphql-compiler/issues/829 | [
"enhancement"
] | obi1kenobi | 2 |
pallets/flask | python | 4,567 | Move `abort` to the `Flask` app object | Add an `abort` method to the `Flask` app object. Similar to functions like `flask.json.dumps`, `flask.abort` should look for a `current_app` and call its `abort` method. This will allow applications to override the abort behavior. | closed | 2022-05-02T14:29:47Z | 2022-05-27T00:06:02Z | https://github.com/pallets/flask/issues/4567 | [
"save-for-sprint"
] | davidism | 4 |
adbar/trafilatura | web-scraping | 82 | Broken docs python example | ["Producing TEI files"](https://trafilatura.readthedocs.io/en/latest/tutorial2.html#producing-tei-files) docs have this example
```python3
# load the necessary components
import trafilatura
# open a file and parse it
downloaded = trafilatura.fetch_url('https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/')
result = trafilatura.extract(downloaded, tei_output=True, tei_validation=True)
```
The [`extract`](https://github.com/adbar/trafilatura/blob/d26d2136307261aa3537bc8da2a26fe2c6975ae9/trafilatura/core.py#L761) function doesn't have a `tei_output` flag.
To get the result as `TEI`, [`output_format`](https://github.com/tomwojcik/trafilatura/blob/master/trafilatura/core.py#L600) has to be passed accordingly.
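For reference, the following variant works for me (assuming `"xmltei"` is the intended `output_format` value here; adjust if the source says otherwise):

```python
# Working variant of the docs example: output_format instead of tei_output.
import trafilatura

downloaded = trafilatura.fetch_url('https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/')
result = trafilatura.extract(downloaded, output_format="xmltei", tei_validation=True)
```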
| closed | 2021-06-07T11:50:12Z | 2021-06-07T18:13:26Z | https://github.com/adbar/trafilatura/issues/82 | [] | tomwojcik | 0 |
axnsan12/drf-yasg | rest-api | 378 | Serializer to openapi.Schema | First off, awesome library!
Running into an issue with some custom `ModelViewSet` methods not showing anything in their responses. To deal with it, I added a method decorator:
```python
@method_decorator(
    name="list",
    decorator=swagger_auto_schema(
        operation_id="List Questions",
        operation_description="List Questions",
        responses={"200": openapi.Response("OK", QuestionSerializer(many=True))},
        security=[{"JWT": []}, {None: []}],
    ),
)
```
but it is unable to recognize that this should be wrapped in a `PageNumberPagination` result.
I tried manually creating the payload by modifying the decorator:
```python
@method_decorator(
    name="list",
    decorator=swagger_auto_schema(
        operation_id="List Questions",
        operation_description="List Questions",
        responses={"200": openapi.Response("OK", openapi.Schema(
            type=openapi.TYPE_OBJECT,
            properties=OrderedDict((
                ('count', openapi.Schema(type=openapi.TYPE_INTEGER)),
                ('next', openapi.Schema(type=openapi.TYPE_STRING, format=openapi.FORMAT_URI, x_nullable=True)),
                ('previous', openapi.Schema(type=openapi.TYPE_STRING, format=openapi.FORMAT_URI, x_nullable=True)),
                ('results', openapi.Schema(type=openapi.TYPE_ARRAY, items=openapi.Response("OK", QuestionSerializer(many=True)))),
            )),
            required=['results']
        ))},
        security=[{"JWT": []}, {None: []}],
    ),
)
```
but obviously this doesn't work. I tried a few things in the "results" property, but couldn't generate a schema from the serializer. I saw another issue mentioning this but had trouble following along to a resolution.
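The closest workaround I've found so far is to describe the paginated envelope with a plain serializer instead of raw `openapi.Schema` objects (untested sketch; `PaginatedQuestionSerializer` is just a name I made up):

```python
# Untested sketch: describe the page envelope as a plain DRF serializer.
from rest_framework import serializers

class PaginatedQuestionSerializer(serializers.Serializer):
    count = serializers.IntegerField()
    next = serializers.URLField(allow_null=True)
    previous = serializers.URLField(allow_null=True)
    results = QuestionSerializer(many=True)

# then: responses={"200": openapi.Response("OK", PaginatedQuestionSerializer())}
```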
Is there any way to do this? | open | 2019-06-07T20:48:16Z | 2025-03-07T12:16:50Z | https://github.com/axnsan12/drf-yasg/issues/378 | [
"bug",
"triage"
] | zak10 | 3 |
microsoft/nni | pytorch | 5,720 | NNI is running; an epoch has completed, but there's no value on the page | **Describe the issue**:
An epoch has already completed, but no metric values appear on the web UI page.
**Environment**:
- NNI version:2.5
- Training service (local|remote|pai|aml|etc):local
- Client OS:Win10
- Server OS (for remote mode only):
- Python version: 3.7
- PyTorch/TensorFlow version:PyTorch
- Is conda/virtualenv/venv used?:conda
- Is running in Docker?: no
**Configuration**:
```yaml
searchSpaceFile: search_space.json
trialCommand: python train_nni.py
trialGpuNumber: 0
trialConcurrency: 1
tuner:
  name: TPE
  classArgs:
    optimize_mode: maximize
trainingService:
  platform: local
```

**How to reproduce it?**: | open | 2023-12-10T11:22:42Z | 2023-12-10T11:22:42Z | https://github.com/microsoft/nni/issues/5720 | [] | yao-ao | 0 |
mckinsey/vizro | pydantic | 281 | Uploading files in Vizro apps | ### Which package?
vizro
### What's the problem this feature will solve?
Perhaps there is already a way to do it, but I want the user to be able to select a file on their local computer and use it for analysis. I was hoping there was a "vizro" object for uploads, but I can't find one. Lacking that, can one somehow include a dcc object, such as Upload, in a "vizro" app? I saw one place that said a vizro object was "a thin wrapper" on a dcc object, but I don't know if there is some way for me to do that. Also lacking that, could I make a multi-page app with a dcc Upload on a separate page, or perhaps as a dropdown? If not, I think you really should have an "Upload" object or some object that includes that functionality (see the sketch below for what I mean).
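For reference, this is the kind of plain-Dash component I mean (sketch only; I don't know how, or whether, it can be dropped into a Vizro page):

```python
# Plain Dash upload component, for illustration only.
from dash import dcc, html

upload = dcc.Upload(
    id="upload-data",
    children=html.Div(["Drag and drop or ", html.A("select a file")]),
    multiple=False,
)
```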
### Describe the solution you'd like
I described ideally want to do Upload in a vizro object in the "What's the problem" section above.
### Alternative Solutions
I described alternative solutions in the "What's the problem" section above.
### Additional context
Is there a vizro forum for discussing this sort of issue? I looked at the "dash" forum, but it only seemed to mention vizro in one post.
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | open | 2024-01-24T03:56:05Z | 2024-07-08T15:03:32Z | https://github.com/mckinsey/vizro/issues/281 | [
"Feature Request :nerd_face:"
] | bcichowlas | 2 |
deezer/spleeter | deep-learning | 304 | [Discussion] How to finetune “2stems-finetune” model with "F":1536 | <!-- Please respect the title [Discussion] tag. -->
Greetings. I tried training from the "2stems-finetune" model checkpoint, with my own vocals-accompaniment dataset (44.1 kHz, stereo).
It works fine with "F":1024, but it reports an error when I raise "F" to a higher value like 1536. The error info is as follows:
```
(0) Invalid argument: assertion failed: [Need value.shape >= size, got ] [624 3072 2] [512 4608 2]
	 [[{{node random_crop/Assert/Assert}}]]
	 [[IteratorGetNext]]
	 [[IteratorGetNext/_2768]]
(1) Invalid argument: assertion failed: [Need value.shape >= size, got ] [624 3072 2]
```
It seems that "2stems-finetune" itself was trained with "F":1024. Is there any way I can fine-tune it with a higher "F" value, so I can take advantage of the information above 11 kHz in my own dataset?
Any idea appreciated!
| closed | 2020-03-27T03:38:16Z | 2020-04-02T09:14:36Z | https://github.com/deezer/spleeter/issues/304 | [
"question"
] | blackpaintedman | 1 |
Kludex/mangum | asyncio | 44 | Allow setting environment vars in CLI | closed | 2019-07-31T05:31:54Z | 2019-08-01T01:22:23Z | https://github.com/Kludex/mangum/issues/44 | [] | jordaneremieff | 0 |
|
Anjok07/ultimatevocalremovergui | pytorch | 867 | Error with MDX-Net model MDX23C-InstVoc HQ | Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
```
RuntimeError: "Error opening 'F:/***.wav': System error."

Traceback Error: "
  File "UVR.py", line 6565, in process_start
  File "separate.py", line 683, in seperate
  File "separate.py", line 342, in final_process
  File "separate.py", line 406, in write_audio
  File "separate.py", line 379, in save_with_message
  File "separate.py", line 350, in save_audio_file
  File "soundfile.py", line 430, in write
  File "soundfile.py", line 740, in __init__
  File "soundfile.py", line 1264, in _open
  File "soundfile.py", line 1455, in _error_check
"
```
Error Time Stamp [2023-10-07 13:07:04]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v3 | UVR_Model_1
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: MDX23C-InstVoc HQ
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: Vocals
mdx_stems: Vocals | open | 2023-10-07T05:13:25Z | 2023-12-28T12:01:58Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/867 | [] | Errorrrrr | 1 |
pydantic/FastUI | pydantic | 262 | Could we build a "plugin" system to expand the components library? | I think it would be great if components that are built in separate Python/JS packages by the community could be brought into the `fastui` framework.
This would make the components much easier to "pick and choose" and many of the components might not need to be in the main framework.
My lack of JS knowledge unfortunately prevents me from providing more thoughts for now, as I'm not sure how that could be integrated with the current `prebuilt_html` approach and the rest.
NB: I think the `polars` model for plugins is pretty cool, so just sharing for inspiration (albeit with a rust to python approach): [plugins](https://docs.pola.rs/user-guide/expressions/plugins/#community-plugins), [community](https://docs.pola.rs/user-guide/expressions/plugins/#community-plugins) | open | 2024-04-05T08:55:37Z | 2024-04-05T08:55:37Z | https://github.com/pydantic/FastUI/issues/262 | [] | tim-x-y-z | 0 |
graphql-python/graphene-sqlalchemy | graphql | 341 | Post processing relationship result with SQLAlchemyConnectionField | Hi there!
Maybe someone asked this question, sorry.
But how to do post processing or override relationship result with `SQLAlchemyConnectionField`?
https://github.com/graphql-python/graphene-sqlalchemy/blob/master/examples/flask_sqlalchemy/schema.py#L36
```Python
class Query(graphene.ObjectType):
    ....
    all_departments = SQLAlchemyConnectionField(Department.connection, sort=None)

    def resolve_all_departments(root, info, *args, **kwargs):
        # Like result = super().resolve_all_departments() ?
        # Do post processing result / add some additional filters to query.
        # return result
```
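For context, this is roughly what I'm imagining (untested sketch; I'm assuming `SQLAlchemyConnectionField.get_query` is the right hook, and `DepartmentModel` stands for the underlying SQLAlchemy model):

```python
import graphene
from graphene_sqlalchemy import SQLAlchemyConnectionField

class Query(graphene.ObjectType):
    all_departments = SQLAlchemyConnectionField(Department.connection, sort=None)

    def resolve_all_departments(root, info, **kwargs):
        # Untested: rebuild the base query the field would have used...
        query = SQLAlchemyConnectionField.get_query(DepartmentModel, info, **kwargs)
        # ...then add extra filters / post-processing before returning it.
        return query.filter(DepartmentModel.name != "hidden")
```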
| closed | 2022-04-29T22:01:47Z | 2023-02-24T14:56:10Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/341 | [
"question"
] | ego | 3 |
voila-dashboards/voila | jupyter | 1,149 | Framework | closed | 2022-04-27T18:48:41Z | 2022-04-27T18:48:49Z | https://github.com/voila-dashboards/voila/issues/1149 | [] | FaisalF12 | 0 |
|
HumanSignal/labelImg | deep-learning | 441 | Application hangs on adding specific number of annotations for an image | When running labelImg.py, the application hangs if I create more than 10 annotations for a medium-resolution image; for a higher-resolution image, it hangs on the third selection.
- **OS: Windows 10**
- **PyQt version: 1.5.2**
| closed | 2019-02-04T10:47:54Z | 2024-01-07T13:33:57Z | https://github.com/HumanSignal/labelImg/issues/441 | [] | PrakrutiChandak | 3 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,523 | [Bug]: Webui Infotext settings not working correctly | ### What happened?
I added options in settings to remove all adetailer infotext, to have a simple and clean infotext, but adetailer info continues to be saved in the infotext. I don't know if this bug is limited to adetailer or exists in all extensions.
### Steps to reproduce the problem
Add what you don't want in infotext like image below

### What should have happened?
The adetailer (and other) info must not be inserted into the infotext if it has been added to the exclusion fields.
Webui v1.8
| open | 2024-04-15T08:12:25Z | 2024-04-15T08:24:36Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15523 | [
"bug-report"
] | ema7569 | 0 |
mitmproxy/pdoc | api | 179 | HTML generation / carriage return lost for multi-lines docstring | Hi,
I'm using the PEP 287 reStructuredText docstring format for my documentation, like:
```
"""
This is a reST style.
:param param1: this is a first param
:param param2: this is a second param
:returns: this is a description of what is returned
:raises keyError: raises an exception
"""
```
and when generating the HTML with pdoc, all the lines are merged into one long line instead of n lines (unless I add '\n' at the end of each line, but...)
Thanks ! | closed | 2018-10-11T13:36:37Z | 2021-01-19T16:52:27Z | https://github.com/mitmproxy/pdoc/issues/179 | [] | shazz | 7 |
lepture/authlib | django | 494 | JWE | How do I use:
{"alg": "A256KW", "enc": "A256GCM"} | closed | 2022-09-27T07:34:50Z | 2022-09-27T07:49:38Z | https://github.com/lepture/authlib/issues/494 | [] | timscriptov | 1 |
Gozargah/Marzban | api | 1,295 | Can't set a custom node for a user subscription via the API | Hi, I want to build functionality to change the list of servers in a user's subscription. But when I send a PUT request with something like:

```json
{
  "links": [
    "vless://heremysubcription to server 1",
    "False"
  ]
}
```

then I get this response:

```json
{
  "proxies": {},
  "expire": 0,
  "data_limit": 0,
  "data_limit_reset_strategy": "no_reset",
  "inbounds": {},
  "note": "string",
  "sub_updated_at": "2024-09-03T13:48:06.648Z",
  "sub_last_user_agent": "string",
  "online_at": "2024-09-03T13:48:06.648Z",
  "on_hold_expire_duration": 0,
  "on_hold_timeout": "2024-09-03T13:48:06.648Z",
  "auto_delete_in_days": 0,
  "username": "string",
  "status": "active",
  "used_traffic": 0,
  "lifetime_used_traffic": 0,
  "created_at": "2024-09-03T13:48:06.648Z",
  "links": ["vless://heremysubcription to server 1", "vless://heremysubcription to server 2", "False"],
  "subscription_url": "",
  "excluded_inbounds": {},
  "admin": {
    "username": "string",
    "is_sudo": true,
    "telegram_id": 0,
    "discord_webhook": "string"
  }
}
```

I have 2 nodes and 1 main server, but I can't modify them. Is it simply not possible to change the list of servers, or am I doing something wrong? I tried sending a PUT request with the full object returned by /api/user/{username}, changing just the links array. Please help 🙏 | closed | 2024-09-03T13:54:57Z | 2024-09-03T15:29:05Z | https://github.com/Gozargah/Marzban/issues/1295 | [
"Question",
"P3",
"API"
] | MaximCemencov | 4 |
replicate/cog | tensorflow | 1,263 | Prakash | closed | 2023-08-20T04:02:00Z | 2023-08-20T14:26:58Z | https://github.com/replicate/cog/issues/1263 | [] | Prakash07zaliy | 0 |
|
encode/databases | sqlalchemy | 566 | Drop support for python 3.7 | Python 3.7 has just reached end-of-life: https://devguide.python.org/versions/#versions.
We can drop support for it now and update the codebase to use features available in `^3.8`. | closed | 2023-08-19T19:37:47Z | 2024-02-22T22:34:25Z | https://github.com/encode/databases/issues/566 | [] | zevisert | 1 |
google-research/bert | nlp | 1,210 | How to guide BERT to [MASK] certain tokens | I am fairly new to BERT and I was wondering if there is a way to guide the model into only [MASK]ing certain words. For instance, if I wanted to only randomly [MASK] Verbs, how will I go about doing that?
Thank you | open | 2021-03-19T09:23:44Z | 2021-03-19T09:23:44Z | https://github.com/google-research/bert/issues/1210 | [] | AdaUchendu | 0 |
Lightning-AI/pytorch-lightning | pytorch | 19,779 | Huge metrics jump between epochs && Step and epoch log not matched, when accumulate_grad_batches > 1 | ### Bug description
At first I noticed a huge jump between epochs for both the loss and the accuracy calculated by torchmetrics. I debugged for a couple of days by adding `drop_last=True` to the dataloader, adding some dropout, and changing the model, but nothing changed.
To clarify: exp 4302c358770fe8041adbdc5137f079b8 has accumulate_grad_batches=4, batch_size=2, and DDP on 8 GPUs; exp a67c8a809390fe0b06b0d6737009f6e2 has accumulate_grad_batches=1, batch_size=4, and DDP on 4 GPUs. All other configurations are the same, so the overall batch sizes are equal.
There are some observations that may be related to this problem:
1. There's no cycled lr schedule and I shuffle the train dataset before each epoch
2. The loss fluctuated a lot due to some random masking in training
3. The train epoch metric curves and validation curves are quite normal and keep decreasing. However, as you can see in the picture for exp 4302c358770fe8041adbdc5137f079b8, the step metrics and epoch metrics do not match (log_every_n_steps=1). I tried averaging the step metrics manually and they still did not match.
4. After debugging for a couple of days, I set accumulate_grad_batches to 1 (exp a67c8a809390fe0b06b0d6737009f6e2), and the problem was solved.
So after these experiments I tested the models; both the jumpy and the normal experiments worked just fine. Maybe there is an issue in the logging process, but it's hard to trace through the code. A reproduction may take me a while, so I'm just dropping the description here in case you have any thoughts. If you do need the code, I'll see what I can do.
<img width="1220" alt="image" src="https://github.com/Lightning-AI/pytorch-lightning/assets/55123830/edc23047-2630-4e28-b23a-546484b6781f">
<img width="432" alt="image" src="https://github.com/Lightning-AI/pytorch-lightning/assets/55123830/cd3f4abb-13d7-40f2-b13b-fd7c904ba77a">
<img width="1231" alt="image" src="https://github.com/Lightning-AI/pytorch-lightning/assets/55123830/ad259ded-e572-4e13-8372-2f98cd040b6c">
<img width="1232" alt="image" src="https://github.com/Lightning-AI/pytorch-lightning/assets/55123830/5289a757-9862-4aec-890d-fb6cefd274cd">
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | open | 2024-04-15T14:50:08Z | 2024-04-16T06:32:47Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19779 | [
"bug",
"needs triage"
] | stg1205 | 0 |
dmlc/gluon-cv | computer-vision | 785 | Any other parameters that make training time shorter (different epochs, decay epochs)? | The models in GluonCV perform very well, but the training parameters given in the shell scripts require a lot of time to train. For example, COCO has to be trained for 26 epochs, and this takes even longer if GPU resources are insufficient.

When I used Detectron, two different training schedules (1x, 2x) were provided. The former takes less training time without hurting performance much.
Does GluonCV have more experiments in this area? Can you provide some parameters that make training faster for people like me who lack GPUs? | closed | 2019-05-28T13:05:28Z | 2019-06-10T10:04:20Z | https://github.com/dmlc/gluon-cv/issues/785 | [] | zhoulukuan | 2
PaddlePaddle/models | nlp | 5,207 | variational_seq2seq inference error | in models/PaddleNLP/legacy/seq2seq/variational_seq2seq/
run `sh infer.sh ptb`
it shows:
```
----------------------
Error Message Summary:
----------------------
InvalidArgumentError: Dims of all Inputs(X) must be the same, but received input 1 dim is:320096, 1 not equal to input 0 dim:32, 1.
  [Hint: Expected input_dims[i] == input_dims[0], but received input_dims[i]:320096, 1 != input_dims[0]:32, 1.] at (/paddle/paddle/fluid/operators/stack_op.cc:46)
  [operator < stack > error]
```
in model.py
```python
outputs, _ = dynamic_decode(
    beam_search_decoder,
    inits=dec_initial_states,
    max_step_num=max_length)
```
"paddlenlp"
] | nickyoungforu | 9 |
deepspeedai/DeepSpeed | pytorch | 6,827 | [BUG] ImportError: libcufft.so.10: cannot read file data | **Describe the bug**
The following error occurs when I execute DeepSpeed:
```
Traceback (most recent call last):
  File "/root/miniconda3/envs/deepspeed/bin/deepspeed", line 3, in <module>
    from deepspeed.launcher.runner import main
  File "/root/miniconda3/envs/deepspeed/lib/python3.10/site-packages/deepspeed/__init__.py", line 10, in <module>
    import torch
  File "/root/miniconda3/envs/deepspeed/lib/python3.10/site-packages/torch/__init__.py", line 367, in <module>
    from torch._C import * # noqa: F403
ImportError: /root/miniconda3/envs/deepspeed/lib/python3.10/site-packages/torch/lib/../../../../libcufft.so.10: cannot read file data
```
| closed | 2024-12-06T02:56:38Z | 2024-12-09T18:14:50Z | https://github.com/deepspeedai/DeepSpeed/issues/6827 | [
"bug",
"training"
] | 1259010439 | 2 |
jupyter-book/jupyter-book | jupyter | 1,631 | installation problem | ### Describe the bug
Hello,
I have tried to install jupyter-book inside a virtual environment created with Python 3.8.10, and I get errors during the installation (`pip install -U jupyter-book`):
```
ERROR: myst-parser 0.15.2 has requirement mdit-py-plugins~=0.2.8, but you'll have mdit-py-plugins 0.3.0 which is incompatible.
ERROR: black 22.1.0 has requirement click>=8.0.0, but you'll have click 7.1.2 which is incompatible.
ERROR: myst-nb 0.13.1 has requirement sphinx-togglebutton~=0.2.2, but you'll have sphinx-togglebutton 0.3.0 which is incompatible.
ERROR: sphinx-book-theme 0.1.10 has requirement docutils<0.17,>=0.15, but you'll have docutils 0.17.1 which is incompatible.
```
In case it matters, I am on Ubuntu 20.04.
Can someone help, please?
### Reproduce the bug
`pip install -U jupyter-book`
### List your environment
```
$ jb --version
Jupyter Book      : 0.12.1
External ToC      : 0.2.3
MyST-Parser       : 0.15.2
MyST-NB           : 0.13.1
Sphinx Book Theme : 0.1.10
Jupyter-Cache     : 0.4.3
NbClient          : 0.5.10
```
| closed | 2022-02-08T08:48:03Z | 2022-02-08T12:44:13Z | https://github.com/jupyter-book/jupyter-book/issues/1631 | [
"bug"
] | kamelNaroun | 4 |
CorentinJ/Real-Time-Voice-Cloning | python | 724 | Unable to unzip VoxCeleb1 and VoxCeleb2 | I followed the instructions to download the datasets (VoxCeleb1 and VoxCeleb2) and concatenated the files using "cat vox1_dev* > vox1_dev_wav.zip". However, I get the following error when I try to extract it:
```
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Archive contains ‘\020\313¨/b\374!8\320\373h’ where numeric off_t value expected
tar: Archive contains ‘V\001W\216A\306R\201\373\231\020\311’ where numeric off_t value expected
tar: Archive contains ‘e\036\363\257N*\225\330[?\242\034’ where numeric off_t value expected
tar: Archive contains ‘\272\r\227:jτ\335CT\277G’ where numeric off_t value expected
tar: Exiting with failure status due to previous errors
```
Kindly let me know how the datasets are supposed to be downloaded and used. Thank you! | closed | 2021-04-05T12:04:18Z | 2021-04-09T19:32:52Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/724 | [] | namanshah971 | 3 |
okken/pytest-check | pytest | 171 | check doesn't respect runxfail option | When `runxfail` option is set (`pytest --runxfail ...`) tests that use `check` are reported as xfail instead of failed:
_xfail_test.py_
```python
import pytest
@pytest.mark.xfail(reason="foo")
def test_xfail(check):
with check: # with this line gone, this is actual behaviour
assert False
```
# Expected behaviour
```bash
$ pytest --runxfail xfail_test.py
================================================================ test session starts =================================================================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/taylermulligan/pytest_xfail
plugins: check-2.4.1
collected 1 item
xfail_test.py F [100%]
====================================================================== FAILURES ======================================================================
_____________________________________________________________________ test_xfail _____________________________________________________________________
check = <pytest_check.context_manager.CheckContextManager object at 0x7f2db410d120>
@pytest.mark.xfail(reason="foo")
def test_xfail(check):
> assert False
E assert False
xfail_test.py:5: AssertionError
============================================================== short test summary info ===============================================================
FAILED xfail_test.py::test_xfail - assert False
================================================================= 1 failed in 0.01s ==================================================================
```
# Actual behaviour
```bash
$ pytest --runxfail xfail_test.py
================================================================ test session starts =================================================================
platform linux -- Python 3.10.12, pytest-8.3.4, pluggy-1.5.0
rootdir: /home/taylermulligan/pytest_xfail
plugins: check-2.4.1
collected 1 item
xfail_test.py x [100%]
================================================================= 1 xfailed in 0.03s =================================================================
``` | closed | 2024-12-10T21:04:43Z | 2025-02-09T20:57:04Z | https://github.com/okken/pytest-check/issues/171 | [] | taylermulligan | 0 |
python-gino/gino | sqlalchemy | 265 | Compatibility with Python 3.7 | - [x] Quart support for 3.7 (https://gitlab.com/pgjones/quart/merge_requests/19)
- [x] Remove watchdog and PyYAML dependencies
- [x] Travis tests for 3.7 (https://github.com/travis-ci/travis-ci/issues/9815) | closed | 2018-07-02T05:56:09Z | 2018-07-06T03:28:43Z | https://github.com/python-gino/gino/issues/265 | [] | wwwjfy | 2 |
google-research/bert | nlp | 1,066 | BERT-Tiny, BERT-Mini, BERT-Small, BERT-Medium - TF 2.0 checkpoints | Hi all,
I am looking at the BERT checkpoints for TF 2.0 here: https://github.com/tensorflow/models/tree/master/official/nlp/bert.
Are checkpoints for BERT-Tiny, BERT-Mini, BERT-Small, and BERT-Medium available in TF 2.0?
| closed | 2020-04-20T17:42:37Z | 2020-08-14T19:17:55Z | https://github.com/google-research/bert/issues/1066 | [] | 17patelumang | 2 |
aleju/imgaug | machine-learning | 262 | generate trapeze | Hello,
With imgaug, I would like to transform the image into a trapezoid. How can I do this?
```
+--+        ++
|  |  to   /  \
+--+      +----+
```
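The closest I've found so far is `PerspectiveTransform`, but it warps randomly rather than to a fixed trapezoid (sketch; `example.jpg` is a placeholder):

```python
# Sketch: random perspective warp, which approximates a trapezoid
# but does not produce one exactly.
import imageio
import imgaug.augmenters as iaa

image = imageio.imread("example.jpg")  # placeholder input
aug = iaa.PerspectiveTransform(scale=(0.05, 0.15))
warped = aug(image=image)
```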
Thanks | open | 2019-02-19T15:56:36Z | 2019-02-25T09:35:16Z | https://github.com/aleju/imgaug/issues/262 | [] | pprados | 2 |
CTFd/CTFd | flask | 1,774 | Customize challenge submission response | We should probably be able to customize how a challenge submission response looks.
Like being able to change an alert to a toast or a modal or something, and being able to customize the text in it. | open | 2021-01-08T21:27:13Z | 2021-01-08T21:27:13Z | https://github.com/CTFd/CTFd/issues/1774 | [] | ColdHeat | 0
2noise/ChatTTS | python | 171 | Come and try it out | Welcome to try it out: https://chattts.sctux.cc

| closed | 2024-06-01T08:43:26Z | 2024-06-17T03:36:11Z | https://github.com/2noise/ChatTTS/issues/171 | [] | guomaoqiu | 3 |
HumanSignal/labelImg | deep-learning | 792 | Error opening file | The code in labelImg.py only takes 'jpg' file types into consideration. Thus, when an image's actual type isn't 'jpg', it fails to open gracefully, like this:

- **OS:** Windows + anaconda
- **PyQt version:** 5.9.7
| open | 2021-09-07T10:48:34Z | 2021-11-19T13:33:15Z | https://github.com/HumanSignal/labelImg/issues/792 | [] | Venessa-wei | 8 |
2noise/ChatTTS | python | 139 | Unable to generate speech | <img width="1685" alt="image" src="https://github.com/2noise/ChatTTS/assets/10117682/aa8fba07-868f-4172-9910-884710233fb8">
Installed normally; after entering text in the UI, speech generation waits indefinitely. | closed | 2024-05-31T09:53:52Z | 2024-06-19T03:55:10Z | https://github.com/2noise/ChatTTS/issues/139 | [] | swizardlv | 7
ydataai/ydata-profiling | pandas | 831 | Correlation options in "Advanced Usage" not working as expected | Trying to run profiling with:
```python
profile = ProfileReport(
    postgres_db_table, title=db_parameter_dict["tableName"], html={"style": {"full_width": True}},
    sort=None, minimal=None, interactions={'continuous': False}, orange_mode=True,
    correlations={
        "pearson": {"calculate": True, "warn_high_correlations": True, "threshold": 0.9},
        "spearman": {"calculate": False},
        "kendall": {"calculate": False},
        "phi_k": {"calculate": False},
        "cramers": {"calculate": False},
    }
)
```
parameters, but no correlation visualizations show up in the report HTML. So I want to run just the Pearson correlation, but I can't.
When I try the parameters below:
```python
ProfileReport(
    postgres_db_table, title=db_parameter_dict["tableName"], html={"style": {"full_width": True}},
    sort=None, minimal=None, interactions={'continuous': False}, orange_mode=True,
    correlations={"pearson": {"calculate": True}}
)
```
Only "Phik, Cramers V" tabs shows up in profiling report html.
To Reproduce
Data:
Famous Titanic dataset with 889 records and ['id', 'survived', 'pclass', 'name', 'sex', 'age', 'sibsp', 'parch',
'ticket', 'fare', 'embarked'] columns
Version information:
python: 3.7.0
Environment: Jupyter Notebook
<details><summary>Click to expand <strong><em>Version information</em></strong></summary>
<p>
absl-py==0.13.0
adal==1.2.6
alembic==1.4.1
altair==4.1.0
amqp==2.6.1
apispec==3.3.2
appdirs==1.4.4
astroid==2.3.1
astunparse==1.6.3
atomicwrites==1.4.0
attrs==20.3.0
autopep8==1.5
azure-common==1.1.26
azure-graphrbac==0.61.1
azure-mgmt-authorization==0.61.0
azure-mgmt-containerregistry==2.8.0
azure-mgmt-keyvault==2.2.0
azure-mgmt-resource==12.0.0
azure-mgmt-storage==11.2.0
azureml-core==1.23.0
Babel==2.8.0
backcall==0.1.0
backoff==1.10.0
backports.tempfile==1.0
backports.weakref==1.0.post1
bcrypt==3.2.0
beautifulsoup4==4.9.0
billiard==3.6.3.0
bleach==3.1.0
bokeh==2.3.1
Boruta==0.3
boto==2.49.0
boto3==1.12.9
botocore==1.15.9
Bottleneck==1.3.2
Brotli==1.0.9
bs4==0.0.1
bson==0.5.9
cached-property==1.5.2
cachelib==0.1.1
cachetools==4.2.1
celery==4.4.7
certifi==2019.9.11
cffi==1.14.3
chardet==3.0.4
chart-studio==1.1.0
clang==5.0
click==8.0.0
cloudpickle==1.6.0
colorama==0.4.1
colorcet==2.0.6
colorlover==0.3.0
colour==0.1.5
confuse==1.4.0
contextlib2==0.6.0.post1
croniter==0.3.34
cryptography==3.2
cssselect==1.1.0
cufflinks==0.17.3
cx-Oracle==7.2.3
cycler==0.10.0
d6tcollect==1.0.5
d6tstack==0.2.0
dash==1.16.1
dash-core-components==1.12.1
dash-html-components==1.1.1
dash-renderer==1.8.1
dash-table==4.10.1
databricks-cli==0.14.2
dataclasses==0.6
decorator==4.4.0
defusedxml==0.6.0
dnspython==2.0.0
docker==4.4.4
docopt==0.6.2
docutils==0.15.2
dtreeviz==1.3
email-validator==1.1.1
entrypoints==0.3
et-xmlfile==1.0.1
exitstatus==1.4.0
extratools==0.8.2.1
fake-useragent==0.1.11
feature-selector===N-A
findspark==1.4.2
Flask==1.1.1
Flask-AppBuilder==3.0.1
Flask-Babel==1.0.0
Flask-Caching==1.9.0
Flask-Compress==1.5.0
Flask-Cors==3.0.10
Flask-JWT-Extended==3.24.1
Flask-Login==0.4.1
Flask-Migrate==2.5.3
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.4
flask-talisman==0.7.0
Flask-WTF==0.14.3
flatbuffers==1.12
future==0.18.2
gast==0.4.0
gensim==3.8.1
geographiclib==1.50
geopy==2.0.0
gitdb==4.0.5
GitPython==3.1.14
google-api-core==1.26.0
google-auth==1.27.0
google-auth-oauthlib==0.4.6
google-cloud-core==1.6.0
google-cloud-storage==1.36.1
google-crc32c==1.1.2
google-pasta==0.2.0
google-resumable-media==1.2.0
googleapis-common-protos==1.53.0
graphviz==0.17
great-expectations==0.13.19
grpcio==1.40.0
gunicorn==20.0.4
h5py==3.1.0
htmlmin==0.1.12
humanize==2.6.0
idna==2.8
ImageHash==4.2.0
imageio==2.9.0
imbalanced-learn==0.5.0
imblearn==0.0
imgkit==1.2.2
importlib-metadata==1.7.0
iniconfig==1.1.1
instaloader==4.7.1
ipykernel==5.1.2
ipython==7.8.0
ipython-genutils==0.2.0
ipywidgets==7.5.1
isodate==0.6.0
isort==4.3.21
itsdangerous==1.1.0
jdcal==1.4.1
jedi==0.15.1
jeepney==0.6.0
Jinja2==2.11.2
jmespath==0.9.5
joblib==1.0.0
json5==0.8.5
jsonpatch==1.32
jsonpickle==2.0.0
jsonpointer==2.1
jsonschema==3.0.2
jupyter==1.0.0
jupyter-client==6.1.11
jupyter-console==6.2.0
jupyter-contrib-core==0.3.3
jupyter-contrib-nbextensions==0.5.1
jupyter-core==4.7.0
jupyter-highlight-selected-word==0.2.0
jupyter-latex-envs==1.4.6
jupyter-nbextensions-configurator==0.4.1
jupyterlab==1.1.3
jupyterlab-server==1.0.6
jupyterthemes==0.20.0
karateclub==1.0.11
keras==2.6.0
Keras-Preprocessing==1.1.2
kiwisolver==1.1.0
kombu==4.6.11
kubernetes==12.0.1
lazy-object-proxy==1.4.2
lesscpy==0.14.0
lightgbm==2.2.3
llvmlite==0.35.0
lxml==4.5.0
Mako==1.1.3
Markdown==3.2.2
MarkupSafe==1.1.1
marshmallow==3.8.0
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.1
matplotlib==3.4.1
mccabe==0.6.1
MechanicalSoup==0.12.0
metakernel==0.27.5
missingno==0.4.2
mistune==0.8.4
mleap==0.16.1
mlxtend==0.17.3
msgpack==1.0.0
msrest==0.6.21
msrestazure==0.6.4
multimethod==1.4
natsort==7.0.1
nbconvert==5.6.0
nbformat==4.4.0
ndg-httpsclient==0.5.1
networkx==2.4
notebook==6.0.1
numba==0.52.0
numpy==1.19.5
oauthlib==3.1.0
openpyxl==3.0.6
opt-einsum==3.3.0
packaging==20.9
pandas==1.1.5
pandas-profiling==3.0.0
pandocfilters==1.4.2
param==1.10.1
paramiko==2.7.2
parse==1.15.0
parsedatetime==2.6
parso==0.5.1
pathlib2==2.3.5
pathspec==0.8.1
patsy==0.5.1
pexpect==4.8.0
phik==0.11.2
pickleshare==0.7.5
Pillow==8.2.0
plotly==4.14.3
pluggy==0.13.1
ply==3.11
polyline==1.4.0
prefixspan==0.5.2
prison==0.1.3
prometheus-client==0.7.1
prometheus-flask-exporter==0.18.1
prompt-toolkit==2.0.9
protobuf==3.15.4
psutil==5.7.0
psycopg2==2.8.6
ptyprocess==0.6.0
py==1.10.0
pyarrow==3.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycodestyle==2.5.0
pycparser==2.20
pyct==0.4.8
pydantic==1.8.2
pydot==1.4.2
pyee==7.0.2
Pygments==2.4.2
PyGSP==0.5.1
PyJWT==1.7.1
pylint==2.4.2
pymssql==2.1.5
PyNaCl==1.4.0
pyodbc==4.0.27
pyOpenSSL==20.0.1
pyparsing==2.4.2
pyppeteer==0.2.2
pyquery==1.4.1
pyrsistent==0.15.4
pysftp==0.2.9
PySocks==1.7.1
pytest==6.2.4
python-dateutil==2.8.1
python-dotenv==0.14.0
python-editor==1.0.4
python-louvain==0.13
python3-openid==3.2.0
pytz==2019.2
PyWavelets==1.1.1
pywin32==227
pywinpty==0.5.5
PyYAML==5.3
pyzmq==18.1.0
qtconsole==5.0.1
QtPy==1.9.0
querystring-parser==1.2.4
requests==2.25.1
requests-html==0.10.0
requests-oauthlib==1.3.0
retrying==1.3.3
rsa==4.7.2
ruamel.yaml==0.16.12
ruamel.yaml.clib==0.2.2
s3transfer==0.3.3
scikit-image==0.18.1
scikit-learn==0.23.2
scikit-plot==0.3.7
scipy==1.6.0
seaborn==0.11.1
SecretStorage==3.3.1
Send2Trash==1.5.0
shap==0.36.0
Shapely==1.7.1
six==1.15.0
sklearn==0.0
slicer==0.0.7
smart-open==1.9.0
smmap==3.0.5
sortedcontainers==2.3.0
soupsieve==2.0
spylon==0.3.0
spylon-kernel==0.4.1
SQLAlchemy==1.3.19
SQLAlchemy-Utils==0.36.8
sqlparse==0.4.1
statsmodels==0.9.0
tabulate==0.8.9
tangled-up-in-unicode==0.1.0
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.6.0
tensorflow-estimator==2.6.0
termcolor==1.1.0
terminado==0.8.2
testpath==0.4.2
threadpoolctl==2.1.0
tifffile==2021.4.8
toml==0.10.2
toolz==0.11.1
tornado==6.0.3
tqdm==4.60.0
traitlets==4.3.2
tweepy==3.8.0
twitter-scraper==0.4.2
typed-ast==1.4.0
typing-extensions==3.7.4.3
tzlocal==2.1
urllib3==1.25.9
vine==1.3.0
virtualenv==16.7.9
visions==0.7.1
w3lib==1.22.0
waitress==1.4.4
wcwidth==0.1.7
webencodings==0.5.1
websocket-client==0.58.0
websockets==8.1
Werkzeug==1.0.0
widgetsnbextension==3.5.1
wrapt==1.12.1
WTForms==2.3.3
xgboost==1.1.1
xlrd==1.2.0
XlsxWriter==1.2.2
yellowbrick==0.7
zipp==3.1.0
</p>
</details>
| open | 2021-09-21T13:45:44Z | 2021-09-21T13:45:44Z | https://github.com/ydataai/ydata-profiling/issues/831 | [] | enesMesut | 0 |
fastapi/sqlmodel | sqlalchemy | 66 | postgreSQL: SQLModel.metadata.create_all(engine) doesn't create the database file | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from datetime import datetime
from typing import Optional, Dict
from sqlmodel import Field, SQLModel, create_engine
class SemanticSearch(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    id_user: int
    date_time: datetime
    query: str
    clean_query: str
engine = create_engine('postgresql://postgres:postgres@localhost:5432/embeddings_sts_tf', echo=True)
SQLModel.metadata.create_all(engine)
```
### Description
Following the tutorial user guide, which is based on SQLite, I tried to do the same with a PostgreSQL database; but contrary to SQLite, the `SQLModel.metadata.create_all(engine)` command doesn't seem to create my `embeddings_sts_tf` PostgreSQL database.
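For what it's worth, I can work around it by creating the database first (sketch using `sqlalchemy-utils`; I'm assuming that extra dependency is acceptable):

```python
# Workaround sketch: create_all() only creates tables, so on PostgreSQL
# the database itself has to exist first (unlike SQLite's file).
from sqlalchemy_utils import create_database, database_exists

url = 'postgresql://postgres:postgres@localhost:5432/embeddings_sts_tf'
if not database_exists(url):
    create_database(url)
```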
### Operating System
Linux
### Operating System Details
Ubuntu 18.04 LTS
### SQLModel Version
0.0.4
### Python Version
3.8.8
### Additional Context
_No response_ | open | 2021-09-01T11:34:29Z | 2021-09-02T21:28:11Z | https://github.com/fastapi/sqlmodel/issues/66 | [
"question"
] | Matthieu-Tinycoaching | 1 |
gunthercox/ChatterBot | machine-learning | 1,641 | Create bot for support engineers | I have a huge amount of chat data between clients and support engineers for a particular software product. How can I create a dataset from these conversations so I can use ChatterBot as an assistant to the support engineers? | closed | 2019-02-25T19:38:15Z | 2025-02-25T22:50:46Z | https://github.com/gunthercox/ChatterBot/issues/1641 | [] | arshpreetsingh | 1
desec-io/desec-stack | rest-api | 298 | Replace Legacy DynDNS-Setup Checker with Forwarder to new Version | .. to avoid maintaining two versions. The legacy version can be found at [dedyn.io/check/](https://dedyn.io/check) and https://github.com/desec-io/desec-stack/blob/master/www/html/check.html, respectively. | open | 2020-02-07T09:20:12Z | 2020-02-07T09:20:12Z | https://github.com/desec-io/desec-stack/issues/298 | [] | nils-wisiol | 0
kevlened/pytest-parallel | pytest | 104 | Maintainers needed | The project is not totally unmaintained but only minimal maintenance is done, and only obvious bugfixes will be quickly merged and released.
If you want your pull request to be merged, please ask some other competent contributors to [review your patch](https://docs.github.com/en/github/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/reviewing-proposed-changes-in-a-pull-request#submitting-your-review).
If you are interested in maintaining this project, please answer this issue. | open | 2021-10-10T16:10:51Z | 2023-12-26T23:34:15Z | https://github.com/kevlened/pytest-parallel/issues/104 | [] | azmeuk | 5 |
benbusby/whoogle-search | flask | 367 | [DMCA] Search results are removed in Google (DMCA takedown) | Search results are ***censored*** by Google
(DMCA.) Is there a way to show the results that are blocked?
(I don't think so; this can't be done without crawling.)
Is there a way to implement this by looking at the URLs that Google is blocking? | closed | 2021-06-22T06:52:05Z | 2021-06-27T12:22:18Z | https://github.com/benbusby/whoogle-search/issues/367 | [
"question"
] | Albonycal | 2 |
d2l-ai/d2l-en | machine-learning | 2,099 | Need to tune performance for MXNet & TensorFlow for seq2seq | http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_recurrent-modern/seq2seq.html
http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_attention-mechanisms/bahdanau-attention.html
http://preview.d2l.ai.s3-website-us-west-2.amazonaws.com/d2l-en/master/chapter_attention-mechanisms/transformer.html
We need to tune hyperparameters for MXNet & TensorFlow, such as the learning rate & max_epochs, to obtain performance similar to PyTorch for each section. | open | 2022-04-13T07:55:48Z | 2022-05-16T13:23:18Z | https://github.com/d2l-ai/d2l-en/issues/2099 | [] | astonzhang | 1
datadvance/DjangoChannelsGraphqlWs | graphql | 11 | Incompatibility with channels-rabbitmq | I got errors when trying to use django-channels-graphql-ws together with channels-rabbitmq, see https://github.com/CJWorkbench/channels_rabbitmq/issues/3.
I don't understand the intricacies of the underlying problem, but I think you might at least appreciate the analysis provided by channels-rabbitmq's maintainer in the linked issue. | closed | 2019-04-05T15:16:55Z | 2019-04-20T01:43:43Z | https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/11 | [] | rakyi | 1 |
kizniche/Mycodo | automation | 1,196 | Add K-96 multi gas GHG sensor for CH4, CO2, N2O | **Is your feature request related to a problem? Please describe.**
Dear Mycodo team,
We are developing a new open-source soil greenhouse gas reader for farm applications. It will use the newer Senseair K-96 NDIR sensor or the existing K30 NDIR sensor to record CH4 and CO2 gas flux. This project is part of a USDA NIFA-supported program for open-source farming.
I'm interested in a new feature - adding a K-96 sensor.
**Describe the solution you'd like**
I need to add the K96 code to the input library.
**Describe alternatives you've considered**
This will supplement the K30 sensor option with a newer generation of more precise multi-gas NDIR sensors.
**Additional context**
Senseair provided the Python code they are using to operate the sensor; see the attachment.
[K96_ReadLog_v220516.txt](https://github.com/kizniche/Mycodo/files/8740610/K96_ReadLog_v220516.txt)
| closed | 2022-05-20T13:46:23Z | 2024-10-04T04:00:57Z | https://github.com/kizniche/Mycodo/issues/1196 | [
"enhancement",
"Testing"
] | alonrab | 14 |
wger-project/wger | django | 1,183 | Provide a way to let users donate using crypto currencies | ## Use case
Due to your great work on this project, good support and responsibility, I (and maybe some other people like me) want to donate to Wger project using crypto-currencies. Please provide a way to do so. | open | 2022-11-16T20:06:46Z | 2023-03-02T10:48:33Z | https://github.com/wger-project/wger/issues/1183 | [] | mohammadrafigh | 1 |
PokemonGoF/PokemonGo-Bot | automation | 5,511 | gmail login makes bot crash at start up | ### Expected Behavior
Bot to login with gmail account and start
### Actual Behavior
Bot crashes right at start up.
config.json:
```
{
"websocket_server": false,
"heartbeat_threshold": 10,
"enable_social": true,
"live_config_update": {
"enabled": false,
"tasks_only": false
},
"tasks": [
{
"//NOTE: This task MUST be placed on the top of task list": {},
"type": "RandomAlivePause",
"config": {
"enabled": false,
"min_duration": "00:00:10",
"max_duration": "00:10:00",
"min_interval": "00:05:00",
"max_interval": "01:30:00"
}
},
{
"type": "HandleSoftBan"
},
{
"type": "CompleteTutorial",
"config": {
"enabled": false,
"// set a name": "",
"nickname": "",
"// 0 = No Team, 1 = Blue, 2 = Red, 3 = Yellow": "",
"team": 0
}
},
{
"type": "CollectLevelUpReward",
"config": {
"collect_reward": true,
"level_limit": -1
}
},
{
"type": "IncubateEggs",
"config": {
"enabled": true,
"infinite_longer_eggs_first": false,
"breakable_longer_eggs_first": true,
"min_interval": 120
}
},
{
"type": "UpdateLiveStats",
"config": {
"enabled": false,
"min_interval": 10,
"stats": ["uptime", "stardust_earned", "xp_earned", "xp_per_hour", "stops_visited"],
"terminal_log": true,
"terminal_title": true
}
},
{
"type": "UpdateLiveInventory",
"config": {
"enabled": false,
"min_interval": 120,
"show_all_multiple_lines": false,
"items": ["pokemon_bag", "space_info", "pokeballs", "greatballs", "ultraballs", "razzberries", "luckyegg"]
}
},
{
"type": "ShowBestPokemon",
"config": {
"enabled": true,
"min_interval": 60,
"amount": 5,
"order_by": "cp",
"info_to_show": ["cp", "ivcp", "dps", "hp"]
}
},
{
"type": "TransferPokemon",
"config": {
"enabled": true,
"min_free_slot": 5,
"transfer_wait_min": 3,
"transfer_wait_max": 5
}
},
{
"type": "NicknamePokemon",
"config": {
"enabled": false,
"nickname_above_iv": 0.9,
"nickname_template": "{iv_pct}-{iv_ads}",
"nickname_wait_min": 3,
"nickname_wait_max": 5
}
},
{
"type": "EvolvePokemon",
"config": {
"enabled": false,
"// evolve only pidgey and drowzee": "",
"// evolve_list": "pidgey, drowzee",
"// donot_evolve_list": "none",
"// evolve all but pidgey and drowzee": "",
"// evolve_list": "all",
"// donot_evolve_list": "pidgey, drowzee",
"evolve_list": "all",
"donot_evolve_list": "none",
"first_evolve_by": "cp",
"evolve_above_cp": 500,
"evolve_above_iv": 0.8,
"logic": "or",
"evolve_speed": 20,
"min_pokemon_to_be_evolved": 1,
"use_lucky_egg": false
}
},
{
"type": "RecycleItems",
"config": {
"enabled": true,
"min_empty_space": 15,
"max_balls_keep": 150,
"max_potions_keep": 50,
"max_berries_keep": 70,
"max_revives_keep": 70,
"item_filter": {
"Pokeball": { "keep" : 100 },
"Potion": { "keep" : 10 },
"Super Potion": { "keep" : 20 },
"Hyper Potion": { "keep" : 30 },
"Revive": { "keep" : 30 },
"Razz Berry": { "keep" : 100 }
},
"recycle_wait_min": 3,
"recycle_wait_max": 5,
"recycle_force": true,
"recycle_force_min": "00:01:00",
"recycle_force_max": "00:05:00"
}
},
{
"type": "CatchPokemon",
"config": {
"enabled": true,
"catch_visible_pokemon": true,
"catch_lured_pokemon": true,
"min_ultraball_to_keep": 5,
"berry_threshold": 0.35,
"vip_berry_threshold": 0.9,
"treat_unseen_as_vip": true,
"daily_catch_limit": 800,
"vanish_settings": {
"consecutive_vanish_limit": 10,
"rest_duration_min": "02:00:00",
"rest_duration_max": "04:00:00"
},
"catch_throw_parameters": {
"excellent_rate": 0.1,
"great_rate": 0.5,
"nice_rate": 0.3,
"normal_rate": 0.1,
"spin_success_rate" : 0.6
},
"catch_simulation": {
"flee_count": 3,
"flee_duration": 2,
"catch_wait_min": 3,
"catch_wait_max": 6,
"berry_wait_min": 3,
"berry_wait_max": 5,
"changeball_wait_min": 3,
"changeball_wait_max": 5,
"newtodex_wait_min": 20,
"newtodex_wait_max": 30
}
}
},
{
"type": "SpinFort",
"config": {
"enabled": true,
"spin_wait_min": 3,
"spin_wait_max": 5
}
},
{ "type": "UpdateWebInventory",
"config": {
"enabled": true
}
},
{
"type": "MoveToFort",
"config":{
"enabled": true,
"lure_attraction": true,
"lure_max_distance": 2000,
"log_interval": 5
}
},
{
"type": "FollowSpiral",
"config": {
"enabled": true,
"diameter": 4,
"step_size": 70
}
}
],
"map_object_cache_time": 5,
"forts": {
"avoid_circles": true,
"max_circle_size": 50,
"cache_recent_forts": true
},
"pokemon_bag": {
"// if 'show_at_start' is true, it will log all the pokemons in the bag (not eggs) at bot start": {},
"show_at_start": true,
"// if 'show_count' is true, it will show the amount of each pokemon (minimum 1)": {},
"show_count": false,
"// if 'show_candies' is true, it will show the amount of candies for each pokemon": {},
"show_candies": false,
"// 'pokemon_info' parameter define which info to show for each pokemon": {},
"// the available options are": {},
"// ['cp', 'iv_ads', 'iv_pct', 'ivcp', 'ncp', 'level', 'hp', 'moveset', 'dps']": {},
"pokemon_info": ["cp", "iv_pct"]
},
"walk_max": 4.16,
"walk_min": 2.16,
"alt_min": 500,
"alt_max": 1000,
"sleep_schedule": {
"enabled": true,
"enable_reminder": false,
"reminder_interval": 600,
"entries": [
{
"enabled": true,
"time": "2:00",
"duration": "5:30",
"time_random_offset": "00:30",
"duration_random_offset": "00:30",
"wake_up_at_location": ""
},
{
"enabled": true,
"time": "17:45",
"duration": "3:00",
"time_random_offset": "01:00",
"duration_random_offset": "00:30",
"wake_up_at_location": ""
}
]
},
"debug": false,
"test": false,
"walker_limit_output": false,
"health_record": true,
"location_cache": true,
"distance_unit": "km",
"reconnecting_timeout": 15,
"logging": {
"color": true,
"show_datetime": true,
"show_process_name": true,
"show_log_level": true
},
"catch": {
"any": {"catch_above_cp": 0, "catch_above_iv": 0, "logic": "or" },
"// Pokemons with example": { "always_catch": true },
"// Gets filtered with release parameters": {},
"// Legendary pokemons (Goes under S-Tier)": {},
"Lapras": { "always_catch": true },
"Moltres": { "always_catch": true },
"Zapdos": { "always_catch": true },
"Articuno": { "always_catch": true },
"// S-Tier pokemons (if pokemon can be evolved into tier, list the representative)": {},
"Mewtwo": { "always_catch": true },
"Dragonite": { "always_catch": true },
"Snorlax": { "always_catch": true },
"// Mew evolves to Mewtwo": {},
"Mew": { "always_catch": true },
"Arcanine": { "always_catch": true },
"Vaporeon": { "always_catch": true },
"Gyarados": { "always_catch": true },
"Exeggutor": { "always_catch": true },
"Muk": { "always_catch": true },
"Weezing": { "always_catch": true },
"Flareon": { "always_catch": true },
"// Growlithe evolves to Arcanine": {},
"Growlithe": { "always_catch": true },
"// Dragonair evolves to Dragonite": {},
"Dragonair": { "always_catch": true },
"// Grimer evolves to Muk": {},
"Grimer": { "always_catch": true },
"// Magikarp evolves to Gyarados": {},
"Magikarp": { "always_catch": true },
"// Exeggcute evolves to Exeggutor": {},
"Exeggcute": { "always_catch": true },
"// Eevee evolves to many versions, like Vaporeon, Flareon": {},
"Eevee": { "always_catch": true },
"// A-Tier pokemons": {},
"Slowbro": { "always_catch": true },
"Victreebel": { "always_catch": true },
"Machamp": { "always_catch": true },
"Poliwrath": { "always_catch": true },
"Clefable": { "always_catch": true },
"Nidoking": { "always_catch": true },
"Venusaur": { "always_catch": true },
"Charizard": { "always_catch": true },
"Golduck": { "always_catch": true },
"Nidoqueen": { "always_catch": true },
"Vileplume": { "always_catch": true },
"Blastoise": { "always_catch": true },
"Omastar": { "always_catch": true },
"Aerodactyl": { "always_catch": true },
"Golem": { "always_catch": true },
"Wigglytuff": { "always_catch": true },
"Dewgong": { "always_catch": true },
"Ninetales": { "always_catch": true },
"Magmar": { "always_catch": true },
"Kabutops": { "always_catch": true },
"Electabuzz": { "always_catch": true },
"Starmie": { "always_catch": true },
"Jolteon": { "always_catch": true },
"Rapidash": { "always_catch": true },
"Pinsir": { "always_catch": true },
"Scyther": { "always_catch": true },
"Tentacruel": { "always_catch": true },
"Gengar": { "always_catch": true },
"Hypno": { "always_catch": true },
"Pidgeot": { "always_catch": true },
"Rhydon": { "always_catch": true },
"Seaking": { "always_catch": true },
"Kangaskhan": { "always_catch": true }
},
"release": {
"any": {"release_below_cp": 0, "release_below_iv": 0, "release_below_ivcp": 0, "logic": "or" },
"// Legendary pokemons (Goes under S-Tier)": {},
"Lapras": { "release_below_cp": 1041, "release_below_iv": 0.8, "logic": "and" },
"Moltres": { "release_below_cp": 1132, "release_below_iv": 0.8, "logic": "and" },
"Zapdos": { "release_below_cp": 1087, "release_below_iv": 0.8, "logic": "and" },
"Articuno": { "release_below_cp": 1039, "release_below_iv": 0.8, "logic": "and" },
"// S-Tier pokemons (if pokemon can be evolved into tier, list the representative)": {},
"Mewtwo": { "release_below_cp": 1447, "release_below_iv": 0.8, "logic": "and"},
"Dragonite": { "release_below_cp": 1221, "release_below_iv": 0.8, "logic": "and" },
"Snorlax": { "release_below_cp": 1087, "release_below_iv": 0.8, "logic": "and" },
"// Mew evolves to Mewtwo": {},
"Mew": { "release_below_cp": 1152, "release_below_iv": 0.8, "logic": "and" },
"Arcanine": { "release_below_cp": 1041, "release_below_iv": 0.8, "logic": "and" },
"Vaporeon": { "release_below_cp": 984, "release_below_iv": 0.8, "logic": "and" },
"Gyarados": { "release_below_cp": 938, "release_below_iv": 0.8, "logic": "and" },
"Exeggutor": { "release_below_cp": 1032, "release_below_iv": 0.8, "logic": "and" },
"Muk": { "release_below_cp": 909, "release_below_iv": 0.8, "logic": "and" },
"Weezing": { "release_below_cp": 784, "release_below_iv": 0.8, "logic": "and" },
"Flareon": { "release_below_cp": 924, "release_below_iv": 0.8, "logic": "and" },
"// Growlithe evolves to Arcanine": {},
"Growlithe": { "release_below_cp": 465, "release_below_iv": 0.8, "logic": "and" },
"// Dragonair evolves to Dragonite": {},
"Dragonair": { "release_below_cp": 609, "release_below_iv": 0.8, "logic": "and" },
"// Grimer evolves to Muk": {},
"Grimer": { "release_below_cp": 448, "release_below_iv": 0.8, "logic": "and" },
"// Magikarp evolves to Gyarados": {},
"Magikarp": { "release_below_cp": 91, "release_below_iv": 0.8, "logic": "and" },
"// Exeggcute evolves to Exeggutor": {},
"Exeggcute": { "release_below_cp": 384, "release_below_iv": 0.8, "logic": "and" },
"// Eevee evolves to many versions, like Vaporeon, Flareon": {},
"Eevee": { "release_below_cp": 376, "release_below_iv": 0.8, "logic": "and" },
"// A-Tier pokemons": {},
"Slowbro": { "release_below_cp": 907, "release_below_iv": 0.8, "logic": "and" },
"Victreebel": { "release_below_cp": 883, "release_below_iv": 0.8, "logic": "and" },
"Machamp": { "release_below_cp": 907, "release_below_iv": 0.8, "logic": "and" },
"Poliwrath": { "release_below_cp": 876, "release_below_iv": 0.8, "logic": "and" },
"Clefable": { "release_below_cp": 837, "release_below_iv": 0.8, "logic": "and" },
"Nidoking": { "release_below_cp": 864, "release_below_iv": 0.8, "logic": "and" },
"Venusaur": { "release_below_cp": 902, "release_below_iv": 0.8, "logic": "and" },
"Charizard": { "release_below_cp": 909, "release_below_iv": 0.8, "logic": "and" },
"Golduck": { "release_below_cp": 832, "release_below_iv": 0.8, "logic": "and" },
"Nidoqueen": { "release_below_cp": 868, "release_below_iv": 0.8, "logic": "and" },
"Vileplume": { "release_below_cp": 871, "release_below_iv": 0.8, "logic": "and" },
"Blastoise": { "release_below_cp": 888, "release_below_iv": 0.8, "logic": "and" },
"Omastar": { "release_below_cp": 780, "release_below_iv": 0.8, "logic": "and" },
"Aerodactyl": { "release_below_cp": 756, "release_below_iv": 0.8, "logic": "and" },
"Golem": { "release_below_cp": 804, "release_below_iv": 0.8, "logic": "and" },
"Wigglytuff": { "release_below_cp": 760, "release_below_iv": 0.8, "logic": "and" },
"Dewgong": { "release_below_cp": 748, "release_below_iv": 0.8, "logic": "and" },
"Ninetales": { "release_below_cp": 763, "release_below_iv": 0.8, "logic": "and" },
"Magmar": { "release_below_cp": 792, "release_below_iv": 0.8, "logic": "and" },
"Kabutops": { "release_below_cp": 744, "release_below_iv": 0.8, "logic": "and" },
"Electabuzz": { "release_below_cp": 739, "release_below_iv": 0.8, "logic": "and" },
"Starmie": { "release_below_cp": 763, "release_below_iv": 0.8, "logic": "and" },
"Jolteon": { "release_below_cp": 746, "release_below_iv": 0.8, "logic": "and" },
"Rapidash": { "release_below_cp": 768, "release_below_iv": 0.8, "logic": "and" },
"Pinsir": { "release_below_cp": 741, "release_below_iv": 0.8, "logic": "and" },
"Scyther": { "release_below_cp": 724, "release_below_iv": 0.8, "logic": "and" },
"Tentacruel": { "release_below_cp": 775, "release_below_iv": 0.8, "logic": "and" },
"Gengar": { "release_below_cp": 724, "release_below_iv": 0.8, "logic": "and" },
"Hypno": { "release_below_cp": 763, "release_below_iv": 0.8, "logic": "and" },
"Pidgeot": { "release_below_cp": 729, "release_below_iv": 0.8, "logic": "and" },
"Rhydon": { "release_below_cp": 782, "release_below_iv": 0.8, "logic": "and" },
"Seaking": { "release_below_cp": 712, "release_below_iv": 0.8, "logic": "and" },
"Kangaskhan": { "release_below_cp": 712, "release_below_iv": 0.8, "logic": "and" },
"// Koffing evolves to Weezing (A-Tier)": {},
"Koffing": { "release_below_cp": 403, "release_below_iv": 0.8, "logic": "and" },
"// Below is B-tier and lower pokemons": {},
"Caterpie": { "release_below_cp": 156, "release_below_iv": 0.8, "logic": "and" },
"Weedle": { "release_below_cp": 156, "release_below_iv": 0.8, "logic": "and" },
"Diglett": { "release_below_cp": 158, "release_below_iv": 0.8, "logic": "and" },
"Metapod": { "release_below_cp": 168, "release_below_iv": 0.8, "logic": "and" },
"Kakuna": { "release_below_cp": 170, "release_below_iv": 0.8, "logic": "and" },
"Rattata": { "release_below_cp": 204, "release_below_iv": 0.8, "logic": "and" },
"Abra": { "release_below_cp": 208, "release_below_iv": 0.8, "logic": "and" },
"Zubat": { "release_below_cp": 225, "release_below_iv": 0.8, "logic": "and" },
"Chansey": { "release_below_cp": 235, "release_below_iv": 0.8, "logic": "and" },
"Pidgey": { "release_below_cp": 237, "release_below_iv": 0.8, "logic": "and" },
"Spearow": { "release_below_cp": 240, "release_below_iv": 0.8, "logic": "and" },
"Meowth": { "release_below_cp": 264, "release_below_iv": 0.8, "logic": "and" },
"Krabby": { "release_below_cp": 276, "release_below_iv": 0.8, "logic": "and" },
"Sandshrew": { "release_below_cp": 278, "release_below_iv": 0.8, "logic": "and" },
"Poliwag": { "release_below_cp": 278, "release_below_iv": 0.8, "logic": "and" },
"Horsea": { "release_below_cp": 278, "release_below_iv": 0.8, "logic": "and" },
"Gastly": { "release_below_cp": 280, "release_below_iv": 0.8, "logic": "and" },
"Ekans": { "release_below_cp": 288, "release_below_iv": 0.8, "logic": "and" },
"Shellder": { "release_below_cp": 288, "release_below_iv": 0.8, "logic": "and" },
"Vulpix": { "release_below_cp": 290, "release_below_iv": 0.8, "logic": "and" },
"Voltorb": { "release_below_cp": 292, "release_below_iv": 0.8, "logic": "and" },
"Geodude": { "release_below_cp": 297, "release_below_iv": 0.8, "logic": "and" },
"Doduo": { "release_below_cp": 297, "release_below_iv": 0.8, "logic": "and" },
"Onix": { "release_below_cp": 300, "release_below_iv": 0.8, "logic": "and" },
"Mankey": { "release_below_cp": 307, "release_below_iv": 0.8, "logic": "and" },
"Pikachu": { "release_below_cp": 309, "release_below_iv": 0.8, "logic": "and" },
"Magnemite": { "release_below_cp": 312, "release_below_iv": 0.8, "logic": "and" },
"Tentacool": { "release_below_cp": 316, "release_below_iv": 0.8, "logic": "and" },
"Paras": { "release_below_cp": 319, "release_below_iv": 0.8, "logic": "and" },
"Jigglypuff": { "release_below_cp": 321, "release_below_iv": 0.8, "logic": "and" },
"Ditto": { "release_below_cp": 321, "release_below_iv": 0.8, "logic": "and" },
"Staryu": { "release_below_cp": 326, "release_below_iv": 0.8, "logic": "and" },
"Charmander": { "release_below_cp": 333, "release_below_iv": 0.8, "logic": "and" },
"Goldeen": { "release_below_cp": 336, "release_below_iv": 0.8, "logic": "and" },
"Squirtle": { "release_below_cp": 352, "release_below_iv": 0.8, "logic": "and" },
"Cubone": { "release_below_cp": 352, "release_below_iv": 0.8, "logic": "and" },
"Venonat": { "release_below_cp": 360, "release_below_iv": 0.8, "logic": "and" },
"Bulbasaur": { "release_below_cp": 374, "release_below_iv": 0.8, "logic": "and" },
"Drowzee": { "release_below_cp": 374, "release_below_iv": 0.8, "logic": "and" },
"Machop": { "release_below_cp": 381, "release_below_iv": 0.8, "logic": "and" },
"Psyduck": { "release_below_cp": 386, "release_below_iv": 0.8, "logic": "and" },
"Seel": { "release_below_cp": 386, "release_below_iv": 0.8, "logic": "and" },
"Kabuto": { "release_below_cp": 386, "release_below_iv": 0.8, "logic": "and" },
"Bellsprout": { "release_below_cp": 391, "release_below_iv": 0.8, "logic": "and" },
"Omanyte": { "release_below_cp": 391, "release_below_iv": 0.8, "logic": "and" },
"Kadabra": { "release_below_cp": 396, "release_below_iv": 0.8, "logic": "and" },
"Oddish": { "release_below_cp": 400, "release_below_iv": 0.8, "logic": "and" },
"Dugtrio": { "release_below_cp": 408, "release_below_iv": 0.8, "logic": "and" },
"Rhyhorn": { "release_below_cp": 412, "release_below_iv": 0.8, "logic": "and" },
"Clefairy": { "release_below_cp": 420, "release_below_iv": 0.8, "logic": "and" },
"Slowpoke": { "release_below_cp": 424, "release_below_iv": 0.8, "logic": "and" },
"Pidgeotto": { "release_below_cp": 427, "release_below_iv": 0.8, "logic": "and" },
"Farfetch'd": { "release_below_cp": 441, "release_below_iv": 0.8, "logic": "and" },
"Poliwhirl": { "release_below_cp": 468, "release_below_iv": 0.8, "logic": "and" },
"Nidorino": { "release_below_cp": 480, "release_below_iv": 0.8, "logic": "and" },
"Haunter": { "release_below_cp": 482, "release_below_iv": 0.8, "logic": "and" },
"Nidorina": { "release_below_cp": 489, "release_below_iv": 0.8, "logic": "and" },
"Graveler": { "release_below_cp": 501, "release_below_iv": 0.8, "logic": "and" },
"Beedrill": { "release_below_cp": 504, "release_below_iv": 0.8, "logic": "and" },
"Raticate": { "release_below_cp": 504, "release_below_iv": 0.8, "logic": "and" },
"Butterfree": { "release_below_cp": 508, "release_below_iv": 0.8, "logic": "and" },
"Hitmonlee": { "release_below_cp": 520, "release_below_iv": 0.8, "logic": "and" },
"Ponyta": { "release_below_cp": 530, "release_below_iv": 0.8, "logic": "and" },
"Hitmonchan": { "release_below_cp": 530, "release_below_iv": 0.8, "logic": "and" },
"Charmeleon": { "release_below_cp": 544, "release_below_iv": 0.8, "logic": "and" },
"Wartortle": { "release_below_cp": 552, "release_below_iv": 0.8, "logic": "and" },
"Persian": { "release_below_cp": 568, "release_below_iv": 0.8, "logic": "and" },
"Lickitung": { "release_below_cp": 568, "release_below_iv": 0.8, "logic": "and" },
"Ivysaur": { "release_below_cp": 571, "release_below_iv": 0.8, "logic": "and" },
"Electrode": { "release_below_cp": 576, "release_below_iv": 0.8, "logic": "and" },
"Marowak": { "release_below_cp": 578, "release_below_iv": 0.8, "logic": "and" },
"Gloom": { "release_below_cp": 590, "release_below_iv": 0.8, "logic": "and" },
"Porygon": { "release_below_cp": 590, "release_below_iv": 0.8, "logic": "and" },
"Seadra": { "release_below_cp": 597, "release_below_iv": 0.8, "logic": "and" },
"Jynx": { "release_below_cp": 600, "release_below_iv": 0.8, "logic": "and" },
"Weepinbell": { "release_below_cp": 602, "release_below_iv": 0.8, "logic": "and" },
"Tangela": { "release_below_cp": 607, "release_below_iv": 0.8, "logic": "and" },
"Fearow": { "release_below_cp": 609, "release_below_iv": 0.8, "logic": "and" },
"Parasect": { "release_below_cp": 609, "release_below_iv": 0.8, "logic": "and" },
"Machoke": { "release_below_cp": 614, "release_below_iv": 0.8, "logic": "and" },
"Arbok": { "release_below_cp": 616, "release_below_iv": 0.8, "logic": "and" },
"Sandslash": { "release_below_cp": 631, "release_below_iv": 0.8, "logic": "and" },
"Alakazam": { "release_below_cp": 633, "release_below_iv": 0.8, "logic": "and" },
"Kingler": { "release_below_cp": 636, "release_below_iv": 0.8, "logic": "and" },
"Dodrio": { "release_below_cp": 640, "release_below_iv": 0.8, "logic": "and" },
"Tauros": { "release_below_cp": 643, "release_below_iv": 0.8, "logic": "and" },
"Primeape": { "release_below_cp": 650, "release_below_iv": 0.8, "logic": "and" },
"Magneton": { "release_below_cp": 657, "release_below_iv": 0.8, "logic": "and" },
"Venomoth": { "release_below_cp": 660, "release_below_iv": 0.8, "logic": "and" },
"Golbat": { "release_below_cp": 672, "release_below_iv": 0.8, "logic": "and" },
"Raichu": { "release_below_cp": 708, "release_below_iv": 0.8, "logic": "and" },
"Cloyster": { "release_below_cp": 717, "release_below_iv": 0.8, "logic": "and"},
"Mr. Mime": { "release_below_cp": 650, "release_below_iv": 0.8, "logic": "and" }
},
"vips" : {
"Any pokemon put here directly force to use Berry & Best Ball to capture, to secure the capture rate": {},
"any": {"catch_above_cp": 1200, "catch_above_iv": 0.9, "logic": "or" },
"Lapras": {},
"Moltres": {},
"Zapdos": {},
"Articuno": {},
"// S-Tier pokemons (if pokemon can be evolved into tier, list the representative)": {},
"Mewtwo": {},
"Dragonite": {},
"Snorlax": {},
"// Mew evolves to Mewtwo": {},
"Mew": {},
"Arcanine": {},
"Vaporeon": {},
"Gyarados": {},
"Exeggutor": {},
"Muk": {},
"Weezing": {},
"Flareon": {}
},
"websocket": {
"start_embedded_server": true,
"server_url": "127.0.0.1:4000"
}
}
```
auth.json
```
{
"auth_service": "google",
"username": "x@gmail.com",
"password": "xx",
"location": "Paris, Louvre",
"favorite_locations":[
{"name": "Milan", "coords": "45.472849,9.177567"}
],
"gmapkey": "x x x ",
"encrypt_location": "",
"telegram_token": ""
}
```
### Output when issue occurred
```
2016-09-17 14:10:13,442 [ cli] [INFO] PokemonGO Bot v1.0
2016-09-17 14:10:13,461 [ cli] [INFO] commit: fef76945
2016-09-17 14:10:13,481 [ cli] [INFO] Configuration initialized
2016-09-17 14:10:13,483 [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. For more information:
2016-09-17 14:10:13,483 [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics
2016-09-17 14:10:13,504 [requests.packages.urllib3.connectionpool] [INFO] Starting new HTTP connection (1): www.google-analytics.com
(11123) wsgi starting up on http://127.0.0.1:4000
[2016-09-17 14:10:13] [MainThread] [SleepSchedule] [INFO] Next sleep at 17:06:07, for a duration of 02:39:45
[2016-09-17 14:10:13] [MainThread] [PokemonGoBot] [INFO] Setting start location.
[2016-09-17 14:10:13] [MainThread] [PokemonGoBot] [INFO] Location found: Paris, Louvre (48.8638931, 2.3423476, 0.0)
[2016-09-17 14:10:13] [MainThread] [PokemonGoBot] [INFO] Now at (48.8638931, 2.3423476, 0.0)
[2016-09-17 14:10:13] [MainThread] [PokemonGoBot] [INFO] Login procedure started.
_inventory was not initialized
_inventory was not initialized
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO]
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO] Ran for 0:00:00
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO] Total XP Earned: 0 Average: 0.00/h
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO] Travelled 0.00km
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO] Visited 0 stops
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO] Encountered 0 pokemon, 0 caught, 0 released, 0 evolved, 0 never seen before ()
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO] Threw 0 pokeballs
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO] Earned 0 Stardust
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO] Hatched eggs 0
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO]
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO] Highest CP Pokemon:
[2016-09-17 14:10:13] [MainThread] [ cli] [INFO] Most Perfect Pokemon:
Traceback (most recent call last):
File "pokecli.py", line 841, in <module>
main()
File "pokecli.py", line 189, in main
bot = start_bot(bot, config)
File "pokecli.py", line 144, in start_bot
bot.start()
File "/home/pi/pokebot/PokemonGo-Bot/pokemongo_bot/__init__.py", line 142, in start
self._setup_api()
File "/home/pi/pokebot/PokemonGo-Bot/pokemongo_bot/__init__.py", line 952, in _setup_api
self.login()
File "/home/pi/pokebot/PokemonGo-Bot/pokemongo_bot/__init__.py", line 886, in login
str(self.config.password))
File "/home/pi/pokebot/PokemonGo-Bot/pokemongo_bot/api_wrapper.py", line 96, in login
password=password
File "/home/pi/pokebot/PokemonGo-Bot/src/pgoapi/pgoapi/pgoapi.py", line 94, in set_authentication
if not self._auth_provider.user_login(username, password):
File "/home/pi/pokebot/PokemonGo-Bot/src/pgoapi/pgoapi/auth_google.py", line 58, in user_login
user_login = perform_master_login(username, password, self.GOOGLE_LOGIN_ANDROID_ID)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/gpsoauth/__init__.py", line 66, in perform_master_login
return _perform_auth_request(data)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/gpsoauth/__init__.py", line 22, in _perform_auth_request
headers={'User-Agent': useragent})
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/requests/api.py", line 111, in post
return request('post', url, data=data, json=json, **kwargs)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/requests/api.py", line 57, in request
return session.request(method=method, url=url, **kwargs)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/requests/sessions.py", line 475, in request
resp = self.send(prep, **send_kwargs)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/requests/sessions.py", line 585, in send
r = adapter.send(request, **kwargs)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/requests/adapters.py", line 477, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
[2016-09-17 14:10:14] [MainThread] [sentry.errors] [ERROR] Sentry responded with an error: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128) (url: https://app.getsentry.com/api/90254/store/)
Traceback (most recent call last):
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/raven/transport/threaded.py", line 174, in send_sync
super(ThreadedHTTPTransport, self).send(data, headers)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/raven/transport/http.py", line 47, in send
ca_certs=self.ca_certs,
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/raven/utils/http.py", line 66, in urlopen
return opener.open(url, data, timeout)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/future/backports/urllib/request.py", line 494, in open
response = self._open(req, data)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/future/backports/urllib/request.py", line 512, in _open
'_open', req)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/future/backports/urllib/request.py", line 466, in _call_chain
result = func(*args)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/raven/utils/http.py", line 46, in https_open
return self.do_open(ValidHTTPSConnection, req)
File "/home/pi/pokebot/PokemonGo-Bot/local/lib/python2.7/site-packages/future/backports/urllib/request.py", line 1284, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "/usr/lib/python2.7/httplib.py", line 1001, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.7/httplib.py", line 1035, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 997, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 848, in _send_output
msg += message_body
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9c in position 1: ordinal not in range(128)
[2016-09-17 14:10:14] [MainThread] [sentry.errors.uncaught] [ERROR] [u'SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)', u' File "pokecli.py", line 841, in <module>', u' File "pokecli.py", line 189, in main', u' File "pokecli.py", line 144, in start_bot', u' File "pokemongo_bot/__init__.py", line 142, in start', u' File "pokemongo_bot/__init__.py", line 952, in _setup_api', u' File "pokemongo_bot/__init__.py", line 886, in login', u' File "pokemongo_bot/api_wrapper.py", line 96, in login', u' File "pgoapi/pgoapi.py", line 94, in set_authentication', u' File "pgoapi/auth_google.py", line 58, in user_login', u' File "gpsoauth/__init__.py", line 66, in perform_master_login', u' File "gpsoauth/__init__.py", line 22, in _perform_auth_request', u' File "requests/api.py", line 111, in post', u' File "requests/api.py", line 57, in request', u' File "requests/sessions.py", line 475, in request', u' File "requests/sessions.py", line 585, in send', u' File "requests/adapters.py", line 477, in send']
Sat 17 Sep 14:10:14 CEST 2016 Pokebot Stopped.
Press any button or wait 20 seconds to continue.
```
### Steps to Reproduce
This happens any time I start the bot with my Gmail account; a PTC account works fine.
### Other Information
OS: Linux (Debian)
Branch: dev
Git Commit: fef76945
I downloaded encrypt.so again, but it did no good; a clean install with a fresh config file didn't help either.
| closed | 2016-09-17T12:19:00Z | 2016-09-22T03:41:57Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5511 | [] | prusterle | 15 |
keras-team/keras | tensorflow | 20,731 | similar functions for `from_tensor` `to_tensor` from ragged api | I think ragged tensors aren't supported yet, but is there any way to handle cases like the following?
```python
import tensorflow as tf
from tensorflow import keras

# TF ragged helpers whose Keras equivalents I'm looking for:
#   tf.RaggedTensor.from_tensor
#   tf.RaggedTensor.to_tensor

class RaggedToDenseTensor(keras.layers.Layer):
    """Current workaround: densify any RaggedTensor input."""

    def __init__(self, **kwargs):
        super(RaggedToDenseTensor, self).__init__(**kwargs)

    def call(self, inputs):
        if isinstance(inputs, tf.RaggedTensor):
            inputs = inputs.to_tensor()  # pad ragged rows into a dense tensor
        return inputs
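
# As a usage illustration (editor's assumption, not from the report):
#   rt = tf.ragged.constant([[1, 2], [3]])
#   dense = RaggedToDenseTensor()(rt)  # -> padded dense tensor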
``` | closed | 2025-01-06T20:27:24Z | 2025-01-14T23:40:27Z | https://github.com/keras-team/keras/issues/20731 | [
"type:support"
] | innat | 6 |
JaidedAI/EasyOCR | machine-learning | 334 | French language support | Hi,
the model is not detecting the French language; how can I add the model or train it? | closed | 2020-12-17T14:43:31Z | 2021-01-14T04:21:00Z | https://github.com/JaidedAI/EasyOCR/issues/334 | [] | AnassKartit | 1 |
airtai/faststream | asyncio | 1,540 | Feature: faststream should only require opentelemetry-api | To make it easier to switch the OpenTelemetry implementation, it should be up to the user to install one. The documentation can suggest `opentelemetry-sdk`, but it should not be installed by default. This would allow a user to replace `opentelemetry-sdk` with a different implementation (e.g. Datadog's ddtrace, which also implements the opentelemetry-api: https://ddtrace.readthedocs.io/en/stable/api.html#opentelemetry-api).
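
For illustration, a minimal sketch of what this separation enables (standard OpenTelemetry API calls; the `pip install` lines are the user's choice, not faststream requirements):

```python
# Only the API is imported by library code; no SDK dependency is needed here.
from opentelemetry import trace

# The user installs and wires whichever implementation they prefer, e.g.:
#   pip install opentelemetry-sdk   # reference SDK
#   pip install ddtrace             # Datadog's implementation
from opentelemetry.sdk.trace import TracerProvider  # assumes opentelemetry-sdk

trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer("my_app")  # library code keeps working unchanged
```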
This is also recommended by OpenTelemetry here: https://opentelemetry.io/docs/concepts/instrumentation/libraries/#opentelemetry-api | open | 2024-06-20T13:36:38Z | 2024-07-02T21:47:13Z | https://github.com/airtai/faststream/issues/1540 | [
"enhancement"
] | florianmutter | 2 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,388 | Getting detected by cloudflare | undetected-chromedriver worked well until yesterday, but Cloudflare has now improved and the chromedriver no longer bypasses it. I have attached a screenshot: Cloudflare just loops the captcha while Selenium is running. When I close the script, the website loads.

This is my code snippet

It works fine when the script isn't running.
I also tried changing the binary to chrome rather than brave, but the issue still persists
| open | 2023-07-11T17:27:33Z | 2025-02-04T20:26:42Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1388 | [] | vndhote | 56 |
CorentinJ/Real-Time-Voice-Cloning | python | 519 | AttributeError: module 'umap' has no attribute 'UMAP' on Windows 10 | Not sure if it is a platform-specific problem; all umap imports would need to be changed to the format
`import umap.umap_ as umap`
to resolve this error, which occurs in multiple files:
`AttributeError: module 'umap' has no attribute 'UMAP'`
| closed | 2020-09-03T01:31:28Z | 2020-09-04T00:01:31Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/519 | [] | lawrence124 | 4 |
allenai/allennlp | data-science | 5,020 | Try the DETR object detection model with our vision+language tasks | [DETR](https://github.com/facebookresearch/detr) is an interesting object detection model that fits into our `RegionDetector` abstraction. This task is about porting the DETR model to AllenNLP and trying it on all the vision+language tasks that are implemented right now. At the moment, this is VQAv2, GQA, and Visual Entailment, but by the time this gets picked up there could be more.
DETR comes with some pre-trained weights, which we should try first. It would be a success to get within five points of our existing benchmarks on these tasks.
If that works, we can try a second step: Move the `DetrRegionDetector` from the dataset reader to the model, and fine-tune its weights while training on the tasks.
The best way to get started would be this:
1. Clone the [allennlp-models repo](https://github.com/allenai/allennlp-models) and run the existing training jobs for VQA, GQA, and Visual Entailment. There is a bit of setup involved with this because the datasets are so large, so this will come in handy later.
2. Start a new repo using the [AllenNLP Repository Template](https://github.com/allenai/allennlp-template-config-files), copy the GQA/VQA/VE models into it, and make sure they still run.
3. Write a `DetrRegionDetector`, using the structure from [`FasterRcnnRegionDetector`](https://github.com/allenai/allennlp/blob/main/allennlp/modules/vision/region_detector.py#L114), but using the code from [DETR](https://github.com/facebookresearch/detr).
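
   For illustration, a rough sketch of what step 3 could look like (hypothetical class: the `torch.hub` entry point is the one DETR's README documents, but the method names and output contract here are assumptions, not AllenNLP's actual `RegionDetector` API):

```python
import torch
from torch import nn

class DetrRegionDetector(nn.Module):
    """Sketch: propose boxes with pretrained DETR instead of Faster R-CNN."""

    def __init__(self, threshold: float = 0.7):
        super().__init__()
        # DETR publishes pretrained weights through torch.hub.
        self.detr = torch.hub.load(
            "facebookresearch/detr", "detr_resnet50", pretrained=True
        )
        self.threshold = threshold

    def forward(self, images: torch.FloatTensor):
        outputs = self.detr(images)
        # Drop the trailing "no object" class, keep confident detections.
        probas = outputs["pred_logits"].softmax(-1)[..., :-1]
        keep = probas.max(-1).values > self.threshold
        return {"boxes": outputs["pred_boxes"], "scores": probas, "keep": keep}
```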
4. Write a training config using the new region detector, train it, and compare scores. | open | 2021-02-25T00:45:37Z | 2021-03-09T23:41:02Z | https://github.com/allenai/allennlp/issues/5020 | [
"Contributions welcome",
"Models",
"medium"
] | dirkgr | 0 |
jmcnamara/XlsxWriter | pandas | 731 | timedeltas shifted by 24h | Hi,
I am using XlsxWriter to show elapsed times in an "hours:minutes" format, but it appears to add 24h to the value. With the code below I expect to see "1500:00", but I see "1524:00". This only happens with timedeltas > ~1440h.
I am using Python version 3.6.9 and XlsxWriter 0.9.6 and Excel version 2016.
Here is some code that demonstrates the problem:
```python
from datetime import timedelta
import xlsxwriter
workbook = xlsxwriter.Workbook('timedelta.xlsx')
worksheet = workbook.add_worksheet()
delta_format = workbook.add_format({'num_format': '[HH]:MM'})
worksheet.write('A1', timedelta(hours=1500), delta_format)
workbook.close()
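
# Editor's note (an assumption, not from the report): writing the duration as a
# raw day-fraction via write_number appears to sidestep the datetime conversion:
#     worksheet.write_number('A1', 1500 / 24.0, delta_format)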
``` | closed | 2020-07-14T11:02:06Z | 2021-03-29T19:00:48Z | https://github.com/jmcnamara/XlsxWriter/issues/731 | [
"bug",
"wont_fix",
"under investigation"
] | ktosiek | 9 |
django-import-export/django-import-export | django | 1,082 | admin not working with utf-8 import of csv file | When trying to import a file containing names with foreign characters in UTF-8 format using the admin mixin, import-export either fails or mangles the characters.
This seems to be because the uploaded data is stored in a file in the temp directory in text mode prior to processing, which then chokes on certain Unicode byte sequences. If I change `get_read_mode` in `formats` to 'rb', everything works properly.
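
A minimal sketch of that change as a user-side override (hedged: `get_read_mode` and `base_formats.CSV` exist in django-import-export, but the exact wiring into the admin mixin is assumed):

```python
from import_export.formats import base_formats

class BinaryCSV(base_formats.CSV):
    def get_read_mode(self):
        return "rb"  # read the uploaded temp file as bytes, not text
```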
Is this a bug, or am I not setting something up correctly.
I'm using django 3.1 and python 3.8
Thanks, | closed | 2020-02-19T17:26:22Z | 2020-05-28T07:24:59Z | https://github.com/django-import-export/django-import-export/issues/1082 | [] | bilkusg | 4 |
tensorlayer/TensorLayer | tensorflow | 782 | Feature request: TL implementation of CornerPooling |
```python
class CornerPool(Layer):
    """The :class:`CornerPool` class is a 2D corner pooling layer, see `here <https://arxiv.org/abs/1808.01244/>`__.

    Parameters
    --------------
    prev_layer : :class:`Layer`
        Previous layer.
    filter_size : tuple of int
        The filter size.
    mode : str
        'BottomRight' for the top-left corner,
        'TopLeft' for the bottom-right corner.
    name : str
        A unique layer name.
    """

    def __init__(
            self,
            prev_layer=None,
            filter_size=(3, 3),
            mode='BottomRight',
            name='cornerpool_layer',
    ):
        Layer.__init__(self, prev_layer=prev_layer, name=name)
        self.inputs = prev_layer.outputs
        if mode == 'BottomRight':
            # Pad on the bottom/right, then max-pool along each axis in turn.
            temp = tf.keras.layers.ZeroPadding2D(
                padding=((0, filter_size[0] - 1), (0, filter_size[1] - 1)), name=name)(self.inputs)
            temp = tf.layers.max_pooling2d(temp, (filter_size[0], 1), (1, 1),
                                           padding='valid', data_format='channels_last')
            self.outputs = tf.layers.max_pooling2d(temp, (1, filter_size[1]), (1, 1),
                                                   padding='valid', data_format='channels_last', name=name)
        elif mode == 'TopLeft':
            # Pad on the top/left instead.
            temp = tf.keras.layers.ZeroPadding2D(
                padding=((filter_size[0] - 1, 0), (filter_size[1] - 1, 0)), name=name)(self.inputs)
            temp = tf.layers.max_pooling2d(temp, (filter_size[0], 1), (1, 1),
                                           padding='valid', data_format='channels_last')
            self.outputs = tf.layers.max_pooling2d(temp, (1, filter_size[1]), (1, 1),
                                                   padding='valid', data_format='channels_last', name=name)
        else:
            raise AssertionError("mode should be one of 'BottomRight' and 'TopLeft'")
        self._add_layers(self.outputs)
```
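
For illustration, a hypothetical usage sketch in TensorLayer 1.x style (the surrounding layer names are assumptions):

```python
import tensorflow as tf
from tensorlayer.layers import InputLayer, Conv2d

x = tf.placeholder(tf.float32, [None, 32, 32, 3])
net = InputLayer(x, name='input')
net = Conv2d(net, 32, (3, 3), name='conv1')
net = CornerPool(net, filter_size=(3, 3), mode='BottomRight', name='pool_br')
```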
Maybe zero padding is not a rigorous method when the activation is leaky ReLU? | closed | 2018-08-13T12:23:51Z | 2018-08-13T12:26:23Z | https://github.com/tensorlayer/TensorLayer/issues/782 | [
"duplicate"
] | Windaway | 1 |
biolab/orange3 | scikit-learn | 6,219 | Deprecate: use_label_encoder | ```
test_XGB (test_xgb_cls.TestXGBCls) ... /home/runner/work/orange3/orange3/.tox/orange-released/lib/python3.8/site-packages/xgboost/sklearn.py:1421: UserWarning: `use_label_encoder` is deprecated in 1.7.0.
warnings.warn("`use_label_encoder` is deprecated in 1.7.0.")
/home/runner/work/orange3/orange3/.tox/orange-released/lib/python3.8/site-packages/xgboost/sklearn.py:1421: UserWarning: `use_label_encoder` is deprecated in 1.7.0.
warnings.warn("`use_label_encoder` is deprecated in 1.7.0.")
``` | closed | 2022-11-22T18:25:05Z | 2023-01-11T09:18:27Z | https://github.com/biolab/orange3/issues/6219 | [
"snack"
] | markotoplak | 1 |
iperov/DeepFaceLab | machine-learning | 883 | Error when applying XSeg Mask. Help would be appreciated ASAP | Full terminal window & error message:
Applying trained XSeg model to aligned/ folder.
Traceback (most recent call last):
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _run_fn
self._extend_graph()
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1352, in _extend_graph
tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation XSeg/conv01/conv/weight: {{node XSeg/conv01/conv/weight}}was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[{{node XSeg/conv01/conv/weight}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 324, in <module>
arguments.func(arguments)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 285, in process_xsegapply
XSegUtil.apply_xseg (Path(arguments.input_dir), Path(arguments.model_dir))
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\XSegUtil.py", line 32, in apply_xseg
raise_on_no_model_files=True)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\facelib\XSegNet.py", line 68, in __init__
do_init = not model.load_weights( model_file_path )
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Saveable.py", line 96, in load_weights
nn.batch_set_value(tuples)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 29, in batch_set_value
nn.tf_sess.run(assign_ops, feed_dict=feed_dict)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation XSeg/conv01/conv/weight: node XSeg/conv01/conv/weight (defined at C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:76) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[node XSeg/conv01/conv/weight (defined at C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:76) ]]
Caused by op 'XSeg/conv01/conv/weight', defined at:
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 324, in <module>
arguments.func(arguments)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\main.py", line 285, in process_xsegapply
XSegUtil.apply_xseg (Path(arguments.input_dir), Path(arguments.model_dir))
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\XSegUtil.py", line 32, in apply_xseg
raise_on_no_model_files=True)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\facelib\XSegNet.py", line 41, in __init__
self.model_weights = self.model.get_weights()
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 77, in get_weights
self.build()
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 65, in build
self._build_sub(v[name],name)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 35, in _build_sub
layer.build()
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 65, in build
self._build_sub(v[name],name)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 33, in _build_sub
layer.build_weights()
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 76, in build_weights
self.weight = tf.get_variable("weight", (self.kernel_size,self.kernel_size,self.in_ch,self.out_ch), dtype=self.dtype, initializer=kernel_initializer, trainable=self.trainable )
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1479, in get_variable
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1220, in get_variable
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 547, in get_variable
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 499, in _true_getter
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 911, in _get_single_variable
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 213, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 176, in _variable_v1_call
aggregation=aggregation)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 155, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2495, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 217, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1395, in __init__
constraint=constraint)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\variables.py", line 1509, in _init_from_args
name=name)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\state_ops.py", line 79, in variable_op_v2
shared_name=shared_name)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 1424, in variable_v2
shared_name=shared_name, name=name)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
op_def=op_def)
File "C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Cannot assign a device for operation XSeg/conv01/conv/weight: node XSeg/conv01/conv/weight (defined at C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:76) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[node XSeg/conv01/conv/weight (defined at C:\Users\Joshua Waghorn\Documents\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:76) ]]
Please send help ASAP. I really need to learn how to do a full head deepfake for my English Assessment. Thanks | open | 2020-09-04T06:30:57Z | 2023-06-08T21:18:14Z | https://github.com/iperov/DeepFaceLab/issues/883 | [] | Xlectron | 6 |
ydataai/ydata-profiling | jupyter | 873 | Embeddable HTML output | **Missing functionality**
Over the last few days I've tried, without much success, to adjust the current HTML so it can be embedded on a Confluence page without affecting the whole page's styles. Unfortunately, Bootstrap overwrites multiple styles, breaking the page's look.
**Proposed feature**
Make an HTML export that's easy to embed in any site without affecting the style of other components.
**Alternatives considered**
Painstaking manual editing of the current HTML output.
**Additional context**
None
| closed | 2021-11-02T11:29:15Z | 2021-11-03T20:51:40Z | https://github.com/ydataai/ydata-profiling/issues/873 | [
"feature request 💬"
] | ciberger | 1 |
huggingface/datasets | pytorch | 6,580 | dataset cache only stores one config of the dataset in parquet dir, and uses that for all other configs resulting in showing same data in all configs. | ### Describe the bug
`ds = load_dataset("ai2_arc", "ARC-Easy")`. I have tried forcing a redownload, deleting the cache, and changing the cache dir.
### Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = []
dataset_name = "ai2_arc"
possible_configs = [
    'ARC-Challenge',
    'ARC-Easy'
]
for config in possible_configs:
    dataset_slice = load_dataset(dataset_name, config, ignore_verifications=True, cache_dir='ai2_arc_files')
    dataset.append(dataset_slice)
```
### Expected behavior
All configs should get saved in the cache under their respective names.
### Environment info
ai2_arc | closed | 2024-01-11T03:14:18Z | 2024-01-20T12:46:16Z | https://github.com/huggingface/datasets/issues/6580 | [] | kartikgupta321 | 0 |
kennethreitz/responder | flask | 51 | Middleware | Supporting ASGI middleware would be a really good way to compartmentalise away bits of complexity, as we all promoting more of a cross-framework ecosystem.
I'd suggest an interface of `app.add_middleware(cls, **options)`
We could then do things like:
* Move the GZip handling out of `Response`, and use `Starlette`'s GzipMiddleware, which will also take care of handling streaming responses (once responder has those)
* Be able to use Starlette's `CORSMiddleware`.
We can set up a default set of middleware if needed based on the configuration options presented to the `API(...)` class.
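
For illustration, a minimal sketch of the proposed interface in use (the middleware classes are real Starlette ones; `add_middleware` itself is the proposal, and the option names are assumptions):

```python
import responder
from starlette.middleware.cors import CORSMiddleware
from starlette.middleware.gzip import GZipMiddleware

api = responder.API()
api.add_middleware(GZipMiddleware, minimum_size=1000)  # proposed API
api.add_middleware(CORSMiddleware, allow_origins=["*"])
```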
Any objections or considerations? | closed | 2018-10-15T11:39:52Z | 2018-10-17T19:08:09Z | https://github.com/kennethreitz/responder/issues/51 | [
"feature"
] | tomchristie | 6 |
aminalaee/sqladmin | fastapi | 879 | Postgres JSON field doesn't work with asyncpg | ### Checklist
- [x] The bug is reproducible against the latest release or `master`.
- [x] There are no similar issues or pull requests to fix it yet.
### Describe the bug
If we use a PostgreSQL JSON field in a model like
`data: Mapped[dict[Any, Any]] = mapped_column(JSON)`
it works fine on the frontend: the default shows as '{}' for the JSON field and the JSON format is validated correctly. But on create I get the error: "descriptor 'encode' for 'str' objects doesn't apply to a 'dict' object".
This happens because the raw dict object is later passed to asyncpg as-is, but it should be passed not as a dict like {"key": "value"} but as a JSON string in single quotes: `'{"key": "value"}'`.
<img width="1347" alt="Image" src="https://github.com/user-attachments/assets/e7941526-7092-441a-a50c-d79b4f10fad9" />
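
For anyone hitting this, a hedged workaround sketch at the asyncpg level (my assumption, not from the report; `set_type_codec` is standard asyncpg API):

```python
import json

async def init_connection(conn):
    # Encode Python dicts to JSON strings before they reach the json type.
    await conn.set_type_codec(
        "json", encoder=json.dumps, decoder=json.loads, schema="pg_catalog"
    )
```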
### Steps to reproduce the bug
Try to use JSON field with asyncpg and create record.
### Expected behavior
0.20.1
### Actual behavior
_No response_
### Debugging material
_No response_
### Environment
alembic 1.14.0 A database migration tool for SQLAlchemy.
annotated-types 0.7.0 Reusable constraint types to use with typing.Annotated
anyio 4.6.2.post1 High level compatibility layer for multiple asynchronous event loop implementations
asyncpg 0.29.0 An asyncio PostgreSQL driver
fastapi 0.112.4 FastAPI framework, high performance, easy to learn, fast to code, ready for production
fastapi-cli 0.0.5 Run and manage FastAPI apps from the command line with FastAPI CLI. 🚀
greenlet 3.1.1 Lightweight in-process concurrent programming
httpcore 1.0.6 A minimal low-level HTTP client.
httptools 0.6.4 A collection of framework independent HTTP protocol utils.
httpx 0.27.2 The next generation HTTP client.
identify 2.6.2 File identification library for Python
idna 3.10 Internationalized Domain Names in Applications (IDNA)
iniconfig 2.0.0 brain-dead simple config-ini parsing
itsdangerous 2.2.0 Safely pass data to untrusted environments and back.
jinja2 3.1.4 A very fast and expressive template engine.
mako 1.3.6 A super-fast templating language that borrows the best ideas from the existing templating languages.
markdown 3.7 Python implementation of John Gruber's Markdown.
markdown-code-blocks 3.1.0 Generate html from markdown with code-block highlighting
markdown-it-py 3.0.0 Python port of markdown-it. Markdown parsing, done right!
markupsafe 3.0.2 Safely add untrusted strings to HTML/XML markup.
mdurl 0.1.2 Markdown URL utilities
mistune 2.0.5 A sane Markdown parser with useful plugins and renderers
mypy 1.13.0 Optional static typing for Python
mypy-extensions 1.0.0 Type system extensions for programs checked with the mypy type checker.
nodeenv 1.9.1 Node.js virtual environment builder
oauthlib 3.2.2 A generic, spec-compliant, thorough implementation of the OAuth request-signing logic
orjson 3.10.11 Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy
packaging 24.2 Core utilities for Python packages
pathspec 0.12.1 Utility library for gitignore style pattern matching of file paths.
platformdirs 4.3.6 A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`.
pluggy 1.5.0 plugin and hook calling mechanisms for python
pre-commit 3.8.0 A framework for managing and maintaining multi-language pre-commit hooks.
psycopg2-binary 2.9.10 psycopg2 - Python-PostgreSQL Database Adapter
pycparser 2.22 C parser in Python
pydantic 2.9.2 Data validation using Python type hints
pydantic-core 2.23.4 Core functionality for Pydantic validation and serialization
pydantic-settings 2.6.1 Settings management using Pydantic
pygments 2.18.0 Pygments is a syntax highlighting package written in Python.
pytest 8.3.3 pytest: simple powerful testing with Python
pytest-asyncio 0.24.0 Pytest support for asyncio
python-dotenv 1.0.1 Read key-value pairs from a .env file and set them as environment variables
python-multipart 0.0.17 A streaming multipart parser for Python
pyyaml 6.0.2 YAML parser and emitter for Python
requests 2.32.3 Python HTTP for Humans.
requests-oauthlib 2.0.0 OAuthlib authentication support for Requests.
rich 13.9.4 Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal
ruff 0.5.7 An extremely fast Python linter and code formatter, written in Rust.
sentry-sdk 2.18.0 Python client for Sentry (https://sentry.io)
shellingham 1.5.4 Tool to Detect Surrounding Shell
sniffio 1.3.1 Sniff out which async library your code is running under
sqladmin 0.20.1 SQLAlchemy admin for FastAPI and Starlette
sqlalchemy 2.0.36 Database Abstraction Library
starlette 0.38.6 The little ASGI library that shines.
structlog 24.4.0 Structured Logging for Python
structlog-sentry 2.2.1 Sentry integration for structlog
testcontainers 4.8.2 Python library for throwaway instances of anything that can run in a Docker container
typer 0.13.0 Typer, build great CLIs. Easy to code. Based on Python type hints.
typing-extensions 4.12.2 Backported and Experimental Type Hints for Python 3.8+
urllib3 2.2.3 HTTP library with thread-safe connection pooling, file post, and more.
uvicorn 0.30.6 The lightning-fast ASGI server.
uvloop 0.21.0 Fast implementation of asyncio event loop on top of libuv
virtualenv 20.27.1 Virtual Python Environment builder
watchfiles 0.24.0 Simple, modern and high performance file watching and code reload in python.
websockets 14.1 An implementation of the WebSocket Protocol (RFC 6455 & 7692)
werkzeug 3.1.3 The comprehensive WSGI web application library.
wrapt 1.16.0 Module for decorators, wrappers and monkey patching.
wtforms 3.1.2 Form validation and rendering for Python web development.
### Additional context
_No response_ | open | 2025-02-05T22:30:33Z | 2025-02-05T22:30:33Z | https://github.com/aminalaee/sqladmin/issues/879 | [] | Vasiliy566 | 0 |
axnsan12/drf-yasg | django | 457 | swagger_auto_schema on request_body clears result schema? | ```
@swagger_auto_schema(request_body=openapi.Schema(
type=openapi.TYPE_OBJECT,
properties={
}
))
@list_route(methods=['post'])
@get_cart_viewset()
def remove_point(self, request, *args, **kwargs):
```
I get the empty response type.
```
Code | Description
-- | --
201 | Example Value: Model { }
```
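
A hedged guess at a workaround, based on drf-yasg's documented `responses` argument (the serializer name below is assumed, not taken from the report):

```python
@swagger_auto_schema(
    request_body=openapi.Schema(type=openapi.TYPE_OBJECT, properties={}),
    responses={201: CartSerializer},  # re-declare the response so it isn't cleared
)
@list_route(methods=['post'])
@get_cart_viewset()
def remove_point(self, request, *args, **kwargs):
    ...
```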
If I remove the swagger decorator
```
@list_route(methods=['post'])
@get_cart_viewset()
def remove_point(self, request, *args, **kwargs):
```
I get the following for the response type. Is this expected? I'm using drf-yasg==1.12.1
```
Code | Description
-- | --
201 | Example Value: Cart { id* (integer, title: Id), last_synced_at* (string $date-time, title: Last synced at), cart_lines* ([ CartLine { id* (integer, title: Id), cart_id (integer, title: Cart id, readOnly), item (string, title: Item, readOnly), hyper_line_id (integer, title: Hyper line id, readOnly), quantity (integer, title: 수량) } ]) }
``` | closed | 2019-09-19T07:54:35Z | 2020-10-26T01:10:36Z | https://github.com/axnsan12/drf-yasg/issues/457 | [] | pcompassion | 2 |
lepture/authlib | django | 394 | httpx integration: add support for other async backends | **Is your feature request related to a problem? Please describe.**
At the moment authlib's httpx integration only supports the default asyncio backend (i.e. usage of asyncio.Event in AsyncOAuth2Client). But httpx supports multiple backends (asyncio, trio, curio, anyio)[^1]
As a user I'd expect a library claiming to integrate with another lib to support the same environments.
**Describe the solution you'd like**
Straightforward integration with the other async backends (trio, curio, anyio).
Preferably in the way httpx provides it.
The following code snippet could be how a user chooses to use the trio backend with the AsyncOAuth2Client.
At least that is how it is done in httpx.
```python
from authlib.integrations.httpx_client import AsyncOAuth2Client
import trio
```
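
For completeness, a sketch of the usage I'd expect once other backends are supported (hypothetical today; the client credentials and token URL are placeholders):

```python
import trio
from authlib.integrations.httpx_client import AsyncOAuth2Client

async def main():
    async with AsyncOAuth2Client("client_id", "client_secret") as client:
        token = await client.fetch_token("https://example.com/oauth/token")
        print(token)

trio.run(main)
```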
[^1]: https://www.python-httpx.org/async/#supported-async-environments | closed | 2021-10-19T11:42:18Z | 2022-01-12T22:12:43Z | https://github.com/lepture/authlib/issues/394 | [
"feature request",
"client"
] | nam3less | 0 |
mwaskom/seaborn | matplotlib | 3,704 | [Bug] Plotting categorical columns includes empty categories | ### A reproducible code example that demonstrates the problem
```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
countries = ['US', 'Canada', 'Spain', 'US', 'Canada', 'Sweden', 'Jordan', 'Netherlands', 'US', 'Spain']
df = pd.DataFrame(countries, columns=['Countries'])
df['Countries'] = df['Countries'].astype('category')
filtered_df = df[df['Countries'] == 'US'].copy()
sns.countplot(filtered_df, x='Countries')
plt.show()
```
### The output that you are seeing (an image of a plot, or the error message)

### A clear explanation of why you think something is wrong
When plotting a categorical column, the resulting plot will contain all the categories even if they don't exist anymore.
I couldn't find any direct information in the documentation about this. However, I found the following example at https://seaborn.pydata.org/tutorial/categorical.html#categorical-scatterplots. Specifically the part that contains
```python
sns.catplot(data=tips.query("size != 3"), x="size", y="total_bill", native_scale=True)
```
Here, the result had an empty column at `size=3`. Nonetheless, I'm not sure this should be the case when creating a new dataframe without certain categories from the original one.
I understand that this could be more of a pandas issue than seaborn's, but I felt like this should be mentioned or be more clearly documented.
There are currently a couple of easy solutions to this problem:
```python
filtered_df['Countries'] = filtered_df['Countries'].astype('string')
# Or
filtered_df['Countries'] = filtered_df['Countries'].cat.remove_unused_categories()
```
### The specific versions of seaborn and matplotlib that you are working with
* **Python**: 3.12
* **seaborn**: 0.13.2
* **matplotlib**: 3.9.0
* This isn't specific to any version combinations though, because I observed the same behaviour with the oldest supported Python version (3.7) | closed | 2024-06-02T23:05:40Z | 2024-06-04T18:34:07Z | https://github.com/mwaskom/seaborn/issues/3704 | [] | Yazan-Sharaya | 3 |
Avaiga/taipy | data-visualization | 1,909 | [🐛 BUG] Certain python expression not working anymore in Markdown | ### What went wrong? 🤔
Using `<` in a Python expression breaks parsing. This is a regression from 3.1.
```
WARNING:root:
--- 1 warning(s) were found for page '/' in variable 'md' ---
- Warning 1: Missing leading pipe '|' in opening tag line 2: '<|{data[data["A"] < 3]}|chart|x=A|y=B|>'.
```
And the related visual element will not be shown.
Develop version:

3.1 version:

### Expected Behavior
The visual element should appear like in 3.1.
### Steps to Reproduce Issue
Use the develop version.
```python
from taipy.gui import Gui
import pandas as pd
data = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
md = """
<|{data[data["A"] < 3]}|chart|x=A|y=B|>
"""
Gui(md).run(port=2415)
```
### Version of Taipy
develop - 10/3/24
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-10-04T08:23:19Z | 2024-10-04T09:56:33Z | https://github.com/Avaiga/taipy/issues/1909 | [
"🖰 GUI",
"💥Malfunction"
] | FlorianJacta | 0 |
keras-team/keras | tensorflow | 20,902 | pytorch backend lstm very +10x slow (maybe batch size with pytorch backend has different semantics than the traditional Keras semantics?) | https://stackoverflow.com/questions/78717341/keras-training-speed-with-pytorch-backend-is-a-lot-slower-than-with-tensorflow
"""
I am on native Windows and I used old Keras with TensorFlow 2.10 (GPU accelerated) before. I wanted to try Keras 3 with PyTorch backend. Can someone please help me why this model trains 10x slower with Keras 3.4.1 and PyTorch 2.3.1 backend? With my GPU a single epoch takes a little more than 2 minutes with TF, and over 20 minutes with PyTorch.
import os
os.environ["KERAS_BACKEND"] = "torch"
import torch
torch.cuda.is_available() # <-- returns True
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras import optimizers
from keras.regularizers import l2
x_train, y_train = np.float32(x_train), np.float32(y_train)
x_val, y_val = np.float32(x_val), np.float32(y_val)
model=Sequential()
reg=0.00001
model.add(LSTM( 80, return_sequences=True , dropout=0.0, kernel_regularizer=l2(reg), recurrent_regularizer=l2(reg), input_shape=(x_train.shape[1], x_train.shape[2]) ))
model.add(LSTM( 80, return_sequences=False, dropout=0.0, kernel_regularizer=l2(reg), recurrent_regularizer=l2(reg) ))
model.add(Dense(40))
model.add(Dense(40))
model.add(Dense(1))
opt = optimizers.Adam(learning_rate=lrate)
model.compile(optimizer=opt, loss='mean_squared_error')
from keras.callbacks import ModelCheckpoint
from keras.callbacks import BackupAndRestore
savecallback = ModelCheckpoint(basefolder+"/"+modelfile, save_best_only=False, monitor='val_loss', mode='min', verbose=1)
backupcallback = BackupAndRestore(basefolder+"/tmp/backup_"+modelfile)
hist = model.fit(x_train, y_train, validation_data=(x_val, y_val), batch_size=batchsize, epochs=20, callbacks=[savecallback, backupcallback])
```
I verified GPU acceleration with both backends.
""" | open | 2025-02-14T01:44:09Z | 2025-02-14T04:53:51Z | https://github.com/keras-team/keras/issues/20902 | [
"backend:torch"
] | mw66 | 2 |
openapi-generators/openapi-python-client | rest-api | 1,085 | OpenAPI 3.0 default response not suported | **Describe the bug**
I make use of the OpenAPI 3.0 `default` response feature on every handler (https://swagger.io/docs/specification/describing-responses/#default); however, this is currently unsupported by the codegen tool.
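
For context, a minimal illustration of the `default` response pattern from the linked spec docs, written as a Python dict for consistency with the other snippets here:

```python
responses = {
    "200": {"description": "Success"},
    "default": {"description": "Unexpected error"},  # catch-all the generator skips
}
```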
**OpenAPI Spec File**
https://github.com/Southclaws/storyden/blob/main/api/openapi.yaml
**Desktop (please complete the following information):**
- OS: Windows 11, Mac OS
- Python Version: 3.12.4
- openapi-python-client version: 0.21.2
**Additional context**
| closed | 2024-07-29T18:02:07Z | 2024-07-29T18:05:54Z | https://github.com/openapi-generators/openapi-python-client/issues/1085 | [] | Southclaws | 1 |
alirezamika/autoscraper | web-scraping | 44 | Add support for specifying text encoding. | I'm working with a legacy Chinese site with BIG5 text encoding, and I'm not able to set the text encoding by passing arguments through `request_args`, because requests doesn't support it.
So the results I get are garbled, like this: `'¡ ̧ÔÚÕâ ̧öÊÀ1⁄2ç ̧æÖÕÒÔǰ©¤©¤A¡1-promise/result-'`.
Encoding can only be set by writing to the `encoding` property of the requests response object (according to [this](https://requests.readthedocs.io/en/master/user/quickstart/#response-content)).
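
For illustration, a hedged sketch of such an override (the real `_get_soup` signature may differ; this is an assumption):

```python
import requests
from bs4 import BeautifulSoup

def _get_soup(self, url, request_args=None, encoding=None):
    res = requests.get(url, **(request_args or {}))
    if encoding:
        res.encoding = encoding  # e.g. "big5"; overrides requests' guess
    return BeautifulSoup(res.text, "lxml")
```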
So maybe adding an `encoding` param and setting the encoding in `_get_soup` in `auto_scraper.py` would be a good idea. | closed | 2021-01-10T09:20:01Z | 2021-01-23T17:11:51Z | https://github.com/alirezamika/autoscraper/issues/44 | [] | RealXuChe | 3 |
psf/black | python | 4,111 | What is "Incorrectly formatted" ? | _Python syntax has changed? Or it hasn't? This line was not flagged in the past._
It would be nice to add a bit more info to that error message (e.g. a link or at least a code what exactly is violated). This way the message is more annoying than helpful.


| closed | 2023-12-14T23:09:05Z | 2023-12-15T00:18:21Z | https://github.com/psf/black/issues/4111 | [
"T: style"
] | tfrokt | 4 |
Python3WebSpider/ProxyPool | flask | 190 | Why do the validated results still contain the local IP? | 
| open | 2023-03-15T04:36:33Z | 2024-06-18T02:30:42Z | https://github.com/Python3WebSpider/ProxyPool/issues/190 | [
"bug"
] | Whale-Yu | 7 |
NVIDIA/pix2pixHD | computer-vision | 319 | RuntimeError: CUDA out of memory, continuous training? | I have trained on 3000 pairs of data and want to add another 2000 pairs to continue training, using the following command:
```
python train.py --name comics --dataroot ./datasets/comics3Kto5K --loadSize 512 --label_nc 0 --no_instance --netG local --load_pretrain checkpoints0310/comics/
```
But the error is as follows:
```
RuntimeError: CUDA out of memory. Tried to allocate 98.00 MiB (GPU 0; 11.76 GiB total capacity; 8.86 GiB already allocated; 113.56 MiB free; 8.91 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
What is going on? | open | 2023-03-14T01:46:12Z | 2023-04-04T02:49:12Z | https://github.com/NVIDIA/pix2pixHD/issues/319 | [] | watertianyi | 5 |
idealo/image-super-resolution | computer-vision | 172 | git lfs can't download weights | open | 2021-01-27T11:58:16Z | 2021-04-02T16:41:20Z | https://github.com/idealo/image-super-resolution/issues/172 | [] | ItamarGronich | 1 |
|
huggingface/datasets | pandas | 6,644 | Support fsspec 2023.12 | Support fsspec 2023.12 by handling previous and new glob behavior. | closed | 2024-02-07T12:44:39Z | 2024-02-29T15:12:18Z | https://github.com/huggingface/datasets/issues/6644 | [
"enhancement"
] | albertvillanova | 1 |
pyeve/eve | flask | 554 | 'User-Restricted Resource Access' with HMAC authentication doesn't work | Hi!
I'm trying to use 'User-Restricted Resource Access' with HMAC authentication, but it doesn't work.
I just debugged the Eve code and I found this:
/eve/io/base.py

The `request.authorization` clause is preventing those resources from being filtered.
Ok

Ok

NOK

This happens because the method `parse_authorization_header` in /werkzeug/http.py returns None. According to the documentation: 'The return value is either `None` if the header was invalid or not given'. Apparently it only supports Basic and Digest headers.
Is there some way to solve this?
This is my header:
```
GET /trainingSet HTTP/1.1
Host: 127.0.0.1:5000
Content-Type: application/json
Authorization: usr:8392495a7a3ec4002caba4cd8fbc196b932cddf7
Cache-Control: no-cache
```
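For reference, this is the kind of manual fallback I'm imagining, since Eve is built on Flask (just a sketch of the idea, not Eve's actual code):
```python
from flask import request

def parse_hmac_authorization():
    # werkzeug's parse_authorization_header only understands the Basic
    # and Digest schemes, so read the raw header value ourselves
    raw = request.headers.get('Authorization', '')
    userid, sep, token = raw.partition(':')
    if not sep:
        return None  # header missing or not in 'user:token' form
    return userid, token
```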
| closed | 2015-02-03T14:16:19Z | 2015-02-03T16:11:10Z | https://github.com/pyeve/eve/issues/554 | [] | ghost | 3 |
pyppeteer/pyppeteer | automation | 144 | How to properly cleanup non-existent chromium processes? | OS Info: Windows 10 v1909
Python version: Python 3.7.0
Pyppeteer version: pyppeteer==0.2.2 (also tried with dev version and ran into same problem)
I am trying to take screenshots of 14 websites, and 13 of those 14 screenshots are successful, which is great! However, one fails with `Exception occurred: net::ERR_SSL_PROTOCOL_ERROR`, which is just because I used https for a site that only supports http.
However, the bigger problem is that the Chromium processes aren't getting cleaned up:

Here is a snippet of the code with the screenshot functionality:
```python
# assumes 'from pyppeteer import launch' at module level
async def take_screenshot(self, url):
url = f'http://{url}' if ('http' not in url and 'https' not in url) else url
# url = f'https://{url}' if ('http' not in url and 'https' not in url) else url
url = url.replace('www.', '')
print(f'Taking a screenshot of: {url}')
browser = await launch(headless=True, ignoreHTTPSErrors=True, args=["--no-sandbox"])
browser = await browser.createIncognitoBrowserContext()
page = await browser.newPage()
try:
# change default timeout from 30 to 35 seconds
page.setDefaultNavigationTimeout(35000)
await page.setUserAgent('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
'Chrome/83.0.4103.106 Safari/537.36')
await page.goto(url, waitUntil='networkidle0')
await page.screenshot({'path': f'{self.output}\\{url.replace("http://", "").replace("https://", "")}.png'})
print('inside try and page has been closed')
await page.close()
# await browser.close()
# return True
except Exception as e:
print(f'Exception occurred: {e} for: {url} ')
# No matter what happens make sure browser and page are closed
if page.isClosed() is False:
print('page is closed as false')
await page.close()
print('browser is closed')
await browser.close()
```
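For comparison, this is the structure I would have expected to guarantee cleanup. It's only a sketch against the same pyppeteer API used above; the key differences are the try/finally and keeping separate references to the incognito context and the original browser instead of rebinding `browser`:
```python
from pyppeteer import launch

async def take_screenshot_safe(url, path):
    browser = await launch(headless=True, ignoreHTTPSErrors=True,
                           args=['--no-sandbox'])
    context = await browser.createIncognitoBrowserContext()
    page = await context.newPage()
    try:
        await page.goto(url, waitUntil='networkidle0')
        await page.screenshot({'path': path})
    finally:
        # runs on success *and* on exceptions such as the
        # net::ERR_SSL_PROTOCOL_ERROR above
        await page.close()
        await context.close()
        # closing the original browser object (not the rebound context)
        # is what terminates the underlying Chromium process
        await browser.close()
```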
This is the output from the screenshot function when run:
```
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Taking a screenshot of: https://x
inside try and page has been closed
browser is closed
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Taking a screenshot of: https://x
Exception occurred: net::ERR_SSL_PROTOCOL_ERROR at https://x for: https://x
page is closed as false
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
inside try and page has been closed
browser is closed
```
| open | 2020-06-30T03:49:46Z | 2021-12-26T14:10:23Z | https://github.com/pyppeteer/pyppeteer/issues/144 | [
"bug"
] | NotoriousRebel | 8 |
JaidedAI/EasyOCR | machine-learning | 341 | Terrible performance when trying to read single-digit numbers. Performance improves significantly when using binary inverse thresholding, but not completely. | EasyOCR doesn't perform well when trying to read a page of purely single-digit numbers. Replacing the single-digit numbers with numbers of at least 2 digits reverses this, with the reader recognizing almost all numbers correctly. I am using the allowlist string '0123456789' to signal to the module that only digits are present.
Performance improves significantly if the image is preprocessed using binary inverse thresholding. The improvement isn't perfect, however, and some numbers are still not captured.
Test based on this image file:
<img width="321" alt="test_numbers_singles" src="https://user-images.githubusercontent.com/7851954/103452682-dd4ed280-4cc9-11eb-8079-ee2ad036201f.PNG">
## Performance with no pre-processing:
```
import easyocr
import cv2 as cv
# English
reader = easyocr.Reader(['en'])
# Without pre-processing
filename = 'test_numbers_singles.PNG'
img = cv.imread(filename)
ocr = reader.readtext(img, allowlist='0123456789')
img_annotated = img.copy()
for elem in ocr:
img_annotated = cv.line(img_annotated, tuple(elem[0][0]), tuple(elem[0][1]),
(0, 255, 0), 2)
img_annotated = cv.line(img_annotated, tuple(elem[0][1]), tuple(elem[0][2]),
(0, 255, 0), 2)
img_annotated = cv.line(img_annotated, tuple(elem[0][2]), tuple(elem[0][3]),
(0, 255, 0), 2)
img_annotated = cv.line(img_annotated, tuple(elem[0][3]), tuple(elem[0][0]),
(0, 255, 0), 2)
cv.imshow(filename, img_annotated); cv.waitKey(0); cv.destroyAllWindows()
```
Result:

## With pre-processing
```
# With inverse binary thresholding
ret, thresh = cv.threshold(img, 127, 255, cv.THRESH_BINARY_INV)
ocr = reader.readtext(thresh, allowlist='0123456789')
img_annotated = img.copy()
for elem in ocr:
img_annotated = cv.line(img_annotated, tuple(elem[0][0]), tuple(elem[0][1]),
(0, 255, 0), 2)
img_annotated = cv.line(img_annotated, tuple(elem[0][1]), tuple(elem[0][2]),
(0, 255, 0), 2)
img_annotated = cv.line(img_annotated, tuple(elem[0][2]), tuple(elem[0][3]),
(0, 255, 0), 2)
img_annotated = cv.line(img_annotated, tuple(elem[0][3]), tuple(elem[0][0]),
(0, 255, 0), 2)
cv.imshow(filename, img_annotated); cv.waitKey(0); cv.destroyAllWindows()
```
Result:

Performance improves, but there are still omissions as well as false detection/recognition.
Repeating the above test with double digit numbers gives good performance out-of-box and almost perfect performance with the thresholding approach.
Is there a workaround for my use case? | closed | 2021-01-02T08:09:28Z | 2024-11-18T09:30:23Z | https://github.com/JaidedAI/EasyOCR/issues/341 | [] | Arpanio | 5 |
vimalloc/flask-jwt-extended | flask | 318 | Validate token without authorization headers | Hello, thank you for creating this wonderful library!
I want to generate a token to validate an email address, like this:
```python
from flask import request, url_for
from flask_jwt_extended import get_jwt_identity
# create_token / confirm_token below are the hypothetical helpers I'm asking about

@app.route('/register', methods=["GET", "POST"])
def register_view():
if request.method == 'POST':
email = request.form.get('email')
user = create_new_user()
send_email(
            to=email,
content=url_for('admin.auth_email_view', token=create_token(identity=user.id))
)
        user.update(request.form)
return generate_res()
return {}
@app.route('/auth/register/<string:token>')
def auth_email_view(token):
confirm_token(token)
identify = get_jwt_identity()
user = User.query_by_id(identify)
if user:
return {'status': 'success'}
return {'status': 'failed'}
```
I just want to use `confirm_token` to validate the token without HTTP headers.
Is there any way to do this?
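For context, this is roughly what I imagine `confirm_token` doing. It's only a sketch; I'm assuming flask-jwt-extended's `decode_token` helper can be applied to a token that never arrived in an Authorization header:
```python
from flask_jwt_extended import decode_token

def confirm_token(token):
    # decode_token verifies the signature and expiry of the encoded
    # token directly, without reading any HTTP headers
    return decode_token(token)
```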
| closed | 2020-02-26T09:42:17Z | 2020-02-27T02:20:53Z | https://github.com/vimalloc/flask-jwt-extended/issues/318 | [] | joshuap233 | 2 |
deepset-ai/haystack | pytorch | 8,271 | docs: Pipeline.inputs() | Once a pipeline is created, it's difficult for users to _know_ how they should run the pipeline. We have quite a useful utility function for pipelines, `.inputs()`, which lists all the expected/required inputs for the components.
We should use this in our docs heavily imo. This function is hardly visible, and we don't really provide any help on how a pipeline should be run other than trial and error. Once a pipeline fails, the error is quite handy. And if a user knows to use `.show()`, that is also handy. But I think we should add this everywhere as something that a user could make use of while creating/running pipelines (see the sketch after the list below).
1. In the pipelines doc
2. In all component docs where we have example pipelines.
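For example, a minimal sketch of the kind of snippet these docs could show (the component names are placeholders, not a real recipe):
```python
from haystack import Pipeline

pipe = Pipeline()
# pipe.add_component("retriever", some_retriever)
# pipe.add_component("generator", some_generator)
# pipe.connect("retriever.documents", "generator.documents")

# Lists every input each component expects, so users know what to
# pass to pipe.run() without trial and error
print(pipe.inputs())
```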
"topic:pipeline",
"type:documentation",
"P2"
] | TuanaCelik | 2 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 2,950 | Parameter "singleuser.defaultUrl" not working | ### Bug description
With the latest version of the Helm chart, 2.0.0, the parameter "singleuser.defaultUrl" does not work. Setting it to "/lab" previously redirected the single-user servers to JupyterLab; now it opens the regular Jupyter Notebook.
#### Expected behaviour
Redirect the user's address to /lab when setting `singleuser.defaultUrl: "/lab"`
#### Actual behaviour
It does not redirect to /lab, but instead to "/tree", the regular Jupyter Notebook.
### How to reproduce
1. Deploy helm chart 1.2 with `singleuser.defaultUrl: "/lab"`
2. Start a User server: it gets redirected to "/lab"
3. Deploy helm chart 2.0.0 with `singleuser.defaultUrl: "/lab"`
4. Start a User server: it gets redirected to "tree"
### Your personal set up
Tested with minikube version: v1.24.0
kubectl version
```
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
```
| closed | 2022-11-17T13:10:47Z | 2022-11-17T14:32:28Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/2950 | [
"bug"
] | AdrianSanchezLopez | 3 |
streamlit/streamlit | streamlit | 10,151 | Show table with merged cells | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
For non-editable columns in st.dataframe or st.data_editor, I want to display merged cells like the empty cells in the first image below.
I want to merge any columns (not necessarily all rows) or any rows (not necessarily all columns).
It's ok if they are separated on selection, as in the second image below.


### Why?
It's just to improve the readability of the table, but for me this is essential if I'm going to keep using Streamlit in the future.
### How?
It would be fine to simply remove the separator color for the specified coordinates, but I can't come up with a concise way to specify those coordinates (see the hypothetical sketch below).
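Purely for illustration, here is one hypothetical shape the API could take. The `merge_cells` parameter does not exist in Streamlit; it's only a way to make the request concrete:
```python
import pandas as pd
import streamlit as st

df = pd.DataFrame({"group": ["A", "A", "B"], "value": [1, 2, 3]})

# Hypothetical parameter: each entry names a column plus the first and
# last row index of a vertical run whose cells should render as merged
st.dataframe(df, merge_cells=[("group", 0, 1)])
```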
### Additional Context
_No response_ | open | 2025-01-10T06:30:49Z | 2025-01-11T11:08:01Z | https://github.com/streamlit/streamlit/issues/10151 | [
"type:enhancement",
"feature:st.dataframe",
"feature:st.table"
] | matsushitaa | 3 |
graphql-python/graphene-sqlalchemy | graphql | 342 | Enum support for automatic hybrid property type conversion | See discussion in #340 | open | 2022-04-29T23:03:53Z | 2022-04-29T23:03:53Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/342 | [
"enhancement"
] | erikwrede | 0 |
inventree/InvenTree | django | 8,656 | Dev Container | ### Deployment Method
- [ ] Installer
- [x] Docker Development
- [ ] Docker Production
- [ ] Bare metal Development
- [ ] Bare metal Production
- [ ] Digital Ocean image
- [ ] Other (please provide a link in `Steps to Reproduce`)
### Describe the problem*
When I walk through the dev container setup, postCreateCommand.sh fails while trying to install packages. The last line of the log shows the script trying to run `invoke dev.frontend-install`, but invoke doesn't think that task exists.
Also, when the dev container is installing, VS Code shows a notification saying "Could not find Biome in your dependencies. Either add the @biomejs/biome package to your dependencies, or download the Biome binary."
When the container starts, this is the output:
```
Running the postCreateCommand from devcontainer.json...
[15502 ms] Start: Run in container: /bin/sh -c .devcontainer/postCreateCommand.sh
Error: [Errno 1] Operation not permitted: '/home/inventree/dev/venv/bin/Activate.ps1'
.devcontainer/postCreateCommand.sh: line 8: /home/inventree/dev/venv/bin/activate: No such file or directory
Updating InvenTree installation...
Installing required python packages from '/home/inventree/src/backend/requirements.txt'
Requirement already satisfied: pip in /usr/lib/python3.11/site-packages (23.1.2)
Collecting pip
Downloading pip-24.3.1-py3-none-any.whl (1.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 5.0 MB/s eta 0:00:00
Requirement already satisfied: setuptools in /usr/lib/python3.11/site-packages (75.6.0)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 23.1.2
Uninstalling pip-23.1.2:
ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/usr/bin/pip'
Consider using the `--user` option or check the permissions.
ERROR: InvenTree command failed: 'pip3 install --no-cache-dir --disable-pip-version-check -U pip setuptools'
- Refer to the error messages in the log above for more information
```
Then continues and gets here:
```
Successfully built django-querycount django-slowtests
Installing collected packages: django-querycount, distlib, zipp, tomli, pyproject-hooks, pycparser, platformdirs, pip, nodeenv, isort, identify, filelock, django-test-migrations, django, coverage, cli
ck, charset-normalizer, cfgv, virtualenv, importlib-metadata, django-slowtests, django-admin-shell, cffi, build, pre-commit, pip-tools, cryptography, pdfminer-six
ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/usr/lib/python3.11/site-packages/querycount'
Consider using the `--user` option or check the permissions.
ERROR: InvenTree command failed: 'pip3 install -U --require-hashes -r src/backend/requirements-dev.txt'
- Refer to the error messages in the log above for more information
No idea what 'dev.frontend-install' is!
```
### Steps to Reproduce
I'm following the steps in the `docs/docs/develop/devcontainer.md` guide. Step 4 is where the errors happen. When I open a terminal, it does activate the virtual environment, but I am not able to run the server. I get a `ModuleNotFoundError: No module named 'django'`
1. Clone the repository (If you want to submit changes fork it and use the url to your fork in the next step)
```bash
git clone https://github.com/inventree/InvenTree.git
```
2. Open vscode, navigate to the extensions sidebar and search for `ms-vscode-remote.remote-containers`. Click on install.
3. Open the cloned folder from above by clicking on `file > open folder`
4. vscode should now ask you if you'd like to reopen this folder in a devcontainer. Click `Reopen in Container`. If it does not ask you, open the command palette (<kbd>CTRL/CMD</kbd>+<kbd>Shift</kbd>+<kbd>P</kbd>) and search for `Reopen in Container`. This can take a few minutes until the image is downloaded, build and setup with all dependencies.
### Relevant log output
```bash
``` | closed | 2024-12-11T16:04:00Z | 2024-12-13T21:39:24Z | https://github.com/inventree/InvenTree/issues/8656 | [
"bug",
"setup"
] | ttftw | 4 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 237 | Using MPerClassSampler with thousands of labels? | I've misunderstood the API, thanks for an amazing repo:) | closed | 2020-11-21T07:50:03Z | 2020-11-21T08:16:34Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/237 | [] | dvirginz | 0 |
OFA-Sys/Chinese-CLIP | computer-vision | 344 | The translations are a bit rough | 
I spot-checked some of the Flicker30k cn translations, and many of the sentences don't read naturally. Intuitively, this should hurt the model's training quality; I'm not sure whether my concern is unfounded. | open | 2024-08-13T03:26:40Z | 2024-08-13T03:26:40Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/344 | [] | ccl-private | 0 |
SCIR-HI/Huatuo-Llama-Med-Chinese | nlp | 22 | Error when running infer.sh | Running infer.sh throws an error: the adapter_config.json file cannot be found.
The saved weight files are as follows:
<img width="150" alt="image" src="https://user-images.githubusercontent.com/49066765/236186586-a8fb7d68-7803-42ce-9c79-456af57ecf97.png">
Where did things go wrong such that no adapter_config.json file was produced?
| closed | 2023-05-04T11:06:47Z | 2023-05-05T08:36:53Z | https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese/issues/22 | [] | CodingPeasantzgl | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,643 | [Bug]: WebUI not starting with --listen | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
The WebUI crashes when trying to start with the --listen flag.
### Steps to reproduce the problem
Go to webui-user.sh
Add --listen flag to COMMANDLINE_ARGS
### What should have happened?
The WebUI should start.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-11-11-09-02.json](https://github.com/user-attachments/files/17699866/sysinfo-2024-11-11-09-02.json)
### Console logs
```Shell
https://pastebin.com/zZQJqvQj
```
### Additional information
_No response_ | open | 2024-11-11T09:08:11Z | 2024-11-19T09:30:32Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16643 | [
"bug-report"
] | IcteFourU | 2 |