filnow committed on
Commit
72c8556
1 Parent(s): 2778e79

filnow/nlg-umt5-pol

README.md ADDED
@@ -0,0 +1,59 @@
+ ---
+ license: apache-2.0
+ base_model: google/umt5-small
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: working
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # working
+
+ This model is a fine-tuned version of [google/umt5-small](https://huggingface.co/google/umt5-small) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2312
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
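+ The card does not document the training data or task, so the snippet below is only a minimal inference sketch. It assumes the checkpoint is published under the repo id `filnow/nlg-umt5-pol` (the repository this commit belongs to); the input string is a placeholder.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ model_id = "filnow/nlg-umt5-pol"  # assumed repo id; use a local path if loading from disk
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
+
+ # Placeholder input; the actual task and training data are not documented in this card.
+ inputs = tokenizer("przykładowy tekst wejściowy", return_tensors="pt")
+ output_ids = model.generate(**inputs, max_new_tokens=64)  # 64 matches generation_config.json
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```
+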
28
+ ## Training and evaluation data
29
+
30
+ More information needed
31
+
32
+ ## Training procedure
33
+
34
+ ### Training hyperparameters
35
+
36
+ The following hyperparameters were used during training:
37
+ - learning_rate: 5e-05
38
+ - train_batch_size: 8
39
+ - eval_batch_size: 16
40
+ - seed: 42
41
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
42
+ - lr_scheduler_type: linear
43
+ - num_epochs: 3
44
+
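+ A rough sketch of how the list above maps onto `Seq2SeqTrainingArguments`. The epoch-based eval/save strategy, `load_best_model_at_end`, and `predict_with_generate` are taken from the wandb config recorded later in this commit; `output_dir` is illustrative (the logged run used `/kaggle/working`).
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="working",
+     learning_rate=5e-5,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=16,
+     seed=42,
+     lr_scheduler_type="linear",
+     num_train_epochs=3,
+     eval_strategy="epoch",
+     save_strategy="epoch",
+     load_best_model_at_end=True,
+     predict_with_generate=True,
+ )
+ ```
+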
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 0.7329 | 1.0 | 2321 | 0.3316 |
+ | 0.3731 | 2.0 | 4642 | 0.2464 |
+ | 0.3269 | 3.0 | 6963 | 0.2312 |
+
+
+ ### Framework versions
+
+ - Transformers 4.41.1
+ - Pytorch 2.1.2
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1
config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "_name_or_path": "google/umt5-small",
+   "architectures": [
+     "UMT5ForConditionalGeneration"
+   ],
+   "classifier_dropout": 0.0,
+   "d_ff": 1024,
+   "d_kv": 64,
+   "d_model": 512,
+   "decoder_start_token_id": 0,
+   "dense_act_fn": "gelu_new",
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "feed_forward_proj": "gated-gelu",
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "is_gated_act": true,
+   "layer_norm_epsilon": 1e-06,
+   "max_new_tokens": 64,
+   "model_type": "umt5",
+   "num_decoder_layers": 8,
+   "num_heads": 6,
+   "num_layers": 8,
+   "pad_token_id": 0,
+   "relative_attention_max_distance": 128,
+   "relative_attention_num_buckets": 32,
+   "scalable_attention": true,
+   "tie_word_embeddings": false,
+   "tokenizer_class": "T5Tokenizer",
+   "torch_dtype": "float32",
+   "transformers_version": "4.41.1",
+   "use_cache": true,
+   "vocab_size": 256384
+ }
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "_from_model_config": true,
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "max_new_tokens": 64,
+   "pad_token_id": 0,
+   "transformers_version": "4.41.1"
+ }
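These generation defaults are applied automatically by `generate()` when the model is loaded from this repo. A minimal sketch of inspecting and overriding them, assuming the same repo id as in the model card above:

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("filnow/nlg-umt5-pol")  # assumed repo id
print(gen_config.max_new_tokens)  # 64, as set in generation_config.json

# Individual values can be overridden per call, e.g.:
# model.generate(**inputs, generation_config=gen_config, max_new_tokens=32)
```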
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79650c76f720d67f579dbc020e1356374cf657dfa2ac2f16ac4455c30e040e8e
+ size 1226432312
runs/Jun03_11-19-32_743112a2decd/events.out.tfevents.1717413574.743112a2decd.34.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c92d51fa8e06b4e0bf9dd7ade298fe887094a3bafa8ebfedea0ab0f48fad8ef
+ size 9097
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31cb480aff35ba6dd9001ece6990027ed29113e29d83bb6333c7c11a41d62589
+ size 5240
wandb/debug-internal.log ADDED
The diff for this file is too large to render. See raw diff
 
wandb/debug.log ADDED
@@ -0,0 +1,73 @@
1
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Current SDK version is 0.17.0
2
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Configure stats pid to 34
3
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings
4
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /kaggle/working/wandb/settings
5
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from environment variables: {}
6
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False}
7
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program': '<python with no main file>'}
8
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {}
9
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {'api_key': '***REDACTED***'}
10
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_init.py:_log_setup():520] Logging user logs to /kaggle/working/wandb/run-20240603_111947-zd4tutif/logs/debug.log
11
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_init.py:_log_setup():521] Logging internal logs to /kaggle/working/wandb/run-20240603_111947-zd4tutif/logs/debug-internal.log
12
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_init.py:_jupyter_setup():466] configuring jupyter hooks <wandb.sdk.wandb_init._WandbInit object at 0x79c87e995990>
13
+ 2024-06-03 11:19:47,903 INFO MainThread:34 [wandb_init.py:init():560] calling init triggers
14
+ 2024-06-03 11:19:47,903 INFO MainThread:34 [wandb_init.py:init():567] wandb.init called with sweep_config: {}
15
+ config: {}
16
+ 2024-06-03 11:19:47,903 INFO MainThread:34 [wandb_init.py:init():610] starting backend
17
+ 2024-06-03 11:19:47,903 INFO MainThread:34 [wandb_init.py:init():614] setting up manager
18
+ 2024-06-03 11:19:47,905 INFO MainThread:34 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
19
+ 2024-06-03 11:19:47,908 INFO MainThread:34 [wandb_init.py:init():622] backend started and connected
20
+ 2024-06-03 11:19:47,923 INFO MainThread:34 [wandb_run.py:_label_probe_notebook():1328] probe notebook
21
+ 2024-06-03 11:19:48,333 INFO MainThread:34 [wandb_init.py:init():711] updated telemetry
22
+ 2024-06-03 11:19:48,337 INFO MainThread:34 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout
23
+ 2024-06-03 11:19:48,544 INFO MainThread:34 [wandb_run.py:_on_init():2396] communicating current version
24
+ 2024-06-03 11:19:48,611 INFO MainThread:34 [wandb_run.py:_on_init():2405] got version response
25
+ 2024-06-03 11:19:48,611 INFO MainThread:34 [wandb_init.py:init():795] starting run threads in backend
26
+ 2024-06-03 11:20:04,877 INFO MainThread:34 [wandb_run.py:_console_start():2374] atexit reg
27
+ 2024-06-03 11:20:04,877 INFO MainThread:34 [wandb_run.py:_redirect():2229] redirect: wrap_raw
28
+ 2024-06-03 11:20:04,878 INFO MainThread:34 [wandb_run.py:_redirect():2294] Wrapping output streams.
29
+ 2024-06-03 11:20:04,878 INFO MainThread:34 [wandb_run.py:_redirect():2319] Redirects installed.
30
+ 2024-06-03 11:20:04,881 INFO MainThread:34 [wandb_init.py:init():838] run started, returning control to user process
31
+ 2024-06-03 11:20:04,886 INFO MainThread:34 [wandb_run.py:_config_callback():1376] config_cb None None {'vocab_size': 256384, 'd_model': 512, 'd_kv': 64, 'd_ff': 1024, 'num_layers': 8, 'num_decoder_layers': 8, 'num_heads': 6, 'relative_attention_num_buckets': 32, 'relative_attention_max_distance': 128, 'dropout_rate': 0.1, 'classifier_dropout': 0.0, 'layer_norm_epsilon': 1e-06, 'initializer_factor': 1.0, 'feed_forward_proj': 'gated-gelu', 'use_cache': True, 'dense_act_fn': 'gelu_new', 'is_gated_act': True, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float32', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': True, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['UMT5ForConditionalGeneration'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': 'T5Tokenizer', 'prefix': None, 'bos_token_id': None, 'pad_token_id': 0, 'eos_token_id': 1, 'sep_token_id': None, 'decoder_start_token_id': 0, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'google/umt5-small', 'transformers_version': '4.41.1', 'max_new_tokens': 64, 'scalable_attention': True, 'model_type': 'umt5', 'output_dir': '/kaggle/working', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': True, 'do_predict': False, 'eval_strategy': 'epoch', 'prediction_loss_only': False, 'per_device_train_batch_size': 8, 'per_device_eval_batch_size': 16, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 5e-05, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 3, 'max_steps': -1, 'lr_scheduler_type': 'linear', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/kaggle/working/runs/Jun03_11-19-32_743112a2decd', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 500, 'logging_nan_inf_filter': True, 'save_strategy': 'epoch', 'save_steps': 500, 'save_total_limit': 1, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': False, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 
'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '/kaggle/working', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': None, 'load_best_model_at_end': True, 'metric_for_best_model': 'loss', 'greater_is_better': False, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'adamw_torch', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['tensorboard', 'wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': 'epoch', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False, 'sortish_sampler': False, 'predict_with_generate': True, 'generation_max_length': None, 'generation_num_beams': None, 'generation_config': None}
32
+ 2024-06-03 11:58:53,885 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
33
+ 2024-06-03 11:58:53,885 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
34
+ 2024-06-03 11:58:53,892 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
35
+ 2024-06-03 11:58:53,894 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
36
+ 2024-06-03 11:58:53,895 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
37
+ 2024-06-03 11:58:53,900 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
38
+ 2024-06-03 11:59:05,976 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
39
+ 2024-06-03 11:59:05,977 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
40
+ 2024-06-03 11:59:05,982 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
41
+ 2024-06-03 11:59:06,530 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
42
+ 2024-06-03 11:59:06,530 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
43
+ 2024-06-03 11:59:06,539 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
44
+ 2024-06-03 11:59:06,935 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
45
+ 2024-06-03 11:59:06,935 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
46
+ 2024-06-03 11:59:06,941 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
47
+ 2024-06-03 11:59:07,129 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
48
+ 2024-06-03 11:59:07,130 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
49
+ 2024-06-03 11:59:07,135 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
50
+ 2024-06-03 11:59:07,401 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
51
+ 2024-06-03 11:59:07,402 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
52
+ 2024-06-03 11:59:07,411 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
53
+ 2024-06-03 11:59:07,692 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
54
+ 2024-06-03 11:59:07,693 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
55
+ 2024-06-03 11:59:07,700 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
56
+ 2024-06-03 11:59:08,134 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
57
+ 2024-06-03 11:59:08,135 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
58
+ 2024-06-03 11:59:08,141 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
59
+ 2024-06-03 11:59:11,075 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
60
+ 2024-06-03 11:59:11,075 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
61
+ 2024-06-03 11:59:11,081 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
62
+ 2024-06-03 11:59:13,485 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
63
+ 2024-06-03 11:59:13,486 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
64
+ 2024-06-03 12:01:42,196 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
65
+ 2024-06-03 12:01:42,425 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
66
+ 2024-06-03 12:01:42,425 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
67
+ 2024-06-03 12:03:34,285 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
68
+ 2024-06-03 12:03:34,465 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
69
+ 2024-06-03 12:03:34,465 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
70
+ 2024-06-03 12:03:38,980 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
71
+ 2024-06-03 12:03:39,116 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
72
+ 2024-06-03 12:03:39,116 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
73
+ 2024-06-03 12:03:45,290 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
wandb/run-20240603_111947-zd4tutif/files/conda-environment.yaml ADDED
File without changes
wandb/run-20240603_111947-zd4tutif/files/config.yaml ADDED
@@ -0,0 +1,723 @@
1
+ wandb_version: 1
2
+
3
+ _wandb:
4
+ desc: null
5
+ value:
6
+ python_version: 3.10.13
7
+ cli_version: 0.17.0
8
+ framework: huggingface
9
+ huggingface_version: 4.41.1
10
+ is_jupyter_run: true
11
+ is_kaggle_kernel: true
12
+ start_time: 1717413587
13
+ t:
14
+ 1:
15
+ - 1
16
+ - 2
17
+ - 3
18
+ - 5
19
+ - 11
20
+ - 12
21
+ - 49
22
+ - 51
23
+ - 53
24
+ - 55
25
+ - 71
26
+ - 105
27
+ 2:
28
+ - 1
29
+ - 2
30
+ - 3
31
+ - 5
32
+ - 11
33
+ - 12
34
+ - 49
35
+ - 51
36
+ - 53
37
+ - 55
38
+ - 71
39
+ - 105
40
+ 3:
41
+ - 7
42
+ - 13
43
+ - 23
44
+ - 62
45
+ - 66
46
+ 4: 3.10.13
47
+ 5: 0.17.0
48
+ 6: 4.41.1
49
+ 8:
50
+ - 1
51
+ - 2
52
+ - 5
53
+ 9:
54
+ 1: transformers_trainer
55
+ 13: linux-x86_64
56
+ m:
57
+ - 1: train/global_step
58
+ 6:
59
+ - 3
60
+ - 1: train/loss
61
+ 5: 1
62
+ 6:
63
+ - 1
64
+ - 1: train/grad_norm
65
+ 5: 1
66
+ 6:
67
+ - 1
68
+ - 1: train/learning_rate
69
+ 5: 1
70
+ 6:
71
+ - 1
72
+ - 1: train/epoch
73
+ 5: 1
74
+ 6:
75
+ - 1
76
+ - 1: eval/loss
77
+ 5: 1
78
+ 6:
79
+ - 1
80
+ - 1: eval/runtime
81
+ 5: 1
82
+ 6:
83
+ - 1
84
+ - 1: eval/samples_per_second
85
+ 5: 1
86
+ 6:
87
+ - 1
88
+ - 1: eval/steps_per_second
89
+ 5: 1
90
+ 6:
91
+ - 1
92
+ vocab_size:
93
+ desc: null
94
+ value: 256384
95
+ d_model:
96
+ desc: null
97
+ value: 512
98
+ d_kv:
99
+ desc: null
100
+ value: 64
101
+ d_ff:
102
+ desc: null
103
+ value: 1024
104
+ num_layers:
105
+ desc: null
106
+ value: 8
107
+ num_decoder_layers:
108
+ desc: null
109
+ value: 8
110
+ num_heads:
111
+ desc: null
112
+ value: 6
113
+ relative_attention_num_buckets:
114
+ desc: null
115
+ value: 32
116
+ relative_attention_max_distance:
117
+ desc: null
118
+ value: 128
119
+ dropout_rate:
120
+ desc: null
121
+ value: 0.1
122
+ classifier_dropout:
123
+ desc: null
124
+ value: 0.0
125
+ layer_norm_epsilon:
126
+ desc: null
127
+ value: 1.0e-06
128
+ initializer_factor:
129
+ desc: null
130
+ value: 1.0
131
+ feed_forward_proj:
132
+ desc: null
133
+ value: gated-gelu
134
+ use_cache:
135
+ desc: null
136
+ value: true
137
+ dense_act_fn:
138
+ desc: null
139
+ value: gelu_new
140
+ is_gated_act:
141
+ desc: null
142
+ value: true
143
+ return_dict:
144
+ desc: null
145
+ value: true
146
+ output_hidden_states:
147
+ desc: null
148
+ value: false
149
+ output_attentions:
150
+ desc: null
151
+ value: false
152
+ torchscript:
153
+ desc: null
154
+ value: false
155
+ torch_dtype:
156
+ desc: null
157
+ value: float32
158
+ use_bfloat16:
159
+ desc: null
160
+ value: false
161
+ tf_legacy_loss:
162
+ desc: null
163
+ value: false
164
+ pruned_heads:
165
+ desc: null
166
+ value: {}
167
+ tie_word_embeddings:
168
+ desc: null
169
+ value: false
170
+ chunk_size_feed_forward:
171
+ desc: null
172
+ value: 0
173
+ is_encoder_decoder:
174
+ desc: null
175
+ value: true
176
+ is_decoder:
177
+ desc: null
178
+ value: false
179
+ cross_attention_hidden_size:
180
+ desc: null
181
+ value: null
182
+ add_cross_attention:
183
+ desc: null
184
+ value: false
185
+ tie_encoder_decoder:
186
+ desc: null
187
+ value: false
188
+ max_length:
189
+ desc: null
190
+ value: 20
191
+ min_length:
192
+ desc: null
193
+ value: 0
194
+ do_sample:
195
+ desc: null
196
+ value: false
197
+ early_stopping:
198
+ desc: null
199
+ value: false
200
+ num_beams:
201
+ desc: null
202
+ value: 1
203
+ num_beam_groups:
204
+ desc: null
205
+ value: 1
206
+ diversity_penalty:
207
+ desc: null
208
+ value: 0.0
209
+ temperature:
210
+ desc: null
211
+ value: 1.0
212
+ top_k:
213
+ desc: null
214
+ value: 50
215
+ top_p:
216
+ desc: null
217
+ value: 1.0
218
+ typical_p:
219
+ desc: null
220
+ value: 1.0
221
+ repetition_penalty:
222
+ desc: null
223
+ value: 1.0
224
+ length_penalty:
225
+ desc: null
226
+ value: 1.0
227
+ no_repeat_ngram_size:
228
+ desc: null
229
+ value: 0
230
+ encoder_no_repeat_ngram_size:
231
+ desc: null
232
+ value: 0
233
+ bad_words_ids:
234
+ desc: null
235
+ value: null
236
+ num_return_sequences:
237
+ desc: null
238
+ value: 1
239
+ output_scores:
240
+ desc: null
241
+ value: false
242
+ return_dict_in_generate:
243
+ desc: null
244
+ value: false
245
+ forced_bos_token_id:
246
+ desc: null
247
+ value: null
248
+ forced_eos_token_id:
249
+ desc: null
250
+ value: null
251
+ remove_invalid_values:
252
+ desc: null
253
+ value: false
254
+ exponential_decay_length_penalty:
255
+ desc: null
256
+ value: null
257
+ suppress_tokens:
258
+ desc: null
259
+ value: null
260
+ begin_suppress_tokens:
261
+ desc: null
262
+ value: null
263
+ architectures:
264
+ desc: null
265
+ value:
266
+ - UMT5ForConditionalGeneration
267
+ finetuning_task:
268
+ desc: null
269
+ value: null
270
+ id2label:
271
+ desc: null
272
+ value:
273
+ '0': LABEL_0
274
+ '1': LABEL_1
275
+ label2id:
276
+ desc: null
277
+ value:
278
+ LABEL_0: 0
279
+ LABEL_1: 1
280
+ tokenizer_class:
281
+ desc: null
282
+ value: T5Tokenizer
283
+ prefix:
284
+ desc: null
285
+ value: null
286
+ bos_token_id:
287
+ desc: null
288
+ value: null
289
+ pad_token_id:
290
+ desc: null
291
+ value: 0
292
+ eos_token_id:
293
+ desc: null
294
+ value: 1
295
+ sep_token_id:
296
+ desc: null
297
+ value: null
298
+ decoder_start_token_id:
299
+ desc: null
300
+ value: 0
301
+ task_specific_params:
302
+ desc: null
303
+ value: null
304
+ problem_type:
305
+ desc: null
306
+ value: null
307
+ _name_or_path:
308
+ desc: null
309
+ value: google/umt5-small
310
+ transformers_version:
311
+ desc: null
312
+ value: 4.41.1
313
+ max_new_tokens:
314
+ desc: null
315
+ value: 64
316
+ scalable_attention:
317
+ desc: null
318
+ value: true
319
+ model_type:
320
+ desc: null
321
+ value: umt5
322
+ output_dir:
323
+ desc: null
324
+ value: /kaggle/working
325
+ overwrite_output_dir:
326
+ desc: null
327
+ value: false
328
+ do_train:
329
+ desc: null
330
+ value: false
331
+ do_eval:
332
+ desc: null
333
+ value: true
334
+ do_predict:
335
+ desc: null
336
+ value: false
337
+ eval_strategy:
338
+ desc: null
339
+ value: epoch
340
+ prediction_loss_only:
341
+ desc: null
342
+ value: false
343
+ per_device_train_batch_size:
344
+ desc: null
345
+ value: 8
346
+ per_device_eval_batch_size:
347
+ desc: null
348
+ value: 16
349
+ per_gpu_train_batch_size:
350
+ desc: null
351
+ value: null
352
+ per_gpu_eval_batch_size:
353
+ desc: null
354
+ value: null
355
+ gradient_accumulation_steps:
356
+ desc: null
357
+ value: 1
358
+ eval_accumulation_steps:
359
+ desc: null
360
+ value: null
361
+ eval_delay:
362
+ desc: null
363
+ value: 0
364
+ learning_rate:
365
+ desc: null
366
+ value: 5.0e-05
367
+ weight_decay:
368
+ desc: null
369
+ value: 0.0
370
+ adam_beta1:
371
+ desc: null
372
+ value: 0.9
373
+ adam_beta2:
374
+ desc: null
375
+ value: 0.999
376
+ adam_epsilon:
377
+ desc: null
378
+ value: 1.0e-08
379
+ max_grad_norm:
380
+ desc: null
381
+ value: 1.0
382
+ num_train_epochs:
383
+ desc: null
384
+ value: 3
385
+ max_steps:
386
+ desc: null
387
+ value: -1
388
+ lr_scheduler_type:
389
+ desc: null
390
+ value: linear
391
+ lr_scheduler_kwargs:
392
+ desc: null
393
+ value: {}
394
+ warmup_ratio:
395
+ desc: null
396
+ value: 0.0
397
+ warmup_steps:
398
+ desc: null
399
+ value: 0
400
+ log_level:
401
+ desc: null
402
+ value: passive
403
+ log_level_replica:
404
+ desc: null
405
+ value: warning
406
+ log_on_each_node:
407
+ desc: null
408
+ value: true
409
+ logging_dir:
410
+ desc: null
411
+ value: /kaggle/working/runs/Jun03_11-19-32_743112a2decd
412
+ logging_strategy:
413
+ desc: null
414
+ value: steps
415
+ logging_first_step:
416
+ desc: null
417
+ value: false
418
+ logging_steps:
419
+ desc: null
420
+ value: 500
421
+ logging_nan_inf_filter:
422
+ desc: null
423
+ value: true
424
+ save_strategy:
425
+ desc: null
426
+ value: epoch
427
+ save_steps:
428
+ desc: null
429
+ value: 500
430
+ save_total_limit:
431
+ desc: null
432
+ value: 1
433
+ save_safetensors:
434
+ desc: null
435
+ value: true
436
+ save_on_each_node:
437
+ desc: null
438
+ value: false
439
+ save_only_model:
440
+ desc: null
441
+ value: false
442
+ restore_callback_states_from_checkpoint:
443
+ desc: null
444
+ value: false
445
+ no_cuda:
446
+ desc: null
447
+ value: false
448
+ use_cpu:
449
+ desc: null
450
+ value: false
451
+ use_mps_device:
452
+ desc: null
453
+ value: false
454
+ seed:
455
+ desc: null
456
+ value: 42
457
+ data_seed:
458
+ desc: null
459
+ value: null
460
+ jit_mode_eval:
461
+ desc: null
462
+ value: false
463
+ use_ipex:
464
+ desc: null
465
+ value: false
466
+ bf16:
467
+ desc: null
468
+ value: false
469
+ fp16:
470
+ desc: null
471
+ value: false
472
+ fp16_opt_level:
473
+ desc: null
474
+ value: O1
475
+ half_precision_backend:
476
+ desc: null
477
+ value: auto
478
+ bf16_full_eval:
479
+ desc: null
480
+ value: false
481
+ fp16_full_eval:
482
+ desc: null
483
+ value: false
484
+ tf32:
485
+ desc: null
486
+ value: null
487
+ local_rank:
488
+ desc: null
489
+ value: 0
490
+ ddp_backend:
491
+ desc: null
492
+ value: null
493
+ tpu_num_cores:
494
+ desc: null
495
+ value: null
496
+ tpu_metrics_debug:
497
+ desc: null
498
+ value: false
499
+ debug:
500
+ desc: null
501
+ value: []
502
+ dataloader_drop_last:
503
+ desc: null
504
+ value: false
505
+ eval_steps:
506
+ desc: null
507
+ value: null
508
+ dataloader_num_workers:
509
+ desc: null
510
+ value: 0
511
+ dataloader_prefetch_factor:
512
+ desc: null
513
+ value: null
514
+ past_index:
515
+ desc: null
516
+ value: -1
517
+ run_name:
518
+ desc: null
519
+ value: /kaggle/working
520
+ disable_tqdm:
521
+ desc: null
522
+ value: false
523
+ remove_unused_columns:
524
+ desc: null
525
+ value: true
526
+ label_names:
527
+ desc: null
528
+ value: null
529
+ load_best_model_at_end:
530
+ desc: null
531
+ value: true
532
+ metric_for_best_model:
533
+ desc: null
534
+ value: loss
535
+ greater_is_better:
536
+ desc: null
537
+ value: false
538
+ ignore_data_skip:
539
+ desc: null
540
+ value: false
541
+ fsdp:
542
+ desc: null
543
+ value: []
544
+ fsdp_min_num_params:
545
+ desc: null
546
+ value: 0
547
+ fsdp_config:
548
+ desc: null
549
+ value:
550
+ min_num_params: 0
551
+ xla: false
552
+ xla_fsdp_v2: false
553
+ xla_fsdp_grad_ckpt: false
554
+ fsdp_transformer_layer_cls_to_wrap:
555
+ desc: null
556
+ value: null
557
+ accelerator_config:
558
+ desc: null
559
+ value:
560
+ split_batches: false
561
+ dispatch_batches: null
562
+ even_batches: true
563
+ use_seedable_sampler: true
564
+ non_blocking: false
565
+ gradient_accumulation_kwargs: null
566
+ deepspeed:
567
+ desc: null
568
+ value: null
569
+ label_smoothing_factor:
570
+ desc: null
571
+ value: 0.0
572
+ optim:
573
+ desc: null
574
+ value: adamw_torch
575
+ optim_args:
576
+ desc: null
577
+ value: null
578
+ adafactor:
579
+ desc: null
580
+ value: false
581
+ group_by_length:
582
+ desc: null
583
+ value: false
584
+ length_column_name:
585
+ desc: null
586
+ value: length
587
+ report_to:
588
+ desc: null
589
+ value:
590
+ - tensorboard
591
+ - wandb
592
+ ddp_find_unused_parameters:
593
+ desc: null
594
+ value: null
595
+ ddp_bucket_cap_mb:
596
+ desc: null
597
+ value: null
598
+ ddp_broadcast_buffers:
599
+ desc: null
600
+ value: null
601
+ dataloader_pin_memory:
602
+ desc: null
603
+ value: true
604
+ dataloader_persistent_workers:
605
+ desc: null
606
+ value: false
607
+ skip_memory_metrics:
608
+ desc: null
609
+ value: true
610
+ use_legacy_prediction_loop:
611
+ desc: null
612
+ value: false
613
+ push_to_hub:
614
+ desc: null
615
+ value: false
616
+ resume_from_checkpoint:
617
+ desc: null
618
+ value: null
619
+ hub_model_id:
620
+ desc: null
621
+ value: null
622
+ hub_strategy:
623
+ desc: null
624
+ value: every_save
625
+ hub_token:
626
+ desc: null
627
+ value: <HUB_TOKEN>
628
+ hub_private_repo:
629
+ desc: null
630
+ value: false
631
+ hub_always_push:
632
+ desc: null
633
+ value: false
634
+ gradient_checkpointing:
635
+ desc: null
636
+ value: false
637
+ gradient_checkpointing_kwargs:
638
+ desc: null
639
+ value: null
640
+ include_inputs_for_metrics:
641
+ desc: null
642
+ value: false
643
+ eval_do_concat_batches:
644
+ desc: null
645
+ value: true
646
+ fp16_backend:
647
+ desc: null
648
+ value: auto
649
+ evaluation_strategy:
650
+ desc: null
651
+ value: epoch
652
+ push_to_hub_model_id:
653
+ desc: null
654
+ value: null
655
+ push_to_hub_organization:
656
+ desc: null
657
+ value: null
658
+ push_to_hub_token:
659
+ desc: null
660
+ value: <PUSH_TO_HUB_TOKEN>
661
+ mp_parameters:
662
+ desc: null
663
+ value: ''
664
+ auto_find_batch_size:
665
+ desc: null
666
+ value: false
667
+ full_determinism:
668
+ desc: null
669
+ value: false
670
+ torchdynamo:
671
+ desc: null
672
+ value: null
673
+ ray_scope:
674
+ desc: null
675
+ value: last
676
+ ddp_timeout:
677
+ desc: null
678
+ value: 1800
679
+ torch_compile:
680
+ desc: null
681
+ value: false
682
+ torch_compile_backend:
683
+ desc: null
684
+ value: null
685
+ torch_compile_mode:
686
+ desc: null
687
+ value: null
688
+ dispatch_batches:
689
+ desc: null
690
+ value: null
691
+ split_batches:
692
+ desc: null
693
+ value: null
694
+ include_tokens_per_second:
695
+ desc: null
696
+ value: false
697
+ include_num_input_tokens_seen:
698
+ desc: null
699
+ value: false
700
+ neftune_noise_alpha:
701
+ desc: null
702
+ value: null
703
+ optim_target_modules:
704
+ desc: null
705
+ value: null
706
+ batch_eval_metrics:
707
+ desc: null
708
+ value: false
709
+ sortish_sampler:
710
+ desc: null
711
+ value: false
712
+ predict_with_generate:
713
+ desc: null
714
+ value: true
715
+ generation_max_length:
716
+ desc: null
717
+ value: null
718
+ generation_num_beams:
719
+ desc: null
720
+ value: null
721
+ generation_config:
722
+ desc: null
723
+ value: null
wandb/run-20240603_111947-zd4tutif/files/output.log ADDED
@@ -0,0 +1,8 @@
1
+
2
+ There were missing keys in the checkpoint model loaded: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight'].
3
+ Your max_length is set to 20, but your input_length is only 10. Since this is a summarization task, where outputs shorter than the input are typically wanted, you might consider decreasing max_length manually, e.g. summarizer('...', max_length=5)
4
+ Your max_length is set to 20, but your input_length is only 14. Since this is a summarization task, where outputs shorter than the input are typically wanted, you might consider decreasing max_length manually, e.g. summarizer('...', max_length=7)
5
+ Your max_length is set to 20, but your input_length is only 16. Since this is a summarization task, where outputs shorter than the input are typically wanted, you might consider decreasing max_length manually, e.g. summarizer('...', max_length=8)
6
+ The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.
7
+ Token is valid (permission: write).
8
+ Your token has been saved to /root/.cache/huggingface/token
wandb/run-20240603_111947-zd4tutif/files/requirements.txt ADDED
@@ -0,0 +1,868 @@
1
+ ==
2
+ Babel==2.14.0
3
+ Boruta==0.3
4
+ Brotli==1.1.0
5
+ CVXcanon==0.1.2
6
+ Cartopy==0.23.0
7
+ Cython==3.0.8
8
+ Deprecated==1.2.14
9
+ Farama-Notifications==0.0.4
10
+ Flask==3.0.3
11
+ Geohash==1.0
12
+ GitPython==3.1.41
13
+ ImageHash==4.3.1
14
+ Janome==0.5.0
15
+ Jinja2==3.1.2
16
+ LunarCalendar==0.0.9
17
+ Mako==1.3.5
18
+ Markdown==3.5.2
19
+ MarkupSafe==2.1.3
20
+ MarkupSafe==2.1.5
21
+ Pillow==9.5.0
22
+ PuLP==2.8.0
23
+ PyArabic==0.6.15
24
+ PyJWT==2.8.0
25
+ PyMeeus==0.5.12
26
+ PySocks==1.7.1
27
+ PyUpSet==0.1.1.post7
28
+ PyWavelets==1.5.0
29
+ PyYAML==6.0.1
30
+ Pygments==2.17.2
31
+ Pympler==1.0.1
32
+ QtPy==2.4.1
33
+ Rtree==1.2.0
34
+ SQLAlchemy==2.0.25
35
+ SecretStorage==3.3.3
36
+ Send2Trash==1.8.2
37
+ Shapely==1.8.5.post1
38
+ Shimmy==1.3.0
39
+ SimpleITK==2.3.1
40
+ TPOT==0.12.1
41
+ Theano-PyMC==1.1.2
42
+ Theano==1.0.5
43
+ Wand==0.6.13
44
+ Werkzeug==3.0.3
45
+ absl-py==1.4.0
46
+ accelerate==0.30.1
47
+ access==1.1.9
48
+ affine==2.4.0
49
+ aiobotocore==2.13.0
50
+ aiofiles==22.1.0
51
+ aiohttp-cors==0.7.0
52
+ aiohttp==3.9.5
53
+ aioitertools==0.11.0
54
+ aiorwlock==1.3.0
55
+ aiosignal==1.3.1
56
+ aiosqlite==0.19.0
57
+ albumentations==1.4.0
58
+ alembic==1.13.1
59
+ altair==5.3.0
60
+ annotated-types==0.6.0
61
+ annotated-types==0.7.0
62
+ annoy==1.17.3
63
+ anyio==4.2.0
64
+ apache-beam==2.46.0
65
+ aplus==0.11.0
66
+ appdirs==1.4.4
67
+ archspec==0.2.3
68
+ argon2-cffi-bindings==21.2.0
69
+ argon2-cffi==23.1.0
70
+ array-record==0.5.0
71
+ arrow==1.3.0
72
+ arviz==0.18.0
73
+ astroid==3.2.2
74
+ astropy-iers-data==0.2024.5.27.0.30.8
75
+ astropy==6.1.0
76
+ asttokens==2.4.1
77
+ astunparse==1.6.3
78
+ async-lru==2.0.4
79
+ async-timeout==4.0.3
80
+ attrs==23.2.0
81
+ audioread==3.0.1
82
+ autopep8==2.0.4
83
+ backoff==2.2.1
84
+ bayesian-optimization==1.4.3
85
+ beatrix_jupyterlab==2023.128.151533
86
+ beautifulsoup4==4.12.2
87
+ blake3==0.2.1
88
+ bleach==6.1.0
89
+ blessed==1.20.0
90
+ blinker==1.8.2
91
+ blis==0.7.10
92
+ blosc2==2.6.2
93
+ bokeh==3.4.1
94
+ boltons==23.1.1
95
+ boto3==1.26.100
96
+ botocore==1.34.106
97
+ bq_helper==0.4.1
98
+ bqplot==0.12.43
99
+ branca==0.7.2
100
+ brewer2mpl==1.4.1
101
+ brotlipy==0.7.0
102
+ cached-property==1.5.2
103
+ cachetools==4.2.4
104
+ cachetools==5.3.2
105
+ catalogue==2.0.10
106
+ catalyst==22.4
107
+ catboost==1.2.5
108
+ category-encoders==2.6.3
109
+ certifi==2024.2.2
110
+ cesium==0.12.1
111
+ cffi==1.16.0
112
+ charset-normalizer==3.3.2
113
+ chex==0.1.86
114
+ cleverhans==4.0.0
115
+ click-plugins==1.1.1
116
+ click==8.1.7
117
+ cligj==0.7.2
118
+ cloud-tpu-client==0.10
119
+ cloud-tpu-profiler==2.4.0
120
+ cloudpathlib==0.16.0
121
+ cloudpickle==2.2.1
122
+ cloudpickle==3.0.0
123
+ cmdstanpy==1.2.2
124
+ colorama==0.4.6
125
+ colorcet==3.1.0
126
+ colorful==0.5.6
127
+ colorlog==6.8.2
128
+ colorlover==0.3.0
129
+ comm==0.2.1
130
+ conda-libmamba-solver==23.12.0
131
+ conda-package-handling==2.2.0
132
+ conda==24.5.0
133
+ conda_package_streaming==0.9.0
134
+ confection==0.1.4
135
+ contextily==1.6.0
136
+ contourpy==1.2.0
137
+ contourpy==1.2.1
138
+ convertdate==2.4.0
139
+ crcmod==1.7
140
+ cryptography==41.0.7
141
+ cuda-python==12.5.0
142
+ cudf==24.4.1
143
+ cufflinks==0.17.3
144
+ cuml==24.4.0
145
+ cupy==13.1.0
146
+ cycler==0.12.1
147
+ cymem==2.0.8
148
+ cytoolz==0.12.3
149
+ daal4py==2024.4.0
150
+ daal==2024.4.0
151
+ dacite==1.8.1
152
+ dask-cuda==24.4.0
153
+ dask-cudf==24.4.1
154
+ dask-expr==1.1.1
155
+ dask==2024.5.1
156
+ dataclasses-json==0.6.6
157
+ dataproc_jupyter_plugin==0.1.66
158
+ datasets==2.19.1
159
+ datashader==0.16.1
160
+ datatile==1.0.3
161
+ db-dtypes==1.2.0
162
+ deap==1.4.1
163
+ debugpy==1.8.0
164
+ decorator==5.1.1
165
+ deepdiff==7.0.1
166
+ defusedxml==0.7.1
167
+ deprecation==2.1.0
168
+ descartes==1.1.0
169
+ dill==0.3.8
170
+ dipy==1.9.0
171
+ distlib==0.3.8
172
+ distributed==2024.1.1
173
+ distro==1.9.0
174
+ dm-tree==0.1.8
175
+ docker-pycreds==0.4.0
176
+ docker==7.0.0
177
+ docopt==0.6.2
178
+ docstring-parser==0.15
179
+ docstring-to-markdown==0.15
180
+ docutils==0.21.2
181
+ earthengine-api==0.1.404
182
+ easydict==1.13
183
+ easyocr==1.7.1
184
+ ecos==2.0.13
185
+ eli5==0.13.0
186
+ emoji==2.12.1
187
+ en-core-web-lg==3.7.1
188
+ en-core-web-sm==3.7.1
189
+ entrypoints==0.4
190
+ ephem==4.1.5
191
+ esda==2.5.1
192
+ essentia==2.1b6.dev1110
193
+ et-xmlfile==1.1.0
194
+ etils==1.6.0
195
+ exceptiongroup==1.2.0
196
+ executing==2.0.1
197
+ explainable-ai-sdk==1.3.3
198
+ fastai==2.7.15
199
+ fastapi==0.108.0
200
+ fastavro==1.9.3
201
+ fastcore==1.5.41
202
+ fastdownload==0.0.7
203
+ fasteners==0.19
204
+ fastjsonschema==2.19.1
205
+ fastprogress==1.0.3
206
+ fastrlock==0.8.2
207
+ fasttext==0.9.2
208
+ feather-format==0.4.1
209
+ featuretools==1.31.0
210
+ filelock==3.13.1
211
+ fiona==1.9.6
212
+ fitter==1.7.0
213
+ flake8==7.0.0
214
+ flashtext==2.7
215
+ flatbuffers==23.5.26
216
+ flax==0.8.4
217
+ folium==0.16.0
218
+ fonttools==4.47.0
219
+ fonttools==4.52.4
220
+ fqdn==1.5.1
221
+ frozendict==2.4.4
222
+ frozenlist==1.4.1
223
+ fsspec==2024.3.1
224
+ fsspec==2024.5.0
225
+ funcy==2.0
226
+ fury==0.10.0
227
+ future==1.0.0
228
+ fuzzywuzzy==0.18.0
229
+ gast==0.5.4
230
+ gatspy==0.3
231
+ gcsfs==2024.3.1
232
+ gensim==4.3.2
233
+ geographiclib==2.0
234
+ geojson==3.1.0
235
+ geopandas==0.14.4
236
+ geoplot==0.5.1
237
+ geopy==2.4.1
238
+ geoviews==1.12.0
239
+ ggplot==0.11.5
240
+ giddy==2.3.5
241
+ gitdb==4.0.11
242
+ google-ai-generativelanguage==0.6.4
243
+ google-api-core==2.11.1
244
+ google-api-core==2.19.0
245
+ google-api-python-client==2.131.0
246
+ google-apitools==0.5.31
247
+ google-auth-httplib2==0.2.0
248
+ google-auth-oauthlib==1.2.0
249
+ google-auth==2.26.1
250
+ google-cloud-aiplatform==0.6.0a1
251
+ google-cloud-artifact-registry==1.10.0
252
+ google-cloud-automl==1.0.1
253
+ google-cloud-bigquery==2.34.4
254
+ google-cloud-bigtable==1.7.3
255
+ google-cloud-core==2.4.1
256
+ google-cloud-datastore==2.19.0
257
+ google-cloud-dlp==3.14.0
258
+ google-cloud-jupyter-config==0.0.5
259
+ google-cloud-language==2.13.3
260
+ google-cloud-monitoring==2.18.0
261
+ google-cloud-pubsub==2.19.0
262
+ google-cloud-pubsublite==1.9.0
263
+ google-cloud-recommendations-ai==0.7.1
264
+ google-cloud-resource-manager==1.11.0
265
+ google-cloud-spanner==3.40.1
266
+ google-cloud-storage==1.44.0
267
+ google-cloud-translate==3.12.1
268
+ google-cloud-videointelligence==2.13.3
269
+ google-cloud-vision==2.8.0
270
+ google-crc32c==1.5.0
271
+ google-generativeai==0.5.4
272
+ google-pasta==0.2.0
273
+ google-resumable-media==2.7.0
274
+ googleapis-common-protos==1.62.0
275
+ gplearn==0.4.2
276
+ gpustat==1.0.0
277
+ gpxpy==1.6.2
278
+ graphviz==0.20.3
279
+ greenlet==3.0.3
280
+ grpc-google-iam-v1==0.12.7
281
+ grpcio-status==1.48.1
282
+ grpcio-status==1.48.2
283
+ grpcio==1.59.3
284
+ grpcio==1.60.0
285
+ gviz-api==1.10.0
286
+ gym-notices==0.0.8
287
+ gym==0.26.2
288
+ gymnasium==0.29.0
289
+ h11==0.14.0
290
+ h2o==3.46.0.2
291
+ h5netcdf==1.3.0
292
+ h5py==3.10.0
293
+ haversine==2.8.1
294
+ hdfs==2.7.3
295
+ hep-ml==0.7.2
296
+ hijri-converter==2.3.1
297
+ hmmlearn==0.3.2
298
+ holidays==0.24
299
+ holoviews==1.18.3
300
+ hpsklearn==0.1.0
301
+ html5lib==1.1
302
+ htmlmin==0.1.12
303
+ httpcore==1.0.5
304
+ httplib2==0.21.0
305
+ httptools==0.6.1
306
+ httpx==0.27.0
307
+ huggingface-hub==0.23.2
308
+ hunspell==0.5.5
309
+ hydra-slayer==0.5.0
310
+ hyperopt==0.2.7
311
+ hypertools==0.8.0
312
+ idna==3.6
313
+ igraph==0.11.5
314
+ imagecodecs==2024.1.1
315
+ imageio==2.33.1
316
+ imbalanced-learn==0.12.3
317
+ imgaug==0.4.0
318
+ importlib-metadata==6.11.0
319
+ importlib-metadata==7.0.1
320
+ importlib-resources==6.1.1
321
+ inequality==1.0.1
322
+ iniconfig==2.0.0
323
+ ipydatawidgets==4.3.5
324
+ ipykernel==6.28.0
325
+ ipyleaflet==0.19.1
326
+ ipympl==0.7.0
327
+ ipython-genutils==0.2.0
328
+ ipython-genutils==0.2.0
329
+ ipython-sql==0.5.0
330
+ ipython==8.20.0
331
+ ipyvolume==0.6.3
332
+ ipyvue==1.11.1
333
+ ipyvuetify==1.9.4
334
+ ipywebrtc==0.6.0
335
+ ipywidgets==7.7.1
336
+ isoduration==20.11.0
337
+ isort==5.13.2
338
+ isoweek==1.3.3
339
+ itsdangerous==2.2.0
340
+ jaraco.classes==3.3.0
341
+ jax-jumpy==1.0.0
342
+ jax==0.4.26
343
+ jaxlib==0.4.26.dev20240504
344
+ jedi==0.19.1
345
+ jeepney==0.8.0
346
+ jieba==0.42.1
347
+ jmespath==1.0.1
348
+ joblib==1.4.2
349
+ json5==0.9.14
350
+ jsonpatch==1.33
351
+ jsonpointer==2.4
352
+ jsonschema-specifications==2023.12.1
353
+ jsonschema==4.20.0
354
+ jupyter-console==6.6.3
355
+ jupyter-events==0.9.0
356
+ jupyter-http-over-ws==0.0.8
357
+ jupyter-leaflet==0.19.1
358
+ jupyter-lsp==1.5.1
359
+ jupyter-server-mathjax==0.2.6
360
+ jupyter-ydoc==0.2.5
361
+ jupyter_client==7.4.9
362
+ jupyter_client==8.6.0
363
+ jupyter_core==5.7.1
364
+ jupyter_server==2.12.5
365
+ jupyter_server_fileid==0.9.1
366
+ jupyter_server_proxy==4.1.0
367
+ jupyter_server_terminals==0.5.1
368
+ jupyter_server_ydoc==0.8.0
369
+ jupyterlab-lsp==5.1.0
370
+ jupyterlab-widgets==3.0.9
371
+ jupyterlab==4.2.1
372
+ jupyterlab_git==0.44.0
373
+ jupyterlab_pygments==0.3.0
374
+ jupyterlab_server==2.27.2
375
+ jupytext==1.16.0
376
+ kaggle-environments==1.14.9
377
+ kaggle==1.6.14
378
+ kagglehub==0.2.5
379
+ keras-cv==0.9.0
380
+ keras-nlp==0.12.1
381
+ keras-tuner==1.4.6
382
+ keras==3.3.3
383
+ kernels-mixer==0.0.7
384
+ keyring==24.3.0
385
+ keyrings.google-artifactregistry-auth==1.1.2
386
+ kfp-pipeline-spec==0.2.2
387
+ kfp-server-api==2.0.5
388
+ kfp==2.5.0
389
+ kiwisolver==1.4.5
390
+ kmapper==2.0.1
391
+ kmodes==0.12.2
392
+ korean-lunar-calendar==0.3.1
393
+ kornia==0.7.2
394
+ kornia_rs==0.1.3
395
+ kt-legacy==1.0.5
396
+ kubernetes==26.1.0
397
+ langcodes==3.4.0
398
+ langid==1.1.6
399
+ language_data==1.2.0
400
+ lazy_loader==0.3
401
+ learntools==0.3.4
402
+ leven==1.0.4
403
+ libclang==16.0.6
404
+ libmambapy==1.5.8
405
+ libpysal==4.9.2
406
+ librosa==0.10.2.post1
407
+ lightgbm==4.2.0
408
+ lightning-utilities==0.11.2
409
+ lime==0.2.0.1
410
+ line_profiler==4.1.3
411
+ linkify-it-py==2.0.3
412
+ llvmlite==0.41.1
413
+ llvmlite==0.42.0
414
+ lml==0.1.0
415
+ locket==1.0.0
416
+ loguru==0.7.2
417
+ lxml==5.2.2
418
+ lz4==4.3.3
419
+ mamba==1.5.8
420
+ mapclassify==2.6.1
421
+ marisa-trie==1.1.0
422
+ markdown-it-py==3.0.0
423
+ marshmallow==3.21.2
424
+ matplotlib-inline==0.1.6
425
+ matplotlib-venn==0.11.10
426
+ matplotlib==3.7.5
427
+ matplotlib==3.8.4
428
+ mccabe==0.7.0
429
+ mdit-py-plugins==0.4.0
430
+ mdurl==0.1.2
431
+ memory-profiler==0.61.0
432
+ menuinst==2.0.1
433
+ mercantile==1.2.1
434
+ mgwr==2.2.1
435
+ missingno==0.5.2
436
+ mistune==0.8.4
437
+ mizani==0.11.4
438
+ ml-dtypes==0.2.0
439
+ mlcrate==0.2.0
440
+ mlens==0.2.3
441
+ mlxtend==0.23.1
442
+ mne==1.7.0
443
+ mnist==0.2.2
444
+ momepy==0.7.0
445
+ more-itertools==10.2.0
446
+ mpld3==0.5.10
447
+ mpmath==1.3.0
448
+ msgpack==1.0.7
449
+ msgpack==1.0.8
450
+ multidict==6.0.4
451
+ multimethod==1.10
452
+ multipledispatch==1.0.0
453
+ multiprocess==0.70.16
454
+ munkres==1.1.4
455
+ murmurhash==1.0.10
456
+ mypy-extensions==1.0.0
457
+ namex==0.0.8
458
+ nb-conda-kernels==2.3.1
459
+ nb_conda==2.2.1
460
+ nbclassic==1.0.0
461
+ nbclient==0.5.13
462
+ nbconvert==6.4.5
463
+ nbdime==3.2.0
464
+ nbformat==5.9.2
465
+ ndindex==1.8
466
+ nest-asyncio==1.5.8
467
+ networkx==3.2.1
468
+ nibabel==5.2.1
469
+ nilearn==0.10.4
470
+ ninja==1.11.1.1
471
+ nltk==3.2.4
472
+ nose==1.3.7
473
+ notebook==6.5.4
474
+ notebook==6.5.6
475
+ notebook_executor==0.2
476
+ notebook_shim==0.2.3
477
+ numba==0.58.1
478
+ numba==0.59.1
479
+ numexpr==2.10.0
480
+ numpy==1.26.4
481
+ nvidia-ml-py==11.495.46
482
+ nvtx==0.2.10
483
+ oauth2client==4.1.3
484
+ oauthlib==3.2.2
485
+ objsize==0.6.1
486
+ odfpy==1.4.1
487
+ olefile==0.47
488
+ onnx==1.16.1
489
+ opencensus-context==0.1.3
490
+ opencensus==0.11.4
491
+ opencv-contrib-python==4.9.0.80
492
+ opencv-python-headless==4.9.0.80
493
+ opencv-python==4.9.0.80
494
+ openpyxl==3.1.2
495
+ openslide-python==1.3.1
496
+ opentelemetry-api==1.22.0
497
+ opentelemetry-exporter-otlp-proto-common==1.22.0
498
+ opentelemetry-exporter-otlp-proto-grpc==1.22.0
499
+ opentelemetry-exporter-otlp-proto-http==1.22.0
500
+ opentelemetry-exporter-otlp==1.22.0
501
+ opentelemetry-proto==1.22.0
502
+ opentelemetry-sdk==1.22.0
503
+ opentelemetry-semantic-conventions==0.43b0
504
+ opt-einsum==3.3.0
505
+ optax==0.2.2
506
+ optree==0.11.0
507
+ optuna==3.6.1
508
+ orbax-checkpoint==0.5.14
509
+ ordered-set==4.1.0
510
+ orjson==3.9.10
511
+ ortools==9.4.1874
512
+ osmnx==1.9.3
513
+ overrides==7.4.0
514
+ packaging==21.3
515
+ pandas-datareader==0.10.0
516
+ pandas-profiling==3.6.6
517
+ pandas-summary==0.2.0
518
+ pandas==2.2.1
519
+ pandas==2.2.2
520
+ pandasql==0.7.3
521
+ pandocfilters==1.5.0
522
+ panel==1.4.3
523
+ papermill==2.5.0
524
+ param==2.1.0
525
+ parso==0.8.3
526
+ partd==1.4.2
527
+ path.py==12.5.0
528
+ path==16.14.0
529
+ pathos==0.3.2
530
+ pathy==0.10.3
531
+ patsy==0.5.6
532
+ pdf2image==1.17.0
533
+ pettingzoo==1.24.0
534
+ pexpect==4.8.0
535
+ pexpect==4.9.0
536
+ phik==0.12.4
537
+ pickleshare==0.7.5
538
+ pillow==10.3.0
539
+ pip==23.3.2
540
+ pkgutil_resolve_name==1.3.10
541
+ platformdirs==4.2.2
542
+ plotly-express==0.4.1
543
+ plotly==5.18.0
544
+ plotnine==0.13.6
545
+ pluggy==1.5.0
546
+ pointpats==2.4.0
547
+ polars==0.20.30
548
+ polyglot==16.7.4
549
+ pooch==1.8.1
550
+ pox==0.3.4
551
+ ppca==0.0.4
552
+ ppft==1.7.6.8
553
+ preprocessing==0.1.13
554
+ preshed==3.0.9
555
+ prettytable==3.9.0
556
+ progressbar2==4.4.2
557
+ prometheus-client==0.19.0
558
+ promise==2.3
559
+ prompt-toolkit==3.0.42
560
+ prompt-toolkit==3.0.43
561
+ prophet==1.1.1
562
+ proto-plus==1.23.0
563
+ protobuf==3.20.3
564
+ protobuf==4.24.4
565
+ psutil==5.9.3
566
+ psutil==5.9.7
567
+ ptyprocess==0.7.0
568
+ pudb==2024.1
569
+ pure-eval==0.2.2
570
+ py-cpuinfo==9.0.0
571
+ py-spy==0.3.14
572
+ py4j==0.10.9.7
573
+ pyLDAvis==3.4.1
574
+ pyOpenSSL==23.3.0
575
+ pyaml==24.4.0
576
+ pyarrow-hotfix==0.6
577
+ pyarrow==14.0.2
578
+ pyasn1-modules==0.3.0
579
+ pyasn1==0.5.1
580
+ pybind11==2.12.0
581
+ pyclipper==1.3.0.post5
582
+ pycodestyle==2.11.1
583
+ pycosat==0.6.6
584
+ pycparser==2.21
585
+ pycryptodome==3.20.0
586
+ pyct==0.5.0
587
+ pycuda==2024.1
588
+ pydantic==2.5.3
589
+ pydantic==2.7.2
590
+ pydantic_core==2.14.6
591
+ pydantic_core==2.18.3
592
+ pydegensac==0.1.2
593
+ pydicom==2.4.4
594
+ pydocstyle==6.3.0
595
+ pydot==1.4.2
596
+ pydub==0.25.1
597
+ pyemd==1.0.0
598
+ pyerfa==2.0.1.4
599
+ pyexcel-io==0.6.6
600
+ pyexcel-ods==0.6.0
601
+ pyflakes==3.2.0
602
+ pygltflib==1.16.2
603
+ pykalman==0.9.7
604
+ pylibraft==24.4.0
605
+ pylint==3.2.2
606
+ pymc3==3.11.4
607
+ pymongo==3.13.0
608
+ pynndescent==0.5.12
609
+ pynvjitlink==0.2.3
610
+ pynvml==11.4.1
611
+ pynvrtc==9.2
612
+ pyparsing==3.1.1
613
+ pyparsing==3.1.2
614
+ pypdf==4.2.0
615
+ pyproj==3.6.1
616
+ pysal==24.1
617
+ pyshp==2.3.1
618
+ pytesseract==0.3.10
619
+ pytest==8.2.1
620
+ python-bidi==0.4.2
621
+ python-dateutil==2.9.0.post0
622
+ python-dotenv==1.0.0
623
+ python-json-logger==2.0.7
624
+ python-louvain==0.16
625
+ python-lsp-jsonrpc==1.1.2
626
+ python-lsp-server==1.11.0
627
+ python-slugify==8.0.4
628
+ python-utils==3.8.2
629
+ pythreejs==2.4.2
630
+ pytoolconfig==1.3.1
631
+ pytools==2024.1.3
632
+ pytorch-ignite==0.5.0.post2
633
+ pytorch-lightning==2.2.5
634
+ pytz==2023.3.post1
635
+ pytz==2024.1
636
+ pyu2f==0.1.5
637
+ pyviz_comms==3.0.2
638
+ pyzmq==24.0.1
639
+ pyzmq==25.1.2
640
+ qgrid==1.3.1
641
+ qtconsole==5.5.2
642
+ quantecon==0.7.2
643
+ qudida==0.0.4
644
+ raft-dask==24.4.0
645
+ rapids-dask-dependency==24.4.1a0
646
+ rasterio==1.3.10
647
+ rasterstats==0.19.0
648
+ ray-cpp==2.9.0
649
+ ray==2.9.0
650
+ referencing==0.32.1
651
+ regex==2023.12.25
652
+ requests-oauthlib==1.3.1
653
+ requests-toolbelt==0.10.1
654
+ requests==2.31.0
655
+ retrying==1.3.3
656
+ retrying==1.3.4
657
+ rfc3339-validator==0.1.4
658
+ rfc3986-validator==0.1.1
659
+ rgf-python==3.12.0
660
+ rich-click==1.8.2
661
+ rich==13.7.0
662
+ rich==13.7.1
663
+ rmm==24.4.0
664
+ rope==1.13.0
665
+ rpds-py==0.16.2
666
+ rsa==4.9
667
+ ruamel-yaml-conda==0.15.100
668
+ ruamel.yaml.clib==0.2.7
669
+ ruamel.yaml==0.18.5
670
+ s2sphere==0.2.5
671
+ s3fs==2024.3.1
672
+ s3transfer==0.6.2
673
+ safetensors==0.4.3
674
+ scattertext==0.1.19
675
+ scikit-image==0.22.0
676
+ scikit-learn-intelex==2024.4.0
677
+ scikit-learn==1.2.2
678
+ scikit-multilearn==0.2.0
679
+ scikit-optimize==0.10.1
680
+ scikit-plot==0.3.7
681
+ scikit-surprise==1.1.4
682
+ scipy==1.11.4
683
+ scipy==1.13.1
684
+ seaborn==0.12.2
685
+ segment_anything==1.0
686
+ segregation==2.5
687
+ semver==3.0.2
688
+ sentencepiece==0.2.0
689
+ sentry-sdk==2.3.1
690
+ setproctitle==1.3.3
691
+ setuptools-git==1.2
692
+ setuptools-scm==8.1.0
693
+ setuptools==69.0.3
694
+ shap==0.44.1
695
+ shapely==2.0.4
696
+ shellingham==1.5.4
697
+ simpervisor==1.0.0
698
+ simplejson==3.19.2
699
+ six==1.16.0
700
+ sklearn-pandas==2.2.0
701
+ slicer==0.0.7
702
+ smart-open==6.4.0
703
+ smmap==5.0.1
704
+ sniffio==1.3.0
705
+ snowballstemmer==2.2.0
706
+ snuggs==1.4.7
707
+ sortedcontainers==2.4.0
708
+ soundfile==0.12.1
709
+ soupsieve==2.5
710
+ soxr==0.3.7
+ spacy-legacy==3.0.12
+ spacy-loggers==1.0.5
+ spacy==3.7.3
+ spaghetti==1.7.5.post1
+ spectral==0.23.1
+ spglm==1.1.0
+ sphinx-rtd-theme==0.2.4
+ spint==1.0.7
+ splot==1.1.5.post1
+ spopt==0.6.0
+ spreg==1.4.2
+ spvcm==0.3.0
+ sqlparse==0.4.4
+ squarify==0.4.3
+ srsly==2.4.8
+ stable-baselines3==2.1.0
+ stack-data==0.6.2
+ stack-data==0.6.3
+ stanio==0.5.0
+ starlette==0.32.0.post1
+ statsmodels==0.14.1
+ stemming==1.0.1
+ stop-words==2018.7.23
+ stopit==1.1.2
+ stumpy==1.12.0
+ sympy==1.12
+ tables==3.9.2
+ tabulate==0.9.0
+ tangled-up-in-unicode==0.2.0
+ tbb==2021.12.0
+ tblib==3.0.0
+ tenacity==8.2.3
+ tensorboard-data-server==0.7.2
+ tensorboard-plugin-profile==2.15.0
+ tensorboard==2.15.1
+ tensorboardX==2.6.2.2
+ tensorflow-cloud==0.1.16
+ tensorflow-datasets==4.9.4
+ tensorflow-decision-forests==1.8.1
+ tensorflow-estimator==2.15.0
+ tensorflow-hub==0.16.1
+ tensorflow-io-gcs-filesystem==0.35.0
+ tensorflow-io==0.35.0
+ tensorflow-metadata==0.14.0
+ tensorflow-probability==0.23.0
+ tensorflow-serving-api==2.14.1
+ tensorflow-text==2.15.0
+ tensorflow-transform==0.14.0
+ tensorflow==2.15.0
+ tensorstore==0.1.59
+ termcolor==2.4.0
+ terminado==0.18.0
+ testpath==0.6.0
+ text-unidecode==1.3
+ textblob==0.18.0.post0
+ texttable==1.7.0
+ tf_keras==2.15.1
+ tfp-nightly==0.24.0.dev0
+ thinc==8.2.3
+ threadpoolctl==3.2.0
+ tifffile==2023.12.9
+ timm==1.0.3
+ tinycss2==1.2.1
+ tobler==0.11.2
+ tokenizers==0.19.1
+ toml==0.10.2
+ tomli==2.0.1
+ tomlkit==0.12.5
+ toolz==0.12.1
+ torch==2.1.2
+ torchaudio==2.1.2
+ torchdata==0.7.1
+ torchinfo==1.8.0
+ torchmetrics==1.4.0.post0
+ torchtext==0.16.2
+ torchvision==0.16.2
+ tornado==6.3.3
+ tqdm==4.66.4
+ traceml==1.0.8
+ traitlets==5.9.0
+ traittypes==0.2.1
+ transformers==4.41.1
+ treelite==4.1.2
+ truststore==0.8.0
+ trx-python==0.2.9
+ tsfresh==0.20.2
+ typeguard==4.1.5
+ typer==0.9.0
+ typer==0.9.4
+ types-python-dateutil==2.8.19.20240106
+ typing-inspect==0.9.0
+ typing-utils==0.1.0
+ typing_extensions==4.9.0
+ tzdata==2023.4
+ tzdata==2024.1
+ uc-micro-py==1.0.3
+ ucx-py==0.37.0
+ ujson==5.10.0
+ umap-learn==0.5.6
+ unicodedata2==15.1.0
+ update-checker==0.18.0
+ uri-template==1.3.0
+ uritemplate==3.0.1
+ urllib3==1.26.18
+ urllib3==2.1.0
+ urwid==2.6.12
+ urwid_readline==0.14
+ uvicorn==0.25.0
+ uvloop==0.19.0
+ vaex-astro==0.9.3
+ vaex-core==4.17.1
+ vaex-hdf5==0.14.1
+ vaex-jupyter==0.8.2
+ vaex-ml==0.18.3
+ vaex-server==0.9.0
+ vaex-viz==0.5.4
+ vaex==4.17.0
+ vec_noise==1.1.4
+ vecstack==0.4.0
+ virtualenv==20.21.0
+ visions==0.7.5
+ vowpalwabbit==9.9.0
+ vtk==9.3.0
+ wandb==0.17.0
+ wasabi==1.1.2
+ watchfiles==0.21.0
+ wavio==0.0.9
+ wcwidth==0.2.13
+ weasel==0.3.4
+ webcolors==1.13
+ webencodings==0.5.1
+ websocket-client==1.7.0
+ websockets==12.0
+ wfdb==4.1.2
+ whatthepatch==1.0.5
+ wheel==0.42.0
+ widgetsnbextension==3.6.6
+ witwidget==1.8.1
+ woodwork==0.31.0
+ wordcloud==1.9.3
+ wordsegment==1.3.1
+ wrapt==1.14.1
+ xarray-einstats==0.7.0
+ xarray==2024.5.0
+ xgboost==2.0.3
+ xvfbwrapper==0.2.9
+ xxhash==3.4.1
+ xyzservices==2024.4.0
+ y-py==0.6.2
+ yapf==0.40.2
+ yarl==1.9.3
+ yarl==1.9.4
+ ydata-profiling==4.6.4
+ yellowbrick==1.5
+ ypy-websocket==0.8.4
+ zict==3.0.0
+ zipp==3.17.0
+ zstandard==0.19.0
wandb/run-20240603_111947-zd4tutif/files/wandb-metadata.json ADDED
@@ -0,0 +1,62 @@
+ {
+ "os": "Linux-5.15.133+-x86_64-with-glibc2.31",
+ "python": "3.10.13",
+ "heartbeatAt": "2024-06-03T11:19:48.640898",
+ "startedAt": "2024-06-03T11:19:47.900500",
+ "docker": null,
+ "cuda": null,
+ "args": [],
+ "state": "running",
+ "program": "kaggle.ipynb",
+ "codePathLocal": null,
+ "root": "/kaggle/working",
+ "host": "743112a2decd",
+ "username": "root",
+ "executable": "/opt/conda/bin/python3.10",
+ "cpu_count": 2,
+ "cpu_count_logical": 4,
+ "cpu_freq": {
+ "current": 2000.152,
+ "min": 0.0,
+ "max": 0.0
+ },
+ "cpu_freq_per_core": [
+ {
+ "current": 2000.152,
+ "min": 0.0,
+ "max": 0.0
+ },
+ {
+ "current": 2000.152,
+ "min": 0.0,
+ "max": 0.0
+ },
+ {
+ "current": 2000.152,
+ "min": 0.0,
+ "max": 0.0
+ },
+ {
+ "current": 2000.152,
+ "min": 0.0,
+ "max": 0.0
+ }
+ ],
+ "disk": {
+ "/": {
+ "total": 8062.387607574463,
+ "used": 5644.639331817627
+ }
+ },
+ "gpu": "Tesla P100-PCIE-16GB",
+ "gpu_count": 1,
+ "gpu_devices": [
+ {
+ "name": "Tesla P100-PCIE-16GB",
+ "memory_total": 17179869184
+ }
+ ],
+ "memory": {
+ "total": 31.357563018798828
+ }
+ }
wandb/run-20240603_111947-zd4tutif/files/wandb-summary.json ADDED
@@ -0,0 +1 @@
+ {"train/loss": 0.3269, "train/grad_norm": 1.2661606073379517, "train/learning_rate": 3.3247163578917136e-06, "train/epoch": 3.0, "train/global_step": 6963, "_timestamp": 1717415933.8806534, "_runtime": 2345.9723534584045, "_step": 16, "eval/loss": 0.23116710782051086, "eval/runtime": 17.4131, "eval/samples_per_second": 118.474, "eval/steps_per_second": 7.408, "train_runtime": 2359.6292, "train_samples_per_second": 23.602, "train_steps_per_second": 2.951, "total_flos": 7499132383002624.0, "train_loss": 1.0388871652717377}
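The summary above records the final run metrics as token-level cross-entropy. As a quick illustration of what the number means, here is a hedged sketch (assuming the run directory is available locally at the path shown in this diff) that parses `wandb-summary.json` and converts `eval/loss` into perplexity.

```python
import json
import math

# Assumed local path: the run directory as laid out in this commit.
SUMMARY_PATH = "wandb/run-20240603_111947-zd4tutif/files/wandb-summary.json"

with open(SUMMARY_PATH) as f:
    summary = json.load(f)

eval_loss = summary["eval/loss"]      # ~0.2312 for this run
perplexity = math.exp(eval_loss)      # exp(0.2312) is roughly 1.26

print(f"eval loss: {eval_loss:.4f}, perplexity: {perplexity:.2f}")
print(f"train runtime: {summary['train_runtime']:.0f} s, "
      f"{summary['train_samples_per_second']:.1f} samples/s")
```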
wandb/run-20240603_111947-zd4tutif/logs/debug-internal.log ADDED
The diff for this file is too large to render.
 
wandb/run-20240603_111947-zd4tutif/logs/debug.log ADDED
@@ -0,0 +1,73 @@
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Current SDK version is 0.17.0
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Configure stats pid to 34
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /kaggle/working/wandb/settings
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from environment variables: {}
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying setup settings: {'_disable_service': False}
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program': '<python with no main file>'}
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {}
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {'api_key': '***REDACTED***'}
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_init.py:_log_setup():520] Logging user logs to /kaggle/working/wandb/run-20240603_111947-zd4tutif/logs/debug.log
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_init.py:_log_setup():521] Logging internal logs to /kaggle/working/wandb/run-20240603_111947-zd4tutif/logs/debug-internal.log
+ 2024-06-03 11:19:47,902 INFO MainThread:34 [wandb_init.py:_jupyter_setup():466] configuring jupyter hooks <wandb.sdk.wandb_init._WandbInit object at 0x79c87e995990>
+ 2024-06-03 11:19:47,903 INFO MainThread:34 [wandb_init.py:init():560] calling init triggers
+ 2024-06-03 11:19:47,903 INFO MainThread:34 [wandb_init.py:init():567] wandb.init called with sweep_config: {}
+ config: {}
+ 2024-06-03 11:19:47,903 INFO MainThread:34 [wandb_init.py:init():610] starting backend
+ 2024-06-03 11:19:47,903 INFO MainThread:34 [wandb_init.py:init():614] setting up manager
+ 2024-06-03 11:19:47,905 INFO MainThread:34 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
+ 2024-06-03 11:19:47,908 INFO MainThread:34 [wandb_init.py:init():622] backend started and connected
+ 2024-06-03 11:19:47,923 INFO MainThread:34 [wandb_run.py:_label_probe_notebook():1328] probe notebook
+ 2024-06-03 11:19:48,333 INFO MainThread:34 [wandb_init.py:init():711] updated telemetry
+ 2024-06-03 11:19:48,337 INFO MainThread:34 [wandb_init.py:init():744] communicating run to backend with 90.0 second timeout
+ 2024-06-03 11:19:48,544 INFO MainThread:34 [wandb_run.py:_on_init():2396] communicating current version
+ 2024-06-03 11:19:48,611 INFO MainThread:34 [wandb_run.py:_on_init():2405] got version response
+ 2024-06-03 11:19:48,611 INFO MainThread:34 [wandb_init.py:init():795] starting run threads in backend
+ 2024-06-03 11:20:04,877 INFO MainThread:34 [wandb_run.py:_console_start():2374] atexit reg
+ 2024-06-03 11:20:04,877 INFO MainThread:34 [wandb_run.py:_redirect():2229] redirect: wrap_raw
+ 2024-06-03 11:20:04,878 INFO MainThread:34 [wandb_run.py:_redirect():2294] Wrapping output streams.
+ 2024-06-03 11:20:04,878 INFO MainThread:34 [wandb_run.py:_redirect():2319] Redirects installed.
+ 2024-06-03 11:20:04,881 INFO MainThread:34 [wandb_init.py:init():838] run started, returning control to user process
+ 2024-06-03 11:20:04,886 INFO MainThread:34 [wandb_run.py:_config_callback():1376] config_cb None None {'vocab_size': 256384, 'd_model': 512, 'd_kv': 64, 'd_ff': 1024, 'num_layers': 8, 'num_decoder_layers': 8, 'num_heads': 6, 'relative_attention_num_buckets': 32, 'relative_attention_max_distance': 128, 'dropout_rate': 0.1, 'classifier_dropout': 0.0, 'layer_norm_epsilon': 1e-06, 'initializer_factor': 1.0, 'feed_forward_proj': 'gated-gelu', 'use_cache': True, 'dense_act_fn': 'gelu_new', 'is_gated_act': True, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float32', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': True, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['UMT5ForConditionalGeneration'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': 'T5Tokenizer', 'prefix': None, 'bos_token_id': None, 'pad_token_id': 0, 'eos_token_id': 1, 'sep_token_id': None, 'decoder_start_token_id': 0, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'google/umt5-small', 'transformers_version': '4.41.1', 'max_new_tokens': 64, 'scalable_attention': True, 'model_type': 'umt5', 'output_dir': '/kaggle/working', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': True, 'do_predict': False, 'eval_strategy': 'epoch', 'prediction_loss_only': False, 'per_device_train_batch_size': 8, 'per_device_eval_batch_size': 16, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 5e-05, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 3, 'max_steps': -1, 'lr_scheduler_type': 'linear', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/kaggle/working/runs/Jun03_11-19-32_743112a2decd', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 500, 'logging_nan_inf_filter': True, 'save_strategy': 'epoch', 'save_steps': 500, 'save_total_limit': 1, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': False, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 
'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '/kaggle/working', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': None, 'load_best_model_at_end': True, 'metric_for_best_model': 'loss', 'greater_is_better': False, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'adamw_torch', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['tensorboard', 'wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': 'epoch', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False, 'sortish_sampler': False, 'predict_with_generate': True, 'generation_max_length': None, 'generation_num_beams': None, 'generation_config': None}
+ 2024-06-03 11:58:53,885 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:58:53,885 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 11:58:53,892 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 11:58:53,894 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:58:53,895 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 11:58:53,900 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 11:59:05,976 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:59:05,977 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 11:59:05,982 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 11:59:06,530 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:59:06,530 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 11:59:06,539 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 11:59:06,935 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:59:06,935 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 11:59:06,941 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 11:59:07,129 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:59:07,130 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 11:59:07,135 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 11:59:07,401 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:59:07,402 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 11:59:07,411 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 11:59:07,692 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:59:07,693 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 11:59:07,700 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 11:59:08,134 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:59:08,135 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 11:59:08,141 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 11:59:11,075 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:59:11,075 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 11:59:11,081 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 11:59:13,485 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 11:59:13,486 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 12:01:42,196 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 12:01:42,425 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 12:01:42,425 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 12:03:34,285 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 12:03:34,465 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 12:03:34,465 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 12:03:38,980 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
+ 2024-06-03 12:03:39,116 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-06-03 12:03:39,116 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+ 2024-06-03 12:03:45,290 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
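The `config_cb` entry in the debug log above captures the full Trainer configuration for this run. Below is a minimal sketch of how those logged values map onto a `Seq2SeqTrainingArguments` object under transformers 4.41.1; the training and evaluation datasets are not part of this commit, so only the argument object and the base model are reconstructed, and wiring them into a `Seq2SeqTrainer` is left as a comment.

```python
# Sketch only: hyperparameters are taken from the config_cb entry in debug.log.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-small")

args = Seq2SeqTrainingArguments(
    output_dir="/kaggle/working",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=1,
    load_best_model_at_end=True,
    metric_for_best_model="loss",
    predict_with_generate=True,
    report_to=["tensorboard", "wandb"],
)

# A Seq2SeqTrainer would then be built from `model`, `args`, the tokenized
# train/eval datasets (not included in this repository), and a
# DataCollatorForSeq2Seq, followed by trainer.train().
```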
wandb/run-20240603_111947-zd4tutif/run-zd4tutif.wandb ADDED
Binary file (69.4 kB).