End of training
wandb/debug-internal.log
CHANGED
@@ -7251,3 +7251,57 @@ subprocess.TimeoutExpired: Command '['conda', 'env', 'export']' timed out after
 2024-06-03 22:00:24,937 DEBUG SenderThread:322 [sender.py:send_request():405] send_request: summary_record
 2024-06-03 22:00:24,937 INFO SenderThread:322 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end
 2024-06-03 22:00:25,443 INFO Thread-12 :322 [dir_watcher.py:_on_file_modified():288] file/dir modified: /kaggle/working/wandb/run-20240603_175449-d191dh7n/files/wandb-summary.json
+2024-06-03 22:00:27,444 INFO Thread-12 :322 [dir_watcher.py:_on_file_modified():288] file/dir modified: /kaggle/working/wandb/run-20240603_175449-d191dh7n/files/output.log
+2024-06-03 22:00:27,456 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: pause
+2024-06-03 22:00:27,456 INFO HandlerThread:322 [handler.py:handle_request_pause():724] stopping system metrics thread
+2024-06-03 22:00:27,456 INFO HandlerThread:322 [system_monitor.py:finish():203] Stopping system monitor
+2024-06-03 22:00:27,456 DEBUG SystemMonitor:322 [system_monitor.py:_start():179] Finished system metrics aggregation loop
+2024-06-03 22:00:27,456 DEBUG SystemMonitor:322 [system_monitor.py:_start():183] Publishing last batch of metrics
+2024-06-03 22:00:27,457 INFO HandlerThread:322 [interfaces.py:finish():200] Joined cpu monitor
+2024-06-03 22:00:27,458 INFO HandlerThread:322 [interfaces.py:finish():200] Joined disk monitor
+2024-06-03 22:00:27,468 INFO HandlerThread:322 [interfaces.py:finish():200] Joined gpu monitor
+2024-06-03 22:00:27,468 INFO HandlerThread:322 [interfaces.py:finish():200] Joined memory monitor
+2024-06-03 22:00:27,469 INFO HandlerThread:322 [interfaces.py:finish():200] Joined network monitor
+2024-06-03 22:00:27,469 DEBUG SenderThread:322 [sender.py:send():378] send: stats
+2024-06-03 22:00:28,211 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:30,470 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:33,212 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:35,471 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:38,213 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:40,472 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:43,214 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:45,473 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:48,215 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:50,474 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:53,216 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:55,474 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:58,217 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:00,475 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:03,218 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:05,476 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:08,219 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:10,477 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:13,220 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:15,478 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:18,222 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:20,479 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:23,223 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:25,480 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:28,223 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:30,481 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:33,225 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:35,482 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:38,226 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:40,483 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:43,227 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:45,484 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:48,228 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:48,662 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: resume
+2024-06-03 22:01:48,662 INFO HandlerThread:322 [handler.py:handle_request_resume():715] starting system metrics thread
+2024-06-03 22:01:48,662 INFO HandlerThread:322 [system_monitor.py:start():194] Starting system monitor
+2024-06-03 22:01:48,662 INFO SystemMonitor:322 [system_monitor.py:_start():158] Starting system asset monitoring threads
+2024-06-03 22:01:48,663 INFO SystemMonitor:322 [interfaces.py:start():188] Started cpu monitoring
+2024-06-03 22:01:48,663 INFO SystemMonitor:322 [interfaces.py:start():188] Started disk monitoring
+2024-06-03 22:01:48,664 INFO SystemMonitor:322 [interfaces.py:start():188] Started gpu monitoring
+2024-06-03 22:01:48,666 INFO SystemMonitor:322 [interfaces.py:start():188] Started memory monitoring
+2024-06-03 22:01:48,667 INFO SystemMonitor:322 [interfaces.py:start():188] Started network monitoring
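
The pause/resume pair in this hunk is wandb's Jupyter integration at work: the backend and its system-metric monitors are suspended once no cell is executing (22:00:27) and restarted when the next cell runs (22:01:48), with only keepalive/status_report heartbeats in between. As a minimal sketch of the related public API (the project name is a placeholder; only the run id d191dh7n appears in these logs), a restarted kernel could re-attach to the same run like this:

```python
import wandb

# Hypothetical re-attach: "my-project" is illustrative; the run id comes
# from the run directory name run-20240603_175449-d191dh7n.
run = wandb.init(project="my-project", id="d191dh7n", resume="allow")
run.log({"heartbeat": 1})  # subsequent logging continues the same run history
wandb.finish()
```
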
wandb/debug.log
CHANGED
@@ -170,3 +170,6 @@ config: {}
 2024-06-03 20:50:14,119 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
 2024-06-03 20:50:15,412 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
 2024-06-03 20:50:16,872 INFO MainThread:34 [wandb_run.py:_config_callback():1376] config_cb None None {'vocab_size': 65024, 'hidden_size': 4544, 'num_hidden_layers': 32, 'num_attention_heads': 71, 'layer_norm_epsilon': 1e-05, 'initializer_range': 0.02, 'use_cache': False, 'hidden_dropout': 0.0, 'attention_dropout': 0.0, 'bos_token_id': 11, 'eos_token_id': 11, 'num_kv_heads': 71, 'alibi': False, 'new_decoder_architecture': False, 'multi_query': True, 'parallel_attn': True, 'bias': False, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['FalconForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'pad_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'tiiuae/falcon-7b', 'transformers_version': '4.41.1', 'apply_residual_connection_post_layernorm': False, 'auto_map': {'AutoConfig': 'tiiuae/falcon-7b--configuration_falcon.FalconConfig', 'AutoModel': 'tiiuae/falcon-7b--modeling_falcon.FalconModel', 'AutoModelForSequenceClassification': 'tiiuae/falcon-7b--modeling_falcon.FalconForSequenceClassification', 'AutoModelForTokenClassification': 'tiiuae/falcon-7b--modeling_falcon.FalconForTokenClassification', 'AutoModelForQuestionAnswering': 'tiiuae/falcon-7b--modeling_falcon.FalconForQuestionAnswering', 'AutoModelForCausalLM': 'tiiuae/falcon-7b--modeling_falcon.FalconForCausalLM'}, 'model_type': 'falcon', 'quantization_config': {'quant_method': 'QuantizationMethod.BITS_AND_BYTES', '_load_in_8bit': False, '_load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': None, 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': False, 'bnb_4bit_compute_dtype': 'bfloat16', 'bnb_4bit_quant_storage': 'uint8', 'load_in_4bit': True, 'load_in_8bit': False}, 'output_dir': '/kaggle/working/', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': False, 'do_predict': False, 'eval_strategy': 'no', 'prediction_loss_only': False, 'per_device_train_batch_size': 8, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 0.0002, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 20, 'max_steps': -1, 'lr_scheduler_type': 'linear', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/kaggle/working/runs/Jun03_20-50-07_f28ebe0d2526', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 10, 'logging_nan_inf_filter': True, 'save_strategy': 'epoch', 'save_steps': 500, 'save_total_limit': 4, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': True, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '/kaggle/working/', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'adamw_torch', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['tensorboard', 'wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': True, 'resume_from_checkpoint': None, 'hub_model_id': 'othmanfa/fsttModel', 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': None, 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': True, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False}
+2024-06-03 22:00:27,455 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+2024-06-03 22:00:27,455 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+2024-06-03 22:01:48,661 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
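
The config_cb payload above is the merged Falcon model config, quantization config, and TrainingArguments. For readability, here is a minimal sketch of a setup that would log these values — 4-bit NF4 quantization of tiiuae/falcon-7b plus the recorded trainer settings. All literals below are copied from the dump; the variable names are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

# 4-bit NF4 quantization, per the 'quantization_config' block in the dump.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
)

# Trainer settings, per the TrainingArguments fields in the same dump.
args = TrainingArguments(
    output_dir="/kaggle/working/",
    per_device_train_batch_size=8,
    learning_rate=2e-4,
    num_train_epochs=20,
    fp16=True,
    logging_steps=10,
    save_strategy="epoch",
    save_total_limit=4,
    auto_find_batch_size=True,
    push_to_hub=True,
    hub_model_id="othmanfa/fsttModel",
    report_to=["tensorboard", "wandb"],
)
```
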
wandb/run-20240603_175449-d191dh7n/files/output.log
CHANGED
@@ -329,3 +329,5 @@ Time to retrieve answer: 211.99833258299986
 /tmp/ipykernel_34/3516238434.py:9: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
   return {key: torch.tensor(val[idx]) for key, val in self.examples.items()}
 /opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
+  warnings.warn(
+/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
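
Both UserWarnings in this hunk state their own fix. For the copy-construct warning, a minimal sketch assuming `self.examples` is a dict of pre-tokenized tensors, as the ipykernel snippet suggests (the class name and the `input_ids` key are illustrative, not from the logs):

```python
import torch
from torch.utils.data import Dataset

class TokenizedDataset(Dataset):
    """Illustrative dataset matching the __getitem__ pattern in the warning."""

    def __init__(self, examples):
        self.examples = examples  # e.g. {"input_ids": Tensor, "attention_mask": Tensor}

    def __len__(self):
        return len(self.examples["input_ids"])

    def __getitem__(self, idx):
        # clone().detach() instead of torch.tensor(tensor) is exactly what
        # the copy-construct UserWarning recommends.
        return {key: val[idx].clone().detach() for key, val in self.examples.items()}
```

The checkpoint warning can likewise be silenced by choosing `use_reentrant` explicitly, e.g. `model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})` if checkpointing was enabled on the model directly — the TrainingArguments dump above shows `gradient_checkpointing: False`, so it was not enabled through the Trainer.
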
wandb/run-20240603_175449-d191dh7n/logs/debug-internal.log
CHANGED
@@ -7251,3 +7251,57 @@ subprocess.TimeoutExpired: Command '['conda', 'env', 'export']' timed out after
 2024-06-03 22:00:24,937 DEBUG SenderThread:322 [sender.py:send_request():405] send_request: summary_record
 2024-06-03 22:00:24,937 INFO SenderThread:322 [sender.py:_save_file():1389] saving file wandb-summary.json with policy end
 2024-06-03 22:00:25,443 INFO Thread-12 :322 [dir_watcher.py:_on_file_modified():288] file/dir modified: /kaggle/working/wandb/run-20240603_175449-d191dh7n/files/wandb-summary.json
+2024-06-03 22:00:27,444 INFO Thread-12 :322 [dir_watcher.py:_on_file_modified():288] file/dir modified: /kaggle/working/wandb/run-20240603_175449-d191dh7n/files/output.log
+2024-06-03 22:00:27,456 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: pause
+2024-06-03 22:00:27,456 INFO HandlerThread:322 [handler.py:handle_request_pause():724] stopping system metrics thread
+2024-06-03 22:00:27,456 INFO HandlerThread:322 [system_monitor.py:finish():203] Stopping system monitor
+2024-06-03 22:00:27,456 DEBUG SystemMonitor:322 [system_monitor.py:_start():179] Finished system metrics aggregation loop
+2024-06-03 22:00:27,456 DEBUG SystemMonitor:322 [system_monitor.py:_start():183] Publishing last batch of metrics
+2024-06-03 22:00:27,457 INFO HandlerThread:322 [interfaces.py:finish():200] Joined cpu monitor
+2024-06-03 22:00:27,458 INFO HandlerThread:322 [interfaces.py:finish():200] Joined disk monitor
+2024-06-03 22:00:27,468 INFO HandlerThread:322 [interfaces.py:finish():200] Joined gpu monitor
+2024-06-03 22:00:27,468 INFO HandlerThread:322 [interfaces.py:finish():200] Joined memory monitor
+2024-06-03 22:00:27,469 INFO HandlerThread:322 [interfaces.py:finish():200] Joined network monitor
+2024-06-03 22:00:27,469 DEBUG SenderThread:322 [sender.py:send():378] send: stats
+2024-06-03 22:00:28,211 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:30,470 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:33,212 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:35,471 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:38,213 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:40,472 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:43,214 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:45,473 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:48,215 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:50,474 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:53,216 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:00:55,474 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:00:58,217 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:00,475 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:03,218 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:05,476 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:08,219 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:10,477 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:13,220 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:15,478 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:18,222 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:20,479 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:23,223 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:25,480 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:28,223 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:30,481 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:33,225 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:35,482 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:38,226 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:40,483 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:43,227 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:45,484 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: status_report
+2024-06-03 22:01:48,228 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: keepalive
+2024-06-03 22:01:48,662 DEBUG HandlerThread:322 [handler.py:handle_request():158] handle_request: resume
+2024-06-03 22:01:48,662 INFO HandlerThread:322 [handler.py:handle_request_resume():715] starting system metrics thread
+2024-06-03 22:01:48,662 INFO HandlerThread:322 [system_monitor.py:start():194] Starting system monitor
+2024-06-03 22:01:48,662 INFO SystemMonitor:322 [system_monitor.py:_start():158] Starting system asset monitoring threads
+2024-06-03 22:01:48,663 INFO SystemMonitor:322 [interfaces.py:start():188] Started cpu monitoring
+2024-06-03 22:01:48,663 INFO SystemMonitor:322 [interfaces.py:start():188] Started disk monitoring
+2024-06-03 22:01:48,664 INFO SystemMonitor:322 [interfaces.py:start():188] Started gpu monitoring
+2024-06-03 22:01:48,666 INFO SystemMonitor:322 [interfaces.py:start():188] Started memory monitoring
+2024-06-03 22:01:48,667 INFO SystemMonitor:322 [interfaces.py:start():188] Started network monitoring
wandb/run-20240603_175449-d191dh7n/logs/debug.log
CHANGED
@@ -170,3 +170,6 @@ config: {}
 2024-06-03 20:50:14,119 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
 2024-06-03 20:50:15,412 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
 2024-06-03 20:50:16,872 INFO MainThread:34 [wandb_run.py:_config_callback():1376] config_cb None None {'vocab_size': 65024, 'hidden_size': 4544, 'num_hidden_layers': 32, 'num_attention_heads': 71, 'layer_norm_epsilon': 1e-05, 'initializer_range': 0.02, 'use_cache': False, 'hidden_dropout': 0.0, 'attention_dropout': 0.0, 'bos_token_id': 11, 'eos_token_id': 11, 'num_kv_heads': 71, 'alibi': False, 'new_decoder_architecture': False, 'multi_query': True, 'parallel_attn': True, 'bias': False, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['FalconForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'pad_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'tiiuae/falcon-7b', 'transformers_version': '4.41.1', 'apply_residual_connection_post_layernorm': False, 'auto_map': {'AutoConfig': 'tiiuae/falcon-7b--configuration_falcon.FalconConfig', 'AutoModel': 'tiiuae/falcon-7b--modeling_falcon.FalconModel', 'AutoModelForSequenceClassification': 'tiiuae/falcon-7b--modeling_falcon.FalconForSequenceClassification', 'AutoModelForTokenClassification': 'tiiuae/falcon-7b--modeling_falcon.FalconForTokenClassification', 'AutoModelForQuestionAnswering': 'tiiuae/falcon-7b--modeling_falcon.FalconForQuestionAnswering', 'AutoModelForCausalLM': 'tiiuae/falcon-7b--modeling_falcon.FalconForCausalLM'}, 'model_type': 'falcon', 'quantization_config': {'quant_method': 'QuantizationMethod.BITS_AND_BYTES', '_load_in_8bit': False, '_load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': None, 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': False, 'bnb_4bit_compute_dtype': 'bfloat16', 'bnb_4bit_quant_storage': 'uint8', 'load_in_4bit': True, 'load_in_8bit': False}, 'output_dir': '/kaggle/working/', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': False, 'do_predict': False, 'eval_strategy': 'no', 'prediction_loss_only': False, 'per_device_train_batch_size': 8, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 0.0002, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 20, 'max_steps': -1, 'lr_scheduler_type': 'linear', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/kaggle/working/runs/Jun03_20-50-07_f28ebe0d2526', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 10, 'logging_nan_inf_filter': True, 'save_strategy': 'epoch', 'save_steps': 500, 'save_total_limit': 4, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': True, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '/kaggle/working/', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'adamw_torch', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['tensorboard', 'wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': True, 'resume_from_checkpoint': None, 'hub_model_id': 'othmanfa/fsttModel', 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': None, 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': True, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False}
+2024-06-03 22:00:27,455 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+2024-06-03 22:00:27,455 INFO MainThread:34 [wandb_init.py:_pause_backend():431] pausing backend
+2024-06-03 22:01:48,661 INFO MainThread:34 [wandb_init.py:_resume_backend():436] resuming backend
wandb/run-20240603_175449-d191dh7n/run-d191dh7n.wandb
CHANGED
Binary files a/wandb/run-20240603_175449-d191dh7n/run-d191dh7n.wandb and b/wandb/run-20240603_175449-d191dh7n/run-d191dh7n.wandb differ