Upload logs/train_codegen_20251129_215900.log with huggingface_hub
logs/train_codegen_20251129_215900.log
ADDED
| 1 |
+
2025-11-29 21:59:00 - train_codegen - INFO - Logging to: logs/codegen/train_codegen_20251129_215900.log
|
| 2 |
+
2025-11-29 21:59:00 - train_codegen - INFO - Monitor progress: tail -f logs/codegen/train_codegen_20251129_215900.log
|
| 3 |
+
2025-11-29 21:59:00 - train_codegen - INFO - ============================================================
|
| 4 |
+
2025-11-29 21:59:00 - train_codegen - INFO - CodeGen Training
|
| 5 |
+
2025-11-29 21:59:00 - train_codegen - INFO - ============================================================
|
| 6 |
+
2025-11-29 21:59:00 - train_codegen - INFO - Using CUDA device: 0
|
| 7 |
+
2025-11-29 21:59:00 - train_codegen - INFO - GPU: NVIDIA GeForce RTX 5090
|
| 8 |
+
2025-11-29 21:59:00 - train_codegen - INFO - Configuration:
|
| 9 |
+
2025-11-29 21:59:00 - train_codegen - INFO - model: Salesforce/codegen-350M-mono
|
| 10 |
+
2025-11-29 21:59:00 - train_codegen - INFO - data: datasets/java
|
| 11 |
+
2025-11-29 21:59:00 - train_codegen - INFO - output: model/checkpoints/run1-java-codegen
|
| 12 |
+
2025-11-29 21:59:00 - train_codegen - INFO - batch_size: 10
|
| 13 |
+
2025-11-29 21:59:00 - train_codegen - INFO - gradient_accumulation_steps: 4
|
| 14 |
+
2025-11-29 21:59:00 - train_codegen - INFO - effective_batch_size: 40
|
| 15 |
+
2025-11-29 21:59:00 - train_codegen - INFO - learning_rate: 5e-05
|
| 16 |
+
2025-11-29 21:59:00 - train_codegen - INFO - epochs: 5
|
| 17 |
+
2025-11-29 21:59:00 - train_codegen - INFO - max_length: 1024
|
| 18 |
+
2025-11-29 21:59:00 - train_codegen - INFO - max_steps: -1
|
| 19 |
+
2025-11-29 21:59:00 - train_codegen - INFO - fp16: True
|
| 20 |
+
2025-11-29 21:59:00 - train_codegen - INFO - gradient_checkpointing: True
|
| 21 |
+
2025-11-29 21:59:00 - train_codegen - INFO - seed: 42
|
| 22 |
+
2025-11-29 21:59:00 - train_codegen - INFO - Loading tokenizer and model: Salesforce/codegen-350M-mono
|
| 23 |
+
2025-11-29 21:59:13 - train_codegen - INFO - Loading model with gradient checkpointing enabled
|
| 24 |
+
2025-11-29 21:59:13 - train_codegen - INFO - Loading dataset...
|
| 25 |
+
2025-11-29 21:59:13 - train_codegen - INFO - Loading dataset from datasets/java
|
| 26 |
+
2025-11-29 21:59:16 - train_codegen - INFO - Train samples: 275962
|
| 27 |
+
2025-11-29 21:59:16 - train_codegen - INFO - Validation samples: 34495
|
| 28 |
+
2025-11-29 21:59:16 - train_codegen - INFO - ============================================================
|
| 29 |
+
2025-11-29 21:59:16 - train_codegen - INFO - Dataset Preprocessing
|
| 30 |
+
2025-11-29 21:59:16 - train_codegen - INFO - ============================================================
|
| 31 |
+
2025-11-29 21:59:16 - train_codegen - INFO - Preprocessing 275962 samples (optimized eager loading)...
|
| 32 |
+
2025-11-29 21:59:22 - train_codegen - INFO - Preprocessed 10000/275962 samples
|
| 33 |
+
2025-11-29 21:59:27 - train_codegen - INFO - Preprocessed 20000/275962 samples
|
| 34 |
+
2025-11-29 21:59:33 - train_codegen - INFO - Preprocessed 30000/275962 samples
|
| 35 |
+
2025-11-29 21:59:38 - train_codegen - INFO - Preprocessed 40000/275962 samples
|
| 36 |
+
2025-11-29 21:59:43 - train_codegen - INFO - Preprocessed 50000/275962 samples
|
| 37 |
+
2025-11-29 21:59:49 - train_codegen - INFO - Preprocessed 60000/275962 samples
|
| 38 |
+
2025-11-29 21:59:54 - train_codegen - INFO - Preprocessed 70000/275962 samples
|
| 39 |
+
2025-11-29 22:00:00 - train_codegen - INFO - Preprocessed 80000/275962 samples
|
| 40 |
+
2025-11-29 22:00:05 - train_codegen - INFO - Preprocessed 90000/275962 samples
|
| 41 |
+
2025-11-29 22:00:10 - train_codegen - INFO - Preprocessed 100000/275962 samples
|
| 42 |
+
2025-11-29 22:00:16 - train_codegen - INFO - Preprocessed 110000/275962 samples
|
| 43 |
+
2025-11-29 22:00:22 - train_codegen - INFO - Preprocessed 120000/275962 samples
|
| 44 |
+
2025-11-29 22:00:27 - train_codegen - INFO - Preprocessed 130000/275962 samples
|
| 45 |
+
2025-11-29 22:00:33 - train_codegen - INFO - Preprocessed 140000/275962 samples
|
| 46 |
+
2025-11-29 22:00:38 - train_codegen - INFO - Preprocessed 150000/275962 samples
|
| 47 |
+
2025-11-29 22:00:45 - train_codegen - INFO - Preprocessed 160000/275962 samples
|
| 48 |
+
2025-11-29 22:00:52 - train_codegen - INFO - Preprocessed 170000/275962 samples
|
| 49 |
+
2025-11-29 22:00:57 - train_codegen - INFO - Preprocessed 180000/275962 samples
|
| 50 |
+
2025-11-29 22:01:03 - train_codegen - INFO - Preprocessed 190000/275962 samples
|
| 51 |
+
2025-11-29 22:01:09 - train_codegen - INFO - Preprocessed 200000/275962 samples
|
| 52 |
+
2025-11-29 22:01:13 - train_codegen - INFO - Preprocessed 210000/275962 samples
|
| 53 |
+
2025-11-29 22:01:20 - train_codegen - INFO - Preprocessed 220000/275962 samples
|
| 54 |
+
2025-11-29 22:01:25 - train_codegen - INFO - Preprocessed 230000/275962 samples
|
| 55 |
+
2025-11-29 22:01:30 - train_codegen - INFO - Preprocessed 240000/275962 samples
|
| 56 |
+
2025-11-29 22:01:36 - train_codegen - INFO - Preprocessed 250000/275962 samples
|
| 57 |
+
2025-11-29 22:01:42 - train_codegen - INFO - Preprocessed 260000/275962 samples
|
| 58 |
+
2025-11-29 22:01:47 - train_codegen - INFO - Preprocessed 270000/275962 samples
|
| 59 |
+
2025-11-29 22:01:51 - train_codegen - INFO - Preprocessed 275962/275962 samples
|
| 60 |
+
2025-11-29 22:01:51 - train_codegen - INFO - Preprocessing complete: 275962 samples ready
|
| 61 |
+
2025-11-29 22:01:52 - train_codegen - INFO - Preprocessing 34495 samples (optimized eager loading)...
|
| 62 |
+
2025-11-29 22:01:56 - train_codegen - INFO - Preprocessed 10000/34495 samples
|
| 63 |
+
2025-11-29 22:02:01 - train_codegen - INFO - Preprocessed 20000/34495 samples
|
| 64 |
+
2025-11-29 22:02:08 - train_codegen - INFO - Preprocessed 30000/34495 samples
|
| 65 |
+
2025-11-29 22:02:10 - train_codegen - INFO - Preprocessed 34495/34495 samples
|
| 66 |
+
2025-11-29 22:02:10 - train_codegen - INFO - Preprocessing complete: 34495 samples ready
|
| 67 |
+
2025-11-29 22:02:11 - train_codegen - INFO - ============================================================
|
| 68 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Training Arguments
|
| 69 |
+
2025-11-29 22:02:11 - train_codegen - INFO - ============================================================
|
| 70 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Training log will be saved to: model/checkpoints/run1-java-codegen/training_log.csv
|
| 71 |
+
2025-11-29 22:02:11 - train_codegen - INFO - ============================================================
|
| 72 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Training Strategy
|
| 73 |
+
2025-11-29 22:02:11 - train_codegen - INFO - ============================================================
|
| 74 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Evaluation every 1000 steps (optimized for speed)
|
| 75 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Eval batch size: 20 (2x train batch)
|
| 76 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Eval accumulation steps: 4
|
| 77 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Save checkpoint every 2000 steps
|
| 78 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Gradient checkpointing: ENABLED (saves VRAM, slower training)
|
| 79 |
+
2025-11-29 22:02:11 - train_codegen - INFO - FP16 mixed precision enabled
|
| 80 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Dynamic padding per batch (10-20x faster than max_length padding)
|
| 81 |
+
2025-11-29 22:02:11 - train_codegen - INFO - ============================================================
|
| 82 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Starting Training
|
| 83 |
+
2025-11-29 22:02:11 - train_codegen - INFO - ============================================================
|
| 84 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Total training samples: 275962
|
| 85 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Total validation samples: 34495
|
| 86 |
+
2025-11-29 22:02:11 - train_codegen - INFO - Starting training from scratch
|
| 87 |
+
2025-12-01 02:38:40 - train_codegen - INFO - Training completed successfully
|
| 88 |
+
2025-12-01 02:38:40 - train_codegen - INFO - ============================================================
|
| 89 |
+
2025-12-01 02:38:40 - train_codegen - INFO - Saving Final Model
|
| 90 |
+
2025-12-01 02:38:40 - train_codegen - INFO - ============================================================
|
| 91 |
+
2025-12-01 02:38:42 - train_codegen - INFO - Model and tokenizer saved to model/checkpoints/run1-java-codegen
|
| 92 |
+
2025-12-01 02:38:42 - train_codegen - INFO - ============================================================
|
| 93 |
+
2025-12-01 02:38:42 - train_codegen - INFO - Training Summary
|
| 94 |
+
2025-12-01 02:38:42 - train_codegen - INFO - ============================================================
|
| 95 |
+
2025-12-01 02:38:42 - train_codegen - INFO - Total steps: 34495
|
| 96 |
+
2025-12-01 02:38:42 - train_codegen - INFO - Best model checkpoint: model/checkpoints/run1-java-codegen/checkpoint-20000
|
| 97 |
+
2025-12-01 02:38:42 - train_codegen - INFO - Best eval loss: 0.7098406553268433
|
| 98 |
+
2025-12-01 02:38:42 - train_codegen - INFO - Done.
|