Liu-Xiang committed
Commit 34e790a
1 Parent(s): cb01081

Model save

Files changed (1)
  1. README.md +97 -0
README.md ADDED
---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: sql-code-llama-alan
  results: []
library_name: peft
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# sql-code-llama-alan

This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4576

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

The following `bitsandbytes` quantization config was used during training (a load sketch follows the list):
- quant_method: bitsandbytes
- _load_in_8bit: True
- _load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- bnb_4bit_quant_storage: uint8
- load_in_4bit: False
- load_in_8bit: True
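
For reference, the same 8-bit setup can be reproduced with `transformers`; this is a minimal sketch, assuming only the non-default fields listed above need to be set explicitly (everything else keeps the `BitsAndBytesConfig` defaults):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit quantization config mirroring the values listed above; fields not passed here
# (skip_modules, fp32 CPU offload, the 4-bit options) keep their library defaults.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

# Load the frozen base model in 8-bit; device_map="auto" is an assumption,
# not something recorded in this card.
model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```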
### Training hyperparameters

The following hyperparameters were used during training (see the sketch after the list):
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 400
- mixed_precision_training: Native AMP

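A hedged reconstruction of these settings as `transformers` `TrainingArguments` (the output directory and the logging/eval cadence are assumptions; with a per-device train batch size of 32 and 4 gradient-accumulation steps on a single device, the effective batch size is the 128 listed above):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sql-code-llama-alan",  # assumed, taken from the model name
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # 32 * 4 = 128 effective train batch size
    max_steps=400,
    warmup_steps=100,
    lr_scheduler_type="linear",
    optim="adamw_torch",               # card lists Adam with betas=(0.9, 0.999), eps=1e-8;
                                       # adamw_torch is the transformers default optimizer
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
    evaluation_strategy="steps",       # assumed; the table below evaluates every 20 steps
    eval_steps=20,
    logging_steps=20,
)
```

These arguments would then be passed to a `Trainer` (or an SFT-style wrapper) together with the 8-bit base model and a PEFT/LoRA adapter; the adapter hyperparameters themselves are not recorded in this card.
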
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1992        | 0.0465 | 20   | 2.0335          |
| 1.14          | 0.0931 | 40   | 0.8371          |
| 0.8045        | 0.1396 | 60   | 0.6549          |
| 0.584         | 0.1862 | 80   | 0.5715          |
| 0.3807        | 0.2327 | 100  | 0.5561          |
| 0.5723        | 0.2792 | 120  | 0.5147          |
| 0.4262        | 0.3258 | 140  | 0.5056          |
| 0.6375        | 0.3723 | 160  | 0.5191          |
| 0.4839        | 0.4188 | 180  | 0.4865          |
| 0.3596        | 0.4654 | 200  | 0.4994          |
| 0.5285        | 0.5119 | 220  | 0.4803          |
| 0.4035        | 0.5585 | 240  | 0.4753          |
| 0.6019        | 0.6050 | 260  | 0.4772          |
| 0.4663        | 0.6515 | 280  | 0.4670          |
| 0.345         | 0.6981 | 300  | 0.4746          |
| 0.509         | 0.7446 | 320  | 0.4652          |
| 0.3946        | 0.7912 | 340  | 0.4614          |
| 0.5714        | 0.8377 | 360  | 0.4614          |
| 0.4525        | 0.8842 | 380  | 0.4585          |
| 0.3432        | 0.9308 | 400  | 0.4576          |

### Framework versions

- PEFT 0.6.0.dev0
- Transformers 4.44.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
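
The card does not include usage instructions, but a minimal inference sketch could look like the following, assuming the adapter is published as `Liu-Xiang/sql-code-llama-alan` and that the model expects a natural-language question to translate into SQL (the exact prompt format used for fine-tuning is not documented here):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

BASE = "codellama/CodeLlama-7b-hf"
ADAPTER = "Liu-Xiang/sql-code-llama-alan"  # assumed repo id for this adapter

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # same 8-bit setup as training
    device_map="auto",
)

# Attach the trained PEFT adapter on top of the frozen 8-bit base model.
model = PeftModel.from_pretrained(base_model, ADAPTER)
model.eval()

prompt = "Translate the following question into SQL: How many users signed up in 2023?"  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```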