IliyanGochev committed
Commit 9282c62
1 parent: 2f12828

Training in progress epoch 8

README.md CHANGED
@@ -138,6 +138,30 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: False
 - bnb_4bit_compute_dtype: float32
 
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
 The following `bitsandbytes` quantization config was used during training:
 - quant_method: bitsandbytes
 - load_in_8bit: True
@@ -175,6 +199,8 @@ The following `bitsandbytes` quantization config was used during training:
 - PEFT 0.5.0
 - PEFT 0.5.0
 - PEFT 0.5.0
+- PEFT 0.5.0
+- PEFT 0.5.0
 
 - PEFT 0.5.0.dev0
 `bitsandbytes` quantization config was used during training:
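The quantization settings repeated in this README diff map one-to-one onto the keyword arguments of `transformers.BitsAndBytesConfig`. A minimal sketch of the recorded 8-bit config as a plain dict, so it stays dependency-free (passing it as `BitsAndBytesConfig(**bnb_kwargs)` is an assumption about how the training script built it, not something the diff shows):

```python
# The 8-bit bitsandbytes settings recorded in the diff, expressed as plain
# keyword arguments. These mirror transformers.BitsAndBytesConfig; with
# transformers installed they could be passed as BitsAndBytesConfig(**bnb_kwargs).
bnb_kwargs = {
    "load_in_8bit": True,               # 8-bit quantization is active...
    "load_in_4bit": False,              # ...so the 4-bit path is off
    "llm_int8_threshold": 6.0,          # outlier threshold for int8 matmuls
    "llm_int8_skip_modules": None,      # no modules excluded from quantization
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "fp4",       # 4-bit fields are inert defaults here
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float32",
}

# The two load flags are mutually exclusive: exactly one quantization path.
assert bnb_kwargs["load_in_8bit"] != bnb_kwargs["load_in_4bit"]
```

Note that because `load_in_8bit` is True, the three `bnb_4bit_*` entries are inactive defaults; only the `llm_int8_*` settings affect this run.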
Whisper PEFT Fine-Tuning/events.out.tfevents.1696436338.MLbox.300106.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:94bdcd96fa14cea6a0aa01b078c2b2b8c61be16b5b284676d11b3b1d69f4e6fa
-size 5704
+oid sha256:b9bd6b09d4942c038d660dff2cb242e14ed9dee1a1f4721a0b34777fb0445a0d
+size 5863
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c2aef5cc5861aa4178fadf6aec45ec69bb0ede0dd24faa5f79f5e0acd0d39cfb
+oid sha256:6af3df148b3ebd99b3b055a4741c5bb191ebece2eff08728b401ca84e4f8c422
 size 38697637
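The binary entries in this commit are Git LFS pointer files rather than the binaries themselves: a three-line text stub carrying the spec version, the SHA-256 of the real object, and its size in bytes. A minimal parser sketch (the pointer text is the new `adapter_model.bin` stub from the diff; the function name is our own):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into {'version', 'oid', 'size'}."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        # oid is prefixed with the hash algorithm, e.g. "sha256:<hex digest>"
        "oid": fields["oid"].split(":", 1)[1],
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:6af3df148b3ebd99b3b055a4741c5bb191ebece2eff08728b401ca84e4f8c422
size 38697637
"""
info = parse_lfs_pointer(pointer)
```

For `adapter_model.bin` only the oid changed while the size stayed at 38697637 bytes, i.e. the commit swapped in retrained adapter weights of identical serialized size.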
best_checkpoint/README.md CHANGED
@@ -424,6 +424,18 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: False
 - bnb_4bit_compute_dtype: float32
 
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
 The following `bitsandbytes` quantization config was used during training:
 - quant_method: bitsandbytes
 - load_in_8bit: True
@@ -472,5 +484,6 @@ The following `bitsandbytes` quantization config was used during training:
 - PEFT 0.5.0
 - PEFT 0.5.0
 - PEFT 0.5.0
+- PEFT 0.5.0
 
 - PEFT 0.5.0
best_checkpoint/adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:945fb9287c8252170548a62c8b31f0e2bac3a2123e2c7771498ff4b0ed86667f
+oid sha256:1a230cceee43a69525dbd7c377779c007f5c895483ddcb17f4df8654dd5c3efb
 size 38697637
best_checkpoint/random_states_1.pkl CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:97eef37a8daf39b503800acd6b12dacc4632d876f40eebb2435196bbc25f883d
+oid sha256:4e6e978f9942ec103bb5784eec46f3cc88230ee85fad0ef1c8dce41f5e45a225
 size 15755