deuswoof committed
Commit 8bd661a · 1 Parent(s): 9548cff

Training in progress, step 10
24_10_23_results.csv ADDED
@@ -0,0 +1,13 @@
+Column1,Column2,Column3
+0,0,0
+0,0,0
+0,0,0
+0,0,0
+10,0,0
+10,0,0
+30,0,0
+30,0,0
+0,0,0
+0,wer,0
+0,0,0
+0,wer,0
24_10_23_results_test.csv ADDED
@@ -0,0 +1,2 @@
+run_number,items_per_minute,total_time_taken,rouge_scores_unnest,rouge1 low Precision,rouge1 low Recall,rouge1 low F1 Score,rouge1 mid Precision,rouge1 mid Recall,rouge1 mid F1 Score,rouge1 high Precision,rouge1 high Recall,rouge1 high F1 Score,rouge2 low Precision,rouge2 low Recall,rouge2 low F1 Score,rouge2 mid Precision,rouge2 mid Recall,rouge2 mid F1 Score,rouge2 high Precision,rouge2 high Recall,rouge2 high F1 Score,learning_settings,preidiction_settings,additional_settings
+1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,,,
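The test CSV above tracks ROUGE-1 and ROUGE-2 precision, recall, and F1 at low/mid/high confidence bounds. As an illustration of what those columns measure, here is a minimal pure-Python sketch of ROUGE-N as n-gram overlap; this is an assumption for clarity, not the repo's actual scoring code (which presumably uses a library such as `rouge_score`):

```python
from collections import Counter

def rouge_n(reference: str, candidate: str, n: int = 1):
    """Illustrative ROUGE-N: n-gram overlap between candidate and reference."""
    def ngrams(text):
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    ref, cand = ngrams(reference), ngrams(candidate)
    overlap = sum((ref & cand).values())          # clipped n-gram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = rouge_n("the cat sat on the mat", "the cat lay on the mat", n=1)
```

Library implementations additionally bootstrap-resample to produce the low/mid/high intervals recorded in the CSV.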
README.md CHANGED
@@ -376,6 +376,18 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: True
 - bnb_4bit_compute_dtype: bfloat16
 
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+
 The following `bitsandbytes` quantization config was used during training:
 - quant_method: bitsandbytes
 - load_in_8bit: False
@@ -420,5 +432,6 @@ The following `bitsandbytes` quantization config was used during training:
 - PEFT 0.5.0
 - PEFT 0.5.0
 - PEFT 0.5.0
+- PEFT 0.5.0
 
 - PEFT 0.5.0
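The README hunk adds a 4-bit NF4 quantization config. As a sketch of how those settings map to code, assuming the Hugging Face `transformers` API (the dict below only mirrors the listed values; building the real `BitsAndBytesConfig` object requires `transformers` to be installed):

```python
# Keyword arguments mirroring the quantization config listed in the README.
# Hypothetical mapping for illustration; values copied verbatim from the diff.
bnb_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,
    "bnb_4bit_compute_dtype": "bfloat16",
}

# With transformers installed, this dict would typically be consumed as:
# from transformers import BitsAndBytesConfig
# quant_config = BitsAndBytesConfig(**bnb_kwargs)
```

Double quantization (`bnb_4bit_use_double_quant`) quantizes the quantization constants themselves, trading a little extra compute for lower memory.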
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9b0a0a93b3077aa6205dffe63f3b78745229a0bb1ab38322ccde596b37cea291
+oid sha256:cfbc873f4370004bff02bc73eb12eb0b949381663fb0c60e580dd0eadab215ac
 size 100733709
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:631ec82f0d2ba5f140e6a52d11ab8314e7dfdb6755333f2d0005b1e01fa20b1c
+oid sha256:c3292de5daf82aa621eb25f07ff15897b54cb831503d2378e6e5b897e5d6c32a
 size 100690288
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5fc3ddac7736daf5a0b783f70b6067e997f8d8d9cf537928371dda9625cf6c7a
+oid sha256:8a4935782e34e08b6a1cfb759d92a5aece09454b2b0a6e52cb1bd9da3f51bad3
 size 4091
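The binary artifacts above are stored via Git LFS, so the diff shows only the pointer files (version line, content hash, byte size), not the weights themselves. A small hypothetical helper for parsing such a pointer, using one of the pointers from this commit:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (spec v1) into its fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # The oid is prefixed with the hash algorithm, e.g. "sha256:<hex digest>".
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "algo": algo,
        "digest": digest,
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:8a4935782e34e08b6a1cfb759d92a5aece09454b2b0a6e52cb1bd9da3f51bad3
size 4091"""
info = parse_lfs_pointer(pointer)
```

A client can verify a downloaded object by hashing its bytes with the named algorithm and comparing the digest and size against the pointer.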