deuswoof committed
Commit 6b73e5d
1 Parent(s): 9bbf40c

Training in progress, step 10

25_10_23_config_test_1.csv CHANGED
@@ -1,5 +1,5 @@
 run_number,comment,peformed_already,num_train_epochs,max_tokens,temperature,stop_token,classification_of_valuems,stemming,lemmatization
 1,no variations,True,2,100,0.8,False,False,False,False
 2,lemmatization set True,True,2,100,0.8,False,False,False,True
-3,stemming set True,False,2,100,0.8,False,False,True,False
+3,stemming set True,True,2,100,0.8,False,False,True,False
 4,classification_of_valuems set True,False,2,100,0.8,False,True,False,False
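This change flips the `peformed_already` flag for run 3 (stemming) from False to True. As a hedged illustration only, a driver script could use that flag to skip runs that have already completed; the file name comes from this commit, but the loop body and column usage below are assumptions:

```python
import csv

# Hypothetical driver loop: read the run-configuration CSV from this commit
# and only launch runs whose "peformed_already" flag is still False.
with open("25_10_23_config_test_1.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["peformed_already"] == "True":
            continue  # after this commit, run 3 would be skipped as well
        print(f"would launch run {row['run_number']}: {row['comment']} "
              f"(epochs={row['num_train_epochs']}, temperature={row['temperature']})")
```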
25_10_23_results_real.csv CHANGED
@@ -7,4 +7,6 @@ run_number,items_per_minute,changed_settings,total_time_taken,rouge_scores_unnes
 6,1298.332329691182,lemmatization set True,20.33377695083618,0,0.0234711373369264,0.0467977115267001,0.0288934142507923,0.0311863787084191,0.058877789857329,0.0370268429313795,0.0388879758661536,0.072243256161802,0.0454032849437132,0.0006151898261944,0.0019602681449647,0.0009124951221292,0.0016722065919165,0.0047689525636188,0.0023495558998716,0.0030807466029328,0.008568927174078,0.0041591604999086
 7,0.0,0,0.0,0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
 8,0.0,0,0.0,0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
-9,0.0,0,0.0,0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
+9,1527.7772954831448,stemming set True,17.28000545501709,0,0.1983957450904019,0.2519780089622217,0.2098971903962357,0.2402826270196343,0.2928000782881022,0.2490765635230842,0.2850926200202243,0.3364948588993159,0.2903129396144104,0.0723938258535772,0.0800337747692468,0.0729619311369439,0.0946682401043626,0.1041986615172271,0.0940029666330387,0.118932059220299,0.1294571703280245,0.1171713046848759
+10,0.0,0,0.0,0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
+11,0.0,0,0.0,0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
README.md CHANGED
@@ -604,6 +604,18 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: True
 - bnb_4bit_compute_dtype: bfloat16
 
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+
 The following `bitsandbytes` quantization config was used during training:
 - quant_method: bitsandbytes
 - load_in_8bit: False
@@ -667,5 +679,6 @@ The following `bitsandbytes` quantization config was used during training:
 - PEFT 0.5.0
 - PEFT 0.5.0
 - PEFT 0.5.0
+- PEFT 0.5.0
 
 - PEFT 0.5.0
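The README change appends another copy of the same 4-bit quantization config block and one more PEFT 0.5.0 framework entry. As a hedged sketch, the listed values correspond to a `transformers.BitsAndBytesConfig` like the one below; the base model identifier is a placeholder, not a value stated in this commit:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantization settings as recorded in the README.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Placeholder model name; this commit does not state the base checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    "some-base-model",
    quantization_config=bnb_config,
)
```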
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:103edb18fc859a3f5f63febf5be8ffc53d0129d56347846c009b8fe30936003f
+oid sha256:4c23938b81ab1d3b4b11e0e15489146fb8ee91169102a31ddf4c9fc6c4a6979e
 size 100733709
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b6b9a0ba76adcc9dbd716fd62bef12fc052150ec7bc73f086b1b3f90f28fc206
+oid sha256:226ac5ca591c12b5c82dffad22b901a4a13f0bf0f1133767bde5c66fc9b56950
 size 100690288
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d8fcf31eee1ca50aec960836a217ffcbbc0de5f093c4172302bd6227a8926284
+oid sha256:db0866b30686144c47b08695b1a99d3431e7a665a2e0405212e33a457bf233a0
 size 4283
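The adapter_model.bin, adapter_model.safetensors, and training_args.bin entries are git-LFS pointer files, so only their SHA-256 object IDs change here; the new blobs hold the updated PEFT adapter weights (the README lists PEFT 0.5.0). A hedged sketch of loading such an adapter on top of a base model; both identifiers below are placeholders, not values from this commit:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Placeholders: neither identifier is stated in this commit.
base = AutoModelForCausalLM.from_pretrained("some-base-model")
model = PeftModel.from_pretrained(base, "some-user/some-adapter-repo")
model.eval()
```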