This version of Solar-10.7B was lasered and perplexity was calculated against gs

New baseline perplexity: 12.554274559020996

The laser process decreased perplexity by 2.41%
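The pre-laser perplexity isn't quoted in this excerpt, but it can be backed out from the two numbers above. A quick sanity check in Python (the 2.41% figure is taken at face value; the implied pre-laser value is a derived estimate, not a reported one):

```python
# New (post-laser) perplexity reported above.
new_ppl = 12.554274559020996

# Reported relative decrease from the laser pass.
pct_decrease = 2.41  # percent

# Back out the implied pre-laser perplexity:
# new = old * (1 - pct/100)  =>  old = new / (1 - pct/100)
implied_old_ppl = new_ppl / (1 - pct_decrease / 100)
print(f"implied pre-laser perplexity: {implied_old_ppl:.4f}")  # ~12.8643

# Recompute the decrease to confirm the numbers are self-consistent.
decrease = (implied_old_ppl - new_ppl) / implied_old_ppl * 100
print(f"decrease: {decrease:.2f}%")  # 2.41%
```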
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|-----------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[SOLAR-10.7B-Instruct-v1.0-laser](https://huggingface.co/macadeliccc/SOLAR-10.7B-Instruct-v1.0-laser)| 46.9| 74.99| 70.64| 43.74| 59.07|

### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |29.53|± | 2.87|
| | |acc_norm|28.35|± | 2.83|
|agieval_logiqa_en | 0|acc |39.78|± | 1.92|
| | |acc_norm|40.55|± | 1.93|
|agieval_lsat_ar | 0|acc |23.04|± | 2.78|
| | |acc_norm|21.30|± | 2.71|
|agieval_lsat_lr | 0|acc |51.18|± | 2.22|
| | |acc_norm|51.76|± | 2.21|
|agieval_lsat_rc | 0|acc |66.54|± | 2.88|
| | |acc_norm|66.91|± | 2.87|
|agieval_sat_en | 0|acc |78.16|± | 2.89|
| | |acc_norm|78.16|± | 2.89|
|agieval_sat_en_without_passage| 0|acc |50.97|± | 3.49|
| | |acc_norm|50.00|± | 3.49|
|agieval_sat_math | 0|acc |42.73|± | 3.34|
| | |acc_norm|38.18|± | 3.28|

Average: 46.9%

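The 46.9% average appears to be the mean of the acc_norm rows above (the mean of the plain acc rows comes out to roughly 47.7%). A quick check with the table values hard-coded:

```python
# acc_norm values from the AGIEval table above, in row order.
acc_norm = [28.35, 40.55, 21.30, 51.76, 66.91, 78.16, 50.00, 38.18]

agieval_avg = sum(acc_norm) / len(acc_norm)
print(f"AGIEval average: {agieval_avg:.2f}%")  # 46.90%
```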
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |60.84|± | 1.43|
| | |acc_norm|63.99|± | 1.40|
|arc_easy | 0|acc |83.59|± | 0.76|
| | |acc_norm|81.44|± | 0.80|
|boolq | 1|acc |87.58|± | 0.58|
|hellaswag | 0|acc |68.11|± | 0.47|
| | |acc_norm|85.77|± | 0.35|
|openbookqa | 0|acc |35.40|± | 2.14|
| | |acc_norm|48.40|± | 2.24|
|piqa | 0|acc |80.58|± | 0.92|
| | |acc_norm|80.74|± | 0.92|
|winogrande | 0|acc |77.03|± | 1.18|

Average: 74.99%

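The 74.99% figure matches the mean over one score per task: acc_norm where it is reported, plain acc for boolq and winogrande. A quick check:

```python
# One score per GPT4All task: acc_norm when available, else acc.
scores = {
    "arc_challenge": 63.99,  # acc_norm
    "arc_easy": 81.44,       # acc_norm
    "boolq": 87.58,          # acc (no acc_norm reported)
    "hellaswag": 85.77,      # acc_norm
    "openbookqa": 48.40,     # acc_norm
    "piqa": 80.74,           # acc_norm
    "winogrande": 77.03,     # acc (no acc_norm reported)
}

gpt4all_avg = sum(scores.values()) / len(scores)
print(f"GPT4All average: {gpt4all_avg:.2f}%")  # 74.99%
```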
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |55.45|± | 1.74|
| | |mc2 |70.64|± | 1.49|

Average: 70.64%

### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|57.37|± | 3.60|
|bigbench_date_understanding | 0|multiple_choice_grade|62.87|± | 2.52|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|35.66|± | 2.99|
|bigbench_geometric_shapes | 0|multiple_choice_grade|33.15|± | 2.49|
| | |exact_str_match | 0.00|± | 0.00|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|26.20|± | 1.97|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|19.71|± | 1.50|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|45.00|± | 2.88|
|bigbench_movie_recommendation | 0|multiple_choice_grade|39.00|± | 2.18|
|bigbench_navigate | 0|multiple_choice_grade|51.20|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|53.90|± | 1.11|
|bigbench_ruin_names | 0|multiple_choice_grade|40.18|± | 2.32|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|39.98|± | 1.55|
|bigbench_snarks | 0|multiple_choice_grade|63.54|± | 3.59|
|bigbench_sports_understanding | 0|multiple_choice_grade|68.36|± | 1.48|
|bigbench_temporal_sequences | 0|multiple_choice_grade|65.20|± | 1.51|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.48|± | 1.18|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|18.46|± | 0.93|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|45.00|± | 2.88|

Average: 43.74%

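The 43.74% figure is the mean of the multiple_choice_grade column (the exact_str_match row for geometric_shapes is not included). A quick check:

```python
# multiple_choice_grade values from the Bigbench table above, in row order.
grades = [
    57.37, 62.87, 35.66, 33.15, 26.20, 19.71, 45.00, 39.00, 51.20,
    53.90, 40.18, 39.98, 63.54, 68.36, 65.20, 22.48, 18.46, 45.00,
]

bigbench_avg = sum(grades) / len(grades)
print(f"Bigbench average: {bigbench_avg:.2f}%")  # 43.74%
```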
Average score: 59.07%

Elapsed time: 02:33:24
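The overall 59.07% is the unweighted mean of the four per-suite averages:

```python
# Per-suite averages from the sections above.
suite_averages = {
    "AGIEval": 46.9,
    "GPT4All": 74.99,
    "TruthfulQA": 70.64,
    "Bigbench": 43.74,
}

overall = sum(suite_averages.values()) / len(suite_averages)
print(f"Average score: {overall:.4f}%")
```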