Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
An error occurred while generating the dataset: all the data files must have the same columns, but at some point there are 4 missing columns ({'result_metrics', 'eval_version', 'result_metrics_average', 'result_metrics_npm'}). This happened while the json dataset builder was generating data using hf://datasets/eduagarcia-temp/llm_pt_leaderboard_requests/152334H/miqu-1-70b-sf_eval_request_False_float16_Original.json (at revision dfc7bdf6dc79a68879c4574fb9a03a3810cbcf1e). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
Error code: UnexpectedError
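The error reflects schema drift between request files: requests that have already been evaluated carry the four result columns, while pending or failed ones do not (those fields show as null in the rows below). A minimal workaround sketch, not the leaderboard's own tooling: fetch the raw JSON request files and read them with plain json, so records with and without result_metrics can coexist. The one-directory-deep glob is an assumption based on the file path quoted in the error.

```python
# Sketch only: read the request files directly instead of through the
# datasets builder that produced the error above.
import glob
import json
import os

from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Download the raw dataset repository (JSON request files).
local_dir = snapshot_download(
    repo_id="eduagarcia-temp/llm_pt_leaderboard_requests",
    repo_type="dataset",
)

# Assumed layout: <org>/<model>_eval_request_*.json, as in the error message.
rows = []
for path in glob.glob(os.path.join(local_dir, "*", "*.json")):
    with open(path, encoding="utf-8") as f:
        rows.append(json.load(f))

finished = [r for r in rows if "result_metrics" in r]
print(f"{len(rows)} requests, {len(finished)} with results")
```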


Columns and types:
model (string)
base_model (string)
revision (string)
private (bool)
precision (string)
params (float64)
architectures (string)
weight_type (string)
status (string)
submitted_time (unknown)
model_type (string)
source (string)
job_id (int64)
job_start_time (string)
main_language (string)
eval_version (string)
result_metrics (dict)
result_metrics_average (float64)
result_metrics_npm (float64)
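Each preview row below is one evaluation request, rendered one field per line in the column order above (empty fields, such as a blank base_model, are simply skipped). For finished requests, result_metrics holds the nine per-task scores and result_metrics_average is their simple mean; result_metrics_npm appears to be a separate normalized aggregate (it can be negative for weak models). A minimal check, using the metrics of the first row below (01-ai/Yi-34B-200K):

```python
# Recompute result_metrics_average for the first preview row; the reported
# value 0.688139 is the plain mean of the nine task scores.
from statistics import mean

result_metrics = {
    "enem_challenge": 0.7172848145556333,
    "bluex": 0.6481223922114048,
    "oab_exams": 0.5517084282460136,
    "assin2_rte": 0.9097218456052794,
    "assin2_sts": 0.7390390977418284,
    "faquad_nli": 0.49676238738738737,
    "hatebr_offensive": 0.8117947554592124,
    "portuguese_hate_speech": 0.7007076712295253,
    "tweetsentbr": 0.6181054682174745,
}

print(round(mean(result_metrics.values()), 6))  # 0.688139
```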
01-ai/Yi-34B-200K
main
false
bfloat16
34.389
LlamaForCausalLM
Original
FINISHED
"2024-02-05T23:18:19"
🟢 : pretrained
script
480
2024-04-17T23-49-34.862700
English
1.1.0
{ "enem_challenge": 0.7172848145556333, "bluex": 0.6481223922114048, "oab_exams": 0.5517084282460136, "assin2_rte": 0.9097218456052794, "assin2_sts": 0.7390390977418284, "faquad_nli": 0.49676238738738737, "hatebr_offensive": 0.8117947554592124, "portuguese_hate_speech": 0.7007076712295253, "tweetsentbr": 0.6181054682174745 }
0.688139
0.523233
01-ai/Yi-34B-Chat
main
false
bfloat16
34.389
LlamaForCausalLM
Original
FINISHED
"2024-02-27T00:40:17"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
272
2024-02-28T08-14-36.046639
English
1.1.0
{ "enem_challenge": 0.7123862841147656, "bluex": 0.6328233657858137, "oab_exams": 0.5202733485193621, "assin2_rte": 0.924014535978148, "assin2_sts": 0.7419038025688336, "faquad_nli": 0.7157210401891253, "hatebr_offensive": 0.7198401711140126, "portuguese_hate_speech": 0.7135410538975384, "tweetsentbr": 0.6880686233555414 }
0.707619
0.557789
01-ai/Yi-34B
main
false
bfloat16
34.389
LlamaForCausalLM
Original
FINISHED
"2024-02-05T23:05:39"
🟢 : pretrained
script
440
2024-04-13T15-53-49.411062
English
1.1.0
{ "enem_challenge": 0.7207837648705389, "bluex": 0.6648122392211405, "oab_exams": 0.5599088838268793, "assin2_rte": 0.917882167398896, "assin2_sts": 0.76681855136608, "faquad_nli": 0.7798334442926054, "hatebr_offensive": 0.8107834570679608, "portuguese_hate_speech": 0.6224786612758311, "tweetsentbr": 0.7320656959105744 }
0.730596
0.591978
01-ai/Yi-6B-200K
main
false
bfloat16
6.061
LlamaForCausalLM
Original
FINISHED
"2024-02-05T23:18:12"
🟢 : pretrained
script
469
2024-04-16T17-07-31.622853
English
1.1.0
{ "enem_challenge": 0.5423372988103569, "bluex": 0.4673157162726008, "oab_exams": 0.4328018223234624, "assin2_rte": 0.40523403335417163, "assin2_sts": 0.4964641013268987, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.4892942520605069, "portuguese_hate_speech": 0.6053769911504425, "tweetsentbr": 0.6290014694641435 }
0.500831
0.214476
01-ai/Yi-6B-Chat
main
false
bfloat16
6.061
LlamaForCausalLM
Original
FINISHED
"2024-02-27T00:40:39"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
273
2024-02-28T14-35-07.615539
English
1.1.0
{ "enem_challenge": 0.5570328901329601, "bluex": 0.5006954102920723, "oab_exams": 0.4118451025056948, "assin2_rte": 0.7948490568935549, "assin2_sts": 0.5684271643349206, "faquad_nli": 0.637960088691796, "hatebr_offensive": 0.775686136523575, "portuguese_hate_speech": 0.5712377041472934, "tweetsentbr": 0.5864804330790114 }
0.600468
0.40261
01-ai/Yi-6B
main
false
bfloat16
6.061
LlamaForCausalLM
Original
FINISHED
"2024-02-05T23:04:05"
🟢 : pretrained
script
228
2024-02-17T03-42-08.504508
English
1.1.0
{ "enem_challenge": 0.5689293212036389, "bluex": 0.5132127955493742, "oab_exams": 0.4460136674259681, "assin2_rte": 0.7903932929806128, "assin2_sts": 0.5666878345297481, "faquad_nli": 0.5985418799210473, "hatebr_offensive": 0.7425595238095237, "portuguese_hate_speech": 0.6184177704320946, "tweetsentbr": 0.5081067075683067 }
0.594763
0.391626
01-ai/Yi-9B-200k
main
false
bfloat16
8.829
LlamaForCausalLM
Original
FINISHED
"2024-04-13T05:22:25"
🟢 : pretrained
leaderboard
451
2024-04-14T12-49-52.148781
English
1.1.0
{ "enem_challenge": 0.6564030790762772, "bluex": 0.5354659248956884, "oab_exams": 0.5056947608200456, "assin2_rte": 0.8708321784112503, "assin2_sts": 0.7508245525986388, "faquad_nli": 0.7162112665738773, "hatebr_offensive": 0.8238294119604646, "portuguese_hate_speech": 0.6723821369343758, "tweetsentbr": 0.7162549372015228 }
0.694211
0.54216
01-ai/Yi-9B
main
false
bfloat16
8.829
LlamaForCausalLM
Original
FINISHED
"2024-04-13T05:20:56"
🟢 : pretrained
leaderboard
453
2024-04-14T11-08-02.891090
English
1.1.0
{ "enem_challenge": 0.6759972008397481, "bluex": 0.5493741307371349, "oab_exams": 0.4783599088838269, "assin2_rte": 0.8784695970900473, "assin2_sts": 0.752860308487488, "faquad_nli": 0.7478708154144531, "hatebr_offensive": 0.8574531631821884, "portuguese_hate_speech": 0.6448598532923182, "tweetsentbr": 0.6530471966712571 }
0.693144
0.542367
152334H/miqu-1-70b-sf
main
false
float16
68.977
LlamaForCausalLM
Original
PENDING
"2024-04-26T08:25:57"
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
-1
null
English
null
null
null
null
22h/cabrita-lora-v0-1
huggyllama/llama-7b
main
false
float16
0
?
Adapter
FAILED
"2024-02-05T23:03:11"
🔶 : fine-tuned
script
336
2024-04-02T03-54-36.291839
Portuguese
null
null
null
null
22h/cabrita_7b_pt_850000
main
false
float16
7
LlamaForCausalLM
Original
FINISHED
"2024-02-11T13:34:40"
🆎 : language adapted models (FP, FT, ...)
script
305
2024-03-08T02-07-35.059732
Portuguese
1.1.0
{ "enem_challenge": 0.22533240027991602, "bluex": 0.23087621696801114, "oab_exams": 0.2920273348519362, "assin2_rte": 0.3333333333333333, "assin2_sts": 0.1265472264440735, "faquad_nli": 0.17721518987341772, "hatebr_offensive": 0.5597546967409981, "portuguese_hate_speech": 0.490163110698825, "tweetsentbr": 0.4575265405956153 }
0.32142
-0.032254
22h/open-cabrita3b
main
false
float16
3
LlamaForCausalLM
Original
FINISHED
"2024-02-11T13:34:36"
🆎 : language adapted models (FP, FT, ...)
script
285
2024-02-28T16-38-27.766897
Portuguese
1.1.0
{ "enem_challenge": 0.17984604618614417, "bluex": 0.2114047287899861, "oab_exams": 0.22687927107061504, "assin2_rte": 0.4301327637723658, "assin2_sts": 0.08919111846797594, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.5046251022011318, "portuguese_hate_speech": 0.4118866620594333, "tweetsentbr": 0.47963247012405114 }
0.330361
-0.005342
AI-Sweden-Models/gpt-sw3-20b
main
false
float16
20.918
GPT2LMHeadModel
Original
PENDING_NEW_EVAL
"2024-02-05T23:15:38"
🟢 : pretrained
script
102
2024-02-08T16-32-05.080295
English
null
null
null
null
AI-Sweden-Models/gpt-sw3-40b
main
false
float16
39.927
GPT2LMHeadModel
Original
FINISHED
"2024-02-05T23:15:47"
🟢 : pretrained
script
253
2024-02-21T07-59-22.606213
English
1.1.0
{ "enem_challenge": 0.2358292512246326, "bluex": 0.2809457579972184, "oab_exams": 0.2542141230068337, "assin2_rte": 0.4096747911636189, "assin2_sts": 0.17308746611294112, "faquad_nli": 0.5125406216148655, "hatebr_offensive": 0.3920230910522173, "portuguese_hate_speech": 0.4365404510655907, "tweetsentbr": 0.491745311259787 }
0.354067
0.018354
AI-Sweden-Models/gpt-sw3-6.7b-v2
main
false
float16
7.111
GPT2LMHeadModel
Original
FINISHED
"2024-02-05T23:15:31"
🟢 : pretrained
script
462
2024-04-16T00-18-50.805343
English
1.1.0
{ "enem_challenge": 0.22813156053184044, "bluex": 0.23504867872044508, "oab_exams": 0.23097949886104785, "assin2_rte": 0.5833175952742944, "assin2_sts": 0.14706689693418745, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.5569200631123247, "portuguese_hate_speech": 0.5048069947120815, "tweetsentbr": 0.45897627809523983 }
0.3761
0.073856
AI-Sweden-Models/gpt-sw3-6.7b
main
false
float16
7.111
GPT2LMHeadModel
Original
FINISHED
"2024-02-05T23:15:23"
🟢 : pretrained
script
466
2024-04-15T22-34-55.424388
English
1.1.0
{ "enem_challenge": 0.21133659902029392, "bluex": 0.2573018080667594, "oab_exams": 0.2296127562642369, "assin2_rte": 0.6192900448928588, "assin2_sts": 0.08103924791097977, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.40737531518832293, "portuguese_hate_speech": 0.4441161100880904, "tweetsentbr": 0.433837189305867 }
0.347063
0.024837
AdaptLLM/finance-LLM-13B
main
false
float16
13
LlamaForCausalLM
Original
FINISHED
"2024-02-11T13:37:27"
🔶 : fine-tuned
script
555
2024-04-24T18-00-42.073230
English
1.1.0
{ "enem_challenge": 0.4730580825752274, "bluex": 0.3852573018080668, "oab_exams": 0.36173120728929387, "assin2_rte": 0.8704914563684142, "assin2_sts": 0.6914158506759536, "faquad_nli": 0.6137142857142857, "hatebr_offensive": 0.8210157972117231, "portuguese_hate_speech": 0.6648065091139095, "tweetsentbr": 0.6129534464124105 }
0.610494
0.4269
AdaptLLM/finance-LLM
main
false
float16
0
LLaMAForCausalLM
Original
FINISHED
"2024-02-11T13:37:12"
🔶 : fine-tuned
script
545
2024-04-24T13-38-50.219195
English
1.1.0
{ "enem_challenge": 0.37578726382085376, "bluex": 0.2906815020862309, "oab_exams": 0.3011389521640091, "assin2_rte": 0.7173994459883221, "assin2_sts": 0.3141019003448064, "faquad_nli": 0.6856866537717602, "hatebr_offensive": 0.6665618718263835, "portuguese_hate_speech": 0.3323844809709906, "tweetsentbr": 0.5501887299910238 }
0.470437
0.214015
AdaptLLM/law-LLM-13B
main
false
float16
13
LlamaForCausalLM
Original
FINISHED
"2024-02-11T13:37:17"
🔶 : fine-tuned
script
551
2024-04-24T17-20-32.289644
English
1.1.0
{ "enem_challenge": 0.48915325402379284, "bluex": 0.3796940194714882, "oab_exams": 0.36082004555808656, "assin2_rte": 0.7762008093366958, "assin2_sts": 0.6862803522831282, "faquad_nli": 0.5589431210148192, "hatebr_offensive": 0.7648719048333295, "portuguese_hate_speech": 0.6972417545621965, "tweetsentbr": 0.5969146546466134 }
0.590013
0.387281
AdaptLLM/law-LLM
main
false
float16
0
LLaMAForCausalLM
Original
FINISHED
"2024-02-11T13:37:01"
🔶 : fine-tuned
script
550
2024-04-24T01-23-04.736612
English
1.1.0
{ "enem_challenge": 0.3932820153953814, "bluex": 0.3157162726008345, "oab_exams": 0.3034168564920273, "assin2_rte": 0.7690457097032879, "assin2_sts": 0.2736321836385559, "faquad_nli": 0.6837598520969155, "hatebr_offensive": 0.6310564282443625, "portuguese_hate_speech": 0.32991640141820316, "tweetsentbr": 0.4897974076561671 }
0.465514
0.208557
AdaptLLM/medicine-LLM-13B
main
false
float16
13
LlamaForCausalLM
Original
FINISHED
"2024-02-11T13:37:22"
🔶 : fine-tuned
script
553
2024-04-24T17-45-23.659613
English
1.1.0
{ "enem_challenge": 0.45976207137858643, "bluex": 0.37552155771905427, "oab_exams": 0.3553530751708428, "assin2_rte": 0.802953910231819, "assin2_sts": 0.6774179667769704, "faquad_nli": 0.7227569273678784, "hatebr_offensive": 0.8155967923139503, "portuguese_hate_speech": 0.6722790404040404, "tweetsentbr": 0.5992582348356217 }
0.608989
0.426546
AdaptLLM/medicine-LLM
main
false
float16
0
LLaMAForCausalLM
Original
FINISHED
"2024-02-11T13:37:07"
🔶 : fine-tuned
script
550
2024-04-24T13-09-44.649718
English
1.1.0
{ "enem_challenge": 0.3806857942617215, "bluex": 0.3129346314325452, "oab_exams": 0.28610478359908886, "assin2_rte": 0.7412241742464284, "assin2_sts": 0.30610797857979344, "faquad_nli": 0.6385993049986635, "hatebr_offensive": 0.4569817890542286, "portuguese_hate_speech": 0.26575729349526506, "tweetsentbr": 0.4667966458909563 }
0.428355
0.135877
AetherResearch/Cerebrum-1.0-7b
main
false
float16
7.242
MistralForCausalLM
Original
FINISHED
"2024-03-14T11:07:59"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
332
2024-04-01T22-58-48.098123
English
1.1.0
{ "enem_challenge": 0.6137158852344297, "bluex": 0.5062586926286509, "oab_exams": 0.44510250569476084, "assin2_rte": 0.8562832789419443, "assin2_sts": 0.7083110279713039, "faquad_nli": 0.7709976024119299, "hatebr_offensive": 0.7925948726646638, "portuguese_hate_speech": 0.6342708554907774, "tweetsentbr": 0.6171926929726294 }
0.660525
0.494853
BAAI/Aquila-7B
main
false
float16
7
AquilaModel
Original
FINISHED
"2024-02-05T23:09:00"
🟢 : pretrained
script
343
2024-04-03T05-32-42.254781
?
1.1.0
{ "enem_challenge": 0.3275017494751575, "bluex": 0.2795549374130737, "oab_exams": 0.3047835990888383, "assin2_rte": 0.7202499022958302, "assin2_sts": 0.04640761012170769, "faquad_nli": 0.47034320848362593, "hatebr_offensive": 0.6981236353283272, "portuguese_hate_speech": 0.4164993156903397, "tweetsentbr": 0.4656320326711388 }
0.414344
0.144131
BAAI/Aquila2-34B
main
false
bfloat16
34
LlamaForCausalLM
Original
FINISHED
"2024-02-05T23:10:17"
🟢 : pretrained
script
484
2024-04-18T14-04-47.026230
?
1.1.0
{ "enem_challenge": 0.5479356193142058, "bluex": 0.4381084840055633, "oab_exams": 0.40455580865603646, "assin2_rte": 0.8261661293083891, "assin2_sts": 0.643049056717646, "faquad_nli": 0.4471267110923455, "hatebr_offensive": 0.4920183585480058, "portuguese_hate_speech": 0.6606858054226475, "tweetsentbr": 0.5598737392847967 }
0.557724
0.319206
BAAI/Aquila2-7B
main
false
float16
7
AquilaModel
Original
FINISHED
"2024-02-05T23:09:07"
🟢 : pretrained
script
360
2024-04-03T05-55-31.957348
?
1.1.0
{ "enem_challenge": 0.20573827851644508, "bluex": 0.14464534075104313, "oab_exams": 0.3225512528473804, "assin2_rte": 0.5426094787796916, "assin2_sts": 0.3589709171853071, "faquad_nli": 0.49799737773227726, "hatebr_offensive": 0.642139037433155, "portuguese_hate_speech": 0.5212215320910973, "tweetsentbr": 0.2826286167270258 }
0.390945
0.091046
Bruno/Caramelinho
ybelkada/falcon-7b-sharded-bf16
main
false
bfloat16
0
?
Adapter
FINISHED
"2024-02-24T18:01:08"
🆎 : language adapted models (FP, FT, ...)
leaderboard
256
2024-02-26T15-17-54.708968
Portuguese
1.1.0
{ "enem_challenge": 0.21483554933519944, "bluex": 0.2211404728789986, "oab_exams": 0.25148063781321184, "assin2_rte": 0.4896626375608876, "assin2_sts": 0.19384903999896694, "faquad_nli": 0.43917169974115616, "hatebr_offensive": 0.3396512838306731, "portuguese_hate_speech": 0.46566706851516976, "tweetsentbr": 0.563106045239156 }
0.353174
0.017928
Bruno/Caramelo_7B
ybelkada/falcon-7b-sharded-bf16
main
false
bfloat16
7
?
Adapter
FINISHED
"2024-02-24T18:00:57"
🆎 : language adapted models (FP, FT, ...)
leaderboard
255
2024-02-26T13-57-57.036659
Portuguese
1.1.0
{ "enem_challenge": 0.1980405878236529, "bluex": 0.24478442280945759, "oab_exams": 0.2528473804100228, "assin2_rte": 0.5427381481762671, "assin2_sts": 0.07473225338478715, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.33650009913117634, "portuguese_hate_speech": 0.412292817679558, "tweetsentbr": 0.35365936890599253 }
0.31725
-0.028868
CohereForAI/aya-101
main
false
float16
12.921
T5ForConditionalGeneration
Original
FINISHED
"2024-02-17T03:43:40"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
253
2024-02-21T19-25-38.847154
English
1.1.0
{ "enem_challenge": 0.5703289013296011, "bluex": 0.47844228094575797, "oab_exams": 0.3895216400911162, "assin2_rte": 0.845896116707975, "assin2_sts": 0.18932506997017534, "faquad_nli": 0.3536861536119358, "hatebr_offensive": 0.8577866430260047, "portuguese_hate_speech": 0.5858880778588808, "tweetsentbr": 0.7292099162284759 }
0.555565
0.354086
CohereForAI/c4ai-command-r-plus-4bit
main
false
4bit
55.052
CohereForCausalLM
Original
FINISHED
"2024-04-05T14:50:15"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
464
2024-04-15T16-05-38.445928
English
1.1.0
{ "enem_challenge": 0.7508747375787264, "bluex": 0.6620305980528511, "oab_exams": 0.6255125284738041, "assin2_rte": 0.9301234467745643, "assin2_sts": 0.7933785386356376, "faquad_nli": 0.7718257450767017, "hatebr_offensive": 0.773798484417851, "portuguese_hate_speech": 0.7166167166167167, "tweetsentbr": 0.7540570104676597 }
0.753135
0.625007
CohereForAI/c4ai-command-r-plus
main
false
float16
103.811
CohereForCausalLM
Original
FAILED
"2024-04-07T18:08:25"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
500
2024-04-19T08-13-47.793067
English
null
null
null
null
CohereForAI/c4ai-command-r-v01
main
false
float16
34.981
CohereForCausalLM
Original
FINISHED
"2024-04-05T14:48:52"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
472
2024-04-17T00-36-42.568466
English
1.1.0
{ "enem_challenge": 0.7158852344296711, "bluex": 0.6203059805285118, "oab_exams": 0.5521640091116173, "assin2_rte": 0.883132179380006, "assin2_sts": 0.7210331309303998, "faquad_nli": 0.47272296015180265, "hatebr_offensive": 0.8222299935886227, "portuguese_hate_speech": 0.7102306144559665, "tweetsentbr": 0.48595613300125107 }
0.664851
0.488799
DAMO-NLP-MT/polylm-1.7b
main
false
float16
1.7
GPT2LMHeadModel
Original
FINISHED
"2024-02-11T13:34:48"
🟢 : pretrained
script
478
2024-04-17T23-46-04.491918
English
1.1.0
{ "enem_challenge": 0.1966410076976907, "bluex": 0.26564673157162727, "oab_exams": 0.24874715261959, "assin2_rte": 0.4047692251758633, "assin2_sts": 0.05167868234986358, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.358843537414966, "portuguese_hate_speech": 0.4530026545569895, "tweetsentbr": 0.22711575772255002 }
0.294011
-0.067176
DAMO-NLP-MT/polylm-13b
main
false
float16
13
PolyLMHeadModel
Original
FINISHED
"2024-02-11T13:34:54"
🟢 : pretrained
script
345
2024-04-03T09-53-29.935717
English
1.1.0
{ "enem_challenge": 0, "bluex": 0, "oab_exams": 0, "assin2_rte": 0, "assin2_sts": 0, "faquad_nli": 0, "hatebr_offensive": 0, "portuguese_hate_speech": 0, "tweetsentbr": 0 }
0
-0.568819
Deci/DeciLM-6b
main
false
bfloat16
5.717
DeciLMForCausalLM
Original
FAILED
"2024-02-05T23:06:24"
🔶 : fine-tuned
script
253
2024-02-25T19-40-34.104437
English
null
null
null
null
Deci/DeciLM-7B
main
false
bfloat16
7.044
DeciLMForCausalLM
Original
FINISHED
"2024-02-05T23:06:34"
🔶 : fine-tuned
script
336
2024-04-02T05-42-17.715000
English
1.1.0
{ "enem_challenge": 0.5423372988103569, "bluex": 0.4200278164116829, "oab_exams": 0.358997722095672, "assin2_rte": 0.9123267863598024, "assin2_sts": 0.7555893659678592, "faquad_nli": 0.7857378310075815, "hatebr_offensive": 0.6990533471973728, "portuguese_hate_speech": 0.6754461749208054, "tweetsentbr": 0.6506550848022137 }
0.644463
0.474065
Doctor-Shotgun/limarp-miqu-1-70b-qlora
152334H/miqu-1-70b-sf
main
false
float16
70
?
Adapter
PENDING
"2024-04-26T08:26:41"
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
-1
null
English
null
null
null
null
EleutherAI/gpt-j-6b
main
false
float16
6
GPTJForCausalLM
Original
FINISHED
"2024-02-05T23:12:19"
🟢 : pretrained
script
387
2024-04-05T04-41-08.855450
English
1.1.0
{ "enem_challenge": 0.21973407977606718, "bluex": 0.2364394993045897, "oab_exams": 0.25466970387243737, "assin2_rte": 0.3582588385476761, "assin2_sts": 0.14562487212003206, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.6588376162844248, "portuguese_hate_speech": 0.5468502264582175, "tweetsentbr": 0.3534145441122185 }
0.357054
0.040386
EleutherAI/gpt-neo-1.3B
main
false
float16
1.366
GPTNeoForCausalLM
Original
FINISHED
"2024-02-05T23:12:06"
🟢 : pretrained
script
369
2024-04-04T01-04-14.137713
English
1.1.0
{ "enem_challenge": 0.20153953813855843, "bluex": 0.1835883171070932, "oab_exams": 0.2419134396355353, "assin2_rte": 0.3333333333333333, "assin2_sts": 0.27954490493177114, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.22986425339366515, "tweetsentbr": 0.15870406189555125 }
0.266831
-0.134397
EleutherAI/gpt-neo-125m
main
false
float16
0.15
GPTNeoForCausalLM
Original
FINISHED
"2024-02-05T23:11:59"
🟢 : pretrained
script
368
2024-04-04T00-23-23.313643
English
1.1.0
{ "enem_challenge": 0.18824352694191743, "bluex": 0.18497913769123783, "oab_exams": 0.22460136674259681, "assin2_rte": 0.40826127460837947, "assin2_sts": 0.13567407692821803, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3359391417643845, "portuguese_hate_speech": 0.23174470457079152, "tweetsentbr": 0.1506866897702477 }
0.255532
-0.13829
EleutherAI/gpt-neo-2.7B
main
false
float16
2.718
GPTNeoForCausalLM
Original
FINISHED
"2024-02-05T23:12:14"
🟢 : pretrained
script
368
2024-04-04T01-08-46.345259
English
1.1.0
{ "enem_challenge": 0.19244226731980407, "bluex": 0.21696801112656466, "oab_exams": 0.24236902050113895, "assin2_rte": 0.34680711177144763, "assin2_sts": 0.2028018720426534, "faquad_nli": 0.44921692379616646, "hatebr_offensive": 0.3686829976188286, "portuguese_hate_speech": 0.23174470457079152, "tweetsentbr": 0.27297529346501975 }
0.280445
-0.107237
EleutherAI/gpt-neox-20b
main
false
float16
20.739
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-05T23:12:26"
🟢 : pretrained
script
369
2024-04-04T02-29-53.418614
English
1.1.0
{ "enem_challenge": 0.19384184744576627, "bluex": 0.22809457579972184, "oab_exams": 0.2469248291571754, "assin2_rte": 0.3849409111254498, "assin2_sts": 0.24127351840284703, "faquad_nli": 0.4362532523850824, "hatebr_offensive": 0.3761140819964349, "portuguese_hate_speech": 0.2315040773180308, "tweetsentbr": 0.20969654257926673 }
0.283183
-0.103534
EleutherAI/polyglot-ko-12.8b
main
false
float16
13.061
GPTNeoXForCausalLM
Original
FAILED
"2024-02-05T23:15:01"
🟢 : pretrained
script
465
2024-04-15T22-28-42.463373
Other
null
null
null
null
EleutherAI/pythia-12b-deduped
main
false
float16
12
GPTNeoXForCausalLM
Original
FAILED
"2024-02-05T23:11:53"
🟢 : pretrained
script
367
2024-04-04T00-13-25.206534
English
null
null
null
null
EleutherAI/pythia-12b
main
false
float16
12
GPTNeoXForCausalLM
Original
PENDING
"2024-02-11T13:39:39"
🟢 : pretrained
script
554
2024-04-24T22-48-32.048335
English
null
null
null
null
EleutherAI/pythia-14m
main
false
float16
0.039
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-05T23:11:12"
🟢 : pretrained
script
363
2024-04-03T19-47-56.339960
English
1.1.0
{ "enem_challenge": 0.19104268719384185, "bluex": 0.17941585535465926, "oab_exams": 0.21822323462414578, "assin2_rte": 0.2210516588115701, "assin2_sts": 0.0006847937896062521, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.17328604471858133, "portuguese_hate_speech": 0.2692126355492692, "tweetsentbr": 0.008390382047306943 }
0.188996
-0.247927
EleutherAI/pythia-160m-deduped
main
false
float16
0.213
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-05T23:11:23"
🟢 : pretrained
script
364
2024-04-03T21-13-20.844629
English
1.1.0
{ "enem_challenge": 0.20713785864240727, "bluex": 0.17941585535465926, "oab_exams": 0.24555808656036446, "assin2_rte": 0.3389102997234224, "assin2_sts": 0.04193510248223561, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.45262567913287954, "portuguese_hate_speech": 0.38733242633164233, "tweetsentbr": 0.274837496010884 }
0.285268
-0.079546
EleutherAI/pythia-160m
main
false
float16
0.213
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-11T13:39:10"
🟢 : pretrained
script
553
2024-04-24T22-01-22.569827
English
1.1.0
{ "enem_challenge": 0.20503848845346395, "bluex": 0.1905424200278164, "oab_exams": 0.22779043280182232, "assin2_rte": 0.5474759773218106, "assin2_sts": 0.05731560767696195, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.412292817679558, "tweetsentbr": 0.23733809038278153 }
0.294531
-0.060204
EleutherAI/pythia-1b-deduped
main
false
float16
1.079
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-05T23:11:36"
🟢 : pretrained
script
366
2024-04-03T22-36-01.685148
English
1.1.0
{ "enem_challenge": 0.1994401679496151, "bluex": 0.20584144645340752, "oab_exams": 0.2378132118451025, "assin2_rte": 0.34088811077510744, "assin2_sts": 0.058535886752163556, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3503150417064208, "portuguese_hate_speech": 0.23407429779522804, "tweetsentbr": 0.2072905953605302 }
0.25265
-0.142279
EleutherAI/pythia-1b
main
false
float16
1.079
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-11T13:39:22"
🟢 : pretrained
script
253
2024-02-22T09-20-51.293467
English
1.1.0
{ "enem_challenge": 0.18964310706787962, "bluex": 0.19193324061196107, "oab_exams": 0.24145785876993167, "assin2_rte": 0.4650037024436128, "assin2_sts": 0, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.22986425339366515, "tweetsentbr": 0.1800598326070416 }
0.252328
-0.13319
EleutherAI/pythia-2.8b-deduped
main
false
float16
2.909
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-05T23:11:43"
🟢 : pretrained
script
366
2024-04-03T22-36-58.032425
English
1.1.0
{ "enem_challenge": 0.2085374387683695, "bluex": 0.22531293463143254, "oab_exams": 0.2505694760820046, "assin2_rte": 0.3333333333333333, "assin2_sts": 0.25073155863228036, "faquad_nli": 0.17939674437408695, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.22986425339366515, "tweetsentbr": 0.215234981952897 }
0.247368
-0.173173
EleutherAI/pythia-2.8b
main
false
float16
2.909
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-11T13:39:28"
🟢 : pretrained
script
554
2024-04-24T22-06-55.870441
English
1.1.0
{ "enem_challenge": 0.21133659902029392, "bluex": 0.2239221140472879, "oab_exams": 0.24100227790432802, "assin2_rte": 0.3333333333333333, "assin2_sts": 0.20712471558047654, "faquad_nli": 0.17939674437408695, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.22986425339366515, "tweetsentbr": 0.20067205514503328 }
0.239998
-0.181654
EleutherAI/pythia-410m-deduped
main
false
float16
0.506
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-05T23:11:30"
🟢 : pretrained
script
365
2024-04-03T21-43-53.606908
English
1.1.0
{ "enem_challenge": 0.19174247725682295, "bluex": 0.20166898470097358, "oab_exams": 0.2337129840546697, "assin2_rte": 0.3333333333333333, "assin2_sts": 0.040138096618478045, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.22986425339366515, "tweetsentbr": 0.2072905953605302 }
0.245638
-0.152948
EleutherAI/pythia-410m
main
false
float16
0.506
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-11T13:39:16"
🟢 : pretrained
script
556
2024-04-24T22-06-03.860992
English
1.1.0
{ "enem_challenge": 0.1980405878236529, "bluex": 0.2364394993045897, "oab_exams": 0.24555808656036446, "assin2_rte": 0.3333333333333333, "assin2_sts": 0.029778872408316164, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.34039116046362083, "portuguese_hate_speech": 0.2524344906158182, "tweetsentbr": 0.29805007023492736 }
0.263742
-0.125096
EleutherAI/pythia-6.9b-deduped
main
false
float16
6.9
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-05T23:11:48"
🟢 : pretrained
script
367
2024-04-03T23-20-39.755265
English
1.1.0
{ "enem_challenge": 0.20783764870538837, "bluex": 0.2211404728789986, "oab_exams": 0.2656036446469248, "assin2_rte": 0.3333333333333333, "assin2_sts": 0.07157697545169536, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3349186726374015, "portuguese_hate_speech": 0.4252047249041825, "tweetsentbr": 0.2072905953605302 }
0.278507
-0.097692
EleutherAI/pythia-6.9b
main
false
float16
6.9
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-11T13:39:33"
🟢 : pretrained
script
253
2024-02-22T09-49-44.237199
English
1.1.0
{ "enem_challenge": 0.19454163750874737, "bluex": 0.20305980528511822, "oab_exams": 0.23006833712984054, "assin2_rte": 0.5918513695309833, "assin2_sts": 0.0025941556675326705, "faquad_nli": 0.3121791039110175, "hatebr_offensive": 0.32770726983578174, "portuguese_hate_speech": 0.42473737096921527, "tweetsentbr": 0.32668391292199067 }
0.29038
-0.065609
EleutherAI/pythia-70m-deduped
main
false
float16
0.096
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-05T23:11:17"
🟢 : pretrained
script
364
2024-04-03T21-10-06.848681
English
1.1.0
{ "enem_challenge": 0.172148355493352, "bluex": 0.1835883171070932, "oab_exams": 0.2041002277904328, "assin2_rte": 0.23382263963539596, "assin2_sts": 0.02026922309956098, "faquad_nli": 0.2759039805530234, "hatebr_offensive": 0.28076386043861, "portuguese_hate_speech": 0.24182579976211263, "tweetsentbr": 0.13108766233766234 }
0.193723
-0.242146
EleutherAI/pythia-70m
main
false
float16
0.096
GPTNeoXForCausalLM
Original
FINISHED
"2024-02-11T13:38:58"
🟢 : pretrained
script
552
2024-04-24T21-25-37.361813
English
1.1.0
{ "enem_challenge": 0.0622813156053184, "bluex": 0.2086230876216968, "oab_exams": 0.030068337129840545, "assin2_rte": 0.4502521949740358, "assin2_sts": 0.006173005990956128, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.419144092439547, "portuguese_hate_speech": 0.3087375175771073, "tweetsentbr": 0.12087469376644588 }
0.227312
-0.156292
FuseAI/FuseChat-7B-VaRM
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-03-04T15:36:23"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
311
2024-03-08T15-26-39.517660
English
1.1.0
{ "enem_challenge": 0.6480055983205039, "bluex": 0.5493741307371349, "oab_exams": 0.4182232346241458, "assin2_rte": 0.9272868051476477, "assin2_sts": 0.7836651113903375, "faquad_nli": 0.787259111855886, "hatebr_offensive": 0.8223021238433512, "portuguese_hate_speech": 0.6973371097488426, "tweetsentbr": 0.44067858320963216 }
0.674904
0.520153
FuseAI/OpenChat-3.5-7B-Solar
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-03-04T14:31:17"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
310
2024-03-08T13-22-15.392524
English
1.1.0
{ "enem_challenge": 0.6452064380685795, "bluex": 0.5465924895688457, "oab_exams": 0.4218678815489749, "assin2_rte": 0.927694039333938, "assin2_sts": 0.7822958272680564, "faquad_nli": 0.7777514761773254, "hatebr_offensive": 0.8223021238433512, "portuguese_hate_speech": 0.7022123148304151, "tweetsentbr": 0.5877550319137789 }
0.690409
0.54326
HeyLucasLeao/gpt-neo-small-portuguese
main
false
float16
0
GPTNeoForCausalLM
Original
FINISHED
"2024-02-05T23:14:26"
🆎 : language adapted models (FP, FT, ...)
script
306
2024-03-08T04-18-26.971751
Portuguese
1.1.0
{ "enem_challenge": 0.16445066480055984, "bluex": 0.03894297635605007, "oab_exams": 0.023234624145785875, "assin2_rte": 0.3528931097729911, "assin2_sts": 0.040770337667175804, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.2854909694680288, "tweetsentbr": 0.1506866897702477 }
0.203273
-0.204329
HuggingFaceH4/zephyr-7b-alpha
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-04-14T18:12:40"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
463
2024-04-15T10-11-49.222023
English
1.1.0
{ "enem_challenge": 0.562631210636809, "bluex": 0.5104311543810849, "oab_exams": 0.40273348519362184, "assin2_rte": 0.9011395676691729, "assin2_sts": 0.7233470427220756, "faquad_nli": 0.6962929525710168, "hatebr_offensive": 0.8526041634724087, "portuguese_hate_speech": 0.652858285536766, "tweetsentbr": 0.6548809121737628 }
0.66188
0.50199
HuggingFaceH4/zephyr-7b-beta
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-02-21T18:04:59"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
253
2024-02-21T23-57-52.146406
English
1.1.0
{ "enem_challenge": 0.5787263820853744, "bluex": 0.47983310152990266, "oab_exams": 0.3931662870159453, "assin2_rte": 0.8836486323653452, "assin2_sts": 0.6678266192299295, "faquad_nli": 0.7017672651113582, "hatebr_offensive": 0.8176778106453834, "portuguese_hate_speech": 0.6658626171810755, "tweetsentbr": 0.46064331884597925 }
0.627684
0.45238
HuggingFaceH4/zephyr-7b-gemma-v0.1
main
false
bfloat16
8.538
GemmaForCausalLM
Original
FINISHED
"2024-03-02T00:49:26"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
386
2024-04-04T23-04-13.841492
English
1.1.0
{ "enem_challenge": 0.5815255423372988, "bluex": 0.47426981919332406, "oab_exams": 0.40728929384965834, "assin2_rte": 0.8604729280813948, "assin2_sts": 0.7259016112950178, "faquad_nli": 0.7486076732673268, "hatebr_offensive": 0.8755151098901099, "portuguese_hate_speech": 0.6244738628649016, "tweetsentbr": 0.6159470691844793 }
0.657111
0.494637
HuggingFaceTB/cosmo-1b
main
false
float16
1.742
LlamaForCausalLM
Original
FINISHED
"2024-02-24T19:59:46"
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
258
2024-02-26T18-03-06.542808
English
1.1.0
{ "enem_challenge": 0.20783764870538837, "bluex": 0.20723226703755215, "oab_exams": 0.23234624145785876, "assin2_rte": 0.5526600270022243, "assin2_sts": 0.07330211383402985, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3411338879766024, "portuguese_hate_speech": 0.24442150902068246, "tweetsentbr": 0.2534405090147715 }
0.283559
-0.085225
Intel/neural-chat-7b-v3-1
main
false
float16
7.242
MistralForCausalLM
Original
FINISHED
"2024-02-21T18:03:11"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
253
2024-02-25T06-21-33.008420
English
1.1.0
{ "enem_challenge": 0.6263121063680895, "bluex": 0.47983310152990266, "oab_exams": 0.39726651480637815, "assin2_rte": 0.9268770228292367, "assin2_sts": 0.7658477385894799, "faquad_nli": 0.7840135895978708, "hatebr_offensive": 0.8905574366528357, "portuguese_hate_speech": 0.6685671281654837, "tweetsentbr": 0.5145983702206705 }
0.672653
0.522586
Intel/neural-chat-7b-v3-3
main
false
float16
7
MistralForCausalLM
Original
FINISHED
"2024-02-21T18:03:21"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
253
2024-02-21T22-54-50.520595
English
1.1.0
{ "enem_challenge": 0.6263121063680895, "bluex": 0.5034770514603616, "oab_exams": 0.39635535307517084, "assin2_rte": 0.9140545431322211, "assin2_sts": 0.7587721518241414, "faquad_nli": 0.7147222222222223, "hatebr_offensive": 0.8653967318817455, "portuguese_hate_speech": 0.6322323153577603, "tweetsentbr": 0.4689260995001763 }
0.653361
0.487161
J-LAB/BRisa-7B-Instruct-v0.2
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-04-18T23:08:40"
💬 : chat models (RLHF, DPO, IFT, ...)
manual
502
2024-04-19T10-58-40.659391
Portuguese
1.1.0
{ "enem_challenge": 0.6508047585724283, "bluex": 0.5368567454798331, "oab_exams": 0.4337129840546697, "assin2_rte": 0.914959114959115, "assin2_sts": 0.7360504820365534, "faquad_nli": 0.6830556684274685, "hatebr_offensive": 0.7427748086927932, "portuguese_hate_speech": 0.6511659683002369, "tweetsentbr": 0.6077237421626446 }
0.6619
0.491829
J-LAB/BRisa-7B-Instruct-v0.2
main
false
float16
7.242
MistralForCausalLM
Original
FINISHED
"2024-04-18T13:21:31"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
505
2024-04-19T16-12-48.574869
Portuguese
1.1.0
{ "enem_challenge": 0.6522043386983905, "bluex": 0.5438108484005564, "oab_exams": 0.4432801822323462, "assin2_rte": 0.9133356979253711, "assin2_sts": 0.7369091629413509, "faquad_nli": 0.6808719560094265, "hatebr_offensive": 0.7400764917752776, "portuguese_hate_speech": 0.657700175064164, "tweetsentbr": 0.6091722968255823 }
0.664151
0.49476
JJhooww/Mistral-7B-v0.2-Base_ptbr
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-04-18T23:08:40"
🆎 : language adapted models (FP, FT, ...)
leaderboard
491
2024-04-20T03-39-36.902005
Portuguese
1.1.0
{ "enem_challenge": 0.629111266620014, "bluex": 0.4631432545201669, "oab_exams": 0.3835990888382688, "assin2_rte": 0.9019128003131698, "assin2_sts": 0.16704630760888603, "faquad_nli": 0.566892243623113, "hatebr_offensive": 0.7250569715248458, "portuguese_hate_speech": 0.6679089916559607, "tweetsentbr": 0.5726952459126636 }
0.564152
0.374817
JJhooww/Mistral-7B-v0.2-Base_ptbr
main
false
float16
7.242
MistralForCausalLM
Original
FINISHED
"2024-04-13T03:15:37"
🆎 : language adapted models (FP, FT, ...)
leaderboard
442
2024-04-13T17-20-04.237102
Portuguese
1.1.0
{ "enem_challenge": 0.6494051784464661, "bluex": 0.5396383866481224, "oab_exams": 0.4542141230068337, "assin2_rte": 0.9011456831413249, "assin2_sts": 0.7251095355270992, "faquad_nli": 0.6904462094795298, "hatebr_offensive": 0.7961717229751414, "portuguese_hate_speech": 0.5852091456930166, "tweetsentbr": 0.6232338461110419 }
0.66273
0.492659
JJhooww/MistralReloadBR_v2_ptbr
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-03-08T02:22:06"
🆎 : language adapted models (FP, FT, ...)
leaderboard
320
2024-03-09T04-58-37.486266
Portuguese
1.1.0
{ "enem_challenge": 0.6081175647305809, "bluex": 0.47983310152990266, "oab_exams": 0.40728929384965834, "assin2_rte": 0.9101172201226876, "assin2_sts": 0.745635698648774, "faquad_nli": 0.4760412001791312, "hatebr_offensive": 0.7982678280152018, "portuguese_hate_speech": 0.6632432143375528, "tweetsentbr": 0.6700347269707226 }
0.639842
0.456727
JJhooww/Mistral_Relora_Step2k
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-04-18T23:08:40"
🆎 : language adapted models (FP, FT, ...)
leaderboard
508
2024-04-20T03-09-26.234801
Portuguese
1.1.0
{ "enem_challenge": 0.6179146256123164, "bluex": 0.5159944367176634, "oab_exams": 0.39635535307517084, "assin2_rte": 0.9121625173669783, "assin2_sts": 0.7065946896645577, "faquad_nli": 0.6466313961043266, "hatebr_offensive": 0.8143254279726638, "portuguese_hate_speech": 0.652940879778074, "tweetsentbr": 0.5167597069914197 }
0.642187
0.46864
JJhooww/Mistral_Relora_Step2k
main
false
float16
7.242
MistralForCausalLM
Original
FINISHED
"2024-03-08T02:22:23"
🆎 : language adapted models (FP, FT, ...)
leaderboard
320
2024-03-09T08-42-21.029909
Portuguese
1.1.0
{ "enem_challenge": 0.615815255423373, "bluex": 0.5257301808066759, "oab_exams": 0.3981776765375854, "assin2_rte": 0.9113496854193482, "assin2_sts": 0.7074610038971542, "faquad_nli": 0.6526577185427341, "hatebr_offensive": 0.8133973664850924, "portuguese_hate_speech": 0.6536416538696902, "tweetsentbr": 0.5193585604823832 }
0.644177
0.471533
JosephusCheung/LL7M
main
false
float16
0.007
LlamaForCausalLM
Original
FINISHED
"2024-04-21T18:48:05"
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
539
2024-04-23T00-09-23.964862
English
1.1.0
{ "enem_challenge": 0.22813156053184044, "bluex": 0.21279554937413073, "oab_exams": 0.24464692482915718, "assin2_rte": 0.656772420167965, "assin2_sts": 0.21905517553818948, "faquad_nli": 0.5111651047090131, "hatebr_offensive": 0.7436810107109835, "portuguese_hate_speech": 0.26722448543297267, "tweetsentbr": 0.33553712665916735 }
0.37989
0.082043
M4-ai/tau-0.5B-instruct-DPOP
main
false
float16
0.464
Qwen2ForCausalLM
Original
FINISHED
"2024-04-21T18:49:55"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
540
2024-04-23T02-24-34.579672
English
1.1.0
{ "enem_challenge": 0.23163051084674596, "bluex": 0.21279554937413073, "oab_exams": 0.25239179954441915, "assin2_rte": 0.6250683495142939, "assin2_sts": 0.14960279813437427, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.43626343456325545, "tweetsentbr": 0.2236745795098198 }
0.322713
-0.019326
M4-ai/tau-0.5B
main
false
float16
0.464
Qwen2ForCausalLM
Original
FINISHED
"2024-04-21T18:49:31"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
537
2024-04-23T00-49-38.450870
English
1.1.0
{ "enem_challenge": 0.19314205738278517, "bluex": 0.18915159944367177, "oab_exams": 0.23462414578587698, "assin2_rte": 0.39285662181494563, "assin2_sts": 0.07216057977107923, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.222010481181515, "portuguese_hate_speech": 0.412292817679558, "tweetsentbr": 0.21833154883841985 }
0.263803
-0.121635
M4-ai/tau-1.8B
main
false
bfloat16
1.837
Qwen2ForCausalLM
Original
FINISHED
"2024-04-21T18:50:22"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
538
2024-04-23T02-51-19.597157
English
1.1.0
{ "enem_challenge": 0.2610216934919524, "bluex": 0.23504867872044508, "oab_exams": 0.25466970387243737, "assin2_rte": 0.6240877656394997, "assin2_sts": 0.19269473597203718, "faquad_nli": 0.3987209371824756, "hatebr_offensive": 0.41976405672054573, "portuguese_hate_speech": 0.32018744664167026, "tweetsentbr": 0.15747184099568803 }
0.318185
-0.032001
MagusCorp/legislinho
main
false
float16
3.862
MistralForCausalLM
Original
FINISHED
"2024-04-09T02:48:03"
🆎 : language adapted models (FP, FT, ...)
leaderboard
434
2024-04-13T08-30-14.215121
Portuguese
1.1.0
{ "enem_challenge": 0.6305108467459762, "bluex": 0.5104311543810849, "oab_exams": 0.43234624145785877, "assin2_rte": 0.8870184075342467, "assin2_sts": 0.6776356777228696, "faquad_nli": 0.6379609737375439, "hatebr_offensive": 0.7264113460475401, "portuguese_hate_speech": 0.6563004846526657, "tweetsentbr": 0.5651626679179093 }
0.635975
0.453531
MaziyarPanahi/Mistral-7B-Instruct-Aya-101
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-04-17T06:11:16"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
474
2024-04-17T09-07-30.140283
English
1.1.0
{ "enem_challenge": 0.6060181945416375, "bluex": 0.5438108484005564, "oab_exams": 0.39362186788154896, "assin2_rte": 0.9072398971802695, "assin2_sts": 0.7641692139433879, "faquad_nli": 0.6218181818181818, "hatebr_offensive": 0.8004209608305171, "portuguese_hate_speech": 0.6762940852684385, "tweetsentbr": 0.5030635127570277 }
0.646273
0.470432
MulaBR/Mula-4x160-v0.1
main
false
float16
0.417
MixtralForCausalLM
Original
FINISHED
"2024-04-21T22:40:10"
🟢 : pretrained
manual
531
2024-04-22T00-05-24.255163
Portuguese
1.1.0
{ "enem_challenge": 0.21343596920923724, "bluex": 0.2517385257301808, "oab_exams": 0.2505694760820046, "assin2_rte": 0.335683441456502, "assin2_sts": 0.11349165436666529, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.41502718891863716, "portuguese_hate_speech": 0.22986425339366515, "tweetsentbr": 0.11244668476153548 }
0.262435
-0.129114
NOVA-vision-language/GlorIA-1.3B
main
false
float16
1.416
GPTNeoForCausalLM
Original
FINISHED
"2024-03-07T19:45:38"
🟢 : pretrained
leaderboard
301
2024-03-07T22-34-35.217921
Portuguese
1.1.0
{ "enem_challenge": 0.018894331700489854, "bluex": 0.031988873435326845, "oab_exams": 0.05193621867881549, "assin2_rte": 0, "assin2_sts": 0.023212602251989234, "faquad_nli": 0.0026041666666666665, "hatebr_offensive": 0.0028436222959357994, "portuguese_hate_speech": 0.23522853957636566, "tweetsentbr": 0.0018832391713747645 }
0.040955
-0.499694
Nexusflow/Starling-LM-7B-beta
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-03-29T12:49:58"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
331
2024-04-01T21-15-54.379246
English
1.1.0
{ "enem_challenge": 0.6466060181945417, "bluex": 0.5382475660639777, "oab_exams": 0.4542141230068337, "assin2_rte": 0.9256528007689433, "assin2_sts": 0.8246749931266709, "faquad_nli": 0.7748688218404758, "hatebr_offensive": 0.8311091883257347, "portuguese_hate_speech": 0.7137054053375511, "tweetsentbr": 0.5036962690569555 }
0.690308
0.541226
NousResearch/Hermes-2-Pro-Llama-3-8B
main
false
float16
8.031
LlamaForCausalLM
Original
FINISHED
"2024-05-06T23:04:05"
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
593
2024-05-07T04-01-33.854422
English
1.1.0
{ "enem_challenge": 0.6787963610916725, "bluex": 0.5702364394993046, "oab_exams": 0.44738041002277906, "assin2_rte": 0.9223739628332301, "assin2_sts": 0.7575480918675715, "faquad_nli": 0.7486659964426572, "hatebr_offensive": 0.821316847945847, "portuguese_hate_speech": 0.6324128242225997, "tweetsentbr": 0.6706448057731071 }
0.694375
0.543822
NousResearch/Nous-Capybara-34B
main
false
bfloat16
34
LlamaForCausalLM
Original
PENDING
"2024-04-26T07:21:50"
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
-1
null
English
null
null
null
null
NousResearch/Nous-Hermes-13b
main
false
bfloat16
13
LlamaForCausalLM
Original
FINISHED
"2024-02-27T00:38:36"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
271
2024-02-28T05-28-55.109832
English
1.1.0
{ "enem_challenge": 0.46186144156752973, "bluex": 0.36300417246175243, "oab_exams": 0.34350797266514804, "assin2_rte": 0.6630418610067084, "assin2_sts": 0.5166061482295778, "faquad_nli": 0.6245167597351622, "hatebr_offensive": 0.7448674633758359, "portuguese_hate_speech": 0.7040514693096853, "tweetsentbr": 0.591700154492257 }
0.557017
0.344072
NousResearch/Nous-Hermes-2-Mistral-7B-DPO
main
false
bfloat16
7.242
MistralForCausalLM
Original
FINISHED
"2024-02-27T00:37:25"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
267
2024-02-27T02-51-13.508742
English
1.1.0
{ "enem_challenge": 0.6326102169349195, "bluex": 0.541029207232267, "oab_exams": 0.43735763097949887, "assin2_rte": 0.601464720105945, "assin2_sts": 0.6915650379510005, "faquad_nli": 0.7138364779874213, "hatebr_offensive": 0.7767581619154933, "portuguese_hate_speech": 0.7090851811137625, "tweetsentbr": 0.4521585213804042 }
0.617318
0.416301
NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
main
false
bfloat16
46.703
MixtralForCausalLM
Original
FINISHED
"2024-02-27T00:38:29"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
296
2024-03-07T12-23-27.230508
English
1.1.0
{ "enem_challenge": 0.6641007697690693, "bluex": 0.5535465924895688, "oab_exams": 0.47289293849658315, "assin2_rte": 0.9023645725471283, "assin2_sts": 0.734857329244095, "faquad_nli": 0.7498307874026198, "hatebr_offensive": 0.7666031472700376, "portuguese_hate_speech": 0.5860877435617644, "tweetsentbr": 0.6190805774400715 }
0.672152
0.505874
NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
main
false
float16
46.703
MixtralForCausalLM
Original
FINISHED
"2024-02-21T13:34:22"
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
295
2024-03-06T22-07-16.186340
English
1.1.0
{ "enem_challenge": 0.655703289013296, "bluex": 0.5535465924895688, "oab_exams": 0.4710706150341686, "assin2_rte": 0.9011405575094769, "assin2_sts": 0.7346929104749711, "faquad_nli": 0.7626485982066783, "hatebr_offensive": 0.7640680874353314, "portuguese_hate_speech": 0.5811439239646979, "tweetsentbr": 0.6217084995395291 }
0.671747
0.505583
NousResearch/Nous-Hermes-2-SOLAR-10.7B
main
false
bfloat16
10.732
LlamaForCausalLM
Original
FINISHED
"2024-02-27T00:38:11"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
269
2024-02-27T18-41-43.250837
English
1.1.0
{ "enem_challenge": 0.7074877536738978, "bluex": 0.5605006954102921, "oab_exams": 0.47699316628701594, "assin2_rte": 0.9177576504248857, "assin2_sts": 0.797009717894243, "faquad_nli": 0.8008578431372549, "hatebr_offensive": 0.8507132845086443, "portuguese_hate_speech": 0.6722715111363347, "tweetsentbr": 0.6633893741720925 }
0.716331
0.578651
NousResearch/Nous-Hermes-2-Yi-34B
main
false
bfloat16
34.389
LlamaForCausalLM
Original
FINISHED
"2024-02-27T00:37:39"
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
268
2024-02-27T03-50-28.297314
English
1.1.0
{ "enem_challenge": 0.7312806158152554, "bluex": 0.6578581363004172, "oab_exams": 0.5599088838268793, "assin2_rte": 0.9215044447012628, "assin2_sts": 0.7985401560561216, "faquad_nli": 0.7605236777394121, "hatebr_offensive": 0.7703803469511286, "portuguese_hate_speech": 0.6607502875554572, "tweetsentbr": 0.6569392825486907 }
0.724187
0.579586
NucleusAI/nucleus-22B-token-500B
main
false
float16
21.828
LlamaForCausalLM
Original
FINISHED
"2024-02-05T23:11:04"
🟢 : pretrained
script
363
2024-04-03T18-19-57.556537
English
1.1.0
{ "enem_challenge": 0.20783764870538837, "bluex": 0.23226703755215578, "oab_exams": 0.269248291571754, "assin2_rte": 0.3333333333333333, "assin2_sts": 0.18130129349722302, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.412292817679558, "tweetsentbr": 0.2072905953605302 }
0.290729
-0.086471
OpenLLM-France/Claire-7B-0.1
main
false
bfloat16
7
FalconForCausalLM
Original
FINISHED
"2024-02-05T23:16:00"
🆎 : language adapted models (FP, FT, ...)
script
466
2024-04-16T02-18-49.342048
Other
1.1.0
{ "enem_challenge": 0.20643806857942618, "bluex": 0.25869262865090403, "oab_exams": 0.23963553530751708, "assin2_rte": 0.4166035250383718, "assin2_sts": 0.10146422112236213, "faquad_nli": 0.44552575932333716, "hatebr_offensive": 0.3349186726374015, "portuguese_hate_speech": 0.2373549382576118, "tweetsentbr": 0.13973127183731013 }
0.264485
-0.124557
OpenLLM-France/Claire-Mistral-7B-0.1
main
false
bfloat16
7
MistralForCausalLM
Original
FINISHED
"2024-02-05T23:15:54"
🆎 : language adapted models (FP, FT, ...)
script
467
2024-04-16T01-27-34.625162
Other
1.1.0
{ "enem_challenge": 0.5885234429671099, "bluex": 0.46870653685674546, "oab_exams": 0.4118451025056948, "assin2_rte": 0.8558405766376935, "assin2_sts": 0.6134554484624953, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.7566238658528611, "portuguese_hate_speech": 0.6637123879933216, "tweetsentbr": 0.5749619615583512 }
0.597036
0.394032
OrionStarAI/Orion-14B-Base
main
false
bfloat16
14
OrionForCausalLM
Original
FINISHED
"2024-02-05T23:08:40"
🟢 : pretrained
script
342
2024-04-03T01-15-16.610234
null
1.1.0
{ "enem_challenge": 0.6648005598320503, "bluex": 0.5869262865090403, "oab_exams": 0.47927107061503416, "assin2_rte": 0.9030903950791537, "assin2_sts": 0.7713114437383479, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.5878495043212718, "portuguese_hate_speech": 0.6028618421734901, "tweetsentbr": 0.6347054618172728 }
0.630052
0.418999
PORTULAN/gervasio-7b-portuguese-ptbr-decoder
main
false
bfloat16
7
LlamaForCausalLM
Original
FINISHED
"2024-03-07T19:46:58"
🆎 : language adapted models (FP, FT, ...)
leaderboard
302
2024-03-07T22-39-33.153068
Portuguese
1.1.0
{ "enem_challenge": 0.21343596920923724, "bluex": 0.21001390820584145, "oab_exams": 0.26287015945330294, "assin2_rte": 0.8315268026535532, "assin2_sts": 0.695518625458929, "faquad_nli": 0.18589914951337116, "hatebr_offensive": 0.5380422946087232, "portuguese_hate_speech": 0.47241701729780267, "tweetsentbr": 0.14208441880992456 }
0.394645
0.073719
PORTULAN/gervasio-7b-portuguese-ptpt-decoder
main
false
bfloat16
7
LlamaForCausalLM
Original
FINISHED
"2024-03-07T19:46:22"
🆎 : language adapted models (FP, FT, ...)
leaderboard
303
2024-03-08T02-58-56.846301
Portuguese
1.1.0
{ "enem_challenge": 0.17004898530440868, "bluex": 0.18915159944367177, "oab_exams": 0.2610478359908884, "assin2_rte": 0.85320443811568, "assin2_sts": 0.7909512765742471, "faquad_nli": 0.6221487631987646, "hatebr_offensive": 0.617427881406052, "portuguese_hate_speech": 0.5680690277464471, "tweetsentbr": 0.14381982292497947 }
0.46843
0.207284
PrunaAI/Jamba-v0.1-bnb-4bit
main
false
4bit
28.149
JambaForCausalLM
Original
FINISHED
"2024-04-25T01:38:55"
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
585
2024-05-01T21-10-51.867015
English
1.1.0
{ "enem_challenge": 0.708187543736879, "bluex": 0.5855354659248957, "oab_exams": 0.5439635535307517, "assin2_rte": 0.8443951572017836, "assin2_sts": 0.7346843581320739, "faquad_nli": 0.6238473593282052, "hatebr_offensive": 0.7456172415930774, "portuguese_hate_speech": 0.5675191957134176, "tweetsentbr": 0.6582912301408866 }
0.668005
0.486339
PrunaAI/dbrx-base-bnb-4bit
main
false
4bit
68.461
DbrxForCausalLM
Original
FINISHED
"2024-04-25T01:38:00"
🟢 : pretrained
leaderboard
586
2024-05-02T01-12-55.471013
English
1.1.0
{ "enem_challenge": 0.7235829251224632, "bluex": 0.6230876216968011, "oab_exams": 0.5503416856492027, "assin2_rte": 0.9280791802073519, "assin2_sts": 0.7917361086673437, "faquad_nli": 0.45449901481427424, "hatebr_offensive": 0.6659313413109303, "portuguese_hate_speech": 0.7084225011935856, "tweetsentbr": 0.7174767478492502 }
0.684795
0.50728
PrunaAI/dbrx-instruct-bnb-4bit
main
false
4bit
68.461
DbrxForCausalLM
Original
RERUN
"2024-04-25T01:37:30"
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
587
2024-05-02T08-22-51.334683
English
null
null
null
null
End of preview.

No dataset card yet

Downloads last month: 0