The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.

Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 4 missing columns ({'result_metrics', 'eval_version', 'result_metrics_npm', 'result_metrics_average'}). This happened while the json dataset builder was generating data using hf://datasets/eduagarcia-temp/llm_pt_leaderboard_requests/152334H/miqu-1-70b-sf_eval_request_False_float16_Original.json (at revision 9c139956213be10af818308f80680f0fa326268a). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
model: string
base_model: string
revision: string
private: bool
precision: string
params: double
architectures: string
weight_type: string
main_language: string
status: string
submitted_time: timestamp[s]
model_type: string
source: string
job_id: int64
job_start_time: string
to
{'model': Value(dtype='string', id=None),
 'base_model': Value(dtype='string', id=None),
 'revision': Value(dtype='string', id=None),
 'private': Value(dtype='bool', id=None),
 'precision': Value(dtype='string', id=None),
 'params': Value(dtype='float64', id=None),
 'architectures': Value(dtype='string', id=None),
 'weight_type': Value(dtype='string', id=None),
 'main_language': Value(dtype='string', id=None),
 'status': Value(dtype='string', id=None),
 'submitted_time': Value(dtype='timestamp[s]', id=None),
 'model_type': Value(dtype='string', id=None),
 'source': Value(dtype='string', id=None),
 'job_id': Value(dtype='int64', id=None),
 'job_start_time': Value(dtype='string', id=None),
 'eval_version': Value(dtype='string', id=None),
 'result_metrics': {'enem_challenge': Value(dtype='float64', id=None),
                    'bluex': Value(dtype='float64', id=None),
                    'oab_exams': Value(dtype='float64', id=None),
                    'assin2_rte': Value(dtype='float64', id=None),
                    'assin2_sts': Value(dtype='float64', id=None),
                    'faquad_nli': Value(dtype='float64', id=None),
                    'hatebr_offensive': Value(dtype='float64', id=None),
                    'portuguese_hate_speech': Value(dtype='float64', id=None),
                    'tweetsentbr': Value(dtype='float64', id=None)},
 'result_metrics_average': Value(dtype='float64', id=None),
 'result_metrics_npm': Value(dtype='float64', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1324, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 938, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 4 missing columns ({'result_metrics', 'eval_version', 'result_metrics_npm', 'result_metrics_average'})
This happened while the json dataset builder was generating data using hf://datasets/eduagarcia-temp/llm_pt_leaderboard_requests/152334H/miqu-1-70b-sf_eval_request_False_float16_Original.json (at revision 9c139956213be10af818308f80680f0fa326268a)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
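The fix suggested above (making every request file expose the same columns) can be sketched in plain Python. The FULL_COLUMNS list is copied from the target schema in the error message; the normalize helper and the sample record are illustrative, not part of the leaderboard's actual tooling:

```python
# Full schema expected by the viewer (taken from the error message above).
FULL_COLUMNS = [
    "model", "base_model", "revision", "private", "precision", "params",
    "architectures", "weight_type", "main_language", "status",
    "submitted_time", "model_type", "source", "job_id", "job_start_time",
    "eval_version", "result_metrics", "result_metrics_average",
    "result_metrics_npm",
]

def normalize(row: dict) -> dict:
    """Pad a request record with nulls so all files share one schema."""
    return {col: row.get(col) for col in FULL_COLUMNS}

# A pending request (like the miqu-1-70b-sf file) lacks the result columns;
# after normalization they are present and null, so the cast can succeed.
pending = {"model": "152334H/miqu-1-70b-sf", "status": "RERUN"}
fixed = normalize(pending)
assert set(fixed) == set(FULL_COLUMNS)
assert fixed["result_metrics"] is None
```

The alternative the message points to, separate configurations declared in the dataset's README (see the linked manual-configuration docs), would instead keep pending and finished requests in different subsets.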
model (string) | base_model (string) | revision (string) | private (bool) | precision (string) | params (float64) | architectures (string) | weight_type (string) | main_language (string) | status (string) | submitted_time (unknown) | model_type (string) | source (string) | job_id (int64) | job_start_time (string) | eval_version (string) | result_metrics (dict) | result_metrics_average (float64) | result_metrics_npm (float64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
01-ai/Yi-1.5-34B-Chat | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | English | FINISHED | "2024-05-15T17:39:33" | 💬 : chat (RLHF, DPO, IFT, ...) | manual | 624 | 2024-05-16T15-15-24.863291 | 1.1.0 | {
"enem_challenge": 0.6906927921623512,
"bluex": 0.6648122392211405,
"oab_exams": 0.5248291571753987,
"assin2_rte": 0.9170744853491483,
"assin2_sts": 0.7661887019644651,
"faquad_nli": 0.7743940809133725,
"hatebr_offensive": 0.8210886883714428,
"portuguese_hate_speech": 0.7105164005570834,
"tweetsentbr": 0.7096563287199421
} | 0.731028 | 0.598601 |
|
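For finished rows, the result_metrics_average column matches the plain arithmetic mean of the nine task scores. A quick check against the 01-ai/Yi-1.5-34B-Chat row above (metric values copied from the table; the variable names are ours):

```python
# Task scores from the 01-ai/Yi-1.5-34B-Chat row of the preview table.
scores = {
    "enem_challenge": 0.6906927921623512,
    "bluex": 0.6648122392211405,
    "oab_exams": 0.5248291571753987,
    "assin2_rte": 0.9170744853491483,
    "assin2_sts": 0.7661887019644651,
    "faquad_nli": 0.7743940809133725,
    "hatebr_offensive": 0.8210886883714428,
    "portuguese_hate_speech": 0.7105164005570834,
    "tweetsentbr": 0.7096563287199421,
}

average = sum(scores.values()) / len(scores)
assert round(average, 6) == 0.731028  # matches result_metrics_average
```

result_metrics_npm, by contrast, appears to be normalized against per-task baselines and cannot be recomputed from the preview alone.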
01-ai/Yi-1.5-34B | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | English | FINISHED | "2024-05-15T17:40:15" | 🟢 : pretrained | manual | 627 | 2024-05-17T10-36-18.336343 | 1.1.0 | {
"enem_challenge": 0.71518544436669,
"bluex": 0.6662030598052852,
"oab_exams": 0.5489749430523918,
"assin2_rte": 0.8976911637262349,
"assin2_sts": 0.8148786802023537,
"faquad_nli": 0.585644163957417,
"hatebr_offensive": 0.8363023241432246,
"portuguese_hate_speech": 0.6962399848962205,
"tweetsentbr": 0.7228749707523902
} | 0.720444 | 0.570852 |
|
01-ai/Yi-1.5-6B-Chat | main | false | bfloat16 | 6.061 | LlamaForCausalLM | Original | English | FINISHED | "2024-05-16T14:35:19" | 💬 : chat (RLHF, DPO, IFT, ...) | manual | 629 | 2024-05-17T14-53-37.626126 | 1.1.0 | {
"enem_challenge": 0.5066480055983205,
"bluex": 0.4631432545201669,
"oab_exams": 0.3908883826879271,
"assin2_rte": 0.8478217777818736,
"assin2_sts": 0.6797897994537765,
"faquad_nli": 0.6548247706694055,
"hatebr_offensive": 0.7881170986195587,
"portuguese_hate_speech": 0.6486990242682011,
"tweetsentbr": 0.6586657928083186
} | 0.626511 | 0.445931 |
|
01-ai/Yi-1.5-6B | main | false | bfloat16 | 6.061 | LlamaForCausalLM | Original | English | FINISHED | "2024-05-16T14:34:00" | 🟢 : pretrained | manual | 628 | 2024-05-17T13-51-05.776238 | 1.1.0 | {
"enem_challenge": 0.5395381385584325,
"bluex": 0.4993045897079277,
"oab_exams": 0.4154897494305239,
"assin2_rte": 0.85320443811568,
"assin2_sts": 0.611946662194731,
"faquad_nli": 0.566892243623113,
"hatebr_offensive": 0.8390372896945542,
"portuguese_hate_speech": 0.6034251055649058,
"tweetsentbr": 0.6835417262403757
} | 0.623598 | 0.440799 |
|
01-ai/Yi-1.5-9B-Chat | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | English | FINISHED | "2024-05-15T17:39:54" | 💬 : chat (RLHF, DPO, IFT, ...) | manual | 625 | 2024-05-17T02-43-33.664147 | 1.1.0 | {
"enem_challenge": 0.6242127361791463,
"bluex": 0.5479833101529903,
"oab_exams": 0.4469248291571754,
"assin2_rte": 0.8807093197512774,
"assin2_sts": 0.7520700202607307,
"faquad_nli": 0.6913654763916721,
"hatebr_offensive": 0.8297877646706737,
"portuguese_hate_speech": 0.667940108892922,
"tweetsentbr": 0.6732618942406834
} | 0.679362 | 0.521304 |
|
01-ai/Yi-1.5-9B | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | English | FINISHED | "2024-05-15T17:40:08" | 🟢 : pretrained | manual | 626 | 2024-05-17T09-09-38.931019 | 1.1.0 | {
"enem_challenge": 0.6710986703988804,
"bluex": 0.5771905424200278,
"oab_exams": 0.4947608200455581,
"assin2_rte": 0.8815204475360152,
"assin2_sts": 0.7102876692830821,
"faquad_nli": 0.6362495548508539,
"hatebr_offensive": 0.7837384886240519,
"portuguese_hate_speech": 0.6780580075662044,
"tweetsentbr": 0.6934621257745327
} | 0.680707 | 0.518635 |
|
01-ai/Yi-34B-200K | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | English | FINISHED | "2024-02-05T23:18:19" | 🟢 : pretrained | script | 480 | 2024-04-17T23-49-34.862700 | 1.1.0 | {
"enem_challenge": 0.7172848145556333,
"bluex": 0.6481223922114048,
"oab_exams": 0.5517084282460136,
"assin2_rte": 0.9097218456052794,
"assin2_sts": 0.7390390977418284,
"faquad_nli": 0.49676238738738737,
"hatebr_offensive": 0.8117947554592124,
"portuguese_hate_speech": 0.7007076712295253,
"tweetsentbr": 0.6181054682174745
} | 0.688139 | 0.523233 |
|
01-ai/Yi-34B-Chat | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | English | FINISHED | "2024-02-27T00:40:17" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 272 | 2024-02-28T08-14-36.046639 | 1.1.0 | {
"enem_challenge": 0.7123862841147656,
"bluex": 0.6328233657858137,
"oab_exams": 0.5202733485193621,
"assin2_rte": 0.924014535978148,
"assin2_sts": 0.7419038025688336,
"faquad_nli": 0.7157210401891253,
"hatebr_offensive": 0.7198401711140126,
"portuguese_hate_speech": 0.7135410538975384,
"tweetsentbr": 0.6880686233555414
} | 0.707619 | 0.557789 |
|
01-ai/Yi-34B | main | false | bfloat16 | 34.389 | LlamaForCausalLM | Original | English | FINISHED | "2024-02-05T23:05:39" | 🟢 : pretrained | script | 440 | 2024-04-13T15-53-49.411062 | 1.1.0 | {
"enem_challenge": 0.7207837648705389,
"bluex": 0.6648122392211405,
"oab_exams": 0.5599088838268793,
"assin2_rte": 0.917882167398896,
"assin2_sts": 0.76681855136608,
"faquad_nli": 0.7798334442926054,
"hatebr_offensive": 0.8107834570679608,
"portuguese_hate_speech": 0.6224786612758311,
"tweetsentbr": 0.7320656959105744
} | 0.730596 | 0.591978 |
|
01-ai/Yi-6B-200K | main | false | bfloat16 | 6.061 | LlamaForCausalLM | Original | English | FINISHED | "2024-02-05T23:18:12" | 🟢 : pretrained | script | 469 | 2024-04-16T17-07-31.622853 | 1.1.0 | {
"enem_challenge": 0.5423372988103569,
"bluex": 0.4673157162726008,
"oab_exams": 0.4328018223234624,
"assin2_rte": 0.40523403335417163,
"assin2_sts": 0.4964641013268987,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.4892942520605069,
"portuguese_hate_speech": 0.6053769911504425,
"tweetsentbr": 0.6290014694641435
} | 0.500831 | 0.214476 |
|
01-ai/Yi-6B-Chat | main | false | bfloat16 | 6.061 | LlamaForCausalLM | Original | English | FINISHED | "2024-02-27T00:40:39" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 273 | 2024-02-28T14-35-07.615539 | 1.1.0 | {
"enem_challenge": 0.5570328901329601,
"bluex": 0.5006954102920723,
"oab_exams": 0.4118451025056948,
"assin2_rte": 0.7948490568935549,
"assin2_sts": 0.5684271643349206,
"faquad_nli": 0.637960088691796,
"hatebr_offensive": 0.775686136523575,
"portuguese_hate_speech": 0.5712377041472934,
"tweetsentbr": 0.5864804330790114
} | 0.600468 | 0.40261 |
|
01-ai/Yi-6B | main | false | bfloat16 | 6.061 | LlamaForCausalLM | Original | English | FINISHED | "2024-02-05T23:04:05" | 🟢 : pretrained | script | 228 | 2024-02-17T03-42-08.504508 | 1.1.0 | {
"enem_challenge": 0.5689293212036389,
"bluex": 0.5132127955493742,
"oab_exams": 0.4460136674259681,
"assin2_rte": 0.7903932929806128,
"assin2_sts": 0.5666878345297481,
"faquad_nli": 0.5985418799210473,
"hatebr_offensive": 0.7425595238095237,
"portuguese_hate_speech": 0.6184177704320946,
"tweetsentbr": 0.5081067075683067
} | 0.594763 | 0.391626 |
|
01-ai/Yi-9B-200k | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | English | FINISHED | "2024-04-13T05:22:25" | 🟢 : pretrained | leaderboard | 451 | 2024-04-14T12-49-52.148781 | 1.1.0 | {
"enem_challenge": 0.6564030790762772,
"bluex": 0.5354659248956884,
"oab_exams": 0.5056947608200456,
"assin2_rte": 0.8708321784112503,
"assin2_sts": 0.7508245525986388,
"faquad_nli": 0.7162112665738773,
"hatebr_offensive": 0.8238294119604646,
"portuguese_hate_speech": 0.6723821369343758,
"tweetsentbr": 0.7162549372015228
} | 0.694211 | 0.54216 |
|
01-ai/Yi-9B | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | English | FINISHED | "2024-04-13T05:20:56" | 🟢 : pretrained | leaderboard | 453 | 2024-04-14T11-08-02.891090 | 1.1.0 | {
"enem_challenge": 0.6759972008397481,
"bluex": 0.5493741307371349,
"oab_exams": 0.4783599088838269,
"assin2_rte": 0.8784695970900473,
"assin2_sts": 0.752860308487488,
"faquad_nli": 0.7478708154144531,
"hatebr_offensive": 0.8574531631821884,
"portuguese_hate_speech": 0.6448598532923182,
"tweetsentbr": 0.6530471966712571
} | 0.693144 | 0.542367 |
|
152334H/miqu-1-70b-sf | main | false | float16 | 68.977 | LlamaForCausalLM | Original | English | RERUN | "2024-04-26T08:25:57" | 🔶 : fine-tuned/fp on domain-specific datasets | leaderboard | 619 | 2024-05-16T12-21-53.362614 | null | null | null | null |
|
22h/cabrita-lora-v0-1 | huggyllama/llama-7b | main | false | float16 | 0 | ? | Adapter | Portuguese | RERUN | "2024-02-05T23:03:11" | 🔶 : fine-tuned | script | 336 | 2024-04-02T03-54-36.291839 | null | null | null | null |
22h/cabrita_7b_pt_850000 | main | false | float16 | 7 | LlamaForCausalLM | Original | Portuguese | FINISHED | "2024-02-11T13:34:40" | 🆎 : language adapted models (FP, FT, ...) | script | 305 | 2024-03-08T02-07-35.059732 | 1.1.0 | {
"enem_challenge": 0.22533240027991602,
"bluex": 0.23087621696801114,
"oab_exams": 0.2920273348519362,
"assin2_rte": 0.3333333333333333,
"assin2_sts": 0.1265472264440735,
"faquad_nli": 0.17721518987341772,
"hatebr_offensive": 0.5597546967409981,
"portuguese_hate_speech": 0.490163110698825,
"tweetsentbr": 0.4575265405956153
} | 0.32142 | -0.032254 |
|
22h/open-cabrita3b | main | false | float16 | 3 | LlamaForCausalLM | Original | Portuguese | FINISHED | "2024-02-11T13:34:36" | 🆎 : language adapted models (FP, FT, ...) | script | 285 | 2024-02-28T16-38-27.766897 | 1.1.0 | {
"enem_challenge": 0.17984604618614417,
"bluex": 0.2114047287899861,
"oab_exams": 0.22687927107061504,
"assin2_rte": 0.4301327637723658,
"assin2_sts": 0.08919111846797594,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.5046251022011318,
"portuguese_hate_speech": 0.4118866620594333,
"tweetsentbr": 0.47963247012405114
} | 0.330361 | -0.005342 |
|
AI-Sweden-Models/gpt-sw3-20b | main | false | float16 | 20.918 | GPT2LMHeadModel | Original | English | PENDING_NEW_EVAL | "2024-02-05T23:15:38" | 🟢 : pretrained | script | 102 | 2024-02-08T16-32-05.080295 | null | null | null | null |
|
AI-Sweden-Models/gpt-sw3-40b | main | false | float16 | 39.927 | GPT2LMHeadModel | Original | English | FINISHED | "2024-02-05T23:15:47" | 🟢 : pretrained | script | 253 | 2024-02-21T07-59-22.606213 | 1.1.0 | {
"enem_challenge": 0.2358292512246326,
"bluex": 0.2809457579972184,
"oab_exams": 0.2542141230068337,
"assin2_rte": 0.4096747911636189,
"assin2_sts": 0.17308746611294112,
"faquad_nli": 0.5125406216148655,
"hatebr_offensive": 0.3920230910522173,
"portuguese_hate_speech": 0.4365404510655907,
"tweetsentbr": 0.491745311259787
} | 0.354067 | 0.018354 |
|
AI-Sweden-Models/gpt-sw3-6.7b-v2 | main | false | float16 | 7.111 | GPT2LMHeadModel | Original | English | FINISHED | "2024-02-05T23:15:31" | 🟢 : pretrained | script | 462 | 2024-04-16T00-18-50.805343 | 1.1.0 | {
"enem_challenge": 0.22813156053184044,
"bluex": 0.23504867872044508,
"oab_exams": 0.23097949886104785,
"assin2_rte": 0.5833175952742944,
"assin2_sts": 0.14706689693418745,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.5569200631123247,
"portuguese_hate_speech": 0.5048069947120815,
"tweetsentbr": 0.45897627809523983
} | 0.3761 | 0.073856 |
|
AI-Sweden-Models/gpt-sw3-6.7b | main | false | float16 | 7.111 | GPT2LMHeadModel | Original | English | FINISHED | "2024-02-05T23:15:23" | 🟢 : pretrained | script | 466 | 2024-04-15T22-34-55.424388 | 1.1.0 | {
"enem_challenge": 0.21133659902029392,
"bluex": 0.2573018080667594,
"oab_exams": 0.2296127562642369,
"assin2_rte": 0.6192900448928588,
"assin2_sts": 0.08103924791097977,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.40737531518832293,
"portuguese_hate_speech": 0.4441161100880904,
"tweetsentbr": 0.433837189305867
} | 0.347063 | 0.024837 |
|
AdaptLLM/finance-LLM-13B | main | false | float16 | 13 | LlamaForCausalLM | Original | English | FINISHED | "2024-02-11T13:37:27" | 🔶 : fine-tuned | script | 555 | 2024-04-24T18-00-42.073230 | 1.1.0 | {
"enem_challenge": 0.4730580825752274,
"bluex": 0.3852573018080668,
"oab_exams": 0.36173120728929387,
"assin2_rte": 0.8704914563684142,
"assin2_sts": 0.6914158506759536,
"faquad_nli": 0.6137142857142857,
"hatebr_offensive": 0.8210157972117231,
"portuguese_hate_speech": 0.6648065091139095,
"tweetsentbr": 0.6129534464124105
} | 0.610494 | 0.4269 |
|
AdaptLLM/finance-LLM | main | false | float16 | 0 | LLaMAForCausalLM | Original | English | FINISHED | "2024-02-11T13:37:12" | 🔶 : fine-tuned | script | 545 | 2024-04-24T13-38-50.219195 | 1.1.0 | {
"enem_challenge": 0.37578726382085376,
"bluex": 0.2906815020862309,
"oab_exams": 0.3011389521640091,
"assin2_rte": 0.7173994459883221,
"assin2_sts": 0.3141019003448064,
"faquad_nli": 0.6856866537717602,
"hatebr_offensive": 0.6665618718263835,
"portuguese_hate_speech": 0.3323844809709906,
"tweetsentbr": 0.5501887299910238
} | 0.470437 | 0.214015 |
|
AdaptLLM/law-LLM-13B | main | false | float16 | 13 | LlamaForCausalLM | Original | English | FINISHED | "2024-02-11T13:37:17" | 🔶 : fine-tuned | script | 551 | 2024-04-24T17-20-32.289644 | 1.1.0 | {
"enem_challenge": 0.48915325402379284,
"bluex": 0.3796940194714882,
"oab_exams": 0.36082004555808656,
"assin2_rte": 0.7762008093366958,
"assin2_sts": 0.6862803522831282,
"faquad_nli": 0.5589431210148192,
"hatebr_offensive": 0.7648719048333295,
"portuguese_hate_speech": 0.6972417545621965,
"tweetsentbr": 0.5969146546466134
} | 0.590013 | 0.387281 |
|
AdaptLLM/law-LLM | main | false | float16 | 0 | LLaMAForCausalLM | Original | English | FINISHED | "2024-02-11T13:37:01" | 🔶 : fine-tuned | script | 550 | 2024-04-24T01-23-04.736612 | 1.1.0 | {
"enem_challenge": 0.3932820153953814,
"bluex": 0.3157162726008345,
"oab_exams": 0.3034168564920273,
"assin2_rte": 0.7690457097032879,
"assin2_sts": 0.2736321836385559,
"faquad_nli": 0.6837598520969155,
"hatebr_offensive": 0.6310564282443625,
"portuguese_hate_speech": 0.32991640141820316,
"tweetsentbr": 0.4897974076561671
} | 0.465514 | 0.208557 |
|
AdaptLLM/medicine-LLM-13B | main | false | float16 | 13 | LlamaForCausalLM | Original | English | FINISHED | "2024-02-11T13:37:22" | 🔶 : fine-tuned | script | 553 | 2024-04-24T17-45-23.659613 | 1.1.0 | {
"enem_challenge": 0.45976207137858643,
"bluex": 0.37552155771905427,
"oab_exams": 0.3553530751708428,
"assin2_rte": 0.802953910231819,
"assin2_sts": 0.6774179667769704,
"faquad_nli": 0.7227569273678784,
"hatebr_offensive": 0.8155967923139503,
"portuguese_hate_speech": 0.6722790404040404,
"tweetsentbr": 0.5992582348356217
} | 0.608989 | 0.426546 |
|
AdaptLLM/medicine-LLM | main | false | float16 | 0 | LLaMAForCausalLM | Original | English | FINISHED | "2024-02-11T13:37:07" | 🔶 : fine-tuned | script | 550 | 2024-04-24T13-09-44.649718 | 1.1.0 | {
"enem_challenge": 0.3806857942617215,
"bluex": 0.3129346314325452,
"oab_exams": 0.28610478359908886,
"assin2_rte": 0.7412241742464284,
"assin2_sts": 0.30610797857979344,
"faquad_nli": 0.6385993049986635,
"hatebr_offensive": 0.4569817890542286,
"portuguese_hate_speech": 0.26575729349526506,
"tweetsentbr": 0.4667966458909563
} | 0.428355 | 0.135877 |
|
AetherResearch/Cerebrum-1.0-7b | main | false | float16 | 7.242 | MistralForCausalLM | Original | English | FINISHED | "2024-03-14T11:07:59" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 332 | 2024-04-01T22-58-48.098123 | 1.1.0 | {
"enem_challenge": 0.6137158852344297,
"bluex": 0.5062586926286509,
"oab_exams": 0.44510250569476084,
"assin2_rte": 0.8562832789419443,
"assin2_sts": 0.7083110279713039,
"faquad_nli": 0.7709976024119299,
"hatebr_offensive": 0.7925948726646638,
"portuguese_hate_speech": 0.6342708554907774,
"tweetsentbr": 0.6171926929726294
} | 0.660525 | 0.494853 |
|
BAAI/Aquila-7B | main | false | float16 | 7 | AquilaModel | Original | ? | FINISHED | "2024-02-05T23:09:00" | 🟢 : pretrained | script | 343 | 2024-04-03T05-32-42.254781 | 1.1.0 | {
"enem_challenge": 0.3275017494751575,
"bluex": 0.2795549374130737,
"oab_exams": 0.3047835990888383,
"assin2_rte": 0.7202499022958302,
"assin2_sts": 0.04640761012170769,
"faquad_nli": 0.47034320848362593,
"hatebr_offensive": 0.6981236353283272,
"portuguese_hate_speech": 0.4164993156903397,
"tweetsentbr": 0.4656320326711388
} | 0.414344 | 0.144131 |
|
BAAI/Aquila2-34B | main | false | bfloat16 | 34 | LlamaForCausalLM | Original | ? | FINISHED | "2024-02-05T23:10:17" | 🟢 : pretrained | script | 484 | 2024-04-18T14-04-47.026230 | 1.1.0 | {
"enem_challenge": 0.5479356193142058,
"bluex": 0.4381084840055633,
"oab_exams": 0.40455580865603646,
"assin2_rte": 0.8261661293083891,
"assin2_sts": 0.643049056717646,
"faquad_nli": 0.4471267110923455,
"hatebr_offensive": 0.4920183585480058,
"portuguese_hate_speech": 0.6606858054226475,
"tweetsentbr": 0.5598737392847967
} | 0.557724 | 0.319206 |
|
BAAI/Aquila2-7B | main | false | float16 | 7 | AquilaModel | Original | ? | FINISHED | "2024-02-05T23:09:07" | 🟢 : pretrained | script | 360 | 2024-04-03T05-55-31.957348 | 1.1.0 | {
"enem_challenge": 0.20573827851644508,
"bluex": 0.14464534075104313,
"oab_exams": 0.3225512528473804,
"assin2_rte": 0.5426094787796916,
"assin2_sts": 0.3589709171853071,
"faquad_nli": 0.49799737773227726,
"hatebr_offensive": 0.642139037433155,
"portuguese_hate_speech": 0.5212215320910973,
"tweetsentbr": 0.2826286167270258
} | 0.390945 | 0.091046 |
|
Bruno/Caramelinho | ybelkada/falcon-7b-sharded-bf16 | main | false | bfloat16 | 0 | ? | Adapter | Portuguese | FINISHED | "2024-02-24T18:01:08" | 🆎 : language adapted models (FP, FT, ...) | leaderboard | 256 | 2024-02-26T15-17-54.708968 | 1.1.0 | {
"enem_challenge": 0.21483554933519944,
"bluex": 0.2211404728789986,
"oab_exams": 0.25148063781321184,
"assin2_rte": 0.4896626375608876,
"assin2_sts": 0.19384903999896694,
"faquad_nli": 0.43917169974115616,
"hatebr_offensive": 0.3396512838306731,
"portuguese_hate_speech": 0.46566706851516976,
"tweetsentbr": 0.563106045239156
} | 0.353174 | 0.017928 |
Bruno/Caramelo_7B | ybelkada/falcon-7b-sharded-bf16 | main | false | bfloat16 | 7 | ? | Adapter | Portuguese | FINISHED | "2024-02-24T18:00:57" | 🆎 : language adapted models (FP, FT, ...) | leaderboard | 255 | 2024-02-26T13-57-57.036659 | 1.1.0 | {
"enem_challenge": 0.1980405878236529,
"bluex": 0.24478442280945759,
"oab_exams": 0.2528473804100228,
"assin2_rte": 0.5427381481762671,
"assin2_sts": 0.07473225338478715,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.33650009913117634,
"portuguese_hate_speech": 0.412292817679558,
"tweetsentbr": 0.35365936890599253
} | 0.31725 | -0.028868 |
CohereForAI/aya-101 | main | false | float16 | 12.921 | T5ForConditionalGeneration | Original | English | FINISHED | "2024-02-17T03:43:40" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 253 | 2024-02-21T19-25-38.847154 | 1.1.0 | {
"enem_challenge": 0.5703289013296011,
"bluex": 0.47844228094575797,
"oab_exams": 0.3895216400911162,
"assin2_rte": 0.845896116707975,
"assin2_sts": 0.18932506997017534,
"faquad_nli": 0.3536861536119358,
"hatebr_offensive": 0.8577866430260047,
"portuguese_hate_speech": 0.5858880778588808,
"tweetsentbr": 0.7292099162284759
} | 0.555565 | 0.354086 |
|
CohereForAI/c4ai-command-r-plus-4bit | main | false | 4bit | 55.052 | CohereForCausalLM | Original | English | FINISHED | "2024-04-05T14:50:15" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 464 | 2024-04-15T16-05-38.445928 | 1.1.0 | {
"enem_challenge": 0.7508747375787264,
"bluex": 0.6620305980528511,
"oab_exams": 0.6255125284738041,
"assin2_rte": 0.9301234467745643,
"assin2_sts": 0.7933785386356376,
"faquad_nli": 0.7718257450767017,
"hatebr_offensive": 0.773798484417851,
"portuguese_hate_speech": 0.7166167166167167,
"tweetsentbr": 0.7540570104676597
} | 0.753135 | 0.625007 |
|
CohereForAI/c4ai-command-r-plus | main | false | float16 | 103.811 | CohereForCausalLM | Original | English | RERUN | "2024-04-07T18:08:25" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 651 | 2024-05-19T05-17-19.033443 | null | null | null | null |
|
CohereForAI/c4ai-command-r-v01 | main | false | float16 | 34.981 | CohereForCausalLM | Original | English | FINISHED | "2024-04-05T14:48:52" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 472 | 2024-04-17T00-36-42.568466 | 1.1.0 | {
"enem_challenge": 0.7158852344296711,
"bluex": 0.6203059805285118,
"oab_exams": 0.5521640091116173,
"assin2_rte": 0.883132179380006,
"assin2_sts": 0.7210331309303998,
"faquad_nli": 0.47272296015180265,
"hatebr_offensive": 0.8222299935886227,
"portuguese_hate_speech": 0.7102306144559665,
"tweetsentbr": 0.48595613300125107
} | 0.664851 | 0.488799 |
|
CultriX/NeuralMona_MoE-4x7B | main | false | bfloat16 | 24.154 | MixtralForCausalLM | Original | English | RERUN | "2024-05-15T18:00:24" | 🤝 : base merges and moerges | leaderboard | 643 | 2024-05-19T05-53-29.044797 | null | null | null | null |
|
DAMO-NLP-MT/polylm-1.7b | main | false | float16 | 1.7 | GPT2LMHeadModel | Original | English | FINISHED | "2024-02-11T13:34:48" | 🟢 : pretrained | script | 478 | 2024-04-17T23-46-04.491918 | 1.1.0 | {
"enem_challenge": 0.1966410076976907,
"bluex": 0.26564673157162727,
"oab_exams": 0.24874715261959,
"assin2_rte": 0.4047692251758633,
"assin2_sts": 0.05167868234986358,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.358843537414966,
"portuguese_hate_speech": 0.4530026545569895,
"tweetsentbr": 0.22711575772255002
} | 0.294011 | -0.067176 |
|
DAMO-NLP-MT/polylm-13b | main | false | float16 | 13 | PolyLMHeadModel | Original | English | FINISHED | "2024-02-11T13:34:54" | 🟢 : pretrained | script | 345 | 2024-04-03T09-53-29.935717 | 1.1.0 | {
"enem_challenge": 0,
"bluex": 0,
"oab_exams": 0,
"assin2_rte": 0,
"assin2_sts": 0,
"faquad_nli": 0,
"hatebr_offensive": 0,
"portuguese_hate_speech": 0,
"tweetsentbr": 0
} | 0 | -0.568819 |
|
Danielbrdz/Barcenas-Llama3-8b-ORPO | main | false | float16 | 8.03 | LlamaForCausalLM | Original | English | FINISHED | "2024-05-13T16:38:54" | 🔶 : fine-tuned/fp on domain-specific datasets | leaderboard | 636 | 2024-05-18T00-12-52.690138 | 1.1.0 | {
"enem_challenge": 0.7102869139258222,
"bluex": 0.5827538247566064,
"oab_exams": 0.508883826879271,
"assin2_rte": 0.9178150146340144,
"assin2_sts": 0.7260402501200387,
"faquad_nli": 0.7308849598805747,
"hatebr_offensive": 0.8698828946051447,
"portuguese_hate_speech": 0.5958643988009942,
"tweetsentbr": 0.4996436497852127
} | 0.682451 | 0.52568 |
|
Deci/DeciLM-6b | main | false | bfloat16 | 5.717 | DeciLMForCausalLM | Original | English | FAILED | "2024-02-05T23:06:24" | 🔶 : fine-tuned | script | 253 | 2024-02-25T19-40-34.104437 | null | null | null | null |
|
Deci/DeciLM-7B | main | false | bfloat16 | 7.044 | DeciLMForCausalLM | Original | English | FINISHED | "2024-02-05T23:06:34" | 🔶 : fine-tuned | script | 336 | 2024-04-02T05-42-17.715000 | 1.1.0 | {
"enem_challenge": 0.5423372988103569,
"bluex": 0.4200278164116829,
"oab_exams": 0.358997722095672,
"assin2_rte": 0.9123267863598024,
"assin2_sts": 0.7555893659678592,
"faquad_nli": 0.7857378310075815,
"hatebr_offensive": 0.6990533471973728,
"portuguese_hate_speech": 0.6754461749208054,
"tweetsentbr": 0.6506550848022137
} | 0.644463 | 0.474065 |
|
DeepMount00/Llama-3-8b-Ita | main | false | bfloat16 | 8.03 | LlamaForCausalLM | Original | English | PENDING | "2024-05-17T15:15:58" | 🔶 : fine-tuned/fp on domain-specific datasets | leaderboard | -1 | null | null | null | null | null |
|
Doctor-Shotgun/limarp-miqu-1-70b-qlora | 152334H/miqu-1-70b-sf | main | false | float16 | 70 | ? | Adapter | English | RERUN | "2024-04-26T08:26:41" | 🔶 : fine-tuned/fp on domain-specific datasets | leaderboard | 620 | 2024-05-16T12-58-23.175366 | null | null | null | null |
EleutherAI/gpt-j-6b | main | false | float16 | 6 | GPTJForCausalLM | Original | English | FINISHED | "2024-02-05T23:12:19" | 🟢 : pretrained | script | 387 | 2024-04-05T04-41-08.855450 | 1.1.0 | {
"enem_challenge": 0.21973407977606718,
"bluex": 0.2364394993045897,
"oab_exams": 0.25466970387243737,
"assin2_rte": 0.3582588385476761,
"assin2_sts": 0.14562487212003206,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.6588376162844248,
"portuguese_hate_speech": 0.5468502264582175,
"tweetsentbr": 0.3534145441122185
} | 0.357054 | 0.040386 |
|
EleutherAI/gpt-neo-1.3B | main | false | float16 | 1.366 | GPTNeoForCausalLM | Original | English | FINISHED | "2024-02-05T23:12:06" | 🟢 : pretrained | script | 369 | 2024-04-04T01-04-14.137713 | 1.1.0 | {
"enem_challenge": 0.20153953813855843,
"bluex": 0.1835883171070932,
"oab_exams": 0.2419134396355353,
"assin2_rte": 0.3333333333333333,
"assin2_sts": 0.27954490493177114,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.22986425339366515,
"tweetsentbr": 0.15870406189555125
} | 0.266831 | -0.134397 |
|
EleutherAI/gpt-neo-125m | main | false | float16 | 0.15 | GPTNeoForCausalLM | Original | English | FINISHED | "2024-02-05T23:11:59" | 🟢 : pretrained | script | 368 | 2024-04-04T00-23-23.313643 | 1.1.0 | {
"enem_challenge": 0.18824352694191743,
"bluex": 0.18497913769123783,
"oab_exams": 0.22460136674259681,
"assin2_rte": 0.40826127460837947,
"assin2_sts": 0.13567407692821803,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3359391417643845,
"portuguese_hate_speech": 0.23174470457079152,
"tweetsentbr": 0.1506866897702477
} | 0.255532 | -0.13829 |
|
EleutherAI/gpt-neo-2.7B | main | false | float16 | 2.718 | GPTNeoForCausalLM | Original | English | FINISHED | "2024-02-05T23:12:14" | 🟢 : pretrained | script | 368 | 2024-04-04T01-08-46.345259 | 1.1.0 | {
"enem_challenge": 0.19244226731980407,
"bluex": 0.21696801112656466,
"oab_exams": 0.24236902050113895,
"assin2_rte": 0.34680711177144763,
"assin2_sts": 0.2028018720426534,
"faquad_nli": 0.44921692379616646,
"hatebr_offensive": 0.3686829976188286,
"portuguese_hate_speech": 0.23174470457079152,
"tweetsentbr": 0.27297529346501975
} | 0.280445 | -0.107237 |
|
EleutherAI/gpt-neox-20b | main | false | float16 | 20.739 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-05T23:12:26" | 🟢 : pretrained | script | 369 | 2024-04-04T02-29-53.418614 | 1.1.0 | {
"enem_challenge": 0.19384184744576627,
"bluex": 0.22809457579972184,
"oab_exams": 0.2469248291571754,
"assin2_rte": 0.3849409111254498,
"assin2_sts": 0.24127351840284703,
"faquad_nli": 0.4362532523850824,
"hatebr_offensive": 0.3761140819964349,
"portuguese_hate_speech": 0.2315040773180308,
"tweetsentbr": 0.20969654257926673
} | 0.283183 | -0.103534 |
|
EleutherAI/polyglot-ko-12.8b | main | false | float16 | 13.061 | GPTNeoXForCausalLM | Original | Other | FAILED | "2024-02-05T23:15:01" | 🟢 : pretrained | script | 465 | 2024-04-15T22-28-42.463373 | null | null | null | null |
|
EleutherAI/pythia-12b-deduped | main | false | float16 | 12 | GPTNeoXForCausalLM | Original | English | FAILED | "2024-02-05T23:11:53" | 🟢 : pretrained | script | 367 | 2024-04-04T00-13-25.206534 | null | null | null | null |
|
EleutherAI/pythia-12b | main | false | float16 | 12 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-11T13:39:39" | 🟢 : pretrained | script | 601 | 2024-05-10T12-08-48.948197 | 1.1.0 | {
"enem_challenge": 0.2092372288313506,
"bluex": 0.2267037552155772,
"oab_exams": 0.2501138952164009,
"assin2_rte": 0.4179270639535547,
"assin2_sts": 0.05030336220324859,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.4676874308442238,
"portuguese_hate_speech": 0.558212623295537,
"tweetsentbr": 0.44153559547571525
} | 0.340153 | 0.014042 |
|
EleutherAI/pythia-14m | main | false | float16 | 0.039 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-05T23:11:12" | 🟢 : pretrained | script | 363 | 2024-04-03T19-47-56.339960 | 1.1.0 | {
"enem_challenge": 0.19104268719384185,
"bluex": 0.17941585535465926,
"oab_exams": 0.21822323462414578,
"assin2_rte": 0.2210516588115701,
"assin2_sts": 0.0006847937896062521,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.17328604471858133,
"portuguese_hate_speech": 0.2692126355492692,
"tweetsentbr": 0.008390382047306943
} | 0.188996 | -0.247927 |
|
EleutherAI/pythia-160m-deduped | main | false | float16 | 0.213 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-05T23:11:23" | 🟢 : pretrained | script | 364 | 2024-04-03T21-13-20.844629 | 1.1.0 | {
"enem_challenge": 0.20713785864240727,
"bluex": 0.17941585535465926,
"oab_exams": 0.24555808656036446,
"assin2_rte": 0.3389102997234224,
"assin2_sts": 0.04193510248223561,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.45262567913287954,
"portuguese_hate_speech": 0.38733242633164233,
"tweetsentbr": 0.274837496010884
} | 0.285268 | -0.079546 |
|
EleutherAI/pythia-160m | main | false | float16 | 0.213 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-11T13:39:10" | 🟢 : pretrained | script | 553 | 2024-04-24T22-01-22.569827 | 1.1.0 | {
"enem_challenge": 0.20503848845346395,
"bluex": 0.1905424200278164,
"oab_exams": 0.22779043280182232,
"assin2_rte": 0.5474759773218106,
"assin2_sts": 0.05731560767696195,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.412292817679558,
"tweetsentbr": 0.23733809038278153
} | 0.294531 | -0.060204 |
|
EleutherAI/pythia-1b-deduped | main | false | float16 | 1.079 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-05T23:11:36" | 🟢 : pretrained | script | 366 | 2024-04-03T22-36-01.685148 | 1.1.0 | {
"enem_challenge": 0.1994401679496151,
"bluex": 0.20584144645340752,
"oab_exams": 0.2378132118451025,
"assin2_rte": 0.34088811077510744,
"assin2_sts": 0.058535886752163556,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3503150417064208,
"portuguese_hate_speech": 0.23407429779522804,
"tweetsentbr": 0.2072905953605302
} | 0.25265 | -0.142279 |
|
EleutherAI/pythia-1b | main | false | float16 | 1.079 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-11T13:39:22" | 🟢 : pretrained | script | 253 | 2024-02-22T09-20-51.293467 | 1.1.0 | {
"enem_challenge": 0.18964310706787962,
"bluex": 0.19193324061196107,
"oab_exams": 0.24145785876993167,
"assin2_rte": 0.4650037024436128,
"assin2_sts": 0,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.22986425339366515,
"tweetsentbr": 0.1800598326070416
} | 0.252328 | -0.13319 |
|
EleutherAI/pythia-2.8b-deduped | main | false | float16 | 2.909 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-05T23:11:43" | 🟢 : pretrained | script | 366 | 2024-04-03T22-36-58.032425 | 1.1.0 | {
"enem_challenge": 0.2085374387683695,
"bluex": 0.22531293463143254,
"oab_exams": 0.2505694760820046,
"assin2_rte": 0.3333333333333333,
"assin2_sts": 0.25073155863228036,
"faquad_nli": 0.17939674437408695,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.22986425339366515,
"tweetsentbr": 0.215234981952897
} | 0.247368 | -0.173173 |
|
EleutherAI/pythia-2.8b | main | false | float16 | 2.909 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-11T13:39:28" | 🟢 : pretrained | script | 554 | 2024-04-24T22-06-55.870441 | 1.1.0 | {
"enem_challenge": 0.21133659902029392,
"bluex": 0.2239221140472879,
"oab_exams": 0.24100227790432802,
"assin2_rte": 0.3333333333333333,
"assin2_sts": 0.20712471558047654,
"faquad_nli": 0.17939674437408695,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.22986425339366515,
"tweetsentbr": 0.20067205514503328
} | 0.239998 | -0.181654 |
|
EleutherAI/pythia-410m-deduped | main | false | float16 | 0.506 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-05T23:11:30" | 🟢 : pretrained | script | 365 | 2024-04-03T21-43-53.606908 | 1.1.0 | {
"enem_challenge": 0.19174247725682295,
"bluex": 0.20166898470097358,
"oab_exams": 0.2337129840546697,
"assin2_rte": 0.3333333333333333,
"assin2_sts": 0.040138096618478045,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.22986425339366515,
"tweetsentbr": 0.2072905953605302
} | 0.245638 | -0.152948 |
|
EleutherAI/pythia-410m | main | false | float16 | 0.506 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-11T13:39:16" | 🟢 : pretrained | script | 556 | 2024-04-24T22-06-03.860992 | 1.1.0 | {
"enem_challenge": 0.1980405878236529,
"bluex": 0.2364394993045897,
"oab_exams": 0.24555808656036446,
"assin2_rte": 0.3333333333333333,
"assin2_sts": 0.029778872408316164,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.34039116046362083,
"portuguese_hate_speech": 0.2524344906158182,
"tweetsentbr": 0.29805007023492736
} | 0.263742 | -0.125096 |
|
EleutherAI/pythia-6.9b-deduped | main | false | float16 | 6.9 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-05T23:11:48" | 🟢 : pretrained | script | 367 | 2024-04-03T23-20-39.755265 | 1.1.0 | {
"enem_challenge": 0.20783764870538837,
"bluex": 0.2211404728789986,
"oab_exams": 0.2656036446469248,
"assin2_rte": 0.3333333333333333,
"assin2_sts": 0.07157697545169536,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3349186726374015,
"portuguese_hate_speech": 0.4252047249041825,
"tweetsentbr": 0.2072905953605302
} | 0.278507 | -0.097692 |
|
EleutherAI/pythia-6.9b | main | false | float16 | 6.9 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-11T13:39:33" | 🟢 : pretrained | script | 253 | 2024-02-22T09-49-44.237199 | 1.1.0 | {
"enem_challenge": 0.19454163750874737,
"bluex": 0.20305980528511822,
"oab_exams": 0.23006833712984054,
"assin2_rte": 0.5918513695309833,
"assin2_sts": 0.0025941556675326705,
"faquad_nli": 0.3121791039110175,
"hatebr_offensive": 0.32770726983578174,
"portuguese_hate_speech": 0.42473737096921527,
"tweetsentbr": 0.32668391292199067
} | 0.29038 | -0.065609 |
|
EleutherAI/pythia-70m-deduped | main | false | float16 | 0.096 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-05T23:11:17" | 🟢 : pretrained | script | 364 | 2024-04-03T21-10-06.848681 | 1.1.0 | {
"enem_challenge": 0.172148355493352,
"bluex": 0.1835883171070932,
"oab_exams": 0.2041002277904328,
"assin2_rte": 0.23382263963539596,
"assin2_sts": 0.02026922309956098,
"faquad_nli": 0.2759039805530234,
"hatebr_offensive": 0.28076386043861,
"portuguese_hate_speech": 0.24182579976211263,
"tweetsentbr": 0.13108766233766234
} | 0.193723 | -0.242146 |
|
EleutherAI/pythia-70m | main | false | float16 | 0.096 | GPTNeoXForCausalLM | Original | English | FINISHED | "2024-02-11T13:38:58" | 🟢 : pretrained | script | 552 | 2024-04-24T21-25-37.361813 | 1.1.0 | {
"enem_challenge": 0.0622813156053184,
"bluex": 0.2086230876216968,
"oab_exams": 0.030068337129840545,
"assin2_rte": 0.4502521949740358,
"assin2_sts": 0.006173005990956128,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.419144092439547,
"portuguese_hate_speech": 0.3087375175771073,
"tweetsentbr": 0.12087469376644588
} | 0.227312 | -0.156292 |
|
FuseAI/FuseChat-7B-VaRM | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | English | FINISHED | "2024-03-04T15:36:23" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 311 | 2024-03-08T15-26-39.517660 | 1.1.0 | {
"enem_challenge": 0.6480055983205039,
"bluex": 0.5493741307371349,
"oab_exams": 0.4182232346241458,
"assin2_rte": 0.9272868051476477,
"assin2_sts": 0.7836651113903375,
"faquad_nli": 0.787259111855886,
"hatebr_offensive": 0.8223021238433512,
"portuguese_hate_speech": 0.6973371097488426,
"tweetsentbr": 0.44067858320963216
} | 0.674904 | 0.520153 |
|
FuseAI/OpenChat-3.5-7B-Solar | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | English | FINISHED | "2024-03-04T14:31:17" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 310 | 2024-03-08T13-22-15.392524 | 1.1.0 | {
"enem_challenge": 0.6452064380685795,
"bluex": 0.5465924895688457,
"oab_exams": 0.4218678815489749,
"assin2_rte": 0.927694039333938,
"assin2_sts": 0.7822958272680564,
"faquad_nli": 0.7777514761773254,
"hatebr_offensive": 0.8223021238433512,
"portuguese_hate_speech": 0.7022123148304151,
"tweetsentbr": 0.5877550319137789
} | 0.690409 | 0.54326 |
|
HeyLucasLeao/gpt-neo-small-portuguese | main | false | float16 | 0 | GPTNeoForCausalLM | Original | Portuguese | FINISHED | "2024-02-05T23:14:26" | 🆎 : language adapted models (FP, FT, ...) | script | 306 | 2024-03-08T04-18-26.971751 | 1.1.0 | {
"enem_challenge": 0.16445066480055984,
"bluex": 0.03894297635605007,
"oab_exams": 0.023234624145785875,
"assin2_rte": 0.3528931097729911,
"assin2_sts": 0.040770337667175804,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.2854909694680288,
"tweetsentbr": 0.1506866897702477
} | 0.203273 | -0.204329 |
|
HuggingFaceH4/zephyr-7b-alpha | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | English | FINISHED | "2024-04-14T18:12:40" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 463 | 2024-04-15T10-11-49.222023 | 1.1.0 | {
"enem_challenge": 0.562631210636809,
"bluex": 0.5104311543810849,
"oab_exams": 0.40273348519362184,
"assin2_rte": 0.9011395676691729,
"assin2_sts": 0.7233470427220756,
"faquad_nli": 0.6962929525710168,
"hatebr_offensive": 0.8526041634724087,
"portuguese_hate_speech": 0.652858285536766,
"tweetsentbr": 0.6548809121737628
} | 0.66188 | 0.50199 |
|
HuggingFaceH4/zephyr-7b-beta | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | English | FINISHED | "2024-02-21T18:04:59" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 253 | 2024-02-21T23-57-52.146406 | 1.1.0 | {
"enem_challenge": 0.5787263820853744,
"bluex": 0.47983310152990266,
"oab_exams": 0.3931662870159453,
"assin2_rte": 0.8836486323653452,
"assin2_sts": 0.6678266192299295,
"faquad_nli": 0.7017672651113582,
"hatebr_offensive": 0.8176778106453834,
"portuguese_hate_speech": 0.6658626171810755,
"tweetsentbr": 0.46064331884597925
} | 0.627684 | 0.45238 |
|
HuggingFaceH4/zephyr-7b-gemma-v0.1 | main | false | bfloat16 | 8.538 | GemmaForCausalLM | Original | English | FINISHED | "2024-03-02T00:49:26" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 386 | 2024-04-04T23-04-13.841492 | 1.1.0 | {
"enem_challenge": 0.5815255423372988,
"bluex": 0.47426981919332406,
"oab_exams": 0.40728929384965834,
"assin2_rte": 0.8604729280813948,
"assin2_sts": 0.7259016112950178,
"faquad_nli": 0.7486076732673268,
"hatebr_offensive": 0.8755151098901099,
"portuguese_hate_speech": 0.6244738628649016,
"tweetsentbr": 0.6159470691844793
} | 0.657111 | 0.494637 |
|
HuggingFaceTB/cosmo-1b | main | false | float16 | 1.742 | LlamaForCausalLM | Original | English | FINISHED | "2024-02-24T19:59:46" | 🔶 : fine-tuned/fp on domain-specific datasets | leaderboard | 258 | 2024-02-26T18-03-06.542808 | 1.1.0 | {
"enem_challenge": 0.20783764870538837,
"bluex": 0.20723226703755215,
"oab_exams": 0.23234624145785876,
"assin2_rte": 0.5526600270022243,
"assin2_sts": 0.07330211383402985,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3411338879766024,
"portuguese_hate_speech": 0.24442150902068246,
"tweetsentbr": 0.2534405090147715
} | 0.283559 | -0.085225 |
|
Intel/neural-chat-7b-v3-1 | main | false | float16 | 7.242 | MistralForCausalLM | Original | English | FINISHED | "2024-02-21T18:03:11" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 253 | 2024-02-25T06-21-33.008420 | 1.1.0 | {
"enem_challenge": 0.6263121063680895,
"bluex": 0.47983310152990266,
"oab_exams": 0.39726651480637815,
"assin2_rte": 0.9268770228292367,
"assin2_sts": 0.7658477385894799,
"faquad_nli": 0.7840135895978708,
"hatebr_offensive": 0.8905574366528357,
"portuguese_hate_speech": 0.6685671281654837,
"tweetsentbr": 0.5145983702206705
} | 0.672653 | 0.522586 |
|
Intel/neural-chat-7b-v3-3 | main | false | float16 | 7 | MistralForCausalLM | Original | English | FINISHED | "2024-02-21T18:03:21" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 253 | 2024-02-21T22-54-50.520595 | 1.1.0 | {
"enem_challenge": 0.6263121063680895,
"bluex": 0.5034770514603616,
"oab_exams": 0.39635535307517084,
"assin2_rte": 0.9140545431322211,
"assin2_sts": 0.7587721518241414,
"faquad_nli": 0.7147222222222223,
"hatebr_offensive": 0.8653967318817455,
"portuguese_hate_speech": 0.6322323153577603,
"tweetsentbr": 0.4689260995001763
} | 0.653361 | 0.487161 |
|
J-AI/Phi_3-CREWAI-PTBR | main | false | float16 | 0 | MistralForCausalLM | Original | Portuguese | FAILED | "2024-05-15T20:00:58" | 🔶 : fine-tuned/fp on domain-specific datasets | leaderboard | -1 | null | null | null | null | null |
|
J-LAB/BRisa-7B-Instruct-v0.2 | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | Portuguese | FINISHED | "2024-04-18T23:08:40" | 💬 : chat models (RLHF, DPO, IFT, ...) | manual | 502 | 2024-04-19T10-58-40.659391 | 1.1.0 | {
"enem_challenge": 0.6508047585724283,
"bluex": 0.5368567454798331,
"oab_exams": 0.4337129840546697,
"assin2_rte": 0.914959114959115,
"assin2_sts": 0.7360504820365534,
"faquad_nli": 0.6830556684274685,
"hatebr_offensive": 0.7427748086927932,
"portuguese_hate_speech": 0.6511659683002369,
"tweetsentbr": 0.6077237421626446
} | 0.6619 | 0.491829 |
|
J-LAB/BRisa-7B-Instruct-v0.2 | main | false | float16 | 7.242 | MistralForCausalLM | Original | Portuguese | FINISHED | "2024-04-18T13:21:31" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 505 | 2024-04-19T16-12-48.574869 | 1.1.0 | {
"enem_challenge": 0.6522043386983905,
"bluex": 0.5438108484005564,
"oab_exams": 0.4432801822323462,
"assin2_rte": 0.9133356979253711,
"assin2_sts": 0.7369091629413509,
"faquad_nli": 0.6808719560094265,
"hatebr_offensive": 0.7400764917752776,
"portuguese_hate_speech": 0.657700175064164,
"tweetsentbr": 0.6091722968255823
} | 0.664151 | 0.49476 |
|
JJhooww/Mistral-7B-v0.2-Base_ptbr | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | Portuguese | FINISHED | "2024-04-18T23:08:40" | 🆎 : language adapted models (FP, FT, ...) | leaderboard | 491 | 2024-04-20T03-39-36.902005 | 1.1.0 | {
"enem_challenge": 0.629111266620014,
"bluex": 0.4631432545201669,
"oab_exams": 0.3835990888382688,
"assin2_rte": 0.9019128003131698,
"assin2_sts": 0.16704630760888603,
"faquad_nli": 0.566892243623113,
"hatebr_offensive": 0.7250569715248458,
"portuguese_hate_speech": 0.6679089916559607,
"tweetsentbr": 0.5726952459126636
} | 0.564152 | 0.374817 |
|
JJhooww/Mistral-7B-v0.2-Base_ptbr | main | false | float16 | 7.242 | MistralForCausalLM | Original | Portuguese | FINISHED | "2024-04-13T03:15:37" | 🆎 : language adapted models (FP, FT, ...) | leaderboard | 442 | 2024-04-13T17-20-04.237102 | 1.1.0 | {
"enem_challenge": 0.6494051784464661,
"bluex": 0.5396383866481224,
"oab_exams": 0.4542141230068337,
"assin2_rte": 0.9011456831413249,
"assin2_sts": 0.7251095355270992,
"faquad_nli": 0.6904462094795298,
"hatebr_offensive": 0.7961717229751414,
"portuguese_hate_speech": 0.5852091456930166,
"tweetsentbr": 0.6232338461110419
} | 0.66273 | 0.492659 |
|
JJhooww/MistralReloadBR_v2_ptbr | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | Portuguese | FINISHED | "2024-03-08T02:22:06" | 🆎 : language adapted models (FP, FT, ...) | leaderboard | 320 | 2024-03-09T04-58-37.486266 | 1.1.0 | {
"enem_challenge": 0.6081175647305809,
"bluex": 0.47983310152990266,
"oab_exams": 0.40728929384965834,
"assin2_rte": 0.9101172201226876,
"assin2_sts": 0.745635698648774,
"faquad_nli": 0.4760412001791312,
"hatebr_offensive": 0.7982678280152018,
"portuguese_hate_speech": 0.6632432143375528,
"tweetsentbr": 0.6700347269707226
} | 0.639842 | 0.456727 |
|
JJhooww/Mistral_Relora_Step2k | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | Portuguese | FINISHED | "2024-04-18T23:08:40" | 🆎 : language adapted models (FP, FT, ...) | leaderboard | 508 | 2024-04-20T03-09-26.234801 | 1.1.0 | {
"enem_challenge": 0.6179146256123164,
"bluex": 0.5159944367176634,
"oab_exams": 0.39635535307517084,
"assin2_rte": 0.9121625173669783,
"assin2_sts": 0.7065946896645577,
"faquad_nli": 0.6466313961043266,
"hatebr_offensive": 0.8143254279726638,
"portuguese_hate_speech": 0.652940879778074,
"tweetsentbr": 0.5167597069914197
} | 0.642187 | 0.46864 |
|
JJhooww/Mistral_Relora_Step2k | main | false | float16 | 7.242 | MistralForCausalLM | Original | Portuguese | FINISHED | "2024-03-08T02:22:23" | 🆎 : language adapted models (FP, FT, ...) | leaderboard | 320 | 2024-03-09T08-42-21.029909 | 1.1.0 | {
"enem_challenge": 0.615815255423373,
"bluex": 0.5257301808066759,
"oab_exams": 0.3981776765375854,
"assin2_rte": 0.9113496854193482,
"assin2_sts": 0.7074610038971542,
"faquad_nli": 0.6526577185427341,
"hatebr_offensive": 0.8133973664850924,
"portuguese_hate_speech": 0.6536416538696902,
"tweetsentbr": 0.5193585604823832
} | 0.644177 | 0.471533 |
|
JosephusCheung/LL7M | main | false | float16 | 0.007 | LlamaForCausalLM | Original | English | FINISHED | "2024-04-21T18:48:05" | 🔶 : fine-tuned/fp on domain-specific datasets | leaderboard | 539 | 2024-04-23T00-09-23.964862 | 1.1.0 | {
"enem_challenge": 0.22813156053184044,
"bluex": 0.21279554937413073,
"oab_exams": 0.24464692482915718,
"assin2_rte": 0.656772420167965,
"assin2_sts": 0.21905517553818948,
"faquad_nli": 0.5111651047090131,
"hatebr_offensive": 0.7436810107109835,
"portuguese_hate_speech": 0.26722448543297267,
"tweetsentbr": 0.33553712665916735
} | 0.37989 | 0.082043 |
|
Kquant03/CognitiveFusion2-4x7B-BF16 | main | false | bfloat16 | 24.154 | MixtralForCausalLM | Original | English | FINISHED | "2024-05-15T17:43:32" | 🤝 : base merges and moerges | leaderboard | 641 | 2024-05-19T01-32-18.922295 | 1.1.0 | {
"enem_challenge": 0.6389083275017495,
"bluex": 0.5438108484005564,
"oab_exams": 0.4145785876993166,
"assin2_rte": 0.9199303114955995,
"assin2_sts": 0.7779218286784337,
"faquad_nli": 0.7730900759529709,
"hatebr_offensive": 0.8170741905271608,
"portuguese_hate_speech": 0.7076245180055771,
"tweetsentbr": 0.49451740077068485
} | 0.676384 | 0.522319 |
|
Kukedlc/NeuralLLaMa-3-8b-DT-v0.1 | main | false | float16 | 8.03 | LlamaForCausalLM | Original | English | PENDING | "2024-05-17T15:18:30" | 🔶 : fine-tuned/fp on domain-specific datasets | leaderboard | -1 | null | null | null | null | null |
|
LoneStriker/Smaug-34B-v0.1-GPTQ | main | false | GPTQ | 272 | LlamaForCausalLM | Original | English | PENDING | "2024-05-17T07:58:06" | 💬 : chat (RLHF, DPO, IFT, ...) | leaderboard | -1 | null | null | null | null | null |
|
M4-ai/tau-0.5B-instruct-DPOP | main | false | float16 | 0.464 | Qwen2ForCausalLM | Original | English | FINISHED | "2024-04-21T18:49:55" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 540 | 2024-04-23T02-24-34.579672 | 1.1.0 | {
"enem_challenge": 0.23163051084674596,
"bluex": 0.21279554937413073,
"oab_exams": 0.25239179954441915,
"assin2_rte": 0.6250683495142939,
"assin2_sts": 0.14960279813437427,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.43626343456325545,
"tweetsentbr": 0.2236745795098198
} | 0.322713 | -0.019326 |
|
M4-ai/tau-0.5B | main | false | float16 | 0.464 | Qwen2ForCausalLM | Original | English | FINISHED | "2024-04-21T18:49:31" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 537 | 2024-04-23T00-49-38.450870 | 1.1.0 | {
"enem_challenge": 0.19314205738278517,
"bluex": 0.18915159944367177,
"oab_exams": 0.23462414578587698,
"assin2_rte": 0.39285662181494563,
"assin2_sts": 0.07216057977107923,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.222010481181515,
"portuguese_hate_speech": 0.412292817679558,
"tweetsentbr": 0.21833154883841985
} | 0.263803 | -0.121635 |
|
M4-ai/tau-1.8B | main | false | bfloat16 | 1.837 | Qwen2ForCausalLM | Original | English | FINISHED | "2024-04-21T18:50:22" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 538 | 2024-04-23T02-51-19.597157 | 1.1.0 | {
"enem_challenge": 0.2610216934919524,
"bluex": 0.23504867872044508,
"oab_exams": 0.25466970387243737,
"assin2_rte": 0.6240877656394997,
"assin2_sts": 0.19269473597203718,
"faquad_nli": 0.3987209371824756,
"hatebr_offensive": 0.41976405672054573,
"portuguese_hate_speech": 0.32018744664167026,
"tweetsentbr": 0.15747184099568803
} | 0.318185 | -0.032001 |
|
MagusCorp/legislinho | main | false | float16 | 3.862 | MistralForCausalLM | Original | Portuguese | FINISHED | "2024-04-09T02:48:03" | 🆎 : language adapted models (FP, FT, ...) | leaderboard | 434 | 2024-04-13T08-30-14.215121 | 1.1.0 | {
"enem_challenge": 0.6305108467459762,
"bluex": 0.5104311543810849,
"oab_exams": 0.43234624145785877,
"assin2_rte": 0.8870184075342467,
"assin2_sts": 0.6776356777228696,
"faquad_nli": 0.6379609737375439,
"hatebr_offensive": 0.7264113460475401,
"portuguese_hate_speech": 0.6563004846526657,
"tweetsentbr": 0.5651626679179093
} | 0.635975 | 0.453531 |
|
MaziyarPanahi/Mistral-7B-Instruct-Aya-101 | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | English | FINISHED | "2024-04-17T06:11:16" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 474 | 2024-04-17T09-07-30.140283 | 1.1.0 | {
"enem_challenge": 0.6060181945416375,
"bluex": 0.5438108484005564,
"oab_exams": 0.39362186788154896,
"assin2_rte": 0.9072398971802695,
"assin2_sts": 0.7641692139433879,
"faquad_nli": 0.6218181818181818,
"hatebr_offensive": 0.8004209608305171,
"portuguese_hate_speech": 0.6762940852684385,
"tweetsentbr": 0.5030635127570277
} | 0.646273 | 0.470432 |
|
MulaBR/Mula-4x160-v0.1 | main | false | float16 | 0.417 | MixtralForCausalLM | Original | Portuguese | FINISHED | "2024-04-21T22:40:10" | 🟢 : pretrained | manual | 531 | 2024-04-22T00-05-24.255163 | 1.1.0 | {
"enem_challenge": 0.21343596920923724,
"bluex": 0.2517385257301808,
"oab_exams": 0.2505694760820046,
"assin2_rte": 0.335683441456502,
"assin2_sts": 0.11349165436666529,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.41502718891863716,
"portuguese_hate_speech": 0.22986425339366515,
"tweetsentbr": 0.11244668476153548
} | 0.262435 | -0.129114 |
|
MulaBR/Mula-8x160-v0.1 | main | false | float16 | 0.748 | MixtralForCausalLM | Original | Portuguese | FINISHED | "2024-05-08T13:48:49" | 🟢 : pretrained | leaderboard | 597 | 2024-05-08T21-44-31.351826 | 1.1.0 | {
"enem_challenge": 0.20503848845346395,
"bluex": 0.21279554937413073,
"oab_exams": 0.26651480637813213,
"assin2_rte": 0.22379028616587124,
"assin2_sts": 0.04732817202670604,
"faquad_nli": 0.4396551724137931,
"hatebr_offensive": 0.3333333333333333,
"portuguese_hate_speech": 0.40205171444631815,
"tweetsentbr": 0.18457882485126118
} | 0.257232 | -0.135224 |
|
NOVA-vision-language/GlorIA-1.3B | main | false | float16 | 1.416 | GPTNeoForCausalLM | Original | Portuguese | FINISHED | "2024-03-07T19:45:38" | 🟢 : pretrained | leaderboard | 301 | 2024-03-07T22-34-35.217921 | 1.1.0 | {
"enem_challenge": 0.018894331700489854,
"bluex": 0.031988873435326845,
"oab_exams": 0.05193621867881549,
"assin2_rte": 0,
"assin2_sts": 0.023212602251989234,
"faquad_nli": 0.0026041666666666665,
"hatebr_offensive": 0.0028436222959357994,
"portuguese_hate_speech": 0.23522853957636566,
"tweetsentbr": 0.0018832391713747645
} | 0.040955 | -0.499694 |
|
Nexusflow/Starling-LM-7B-beta | main | false | bfloat16 | 7.242 | MistralForCausalLM | Original | English | FINISHED | "2024-03-29T12:49:58" | 💬 : chat models (RLHF, DPO, IFT, ...) | leaderboard | 331 | 2024-04-01T21-15-54.379246 | 1.1.0 | {
"enem_challenge": 0.6466060181945417,
"bluex": 0.5382475660639777,
"oab_exams": 0.4542141230068337,
"assin2_rte": 0.9256528007689433,
"assin2_sts": 0.8246749931266709,
"faquad_nli": 0.7748688218404758,
"hatebr_offensive": 0.8311091883257347,
"portuguese_hate_speech": 0.7137054053375511,
"tweetsentbr": 0.5036962690569555
} | 0.690308 | 0.541226 |
|
NotAiLOL/Yi-1.5-dolphin-9B | main | false | bfloat16 | 8.829 | LlamaForCausalLM | Original | English | PENDING | "2024-05-17T20:48:05" | 🔶 : fine-tuned/fp on domain-specific datasets | leaderboard | -1 | null | null | null | null | null |
|
NousResearch/Hermes-2-Pro-Llama-3-8B | main | false | float16 | 8.031 | LlamaForCausalLM | Original | English | FINISHED | "2024-05-06T23:04:05" | 💬 : chat (RLHF, DPO, IFT, ...) | leaderboard | 593 | 2024-05-07T04-01-33.854422 | 1.1.0 | {
"enem_challenge": 0.6787963610916725,
"bluex": 0.5702364394993046,
"oab_exams": 0.44738041002277906,
"assin2_rte": 0.9223739628332301,
"assin2_sts": 0.7575480918675715,
"faquad_nli": 0.7486659964426572,
"hatebr_offensive": 0.821316847945847,
"portuguese_hate_speech": 0.6324128242225997,
"tweetsentbr": 0.6706448057731071
} | 0.694375 | 0.543822 |
|
NousResearch/Nous-Capybara-34B | main | false | bfloat16 | 34 | LlamaForCausalLM | Original | English | FINISHED | "2024-04-26T07:21:50" | 🔶 : fine-tuned/fp on domain-specific datasets | leaderboard | 617 | 2024-05-16T07-59-45.028987 | 1.1.0 | {
"enem_challenge": 0.7116864940517844,
"bluex": 0.6300417246175244,
"oab_exams": 0.5530751708428246,
"assin2_rte": 0.9007100934823724,
"assin2_sts": 0.757100596654299,
"faquad_nli": 0.7731239092495636,
"hatebr_offensive": 0.7408626005155765,
"portuguese_hate_speech": 0.7161125319693095,
"tweetsentbr": 0.7078849222478135
} | 0.721178 | 0.578884 |
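In each finished row, the `result_metrics` JSON maps the nine benchmark tasks to their scores, and the first trailing column (`result_metrics_average`) matches the plain arithmetic mean of those nine values. A minimal sketch of recomputing that column, using the gpt-neo-125m row above (the helper name `average_score` is ours, not part of the dataset):

```python
import json

def average_score(result_metrics: dict) -> float:
    """Arithmetic mean over the per-task scores in a result_metrics dict."""
    return sum(result_metrics.values()) / len(result_metrics)

# result_metrics for EleutherAI/gpt-neo-125m, copied verbatim from the first row.
gpt_neo_125m = json.loads("""
{
    "enem_challenge": 0.18824352694191743,
    "bluex": 0.18497913769123783,
    "oab_exams": 0.22460136674259681,
    "assin2_rte": 0.40826127460837947,
    "assin2_sts": 0.13567407692821803,
    "faquad_nli": 0.4396551724137931,
    "hatebr_offensive": 0.3359391417643845,
    "portuguese_hate_speech": 0.23174470457079152,
    "tweetsentbr": 0.1506866897702477
}
""")

print(round(average_score(gpt_neo_125m), 6))  # 0.255532, the value in the average column
```

The second trailing column (`result_metrics_npm`) is a normalized score and is not reproduced by a simple mean; how it is derived is not stated in the preview.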