arena-results/data/retrieval_individual-e24d4f9c-7b24-4491-968b-6ff7be4d9e89.jsonl
{"tstamp": 1722293244.2627, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722293210.4728, "finish": 1722293244.2627, "ip": "", "conv_id": "6c384347cc5f4cbeb88f73d02d9bbaa8", "model_name": "GritLM/GritLM-7B", "prompt": "MTEB: Massive text embedding benchmark<|SEP|>BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval<|SEP|>The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text EmbeddingThe Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding<|SEP|>Beyond the imitation game: Quantifying and extrapolating the capabilities of language models<|SEP|>Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks<|SEP|>Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification<|SEP|>Deep Residual Learning for Image Recognition", "output": [["MTEB: Massive text embedding benchmark<|SEP|>BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval<|SEP|>The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text EmbeddingThe Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding<|SEP|>Beyond the imitation game: Quantifying and extrapolating the capabilities of language models<|SEP|>Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks<|SEP|>Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification<|SEP|>Deep Residual Learning for Image Recognition", "Language model\n\nRecurrent neural network Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, furtherly causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.\nLarge language models\nAlthough sometimes matching human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do.\nEvaluation and benchmarks\nEvaluation of the quality of language models is mostly done by comparison to human created sample benchmarks created from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves.\nVarious data sets have been developed for use in evaluating language processing systems. 
These include:\nCorpus of Linguistic Acceptability\nGLUE benchmark\nMicrosoft Research Paraphrase Corpus\nMulti-Genre Natural Language Inference\nQuestion Natural Language Inference\nQuora Question Pairs\nRecognizing Textual Entailment\nSemantic Textual Similarity Benchmark\nSQuAD question answering Test\nStanford Sentiment Treebank\nWinograd NLI\nBoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU (Massive Multitask Language Understanding), BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs. (LLaMa Benchmark)"]], "corpus": "wikipedia"}
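Each line of this file is one JSON record with the schema shown above. A minimal Python sketch for reading it follows; the local path is an assumption (the file would first need to be downloaded, e.g. with huggingface_hub), while the field names (prompt, output, model_name, corpus) and the <|SEP|> separator are taken directly from the record.

import json

# Hypothetical local path; adjust to wherever the .jsonl file was downloaded.
PATH = "retrieval_individual-e24d4f9c-7b24-4491-968b-6ff7be4d9e89.jsonl"
SEP = "<|SEP|>"  # separator joining the paper titles in the "prompt" field

with open(PATH, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)                  # one JSON object per line (JSONL)
        titles = record["prompt"].split(SEP)       # individual paper titles in the query
        query_echo, passage = record["output"][0]  # [echoed query, retrieved passage]
        print(record["model_name"], record["corpus"], len(titles))

The same file can also be loaded in bulk with the Datasets library's JSON loader, which handles JSONL transparently.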