Commit 955e6d3 (verified) · 1 parent: b3d89b2
Committed by Muennighoff

Scheduled Commit

data/retrieval_individual-1948e9c4-e613-4487-b12e-0e988013b994.jsonl CHANGED
@@ -325,3 +325,4 @@
  {"tstamp": 1722787284.9406, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722787282.6361, "finish": 1722787284.9406, "ip": "", "conv_id": "b117c1ea23f54fbe82f7601ad1a06cc3", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "What is the difference between RAM and SSD?", "output": [["What is the difference between RAM and SSD?", "Solid-state storage\n\nSolid-state storage (SSS) is non-volatile computer storage that has no moving parts; it uses only electronic circuits. This solid-state design dramatically differs from the commonly-used competing technology of electromechanical magnetic storage which uses moving media coated with magnetic material.\nGenerally, SSS is much faster but more expensive for the same amount of storage.\nSSS devices typically use flash memory, but some use battery-backed random-access memory (RAM). Devices come in various types, form factors, storage sizes, and interfacing options to satisfy application requirements for many computer systems and appliances.\nOverview\nHistorically, computer system secondary storage has been implemented to leverage magnetic properties of surface coatings applied to rotating platters (in hard disk drives and floppy disks) or linearly moving strips of plastic film (in tape drives). Pairing such magnetic media with read/write heads allows data to be written by separately magnetizing small sections of the ferromagnetic coating, and read later by detecting the transitions in magnetization. For the data to be read or written, exact sections of the magnetic media need to pass under the read/write heads that flow closely to the media surface; as a result, reading or writing data imposes delays required for the positioning of magnetic media and heads, with the delays differing depending on the actual technology."]], "corpus": "wikipedia"}
  {"tstamp": 1722787284.9406, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722787282.6361, "finish": 1722787284.9406, "ip": "", "conv_id": "ff1b619d1c87430493c1da7fc9a3bf83", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "What is the difference between RAM and SSD?", "output": [["What is the difference between RAM and SSD?", "RAM drive\n\nA RAM drive (also called a RAM disk) is a block of random-access memory (primary storage or volatile memory) that a computer's software is treating as if the memory were a disk drive (secondary storage). RAM drives provide high-performance temporary storage for demanding tasks and protect non-volatile storage devices from wearing down, since RAM is not prone to wear from writing, unlike non-volatile flash memory. They are in a sense the reverse of virtual memory: RAM drive uses a volatile fast memory as if it's a nonvolatile slow memory. Virtual memory is the opposite.\nIt is sometimes referred to as a virtual RAM drive or software RAM drive to distinguish it from a hardware RAM drive that uses separate hardware containing RAM, which is a type of battery-backed solid-state drive.\nHistorically primary storage based mass storage devices were conceived to bridge the performance gap between internal memory and secondary storage devices. In the advent of solid-state devices this advantage lost most of its appeal. However, solid-state devices do suffer from wear from frequent writing. Primary memory writes do not so or in far lesser effect. So RAM devices do offer an advantage to store frequently changing data, like temporary or cached information."]], "corpus": "wikipedia"}
  {"tstamp": 1722789234.3241, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722789232.1249, "finish": 1722789234.3241, "ip": "", "conv_id": "37d3e0c08a6c484eb5256df3bd9e64e6", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "output": [["I am searching for a good and large-scale benchmark for testing the performance of text embeddings.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "corpus": "arxiv"}
+ {"tstamp": 1722789296.1676, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722789293.8463, "finish": 1722789296.1676, "ip": "", "conv_id": "8b1ce90e9a164eb5b10d788f15c773f6", "model_name": "text-embedding-004", "prompt": "Please find me the paper on training code large language models to follow instructions via git commits.", "output": [["Please find me the paper on training code large language models to follow instructions via git commits.", "Title: Automated Commit Message Generation with Large Language Models: An Empirical Study and Beyond\n\nAbstract: Commit Message Generation (CMG) approaches aim to automatically generate commit messages based on given code diffs, which facilitate collaboration among developers and play a critical role in Open-Source Software (OSS). Very recently, Large Language Models (LLMs) have demonstrated extensive applicability in diverse code-related task. But few studies systematically explored their effectiveness using LLMs. This paper conducts the first comprehensive experiment to investigate how far we have been in applying LLM to generate high-quality commit messages. Motivated by a pilot analysis, we first clean the most widely-used CMG dataset following practitioners' criteria. Afterward, we re-evaluate diverse state-of-the-art CMG approaches and make comparisons with LLMs, demonstrating the superior performance of LLMs against state-of-the-art CMG approaches. Then, we further propose four manual metrics following the practice of OSS, including Accuracy, Integrity, Applicability, and Readability, and assess various LLMs accordingly. Results reveal that GPT-3.5 performs best overall, but different LLMs carry different advantages. To further boost LLMs' performance in the CMG task, we propose an Efficient Retrieval-based In-Context Learning (ICL) framework, namely ERICommiter, which leverages a two-step filtering to accelerate the retrieval efficiency and introduces semantic/lexical-based retrieval algorithm to construct the ICL examples. Extensive experiments demonstrate the substantial performance improvement of ERICommiter on various LLMs for code diffs of different programming languages. Meanwhile, ERICommiter also significantly reduces the retrieval time while keeping almost the same performance. Our research contributes to the understanding of LLMs' capabilities in the CMG field and provides valuable insights for practitioners seeking to leverage these tools in their workflows."]], "corpus": "arxiv"}