Muennighoff committed on
Scheduled Commit
data/retrieval_battle-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl
CHANGED
@@ -16,3 +16,8 @@
{"tstamp": 1722224501.6296, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "b0eecc5d9fd54f6594ba143e3606f4d2", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "Paper that introduced GPT-3", "0_output": [["Paper that introduced GPT-3", "Title: a survey on GPT-3\n\nAbstract: This paper provides an introductory survey to GPT-3. We cover some of the historical development behind this technology, some of the key features of GPT-3, and discuss the machine learning model and the datasets used. We survey both academic and commercial efforts applying GPT-3 in diverse domains such as developing conversational AI chatbots, software development, creative work, domain knowledge, and business productivity. We discuss some of the challenges that GPT-3 faces such as the problems of training complexity, bias, and hallucination/incorrect answers. We also discuss the future research opportunities in this area."]], "0_corpus": "arxiv", "1_conv_id": "a278bcf9cee34726ac3b8100de0e1d2a", "1_model_name": "embed-english-v3.0", "1_prompt": "Paper that introduced GPT-3", "1_output": [["Paper that introduced GPT-3", "Title: Improving Short Text Classification With Augmented Data Using GPT-3\n\nAbstract: GPT-3 is a large-scale natural language model developed by OpenAI that can perform many different tasks, including topic classification. Although researchers claim that it requires only a small number of in-context examples to learn a task, in practice GPT-3 requires these training examples to be either of exceptional quality or a higher quantity than easily created by hand. To address this issue, this study teaches GPT-3 to classify whether a question is related to data science by augmenting a small training set with additional examples generated by GPT-3 itself. This study compares two classifiers: the GPT-3 Classification Endpoint with augmented examples, and the GPT-3 Completion Endpoint with an optimal training set chosen using a genetic algorithm. We find that while the augmented Completion Endpoint achieves upwards of 80 percent validation accuracy, using the augmented Classification Endpoint yields more consistent accuracy on unseen examples. In this way, giving large-scale machine learning models like GPT-3 the ability to propose their own additional training examples can result in improved classification performance."]], "1_corpus": "arxiv"}
{"tstamp": 1722224531.6939, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "c7374c4ffde543f99eb8379b8225a12b", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "paper showing crosslingual generalization is possible", "0_output": [["paper showing crosslingual generalization is possible", "Title: Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization\n\nAbstract: Large language models (LLMs) have exhibited considerable cross-lingual generalization abilities, whereby they implicitly transfer knowledge across languages. However, the transfer is not equally successful for all languages, especially for low-resource ones, which poses an ongoing challenge. It is unclear whether we have reached the limits of implicit cross-lingual generalization and if explicit knowledge transfer is viable. In this paper, we investigate the potential for explicitly aligning conceptual correspondence between languages to enhance cross-lingual generalization. Using the syntactic aspect of language as a testbed, our analyses of 43 languages reveal a high degree of alignability among the spaces of structural concepts within each language for both encoder-only and decoder-only LLMs. We then propose a meta-learning-based method to learn to align conceptual spaces of different languages, which facilitates zero-shot and few-shot generalization in concept classification and also offers insights into the cross-lingual in-context learning phenomenon. Experiments on syntactic analysis tasks show that our approach achieves competitive results with state-of-the-art methods and narrows the performance gap between languages, particularly benefiting those with limited resources."]], "0_corpus": "arxiv", "1_conv_id": "3283162f3da548e08c51faf0101b6c31", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "paper showing crosslingual generalization is possible", "1_output": [["paper showing crosslingual generalization is possible", "Title: Asymptotic Exit Location Distributions in the Stochastic Exit Problem\n\nAbstract: Consider a two-dimensional continuous-time dynamical system, with an attracting fixed point $S$. If the deterministic dynamics are perturbed by white noise (random perturbations) of strength $\\epsilon$, the system state will eventually leave the domain of attraction $\\Omega$ of $S$. We analyse the case when, as $\\epsilon\\to0$, the exit location on the boundary $\\partial\\Omega$ is increasingly concentrated near a saddle point $H$ of the deterministic dynamics. We show that the asymptotic form of the exit location distribution on $\\partial\\Omega$ is generically non-Gaussian and asymmetric, and classify the possible limiting distributions. A key role is played by a parameter $\\mu$, equal to the ratio $|\\lambda_s(H)|/\\lambda_u(H)$ of the stable and unstable eigenvalues of the linearized deterministic flow at $H$. If $\\mu<1$ then the exit location distribution is generically asymptotic as $\\epsilon\\to0$ to a Weibull distribution with shape parameter $2/\\mu$, on the $O(\\epsilon^{\\mu/2})$ length scale near $H$. If $\\mu>1$ it is generically asymptotic to a distribution on the $O(\\epsilon^{1/2})$ length scale, whose moments we compute. 
The asymmetry of the asymptotic exit location distribution is attributable to the generic presence of a `classically forbidden' region: a wedge-shaped subset of $\\Omega$ with $H$ as vertex, which is reached from $S$, in the $\\epsilon\\to0$ limit, only via `bent' (non-smooth) fluctuational paths that first pass through the vicinity of $H$. We deduce from the presence of this forbidden region that the classical Eyring formula for the small-$\\epsilon$ exponential asymptotics of the mean first exit time is generically inapplicable."]], "1_corpus": "arxiv"}
{"tstamp": 1722224559.6652, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "4a5116c2fe724927a11e34daeb171b0b", "0_model_name": "GritLM/GritLM-7B", "0_prompt": "are all naruto shippuden episodes dubbed yet?", "0_output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "0_corpus": "stackexchange", "1_conv_id": "cb439cf0253b48a4bec18921c3e8c6db", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "are all naruto shippuden episodes dubbed yet?", "1_output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "1_corpus": "stackexchange"}
+{"tstamp": 1722224725.1029, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "16484c76198f456e85b2160419ef24e9", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "what is the difference between phd and pdf?", "0_output": [["what is the difference between phd and pdf?", "Stackoverflow Stackexchange\n\nQ: Importing Drupal 5 content into Drupal 6 I need to import several thousand nodes from a Drupal 5 sql dump into a Drupal 6 website. I tried using Node Import and Import/Export API but very little success... any ideas as to how this can be done?\nMany thanks!\nJane\n\nA: I used Views Bonus Pack on a Drupal 5 site to export the nodes into a csv file.\nThen I used the feeds module to import the csv into a Drupal 6 site.\n"]], "0_corpus": "stackexchange", "1_conv_id": "85af0818d827487d854641defc5a3e76", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "what is the difference between phd and pdf?", "1_output": [["what is the difference between phd and pdf?", "Math Stackexchange\n\nQ: Difference between a PDF and PMF What is the difference between a probability density function and a probability mass function?\n\nA: One is for discrete variables, the other for continuous.\nThey are also interpreted differently. The PDF is a probability density. If f(x) is a PDF, f(x) doesn't tell you the probability of getting x (In fact, the probability of getting precisely x is 0).\nThe way you use a PDF is through integration. So, the probability of getting a value that is in a dx area around x is approximately f(x)*dx.\n"]], "1_corpus": "stackexchange"}
+{"tstamp": 1722224747.2038, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "27b74b6e3c64477498043a197913dbe6", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "Unified-IO model", "0_output": [["Unified-IO model", "Title: Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action\n\nAbstract: We present Unified-IO 2, the first autoregressive multimodal model that is capable of understanding and generating image, text, audio, and action. To unify different modalities, we tokenize inputs and outputs -- images, text, audio, action, bounding boxes, etc., into a shared semantic space and then process them with a single encoder-decoder transformer model. Since training with such diverse modalities is challenging, we propose various architectural improvements to stabilize model training. We train our model from scratch on a large multimodal pre-training corpus from diverse sources with a multimodal mixture of denoisers objective. To learn an expansive set of skills, such as following multimodal instructions, we construct and finetune on an ensemble of 120 datasets with prompts and augmentations. With a single unified model, Unified-IO 2 achieves state-of-the-art performance on the GRIT benchmark and strong results in more than 35 benchmarks, including image generation and understanding, natural language understanding, video and audio understanding, and robotic manipulation. We release all our models to the research community."]], "0_corpus": "arxiv", "1_conv_id": "b4bc088789164404bebf053d0828474d", "1_model_name": "embed-english-v3.0", "1_prompt": "Unified-IO model", "1_output": [["Unified-IO model", "Title: A Unified Programming Model for Heterogeneous Computing with CPU and Accelerator Technologies\n\nAbstract: This paper consists of three parts. The first part provides a unified programming model for heterogeneous computing with CPU and accelerator (like GPU, FPGA, Google TPU, Atos QPU, and more) technologies. To some extent, this new programming model makes programming across CPUs and accelerators turn into usual programming tasks with common programming languages, and relieves complexity of programming across CPUs and accelerators. It can be achieved by extending file managements in common programming languages, such as C/C++, Fortran, Python, MPI, etc., to cover accelerators as I/O devices. In the second part, we show that all types of computer systems can be reduced to the simplest type of computer system, a single-core CPU computer system with I/O devices, by the unified programming model. Thereby, the unified programming model can truly build the programming of various computer systems on one API (i.e. file managements of common programming languages), and can make programming for various computer systems easier. In third part, we present a new approach to coupled applications computing (like multidisciplinary simulations) by the unified programming model. The unified programming model makes coupled applications computing more natural and easier since it only relies on its own power to couple multiple applications through MPI."]], "1_corpus": "arxiv"}
+{"tstamp": 1722224768.3473, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "72ed621be6a54d4a8eb632a62857fa63", "0_model_name": "embed-english-v3.0", "0_prompt": "Good benchmark for multitask performance of llms", "0_output": [["Good benchmark for multitask performance of llms", "Title: Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?\n\nAbstract: Large language models (LLMs) are typically prompted to follow a single instruction per inference call. In this work, we analyze whether LLMs also hold the capability to handle multiple instructions simultaneously, denoted as Multi-Task Inference. For this purpose, we introduce the MTI Bench(Multi-Task Inference Benchmark), a comprehensive evaluation benchmark encompassing 5,000 instances across 25 tasks. Each task in the MTI Bench involves 2 to 3 sub-tasks. As expected, we first demonstrate that Multi-Task Inference reduces the total inference time by 1.46 times in average since it does not require multiple inference calls. Interestingly, contrary to the expectation that LLMs would perform better when tasks are divided, we find that state-of-the-art LLMs, such as Llama-2-Chat-70B and GPT-4, show up to 7.3% and 12.4% improved performance with Multi-Task Inference compared to Single-Task Inference on the MTI Bench. We release the MTI Bench dataset and our code at this link https://github.com/guijinSON/MTI-Bench."]], "0_corpus": "arxiv", "1_conv_id": "492d5f636fc94d049f457c71188d11c0", "1_model_name": "sentence-transformers/all-MiniLM-L6-v2", "1_prompt": "Good benchmark for multitask performance of llms", "1_output": [["Good benchmark for multitask performance of llms", "Title: Limit theorems and ergodicity for general bootstrap random walks\n\nAbstract: Given the increments of a simple symmetric random walk $(X_n)_{n\\ge0}$, we characterize all possible ways of recycling these increments into a simple symmetric random walk $(Y_n)_{n\\ge0}$ adapted to the filtration of $(X_n)_{n\\ge0}$. We study the long term behavior of a suitably normalized two-dimensional process $((X_n,Y_n))_{n\\ge0}$. In particular, we provide necessary and sufficient conditions for the process to converge to a two-dimensional Brownian motion (possibly degenerate). We also discuss cases in which the limit is not Gaussian. Finally, we provide a simple necessary and sufficient condition for the ergodicity of the recycling transformation, thus generalizing results from Dubins and Smorodinsky (1992) and Fujita (2008), and solving the discrete version of the open problem of the ergodicity of the general L\\'evy transformation (see Mansuy and Yor, 2006)."]], "1_corpus": "arxiv"}
+{"tstamp": 1722224795.2981, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "00300e8adfc2436a82273eaac0616ce2", "0_model_name": "intfloat/e5-mistral-7b-instruct", "0_prompt": "A paper evaluating whether updating one fact in a language model affects related facts.", "0_output": [["A paper evaluating whether updating one fact in a language model affects related facts.", "Title: Evaluating the Ripple Effects of Knowledge Editing in Language Models\n\nAbstract: Modern language models capture a large body of factual knowledge. However, some facts can be incorrectly induced or become obsolete over time, resulting in factually incorrect generations. This has led to the development of various editing methods that allow updating facts encoded by the model. Evaluation of these methods has primarily focused on testing whether an individual fact has been successfully injected, and if similar predictions for other subjects have not changed. Here we argue that such evaluation is limited, since injecting one fact (e.g. ``Jack Depp is the son of Johnny Depp'') introduces a ``ripple effect'' in the form of additional facts that the model needs to update (e.g.``Jack Depp is the sibling of Lily-Rose Depp''). To address this issue, we propose a novel set of evaluation criteria that consider the implications of an edit on related facts. Using these criteria, we then construct RippleEdits, a diagnostic benchmark of 5K factual edits, capturing a variety of types of ripple effects. We evaluate prominent editing methods on RippleEdits, showing that current methods fail to introduce consistent changes in the model's knowledge. In addition, we find that a simple in-context editing baseline obtains the best scores on our benchmark, suggesting a promising research direction for model editing."]], "0_corpus": "arxiv", "1_conv_id": "44bd3abae6e44faba2d1cf54ce8bec32", "1_model_name": "text-embedding-004", "1_prompt": "A paper evaluating whether updating one fact in a language model affects related facts.", "1_output": [["A paper evaluating whether updating one fact in a language model affects related facts.", "Title: WikiFactDiff: A Large, Realistic, and Temporally Adaptable Dataset for Atomic Factual Knowledge Update in Causal Language Models\n\nAbstract: The factuality of large language model (LLMs) tends to decay over time since events posterior to their training are \"unknown\" to them. One way to keep models up-to-date could be factual update: the task of inserting, replacing, or removing certain simple (atomic) facts within the model. To study this task, we present WikiFactDiff, a dataset that describes the evolution of factual knowledge between two dates as a collection of simple facts divided into three categories: new, obsolete, and static. We describe several update scenarios arising from various combinations of these three types of basic update. The facts are represented by subject-relation-object triples; indeed, WikiFactDiff was constructed by comparing the state of the Wikidata knowledge base at 4 January 2021 and 27 February 2023. Those fact are accompanied by verbalization templates and cloze tests that enable running update algorithms and their evaluation metrics. Contrary to other datasets, such as zsRE and CounterFact, WikiFactDiff constitutes a realistic update setting that involves various update scenarios, including replacements, archival, and new entity insertions. We also present an evaluation of existing update algorithms on WikiFactDiff."]], "1_corpus": "arxiv"}
+{"tstamp": 1722224897.7564, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "66d97eb1cd7d4e31b48bcf9e823bba43", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "0_output": [["A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "Title: Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks\n\nAbstract: Large language models have demonstrated robust performance on various language tasks using zero-shot or few-shot learning paradigms. While being actively researched, multimodal models that can additionally handle images as input have yet to catch up in size and generality with language-only models. In this work, we ask whether language-only models can be utilised for tasks that require visual input -- but also, as we argue, often require a strong reasoning component. Similar to some recent related work, we make visual information accessible to the language model using separate verbalisation models. Specifically, we investigate the performance of open-source, open-access language models against GPT-3 on five vision-language tasks when given textually-encoded visual information. Our results suggest that language models are effective for solving vision-language tasks even with limited samples. This approach also enhances the interpretability of a model's output by providing a means of tracing the output back through the verbalised image content."]], "0_corpus": "arxiv", "1_conv_id": "e97d1a7a80bf4651a54001a5a1acb3b0", "1_model_name": "Salesforce/SFR-Embedding-2_R", "1_prompt": "A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "1_output": [["A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "Title: Visual Classification via Description from Large Language Models\n\nAbstract: Vision-language models (VLMs) such as CLIP have shown promising performance on a variety of recognition tasks using the standard zero-shot classification procedure -- computing similarity between the query image and the embedded words for each category. By only using the category name, they neglect to make use of the rich context of additional information that language affords. The procedure gives no intermediate understanding of why a category is chosen, and furthermore provides no mechanism for adjusting the criteria used towards this decision. We present an alternative framework for classification with VLMs, which we call classification by description. We ask VLMs to check for descriptive features rather than broad categories: to find a tiger, look for its stripes; its claws; and more. By basing decisions on these descriptors, we can provide additional cues that encourage using the features we want to be used. In the process, we can get a clear idea of what features the model uses to construct its decision; it gains some level of inherent explainability. We query large language models (e.g., GPT-3) for these descriptors to obtain them in a scalable way. Extensive experiments show our framework has numerous advantages past interpretability. We show improvements in accuracy on ImageNet across distribution shifts; demonstrate the ability to adapt VLMs to recognize concepts unseen during training; and illustrate how descriptors can be edited to effectively mitigate bias compared to the baseline."]], "1_corpus": "arxiv"}
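Each line in the battle log above is a self-contained JSON object recording one side-by-side comparison: a timestamp, the vote type ("leftvote", "rightvote", or "tievote"), and the two anonymized models' names, prompts, and retrieved passages under the "0_"/"1_" field prefixes. As a minimal sketch of how such records might be aggregated (an illustration, not tooling that ships with this dataset; the half-win credit for ties is an assumption, not part of the log format), wins per model can be tallied in Python:

import json
from collections import Counter

def tally_battles(path: str) -> Counter:
    """Count wins per model in a battle log with one JSON object per line."""
    wins = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            if rec["type"] == "leftvote":
                wins[rec["0_model_name"]] += 1.0
            elif rec["type"] == "rightvote":
                wins[rec["1_model_name"]] += 1.0
            elif rec["type"] == "tievote":
                # Assumption: a tie credits each side with half a win.
                wins[rec["0_model_name"]] += 0.5
                wins[rec["1_model_name"]] += 0.5
    return wins

print(tally_battles("data/retrieval_battle-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl"))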
data/retrieval_individual-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl
CHANGED
@@ -37,3 +37,15 @@
{"tstamp": 1722224521.4641, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722224519.3158, "finish": 1722224521.4641, "ip": "", "conv_id": "3283162f3da548e08c51faf0101b6c31", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "paper showing crosslingual generalization is possible", "output": [["paper showing crosslingual generalization is possible", "Title: Asymptotic Exit Location Distributions in the Stochastic Exit Problem\n\nAbstract: Consider a two-dimensional continuous-time dynamical system, with an attracting fixed point $S$. If the deterministic dynamics are perturbed by white noise (random perturbations) of strength $\\epsilon$, the system state will eventually leave the domain of attraction $\\Omega$ of $S$. We analyse the case when, as $\\epsilon\\to0$, the exit location on the boundary $\\partial\\Omega$ is increasingly concentrated near a saddle point $H$ of the deterministic dynamics. We show that the asymptotic form of the exit location distribution on $\\partial\\Omega$ is generically non-Gaussian and asymmetric, and classify the possible limiting distributions. A key role is played by a parameter $\\mu$, equal to the ratio $|\\lambda_s(H)|/\\lambda_u(H)$ of the stable and unstable eigenvalues of the linearized deterministic flow at $H$. If $\\mu<1$ then the exit location distribution is generically asymptotic as $\\epsilon\\to0$ to a Weibull distribution with shape parameter $2/\\mu$, on the $O(\\epsilon^{\\mu/2})$ length scale near $H$. If $\\mu>1$ it is generically asymptotic to a distribution on the $O(\\epsilon^{1/2})$ length scale, whose moments we compute. The asymmetry of the asymptotic exit location distribution is attributable to the generic presence of a `classically forbidden' region: a wedge-shaped subset of $\\Omega$ with $H$ as vertex, which is reached from $S$, in the $\\epsilon\\to0$ limit, only via `bent' (non-smooth) fluctuational paths that first pass through the vicinity of $H$. We deduce from the presence of this forbidden region that the classical Eyring formula for the small-$\\epsilon$ exponential asymptotics of the mean first exit time is generically inapplicable."]], "corpus": "arxiv"}
{"tstamp": 1722224554.1283, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722224545.2204, "finish": 1722224554.1283, "ip": "", "conv_id": "4a5116c2fe724927a11e34daeb171b0b", "model_name": "GritLM/GritLM-7B", "prompt": "are all naruto shippuden episodes dubbed yet?", "output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "corpus": "stackexchange"}
{"tstamp": 1722224554.1283, "task_type": "retrieval", "type": "chat", "model": "mixedbread-ai/mxbai-embed-large-v1", "gen_params": {}, "start": 1722224545.2204, "finish": 1722224554.1283, "ip": "", "conv_id": "cb439cf0253b48a4bec18921c3e8c6db", "model_name": "mixedbread-ai/mxbai-embed-large-v1", "prompt": "are all naruto shippuden episodes dubbed yet?", "output": [["are all naruto shippuden episodes dubbed yet?", "Scifi Stackexchange\n\nQ: What is the only Jutsu included in the manga from Filler Episodes of Naruto Shippuden Anime My friend challenged me with this question a while ago.\nWhich jutsu premiered in Naruto Shippuden, but was thereafter picked up by Masashi Kishimoto into his Manga? \nThis has apparently only happened once, all the other jutsu's happened the other way. He created them and put them into the Manga, then were disseminated into the Shippuden\n\nA: The only jutsu that matches somehow the requirements (at least to my knowledge and search) is the Water Release: Water Shuriken (Suiton: Mizu Shuriken):\n\nThis jutsu appeared first in Naruto: Shipp\u016bden #412 - Neji's Judgment which aired in Japan on May 14, 2015 and was later also used in the Naruto novel Gaara Hiden: A Sandstorm Mirage which was published in Japan on June 4, 2015.\nHowever this is a novel and not the original manga although is was illustrated by Masashi Kishimoto.\n"]], "corpus": "stackexchange"}
+{"tstamp": 1722224719.7859, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722224719.593, "finish": 1722224719.7859, "ip": "", "conv_id": "16484c76198f456e85b2160419ef24e9", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "what is the difference between phd and pdf?", "output": [["what is the difference between phd and pdf?", "Stackoverflow Stackexchange\n\nQ: Importing Drupal 5 content into Drupal 6 I need to import several thousand nodes from a Drupal 5 sql dump into a Drupal 6 website. I tried using Node Import and Import/Export API but very little success... any ideas as to how this can be done?\nMany thanks!\nJane\n\nA: I used Views Bonus Pack on a Drupal 5 site to export the nodes into a csv file.\nThen I used the feeds module to import the csv into a Drupal 6 site.\n"]], "corpus": "stackexchange"}
+{"tstamp": 1722224719.7859, "task_type": "retrieval", "type": "chat", "model": "GritLM/GritLM-7B", "gen_params": {}, "start": 1722224719.593, "finish": 1722224719.7859, "ip": "", "conv_id": "85af0818d827487d854641defc5a3e76", "model_name": "GritLM/GritLM-7B", "prompt": "what is the difference between phd and pdf?", "output": [["what is the difference between phd and pdf?", "Math Stackexchange\n\nQ: Difference between a PDF and PMF What is the difference between a probability density function and a probability mass function?\n\nA: One is for discrete variables, the other for continuous.\nThey are also interpreted differently. The PDF is a probability density. If f(x) is a PDF, f(x) doesn't tell you the probability of getting x (In fact, the probability of getting precisely x is 0).\nThe way you use a PDF is through integration. So, the probability of getting a value that is in a dx area around x is approximately f(x)*dx.\n"]], "corpus": "stackexchange"}
+{"tstamp": 1722224742.5587, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722224742.3357, "finish": 1722224742.5587, "ip": "", "conv_id": "27b74b6e3c64477498043a197913dbe6", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Unified-IO model", "output": [["Unified-IO model", "Title: Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action\n\nAbstract: We present Unified-IO 2, the first autoregressive multimodal model that is capable of understanding and generating image, text, audio, and action. To unify different modalities, we tokenize inputs and outputs -- images, text, audio, action, bounding boxes, etc., into a shared semantic space and then process them with a single encoder-decoder transformer model. Since training with such diverse modalities is challenging, we propose various architectural improvements to stabilize model training. We train our model from scratch on a large multimodal pre-training corpus from diverse sources with a multimodal mixture of denoisers objective. To learn an expansive set of skills, such as following multimodal instructions, we construct and finetune on an ensemble of 120 datasets with prompts and augmentations. With a single unified model, Unified-IO 2 achieves state-of-the-art performance on the GRIT benchmark and strong results in more than 35 benchmarks, including image generation and understanding, natural language understanding, video and audio understanding, and robotic manipulation. We release all our models to the research community."]], "corpus": "arxiv"}
+{"tstamp": 1722224742.5587, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722224742.3357, "finish": 1722224742.5587, "ip": "", "conv_id": "b4bc088789164404bebf053d0828474d", "model_name": "embed-english-v3.0", "prompt": "Unified-IO model", "output": [["Unified-IO model", "Title: A Unified Programming Model for Heterogeneous Computing with CPU and Accelerator Technologies\n\nAbstract: This paper consists of three parts. The first part provides a unified programming model for heterogeneous computing with CPU and accelerator (like GPU, FPGA, Google TPU, Atos QPU, and more) technologies. To some extent, this new programming model makes programming across CPUs and accelerators turn into usual programming tasks with common programming languages, and relieves complexity of programming across CPUs and accelerators. It can be achieved by extending file managements in common programming languages, such as C/C++, Fortran, Python, MPI, etc., to cover accelerators as I/O devices. In the second part, we show that all types of computer systems can be reduced to the simplest type of computer system, a single-core CPU computer system with I/O devices, by the unified programming model. Thereby, the unified programming model can truly build the programming of various computer systems on one API (i.e. file managements of common programming languages), and can make programming for various computer systems easier. In third part, we present a new approach to coupled applications computing (like multidisciplinary simulations) by the unified programming model. The unified programming model makes coupled applications computing more natural and easier since it only relies on its own power to couple multiple applications through MPI."]], "corpus": "arxiv"}
+{"tstamp": 1722224759.5578, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1722224759.4308, "finish": 1722224759.5578, "ip": "", "conv_id": "72ed621be6a54d4a8eb632a62857fa63", "model_name": "embed-english-v3.0", "prompt": "Good benchmark for multitask performance of llms", "output": [["Good benchmark for multitask performance of llms", "Title: Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?\n\nAbstract: Large language models (LLMs) are typically prompted to follow a single instruction per inference call. In this work, we analyze whether LLMs also hold the capability to handle multiple instructions simultaneously, denoted as Multi-Task Inference. For this purpose, we introduce the MTI Bench(Multi-Task Inference Benchmark), a comprehensive evaluation benchmark encompassing 5,000 instances across 25 tasks. Each task in the MTI Bench involves 2 to 3 sub-tasks. As expected, we first demonstrate that Multi-Task Inference reduces the total inference time by 1.46 times in average since it does not require multiple inference calls. Interestingly, contrary to the expectation that LLMs would perform better when tasks are divided, we find that state-of-the-art LLMs, such as Llama-2-Chat-70B and GPT-4, show up to 7.3% and 12.4% improved performance with Multi-Task Inference compared to Single-Task Inference on the MTI Bench. We release the MTI Bench dataset and our code at this link https://github.com/guijinSON/MTI-Bench."]], "corpus": "arxiv"}
+{"tstamp": 1722224759.5578, "task_type": "retrieval", "type": "chat", "model": "sentence-transformers/all-MiniLM-L6-v2", "gen_params": {}, "start": 1722224759.4308, "finish": 1722224759.5578, "ip": "", "conv_id": "492d5f636fc94d049f457c71188d11c0", "model_name": "sentence-transformers/all-MiniLM-L6-v2", "prompt": "Good benchmark for multitask performance of llms", "output": [["Good benchmark for multitask performance of llms", "Title: Limit theorems and ergodicity for general bootstrap random walks\n\nAbstract: Given the increments of a simple symmetric random walk $(X_n)_{n\\ge0}$, we characterize all possible ways of recycling these increments into a simple symmetric random walk $(Y_n)_{n\\ge0}$ adapted to the filtration of $(X_n)_{n\\ge0}$. We study the long term behavior of a suitably normalized two-dimensional process $((X_n,Y_n))_{n\\ge0}$. In particular, we provide necessary and sufficient conditions for the process to converge to a two-dimensional Brownian motion (possibly degenerate). We also discuss cases in which the limit is not Gaussian. Finally, we provide a simple necessary and sufficient condition for the ergodicity of the recycling transformation, thus generalizing results from Dubins and Smorodinsky (1992) and Fujita (2008), and solving the discrete version of the open problem of the ergodicity of the general L\\'evy transformation (see Mansuy and Yor, 2006)."]], "corpus": "arxiv"}
+{"tstamp": 1722224784.465, "task_type": "retrieval", "type": "chat", "model": "intfloat/e5-mistral-7b-instruct", "gen_params": {}, "start": 1722224781.7765, "finish": 1722224784.465, "ip": "", "conv_id": "00300e8adfc2436a82273eaac0616ce2", "model_name": "intfloat/e5-mistral-7b-instruct", "prompt": "A paper evaluating whether updating one fact in a language model affects related facts.", "output": [["A paper evaluating whether updating one fact in a language model affects related facts.", "Title: Evaluating the Ripple Effects of Knowledge Editing in Language Models\n\nAbstract: Modern language models capture a large body of factual knowledge. However, some facts can be incorrectly induced or become obsolete over time, resulting in factually incorrect generations. This has led to the development of various editing methods that allow updating facts encoded by the model. Evaluation of these methods has primarily focused on testing whether an individual fact has been successfully injected, and if similar predictions for other subjects have not changed. Here we argue that such evaluation is limited, since injecting one fact (e.g. ``Jack Depp is the son of Johnny Depp'') introduces a ``ripple effect'' in the form of additional facts that the model needs to update (e.g.``Jack Depp is the sibling of Lily-Rose Depp''). To address this issue, we propose a novel set of evaluation criteria that consider the implications of an edit on related facts. Using these criteria, we then construct RippleEdits, a diagnostic benchmark of 5K factual edits, capturing a variety of types of ripple effects. We evaluate prominent editing methods on RippleEdits, showing that current methods fail to introduce consistent changes in the model's knowledge. In addition, we find that a simple in-context editing baseline obtains the best scores on our benchmark, suggesting a promising research direction for model editing."]], "corpus": "arxiv"}
+{"tstamp": 1722224784.465, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1722224781.7765, "finish": 1722224784.465, "ip": "", "conv_id": "44bd3abae6e44faba2d1cf54ce8bec32", "model_name": "text-embedding-004", "prompt": "A paper evaluating whether updating one fact in a language model affects related facts.", "output": [["A paper evaluating whether updating one fact in a language model affects related facts.", "Title: WikiFactDiff: A Large, Realistic, and Temporally Adaptable Dataset for Atomic Factual Knowledge Update in Causal Language Models\n\nAbstract: The factuality of large language model (LLMs) tends to decay over time since events posterior to their training are \"unknown\" to them. One way to keep models up-to-date could be factual update: the task of inserting, replacing, or removing certain simple (atomic) facts within the model. To study this task, we present WikiFactDiff, a dataset that describes the evolution of factual knowledge between two dates as a collection of simple facts divided into three categories: new, obsolete, and static. We describe several update scenarios arising from various combinations of these three types of basic update. The facts are represented by subject-relation-object triples; indeed, WikiFactDiff was constructed by comparing the state of the Wikidata knowledge base at 4 January 2021 and 27 February 2023. Those fact are accompanied by verbalization templates and cloze tests that enable running update algorithms and their evaluation metrics. Contrary to other datasets, such as zsRE and CounterFact, WikiFactDiff constitutes a realistic update setting that involves various update scenarios, including replacements, archival, and new entity insertions. We also present an evaluation of existing update algorithms on WikiFactDiff."]], "corpus": "arxiv"}
+{"tstamp": 1722224802.5928, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722224802.3639, "finish": 1722224802.5928, "ip": "", "conv_id": "c442c289c7594bd7839f1f8bafc30f60", "model_name": "BM25", "prompt": "I'm looking for a QA model that retrieves both textual and visual information. ", "output": [["I'm looking for a QA model that retrieves both textual and visual information. ", "Title: Look Before you Speak: Visually Contextualized Utterances\n\nAbstract: While most conversational AI systems focus on textual dialogue only, conditioning utterances on visual context (when it's available) can lead to more realistic conversations. Unfortunately, a major challenge for incorporating visual context into conversational dialogue is the lack of large-scale labeled datasets. We provide a solution in the form of a new visually conditioned Future Utterance Prediction task. Our task involves predicting the next utterance in a video, using both visual frames and transcribed speech as context. By exploiting the large number of instructional videos online, we train a model to solve this task at scale, without the need for manual annotations. Leveraging recent advances in multimodal learning, our model consists of a novel co-attentional multimodal video transformer, and when trained on both textual and visual context, outperforms baselines that use textual inputs alone. Further, we demonstrate that our model trained for this task on unlabelled videos achieves state-of-the-art performance on a number of downstream VideoQA benchmarks such as MSRVTT-QA, MSVD-QA, ActivityNet-QA and How2QA."]], "corpus": "arxiv"}
+{"tstamp": 1722224802.5928, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722224802.3639, "finish": 1722224802.5928, "ip": "", "conv_id": "303ffd802de74cccb01c62b371bfd341", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "I'm looking for a QA model that retrieves both textual and visual information. ", "output": [["I'm looking for a QA model that retrieves both textual and visual information. ", "Title: Question Aware Vision Transformer for Multimodal Reasoning\n\nAbstract: Vision-Language (VL) models have gained significant research focus, enabling remarkable advances in multimodal reasoning. These architectures typically comprise a vision encoder, a Large Language Model (LLM), and a projection module that aligns visual features with the LLM's representation space. Despite their success, a critical limitation persists: the vision encoding process remains decoupled from user queries, often in the form of image-related questions. Consequently, the resulting visual features may not be optimally attuned to the query-specific elements of the image. To address this, we introduce QA-ViT, a Question Aware Vision Transformer approach for multimodal reasoning, which embeds question awareness directly within the vision encoder. This integration results in dynamic visual features focusing on relevant image aspects to the posed question. QA-ViT is model-agnostic and can be incorporated efficiently into any VL architecture. Extensive experiments demonstrate the effectiveness of applying our method to various multimodal architectures, leading to consistent improvement across diverse tasks and showcasing its potential for enhancing visual and scene-text understanding."]], "corpus": "arxiv"}
+{"tstamp": 1722224861.2135, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1722224831.7718, "finish": 1722224861.2135, "ip": "", "conv_id": "66d97eb1cd7d4e31b48bcf9e823bba43", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "output": [["A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "Title: Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks\n\nAbstract: Large language models have demonstrated robust performance on various language tasks using zero-shot or few-shot learning paradigms. While being actively researched, multimodal models that can additionally handle images as input have yet to catch up in size and generality with language-only models. In this work, we ask whether language-only models can be utilised for tasks that require visual input -- but also, as we argue, often require a strong reasoning component. Similar to some recent related work, we make visual information accessible to the language model using separate verbalisation models. Specifically, we investigate the performance of open-source, open-access language models against GPT-3 on five vision-language tasks when given textually-encoded visual information. Our results suggest that language models are effective for solving vision-language tasks even with limited samples. This approach also enhances the interpretability of a model's output by providing a means of tracing the output back through the verbalised image content."]], "corpus": "arxiv"}
+{"tstamp": 1722224861.2135, "task_type": "retrieval", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1722224831.7718, "finish": 1722224861.2135, "ip": "", "conv_id": "e97d1a7a80bf4651a54001a5a1acb3b0", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": "A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "output": [["A method using descriptors from large language models to improve accuracy and mitigate bias in visual classification tasks.", "Title: Visual Classification via Description from Large Language Models\n\nAbstract: Vision-language models (VLMs) such as CLIP have shown promising performance on a variety of recognition tasks using the standard zero-shot classification procedure -- computing similarity between the query image and the embedded words for each category. By only using the category name, they neglect to make use of the rich context of additional information that language affords. The procedure gives no intermediate understanding of why a category is chosen, and furthermore provides no mechanism for adjusting the criteria used towards this decision. We present an alternative framework for classification with VLMs, which we call classification by description. We ask VLMs to check for descriptive features rather than broad categories: to find a tiger, look for its stripes; its claws; and more. By basing decisions on these descriptors, we can provide additional cues that encourage using the features we want to be used. In the process, we can get a clear idea of what features the model uses to construct its decision; it gains some level of inherent explainability. We query large language models (e.g., GPT-3) for these descriptors to obtain them in a scalable way. Extensive experiments show our framework has numerous advantages past interpretability. We show improvements in accuracy on ImageNet across distribution shifts; demonstrate the ability to adapt VLMs to recognize concepts unseen during training; and illustrate how descriptors can be edited to effectively mitigate bias compared to the baseline."]], "corpus": "arxiv"}
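Each record in the individual log above carries "start" and "finish" timestamps for a single model call, so per-model retrieval latency can be derived directly. A minimal sketch under the same one-object-per-line assumption (note that the two sides of a battle share identical start/finish values here, so the figure reflects the batched side-by-side request rather than one model in isolation):

import json
from collections import defaultdict

def mean_latency(path: str) -> dict:
    """Average (finish - start) seconds per model in an individual retrieval log."""
    sums = defaultdict(float)   # model -> total seconds spent
    counts = defaultdict(int)   # model -> number of calls
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            sums[rec["model_name"]] += rec["finish"] - rec["start"]
            counts[rec["model_name"]] += 1
    return {model: sums[model] / counts[model] for model in sums}

for model, secs in sorted(mean_latency("data/retrieval_individual-f0f24c65-6f65-4435-a1e5-20bb12a2b295.jsonl").items()):
    print(f"{model}: {secs:.2f}s")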