Scheduled Commit
data/retrieval_battle-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl
CHANGED
@@ -114,3 +114,6 @@
|
|
114 |
{"tstamp": 1722602844.7342, "task_type": "retrieval", "type": "rightvote", "models": ["", ""], "ip": "", "0_conv_id": "7fbf42ba5e2f4020b502f644409cb70a", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "I need a paper that introduces multimodal word distributions outperforming word2vec and Gaussian embeddings on word similarity and entailment tasks.", "0_output": [["I need a paper that introduces multimodal word distributions outperforming word2vec and Gaussian embeddings on word similarity and entailment tasks.", "Title: Oscillation of a Linear Delay Impulsive Differential Equation\n\nAbstract: The main result of the paper is that the oscillation (non-oscillation) of the impulsive delay differential equation $\\dot {x}(t)+\\sum_{k=1}^m A_k(t)x[h_k(t)]=0,~~t\\geq 0$, $x(\\tau_j)=B_jx(\\tau_j-0), \\lim \\tau_j = \\infty$ is equivalent to the oscillation (non-oscillation) of the equation without impulses $\\dot {x}(t)=\\sum_{k=1}^m A_k(t) \\prod_{h_k(t)<\\tau_j\\leq t} B_j^{-1}x[h_k(t)]=0, t \\geq 0$. Explicit oscillation results are presented."]], "0_corpus": "arxiv", "1_conv_id": "efdec8f482134796bb29da414574e989", "1_model_name": "mixedbread-ai/mxbai-embed-large-v1", "1_prompt": "I need a paper that introduces multimodal word distributions outperforming word2vec and Gaussian embeddings on word similarity and entailment tasks.", "1_output": [["I need a paper that introduces multimodal word distributions outperforming word2vec and Gaussian embeddings on word similarity and entailment tasks.", "Title: Multimodal Word Distributions\n\nAbstract: Word embeddings provide point representations of words containing useful semantic information. We introduce multimodal word distributions formed from Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty information. To learn these distributions, we propose an energy-based max-margin objective. We show that the resulting approach captures uniquely expressive semantic information, and outperforms alternatives, such as word2vec skip-grams, and Gaussian embeddings, on benchmark datasets such as word similarity and entailment."]], "1_corpus": "arxiv"}
|
115 |
{"tstamp": 1722602868.0347, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "0de0f8e27e994aa68b09e107ceff0e94", "0_model_name": "nomic-ai/nomic-embed-text-v1.5", "0_prompt": "is buu older than beerus?", "0_output": [["is buu older than beerus?", "Scifi Stackexchange\n\nQ: If Beerus wiped out all of the dinosaurs, how are dinosaurs still present? In Dragon Ball Z : Battle of Gods, Beerus claims that he went to Earth once and dinosaurs took a rude attitude with him, so he wiped them out.\n\nBut then we see that in the present DBZ timeline, there are still roaming dragons around. \n\nAnd even in Battle of Gods:\n\n\nA: Akira Toriyama forgot about the dinosaurs. If you look at the manga, after the Saiyan Saga, no dinosaurs are drawn. Furthermore, not everything in the anime and movies is canon. There is a lot of filler, and some are later contradicted by Toriyama.\n"]], "0_corpus": "stackexchange", "1_conv_id": "1c5af601f32e4ed39fb6d5770f80544f", "1_model_name": "GritLM/GritLM-7B", "1_prompt": "is buu older than beerus?", "1_output": [["is buu older than beerus?", "Scifi Stackexchange\n\nQ: If Beerus wiped out all of the dinosaurs, how are dinosaurs still present? In Dragon Ball Z : Battle of Gods, Beerus claims that he went to Earth once and dinosaurs took a rude attitude with him, so he wiped them out.\n\nBut then we see that in the present DBZ timeline, there are still roaming dragons around. \n\nAnd even in Battle of Gods:\n\n\nA: Akira Toriyama forgot about the dinosaurs. If you look at the manga, after the Saiyan Saga, no dinosaurs are drawn. Furthermore, not everything in the anime and movies is canon. There is a lot of filler, and some are later contradicted by Toriyama.\n"]], "1_corpus": "stackexchange"}
|
116 |
{"tstamp": 1722602883.3759, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "accabe73868c4d5ca9c33c2b061706d9", "0_model_name": "BM25", "0_prompt": "A study that evaluates text embeddings across various tasks and datasets.", "0_output": [["A study that evaluates text embeddings across various tasks and datasets.", "Title: MTEB: Massive Text Embedding Benchmark\n\nAbstract: Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking. This makes progress in the field difficult to track, as various models are constantly being proposed without proper evaluation. To solve this problem, we introduce the Massive Text Embedding Benchmark (MTEB). MTEB spans 8 embedding tasks covering a total of 58 datasets and 112 languages. Through the benchmarking of 33 models on MTEB, we establish the most comprehensive benchmark of text embeddings to date. We find that no particular text embedding method dominates across all tasks. This suggests that the field has yet to converge on a universal text embedding method and scale it up sufficiently to provide state-of-the-art results on all embedding tasks. MTEB comes with open-source code and a public leaderboard at https://github.com/embeddings-benchmark/mteb."]], "0_corpus": "arxiv", "1_conv_id": "d23fe4150a2b4017a738c97da3f76639", "1_model_name": "BAAI/bge-large-en-v1.5", "1_prompt": "A study that evaluates text embeddings across various tasks and datasets.", "1_output": [["A study that evaluates text embeddings across various tasks and datasets.", "Title: Evaluating Word Embedding Models: Methods and Experimental Results\n\nAbstract: Extensive evaluation on a large number of word embedding models for language processing applications is conducted in this work. First, we introduce popular word embedding models and discuss desired properties of word models and evaluation methods (or evaluators). Then, we categorize evaluators into intrinsic and extrinsic two types. Intrinsic evaluators test the quality of a representation independent of specific natural language processing tasks while extrinsic evaluators use word embeddings as input features to a downstream task and measure changes in performance metrics specific to that task. We report experimental results of intrinsic and extrinsic evaluators on six word embedding models. It is shown that different evaluators focus on different aspects of word models, and some are more correlated with natural language processing tasks. Finally, we adopt correlation analysis to study performance consistency of extrinsic and intrinsic evalutors."]], "1_corpus": "arxiv"}
|
117 |
+
{"tstamp": 1722602898.0651, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "85d5a43ad1af4aaf889301aace640303", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "Incorporating WordNet glosses into BERT for superior WSD performance.", "0_output": [["Incorporating WordNet glosses into BERT for superior WSD performance.", "Title: GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge\n\nAbstract: Word Sense Disambiguation (WSD) aims to find the exact sense of an ambiguous word in a particular context. Traditional supervised methods rarely take into consideration the lexical resources like WordNet, which are widely utilized in knowledge-based methods. Recent studies have shown the effectiveness of incorporating gloss (sense definition) into neural networks for WSD. However, compared with traditional word expert supervised methods, they have not achieved much improvement. In this paper, we focus on how to better leverage gloss knowledge in a supervised neural WSD system. We construct context-gloss pairs and propose three BERT-based models for WSD. We fine-tune the pre-trained BERT model on SemCor3.0 training corpus and the experimental results on several English all-words WSD benchmark datasets show that our approach outperforms the state-of-the-art systems."]], "0_corpus": "arxiv", "1_conv_id": "dc4ce6d44afb43fa9912cd2d73b33150", "1_model_name": "jinaai/jina-embeddings-v2-base-en", "1_prompt": "Incorporating WordNet glosses into BERT for superior WSD performance.", "1_output": [["Incorporating WordNet glosses into BERT for superior WSD performance.", "Title: Order Preservation in Limit Algebras\n\nAbstract: The matrix units of a digraph algebra, A, induce a relation, known as the diagonal order, on the projections in a masa in the algebra. Normalizing partial isometries in A act on these projections by conjugation; they are said to be order preserving when they respect the diagonal order. Order preserving embeddings, in turn, are those embeddings which carry order preserving normalizers to order preserving normalizers. This paper studies operator algebras which are direct limits of finite dimensional algebras with order preserving embeddings. We give a complete classification of direct limits of full triangular matrix algebras with order preserving embeddings. We also investigate the problem of characterizing algebras with order preserving embeddings."]], "1_corpus": "arxiv"}
|
118 |
+
{"tstamp": 1722602908.0151, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "10cf7ab193db40edb8773c13f994b75b", "0_model_name": "BAAI/bge-large-en-v1.5", "0_prompt": "A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "0_output": [["A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "Title: VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks\n\nAbstract: Recently, fine-tuning language models pre-trained on large text corpora have provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks. However, fine-tuning the entire parameter set of pre-trained models becomes impractical since the model size is growing rapidly. Hence, in this paper, we introduce adapter-based parameter-efficient transfer learning techniques to V&L models such as VL-BART and VLT5. We evaluate our methods in a unified multi-task setup on both image-text and video-text benchmarks. For the image-text tasks, we use four diverse V&L datasets: VQAv2, GQA, NLVR2 , and MSCOCO image captioning. For video-text tasks, we use TVQA, How2QA, TVC, and YC2C. With careful training and thorough experiments, we benchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter) against the standard full fine-tuning and the recently proposed prompt-tuning approach. We also enhance the efficiency and performance of adapters by sharing their weights to attain knowledge across tasks. Our results demonstrate that training the adapter with the weight-sharing technique (4.18% of total parameters for image-text tasks and 3.39% for video-text tasks) can match the performance of fine-tuning the entire model. Lastly, we present a comprehensive analysis including the combination of adapter and task-specific prompts and the impact of V&L pre-training on adapters. Our code is available at: https://github.com/ylsung/VL_adapter."]], "0_corpus": "arxiv", "1_conv_id": "d69db2236f7048f1bdcb2f1fa82510f5", "1_model_name": "intfloat/multilingual-e5-large-instruct", "1_prompt": "A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "1_output": [["A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "Title: VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks\n\nAbstract: Recently, fine-tuning language models pre-trained on large text corpora have provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks. However, fine-tuning the entire parameter set of pre-trained models becomes impractical since the model size is growing rapidly. Hence, in this paper, we introduce adapter-based parameter-efficient transfer learning techniques to V&L models such as VL-BART and VLT5. We evaluate our methods in a unified multi-task setup on both image-text and video-text benchmarks. For the image-text tasks, we use four diverse V&L datasets: VQAv2, GQA, NLVR2 , and MSCOCO image captioning. For video-text tasks, we use TVQA, How2QA, TVC, and YC2C. With careful training and thorough experiments, we benchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter) against the standard full fine-tuning and the recently proposed prompt-tuning approach. 
We also enhance the efficiency and performance of adapters by sharing their weights to attain knowledge across tasks. Our results demonstrate that training the adapter with the weight-sharing technique (4.18% of total parameters for image-text tasks and 3.39% for video-text tasks) can match the performance of fine-tuning the entire model. Lastly, we present a comprehensive analysis including the combination of adapter and task-specific prompts and the impact of V&L pre-training on adapters. Our code is available at: https://github.com/ylsung/VL_adapter."]], "1_corpus": "arxiv"}
|
119 |
+
{"tstamp": 1722602919.4472, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "f3f9752c34f5402f8178b4cd5b8a31fa", "0_model_name": "BM25", "0_prompt": "Paper combining glyph and pinyin information for better Chinese language pretraining.", "0_output": [["Paper combining glyph and pinyin information for better Chinese language pretraining.", "Title: ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information\n\nAbstract: Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the {\\it glyph} and {\\it pinyin} information of Chinese characters into language model pretraining. The glyph embedding is obtained based on different fonts of a Chinese character, being able to capture character semantics from the visual features, and the pinyin embedding characterizes the pronunciation of Chinese characters, which handles the highly prevalent heteronym phenomenon in Chinese (the same character has different pronunciations with different meanings). Pretrained on large-scale unlabeled Chinese corpus, the proposed ChineseBERT model yields significant performance boost over baseline models with fewer training steps. The porpsoed model achieves new SOTA performances on a wide range of Chinese NLP tasks, including machine reading comprehension, natural language inference, text classification, sentence pair matching, and competitive performances in named entity recognition. Code and pretrained models are publicly available at https://github.com/ShannonAI/ChineseBert."]], "0_corpus": "arxiv", "1_conv_id": "87f840669a704ace9c11b13737d46ad0", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Paper combining glyph and pinyin information for better Chinese language pretraining.", "1_output": [["Paper combining glyph and pinyin information for better Chinese language pretraining.", "Title: ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information\n\nAbstract: Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the {\\it glyph} and {\\it pinyin} information of Chinese characters into language model pretraining. The glyph embedding is obtained based on different fonts of a Chinese character, being able to capture character semantics from the visual features, and the pinyin embedding characterizes the pronunciation of Chinese characters, which handles the highly prevalent heteronym phenomenon in Chinese (the same character has different pronunciations with different meanings). Pretrained on large-scale unlabeled Chinese corpus, the proposed ChineseBERT model yields significant performance boost over baseline models with fewer training steps. The porpsoed model achieves new SOTA performances on a wide range of Chinese NLP tasks, including machine reading comprehension, natural language inference, text classification, sentence pair matching, and competitive performances in named entity recognition. Code and pretrained models are publicly available at https://github.com/ShannonAI/ChineseBert."]], "1_corpus": "arxiv"}
|
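Note (illustrative only, not part of the committed data): the battle records appended above are one JSON object per line, with the vote in the "type" field and the two competing models in "0_model_name" and "1_model_name". A minimal sketch of how such a log could be read locally with Python's standard json module, using the file path shown in this diff:

```python
# Illustrative sketch: tally votes in the battle log appended by this commit.
# Assumes the dataset repo has been cloned so the JSONL path below exists.
import json
from collections import Counter

battle_path = "data/retrieval_battle-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl"

votes = Counter()
with open(battle_path, encoding="utf-8") as f:
    for line in f:
        if not line.strip():
            continue
        rec = json.loads(line)                     # one battle per line
        votes[rec["type"]] += 1                    # leftvote / rightvote / tievote
        print(rec["type"], rec["0_model_name"], "vs", rec["1_model_name"])

print(dict(votes))
```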
data/retrieval_individual-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl
CHANGED
@@ -397,3 +397,7 @@
|
|
397 |
{"tstamp": 1722602877.5874, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722602875.4501, "finish": 1722602877.5874, "ip": "", "conv_id": "d23fe4150a2b4017a738c97da3f76639", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "A study that evaluates text embeddings across various tasks and datasets.", "output": [["A study that evaluates text embeddings across various tasks and datasets.", "Title: Evaluating Word Embedding Models: Methods and Experimental Results\n\nAbstract: Extensive evaluation on a large number of word embedding models for language processing applications is conducted in this work. First, we introduce popular word embedding models and discuss desired properties of word models and evaluation methods (or evaluators). Then, we categorize evaluators into intrinsic and extrinsic two types. Intrinsic evaluators test the quality of a representation independent of specific natural language processing tasks while extrinsic evaluators use word embeddings as input features to a downstream task and measure changes in performance metrics specific to that task. We report experimental results of intrinsic and extrinsic evaluators on six word embedding models. It is shown that different evaluators focus on different aspects of word models, and some are more correlated with natural language processing tasks. Finally, we adopt correlation analysis to study performance consistency of extrinsic and intrinsic evalutors."]], "corpus": "arxiv"}
|
398 |
{"tstamp": 1722602892.0542, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722602891.9476, "finish": 1722602892.0542, "ip": "", "conv_id": "85d5a43ad1af4aaf889301aace640303", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "Incorporating WordNet glosses into BERT for superior WSD performance.", "output": [["Incorporating WordNet glosses into BERT for superior WSD performance.", "Title: GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge\n\nAbstract: Word Sense Disambiguation (WSD) aims to find the exact sense of an ambiguous word in a particular context. Traditional supervised methods rarely take into consideration the lexical resources like WordNet, which are widely utilized in knowledge-based methods. Recent studies have shown the effectiveness of incorporating gloss (sense definition) into neural networks for WSD. However, compared with traditional word expert supervised methods, they have not achieved much improvement. In this paper, we focus on how to better leverage gloss knowledge in a supervised neural WSD system. We construct context-gloss pairs and propose three BERT-based models for WSD. We fine-tune the pre-trained BERT model on SemCor3.0 training corpus and the experimental results on several English all-words WSD benchmark datasets show that our approach outperforms the state-of-the-art systems."]], "corpus": "arxiv"}
|
399 |
{"tstamp": 1722602892.0542, "task_type": "retrieval", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1722602891.9476, "finish": 1722602892.0542, "ip": "", "conv_id": "dc4ce6d44afb43fa9912cd2d73b33150", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": "Incorporating WordNet glosses into BERT for superior WSD performance.", "output": [["Incorporating WordNet glosses into BERT for superior WSD performance.", "Title: Order Preservation in Limit Algebras\n\nAbstract: The matrix units of a digraph algebra, A, induce a relation, known as the diagonal order, on the projections in a masa in the algebra. Normalizing partial isometries in A act on these projections by conjugation; they are said to be order preserving when they respect the diagonal order. Order preserving embeddings, in turn, are those embeddings which carry order preserving normalizers to order preserving normalizers. This paper studies operator algebras which are direct limits of finite dimensional algebras with order preserving embeddings. We give a complete classification of direct limits of full triangular matrix algebras with order preserving embeddings. We also investigate the problem of characterizing algebras with order preserving embeddings."]], "corpus": "arxiv"}
|
400 |
+
{"tstamp": 1722602904.5217, "task_type": "retrieval", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1722602904.3912, "finish": 1722602904.5217, "ip": "", "conv_id": "10cf7ab193db40edb8773c13f994b75b", "model_name": "BAAI/bge-large-en-v1.5", "prompt": "A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "output": [["A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "Title: VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks\n\nAbstract: Recently, fine-tuning language models pre-trained on large text corpora have provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks. However, fine-tuning the entire parameter set of pre-trained models becomes impractical since the model size is growing rapidly. Hence, in this paper, we introduce adapter-based parameter-efficient transfer learning techniques to V&L models such as VL-BART and VLT5. We evaluate our methods in a unified multi-task setup on both image-text and video-text benchmarks. For the image-text tasks, we use four diverse V&L datasets: VQAv2, GQA, NLVR2 , and MSCOCO image captioning. For video-text tasks, we use TVQA, How2QA, TVC, and YC2C. With careful training and thorough experiments, we benchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter) against the standard full fine-tuning and the recently proposed prompt-tuning approach. We also enhance the efficiency and performance of adapters by sharing their weights to attain knowledge across tasks. Our results demonstrate that training the adapter with the weight-sharing technique (4.18% of total parameters for image-text tasks and 3.39% for video-text tasks) can match the performance of fine-tuning the entire model. Lastly, we present a comprehensive analysis including the combination of adapter and task-specific prompts and the impact of V&L pre-training on adapters. Our code is available at: https://github.com/ylsung/VL_adapter."]], "corpus": "arxiv"}
|
401 |
+
{"tstamp": 1722602904.5217, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1722602904.3912, "finish": 1722602904.5217, "ip": "", "conv_id": "d69db2236f7048f1bdcb2f1fa82510f5", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "output": [["A study evaluating VL-BART and VLT5 models on both image-text and video-text benchmarks using adapter techniques", "Title: VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks\n\nAbstract: Recently, fine-tuning language models pre-trained on large text corpora have provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks. However, fine-tuning the entire parameter set of pre-trained models becomes impractical since the model size is growing rapidly. Hence, in this paper, we introduce adapter-based parameter-efficient transfer learning techniques to V&L models such as VL-BART and VLT5. We evaluate our methods in a unified multi-task setup on both image-text and video-text benchmarks. For the image-text tasks, we use four diverse V&L datasets: VQAv2, GQA, NLVR2 , and MSCOCO image captioning. For video-text tasks, we use TVQA, How2QA, TVC, and YC2C. With careful training and thorough experiments, we benchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter) against the standard full fine-tuning and the recently proposed prompt-tuning approach. We also enhance the efficiency and performance of adapters by sharing their weights to attain knowledge across tasks. Our results demonstrate that training the adapter with the weight-sharing technique (4.18% of total parameters for image-text tasks and 3.39% for video-text tasks) can match the performance of fine-tuning the entire model. Lastly, we present a comprehensive analysis including the combination of adapter and task-specific prompts and the impact of V&L pre-training on adapters. Our code is available at: https://github.com/ylsung/VL_adapter."]], "corpus": "arxiv"}
|
402 |
+
{"tstamp": 1722602916.6805, "task_type": "retrieval", "type": "chat", "model": "BM25", "gen_params": {}, "start": 1722602914.1978, "finish": 1722602916.6805, "ip": "", "conv_id": "f3f9752c34f5402f8178b4cd5b8a31fa", "model_name": "BM25", "prompt": "Paper combining glyph and pinyin information for better Chinese language pretraining.", "output": [["Paper combining glyph and pinyin information for better Chinese language pretraining.", "Title: ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information\n\nAbstract: Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the {\\it glyph} and {\\it pinyin} information of Chinese characters into language model pretraining. The glyph embedding is obtained based on different fonts of a Chinese character, being able to capture character semantics from the visual features, and the pinyin embedding characterizes the pronunciation of Chinese characters, which handles the highly prevalent heteronym phenomenon in Chinese (the same character has different pronunciations with different meanings). Pretrained on large-scale unlabeled Chinese corpus, the proposed ChineseBERT model yields significant performance boost over baseline models with fewer training steps. The porpsoed model achieves new SOTA performances on a wide range of Chinese NLP tasks, including machine reading comprehension, natural language inference, text classification, sentence pair matching, and competitive performances in named entity recognition. Code and pretrained models are publicly available at https://github.com/ShannonAI/ChineseBert."]], "corpus": "arxiv"}
|
403 |
+
{"tstamp": 1722602916.6805, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1722602914.1978, "finish": 1722602916.6805, "ip": "", "conv_id": "87f840669a704ace9c11b13737d46ad0", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Paper combining glyph and pinyin information for better Chinese language pretraining.", "output": [["Paper combining glyph and pinyin information for better Chinese language pretraining.", "Title: ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information\n\nAbstract: Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the {\\it glyph} and {\\it pinyin} information of Chinese characters into language model pretraining. The glyph embedding is obtained based on different fonts of a Chinese character, being able to capture character semantics from the visual features, and the pinyin embedding characterizes the pronunciation of Chinese characters, which handles the highly prevalent heteronym phenomenon in Chinese (the same character has different pronunciations with different meanings). Pretrained on large-scale unlabeled Chinese corpus, the proposed ChineseBERT model yields significant performance boost over baseline models with fewer training steps. The porpsoed model achieves new SOTA performances on a wide range of Chinese NLP tasks, including machine reading comprehension, natural language inference, text classification, sentence pair matching, and competitive performances in named entity recognition. Code and pretrained models are publicly available at https://github.com/ShannonAI/ChineseBert."]], "corpus": "arxiv"}
|
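Note (again an illustrative sketch, not part of this commit): each individual-retrieval record above carries a "conv_id" that matches the "0_conv_id" / "1_conv_id" fields of a battle record, which is how the two files in this commit relate. Field names are taken from the records shown; paths are the ones in this diff.

```python
# Illustrative sketch: index individual retrieval records by conv_id and
# look up the two sides of each battle recorded in this commit.
import json

def load_jsonl(path):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

individual = load_jsonl("data/retrieval_individual-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl")
battles = load_jsonl("data/retrieval_battle-928d45d9-19ab-4d05-9aae-4b6574886d0d.jsonl")

by_conv = {rec["conv_id"]: rec for rec in individual}

for b in battles:
    left = by_conv.get(b["0_conv_id"])    # per-model retrieval result, if logged
    right = by_conv.get(b["1_conv_id"])
    if left and right:
        print(b["type"], "|", left["model_name"], "vs", right["model_name"],
              "|", b["0_prompt"][:60])
```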