- Text Embeddings by Weakly-Supervised Contrastive Pre-training
  Paper • 2212.03533 • Published • 1
- Gecko: Versatile Text Embeddings Distilled from Large Language Models
  Paper • 2403.20327 • Published • 41
- Improving Text Embeddings with Large Language Models
  Paper • 2401.00368 • Published • 72
- Generative Representational Instruction Tuning
  Paper • 2402.09906 • Published • 50
Collections including paper arxiv:2402.09906
- LLM Augmented LLMs: Expanding Capabilities through Composition
  Paper • 2401.02412 • Published • 35
- Generative Representational Instruction Tuning
  Paper • 2402.09906 • Published • 50
- Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
  Paper • 2305.02301 • Published • 1
- Evolutionary Optimization of Model Merging Recipes
  Paper • 2403.13187 • Published • 44
- A Survey on Data Selection for LLM Instruction Tuning
  Paper • 2402.05123 • Published • 3
- WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation
  Paper • 2312.14187 • Published • 49
- Generative Representational Instruction Tuning
  Paper • 2402.09906 • Published • 50
- Instruction-tuned Language Models are Better Knowledge Learners
  Paper • 2402.12847 • Published • 25
- Speculative Streaming: Fast LLM Inference without Auxiliary Models
  Paper • 2402.11131 • Published • 41
- Generative Representational Instruction Tuning
  Paper • 2402.09906 • Published • 50
- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 90
- BitDelta: Your Fine-Tune May Only Be Worth One Bit
  Paper • 2402.10193 • Published • 17
- OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models
  Paper • 2402.01739 • Published • 26
- Rethinking Interpretability in the Era of Large Language Models
  Paper • 2402.01761 • Published • 18
- Self-Discover: Large Language Models Self-Compose Reasoning Structures
  Paper • 2402.03620 • Published • 102
- Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model
  Paper • 2402.07827 • Published • 43
- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 90
- How to Train Data-Efficient LLMs
  Paper • 2402.09668 • Published • 33
- BitDelta: Your Fine-Tune May Only Be Worth One Bit
  Paper • 2402.10193 • Published • 17
- A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts
  Paper • 2402.09727 • Published • 35