Collections
Collections including paper arxiv:2310.16944
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity (Paper • 2401.01967 • Published)
- Secrets of RLHF in Large Language Models Part I: PPO (Paper • 2307.04964 • Published • 26)
- Zephyr: Direct Distillation of LM Alignment (Paper • 2310.16944 • Published • 116)
- LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders (Paper • 2404.05961 • Published • 62)
- PERL: Parameter Efficient Reinforcement Learning from Human Feedback (Paper • 2403.10704 • Published • 55)
- ReFT: Representation Finetuning for Language Models (Paper • 2404.03592 • Published • 74)
- Ferret-v2: An Improved Baseline for Referring and Grounding with Large Language Models (Paper • 2404.07973 • Published • 28)
- Zephyr: Direct Distillation of LM Alignment (Paper • 2310.16944 • Published • 116)
- FinTral: A Family of GPT-4 Level Multimodal Financial Large Language Models (Paper • 2402.10986 • Published • 73)
- bigcode/starcoder2-15b (Text Generation • Updated • 19.4k • 509)
- Zephyr: Direct Distillation of LM Alignment (Paper • 2310.16944 • Published • 116)
- mixedbread-ai/mxbai-rerank-large-v1 (Text Classification • Updated • 30.8k • 70)
- Metadata Might Make Language Models Better (Paper • 2211.10086 • Published • 3)
- Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs (Paper • 2304.14999 • Published • 2)
- PEFT for Speech: Unveiling Optimal Placement, Merging Strategies, and Ensemble Techniques (Paper • 2401.02122 • Published • 2)
- Zephyr: Direct Distillation of LM Alignment (Paper • 2310.16944 • Published • 116)
- LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models (Paper • 2309.12307 • Published • 82)
- NEFTune: Noisy Embeddings Improve Instruction Finetuning (Paper • 2310.05914 • Published • 13)
- SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling (Paper • 2312.15166 • Published • 55)
- Soaring from 4K to 400K: Extending LLM's Context with Activation Beacon (Paper • 2401.03462 • Published • 25)
- Attention Is All You Need (Paper • 1706.03762 • Published • 36)
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Paper • 1810.04805 • Published • 11)
- RoBERTa: A Robustly Optimized BERT Pretraining Approach (Paper • 1907.11692 • Published • 7)
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter (Paper • 1910.01108 • Published • 11)
- A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions (Paper • 2312.08578 • Published • 15)
- ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks (Paper • 2312.08583 • Published • 9)
- Vision-Language Models as a Source of Rewards (Paper • 2312.09187 • Published • 10)
- StemGen: A music generation model that listens (Paper • 2312.08723 • Published • 45)
- Zephyr: Direct Distillation of LM Alignment (Paper • 2310.16944 • Published • 116)
- Exponentially Faster Language Modelling (Paper • 2311.10770 • Published • 117)
- System 2 Attention (is something you might need too) (Paper • 2311.11829 • Published • 38)
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Paper • 2305.18290 • Published • 37)