Collections
Discover the best community collections!
Collections including paper arxiv:2307.09288
---
- Large Language Model Alignment: A Survey
  Paper • 2309.15025 • Published • 2
- Aligning Large Language Models with Human: A Survey
  Paper • 2307.12966 • Published • 1
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 37
- SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF
  Paper • 2310.05344 • Published • 1
---
- mistralai/Mixtral-8x7B-Instruct-v0.1
  Text Generation • Updated • 455k • 3.86k
- HuggingFaceM4/WebSight
  Viewer • Updated • 252 • 287
- LLM in a flash: Efficient Large Language Model Inference with Limited Memory
  Paper • 2312.11514 • Published • 253
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 235
---
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 235
- GAIA: a benchmark for General AI Assistants
  Paper • 2311.12983 • Published • 172
- DocLLM: A layout-aware generative language model for multimodal document understanding
  Paper • 2401.00908 • Published • 174
- LLM in a flash: Efficient Large Language Model Inference with Limited Memory
  Paper • 2312.11514 • Published • 253
---
- TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ
  Text Generation • Updated • 26.1k • 302
- Isonium/WhiteRabbitNeo-33B-v1-GGUF
  Updated • 449 • 6
- Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ
  Text Generation • Updated • 1 • 2
- GAIA: a benchmark for General AI Assistants
  Paper • 2311.12983 • Published • 172
---
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 24
- Attention Is All You Need
  Paper • 1706.03762 • Published • 36
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 37
- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 32
---
- Mistral 7B
  Paper • 2310.06825 • Published • 43
- BloombergGPT: A Large Language Model for Finance
  Paper • 2303.17564 • Published • 16
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 11
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 11
---
- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 32
- Efficient Estimation of Word Representations in Vector Space
  Paper • 1301.3781 • Published • 6
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 11
- Attention Is All You Need
  Paper • 1706.03762 • Published • 36