- BitNet: Scaling 1-bit Transformers for Large Language Models
  Paper • 2310.11453 • Published • 94
- Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
  Paper • 2310.11511 • Published • 63
- In-Context Learning Creates Task Vectors
  Paper • 2310.15916 • Published • 39
- Matryoshka Diffusion Models
  Paper • 2310.15111 • Published • 39
Collections
Collections including paper arxiv:2306.11644
- A Survey on Language Models for Code
  Paper • 2311.07989 • Published • 21
- CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
  Paper • 2102.04664 • Published • 2
- Evaluating Large Language Models Trained on Code
  Paper • 2107.03374 • Published • 6
- Out of the BLEU: how should we assess quality of the Code Generation models?
  Paper • 2208.03133 • Published • 2
- AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn
  Paper • 2306.08640 • Published • 26
- Demystifying GPT Self-Repair for Code Generation
  Paper • 2306.09896 • Published • 18
- Textbooks Are All You Need
  Paper • 2306.11644 • Published • 139
- nampdn-ai/tiny-codes
  Viewer • Updated • 604 • 196
- Attention Is All You Need
  Paper • 1706.03762 • Published • 35
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
  Paper • 2005.11401 • Published • 11
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 24
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 8
- Contrastive Decoding Improves Reasoning in Large Language Models
  Paper • 2309.09117 • Published • 37
- LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
  Paper • 2309.12307 • Published • 82
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 235
- Efficient Memory Management for Large Language Model Serving with PagedAttention
  Paper • 2309.06180 • Published • 25