- Article: Binary and Scalar Embedding Quantization for Significantly Faster & Cheaper Retrieval (published Mar 22)
- Paper: The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (arXiv:2402.17764, published Feb 27)
- Paper: RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture (arXiv:2401.08406, published Jan 16)