Frame Representation Hypothesis: Multi-Token LLM Interpretability and Concept-Guided Text Generation Paper • 2412.07334 • Published Dec 10, 2024 • 16
What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective Paper • 2410.23743 • Published Oct 31, 2024 • 59
Beyond Fine-tuning: Unleashing the Potential of Continuous Pretraining for Clinical LLMs Paper • 2409.14988 • Published Sep 23, 2024 • 22
MEDIC: Towards a Comprehensive Framework for Evaluating LLMs in Clinical Applications Paper • 2409.07314 • Published Sep 11, 2024 • 51
ReFT: Representation Finetuning for Language Models Paper • 2404.03592 • Published Apr 4, 2024 • 91
From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data Paper • 2406.19292 • Published Jun 27, 2024 • 1