dhuynh95 
posted an update Feb 9
✨ In-context learning is all you need!

This super interesting paper shows that fine-tuning with #SFT or #RLHF mainly shapes the style and format of responses, without adding knowledge or reasoning abilities, and in some cases it actually decreases performance!

They tested this with Mistral base vs. fine-tuned Mistral, as well as Llama 2 70b base vs. fine-tuned, and the results are consistent.

Providing the right prompt to the base model actually makes it perform better, at zero training cost!
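To make the idea concrete, here is a minimal sketch of the in-context alignment trick: prepend a handful of stylized instruction/answer pairs to the user query so a base model completes it in an assistant-like style. The example pairs, section tags, and function name below are illustrative placeholders, not the exact prompt from the paper.

```python
# Sketch of in-context alignment for a base LLM: a few stylized
# instruction/answer pairs are prepended to the user's query, and the
# base model is asked to continue the pattern. No fine-tuning involved.

# Illustrative few-shot pairs (placeholders, not from the paper).
FEW_SHOT_EXAMPLES = [
    ("What is the boiling point of water at sea level?",
     "Water boils at 100 °C (212 °F) at standard sea-level pressure."),
    ("Give one tip for writing readable code.",
     "Prefer descriptive names over comments that restate the code."),
]

def build_icl_prompt(query: str) -> str:
    """Assemble a few-shot prompt that a base LLM can simply complete."""
    parts = ["Below are instructions paired with helpful, honest answers.\n"]
    for instruction, answer in FEW_SHOT_EXAMPLES:
        parts.append(f"# Instruction:\n{instruction}\n# Answer:\n{answer}\n")
    # End with an open "# Answer:" slot for the model to fill in.
    parts.append(f"# Instruction:\n{query}\n# Answer:\n")
    return "\n".join(parts)

prompt = build_icl_prompt("Explain in-context learning in one sentence.")
print(prompt)
```

The resulting string can be fed directly to any base model's completion endpoint; the model imitates the demonstrated answer style with zero gradient updates.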

Paper: https://arxiv.org/abs/2312.01552