Post 3891

I was initially pretty sceptical about Meta's Coconut paper [1] because the largest performance gains were reported on toy linguistic problems. However, these results on machine translation are pretty impressive!

https://x.com/casper_hansen_/status/1875872309996855343

Together with the recent PRIME method [2] for scaling RL, reasoning for open models is looking pretty exciting for 2025! (A rough sketch of the Coconut idea follows the references below.)

[1] Training Large Language Models to Reason in a Continuous Latent Space (arXiv:2412.06769)
[2] PRIME: https://huggingface.co/blog/ganqu/prime
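For anyone wondering what "reasoning in a continuous latent space" means in practice, here is a minimal sketch of the core Coconut loop: instead of decoding a token at each step, the model's last hidden state is fed straight back in as the next input embedding for a few latent "thought" steps, then ordinary token decoding resumes. This is an illustration under my own assumptions (a GPT-2 backbone, arbitrary step counts, a toy prompt), not the paper's implementation; in particular the paper's special begin/end-of-thought markers and its training curriculum are omitted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed backbone for illustration; Coconut itself is trained, not zero-shot.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Q: If I have 3 apples and buy 2 more, how many apples do I have? A:"
ids = tok(prompt, return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(ids)  # (1, seq_len, hidden_dim)

with torch.no_grad():
    # Latent reasoning: for a few steps, append the last hidden state
    # as the next input embedding instead of decoding a token.
    # (Works dimensionally because GPT-2's hidden size == embedding size.)
    for _ in range(4):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        thought = out.hidden_states[-1][:, -1:, :]  # last layer, last position
        embeds = torch.cat([embeds, thought], dim=1)

    # Switch back to ordinary greedy token decoding for the answer.
    for _ in range(10):
        out = model(inputs_embeds=embeds)
        next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        next_embed = model.get_input_embeddings()(next_id)
        embeds = torch.cat([embeds, next_embed], dim=1)
        print(tok.decode(next_id[0]), end="")
    print()
```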