zhibin-msft committed
Commit f0a86ca • 1 Parent(s): 9a1cf58

Update README.md

Files changed (1)
  1. README.md +1 -4
README.md CHANGED
@@ -18,9 +18,7 @@ Rho-1: Not All Tokens Are What You Need
  <a href="https://arxiv.org/abs/2404.07965"><b>[📜 Arxiv]</b></a> •
  <a href="https://huggingface.co/papers/2404.07965"><b>[💬 HF Paper]</b></a> •
  <a href="https://huggingface.co/microsoft/rho-math-1b-v0.1"><b>[🤗 Models]</b></a> •
- <a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a> •
- <a href="https://twitter.com/zebgou/status/1778676535404396697"><b>[🐦 Twitter]</b></a> •
- <a href="https://huggingface.co/spaces/zubingou/rho-1"><b>[🤖 Gradio Demo]</b></a>
+ <a href="https://github.com/microsoft/rho"><b>[🐱 GitHub]</b></a>
  </p>

  <p align="center">
@@ -32,7 +30,6 @@ Rho-1: Not All Tokens Are What You Need

  ## 🔥 News

- - [2024/04/14] 🚀🚀🚀 We release [Gradio demo of Rho-1 Code Interpreter](https://huggingface.co/spaces/zubingou/rho-1), try it out!
  - [2024/04/12] 🔥🔥🔥 Rho-Math-v0.1 models released at 🤗 HuggingFace!
    - [Rho-Math-1B](https://huggingface.co/microsoft/rho-math-1b-v0.1) and [Rho-Math-7B](https://huggingface.co/microsoft/rho-math-7b-v0.1) achieve 15.6% and 31.0% few-shot accuracy on MATH dataset, respectively — matching DeepSeekMath with only 3\% of the pretraining tokens.
    - [Rho-Math-1B-Interpreter](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) is the first 1B LLM that achieves over 40% accuracy on MATH.