taesunwhang committed
Commit
8be0038
1 Parent(s): bdee94c

Update README.md

Files changed (1)
  1. README.md +28 -0
README.md CHANGED
@@ -3,3 +3,31 @@ license: llama2
  ---
 
  Model merge based on [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) and [meta-math/MetaMath-Llemma-7B](https://huggingface.co/meta-math/MetaMath-Llemma-7B)
+
+ - Forked from [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)
+ - Forked from [meta-math/MetaMath-Llemma-7B](https://huggingface.co/meta-math/MetaMath-Llemma-7B)
+
+ ## 1. Vicuna
+
+ ### Model Details
+
+ Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.
+
+ - **Developed by:** [LMSYS](https://lmsys.org/)
+ - **Model type:** An auto-regressive language model based on the transformer architecture
+ - **License:** Llama 2 Community License Agreement
+ - **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)
+
+ ### Model Sources
+
+ - **Repository:** https://github.com/lm-sys/FastChat
+ - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
+ - **Paper:** https://arxiv.org/abs/2306.05685
+ - **Demo:** https://chat.lmsys.org/
+
+ ## 2. MetaMath Llemma
+
+ ### Model Details
+
+ MetaMath-Llemma-7B is fully fine-tuned on the MetaMathQA datasets, using the powerful Llemma-7B as its base model. Training on MetaMathQA while switching the base model from Llama-2-7B to Llemma-7B boosts MATH performance from 19.8 to **30.0**.
+
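+ The exact merge procedure is not documented in this card. As an illustration only, the snippet below is a minimal sketch assuming a simple linear (weight-averaging) merge of the two source checkpoints; the mixing weight `ALPHA` and the output directory name are hypothetical, not taken from the actual build.
+
+ ```python
+ # Minimal sketch of a linear weight-averaging merge (illustration only;
+ # the actual merge method used for this model is not specified).
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ VICUNA_ID = "lmsys/vicuna-7b-v1.5"
+ METAMATH_ID = "meta-math/MetaMath-Llemma-7B"
+ ALPHA = 0.5  # assumed mixing weight, not documented
+
+ vicuna = AutoModelForCausalLM.from_pretrained(VICUNA_ID, torch_dtype=torch.float16)
+ metamath = AutoModelForCausalLM.from_pretrained(METAMATH_ID, torch_dtype=torch.float16)
+
+ metamath_state = metamath.state_dict()
+ merged_state = {}
+ for name, tensor in vicuna.state_dict().items():
+     other = metamath_state.get(name)
+     # Interpolate parameters present in both models with matching shapes;
+     # otherwise keep Vicuna's weights (e.g. if vocabulary sizes differ).
+     if other is not None and other.shape == tensor.shape:
+         merged_state[name] = ALPHA * tensor + (1.0 - ALPHA) * other
+     else:
+         merged_state[name] = tensor
+
+ vicuna.load_state_dict(merged_state)
+ vicuna.save_pretrained("vicuna-metamath-merge")  # hypothetical output directory
+ AutoTokenizer.from_pretrained(VICUNA_ID).save_pretrained("vicuna-metamath-merge")
+ ```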