---
license: llama2
library_name: transformers
tags:
- mergekit
- merge
base_model:
- lmsys/vicuna-7b-v1.5
- meta-math/MetaMath-Llemma-7B
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
It is a SLERP (spherical linear interpolation) merge of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) and [meta-math/MetaMath-Llemma-7B](https://huggingface.co/meta-math/MetaMath-Llemma-7B).
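
The exact merge configuration is not included in this card. As a rough illustration only, a mergekit SLERP config for these two models might look like the sketch below; the layer ranges and the interpolation weight `t` are assumptions for illustration, not the values used for this merge.

```yaml
slices:
  - sources:
      - model: lmsys/vicuna-7b-v1.5
        layer_range: [0, 32]   # illustrative: Llama-2-7B-class models have 32 decoder layers
      - model: meta-math/MetaMath-Llemma-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: lmsys/vicuna-7b-v1.5
parameters:
  t: 0.5   # assumed interpolation weight: 0 keeps the base model, 1 keeps the other model
dtype: float16
```

A config like this is typically run with mergekit's `mergekit-yaml` command, pointing it at the config file and an output directory.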

1. Vicuna

    ## Model Details
    
    Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.
    
    - **Developed by:** [LMSYS](https://lmsys.org/)
    - **Model type:** An auto-regressive language model based on the transformer architecture
    - **License:** Llama 2 Community License Agreement	
    - **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288)

    ### Model Sources
    
    - **Repository:** https://github.com/lm-sys/FastChat
    - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
    - **Paper:** https://arxiv.org/abs/2306.05685
    - **Demo:** https://chat.lmsys.org/

2. MetaMath Llemma

    ## Model Details
    
    MetaMath-Llemma-7B is fully fine-tuned on the MetaMathQA datasets and is based on the powerful Llemma-7B model. Notably, using the MetaMathQA datasets while switching the base model from Llama-2-7B to Llemma-7B boosts MATH performance from 19.8 to **30.0**.