MingLiiii committed 50e3436 (1 parent: 46db9f7)

Create README.md

Files changed (1): README.md (+76, -0)

---
license: llama2
language:
- en
---
# Model Card for umd-zhou-lab/recycled-wizardlm-7b-v2.0

<!-- Provide a quick summary of what the model is/does. -->

This model was trained by fine-tuning Llama-2-7b on the V2 recycled WizardLM (70k) data.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** UMD Tianyi Zhou Lab
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from model:** [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b)

### Model Sources

<!-- Provide the basic links for the model. -->

- **GitHub:** [Reflection-Tuning](https://github.com/tianyi-lab/Reflection_Tuning)
- **Paper:** [Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning](https://arxiv.org/abs/2310.11716)
- **Data:** Coming soon

## Uses

The primary use of this model is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## Training

We use the prompt template from [FastChat](https://github.com/lm-sys/FastChat):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am ...</s>......
```
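
Below is a minimal inference sketch showing how this checkpoint can be loaded with Hugging Face Transformers and queried with the prompt template above. It is illustrative only: the example instruction and the generation settings are assumptions, not the configuration used in the paper.

```python
# Minimal inference sketch (not an official script from this repo).
# Loads the checkpoint and applies the FastChat/Vicuna-style prompt shown above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "umd-zhou-lab/recycled-wizardlm-7b-v2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
instruction = "Give three tips for staying healthy."  # example instruction, not from the paper
prompt = f"{system} USER: {instruction} ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```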

| Model | Global Batch Size | Learning Rate | Epochs | Max Length | Weight Decay | Warmup Ratio |
| --- | ---: | ---: | ---: | ---: | ---: | ---: |
| Recycled Models (7B) | 128 | 2e-5 | 3 | 2048 | 0 | 0.03 |

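For reference, here is a rough sketch of how the hyperparameters above map onto Hugging Face `TrainingArguments`. The per-device batch size and GPU count are assumptions (this card does not state the hardware or the training script); only the values in the table are taken from it.

```python
# Sketch only: maps the hyperparameter table onto Hugging Face TrainingArguments.
# Assumption: global batch size 128 = 8 GPUs x per-device batch 16 x grad-accum 1.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./recycled-wizardlm-7b-v2.0",  # hypothetical output path
    per_device_train_batch_size=16,            # 16 x 8 GPUs = 128 global batch size
    gradient_accumulation_steps=1,
    learning_rate=2e-5,
    num_train_epochs=3,
    weight_decay=0.0,
    warmup_ratio=0.03,
)

# The max length of 2048 applies to tokenization (e.g. tokenizer.model_max_length = 2048)
# rather than to TrainingArguments itself.
```
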
## Performance

The following table compares our recycled models (V2) with the baseline models on the AlpacaEval Leaderboard and the Hugging Face Open LLM Leaderboard (ARC, HellaSwag, MMLU, TruthfulQA, and their average).

The V2 recycled Alpaca and WizardLM data, along with the corresponding paper, will be released soon.

| Model | AlpacaEval | Avg | ARC | HellaSwag | MMLU | TruthfulQA |
|---|---:|---:|---:|---:|---:|---:|
| **Alpaca 7B** | 26.46 | 50.21 | 42.65 | 76.91 | 41.73 | 39.55 |
| **Recycled Alpaca 7B V2.0** | 79.58 | 56.05 | 54.01 | 78.07 | 46.69 | 45.41 |
| **WizardLM 7B** | 67.64 | 54.18 | 51.60 | 77.70 | 42.70 | 44.70 |
| **Recycled WizardLM 7B V2.0** | 83.48 | 56.79 | 54.78 | 77.86 | 45.63 | 48.91 |

## Citation

Please consider citing our paper if you find our code, data, or models useful. Thank you!
```
@misc{li2023reflectiontuning,
      title={Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning},
      author={Ming Li and Lichang Chen and Jiuhai Chen and Shwai He and Heng Huang and Jiuxiang Gu and Tianyi Zhou},
      year={2023},
      eprint={2310.11716},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```