paopao0226 committed
Commit be5cb84
1 Parent(s): f806e66

Update README.md

Files changed (1): README.md (+41 -3)
README.md CHANGED
@@ -1,3 +1,41 @@
- ---
- license: apache-2.0
- ---
# PandaLM

We are glad to introduce the **original version** of Alpaca from the PandaLM project. To highlight the effectiveness of using PandaLM-7B for instruction tuning LLMs, we evaluate the performance of models tuned with PandaLM's selected optimal hyperparameters. Both this version and the original Alpaca version have been submitted to the Hugging Face [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The full checkpoint has been uploaded to Hugging Face, so you can simply load the model and tokenizer for downstream tasks.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("WeOpenML/Alpaca-7B-v1", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("WeOpenML/Alpaca-7B-v1")
```
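The card does not state a prompt format. A minimal inference sketch, assuming the standard Alpaca instruction template (an assumption on our part, not confirmed by this card):

```python
# Assumption: the model expects the standard Alpaca instruction template.
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-style prompt template."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

prompt = build_prompt("Give three tips for staying healthy.")

# With the model and tokenizer loaded as above (requires the 7B weights):
# inputs = tokenizer(prompt, return_tensors="pt")
# output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) is shown only as a reproducible default; adjust generation settings to taste.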

For more information about PandaLM, please check out [our GitHub repo](https://github.com/WeOpenML/PandaLM), [our paper](https://arxiv.org/abs/2306.05087), and the [PandaLM model](https://huggingface.co/WeOpenML/PandaLM-7B-v1). The repo is under the Apache License 2.0.

## Updates

***

- 2023.7.21: We updated the model card and basic info.
- 2023.7.18: We released the checkpoint on Hugging Face.

## Citation

```
@misc{pandalm2023,
  title={PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization},
  author={Wang, Yidong and Yu, Zhuohao and Zeng, Zhengran and Yang, Linyi and Wang, Cunxiang and Chen, Hao and Jiang, Chaoya and Xie, Rui and Wang, Jindong and Xie, Xing and Ye, Wei and Zhang, Shikun and Zhang, Yue},
  year={2023},
  journal={arXiv preprint arXiv:2306.05087}
}

@misc{PandaLM,
  author = {Wang, Yidong and Yu, Zhuohao and Zeng, Zhengran and Yang, Linyi and Heng, Qiang and Wang, Cunxiang and Chen, Hao and Jiang, Chaoya and Xie, Rui and Wang, Jindong and Xie, Xing and Ye, Wei and Zhang, Shikun and Zhang, Yue},
  title = {PandaLM: Reproducible and Automated Language Model Assessment},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/WeOpenML/PandaLM}},
}
```