---
language:
- en
tags:
- llama
license: other
metrics:
- MMLU
- ARC
- HellaSwag
- TruthfulQA
---

# 🥳 Platypus-30B has arrived! 

Platypus-30B is an instruction fine-tuned model based on the LLaMA-30B transformer architecture.

| Metric                | Value |
|-----------------------|-------|
| MMLU (5-shot)         | 64.2  |
| ARC (25-shot)         | 64.6  |
| HellaSwag (10-shot)   | 84.3  |
| TruthfulQA (0-shot)   | 45.8  |
| Avg.                  | 64.7  |

We used the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmarks reported above.

## Model Details

* **Trained by:** Cole Hunter & Ariel Lee
* **Model type:** **Platypus-30B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s):** English
* **License for base weights:** The base LLaMA model's weights are released under Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).

| Hyperparameter            | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 33B   |
| \\(d_\text{model}\\)      | 6656  |
| \\(n_\text{layers}\\)     | 60    |
| \\(n_\text{heads}\\)      | 52    |
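
As a quick sanity check, the listed dimensions can be read straight off the published config with `transformers`. This is a minimal sketch, assuming Hub access and that the repo ships a standard LLaMA config:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("garage-bAInd/Platypus-30B")

# LlamaConfig attribute names, mapped to the symbols in the table above.
print(config.hidden_size)          # d_model   -> 6656
print(config.num_hidden_layers)    # n_layers  -> 60
print(config.num_attention_heads)  # n_heads   -> 52
```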

## Training Dataset

Platypus-30B was fine-tuned on a dataset of highly filtered and curated question-and-answer pairs. Release TBD.

## Training Procedure

`garage-bAInd/Platypus-30B` was instruction fine-tuned using LoRA on 4 A100 80GB GPUs. For training details and inference instructions, please see the [Platypus-30B](https://github.com/arielnlee/Platypus-30B.git) GitHub repo.
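
The repo linked above documents the supported inference setup; for orientation only, a minimal `transformers` sketch might look like the following. The Alpaca-style prompt template is an assumption (verify the exact format against the repo), and fp16 weights need roughly 65 GB of GPU memory:

```python
# Minimal inference sketch, not the repo's official script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("garage-bAInd/Platypus-30B")
model = AutoModelForCausalLM.from_pretrained(
    "garage-bAInd/Platypus-30B",
    torch_dtype=torch.float16,
    device_map="auto",  # shards across available GPUs (requires accelerate)
)

# Alpaca-style template; an assumption here, check the repo for the exact format.
prompt = "### Instruction:\nExplain LoRA in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```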

## Reproducing Evaluation Results
Install LM Evaluation Harness:
```
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
Each task was evaluated on a single A100 80GB GPU.

ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```

HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/hellaswag_10shot.json --device cuda --num_fewshot 10
```

MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/mmlu_5shot.json --device cuda --num_fewshot 5
```

TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/truthfulqa_0shot.json --device cuda
```
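
Each command writes a JSON results file under `results/Platypus-30B/`. A small sketch for pulling the headline numbers back out, assuming the harness's result layout at the time (`{"results": {task_name: {metric: value}}}`) and the leaderboard's metric names (`acc_norm`, `acc`, `mc2`); adjust the keys if your harness version differs:

```python
import json

def mean_metric(path, metric):
    """Average a metric across all tasks in one harness results file."""
    with open(path) as f:
        results = json.load(f)["results"]
    vals = [task[metric] for task in results.values() if metric in task]
    return sum(vals) / len(vals)

print("ARC:", mean_metric("results/Platypus-30B/arc_challenge_25shot.json", "acc_norm"))
print("HellaSwag:", mean_metric("results/Platypus-30B/hellaswag_10shot.json", "acc_norm"))
print("MMLU:", mean_metric("results/Platypus-30B/mmlu_5shot.json", "acc"))
print("TruthfulQA:", mean_metric("results/Platypus-30B/truthfulqa_0shot.json", "mc2"))
```
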
## Limitations and bias

The base LLaMA model was trained on a variety of data, some of which may contain offensive, harmful, or biased content that can lead to toxic behavior; see Section 5.1 of the LLaMA paper. We have not performed any studies to determine how fine-tuning on the aforementioned dataset affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.

## Citations

```bibtex
@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}

@article{hu2021lora,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
  journal={arXiv preprint arXiv:2106.09685},
  year={2021}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_garage-bAInd__Platypus-30B).

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 57.12 |
| ARC (25-shot)         | 64.59 |
| HellaSwag (10-shot)   | 84.26 |
| MMLU (5-shot)         | 64.23 |
| TruthfulQA (0-shot)   | 45.35 |
| Winogrande (5-shot)   | 81.37 |
| GSM8K (5-shot)        | 14.4  |
| DROP (3-shot)         | 45.65 |