bleysg committed · Commit b734d6b · 1 Parent(s): 082b15b

Update README.md

Files changed (1): README.md (+26 -12)

README.md:

license: cc-by-nc-4.0
---

<p><h1>🐋 The First OrcaPlatypus! 🐋</h1></p>

![Platty](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypusMerge.jpg)

  # OpenOrca-Platypus2-13B

OpenOrca-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).

This model is more than the sum of its parts! We are happy to be teaming up with the Platypus team to bring you a new model that once again tops the leaderboards!
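
To get started, here is a minimal usage sketch with Hugging Face `transformers` (the dtype, device, and generation settings below are illustrative, not the authors' recommendations):

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/OpenOrca-Platypus2-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 13B model in fp16 needs roughly 26 GB of GPU memory
    device_map="auto",          # requires the `accelerate` package
)

# Alpaca-style template used by the Platypus side of the merge (see below)
prompt = "### Instruction:\n\nExplain what makes a model merge work.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```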

# Benchmark Metrics

| Metric | Value |
|-----------------------|-------|
  | TruthfulQA (0-shot) | 52.69 |
  | Avg. | 64.56 |

We use the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.

# Model Details

  * **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **OpenOrcaxOpenChat-Preview2-13B** trained by Open-Orca
  * **Model type:** **OpenOrca-Platypus2-13B** is an auto-regressive language model based on the LLaMA 2 transformer architecture.
  * **License for Platypus2-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
* **License for OpenOrcaxOpenChat-Preview2-13B base weights**: LLaMA 2 commercial

# Prompt Template for base Platypus2-13B

  ```
### Instruction:

### Response:
```
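
For convenience, a tiny helper that fills this template (the function name and exact whitespace are our own illustration):

```
def make_platypus_prompt(instruction: str) -> str:
    # Alpaca-style template from the block above
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"

print(make_platypus_prompt("Summarize LoRA in one sentence."))
```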

# Prompt Template for base OpenOrcaxOpenChat-Preview2-13B

OpenChat Llama2 V1: see [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) for additional information.
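
As a purely hypothetical sketch of an OpenChat-style single-turn prompt (the `User:`/`Assistant:` roles and the `<|end_of_turn|>` separator are assumptions here; verify the real format on the linked card):

```
def make_openchat_prompt(user_message: str) -> str:
    # Hypothetical OpenChat Llama2 V1 layout; confirm against the linked model card
    return f"User: {user_message}<|end_of_turn|>Assistant:"
```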

# Training Datasets

`garage-bAInd/Platypus2-13B` was trained using the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

Please see our [paper](https://platypus-llm.github.io/Platypus.pdf) and project webpage for additional information.

[`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) was trained using a refined, 220k-example subset of the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca).
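
Both datasets are public on the Hugging Face Hub; a quick sketch for inspecting them with the `datasets` library (note the refined 220k training subset is not published as a separate split):

```
from datasets import load_dataset

platypus = load_dataset("garage-bAInd/Open-Platypus", split="train")
openorca = load_dataset("Open-Orca/OpenOrca", split="train")  # full dataset, not the refined subset

print(platypus)            # features and row count
print(openorca[0].keys())  # fields of a single record
```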

# Training Procedure

`garage-bAInd/Platypus2-13B` was instruction fine-tuned using LoRA on a single A100 80GB GPU. For training details and inference instructions, please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.
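
The Platypus repo above holds the actual training code; the sketch below only shows the general shape of a LoRA setup with `peft` (every hyperparameter here is a placeholder, not the Platypus configuration):

```
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative base checkpoint (gated repo); Platypus2-13B started from a Llama 2 13B base
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")

lora_config = LoraConfig(
    r=16,                                 # placeholder rank
    lora_alpha=32,                        # placeholder scaling factor
    lora_dropout=0.05,                    # placeholder dropout
    target_modules=["q_proj", "v_proj"],  # common LLaMA attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```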

# Reproducing Evaluation Results

  Install LM Evaluation Harness:
```
# standard source install; pin the same commit the HuggingFace LLM Leaderboard uses for identical numbers
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

TruthfulQA:
```
  python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/OpenOrca-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json --device cuda
  ```
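
Each command writes a JSON report to its `--output_path`; a small sketch for reading the numbers back (assuming the harness's standard top-level `"results"` key, which can vary between versions):

```
import json

with open("results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json") as f:
    report = json.load(f)

# Print every metric the harness recorded for each task
for task, metrics in report["results"].items():
    print(task, metrics)
```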

# Limitations and bias

Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).

# Citations

  ```bibtex
  @misc{touvron2023llama,
 