Update README.md #3
opened by ari9dam

README.md CHANGED
@@ -4,24 +4,17 @@ tags:
 - orca
 - orca2
 - microsoft
-license: other
-license_name: microsoft-research-license
-license_link: LICENSE
 ---
 
 # Orca 2
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-Orca 2 is built for research purposes only and provides a single turn response
+Orca 2 is a helpful assistant that is built for research purposes only and provides a single turn response
+in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization.
+The model is designed to excel particularly in reasoning.
 
-
-
-1. This is a research model, intended to show that we can use capable models and complex workflows (advanced prompts, multiple calls) to create synthetic data that can teach Small Language Models (SLMs) new capabilities. We chose reasoning because it is a widely useful capability that SLMs lack.
-2. The model is not optimized for chat and has not been trained with RLHF or DPO. It is best used after being finetuned for chat or for a specific task.
-3. Beyond reasoning, the model inherits capabilities and limitations of its base (LLAMA-2 base). We have already seen that the benefits of the Orca training can be applied to other base model too.
-
-We make Orca 2's weights publicly available to support further research on the development, evaluation, and alignment of SLMs.
+We open-source Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs.
 
 ## What is Orca 2’s intended use(s)?
 
@@ -32,12 +25,12 @@ building better frontier models.
 ## How was Orca 2 evaluated?
 
 Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
-to Section 6 and Appendix in the
+to Section 6 and Appendix in the paper for details on evaluations.
 
 ## Model Details
 
 Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities.
-All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found
+All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found at: LINK to Tech Report
 
 Please refer to LLaMA-2 technical report for details on the model architecture.
 
@@ -231,16 +224,4 @@ answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
 final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
 
 print(final_output)
-```
-
-## Citation
-```bibtex
-@misc{mitra2023orca,
-      title={Orca 2: Teaching Small Language Models How to Reason},
-      author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
-      year={2023},
-      eprint={2311.11045},
-      archivePrefix={arXiv},
-      primaryClass={cs.AI}
-}
 ```
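
For readers landing on this diff without the full model card: the third hunk sits at the tail of the card's inference example. Below is a minimal sketch of how those context lines (`new_output_ids`, `answers`, `final_output`) fit into the surrounding generation flow, assuming the `microsoft/Orca-2-13b` checkpoint and Orca 2's ChatML-style prompt format; `should_filter_out` stands in for the card's content-filtering helper and is stubbed here as a hypothetical keyword check.

```python
# Minimal sketch of the inference flow hunk 3 is taken from; not the
# card's verbatim example. Assumes the microsoft/Orca-2-13b checkpoint
# and Orca 2's ChatML-style prompt format.
import transformers

model_id = "microsoft/Orca-2-13b"
model = transformers.AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id, use_fast=False)

system_message = "You are Orca, an AI language model created by Microsoft."
user_message = "How do you identify a prime number?"
prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{user_message}<|im_end|>\n"
    f"<|im_start|>assistant"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(inputs["input_ids"], max_new_tokens=256)

# Decode only the newly generated tokens, matching the variable names
# that appear as context lines in the diff above.
new_output_ids = output_ids[:, inputs["input_ids"].shape[1]:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)

def should_filter_out(text: str) -> bool:
    """Hypothetical stand-in for the card's content-filtering helper."""
    blocked_terms = ["example-blocked-term"]
    return any(term in text.lower() for term in blocked_terms)

final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
print(final_output)
```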