nicholasKluge committed
Commit: 84221b6
1 Parent(s): 787769a

Update README.md

Files changed (1):
  1. README.md (+6 -18)
README.md CHANGED

@@ -39,7 +39,7 @@ co2_eq_emissions:
 ---
 # Aira-2-1B1
 
-`Aira-2` is the second version of the Aira instruction-tuned series. `Aira-2-1B1` is an instruction-tuned model based on [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T). The model was trained with a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc).
+Aira-2 is the second version of the Aira instruction-tuned series. Aira-2-1B1 is an instruction-tuned model based on [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T). The model was trained with a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).
 
 Check our gradio-demo in [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).
 
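For orientation, querying the model described in the rewritten paragraph above might look like the sketch below. This is a minimal, assumption-laden example: the repo id `nicholasKluge/Aira-2-1B1` is inferred from the demo and leaderboard links in this diff, and the plain-string prompt stands in for whatever chat template the card's usage section actually prescribes.

```python
# Minimal sketch (assumptions noted above): load Aira-2-1B1 and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "nicholasKluge/Aira-2-1B1"  # inferred from links in this card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Plain-string prompt; the card's actual instruction template may differ.
inputs = tokenizer("What is instruction tuning?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```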
@@ -100,9 +100,11 @@ The model will output something like:
 
 ## Limitations
 
-🤥 Generative models can perpetuate the generation of pseudo-informative content, that is, false information that may appear truthful.
+- **Hallucinations:** This model can produce content that can be mistaken for the truth but is, in fact, misleading or entirely false, i.e., hallucination.
 
-🤬 In certain types of tasks, generative models can produce harmful and discriminatory content inspired by historical stereotypes.
+- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., content that is harmful, offensive, or detrimental to individuals, groups, or communities.
+
+- **Repetition and Verbosity:** The model may get stuck in repetition loops (especially if the repetition penalty during generation is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
 
 ## Evaluation
 
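The repetition caveat in the new Limitations list can often be softened at decoding time. A minimal sketch, reusing `model`, `tokenizer`, and `inputs` from the previous example; the value 1.2 is illustrative, not a recommendation from the card:

```python
# Raising repetition_penalty above 1.0 penalizes tokens the model has already
# emitted, discouraging the repetition loops noted in the Limitations section.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    repetition_penalty=1.2,  # illustrative value; 1.0 disables the penalty
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```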
@@ -131,18 +133,4 @@ The model will output something like:
 
 ## License
 
-The `Aira-2-1B1` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
-
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nicholasKluge__Aira-2-1B1)
-
-| Metric              | Value |
-|---------------------|-------|
-| Avg.                | 25.19 |
-| ARC (25-shot)       | 23.21 |
-| HellaSwag (10-shot) | 26.97 |
-| MMLU (5-shot)       | 24.86 |
-| TruthfulQA (0-shot) | 50.63 |
-| Winogrande (5-shot) | 50.28 |
-| GSM8K (5-shot)      | 0.0   |
-| DROP (3-shot)       | 0.39  |
+Aira-2-1B1 is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
 