dfurman committed
Commit 08dd113
1 Parent(s): 3d9dc92

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -28,27 +28,27 @@ The primary use of LLaMA is research on large language models, including: explor
  The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.

  **Out-of-scope use cases:**
- LLaMA is a base model, also known as a foundation model. As such, it should not be used on downstream applications without further risk evaluation, mitigation, and potential further fine-tuning. In particular, the model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
+ LLaMA is a base model, also known as a foundation model. As such, it should not be used on downstream applications without further risk evaluation, mitigation, and additional fine-tuning. In particular, the model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.

  ## Factors
  **Relevant factors:**
- One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of the LLaMA dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for LLaMA.
+ One of the most relevant factors for which model performance may vary is which language is used. Although 20 languages were included in the training data, most of the LLaMA dataset is made of English text, and the model is thus expected to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, which is likely also the case for LLaMA.

  **Evaluation factors:**
- As LLaMA is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
+ As LLaMA is trained on data from the Web, it is expected that the model reflects biases from this source. The RAI datasets are thus used to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. The toxicity of model generations is also measured, depending on the toxicity of the context used to prompt the model.

  ## Ethical considerations
  **Data:**
- The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
+ The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. LLaMA is thus expected to exhibit such biases from the training data.

  **Human life:**
  The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.

  **Mitigations:**
- We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
+ The data was filtered from the Web based on its proximity to Wikipedia text and references. For this, the Kneser-Ney language model is used with a fastText linear classifier.

  **Risks and harms:**
- Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect LLaMA to be an exception in this regard.
+ Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. LLaMA is not expected to be an exception in this regard.

  **Use cases:**
  LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
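The "Evaluation factors" paragraph in the diff above describes measuring the toxicity of model generations as a function of the toxicity of the prompting context. A minimal sketch of that kind of conditional evaluation follows; `generate` and `toxicity_score` are hypothetical stand-ins for a LLaMA generation call and a toxicity classifier, not the authors' actual evaluation harness.

```python
# Sketch: mean continuation toxicity, bucketed by prompt toxicity.
# Both callables are assumptions standing in for real components.
from statistics import mean
from typing import Callable

def toxicity_by_context(
    prompts: list[str],
    generate: Callable[[str], str],          # model continuation for a prompt
    toxicity_score: Callable[[str], float],  # toxicity score in [0, 1]
    n_bins: int = 4,
) -> dict[int, float]:
    buckets: dict[int, list[float]] = {b: [] for b in range(n_bins)}
    for prompt in prompts:
        # Bucket the prompt by its own toxicity...
        b = min(int(toxicity_score(prompt) * n_bins), n_bins - 1)
        # ...then score what the model generates from that prompt.
        buckets[b].append(toxicity_score(generate(prompt)))
    # Mean continuation toxicity per prompt-toxicity bucket.
    return {b: mean(scores) for b, scores in buckets.items() if scores}
```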
 
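The "Mitigations" paragraph describes a CCNet-style filter: keep Web pages that are close to Wikipedia, as judged by a Kneser-Ney language model and a fastText linear classifier. A hedged sketch is below; the model file names, the `__label__wiki_ref` label, both thresholds, and the way the two signals are combined are illustrative assumptions, not the values used for LLaMA.

```python
# Sketch of Wikipedia-proximity filtering with kenlm + fastText.
import kenlm     # Kneser-Ney n-gram LM bindings (pip install kenlm)
import fasttext  # linear text classifier (pip install fasttext)

# Hypothetical artifacts: an n-gram LM trained on Wikipedia text, and a
# classifier trained to recognize pages referenced by Wikipedia.
wiki_lm = kenlm.Model("wikipedia.kn.arpa.bin")
ref_clf = fasttext.load_model("wiki_references.bin")

PPL_MAX = 1000.0  # illustrative cutoff; real pipelines tune this per language
P_MIN = 0.5       # illustrative classifier confidence cutoff

def keep_page(text: str) -> bool:
    """True if a page looks close enough to Wikipedia to keep."""
    one_line = " ".join(text.split())  # fastText predict() rejects newlines
    low_perplexity = wiki_lm.perplexity(one_line) <= PPL_MAX
    labels, probs = ref_clf.predict(one_line)
    looks_referenced = labels[0] == "__label__wiki_ref" and probs[0] >= P_MIN
    return low_perplexity or looks_referenced
```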