Update README.md

README.md CHANGED

@@ -24,7 +24,7 @@ Mistral-NeMo-12B-Base is a completion model intended for use in 80+ programming languages
 
 **Model Developer:** [NVIDIA](https://www.nvidia.com/en-us/) and [MistralAI](https://mistral.ai/)
 
-**Model Dates:** Mistral-NeMo-12B-Base was trained between
+**Model Dates:** Mistral-NeMo-12B-Base was trained between April 2024 and June 2024.
 
 ### Model Architecture:
 
@@ -42,6 +42,12 @@ Mistral-NeMo-12B-Base is a transformer model, with the following architecture choices:
 
 **Architecture Type:** Transformer Decoder (auto-regressive language model)
 
+### Dataset & Training
+
+The training corpus for Mistral-NeMo-12B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types, such as webpages, dialogue, articles, and other written materials, and span domains including legal, math, science, finance, and more.
+
+**Data Freshness:** The pretraining data has a cutoff of April 2024.
+
 ### Evaluation Results
 
 **Main Benchmarks**
@@ -65,3 +71,12 @@ Multilingual MMLU in 5-shot setting:
 - Russian: 59.2%
 - Chinese: 59.0%
 - Japanese: 59.0%
+
+### Limitations
+
+The model was trained on data that contains toxic language, unsafe content, and societal biases originally crawled from the internet. The model may therefore amplify those biases and return toxic responses, especially when given toxic prompts. It may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even when the prompt itself contains nothing explicitly offensive.
+
+### Ethical Considerations
+
+NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development of a wide array of AI applications. When downloading or using this model in accordance with our terms of service, developers should work with their internal model team to ensure it meets the requirements of the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).