christian-dynamofl committed
Commit
ead8964
1 Parent(s): 3c05346

Added bulletpoints

Files changed (1): README.md (+5 -3)

README.md CHANGED
```diff
@@ -2,14 +2,16 @@ Dynamo 8B is an improvement of the Mistral-7B architecture for the purpose of mu
 
 Dynamo 8B has not been instruction fine-tuned and has not undergone alignment using techniques like reinforcement learning from human feedback. The intention behind crafting this model is to provide the research community with a model to explore vital multilingual capabilities that enable widespread use of LLMs globally.
 
+
 Model Specifications:
 
-Supported Languages: English, German, Spanish, Korean, Italian, Turkish.
+- Supported Languages: English, German, Spanish, Korean, Italian, Turkish.
 
-Context Window: 128K tokens*
+- Context Window: 128K tokens*
 
-License: At the moment, Dynamo-8B is released under a research-only license.
+- License: At the moment, Dynamo-8B is released under a research-only license.
 
 *Pretraining on the multilingual dataset was done with a sequence length of 4096 tokens
 
+
 Dynamo 8B is a pre-trained model that can be adapted and fine-tuned for a variety of tasks. However, it is new technology that carries risks. In some scenarios, it may generate inaccurate, unverified, or biased output despite efforts we have made to maximize model safety. As with all LLMs, we recommend users exercise critical thinking, validate outputs, and perform the requisite safety evaluations for specific downstream applications of the Dynamo model.
```
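Since the README describes Dynamo 8B as a Mistral-7B-based pre-trained causal language model, a minimal inference sketch with the Hugging Face transformers library follows. The repository id `dynamofl/dynamo-8b`, the dtype, and the generation settings are illustrative assumptions, not part of this commit; substitute the actual model id from the model card.

```python
# Minimal sketch of loading Dynamo 8B for inference with transformers.
# The repo id "dynamofl/dynamo-8b" is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dynamofl/dynamo-8b"  # assumption: replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit an 8B model on one GPU
    device_map="auto",
)

# Dynamo 8B is a base model (no instruction tuning or RLHF), so prompt it
# as a text completer rather than a chat assistant. German is one of the
# supported languages listed above.
prompt = "Die Hauptstadt von Deutschland ist"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As the README notes, outputs from the base model should be validated, and downstream applications should add their own safety evaluations.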