Update README.md

shoutout to [Teknium](https://huggingface.co/teknium) and the NousResearch team

Try out the model on the [Fireworks platform](https://fireworks.ai/models/fireworks/mixtral-8x22b-instruct-preview).

## Model Details

The model is a LoRA finetune. We use the following settings:

1. Lora_R - 8
2. Lora_alpha - 16
3. Lora_dropout - 0.05
4. Context length - 2048
5. Chat template format - [Vicuna](https://github.com/chujiezheng/chat_templates/blob/main/chat_templates/vicuna.jinja)
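The four numeric settings above map directly onto a PEFT adapter configuration. The snippet below is a minimal sketch, assuming the Hugging Face `peft` library; it is an illustrative reconstruction, not the authors' actual training script:

```python
from peft import LoraConfig

# Hypothetical reconstruction of the settings listed above.
# Target modules and other training details are not stated in this README.
lora_config = LoraConfig(
    r=8,                # Lora_R: rank of the low-rank update matrices
    lora_alpha=16,      # Lora_alpha: scaling factor for the update
    lora_dropout=0.05,  # Lora_dropout: dropout applied to LoRA layers
    task_type="CAUSAL_LM",
)
```

The context length of 2048 is a property of the training data packing, not of `LoraConfig`, so it would be set in the tokenizer/dataloader rather than here.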
## How to Get Started with the Model

To try out the model on a hosted platform, go [here](https://fireworks.ai/models/fireworks/mixtral-8x22b-instruct-preview).
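Fireworks also exposes an OpenAI-compatible REST API. The sketch below only builds the JSON request body; the endpoint path and model identifier are assumptions inferred from the URL in this README, not confirmed by it:

```python
# Assumed OpenAI-compatible endpoint and model id (verify against the
# Fireworks platform page linked above before use).
API_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/mixtral-8x22b-instruct-preview"

def build_request(prompt, max_tokens=256):
    """Build the JSON body for a chat-completions call."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

# Send with e.g.:
#   requests.post(API_URL, json=build_request("Hello"),
#                 headers={"Authorization": f"Bearer {api_key}"})
```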