Update README.md
README.md CHANGED
```diff
@@ -1,4 +1,5 @@
 ---
+
 base_model: []
 library_name: transformers
 license: llama2
@@ -11,7 +12,7 @@ Thank you to Meta for the weights for Meta-Llama-3-8B-Instruct
 
 This is an upscaling of the Meta-Llama-3-8B-Instruct AI using techniques created for Mistral-Evolved-11b-v0.1. This AI model has been upscaled from 8B parameters to 13B parameters without any continuous pretraining or fine-tuning.
 
-From testing, the model seems to function perfectly at fp16, but has some issues at 4-bit quantization.
+From testing, the model seems to function perfectly at fp16, but has some issues at 4-bit quantization using bitsandbytes.
 
 The model that was used to create this one is linked below:
 
```