huihui-ai committed
Commit 0c136b3 (1 parent: 862dea5)

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -17,7 +17,7 @@ language:
  ## Overview
  `Llama-3.1-8B-Fusion-7030` is a mixed model that combines the strengths of two powerful Llama-based models: [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) and [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated). The weights are blended in a 7:3 ratio, with 70% of the weights from SuperNova-Lite and 30% from the abliterated Meta-Llama-3.1-8B-Instruct model.
  **Although it's a simple mix, the model is usable, and no gibberish has appeared**.
- his is an experiment. I test the [9:1](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-9010), [8:2](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-8020), [7:3](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-7030), [6:4](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-6040) and [5:5](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-5050) ratios separately to see how much impact they have on the model.
+ This is an experiment. I test the [9:1](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-9010), [8:2](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-8020), [7:3](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-7030), [6:4](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-6040) and [5:5](https://huggingface.co/huihui-ai/Llama-3.1-8B-Fusion-5050) ratios separately to see how much impact they have on the model.
  All model evaluation reports will be provided subsequently.
 
  ## Model Details
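
A fusion like the one described in the README above amounts to a per-tensor linear interpolation of the two checkpoints. The commit does not include the actual fusion script, so the snippet below is only a minimal sketch: it assumes both models share the Llama-3.1-8B architecture (so their state-dict keys line up), uses standard `transformers`/PyTorch calls, and the output directory name is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical 7:3 blend, based on the README's description:
# 70% SuperNova-Lite + 30% abliterated Meta-Llama-3.1-8B-Instruct.
BASE_ID = "arcee-ai/Llama-3.1-SuperNova-Lite"
OTHER_ID = "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated"
ALPHA = 0.7          # fraction of each tensor taken from SuperNova-Lite
OUT_DIR = "Llama-3.1-8B-Fusion-7030"  # illustrative output path

model_a = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
model_b = AutoModelForCausalLM.from_pretrained(OTHER_ID, torch_dtype=torch.bfloat16)

state_a = model_a.state_dict()
state_b = model_b.state_dict()

# Both checkpoints use the same architecture, so every key in one
# state dict has a matching tensor of identical shape in the other.
merged = {k: ALPHA * state_a[k] + (1.0 - ALPHA) * state_b[k] for k in state_a}

# Load the blended weights back into one of the models and save.
model_a.load_state_dict(merged)
model_a.save_pretrained(OUT_DIR)

# Reuse the base model's tokenizer for the fused checkpoint.
tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
tokenizer.save_pretrained(OUT_DIR)
```

Changing `ALPHA` to 0.9, 0.8, 0.6, or 0.5 would correspond to the other ratios (9:1, 8:2, 6:4, 5:5) tested in the linked sibling repositories.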