sherazkhan committed on
Commit f59cf85
Parent: fe18f92

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -6,11 +6,11 @@ library_name: transformers
 tags:
 - text Generation
 ---
-# Mixllama-8x8b-Instruct-v0.1 based on LLaMA 3
+# Mixllama3-8x8b-Instruct-v0.1 based on LLaMA 3
 
 An experimental MoE (Mixture of Experts) model based on the LLaMA-3-8B.
-MixLLaMA-8x8b combines 8 fine-tuned LLaMA 8B models, each specialized in a specific set of tasks.
-By leveraging the strengths of each expert model, Mixllama-8x8b aims to deliver enhanced performance and adaptability across a wide range of applications.
+MixLLaMA3-8x8b combines 8 fine-tuned LLaMA 8B models, each specialized in a specific set of tasks.
+By leveraging the strengths of each expert model, Mixllama3-8x8b aims to deliver enhanced performance and adaptability across a wide range of applications.
 
 
 ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/64414d01bd0c97265297acc5/OQ-cZNYe_2r1JK4Z6fCgg.gif)
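For context, a minimal usage sketch for the model described in this README, loading it with the transformers library named in the metadata. The repository id `sherazkhan/Mixllama3-8x8b-Instruct-v0.1` and the prompt are assumptions for illustration only; they are not confirmed by this commit.

```python
# Minimal sketch: loading the MoE model with transformers (repo id is an assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sherazkhan/Mixllama3-8x8b-Instruct-v0.1"  # hypothetical repo id for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread the 8-expert model across available devices
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

# Generate a short completion to verify the model loads and runs.
inputs = tokenizer("Explain what a Mixture of Experts model is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```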