osanseviero and librarian-bot committed
Commit a2b6b70
Parent: f033992

Librarian Bot: Add moe tag to model (#10)


- Librarian Bot: Add moe tag to model (0f05aaa9733641ab723d9e70b2b5ba6a31078407)


Co-authored-by: Librarian Bot (Bot) <librarian-bot@users.noreply.huggingface.co>

Files changed (1):
README.md (+4 -2)
README.md CHANGED
@@ -1,13 +1,15 @@
 ---
-license: apache-2.0
 language:
 - fr
 - it
 - de
 - es
 - en
-inference: false
+license: apache-2.0
 library_name: mlx
+tags:
+- moe
+inference: false
 ---
 # Model Card for Mixtral-8x7B
 The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
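
For context, metadata edits like this one can be made programmatically with the `metadata_update` helper from `huggingface_hub`, which rewrites the YAML block at the top of a repo's README.md. Below is a minimal sketch, not the actual Librarian Bot implementation; the repo id is a placeholder, and in practice you may need to merge the new tag with any tags already present rather than setting the key outright.

```python
# Minimal sketch (not the actual Librarian Bot code): add a "moe" tag to a
# model repo's YAML metadata block via huggingface_hub.
from huggingface_hub import metadata_update

metadata_update(
    "mlx-community/Mixtral-8x7B-v0.1",  # hypothetical repo id, for illustration only
    {"tags": ["moe"]},                  # metadata keys to set in the README's YAML block
    repo_type="model",
    create_pr=True,                     # open a pull request instead of pushing directly
    commit_message="Add moe tag to model",
)
```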