MaziyarPanahi committed
Commit e57b55a
1 Parent(s): c21d268

Correct number of parameters

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -32,7 +32,7 @@ language:
 
 On April 10th, [@MistralAI](https://huggingface.co/mistralai) released a model named "Mixtral 8x22B," a 176B MoE via magnet link (torrent):
 
-- 176B MoE with ~40B active
+- 141B MoE with ~35B active
 - Context length of 65k tokens
 - The base model can be fine-tuned
 - Requires ~260GB VRAM in fp16, 73GB in int4
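
For reference, a minimal back-of-envelope sketch (in Python, with assumed bit widths) of why the corrected parameter count is consistent with the VRAM bullet: weights-only memory is roughly total parameters times bytes per parameter, so 141B parameters lines up with the stated ~260GB in fp16, whereas the old 176B figure would imply closer to ~330GB.

```python
# Weights-only VRAM estimate: total parameters x bytes per parameter.
# Bit widths are assumptions; real usage adds activations, KV cache,
# and quantization overhead (which is why the README's int4 figure
# is somewhat higher than the raw weight size computed here).
total_params = 141e9  # Mixtral 8x22B total parameter count

for precision, bytes_per_param in [("fp16", 2.0), ("int4", 0.5)]:
    gib = total_params * bytes_per_param / 2**30
    print(f"{precision}: ~{gib:.0f} GiB for weights alone")

# Output:
# fp16: ~263 GiB for weights alone  (matches the README's ~260GB)
# int4: ~66 GiB for weights alone   (the README's 73GB includes overhead)
```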