JayhC committed
Commit
4c3b52f
1 Parent(s): 9829f6e

Update README.md

Files changed (1)
  1. README.md +8 -0
README.md CHANGED
@@ -14,6 +14,14 @@ widget:
    - role: user
      content: What is your favorite condiment?
 ---
+
+
+ 4.5bpw/h6 exl2 quantization of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), using the default exllamav2 calibration dataset and sized to make full use of my 31 GB of available VRAM (Windows reserves the remaining GB).
+
+ ---
+
+ **ORIGINAL CARD:**
+
 # Model Card for Mixtral-8x7B
 The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
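
As a usage note (not part of the commit): below is a minimal, illustrative sketch of how an exl2 quant like this one is typically loaded with the exllamav2 Python API. The class and method names follow exllamav2's published example code; the local model path, prompt, and sampling values are placeholders, not taken from this repository.

```python
# Minimal sketch: load an exl2-quantized Mixtral with exllamav2 and generate one reply.
# The model directory below is a placeholder (assumption) - point it at your local download.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "/models/Mixtral-8x7B-Instruct-v0.1-4.5bpw-h6-exl2"  # placeholder path

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache so load_autosplit can place layers
model.load_autosplit(cache)               # spread layers across the available GPU VRAM
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

# Mixtral-Instruct expects the [INST] ... [/INST] chat format.
prompt = "[INST] What is your favorite condiment? [/INST]"
print(generator.generate_simple(prompt, settings, 200))
```

`load_autosplit` fills whichever GPUs are visible, which is why a quant sized to the roughly 31 GB VRAM budget described above can be loaded without manual per-GPU split settings.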