Skylaude committed on
Commit 641b19b
1 Parent(s): 62b75ce

Update README.md

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -1,3 +1,17 @@
  ---
  license: apache-2.0
+ tags:
+ - MoE
+ - merge
+ - mergekit
+ - Mistral
+ - Microsoft/WizardLM-2-7B
  ---
+
+ # WizardLM-2-4x7B-MoE-exl2-4_25bpw
+
+ This is a quantized version of [WizardLM-2-4x7B-MoE](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE), an experimental MoE model made with [Mergekit](https://github.com/arcee-ai/mergekit). Quantization was done using version 0.0.18 of [ExLlamaV2](https://github.com/turboderp/exllamav2).
+
+ Please be sure to set experts per token to 4 for the best results! Context length should be the same as for Mistral-7B-Instruct-v0.1 (8k tokens). For the instruction template, Vicuna-v1.1 is recommended.
+
+ For more information, see the [original repository](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE).
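
Below is a minimal sketch of how the settings recommended in the added README (4 experts per token, 8k context, Vicuna-v1.1 prompt format) might be applied when loading the quant with ExLlamaV2. The local model path and the `num_experts_per_token` config attribute are assumptions rather than something stated in the commit; the ExLlamaV2 examples remain the authoritative reference for its API.

```python
# Sketch: load the exl2 4.25bpw quant with ExLlamaV2 and generate from a
# Vicuna-v1.1-style prompt. The model_dir path is a placeholder, and
# num_experts_per_token is an assumed config attribute (not from the README).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "WizardLM-2-4x7B-MoE-exl2-4_25bpw"  # local download path (placeholder)
config.prepare()
config.max_seq_len = 8192            # 8k context, as for Mistral-7B-Instruct-v0.1
config.num_experts_per_token = 4     # README recommends 4 experts per token

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

# Vicuna-v1.1 instruction template, as recommended in the README.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is a Mixture-of-Experts model? ASSISTANT:"
)

output = generator.generate_simple(prompt, settings, 256)
print(output)
```

Multi-turn prompts follow the same Vicuna-v1.1 pattern, appending further "USER: ... ASSISTANT: ..." exchanges after the system line.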