Update README.md
README.md
CHANGED
@@ -11,7 +11,7 @@ quantized_by: Thireus
 # WizardLM 70B V1.0 - EXL2
 - Model creator: [WizardLM](https://huggingface.co/WizardLM)
 - Original model: [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)
-
+- Model used for quantization: [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) – float16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)
 
 ## Models available in this repository
 
@@ -56,7 +56,7 @@ mkdir -p ~/EXL2/WizardLM-70B-V1.0-HF_4bit # Create the output directory
 python convert.py -i ~/float16_safetensored/WizardLM-70B-V1.0-HF -o ~/EXL2/WizardLM-70B-V1.0-HF_4bit -c ~/EXL2/0000.parquet -b 4.0 -hb 6
 ```
 
-(*) Use any one of the following scripts to convert your
+(*) Use any one of the following scripts to convert your pytorch_model bin files to safetensors:
 
 - https://github.com/turboderp/exllamav2/blob/master/util/convert_safetensors.py
 - https://huggingface.co/Panchovix/airoboros-l2-70b-gpt4-1.4.1-safetensors/blob/main/bin2safetensors/convert.py