
GPTQ-quantized version of the Mistral-7B-v0.2-hf model.


Mistral 7B v0.2 with attention_dropout=0.6, for training purposes.

Conversion process:

  1. Download the original weights from https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar

  2. Convert them with https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/convert_mistral_weights_to_hf.py

  3. You may need to copy tokenizer.model from the Mistral-7B-Instruct-v0.2 repo.
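
The steps above can be sketched as a shell script. The flag names passed to the conversion script (`--input_dir`, `--model_size`, `--output_dir`) and the local paths are assumptions; check them against the version of the script you download before running.

```shell
# Sketch of the conversion steps above; paths and script flags are
# assumptions -- verify them against the actual conversion script.

# 1. Download and unpack the original weights from Mistral's CDN.
wget https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar
mkdir -p mistral-7B-v0.2
tar -xf mistral-7B-v0.2.tar -C mistral-7B-v0.2

# 2. Convert to the Hugging Face format with the transformers script
#    (assumed flag names; see the script's --help for the real interface).
python convert_mistral_weights_to_hf.py \
    --input_dir mistral-7B-v0.2 \
    --model_size 7B \
    --output_dir mistral-7B-v0.2-hf

# 3. If tokenizer.model is missing from the output, copy the one from
#    the Mistral-7B-Instruct-v0.2 repo into the converted directory.
cp tokenizer.model mistral-7B-v0.2-hf/
```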