# EXL2 quants of Mistral-7B-instruct

Converted from [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1). This is a
straight conversion, except that I have modified `config.json` to set the default context size to 7168 tokens,
since in initial testing the model becomes unstable somewhat past that length. It's possible that sliding-window
attention would let the model use its advertised 32k-token context, but this hasn't been tested yet.
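
For reference, here is a minimal loading sketch using the exllamav2 library. The model directory is a hypothetical local path, not part of this repo; `config.prepare()` reads the modified `config.json`, so the 7168-token default is picked up automatically.

```python
# Minimal exllamav2 loading sketch (assumes `pip install exllamav2` and a
# locally downloaded quant directory; the path below is hypothetical).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer

config = ExLlamaV2Config()
config.model_dir = "/models/Mistral-7B-instruct-exl2-4.0bpw"  # hypothetical path
config.prepare()  # reads config.json; picks up the 7168-token default context

# To experiment past the tested range, override the limit explicitly
# (untested territory, per the note above):
# config.max_seq_len = 32768

model = ExLlamaV2(config)
model.load()  # single-GPU load; pass a gpu_split list for multi-GPU
cache = ExLlamaV2Cache(model)
tokenizer = ExLlamaV2Tokenizer(config)
```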

[2.50 bits per weight](https://huggingface.co/turboderp/Mistral-7B-instruct-exl2/tree/2.5bpw)    
[2.70 bits per weight](https://huggingface.co/turboderp/Mistral-7B-instruct-exl2/tree/2.7bpw)    
[3.00 bits per weight](https://huggingface.co/turboderp/Mistral-7B-instruct-exl2/tree/3.0bpw)    
[3.50 bits per weight](https://huggingface.co/turboderp/Mistral-7B-instruct-exl2/tree/3.5bpw)    
[4.00 bits per weight](https://huggingface.co/turboderp/Mistral-7B-instruct-exl2/tree/4.0bpw)    
[4.65 bits per weight](https://huggingface.co/turboderp/Mistral-7B-instruct-exl2/tree/4.65bpw)    
[5.00 bits per weight](https://huggingface.co/turboderp/Mistral-7B-instruct-exl2/tree/5.0bpw)    
[6.00 bits per weight](https://huggingface.co/turboderp/Mistral-7B-instruct-exl2/tree/6.0bpw)    
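
Each bitrate lives on its own branch, so you can fetch a single variant with `huggingface_hub` by passing the branch name as `revision`. A small sketch; the target directory is an assumption:

```python
# Download one quant branch; `revision` selects the bpw variant listed above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="turboderp/Mistral-7B-instruct-exl2",
    revision="4.0bpw",  # any branch from the list above
    local_dir="/models/Mistral-7B-instruct-exl2-4.0bpw",  # hypothetical path
)
```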

The [measurement.json](https://huggingface.co/turboderp/Mistral-7B-instruct-exl2/blob/main/measurement.json) file holds the calibration measurements from this conversion; it can be reused with the exllamav2 converter (via its `-m` argument) to produce additional bitrates without repeating the measurement pass.