chaoscodes committed
Commit 13fd76c · 1 Parent: bb4ec00

Update README.md

Files changed (1):
  1. README.md +15 -0
README.md CHANGED
@@ -61,6 +61,21 @@ Through systematic experiments to determine the weights of different languages,
  The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
  Finally, we continually pre-train the Qwen1.5-0.5B model with 400 billion tokens, and the other models with 200 billion tokens, to obtain the Sailor models.

+ ### GGUF model list
+ | Name | Quant method | Bits | Size | Use case |
+ | ------------------------------------------------------------ | ------------ | ---- | -------- | -------------------------------------- |
+ | [ggml-model-Q2_K.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-Q2_K.gguf) | Q2_K | 2 | 3.10 GB | medium, significant quality loss |
+ | [ggml-model-Q3_K_L.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-Q3_K_L.gguf) | Q3_K_L | 3 | 4.22 GB | large, substantial quality loss |
+ | [ggml-model-Q3_K_M.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-Q3_K_M.gguf) | Q3_K_M | 3 | 3.92 GB | medium, balanced quality |
+ | [ggml-model-Q3_K_S.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-Q3_K_S.gguf) | Q3_K_S | 3 | 3.57 GB | medium, high quality loss |
+ | [ggml-model-Q4_K_M.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-Q4_K_M.gguf) | Q4_K_M | 4 | 4.77 GB | large, balanced quality |
+ | [ggml-model-Q4_K_S.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-Q4_K_S.gguf) | Q4_K_S | 4 | 4.54 GB | large, greater quality loss |
+ | [ggml-model-Q5_K_M.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-Q5_K_M.gguf) | Q5_K_M | 5 | 5.53 GB | large, balanced quality |
+ | [ggml-model-Q5_K_S.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-Q5_K_S.gguf) | Q5_K_S | 5 | 5.40 GB | large, very low quality loss |
+ | [ggml-model-Q6_K.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-Q6_K.gguf) | Q6_K | 6 | 6.34 GB | large, extremely low quality loss |
+ | [ggml-model-Q8_0.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-Q8_0.gguf) | Q8_0 | 8 | 8.21 GB | very large, extremely low quality loss |
+ | [ggml-model-f16.gguf](https://huggingface.co/sail/Sailor-7B-Chat-gguf/blob/main/ggml-model-f16.gguf) | f16 | 16 | 15.40 GB | very large, no quality loss |
+
  ### How to run with `llama.cpp`

  ```shell
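# Editor's hedged sketch, not the model card's own command (which
# continues past this diff hunk): fetch one quantized file from the
# table above. ggml-model-Q4_K_M.gguf is just an example pick; the
# resolve/ URL is the direct-download form of the blob/ links listed.
wget https://huggingface.co/sail/Sailor-7B-Chat-gguf/resolve/main/ggml-model-Q4_K_M.gguf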
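# Hedged sketch, assuming a llama.cpp checkout that has already been
# built: run the downloaded file with the standard `main` example.
# -m selects the GGUF file, -p supplies a prompt, -n caps the number of
# generated tokens; the prompt is a placeholder, not the README's own.
./main -m ggml-model-Q4_K_M.gguf -p "What can you tell me about Southeast Asia?" -n 256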