doof-ferb committed
Commit 5076f53
1 Parent(s): 729413b

adding details about files like TheBloke

Files changed (1)
  1. README.md +31 -3
README.md CHANGED
@@ -1,7 +1,15 @@
 ---
-# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
-# Doc / guide: https://huggingface.co/docs/hub/model-cards
-{}
+license: other
+language:
+- vi
+model_name: PhoGPT 7B5 Instruct
+inference: false
+model_creator: VinAI Research
+model_link: https://huggingface.co/vinai/PhoGPT-7B5-Instruct
+model_type: mpt
+pipeline_tag: text-generation
+quantized_by: nguyenviet
+base_model: vinai/PhoGPT-7B5-Instruct
 ---
 
 # PhoGPT-7B5-Instruct.GGUF
@@ -19,3 +27,23 @@ Select and download the quantization version that fits the needs.
 ## License
 
 PhoGPT is licensed under the [PhoGPT Community License](https://github.com/VinAIResearch/PhoGPT/blob/main/LICENSE), Copyright (c) VinAI. All Rights Reserved.
30
+
31
+ ## Provided files
32
+
33
+ | Name | Quant method | Size | Use case |
34
+ | ---- | ---- | ---- | ----- |
35
+ | [PhoGPT-7B5-Instruct-q2_k.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q2_k.gguf) | Q2_K | 3.8 GB | smallest, significant quality loss - not recommended for most purposes |
36
+ | [PhoGPT-7B5-Instruct-q3_k_s.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q3_k_s.gguf) | Q3_K_S | 4.07 GB | very small, high quality loss |
37
+ | [PhoGPT-7B5-Instruct-q3_k_m.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q3_k_m.gguf) | Q3_K_M | 4.66 GB | very small, high quality loss |
38
+ | [PhoGPT-7B5-Instruct-q3_k_l.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q3_k_l.gguf) | Q3_K_L | 4.98 GB | small, substantial quality loss |
39
+ | [PhoGPT-7B5-Instruct-q4_0.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q4_0.gguf) | Q4_0 | 5.06 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
40
+ | [PhoGPT-7B5-Instruct-q4_k_s.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q4_k_s.gguf) | Q4_K_S | 5.1 GB | small, greater quality loss |
41
+ | [PhoGPT-7B5-Instruct-q4_k_m.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q4_k_m.gguf) | Q4_K_M | 5.54 GB | medium, balanced quality - recommended |
42
+ | [PhoGPT-7B5-Instruct-q4_1.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q4_1.gguf) | Q4_1 | 5.53 GB | legacy; higher accuracy than Q4_0 but not as high as Q5_0, however has quicker inference than Q5 models.
43
+ | [PhoGPT-7B5-Instruct-q5_0.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q5_0.gguf) | Q5_0 | 6 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
44
+ | [PhoGPT-7B5-Instruct-q5_k_s.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q5_k_s.gguf) | Q5_K_S | 6 GB | large, low quality loss - recommended |
45
+ | [PhoGPT-7B5-Instruct-q5_k_m.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q5_k_m.gguf) | Q5_K_M | 6.35 GB | large, very low quality loss - recommended |
46
+ | [PhoGPT-7B5-Instruct-q5_1.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q5_1.gguf) | Q5_1 | 6.46 GB | legacy; even higher accuracy, resource usage and slower inference.
47
+ | [PhoGPT-7B5-Instruct-q6_k.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q6_k.gguf) | Q6_K | 6.99 GB | very large, extremely low quality loss |
48
+ | [PhoGPT-7B5-Instruct-q8_0.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-q8_0.gguf) | Q8_0 | 9.05 GB | almost indistinguishable from float16. High resource use and slow, not recommended for most users |
49
+ | [PhoGPT-7B5-Instruct-f16.gguf](https://huggingface.co/nguyenviet/PhoGPT-7B5-Instruct-GGUF/blob/main/PhoGPT-7B5-Instruct-f16.gguf) | float16 | 17 GB | very large, extremely low quality loss - not recommended |
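
For reference, here is a minimal sketch (not part of the commit itself) of how one of these quants can be fetched and run, using `huggingface_hub` for the download and `llama-cpp-python` for inference. It assumes `pip install huggingface_hub llama-cpp-python`, that your llama.cpp build supports the MPT architecture this model uses, and that the prompt template matches the one documented in the upstream PhoGPT repository; the context size and sampling parameters are illustrative, not prescribed by this repo.

```python
# Sketch: download the recommended Q4_K_M quant from this repo and run it.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Downloads the file into the local Hugging Face cache and returns its path.
model_path = hf_hub_download(
    repo_id="nguyenviet/PhoGPT-7B5-Instruct-GGUF",
    filename="PhoGPT-7B5-Instruct-q4_k_m.gguf",
)

# Context length of 2048 is an assumption, not a value stated in this card.
llm = Llama(model_path=model_path, n_ctx=2048)

# Prompt template follows the upstream PhoGPT model card (assumption).
prompt = "### Câu hỏi: Viết bài văn nghị luận xã hội về an toàn giao thông\n### Trả lời:"
output = llm(prompt, max_tokens=512, temperature=0.7)
print(output["choices"][0]["text"])
```

Swapping in a different `filename` from the table above trades file size against quality loss as described in the "Use case" column.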