anon8231489123 committed on
Commit 4ef20dd
1 Parent(s): 2de81a2

added ggml quantization for cuda model

ggml_README.txt ADDED
@@ -0,0 +1,15 @@
+ The model is for: https://github.com/ggerganov/llama.cpp
+
+ Date: 2023-04-01
+ ggml model file magic: 0x67676a74 ("ggjt" in ASCII)
+ ggml model file version: 1
+
+ Torrent contents:
+ The fine-tune described at https://huggingface.co/chavinlo/gpt4-x-alpaca, converted to ggml format from https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g/blob/f267949dcd5a5e6451933cec3d0b5661f4f9c889/gpt-x-alpaca-13b-native-4bit-128g-cuda.pt
+ Details about the GPTQ quantization process: https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g/blob/f267949dcd5a5e6451933cec3d0b5661f4f9c889/README.md
+
+ Tools used:
+ [1] Conversion to ggml: https://github.com/ggerganov/llama.cpp/blob/3265b102beb7674d010644ca2a1bd30a58f9f6b5/convert.py (together with the extra tokens from [2])
+ [2] Added extra tokens: https://huggingface.co/chavinlo/alpaca-13b/blob/464a0bd1ec16f3a7d5295a0035aff87f307e62f1/added_tokens.json
+ [3] Migration to the latest llama.cpp model format: https://github.com/ggerganov/llama.cpp/blob/3525899277d2e2bdc8ec3f0e6e40c47251608700/migrate-ggml-2023-03-30-pr613.py
+
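
A quick way to sanity-check a download against the header fields listed above is to read the magic and version back from the start of the .bin. A minimal sketch in Python, assuming the file path from this commit and that the ggjt header is two little-endian uint32s (magic, then version), which is the layout llama.cpp uses for this format:

import struct

MODEL_PATH = "gpt4-x-alpaca-13b-ggml-q4_1-from-gptq-4bit-128g/ggml-model-q4_1.bin"  # path from this commit
GGJT_MAGIC = 0x67676A74   # "ggjt", as stated in the README above
EXPECTED_VERSION = 1      # file version, as stated in the README above

with open(MODEL_PATH, "rb") as f:
    # The file starts with two little-endian uint32s: magic, then version.
    magic, version = struct.unpack("<II", f.read(8))

if magic != GGJT_MAGIC:
    raise ValueError(f"bad magic 0x{magic:08x}: not a ggjt model file")
if version != EXPECTED_VERSION:
    raise ValueError(f"file version {version}: this README describes version {EXPECTED_VERSION}")
print("header OK: ggjt, version 1")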
gpt4-x-alpaca-13b-ggml-q4_1-from-gptq-4bit-128g/ggml-model-q4_1.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4a640a1ce33009c244a361c6f87733aacbc2bea90e84d3c304a4c8be2bdf22d
+ size 10173322368
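
Because the .bin is stored as a Git LFS pointer, the oid and size above fully identify the payload, so a download can be verified end to end. A minimal sketch, again assuming the path from this commit; it streams the ~10 GB file in chunks rather than loading it into memory:

import hashlib

MODEL_PATH = "gpt4-x-alpaca-13b-ggml-q4_1-from-gptq-4bit-128g/ggml-model-q4_1.bin"
EXPECTED_OID = "d4a640a1ce33009c244a361c6f87733aacbc2bea90e84d3c304a4c8be2bdf22d"  # oid from the pointer
EXPECTED_SIZE = 10173322368  # size from the pointer, in bytes

h = hashlib.sha256()
size = 0
with open(MODEL_PATH, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)
        size += len(chunk)

if size != EXPECTED_SIZE:
    raise ValueError(f"size mismatch: got {size}, pointer says {EXPECTED_SIZE}")
if h.hexdigest() != EXPECTED_OID:
    raise ValueError("sha256 mismatch: download is corrupt or incomplete")
print("LFS object verified: size and sha256 match the pointer")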