icpp committed
Commit 3a9132a
1 Parent(s): 86b0f7a

Add .gguf to lfs to track

Files changed (2)
  1. .gitattributes +1 -0
  2. README.md +68 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
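This is the line that `git lfs track "*.gguf"` writes; as a quick check that the pattern is active, standard git and git-lfs commands can be used (the filename below is one of the models in this repo):

```bash
# Expect output "stories15Mtok4096.gguf: filter: lfs"
git check-attr filter -- stories15Mtok4096.gguf

# List the files currently stored via LFS
git lfs ls-files
```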
README.md CHANGED
@@ -1,3 +1,71 @@
  ---
  license: mit
+ language:
+ - en
  ---
+
+ # On-chain llama.cpp - Internet Computer
+
+ - These models were created for Internet Computer canisters deployed with [onicai/llama_cpp_canister](https://github.com/onicai/llama_cpp_canister)
+ - They are used in the LLM canisters of [ICGPT](https://icgpt.icpp.world/)
+ - The models were created with the training procedure outlined in [karpathy/llama2.c](https://github.com/karpathy/llama2.c)
+ - You can also run them locally, as described in [karpathy/llama2.c](https://github.com/karpathy/llama2.c)
+
+ ## Set up git
+
+ See: [Getting Started: set-up](https://huggingface.co/docs/hub/repositories-getting-started#set-up)
+
+ ```bash
+ pip install huggingface-hub
+
+ git clone <this-repo>
+ cd <this-repo>
+
+ # make sure .gguf files go through git LFS,
+ # and allow files larger than 5GB on the Hub
+ git lfs install
+ git lfs track "*.gguf"
+ huggingface-cli lfs-enable-largefiles .
+
+ # add & push as usual with git
+ git add <file-name>
+ git commit -m "Adding <file-name>"
+ git push -u origin main
+ ```
+
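+ If you only need to download a model file rather than push one, here is a minimal sketch, assuming a recent `huggingface-hub` release that includes the `huggingface-cli download` command (the repo id placeholder mirrors `<this-repo>` above):
+
+ ```bash
+ # Fetch a single .gguf without cloning the full repo
+ huggingface-cli download <this-repo-id> stories15Mtok4096.gguf --local-dir ./models
+ ```
+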
+ ## TinyStories models
+
+ | model | notes |
+ |-------|-------|
+ | stories260Ktok512.gguf | Use this for development & debugging |
+ | stories15Mtok4096.gguf | Fits in canister & works well! |
+ | stories42Mtok4096.gguf | As of April 28, hits the canister's instruction limit |
+ | stories42Mtok32000.gguf (*) | As of April 28, hits the canister's instruction limit |
+ | stories110Mtok32000.gguf (*) | As of April 28, hits the canister's instruction limit |
+
+ We used [convert-llama2c-to-ggml](https://github.com/ggerganov/llama.cpp/tree/32c8486e1f0297393cb22ac0a0d26a6b17ad4d54/examples/convert-llama2c-to-ggml) to convert the llama2.c model+tokenizer to the llama.cpp gguf format.
+ - Good read: [llama : add support for llama2.c models](https://github.com/ggerganov/llama.cpp/issues/2379)
+
+ For example:
+ ```bash
+ # From the llama.cpp root folder
+
+ # Build everything
+ make -j
+
+ # Convert a llama2c model+tokenizer to gguf
+ ./convert-llama2c-to-ggml --llama2c-model stories260Ktok512.bin --copy-vocab-from-model tok512.bin --llama2c-output-model stories260Ktok512.gguf
+ ./convert-llama2c-to-ggml --llama2c-model stories15Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories15Mtok4096.gguf
+ ./convert-llama2c-to-ggml --llama2c-model stories42Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories42Mtok4096.gguf
+ ./convert-llama2c-to-ggml --llama2c-model stories110Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories110Mtok32000.gguf
+ ./convert-llama2c-to-ggml --llama2c-model stories42Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories42Mtok32000.gguf
+
+ # Run it locally, like this
+ ./main -m stories15Mtok4096.gguf -p "Joe loves writing stories" -n 600 -c 128
+
+ # Quantization
+ #
+ ```
+
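+ The quantization step is left open above. A minimal sketch, assuming the `quantize` binary that `make -j` builds at this llama.cpp commit, with `q4_0` chosen as the target type:
+
+ ```bash
+ # Quantize to 4-bit q4_0: smaller file and memory footprint, some quality loss
+ ./quantize stories15Mtok4096.gguf stories15Mtok4096-q4_0.gguf q4_0
+
+ # Run the quantized model the same way
+ ./main -m stories15Mtok4096-q4_0.gguf -p "Joe loves writing stories" -n 600 -c 128
+ ```
+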
+ (*) Files marked with an asterisk were not trained by us, but simply copied from [karpathy/tinyllamas](https://huggingface.co/karpathy/tinyllamas/tree/main) and renamed. We provide them here under a different name for clarity and ease of access.
+