---
license: mit
language:
- en
---

# On-chain llama.cpp - Internet Computer


- These models were created for Internet Computer canisters deployed with [onicai/llama_cpp_canister](https://github.com/onicai/llama_cpp_canister)
- They are used in the LLM canisters of [ICGPT](https://icgpt.icpp.world/)
- The models were trained with the procedure outlined in [karpathy/llama2.c](https://github.com/karpathy/llama2.c)
- You can also run them locally, as described in [karpathy/llama2.c](https://github.com/karpathy/llama2.c); a quick sketch follows below
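For example, a minimal sketch of a local run with llama2.c, assuming you built its `run` binary and downloaded a model+tokenizer pair in the llama2.c `.bin` format (file names are taken from the conversion commands further below):

```bash
# Build karpathy/llama2.c
git clone https://github.com/karpathy/llama2.c
cd llama2.c
make run

# Run a model with its matching tokenizer (-z), temperature 0.8,
# and 256 generation steps
./run stories15Mtok4096.bin -z tok4096.bin -t 0.8 -n 256 -i "Joe loves writing stories"
```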

## Setup git

See: [Getting Started: set-up](https://huggingface.co/docs/hub/repositories-getting-started#set-up)

```bash
# Install the Hugging Face Hub CLI
pip install huggingface-hub

git clone <this-repo>
cd <this-repo>

# Track *.gguf files with Git LFS and allow pushing files larger than 5GB
git lfs install
git lfs track "*.gguf"
huggingface-cli lfs-enable-largefiles .

# Add & push as usual with git
git add <file-name>
git commit -m "Adding <file-name>"
git push -u origin main
```

## TinyStories models

| model | notes |
|-------|-------|
| stories260Ktok512.gguf       | Use this for development & debugging |
| stories15Mtok4096.gguf       | Fits in a canister & works well! |
| stories42Mtok4096.gguf       | As of April 28, hits the canister's instruction limit |
| stories42Mtok32000.gguf (*)  | As of April 28, hits the canister's instruction limit |
| stories110Mtok32000.gguf (*) | As of April 28, hits the canister's instruction limit |

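If you only need one model file, a minimal sketch using `huggingface-cli download` (here `<this-repo>` stands for this repository's `owner/name` id, and the file name is taken from the table above):

```bash
# Download a single model file instead of cloning the whole repo
huggingface-cli download <this-repo> stories15Mtok4096.gguf --local-dir .
```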

We used [convert-llama2c-to-ggml](https://github.com/ggerganov/llama.cpp/tree/32c8486e1f0297393cb22ac0a0d26a6b17ad4d54/examples/convert-llama2c-to-ggml) to convert each llama2.c model+tokenizer pair to the llama.cpp GGUF format.
- Good read: [llama : add support for llama2.c models](https://github.com/ggerganov/llama.cpp/issues/2379)

For example:
```bash
# From the llama.cpp root folder

# Build everything
make -j

# Convert a llama2.c model+tokenizer pair to gguf
./convert-llama2c-to-ggml --llama2c-model stories260Ktok512.bin --copy-vocab-from-model tok512.bin --llama2c-output-model stories260Ktok512.gguf
./convert-llama2c-to-ggml --llama2c-model stories15Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories15Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories42Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories110Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories110Mtok32000.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories42Mtok32000.gguf

# Run it locally, like this
./main -m stories15Mtok4096.gguf -p "Joe loves writing stories" -n 600 -c 128

# Quantization
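# A sketch, assuming the `quantize` tool built by `make -j` above
# (the q4_0 type and output file name are illustrative choices)
./quantize stories15Mtok4096.gguf stories15Mtok4096-q4_0.gguf q4_0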
```

(*) Files marked with an asterisk were not trained by us; they were simply copied from [karpathy/tinyllamas](https://huggingface.co/karpathy/tinyllamas/tree/main) and renamed. We provide them here under a different name for clarity and ease of access.