---
license: mit
language:
- en
---
# On-chain llama.cpp - Internet Computer

- These models were created for Internet Computer canisters deployed with onicai/llama_cpp_canister
- They are used in the LLM canisters of ICGPT
- The models were created with the training procedure outlined in karpathy/llama2.c
- You can also run them locally, as described in karpathy/llama2.c (see the sketch below)
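For quick local experimentation with the original llama2.c runner, something along these lines should work. This is a minimal sketch: the `run` flags are those of karpathy/llama2.c, and it assumes the matching `.bin` checkpoint and tokenizer files are on disk.

```bash
# Build the llama2.c runner (assumed workflow; see karpathy/llama2.c for details)
git clone https://github.com/karpathy/llama2.c
cd llama2.c
make run

# Generate a story with a TinyStories checkpoint and its matching tokenizer
# (-z selects the custom tokenizer, -n caps the number of generated tokens)
./run stories15Mtok4096.bin -z tok4096.bin -n 256 -i "Joe loves writing stories"
```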
# Setup git

```bash
pip install huggingface-hub

git clone <this-repo>
cd <this-repo>

git lfs install
git lfs track "*.gguf"
huggingface-cli lfs-enable-largefiles .

# add & push as usual with git
git add <file-name>
git commit -m "Adding <file-name>"
git push -u origin main
```
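To confirm that a `.gguf` file will be pushed through LFS rather than as a regular git blob, you can inspect the tracking state; `git lfs ls-files` and `.gitattributes` are standard git-lfs mechanics:

```bash
# Files already committed via LFS
git lfs ls-files

# The "*.gguf" pattern added by `git lfs track` lives here
cat .gitattributes
```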
# TinyStories models

| model | notes |
|---|---|
| stories260Ktok512.gguf | Use this for development & debugging |
| stories15Mtok4096.gguf | Fits in the canister & works well! |
| stories42Mtok4096.gguf | As of April 28, hits the instruction limit of the canister |
| stories42Mtok32000.gguf (*) | As of April 28, hits the instruction limit of the canister |
| stories110Mtok32000.gguf (*) | As of April 28, hits the instruction limit of the canister |
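If you only need a single model file, the `huggingface-cli download` command from the `huggingface-hub` package installed above avoids cloning the full repo; `<this-repo>` is a placeholder for this repository's id:

```bash
# Download one GGUF file into the current directory
# (<this-repo> is a placeholder; substitute the actual repo id)
huggingface-cli download <this-repo> stories15Mtok4096.gguf --local-dir .
```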
We used `convert-llama2c-to-ggml` to convert the llama2.c model+tokenizer to the llama.cpp GGUF format.

- Good read: llama : add support for llama2.c models
For example:

```bash
# From the llama.cpp root folder

# Build everything
make -j

# Convert a llama2.c model+tokenizer to gguf
./convert-llama2c-to-ggml --llama2c-model stories260Ktok512.bin --copy-vocab-from-model tok512.bin --llama2c-output-model stories260Ktok512.gguf
./convert-llama2c-to-ggml --llama2c-model stories15Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories15Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories42Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories110Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories110Mtok32000.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories42Mtok32000.gguf

# Run it locally, like this
./main -m stories15Mtok4096.gguf -p "Joe loves writing stories" -n 600 -c 128
```
# Quantization
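As a sketch of how a smaller model file could be produced, llama.cpp's `quantize` binary (built by the same `make -j` as above) converts a GGUF file to a lower-precision format; the q8_0 choice and output filename below are purely illustrative:

```bash
# From the llama.cpp root folder: quantize a GGUF model to 8-bit
./quantize stories15Mtok4096.gguf stories15Mtok4096-q8_0.gguf q8_0

# The quantized model runs the same way as the f32 one
./main -m stories15Mtok4096-q8_0.gguf -p "Joe loves writing stories" -n 600 -c 128
```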
(*) Files marked with an asterisk were not trained by us, but simply copied from karpathy/tinyllamas and renamed. We provide them here under a different name for clarity and ease of access.