
On-chain llama.cpp - Internet Computer

TinyStories models

| model | notes |
| ----- | ----- |
| stories260Ktok512.gguf | Use this for development & debugging |
| stories15Mtok4096.gguf | Fits in canister & works well! |
| stories42Mtok4096.gguf | As of April 28, hits the instruction limit of the canister |
| stories42Mtok32000.gguf | As of April 28, hits the instruction limit of the canister |
| stories110Mtok32000.gguf | As of April 28, hits the instruction limit of the canister |

Set up local git with LFS

See: Getting Started: set-up

# install git lfs
# Ubuntu
sudo apt-get install git-lfs
# Mac
brew install git-lfs

# initialize git lfs for your user account (both platforms)
git lfs install

# install the Hugging Face CLI tools in a Python environment
pip install huggingface-hub

# Clone this repo
# https
git clone https://huggingface.co/onicai/llama_cpp_canister_models
# ssh
git clone git@hf.co:onicai/llama_cpp_canister_models

cd llama_cpp_canister_models

# configure lfs for local repo
huggingface-cli lfs-enable-largefiles .

# tell lfs what files to track (.gitattributes)
git lfs track "*.gguf"

# add, commit & push as usual with git
git add <file-name>
git commit -m "Adding <file-name>"
git push -u origin main
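
To fetch a single model file without cloning the whole repo, the Hugging Face CLI installed above can download it directly. A minimal sketch (the filename is just an example; any model from the table above works):

huggingface-cli download onicai/llama_cpp_canister_models stories15Mtok4096.gguf --local-dir .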

Model creation

We used convert-llama2c-to-ggml to convert the llama2.c model+tokenizer pairs to the llama.cpp GGUF format.

For example:

# From llama.cpp root folder

# Build everything
make -j

# Convert a llama2c model+tokenizer to gguf
./convert-llama2c-to-ggml --llama2c-model stories260Ktok512.bin --copy-vocab-from-model tok512.bin --llama2c-output-model stories260Ktok512.gguf
./convert-llama2c-to-ggml --llama2c-model stories15Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories15Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories42Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories110Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories110Mtok32000.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories42Mtok32000.gguf

# Run it locally, like this
./main -m stories15Mtok4096.gguf -p "Joe loves writing stories" -n 600 -c 128

# Quantization
# TODO
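# Hedged sketch, not this repo's validated workflow: llama.cpp's quantize
# tool (built by `make -j` above) can shrink a gguf, e.g. to 8-bit.
# The output filename below is an example.
./quantize stories15Mtok4096.gguf stories15Mtok4096-q8_0.gguf q8_0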