---
license: mit
language:
  - en
---

# On-chain llama.cpp - Internet Computer

## Set up git

See the Hugging Face Hub docs: *Getting Started: set-up*.

```bash
pip install huggingface-hub
```

```bash
git clone <this-repo>
cd <this-repo>

git lfs install
git lfs track "*.gguf"
huggingface-cli lfs-enable-largefiles .

# add & push as usual with git
git add <file-name>
git commit -m "Adding <file-name>"
git push -u origin main
```
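
If you only need a single model file rather than a full clone, `huggingface-cli` can fetch it directly. A minimal sketch, assuming a recent `huggingface-hub` release and with `<this-repo>` again standing in for this repository's id:

```bash
# Download one .gguf into the local Hugging Face cache
# and print the cached path (<this-repo> is a placeholder).
huggingface-cli download <this-repo> stories15Mtok4096.gguf
```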

## TinyStories models

| model | notes |
| --- | --- |
| stories260Ktok512.gguf | Use this for development & debugging |
| stories15Mtok4096.gguf | Fits in canister & works well! |
| stories42Mtok4096.gguf | As of April 28, hits the canister's instruction limit |
| stories42Mtok32000.gguf (*) | As of April 28, hits the canister's instruction limit |
| stories110Mtok32000.gguf (*) | As of April 28, hits the canister's instruction limit |

We used llama.cpp's `convert-llama2c-to-ggml` tool to convert each llama2.c model + tokenizer to the llama.cpp GGUF format.

For example:

```bash
# From the llama.cpp root folder

# Build everything
make -j

# Convert a llama2.c model + tokenizer to gguf
./convert-llama2c-to-ggml --llama2c-model stories260Ktok512.bin --copy-vocab-from-model tok512.bin --llama2c-output-model stories260Ktok512.gguf
./convert-llama2c-to-ggml --llama2c-model stories15Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories15Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories42Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories110Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories110Mtok32000.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories42Mtok32000.gguf

# Run the result locally, like this
./main -m stories15Mtok4096.gguf -p "Joe loves writing stories" -n 600 -c 128
```

## Quantization
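
A hedged sketch of one option: at the time of writing, `make -j` in the llama.cpp root also builds a `quantize` binary, and `q4_0` is one illustrative quantization type (the output filename here is just our own convention):

```bash
# Quantize a gguf model to 4-bit (q4_0); input file, output file,
# and type are positional arguments of llama.cpp's quantize tool.
./quantize stories15Mtok4096.gguf stories15Mtok4096-q4_0.gguf q4_0
```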

(*) Files marked with an asterisk were not trained by us; they were copied from karpathy/tinyllamas and renamed. We provide them here under a different name for clarity and ease of access.