---
license: mit
language:
- en
---

# On-chain llama.cpp - Internet Computer

- Run on-chain (Internet Computer) with [onicai/llama_cpp_canister](https://github.com/onicai/llama_cpp_canister)
- Run locally with [ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
- Try them out at [ICGPT](https://icgpt.icpp.world/)
- The models were created with the training procedure outlined in [karpathy/llama2.c](https://github.com/karpathy/llama2.c) and then converted to the *.gguf format as described below.

## TinyStories models

| model | notes |
|-------|-------|
| stories260Ktok512.gguf | Use this for development & debugging |
| stories15Mtok4096.gguf | Fits in a canister and works well! |
| stories42Mtok4096.gguf | As of April 28, hits the canister's instruction limit |
| stories42Mtok32000.gguf | As of April 28, hits the canister's instruction limit |
| stories110Mtok32000.gguf | As of April 28, hits the canister's instruction limit |
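
To try a single model without cloning the full repo, a minimal sketch using the Hugging Face CLI (installed below via `pip install huggingface-hub`; pick any filename from the table above):

```bash
# Download one .gguf from this repo into the current directory
huggingface-cli download onicai/llama_cpp_canister_models stories15Mtok4096.gguf --local-dir .
```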

## Set up local git with LFS

See: [Getting Started: set-up](https://huggingface.co/docs/hub/repositories-getting-started#set-up)

```bash
# Install git-lfs
# Ubuntu
sudo apt-get install git-lfs
# Mac
brew install git-lfs

# Initialize git-lfs (one time per user account)
git lfs install

# Install the Hugging Face CLI tools in a python environment
pip install huggingface-hub

# Clone this repo
# https
git clone https://huggingface.co/onicai/llama_cpp_canister_models
# ssh
git clone git@hf.co:onicai/llama_cpp_canister_models

cd llama_cpp_canister_models

# Configure LFS for the local repo
huggingface-cli lfs-enable-largefiles .

# Tell LFS which files to track (recorded in .gitattributes)
git lfs track "*.gguf"

# Add, commit & push as usual with git
git add <file-name>
git commit -m "Adding <file-name>"
git push -u origin main
```
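
Before pushing a large model file, it can help to confirm that LFS is actually tracking it; a quick optional check, assuming the steps above ran cleanly:

```bash
# The tracking rule should now be recorded in .gitattributes
cat .gitattributes

# After `git add`, model files should appear here (stored via LFS, not plain git)
git lfs ls-files
```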

## Model creation

We used [convert-llama2c-to-ggml](https://github.com/ggerganov/llama.cpp/tree/32c8486e1f0297393cb22ac0a0d26a6b17ad4d54/examples/convert-llama2c-to-ggml) to convert each llama2.c model + tokenizer to the llama.cpp gguf format.

- Good read: [llama : add support for llama2.c models](https://github.com/ggerganov/llama.cpp/issues/2379)

For example:
```bash
# From the llama.cpp root folder

# Build everything
make -j

# Convert a llama2.c model + tokenizer to gguf
./convert-llama2c-to-ggml --llama2c-model stories260Ktok512.bin --copy-vocab-from-model tok512.bin --llama2c-output-model stories260Ktok512.gguf
./convert-llama2c-to-ggml --llama2c-model stories15Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories15Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories42Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories110Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories110Mtok32000.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories42Mtok32000.gguf

# Run a converted model locally: -n caps the generated tokens, -c sets the context size
./main -m stories15Mtok4096.gguf -p "Joe loves writing stories" -n 600 -c 128

# Quantization
# TODO
```
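
Quantization is still marked TODO above. As a possible starting point, a sketch using the `quantize` tool that `make -j` also builds at the pinned commit (newer llama.cpp versions rename it `llama-quantize`; the Q8_0 type and output filename here are illustrative assumptions, not a tested recipe):

```bash
# Produce an 8-bit (Q8_0) quantized copy of the 15M model
./quantize stories15Mtok4096.gguf stories15Mtok4096-Q8_0.gguf Q8_0
```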