# On-chain llama2.c - Internet Computer
- Run on-chain (Internet Computer) with icpp_llm/llama2_c
- Run locally with karpathy/llama2.c
- Try them out at ICGPT
- The models were created with the training procedure outlined in karpathy/llama2.c
## TinyStories models
| model | tokenizer | notes |
|---|---|---|
| stories260Ktok512.bin | tok512.bin | Use this for development & debugging |
| stories15Mtok4096.bin | tok4096.bin | Fits in a canister & works well! |
| stories42Mtok4096.bin | tok4096.bin | As of April 28, hits the canister instruction limit |
| stories42Mtok32000.bin (*) | tok32000.bin (*) | As of April 28, hits the canister instruction limit |
| stories110Mtok32000.bin (*) | tok32000.bin (*) | As of April 28, hits the canister instruction limit |
(*) Files marked with an asterisk were not trained by us; they were copied from karpathy/tinyllamas and renamed. We provide them here under a different name for clarity and ease of access.
## Set up local git with lfs

```shell
# Install git lfs
# Ubuntu
sudo apt-get install git-lfs
# Mac
brew install git-lfs
# Initialize git lfs
git lfs install

# Install the Hugging Face CLI tools in a python environment
pip install huggingface-hub

# Clone this repo
# https
git clone https://huggingface.co/onicai/llama2_c_canister_models
# ssh
git clone git@hf.co:onicai/llama2_c_canister_models
cd llama2_c_canister_models

# Configure lfs for the local repo
huggingface-cli lfs-enable-largefiles .

# Tell lfs which files to track (.gitattributes)
git lfs track "*.bin"

# Add, commit & push as usual with git
git add <file-name>
git commit -m "Adding <file-name>"
git push -u origin main
```