
Prerequisites

The vitalik-7b.csv file contains the QA set required for fine-tuning. The convert.py script converts the CSV file into QA pairs in the Llama 2 chat template.

python convert.py

It generates a vitalik-7b.txt file, which is used as the training data for fine-tuning.
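The conversion can be sketched roughly as follows. This is an illustration, not the actual convert.py: the CSV column names ("question", "answer") and the exact template details are assumptions. Each sample is wrapped in the Llama 2 chat `[INST]` format and prefixed with the `<SFT>` marker that the fine-tuning command below uses as its sample separator.

```python
import csv

# Hypothetical sketch of convert.py. Assumes the CSV has "question" and
# "answer" columns; the real script in this repo may differ.
def convert(csv_path="vitalik-7b.csv", txt_path="vitalik-7b.txt"):
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    with open(txt_path, "w", encoding="utf-8") as out:
        for row in rows:
            # Each QA pair is wrapped in the Llama 2 chat template and
            # prefixed with <SFT>, matching --sample-start '<SFT>' below.
            out.write(f"<SFT>[INST] {row['question']} [/INST] {row['answer']}\n")
```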

Fine-tuning steps

Clone this repo into the llama.cpp/models/ folder.

cd llama.cpp/models
git clone https://huggingface.co/gaianet/vitalik.eth-7b

Move the Llama2-7b-chat base model into the folder.

cd vitalik-7b
mv path/to/llama-2-7b-chat.Q5_K_M.gguf .

From the llama.cpp/models/vitalik-7b folder, run the following command.

../../build/bin/finetune --model-base llama-2-7b-chat.Q5_K_M.gguf --lora-out lora.bin --train-data vitalik-7b.txt --sample-start '<SFT>' --adam-iter 1024

The fine-tuning can take several days to finish. The resulting lora.bin file can then be merged into the base model to produce the fine-tuned model.

../../build/bin/export-lora --model-base llama-2-7b-chat.Q5_K_M.gguf --lora lora.bin --model-out vitalik.eth-7b-q5_k_m.gguf

Learn more about Llama2 model fine-tuning here.

Run with GaiaNet

Prompt template:

prompt template: llama-2-chat

Context size:

chat_ctx_size: 4096
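In a GaiaNet node's config.json, these settings would appear as fields like the following. This is a sketch based on the GaiaNet config convention; the model URL is assembled from this repo's name and the output file produced above, and should be verified against the actual repo:

```json
{
  "chat": "https://huggingface.co/gaianet/vitalik.eth-7b/resolve/main/vitalik.eth-7b-q5_k_m.gguf",
  "prompt_template": "llama-2-chat",
  "chat_ctx_size": "4096"
}
```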


Model details

Format: GGUF
Model size: 6.74B params
Architecture: llama