A llama.cpp-compatible version of the original 7B model, converted to ggml format and 4-bit quantized (q4_0).

How to run:

```bash
# Install Git LFS, clone the model repository, and pull the model weights
sudo apt-get install git-lfs
git clone https://huggingface.co/IlyaGusev/llama_7b_ru_turbo_alpaca_lora_llamacpp
cd llama_7b_ru_turbo_alpaca_lora_llamacpp && git lfs install && git lfs pull && cd ..
```
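Before building, it can be worth checking that Git LFS actually fetched the weights rather than leaving small pointer stubs behind. A minimal check, not part of the original instructions; the path follows the layout implied by the run command below:

```bash
# The quantized q4_0 7B model should be on the order of 4 GB;
# a file of only a few hundred bytes is an unfetched LFS pointer.
ls -lh llama_7b_ru_turbo_alpaca_lora_llamacpp/7B/ggml-model-q4_0.bin
```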

```bash
# Build llama.cpp and copy the model files into its models directory
git clone https://github.com/ggerganov/llama.cpp
cp -R llama_7b_ru_turbo_alpaca_lora_llamacpp/* llama.cpp/models/
cd llama.cpp
make

# Run inference; the prompt means "Question: Why is the grass green? Answer:"
./main -m ./models/7B/ggml-model-q4_0.bin -p "Вопрос: Почему трава зеленая? Ответ:" -n 512 --temp 0.1
```
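The same binary can also be run in an interactive, chat-style mode. A hedged sketch, assuming a llama.cpp build from the same period where `main` accepts the `-t`, `-i`, `-r`, `--top_k`, `--top_p`, and `--repeat_penalty` flags (flag names and defaults change between llama.cpp versions):

```bash
# Interactive run: generation pauses and control returns to the user
# whenever the model emits the reverse prompt "Вопрос:" ("Question:").
# -t sets the number of CPU threads; match it to your physical core count.
./main -m ./models/7B/ggml-model-q4_0.bin \
    -t 8 -n 256 --temp 0.1 --top_k 40 --top_p 0.9 --repeat_penalty 1.1 \
    -i -r "Вопрос:" \
    -p "Вопрос: Почему трава зеленая? Ответ:"
```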