---
datasets:
  - IlyaGusev/ru_turbo_alpaca
  - IlyaGusev/ru_turbo_saiga
  - IlyaGusev/ru_sharegpt_cleaned
  - IlyaGusev/oasst1_ru_main_branch
  - IlyaGusev/ru_turbo_alpaca_evol_instruct
  - lksy/ru_instruct_gpt4
language:
  - ru
inference: false
pipeline_tag: conversational
license: llama2
---

# saiga2_13b_gguf

Llama.cpp-compatible GGUF versions of the original Saiga2 13B model.

How to run:

```bash
sudo apt-get install git-lfs
pip install llama-cpp-python fire

python3 interact_llamacpp.py ggml-model-q4_K.gguf
```
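The repository's `interact_llamacpp.py` handles prompt formatting and generation; as a rough sketch of what such a script does with the `llama-cpp-python` API, the snippet below builds a Saiga-style chat prompt and runs it through the GGUF model. The `<s>{role}\n{content}</s>` template and the `build_prompt`/`generate` helpers are assumptions for illustration; the script in the repo is the authoritative version.

```python
from typing import Dict, List

# Saiga-style chat template (an assumption; verify against interact_llamacpp.py).
def build_prompt(messages: List[Dict[str, str]]) -> str:
    # Each turn is wrapped as <s>{role}\n{content}</s>; the model then
    # continues generating after the opening "<s>bot" tag.
    parts = [f"<s>{m['role']}\n{m['content']}</s>" for m in messages]
    return "".join(parts) + "<s>bot\n"

def generate(model_path: str, messages: List[Dict[str, str]], max_tokens: int = 256) -> str:
    # Imported lazily so the template helper works without llama-cpp-python installed.
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, n_ctx=2048)
    out = llm(build_prompt(messages), max_tokens=max_tokens, stop=["</s>"])
    return out["choices"][0]["text"]
```

Example call: `generate("ggml-model-q4_K.gguf", [{"role": "user", "content": "Привет!"}])`.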

System requirements:

* 18GB RAM for q8_K
* 8GB RAM for q4_K