Merge branch 'main' of https://huggingface.co/IlyaGusev/saiga_7b_lora_llamacpp into main
README.md CHANGED
@@ -12,14 +12,14 @@ pipeline_tag: text2text-generation
 Llama.cpp compatible versions of an original [7B model](https://huggingface.co/IlyaGusev/saiga_7b_lora).
 
 * Download one of the versions, for example `ggml-model-q4_1.bin`.
-* Download [
+* Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py)
 
 How to run:
 ```
 sudo apt-get install git-lfs
 pip install llama-cpp-python fire
 
-python3
+python3 interact_llamacpp.py ggml-model-q4_1.bin
 ```
 
 System requirements:
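For readers who prefer not to use the interact_llamacpp.py script, here is a minimal sketch of loading the downloaded file directly with llama-cpp-python. It assumes a llama-cpp-python build that can still read ggml-format files, and the prompt below is a plain placeholder, not the exact Saiga chat template:

```python
# Minimal sketch: load the quantized ggml file with llama-cpp-python and
# run a single completion. Assumes the installed library version can read
# ggml-format files; prompt and sampling parameters are illustrative only.
from llama_cpp import Llama

llm = Llama(model_path="ggml-model-q4_1.bin", n_ctx=2048)

output = llm(
    "Question: Why is the sky blue?\nAnswer:",  # placeholder prompt, not the Saiga template
    max_tokens=256,
    temperature=0.2,
)
print(output["choices"][0]["text"])
```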