IlyaGusev committed
Commit 14f7cf2
1 Parent(s): ff1636a
.gitattributes CHANGED
@@ -43,3 +43,8 @@ ggml-model-q3_K.gguf filter=lfs diff=lfs merge=lfs -text
 ggml-model-q4_K.gguf filter=lfs diff=lfs merge=lfs -text
 ggml-model-q5_K.gguf filter=lfs diff=lfs merge=lfs -text
 ggml-model-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+model-q3_K.gguf filter=lfs diff=lfs merge=lfs -text
+model-q4_K.gguf filter=lfs diff=lfs merge=lfs -text
+model-q5_K.gguf filter=lfs diff=lfs merge=lfs -text
+model-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+model-q2_K.gguf filter=lfs diff=lfs merge=lfs -text
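The `filter=lfs diff=lfs merge=lfs -text` lines above are the attributes that `git lfs track` writes into `.gitattributes` for each pattern. As a minimal sketch (not part of this commit), the patterns stored via Git LFS can be read back from such a file like this:

```python
def lfs_patterns(gitattributes_text):
    """Return the file patterns whose .gitattributes entries include filter=lfs."""
    patterns = []
    for line in gitattributes_text.splitlines():
        parts = line.split()
        # parts[0] is the pattern; the rest are attributes such as filter=lfs
        if len(parts) >= 2 and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

sample = """\
model-q4_K.gguf filter=lfs diff=lfs merge=lfs -text
README.md
model-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
"""
print(lfs_patterns(sample))  # ['model-q4_K.gguf', 'model-q8_0.gguf']
```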
README.md CHANGED
@@ -15,12 +15,18 @@ license: llama2
 
 Llama.cpp compatible versions of an original [13B model](https://huggingface.co/IlyaGusev/saiga2_13b_lora).
 
-* Download one of the versions, for example `ggml-model-q4_K.gguf`.
-* Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py)
+Download one of the versions, for example `model-q4_K.gguf`.
+```
+wget https://huggingface.co/IlyaGusev/saiga2_13b_gguf/resolve/main/model-q4_K.gguf
+```
+
+Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py)
+```
+wget https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py
+```
 
 How to run:
 ```
-sudo apt-get install git-lfs
 pip install llama-cpp-python fire
 
 python3 interact_llamacpp.py ggml-model-q4_K.gguf
@@ -28,4 +34,4 @@ python3 interact_llamacpp.py ggml-model-q4_K.gguf
 
 System requirements:
 * 18GB RAM for q8_K
-* 10GB RAM for q4_K
+* 10GB RAM for q4_K
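The RAM figures in the README track the quantization bit-width. As a rough sketch (my own estimate, not from the repository; the bits-per-weight values are approximations for llama.cpp K-quants, and real usage adds KV-cache and runtime overhead on top), the weight-file sizes for a 13B model can be estimated like this:

```python
PARAMS = 13e9  # Llama 2 13B parameter count

# Approximate effective bits per weight for common llama.cpp quantizations
BITS_PER_WEIGHT = {
    "q2_K": 2.6,
    "q4_K": 4.5,
    "q5_K": 5.5,
    "q8_0": 8.5,
}

def weight_gb(quant):
    """Estimated size of the quantized weights in GB (decimal)."""
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 1e9

for q in BITS_PER_WEIGHT:
    print(f"{q}: ~{weight_gb(q):.1f} GB of weights")
```

This puts q4_K weights at roughly 7 GB and q8_0 at roughly 14 GB, consistent with the 10 GB and 18 GB total-RAM figures once context and overhead are included.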
ggml-model-q2_K.gguf β†’ model-q2_K.gguf RENAMED
File without changes
ggml-model-q3_K.gguf β†’ model-q3_K.gguf RENAMED
File without changes
ggml-model-q4_K.gguf β†’ model-q4_K.gguf RENAMED
File without changes
ggml-model-q5_K.gguf β†’ model-q5_K.gguf RENAMED
File without changes
ggml-model-q8_0.gguf β†’ model-q8_0.gguf RENAMED
File without changes