DFofanov78 committed
Commit 6d9f075
1 parent: bd5d519

Upload folder using huggingface_hub

Files changed (5):

1. .gitattributes (+3 -0)
2. README.md (+35 -0)
3. model-q2_K.gguf (+3 -0)
4. model-q4_K.gguf (+3 -0)
5. model-q8_0.gguf (+3 -0)
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+model-q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+model-q4_K.gguf filter=lfs diff=lfs merge=lfs -text
+model-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,35 @@
+---
+datasets:
+- IlyaGusev/ru_turbo_saiga
+- IlyaGusev/ru_sharegpt_cleaned
+- IlyaGusev/oasst1_ru_main_branch
+- IlyaGusev/ru_turbo_alpaca_evol_instruct
+- lksy/ru_instruct_gpt4
+language:
+- ru
+inference: false
+pipeline_tag: conversational
+license: apache-2.0
+---
+
+Llama.cpp-compatible versions of the original [7B model](https://huggingface.co/IlyaGusev/saiga_mistral_7b_lora).
+
+Download one of the versions, for example `model-q4_K.gguf`:
+```
+wget https://huggingface.co/IlyaGusev/saiga_mistral_7b_gguf/resolve/main/model-q4_K.gguf
+```
+
+Download [interact_mistral_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_mistral_llamacpp.py):
+```
+wget https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_mistral_llamacpp.py
+```
+
+How to run:
+```
+pip install llama-cpp-python fire
+
+python3 interact_mistral_llamacpp.py model-q4_K.gguf
+```
+
+System requirements:
+* 10 GB of RAM for q8_0; less for the smaller quantizations
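The RAM note in the README can be sanity-checked against the GGUF file sizes recorded in this commit's LFS pointers. A minimal sketch, assuming roughly 7.24e9 parameters for a Mistral-7B model (an approximation; the exact count is not stated in this commit, and GGUF files also carry metadata):

```python
# Approximate size and bits-per-weight for each quantization in this
# commit, using the byte sizes from the LFS pointer files.
sizes = {
    "q2_K": 3_083_107_232,
    "q4_K": 4_368_450_336,
    "q8_0": 7_695_874_784,
}
n_params = 7.24e9  # assumed parameter count for Mistral-7B (approximation)

for name, size in sizes.items():
    bits = size * 8 / n_params
    print(f"{name}: {size / 2**30:.2f} GiB on disk, ~{bits:.2f} bits/weight")
```

The q8_0 file alone is about 7.2 GiB, which is consistent with the README's 10 GB RAM figure once the KV cache and runtime overhead are added.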
model-q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:178e59c10b8bd6fc31a6596f4a4250aff5b66c370018910fb2ead50a0a57594a
+size 3083107232
model-q4_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2798f33ff63c791a21f05c1ee9a10bc95630b17225c140c197188a3d5cf32644
+size 4368450336
model-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a39fdd999a61231b274ea7ed14aaca0e77e1bd8754699542328a84ceaeba4ab6
+size 7695874784
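The three `.gguf` entries above are Git LFS pointer files: small plain-text stand-ins whose `oid` (a sha256) addresses the real bytes in LFS storage. A minimal sketch of reading such a pointer, using a hypothetical helper `parse_lfs_pointer`:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields.

    Each line of a pointer is "key value"; the actual file contents
    live in LFS storage, addressed by the sha256 in the oid field.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The pointer for model-q8_0.gguf, as recorded in this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:a39fdd999a61231b274ea7ed14aaca0e77e1bd8754699542328a84ceaeba4ab6
size 7695874784
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256 of the real model-q8_0.gguf bytes
print(int(info["size"]))  # 7695874784 bytes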