newsletter committed on
Commit aa76890
1 Parent(s): 2c4f116

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +60 -0
README.md ADDED
@@ -0,0 +1,60 @@
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- code
- llama-cpp
- gguf-my-repo
datasets:
- m-a-p/Code-Feedback
- HuggingFaceTB/cosmopedia-100k
- LDJnr/Capybara
- vicgalle/alpaca-gpt4
- glaiveai/glaive-code-assistant-v2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- m-a-p/CodeFeedback-Filtered-Instruction
- jondurbin/airoboros-3.2
- euclaise/WritingPrompts_curated
- derek-thomas/squad-v1.1-t5-question-generation
- reinforz/question_generation_data
- teknium/GPTeacher-General-Instruct
- dim/roleplay_instruct_v2_final
- TIGER-Lab/MathInstruct
- abacusai/SystemChat
- Mihaiii/OpenHermes-2.5-1k-longest-curated
license_name: a
license_link: LICENSE
---

# newsletter/NinjaMouse-2.4B-32L-danube-Q6_K-GGUF
This model was converted to GGUF format from [`trollek/NinjaMouse-2.4B-32L-danube`](https://huggingface.co/trollek/NinjaMouse-2.4B-32L-danube) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/trollek/NinjaMouse-2.4B-32L-danube) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo newsletter/NinjaMouse-2.4B-32L-danube-Q6_K-GGUF --model ninjamouse-2.4b-32l-danube.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo newsletter/NinjaMouse-2.4B-32L-danube-Q6_K-GGUF --model ninjamouse-2.4b-32l-danube.Q6_K.gguf -c 2048
```
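
Once the server is running you can query it over HTTP. A minimal sketch of a request against the server's built-in `/completion` endpoint, assuming it is listening on the default `localhost:8080`:

```bash
# Send a prompt to the running llama-server and ask for up to 128 new tokens
curl --request POST \
  --url http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "The meaning to life and the universe is", "n_predict": 128}'
```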

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m ninjamouse-2.4b-32l-danube.Q6_K.gguf -n 128
```
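
The locally built `./main` binary expects the GGUF file to be present on disk. One way to fetch it is with the `huggingface_hub` command-line tool; a minimal sketch, assuming `huggingface-cli` is available via `pip install huggingface_hub` and the file name matches the one shown in this repo:

```bash
# Download the quantized model file from this repo into the current directory
pip install -U huggingface_hub
huggingface-cli download newsletter/NinjaMouse-2.4B-32L-danube-Q6_K-GGUF \
  ninjamouse-2.4b-32l-danube.Q6_K.gguf --local-dir .
```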