DavidAU committed
Commit 8b972af · Parent(s): 36c5a44

Upload README.md with huggingface_hub

Files changed: README.md (added, +50 lines)

---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T
- cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser
- cognitivecomputations/TinyDolphin-2.8.1-1.1b
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- llama-cpp
- gguf-my-repo
base_model:
- TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T
- cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser
- cognitivecomputations/TinyDolphin-2.8.1-1.1b
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
---

# DavidAU/Tiny-Llama-Llama-Dolphin-laser-1b-moe-Q6_K-GGUF

This model was converted to GGUF format from [`jtatman/Tiny-Llama-Llama-Dolphin-laser-1b-moe`](https://huggingface.co/jtatman/Tiny-Llama-Llama-Dolphin-laser-1b-moe) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jtatman/Tiny-Llama-Llama-Dolphin-laser-1b-moe) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```
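
To confirm the install, you can check that the binaries are on your PATH; recent llama.cpp builds expose a `--version` flag that prints the build info (a quick sanity check, not a required step):

```bash
# Sanity check: confirm llama.cpp is installed and print its build info
llama-cli --version
```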
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Tiny-Llama-Llama-Dolphin-laser-1b-moe-Q6_K-GGUF --model tiny-llama-llama-dolphin-laser-1b-moe.Q6_K.gguf -p "The meaning to life and the universe is"
```
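
Generation behaviour can be tuned with the standard llama.cpp flags; a minimal sketch (the flags below are standard llama.cpp options, the values are only illustrative):

```bash
# Same CLI invocation with explicit generation settings:
#   -n      number of tokens to generate
#   -c      context size
#   --temp  sampling temperature
llama-cli --hf-repo DavidAU/Tiny-Llama-Llama-Dolphin-laser-1b-moe-Q6_K-GGUF \
  --model tiny-llama-llama-dolphin-laser-1b-moe.Q6_K.gguf \
  -p "The meaning to life and the universe is" \
  -n 128 -c 2048 --temp 0.7
```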

Server:

```bash
llama-server --hf-repo DavidAU/Tiny-Llama-Llama-Dolphin-laser-1b-moe-Q6_K-GGUF --model tiny-llama-llama-dolphin-laser-1b-moe.Q6_K.gguf -c 2048
```
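
Once the server is running you can query it over HTTP; a minimal sketch against the server's `/completion` endpoint, assuming the default bind address of 127.0.0.1:8080:

```bash
# Request a short completion from the running llama-server instance
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```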

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-llama-llama-dolphin-laser-1b-moe.Q6_K.gguf -n 128
```
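
The build-from-source route above assumes the GGUF file is already present locally; a minimal sketch for fetching it first with `huggingface-cli` (from the `huggingface_hub` package):

```bash
# Download the Q6_K quantized weights from the Hugging Face Hub
pip install -U "huggingface_hub[cli]"
huggingface-cli download DavidAU/Tiny-Llama-Llama-Dolphin-laser-1b-moe-Q6_K-GGUF \
  tiny-llama-llama-dolphin-laser-1b-moe.Q6_K.gguf --local-dir .
```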