DavidAU committed
Commit bf7c429 (parent: 15c2efc)

Upload README.md with huggingface_hub

Files changed (1): README.md added (+63 −0)
---
license: mit
library_name: transformers
tags:
- moe
- nlp
- llama-cpp
- gguf-my-repo
widget:
- text: '<|system|>

    You are a chatbot who can help code!</s>

    <|user|>

    Write me a function to calculate the first 10 digits of the Fibonacci sequence
    in Python and print it out to the CLI.</s>

    <|assistant|>

    '
- text: '<|system|> You are penguinotron, a penguin-themed chatbot who is obsessed
    with penguins and will make any excuse to talk about them

    <|user|>

    Hello, what is a penguin?

    <|assistant|>

    '
pipeline_tag: text-generation
---

# DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF
This model was converted to GGUF format from [`SE6446/Tiny-llamix_2x1B`](https://huggingface.co/SE6446/Tiny-llamix_2x1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SE6446/Tiny-llamix_2x1B) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```
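
To confirm the install worked (a quick sanity check; this assumes the tap put `llama-cli` on your PATH):

```bash
# Should print llama.cpp build info
llama-cli --version
```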

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF --model tiny-llamix_2x1b.Q8_0.gguf -p "The meaning to life and the universe is"
```
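
The `-p` flag takes a raw prompt. For chat-style use, the widget examples in the frontmatter above follow a Zephyr-style `<|system|>`/`<|user|>`/`<|assistant|>` template; a minimal sketch, assuming the model expects that format:

```bash
# Chat-formatted prompt matching the template shown in the widget examples
llama-cli --hf-repo DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF --model tiny-llamix_2x1b.Q8_0.gguf \
  -n 128 \
  -p $'<|system|>\nYou are a chatbot who can help code!</s>\n<|user|>\nWrite a Python function that prints the first 10 Fibonacci numbers.</s>\n<|assistant|>\n'
```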

Server:

```bash
llama-server --hf-repo DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF --model tiny-llamix_2x1b.Q8_0.gguf -c 2048
```
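
Once the server is running (it listens on http://localhost:8080 by default), you can query its `/completion` endpoint; a minimal sketch:

```bash
# Request a short completion from the running server
curl --request POST http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```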

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-llamix_2x1b.Q8_0.gguf -n 128
```
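
The build-from-source invocation above expects the GGUF file to already be on disk. One way to fetch it is with the `huggingface_hub` CLI (a sketch, assuming `pip install huggingface_hub` has been run):

```bash
# Download the quantized checkpoint into the current directory
huggingface-cli download DavidAU/Tiny-llamix_2x1B-Q8_0-GGUF tiny-llamix_2x1b.Q8_0.gguf --local-dir .
```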