joeshmoethefunnyone committed on
Commit
b50c8d2
1 Parent(s): 1050deb

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +44 -0
README.md ADDED
---
language:
- en
license: apache-2.0
library_name: gpt-neox
tags:
- pytorch
- causal-lm
- pythia
- llama-cpp
- gguf-my-repo
datasets:
- EleutherAI/pile
---

# joeshmoethefunnyone/pythia-70m-Q8_0-GGUF
This model was converted to GGUF format from [`EleutherAI/pythia-70m`](https://huggingface.co/EleutherAI/pythia-70m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/EleutherAI/pythia-70m) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo joeshmoethefunnyone/pythia-70m-Q8_0-GGUF --model pythia-70m.Q8_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo joeshmoethefunnyone/pythia-70m-Q8_0-GGUF --model pythia-70m.Q8_0.gguf -c 2048
```
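Once the server is running, you can send it requests over HTTP. A minimal sketch, assuming the default listen address of `127.0.0.1:8080` and llama.cpp's built-in `/completion` endpoint:

```bash
# Query the running llama-server; adjust the host/port if you started it
# with --host or --port.
curl --request POST \
  --url http://127.0.0.1:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```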

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pythia-70m.Q8_0.gguf -n 128
```
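The build-from-source example above assumes `pythia-70m.Q8_0.gguf` is already present in the working directory. One way to fetch it, assuming you have the `huggingface_hub` CLI installed (`pip install -U huggingface_hub`):

```bash
# Download the Q8_0 GGUF file from this repo into the current directory
huggingface-cli download joeshmoethefunnyone/pythia-70m-Q8_0-GGUF pythia-70m.Q8_0.gguf --local-dir .
```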