Manel committed
Commit b8d0856 · verified · 1 parent: 6a96231

Update README.md

Files changed (1)
  1. README.md +0 -58
README.md CHANGED
@@ -1,58 +0,0 @@
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-2-9b
tags:
- llama-cpp
- gguf-my-repo
---

# Manel/gemma-2-9b-Q4_0-GGUF
This model was converted to GGUF format from [`google/gemma-2-9b`](https://huggingface.co/google/gemma-2-9b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-2-9b) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):

```bash
brew install llama.cpp
```
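If the install succeeded, the `llama-cli` and `llama-server` binaries should be on your PATH; as a quick sanity check (assuming a build recent enough to ship the `--version` flag):

```bash
llama-cli --version
```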
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Manel/gemma-2-9b-Q4_0-GGUF --hf-file gemma-2-9b-q4_0.gguf -p "The meaning of life and the universe is"
```
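The `-p` flag passes a one-shot prompt. To also cap how many tokens are generated, a sketch using the standard `-n`/`--n-predict` option:

```bash
llama-cli --hf-repo Manel/gemma-2-9b-Q4_0-GGUF --hf-file gemma-2-9b-q4_0.gguf -p "The meaning of life and the universe is" -n 128
```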

### Server:
```bash
llama-server --hf-repo Manel/gemma-2-9b-Q4_0-GGUF --hf-file gemma-2-9b-q4_0.gguf -c 2048
```
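Once the server is running, you can query it over HTTP. A minimal sketch against the server's native completion endpoint, assuming the default 127.0.0.1:8080 address:

```bash
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning of life and the universe is", "n_predict": 64}'
```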

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
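For example, an Nvidia GPU build on Linux would combine the two flags above (a sketch; the exact build flags depend on your llama.cpp version and toolchain):

```bash
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```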

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Manel/gemma-2-9b-Q4_0-GGUF --hf-file gemma-2-9b-q4_0.gguf -p "The meaning of life and the universe is"
```
or
```bash
./llama-server --hf-repo Manel/gemma-2-9b-Q4_0-GGUF --hf-file gemma-2-9b-q4_0.gguf -c 2048
```