jodog0412 committed on
Commit 30fd7a1 · verified · 1 Parent(s): c80da59

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -34,16 +34,16 @@ pip install -U "huggingface_hub[cli]"
 Then, you can target the specific file you want:
 
 ```
-huggingface-cli download bartowski/gemma-2-9b-it-GGUF --include "gemma-2-9b-it-Q4_K_M.gguf" --local-dir ./
+huggingface-cli download jodog0412/gemma-2-27b-it-Q4_K_M --include "gemma-2-instruct-Q4_K_M.gguf" --local-dir ./
 ```
 
 If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
 
 ```
-huggingface-cli download bartowski/gemma-2-9b-it-GGUF --include "gemma-2-9b-it-Q8_0.gguf/*" --local-dir gemma-2-9b-it-Q8_0
+huggingface-cli download jodog0412/gemma-2-27b-it-Q4_K_M --include "gemma-2-instruct-Q4_K_M.gguf/*" --local-dir gemma-2-instruct-Q4_K_M
 ```
 
-You can either specify a new local-dir (gemma-2-9b-it-Q8_0) or download them all in place (./)
+You can either specify a new local-dir (gemma-2-instruct-Q4_K_M) or download them all in place (./)
 ## Which file should I choose?
 A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
 The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
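As a side note, the `huggingface-cli download` invocations in this diff can also be assembled programmatically before shelling out. A minimal Python sketch — the repo and file names are the ones appearing in the diff above, and `build_download_cmd` is a hypothetical helper, not part of `huggingface_hub`:

```python
import shlex


def build_download_cmd(repo_id: str, include: str, local_dir: str) -> str:
    """Compose a `huggingface-cli download` command line.

    `shlex.quote` guards each token in case a pattern or path
    contains shell-special characters (e.g. a `*` glob).
    """
    parts = [
        "huggingface-cli", "download", repo_id,
        "--include", include,
        "--local-dir", local_dir,
    ]
    return " ".join(shlex.quote(p) for p in parts)


# Repo and file names taken from the updated README in this commit.
cmd = build_download_cmd(
    "jodog0412/gemma-2-27b-it-Q4_K_M",
    "gemma-2-instruct-Q4_K_M.gguf",
    "./",
)
print(cmd)
```

Running the command itself (e.g. via `subprocess.run(shlex.split(cmd))`) requires network access and the `huggingface_hub[cli]` install shown earlier in the README.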