Update README.md #2
by jocastroc - opened

README.md CHANGED
````diff
@@ -160,7 +160,7 @@ pip3 install huggingface-hub>=0.17.1
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download TheBloke/Llama-2-7B-32K-Instruct-GGUF llama-2-7b-32k-instruct.
+huggingface-cli download TheBloke/Llama-2-7B-32K-Instruct-GGUF llama-2-7b-32k-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 <details>
@@ -183,7 +183,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-7B-32K-Instruct-GGUF llama-2-7b-32k-instruct.
+HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-7B-32K-Instruct-GGUF llama-2-7b-32k-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
@@ -196,7 +196,7 @@ Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running
 Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m llama-2-7b-32k-instruct.
+./main -ngl 32 -m llama-2-7b-32k-instruct.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST]\n{prompt}\n[\INST]"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
@@ -236,7 +236,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7B-32K-Instruct-GGUF", model_file="llama-2-7b-32k-instruct.
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7B-32K-Instruct-GGUF", model_file="llama-2-7b-32k-instruct.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
````
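For readers who prefer to do the download from Python rather than the CLI, here is a minimal sketch equivalent to the updated command in the first hunk. It assumes `huggingface-hub>=0.17.1` (and optionally `hf_transfer`) is installed, as the README's install steps describe; setting `HF_HUB_ENABLE_HF_TRANSFER` before the import mirrors the accelerated-download shell example.

```python
# Minimal sketch: Python equivalent of the updated huggingface-cli download command.
# Assumes `pip3 install huggingface-hub>=0.17.1` and, for the fast path, `pip3 install hf_transfer`.
import os

# Optional: enable the hf_transfer accelerator. huggingface_hub reads this variable
# at import time, so it must be set before the import below.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-32K-Instruct-GGUF",
    filename="llama-2-7b-32k-instruct.Q4_K_M.gguf",
    local_dir=".",                 # download into the current directory
    local_dir_use_symlinks=False,  # store the real file rather than a symlink into the cache
)
print(local_path)
```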
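And an illustrative sketch of running the new ctransformers snippet with the Llama-2 `[INST]` prompt wrapping used in the llama.cpp command. The `context_length`, `temperature`, and `repetition_penalty` values are assumptions meant to mirror `-c 4096`, `--temp 0.7`, and `--repeat_penalty 1.1`; they are not taken from the PR.

```python
# Sketch only: ctransformers inference with an instruction-style prompt.
# Assumes `pip install ctransformers>=0.2.24` as in the README.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-32K-Instruct-GGUF",
    model_file="llama-2-7b-32k-instruct.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,        # set to 0 if no GPU acceleration is available
    context_length=4096,  # assumption: mirrors -c 4096 from the llama.cpp command
)

prompt = "Write a story about llamas"
formatted = f"[INST]\n{prompt}\n[/INST]"  # assumption: standard Llama-2 instruction wrapper

# Assumption: these keyword arguments mirror --temp 0.7 and --repeat_penalty 1.1.
print(llm(formatted, max_new_tokens=256, temperature=0.7, repetition_penalty=1.1))
```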