MaziyarPanahi committed
Commit 7f63b89
1 Parent(s): e2b4ad7
Update README.md

README.md CHANGED
@@ -117,7 +117,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Experiment26-7B-GGUF Experiment26-7B
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Experiment26-7B-GGUF Experiment26-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
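For reference, the corrected download can also be done from Python with `huggingface_hub`; a minimal sketch, assuming the repo and filename from the new line above (the `local_dir` choice mirrors the command's `--local-dir .`):

```python
# Minimal sketch of the same download via the huggingface_hub Python API.
import os

# Enable the accelerated transfer backend before importing huggingface_hub,
# which reads this variable at import time.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="MaziyarPanahi/Experiment26-7B-GGUF",
    filename="Experiment26-7B.Q4_K_M.gguf",
    local_dir=".",  # mirrors --local-dir . in the CLI command
)
print(local_path)
```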
@@ -128,7 +128,7 @@ Windows Command Line users: You can set the environment variable by running `set
 Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 35 -m Experiment26-7B
+./main -ngl 35 -m Experiment26-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
 {system_message}<|im_end|>
 <|im_start|>user
 {prompt}<|im_end|>
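The `-p` argument in the new command is a ChatML template with `{system_message}` and `{prompt}` placeholders. As a rough illustration only (not part of the commit), filling the template and launching the binary from Python might look like this; the message values and the trailing `<|im_start|>assistant` turn are assumptions:

```python
# Rough sketch: fill the ChatML placeholders and invoke llama.cpp's ./main.
# Assumes ./main and the .gguf file are in the current directory.
import subprocess

system_message = "You are a helpful assistant."  # illustrative value
prompt = "Explain GGUF in one sentence."         # illustrative value

chatml = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant"  # assumed final turn, per common ChatML usage
)

# Flags mirror the corrected command above.
subprocess.run(
    ["./main", "-ngl", "35", "-m", "Experiment26-7B.Q4_K_M.gguf",
     "-c", "32768", "--temp", "0.7", "--repeat_penalty", "1.1",
     "-n", "-1", "-p", chatml],
    check=True,
)
```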
@@ -185,7 +185,7 @@ from llama_cpp import Llama
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
 llm = Llama(
-  model_path="./Experiment26-7B
+  model_path="./Experiment26-7B.Q4_K_M.gguf",  # Download the model file first
   n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
   n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
   n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
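To see the fixed `model_path` in a complete call, here is a minimal sketch of the simple completion API around this snippet; the prompt text and sampling arguments are illustrative, not from the README:

```python
# Minimal end-to-end sketch of llama-cpp-python's simple completion API.
from llama_cpp import Llama

llm = Llama(
    model_path="./Experiment26-7B.Q4_K_M.gguf",
    n_ctx=32768,
    n_threads=8,
    n_gpu_layers=35,
)

output = llm(
    "<|im_start|>user\nName the planets in the solar system.<|im_end|>\n<|im_start|>assistant",
    max_tokens=512,       # cap on generated tokens (illustrative)
    stop=["<|im_end|>"],  # stop when the model closes its turn
    echo=False,           # do not repeat the prompt in the output
)
print(output["choices"][0]["text"])
```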
@@ -205,7 +205,7 @@ output = llm(
 
 # Chat Completion API
 
-llm = Llama(model_path="./Experiment26-7B
+llm = Llama(model_path="./Experiment26-7B.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
 llm.create_chat_completion(
     messages = [
         {"role": "system", "content": "You are a story writing assistant."},
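And a sketch of the chat-completion call completed end to end; the user message is invented for the example, and `chat_format="llama-2"` is taken from the new line above:

```python
# Sketch of the chat-completion call continued from the diff above;
# the system message follows the snippet's "story writing assistant" example.
from llama_cpp import Llama

llm = Llama(model_path="./Experiment26-7B.Q4_K_M.gguf", chat_format="llama-2")
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a two-sentence story about llamas."},  # illustrative
    ]
)
print(response["choices"][0]["message"]["content"])
```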