Felladrin committed
Commit
ca36310
1 Parent(s): c98195f

Update README.md

Files changed (1)
  1. README.md +11 -9
README.md CHANGED
@@ -5,22 +5,24 @@ base_model: Felladrin/TinyMistral-248M-Chat-v2

GGUF version of [Felladrin/TinyMistral-248M-Chat-v2](https://huggingface.co/Felladrin/TinyMistral-248M-Chat-v2).

- ## Usage with llama.cpp
+ ## Try it with [llama.cpp](https://github.com/ggerganov/llama.cpp)

```bash
brew install ggerganov/ggerganov/llama.cpp
```
-
```bash
llama-cli \
--hf-repo Felladrin/gguf-TinyMistral-248M-Chat-v2 \
--model TinyMistral-248M-Chat-v2.Q8_0.gguf \
- -p "<|im_start|>system\nYou are a helpful assistant who answers user's questions with details and curiosity.<|im_end|>\n<|im_start|>user\nWhat are some potential applications for quantum computing?<|im_end|>\n<|im_start|>assistant\n" \
- -e \
- --dynatemp-range "0.1-0.35" \
- --min-p 0.05 \
+ --random-prompt \
+ --dynatemp-range "0.1-2.5" \
+ --top-k 0 \
+ --top-p 1 \
+ --min-p 0.1 \
+ --typical 0.85 \
+ --mirostat 2 \
+ --mirostat-ent 3.5 \
--repeat-penalty 1.1 \
- -c 2048 \
- -n 250 \
- -r "<|im_end|>"
+ --repeat-last-n -1 \
+ -n 256
```
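
The updated command uses `--random-prompt`, so it generates from a random prompt rather than a chat exchange. If you want a chat-style run instead, a minimal sketch below recombines the ChatML prompt flags (`-p`, `-e`, `-r`) from the previous revision; every flag in it appears in the diff above, and it is only an illustration, not part of this commit.

```bash
# Sketch: chat-style invocation reusing the ChatML prompt flags
# from the previous revision of this README.
llama-cli \
  --hf-repo Felladrin/gguf-TinyMistral-248M-Chat-v2 \
  --model TinyMistral-248M-Chat-v2.Q8_0.gguf \
  -e \
  -p "<|im_start|>system\nYou are a helpful assistant who answers user's questions with details and curiosity.<|im_end|>\n<|im_start|>user\nWhat are some potential applications for quantum computing?<|im_end|>\n<|im_start|>assistant\n" \
  -r "<|im_end|>" \
  -n 256
```

Here `-e` tells llama-cli to expand the `\n` escapes inside the prompt string, and `-r "<|im_end|>"` halts generation when the end-of-turn token is produced, as in the previous revision.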