apepkuss79 committed on
Commit
19c7cc6
1 Parent(s): b5ee25e

Update README.md

Files changed (1): README.md (+10 −3)
README.md CHANGED
````diff
@@ -26,7 +26,7 @@ quantized_by: Second State Inc.
 
 ## Run with LlamaEdge
 
-- LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above
+- LlamaEdge version: [v0.12.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.3) and above
 
 - Prompt template
 
@@ -47,13 +47,20 @@ quantized_by: Second State Inc.
 - Run as LlamaEdge service
 
   ```bash
-  wasmedge --dir .:. --nn-preload default:GGML:AUTO:TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf llama-api-server.wasm -p chatml
+  wasmedge --dir .:. --nn-preload default:GGML:AUTO:TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf \
+  llama-api-server.wasm \
+  --prompt-template zephyr \
+  --ctx-size 2048 \
+  --model-name TinyLlama-1.1B-Chat
   ```
 
 - Run as LlamaEdge command app
 
   ```bash
-  wasmedge --dir .:. --nn-preload default:GGML:AUTO:TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf llama-chat.wasm -p chatml
+  wasmedge --dir .:. --nn-preload default:GGML:AUTO:TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf \
+  llama-chat.wasm \
+  --prompt-template zephyr \
+  --ctx-size 2048
   ```
 
 ## Quantized GGUF Models
````
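Once the updated service command is running, `llama-api-server.wasm` exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it, assuming the server's default port 8080 and the `/v1/chat/completions` endpoint, with the model name matching the `--model-name` flag from the diff above:

```shell
# Query the running LlamaEdge API server (assumes default port 8080
# and the OpenAI-compatible /v1/chat/completions endpoint).
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "TinyLlama-1.1B-Chat",
        "messages": [
          {"role": "user", "content": "What is the capital of France?"}
        ]
      }'
```

The response follows the OpenAI chat-completions JSON shape, so existing OpenAI client libraries can be pointed at this server by overriding the base URL.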