Text Generation
Transformers
GGUF
English
llama
uncensored
apepkuss79 committed on
Commit fd348c3 (verified)
1 Parent(s): 413a54b

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -19,7 +19,7 @@ datasets:
  <!-- header start -->
  <!-- 200823 -->
  <div style="width: auto; margin-left: auto; margin-right: auto">
- <img src="https://github.com/second-state/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->
@@ -32,7 +32,7 @@ datasets:
 
  ## Run with LlamaEdge
 
- - LlamaEdge version: [v0.2.8](https://github.com/second-state/LlamaEdge/releases/tag/0.2.8) and above
+ - LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above
 
  - Prompt template
 
@@ -47,13 +47,13 @@ datasets:
  - Run as LlamaEdge service
 
  ```bash
- wasmedge --dir .:. --nn-preload default:GGML:AUTO:wizard-vicuna-13b-ggml-model-q8_0.gguf llama-api-server.wasm -p vicuna-chat
+ wasmedge --dir .:. --nn-preload default:GGML:AUTO:Wizard-Vicuna-13B-Uncensored-Q5_K_M.gguf llama-api-server.wasm -p vicuna-chat
  ```
 
  - Run as LlamaEdge command app
 
  ```bash
- wasmedge --dir .:. --nn-preload default:GGML:AUTO:wizard-vicuna-13b-ggml-model-q8_0.gguf llama-chat.wasm -p vicuna-chat
+ wasmedge --dir .:. --nn-preload default:GGML:AUTO:Wizard-Vicuna-13B-Uncensored-Q5_K_M.gguf llama-chat.wasm -p vicuna-chat
  ```
 
  ## Quantized GGUF Models
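For anyone trying the updated commands, the renamed GGUF file and the LlamaEdge WASM apps have to be present in the working directory first. The sketch below shows one way to fetch them; the Hugging Face repo id `second-state/Wizard-Vicuna-13B-Uncensored-GGUF` and the release asset names are assumptions (they are not stated in this diff), so adjust them to the actual locations.

```bash
# Assumed repo id: second-state/Wizard-Vicuna-13B-Uncensored-GGUF (not stated in the diff).
curl -LO https://huggingface.co/second-state/Wizard-Vicuna-13B-Uncensored-GGUF/resolve/main/Wizard-Vicuna-13B-Uncensored-Q5_K_M.gguf

# Assumed asset names on the LlamaEdge releases page.
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm
```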
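Once `llama-api-server.wasm` is running with the command from the diff, it serves an OpenAI-compatible HTTP API. A minimal request might look like the following sketch, assuming LlamaEdge's default listen address of `0.0.0.0:8080` and the `/v1/chat/completions` route; the `model` value here is illustrative.

```bash
# Query the running llama-api-server (assumes the default port 8080).
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ],
        "model": "Wizard-Vicuna-13B-Uncensored"
      }'
```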