TheBloke and ryanconley committed
Commit ffcff16
1 Parent(s): e79b44a

Update Readme to Correct 3x Typos in "VMware" (#1)


- Update Readme to Correct 3x Typos in "VMware" (e5db1d8c5da358b4ae635922130f72257e34a343)


Co-authored-by: Ryan Conley <ryanconley@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -17,9 +17,9 @@ license: other
 </div>
 <!-- header end -->
 
-# VMWare's open-llama-7B-open-instruct GGML
+# VMware's open-llama-7B-open-instruct GGML
 
-These files are GGML format model files for [VMWare's open-llama-7B-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct).
+These files are GGML format model files for [VMware's open-llama-7B-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct).
 
 GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
@@ -139,7 +139,7 @@ Thank you to all my generous patrons and donaters!
 
 <!-- footer end -->
 
-# Original model card: VMWare's open-llama-7B-open-instruct
+# Original model card: VMware's open-llama-7B-open-instruct
 
 
 # VMware/open-llama-7B-open-instruct