juntaoyuan committed
Commit f2ed886
1 parent: 048f2d2

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -4,14 +4,14 @@ license: apache-2.0
 
 <center>
 <h3>Welcome to llawa</h3>
-a.k.a Llama2 + Wasm QA<br/>
-![llawa logo](llawa-logo.png)
+<img src="https://huggingface.co/juntaoyuan/llawa/resolve/main/llawa-logo.png"/>
+<br/><i>a.k.a Llama2 + Wasm QA</i>
 </center>
 
 
 The models in this repo are Llama2 7b chat models further fine-tuned with Wasm-related Q&As.
 Instead of struggling with Python and PyTorch, the simplest way to run them on your own laptops, servers, or edge devices is to use the [WasmEdge Runtime](https://github.com/WasmEdge/WasmEdge).
-[Learn more](https://medium.com/stackademic/fast-and-portable-llama2-inference-on-the-heterogeneous-edge-a62508e82359) about this fast, lightweight, portable, and ZERO Python dependency approach for running AI applications!
+Learn more about this [fast, lightweight, portable, and ZERO Python dependency approach](https://medium.com/stackademic/fast-and-portable-llama2-inference-on-the-heterogeneous-edge-a62508e82359) for running AI applications!
 
 1. Install WasmEdge
 