Commit 048f2d2 · Parent: c84c951 · Update README.md

README.md (changed):
---
license: apache-2.0
---

<center>
<h3>Welcome to llawa</h3>
a.k.a. Llama2 + Wasm QA<br/>

![llawa logo](llawa-logo.png)
</center>

The models in this repo are Llama2 7b chat models further fine-tuned with Wasm-related Q&As.

Instead of struggling with Python and PyTorch, the simplest way to run them on your own laptops, servers, or edge devices is to use the [WasmEdge Runtime](https://github.com/WasmEdge/WasmEdge).

[Learn more](https://medium.com/stackademic/fast-and-portable-llama2-inference-on-the-heterogeneous-edge-a62508e82359) about this fast, lightweight, portable, and ZERO Python dependency approach for running AI applications!