Update README.md
README.md
CHANGED
@@ -27,6 +27,8 @@ But the GGUF model may be a bit slower than GPTQ, especially for long text.
 
 You can use [text-generation-webui](https://github.com/oobabooga/text-generation-webui) to run this model fast (about 16 tokens/s on my RTX 3060) on your local PC.
 
+<img title="text-generation-webui-sample" alt="text-generation-webui" src="text-generation-webui-sample.png">
+
 The explanation of [how to install the Japanese text-generation-webui is here](https://webbigdata.jp/post-19926/).
 
 ### simple sample code
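The README section above mentions a GGUF build of the model but the diff does not show the sample code itself. A minimal sketch of running a GGUF model locally with llama-cpp-python — the library choice, the model path, and the prompt template are all assumptions, not taken from the repository:

```python
# Hedged sketch: run a local GGUF model with llama-cpp-python.
# "model.gguf" and the instruction template below are hypothetical
# placeholders; substitute the actual file and format for this model.

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in a simple instruction/response template
    (an assumed format, not the model's documented one)."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

if __name__ == "__main__":
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(model_path="model.gguf", n_ctx=2048)  # hypothetical path
    result = llm(build_prompt("Introduce yourself briefly."), max_tokens=128)
    print(result["choices"][0]["text"])
```

The heavyweight model load is kept under the `__main__` guard so the prompt helper can be reused or tested without the GGUF file present.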