Commit 21774c6 by munish0838 (parent: dd8915f) — Create README.md

README.md (added):
---
license: apache-2.0
datasets:
- augmxnt/ultra-orca-boros-en-ja-v1
language:
- ja
- en
base_model: augmxnt/shisa-gamma-7b-v1
---

# QuantFactory/shisa-gamma-7b-v1-GGUF

This is a quantized version of [augmxnt/shisa-gamma-7b-v1](https://huggingface.co/augmxnt/shisa-gamma-7b-v1) created using llama.cpp.
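A minimal sketch of fetching and running one of the GGUF files with llama.cpp; the exact quantization filename below is an assumption — check the repository's file list for the variants actually published:

```shell
# Download one quantized file from the repo (filename is an assumption;
# pick a real .gguf from the repository's "Files" tab).
huggingface-cli download QuantFactory/shisa-gamma-7b-v1-GGUF \
  shisa-gamma-7b-v1.Q4_K_M.gguf --local-dir .

# Run it with the llama.cpp CLI (llama-cli in current builds, ./main in older ones).
llama-cli -m shisa-gamma-7b-v1.Q4_K_M.gguf \
  -p "日本の首都はどこですか?" -n 128
```

Smaller quantizations (e.g. Q4) trade some quality for lower memory use; larger ones (e.g. Q8) stay closer to the original weights.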

# Model Description

For more information, see our main [Shisa 7B](https://huggingface.co/augmxnt/shisa-7b-v1) model.

We applied a version of our fine-tuning dataset to [Japanese Stable LM Base Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) and it performed quite well; we are sharing it since it may be of interest.

Check out our [JA MT-Bench results](https://github.com/AUGMXNT/shisa/wiki/Evals-%3A-JA-MT%E2%80%90Bench).

![Comparison vs shisa-7b-v1](https://huggingface.co/augmxnt/shisa-gamma-7b-v1/resolve/main/shisa-comparison.png)

![Comparison vs other recently released JA models](https://huggingface.co/augmxnt/shisa-gamma-7b-v1/resolve/main/ja-comparison.png)