Update README.md
README.md
@@ -33,7 +33,7 @@ If you want to run it in a CPU-only environment, you may want to check this.
 
 ### Japanese automated benchmark result
 
-Benchmark settings are the same as
+Benchmark settings are the same as [weblab-10b-instruction-sft-GPTQ](https://huggingface.co/dahara1/weblab-10b-instruction-sft-GPTQ)
 
 | Task                 |Version| Metric |Value |   |Stderr|
 |----------------------|------:|--------|-----:|---|-----:|
@@ -90,9 +90,8 @@ output = model.generate(
     eos_token_id=tokenizer.eos_token_id)
 print(tokenizer.decode(output[0]))
 ```
-This is cherry picking result.
-It is relatively easy to follow instructions for writing sentences.
 
+result.
 
 ```
 <s><s> [INST] <<SYS>>
@@ -133,6 +132,8 @@ So if you need high performance, please use the original model.
 
 ### 引用 Citations
 
+This model is based on the work of the following people:
+
 ```tex
 @misc{elyzallama2023,
       title={ELYZA-japanese-Llama-2-7b},
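The `<s>[INST] <<SYS>> … <</SYS>> … [/INST]` markup quoted in the second hunk is the standard Llama-2 chat prompt format. As a minimal sketch of how such a prompt string is assembled (the system and user strings below are placeholders, not taken from this commit):

```python
# Sketch of Llama-2 chat prompt construction, matching the
# "[INST] <<SYS>> ... <</SYS>> ... [/INST]" markup shown in the
# README's sample output. System/user texts here are placeholders.

B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_llama2_prompt(system: str, user: str) -> str:
    """Wrap a system prompt and a user message in Llama-2 chat markup."""
    return f"{B_INST} {B_SYS}{system}{E_SYS}{user} {E_INST}"

prompt = build_llama2_prompt(
    "You are a helpful assistant.",   # placeholder system prompt
    "Write a short story about a bear.",  # placeholder user message
)
print(prompt)
```

The tokenizer typically prepends the `<s>` BOS token itself, which is why the raw decoded output in the README shows it before `[INST]`.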