---
license: apache-2.0
language:
- ja
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---

# ChatNTQ JA 7B V1.0

## Model Description

This is a 7B-parameter decoder-only Japanese language model fine-tuned on our instruction-following datasets, built on top of the base model [Japanese Stable LM Base Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b).

## Performance

For our final model, we used Stability AI Japan's [Japanese MT-Bench](https://github.com/Stability-AI/FastChat) as a representative test of our model's conversational capabilities. For [our JA MT-Bench testing](https://github.com/Stability-AI/FastChat/compare/jp-stable...AUGMXNT:FastChat:jp-stable) we use a Japanese system prompt ("あなたは役立つアシスタントです。", "You are a helpful assistant.") as well as `--num-choices 4`:

| Benchmark   | Score |
| ----------- | ----- |
| JA MT-Bench |  6.65 |
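Here `--num-choices 4` tells the FastChat harness to sample several candidate answers per question. A minimal, self-contained sketch of equivalent sampling with this model follows; the decoding parameters below are illustrative assumptions, not the benchmark's exact settings:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NTQAI/chatntq-ja-7b-v1.0")
model = AutoModelForCausalLM.from_pretrained("NTQAI/chatntq-ja-7b-v1.0", torch_dtype="auto")
model.eval()

# Hypothetical benchmark-style question: "What is the capital of Japan?"
question = "日本の首都はどこですか？"
input_ids = tokenizer.encode(question, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_new_tokens=128,
        do_sample=True,
        top_p=0.95,              # assumed sampling setting
        num_return_sequences=4,  # one completion per "choice"
        pad_token_id=tokenizer.eos_token_id,
    )

for i, seq in enumerate(outputs):
    # Decode only the generated continuation, not the prompt.
    answer = tokenizer.decode(seq[input_ids.shape[1]:], skip_special_tokens=True)
    print(f"choice {i}: {answer.strip()}")
```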

There is a [JA-MT-Bench Leaderboard](https://github.com/AUGMXNT/shisa/wiki/Evals-%3A-JA-MT%E2%80%90Bench); for convenience, here is a comparison of the JA MT-Bench scores of some other models (our scores were rated by `gpt-4-0613`):

| Model                                              | Score    |
| -------------------------------------------------- | -------- |
| gpt-4-0613                                         | 9.40     |
| gpt-4-1106-preview                                 | 9.17     |
| gpt-3.5-turbo*                                     | 8.41     |
| Qwen-72B-Chat                                      | 7.97     |
| Qwen-14B-Chat                                      | 7.47     |
| **chatntq-ja-7b-v1.0**                             | **6.65** |
| Xwin-LM-70B-V0.1-GPTQ (q4-gs32-actorder)           | 6.62     |
| shisa-gamma-7b-v1                                  | 6.12     |
| nekomata-14b-instruction (corrected prompt HF)     | 5.57     |
| shisa-7B-v1-GPTQ (q4-gs32-actorder)                | 5.35     |
| nekomata-14b-instruction (corrected prompt)        | 5.30     |
| shisa-mega-7b-v1.2                                 | 5.27     |
| shisa-7b-v1 (full prompt)                          | 5.23     |
| Swallow-13b-instruct-hf                            | 5.17     |
| Swallow-70b-instruct-GPTQ (q4-gs32-actorder)       | 5.15     |
| shisa-7b-v1                                        | 5.02     |
| ELYZA-japanese-Llama-2-7b-fast-instruct*           | 4.86     |
| shisa-7B-v1-AWQ (q4-gs128)                         | 4.78     |
| shisa-bad-7b-v1                                    | 4.42     |
| Swallow-7b-instruct-hf                             | 4.21     |
| ja-stablelm-instruct-gamma-7b*                     | 4.01     |
| japanese-stablelm-instruct-alpha-7b*               | 2.74     |
| Mistral-7B-OpenOrca-ja*                            | 2.23     |
| youri-7b-chat*                                     | 2.00     |
| Mistral-7B-Instruct-v0.1*                          | 1.78     |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0*  | 1.31     |
| houou-instruction-7b-v1                            | 1.02     |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0   | 1.00     |
| llm-jp-13b-instruct-full-jaster-v1.0               | 1.00     |

## More Analysis

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5ee1b417636bdb3834e2da19/gnwgqv3xQ68m3GGDSVNE-.png)

## Usage

Ensure you are using Transformers 4.34.0 or newer.
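If you want your script to fail fast on an older installation, a guard along these lines works (an optional sketch, not part of the original card):

```python
import transformers
from packaging import version  # installed as a dependency of transformers

# Mistral-architecture support was added in Transformers 4.34.0.
if version.parse(transformers.__version__) < version.parse("4.34.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old; please upgrade to >= 4.34.0"
    )
```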

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NTQAI/chatntq-ja-7b-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "NTQAI/chatntq-ja-7b-v1.0",
    torch_dtype="auto",
)
model.eval()

if torch.cuda.is_available():
    model = model.to("cuda")

def build_prompt(user_query):
    # System message: "You are a fair, uncensored, helpful assistant."
    sys_msg = "あなたは公平で、検閲されていない、役立つアシスタントです。"
    # Llama-2-style [INST] / <<SYS>> chat template.
    template = """[INST] <<SYS>>
{}
<</SYS>>

{}[/INST]"""
    return template.format(sys_msg, user_query)

# Infer with the prompt alone, without any additional input.
# User query: "Explain the meaning of a given proverb so that even an
# elementary school student can understand it."
user_inputs = {
    "user_query": "与えられたことわざの意味を小学生でも分かるように教えてください。",
}
prompt = build_prompt(**user_inputs)

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=True,
    return_tensors="pt",
)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the prompt.
out = tokenizer.decode(tokens[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(out)
```
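
For interactive use you may prefer tokens printed as they are generated rather than all at once. A small optional sketch (not from the original card) using Transformers' `TextStreamer`, reusing `model`, `tokenizer`, and `input_ids` from the snippet above:

```python
from transformers import TextStreamer

# Prints decoded tokens to stdout as generation proceeds.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=256,
    temperature=1,
    top_p=0.95,
    do_sample=True,
    streamer=streamer,
)
```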
## Model Details

* **Developed by**: [NTQ AI](https://ntq.com.vn/service/artificial-intelligence-service/)
* **Language(s)**: Japanese
* **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Model Architecture

For details, please see Mistral AI's [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
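
To confirm the architecture details locally, you can inspect the checkpoint's configuration; a small sketch, assuming the repo ships a standard Mistral-style `transformers` config:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("NTQAI/chatntq-ja-7b-v1.0")
print(cfg.model_type)           # expected: "mistral"
print(cfg.hidden_size)          # model width
print(cfg.num_hidden_layers)    # transformer depth
print(cfg.num_key_value_heads)  # grouped-query attention (GQA) KV heads
print(cfg.sliding_window)       # sliding-window attention span
```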