---
library_name: transformers
license: gemma
datasets:
- OpenAssistant/oasst2
- nvidia/HelpSteer
language:
- en
- ja
tags:
- gemma
- steerlm
base_model: google/gemma-7b
---

# KARAKURI LM 7B APM v0.1

## Model Details

### Model Description

- **Developed by:** [KARAKURI Inc.](https://about.karakuri.ai/)
- **Model type:** Causal decoder-only transformer language model
- **Languages:** Primarily English
- **License:** [Gemma Terms of Use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b)
- **Contact:** For questions and comments about the model, please email `karakuri-rd@karakuri.ai`

## Usage

KARAKURI LM 7B APM v0.1 is an attribute prediction model that rates model responses on the various aspects that make a response desirable.

Given a multi-turn conversation between a user and an assistant, the model rates the following attributes (on a scale from 0 to 4) for every assistant turn.

- helpfulness: Overall helpfulness of the response to the prompt.
- correctness: Inclusion of all pertinent facts without errors.
- coherence: Consistency and clarity of expression.
- complexity: Intellectual depth required to write the response (i.e., whether the response could be written by anyone with basic language competency or requires deep domain expertise).
- verbosity: Amount of detail included in the response, relative to what is asked for in the prompt.
- quality: Perceived goodness of response.
- toxicity: Undesirable elements such as vulgar, harmful, or potentially biased content in the response.
- humor: Sense of humor within response.
- creativity: Willingness to generate a non-conventional response.

The first five are derived from HelpSteer, while the remaining four are derived from OASST2.

You can run the model with the 🤗 Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "karakuri-ai/karakuri-lm-7b-apm-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hello! How can I help you today?"},
]
tokenizer.apply_chat_template(
    messages,
    label="helpsteer",
    tokenize=False,
    add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_1]

input_ids = tokenizer.apply_chat_template(
    messages,
    label="helpsteer",
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
tokenizer.decode(outputs[0][input_ids.shape[-1]:])
#  helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]<eos>
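# Feed the predicted scores back into the conversation as a "label" turn
# before adding the next user/assistant exchange.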

messages += [
    {"role": "label", "content": "helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1"},
    {"role": "user", "content": "Thank you!"},
    {"role": "assistant", "content": "You're welcome! I'm happy to help however I can."},
]
tokenizer.apply_chat_template(
    messages,
    label="helpsteer",
    tokenize=False,
    add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_1] helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]<eos>[INST] Thank you! [/INST] You're welcome! I'm happy to help however I can. [ATTR_1]

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hello! How can I help you today?"},
]
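# Rate the OASST attributes (quality, toxicity, humor, creativity) by passing label="oasst".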
tokenizer.apply_chat_template(
    messages,
    label="oasst",
    tokenize=False,
    add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_2]

input_ids = tokenizer.apply_chat_template(
    messages,
    label="oasst",
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
tokenizer.decode(outputs[0][input_ids.shape[-1]:])
#  quality: 3 toxicity: 1 humor: 1 creativity: 1 [/ATTR_2]<eos>
```
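
The scores come back as a plain `name: score` string. If you want them as numbers, a small helper along these lines should work (a minimal sketch based on the output format shown above; `parse_attribute_scores` is just an illustrative name, not part of the library):

```python
import re

def parse_attribute_scores(text: str) -> dict[str, int]:
    """Turn a generated label string, e.g.
    ' helpfulness: 2 correctness: 1 ... [/ATTR_1]<eos>', into a dict of ints."""
    return {name: int(score) for name, score in re.findall(r"(\w+): (\d)", text)}

parse_attribute_scores(
    " helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]<eos>"
)
# {'helpfulness': 2, 'correctness': 1, 'coherence': 2, 'complexity': 1, 'verbosity': 1}
```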

## Training Details

### Training Data

- [OASST2](https://huggingface.co/datasets/OpenAssistant/oasst2)
- [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer)

### Training Infrastructure

- **Hardware**: The model was trained on a single node of an Amazon EC2 trn1.32xlarge instance.
- **Software**: We used code based on [neuronx-nemo-megatron](https://github.com/aws-neuron/neuronx-nemo-megatron).

## Citation

```
@misc{karakuri_lm_7b_apm_v01,
    author    = { {KARAKURI} {I}nc. },
    title     = { {KARAKURI} {LM} 7{B} {APM} v0.1 },
    year      = { 2024 },
    url       = { https://huggingface.co/karakuri-ai/karakuri-lm-7b-apm-v0.1 },
    publisher = { Hugging Face },
    journal   = { Hugging Face repository }
}
```