Commit 529bdcb
Parent(s): 505704f
Create README.md

README.md ADDED
@@ -0,0 +1,143 @@
---
library_name: transformers
license: apache-2.0
datasets:
- OpenAssistant/oasst2
- nvidia/HelpSteer
language:
- en
- ja
tags:
- mistral
- steerlm
base_model: karakuri-ai/karakuri-lm-7b-apm-v0.2
pipeline_tag: text-generation
---

# KARAKURI LM 7B APM v0.2 - GGUF

This is a quantized version of [karakuri-ai/karakuri-lm-7b-apm-v0.2](https://huggingface.co/karakuri-ai/karakuri-lm-7b-apm-v0.2) created using llama.cpp.

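Because this repository ships GGUF files, the model can also be run without 🤗 Transformers. The snippet below is only a minimal sketch using the `llama-cpp-python` bindings; the GGUF filename is hypothetical (use whichever quantization file you downloaded from this repository), and the prompt must already follow the attribute-prediction format described in the Usage section below.

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
# The filename below is hypothetical; replace it with the GGUF file
# downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="./karakuri-lm-7b-apm-v0.2.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,
)

# The prompt must follow the attribute-prediction format shown under Usage:
# conversation turns followed by an [ATTR_1] (HelpSteer) or [ATTR_2] (OASST) tag.
prompt = "[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_1]"
output = llm(prompt, max_tokens=32)
print(output["choices"][0]["text"])
# Expected to produce attribute scores such as
# "helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]"
```
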
## Model Details

### Model Description

- **Developed by:** [KARAKURI Inc.](https://about.karakuri.ai/)
- **Model type:** Causal decoder-only transformer language model
- **Languages:** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** [mistral-community/Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2)
- **Contact:** For questions and comments about the model, please email `karakuri-rd@karakuri.ai`

## Usage

KARAKURI LM 7B APM v0.2 is an attribute prediction model that rates model responses on various aspects that make a response desirable.

Given a conversation with multiple turns between a user and an assistant, the model rates the following attributes (between 0 and 4) for every assistant turn.

- helpfulness: Overall helpfulness of the response to the prompt.
- correctness: Inclusion of all pertinent facts without errors.
- coherence: Consistency and clarity of expression.
- complexity: Intellectual depth required to write the response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise).
- verbosity: Amount of detail included in the response, relative to what is asked for in the prompt.
- quality: Perceived goodness of response.
- toxicity: Undesirable elements such as vulgar, harmful or potentially biased response.
- humor: Sense of humor within response.
- creativity: Willingness to generate non-conventional response.

The first five are derived from HelpSteer, while the remaining four are derived from OASST2.

You can run the model using 🤗 Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "karakuri-ai/karakuri-lm-7b-apm-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hello! How can I help you today?"},
]
tokenizer.apply_chat_template(
    messages,
    label="helpsteer",
    tokenize=False,
    add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_1]

input_ids = tokenizer.apply_chat_template(
    messages,
    label="helpsteer",
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
tokenizer.decode(outputs[0][input_ids.shape[-1]:])
# helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]<eos>

messages += [
    {"role": "label", "content": "helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1"},
    {"role": "user", "content": "Thank you!"},
    {"role": "assistant", "content": "You're welcome! I'm happy to help however I can."},
]
tokenizer.apply_chat_template(
    messages,
    label="helpsteer",
    tokenize=False,
    add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_1] helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]<eos>[INST] Thank you! [/INST] You're welcome! I'm happy to help however I can. [ATTR_1]

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hello! How can I help you today?"},
]
tokenizer.apply_chat_template(
    messages,
    label="oasst",
    tokenize=False,
    add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_2]

input_ids = tokenizer.apply_chat_template(
    messages,
    label="oasst",
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
tokenizer.decode(outputs[0][input_ids.shape[-1]:])
# quality: 3 toxicity: 1 humor: 1 creativity: 1 [/ATTR_2]<eos>
```

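The model emits the attribute scores as plain text terminated by a closing `[/ATTR_1]` or `[/ATTR_2]` tag, so downstream code typically parses them back into numbers. Below is a minimal, hypothetical helper for doing so; the function name and regular expressions are not part of the model card and are only one way to post-process the output.

```python
import re

def parse_attribute_scores(text: str) -> dict[str, int]:
    """Parse 'name: score' pairs from the model output.

    Hypothetical helper: strips the closing [/ATTR_*] tag and the <eos>
    token, then collects each 'attribute: integer' pair into a dict.
    """
    text = re.sub(r"\[/ATTR_\d\]|<eos>", "", text)
    return {name: int(score) for name, score in re.findall(r"(\w+): (\d+)", text)}

scores = parse_attribute_scores(
    "helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]<eos>"
)
# {'helpfulness': 2, 'correctness': 1, 'coherence': 2, 'complexity': 1, 'verbosity': 1}
```
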
## Training Details

### Training Data

- [OASST2](https://huggingface.co/datasets/OpenAssistant/oasst2)
- [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer)

### Training Infrastructure

- **Hardware**: The model was trained on a single node of an Amazon EC2 trn1.32xlarge instance.
- **Software**: We use code based on [neuronx-nemo-megatron](https://github.com/aws-neuron/neuronx-nemo-megatron).

## Model Citation

```
@misc{karakuri_lm_7b_apm_v02,
    author    = { {KARAKURI} {I}nc. },
    title     = { {KARAKURI} {LM} 7{B} {APM} v0.2 },
    year      = { 2024 },
    url       = { https://huggingface.co/karakuri-ai/karakuri-lm-7b-apm-v0.2 },
    publisher = { Hugging Face },
    journal   = { Hugging Face repository }
}
```