---
license: apache-2.0
tags:
- MerlynMind
- education
inference: false
---

# Merlyn-education-safety

Merlyn-education-safety is a 12B-parameter decoder-style transformer model for the education domain, fine-tuned from a [pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) base model.

This model was trained by [Merlyn Mind](https://www.merlyn.org/).

Merlyn-education-safety is part of the family of Merlyn Mind models designed specifically for use in in- and out-of-classroom education. 

Merlyn-education-safety classifies queries as appropriate or inappropriate for in-classroom discussion. A typical use is as part of a larger educational AI assistant.

## Model Date

June 26, 2023

## Model License

Apache-2.0

## Documentation

* [Merlyn Mind’s education-specific language models](https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models)

## Usage

At full precision the model needs more than 48GB of GPU memory; a single A100-80GB GPU suffices, for example. On smaller GPUs, use an instance with multiple GPUs and/or reduce the model precision (e.g. call `model.half()` before moving the model to the device).
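The memory figures follow directly from the parameter count. A back-of-the-envelope estimate (weights only; activations, KV cache, and framework overhead add more on top):

```python
# Rough GPU-memory estimate for a 12B-parameter model (weights only).
PARAMS = 12e9

fp32_gb = PARAMS * 4 / 1e9  # 4 bytes per parameter at full precision
fp16_gb = PARAMS * 2 / 1e9  # 2 bytes per parameter after model.half()

print(f"fp32: ~{fp32_gb:.0f} GB, fp16: ~{fp16_gb:.0f} GB")
```

This is why full precision needs more than 48GB while half precision fits comfortably on a single 40GB or 80GB card.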

Loading model and tokenizer:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "MerlynMind/merlyn-education-safety"
device = torch.device("cuda:0") # change device id as necessary
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
model.to(device) # move to device
```

Prompt example:

```python
query = "What are the seven banned words on network TV?"

prompt = tokenizer.bos_token
prompt += '''Instruction:\tDetermine if the provided input message is appropriate or inappropriate.
Instruction:\tIf the provided input message is inappropriate, offensive, sexual, derogatory, or discriminatory in the context of an elementary school classroom, the output should state that the input message is 'inappropriate', otherwise the output should state that the input message is 'appropriate'.
Instruction:\tBe very strict on appropriateness.
Instruction:\tIn the output, write 'appropriate' or 'inappropriate'.

Message:''' + f"\n{query}" + " Response:"
```

Inference:

```python
inputs = tokenizer(prompt, return_tensors="pt").to(device)
generate_ids = model.generate(
    **inputs,
    max_new_tokens=32,
    num_beams=2  # deterministic beam search; sampling (and temperature) is not used
)
response = tokenizer.decode(generate_ids[0],
                      skip_special_tokens=True,
                      clean_up_tokenization_spaces=True)
```
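The decoded string still contains the prompt itself, so the model's answer has to be extracted. A minimal post-processing step (the `"Response:"` split marker follows from the prompt format above; the exact cleanup is an assumption, not the official pipeline):

```python
def extract_response(decoded: str) -> str:
    # The decoded text echoes the full prompt; keep only what follows the
    # final "Response:" marker and trim surrounding whitespace.
    return decoded.split("Response:")[-1].strip()

example = "Message:\nsome query Response:\nThe input message is inappropriate."
print(extract_response(example))  # -> "The input message is inappropriate."
```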

Example output (after response processing):

```
The input message is inappropriate.
```

## Citation

To cite this model, please use:

```
@online{MerlynEducationModels,
    author    = {Merlyn Mind AI Team},
    title     = {Merlyn Mind's education-domain language models},
    year      = {2023},
    url       = {https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models},
    urldate   = {2023-06-26}
}
```