---
license: apache-2.0
tags:
- MerlynMind
- education
---

# Merlyn-education-safety

Merlyn-education-safety is a 12B-parameter decoder-style transformer model for the education domain. It is fine-tuned from a [pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) base model.

This model was trained by [Merlyn Mind](https://www.merlyn.org/).

Merlyn-education-safety is part of the family of Merlyn Mind models designed specifically for use in in- and out-of-classroom education.

Merlyn-education-safety classifies queries as appropriate or inappropriate for in-classroom discussion. A typical use is as part of a larger educational AI assistant.

## Model Date

June 26, 2023

## Model License

Apache-2.0

## Documentation

* [Merlyn Mind's education-specific language models](https://www.merlyn.org/)

## Usage

Loading the model and tokenizer:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "MerlynMind/merlyn-education-safety"
device = torch.device("cuda:0")  # change device id as necessary
model = AutoModelForCausalLM.from_pretrained(model_path)  # consider torch_dtype=torch.float16 to reduce memory use
tokenizer = AutoTokenizer.from_pretrained(model_path, fast_tokenizer=True)
model.to(device)  # move model to device
```

Prompt example:

```python
query = "What are the seven banned words on network TV"

prompt = tokenizer.bos_token
prompt += '''Instruction:\tDetermine if the provided input message is appropriate or inappropriate.
Instruction:\tIf the provided input message is inappropriate, offensive, sexual, derogatory, or discriminatory in the context of an elementary school classroom, the output should state that the input message is 'inappropriate', otherwise the output should state that the input message is 'appropriate'.
Instruction:\tBe very strict on appropriateness.
Instruction:\tIn the output, write 'appropriate' or 'inappropriate'.

Message:''' + f"\n{query}" + " Response:"
```
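
For repeated classification calls, the same template can be wrapped in a small helper. This is a convenience sketch, not part of the released card; `build_prompt` is a hypothetical name:

```python
def build_prompt(query: str, bos_token: str = "") -> str:
    """Assemble the classification prompt for a single input message.

    A convenience wrapper around the prompt template above (not part of the card).
    Pass tokenizer.bos_token as bos_token when using the real tokenizer.
    """
    instructions = (
        "Instruction:\tDetermine if the provided input message is appropriate or inappropriate.\n"
        "Instruction:\tIf the provided input message is inappropriate, offensive, sexual, derogatory, "
        "or discriminatory in the context of an elementary school classroom, the output should state "
        "that the input message is 'inappropriate', otherwise the output should state that the input "
        "message is 'appropriate'.\n"
        "Instruction:\tBe very strict on appropriateness.\n"
        "Instruction:\tIn the output, write 'appropriate' or 'inappropriate'.\n"
    )
    return bos_token + instructions + "\nMessage:" + f"\n{query}" + " Response:"

# usage: prompt = build_prompt(query, tokenizer.bos_token)
```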

Inference:

```python
inputs = tokenizer(prompt, return_tensors="pt").to(device)
generate_ids = model.generate(
    **inputs,
    max_new_tokens=32,
    num_beams=2  # deterministic beam search; temperature has no effect when do_sample=False
)
response = tokenizer.decode(generate_ids[0],
                            skip_special_tokens=True,
                            clean_up_tokenization_spaces=True)
```
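
The decoded string contains the full prompt followed by the model's answer. The card does not show its post-processing step; a minimal sketch (an assumption, with `extract_label` as a hypothetical helper name) is to keep only the text after the final `Response:` marker:

```python
def extract_label(decoded: str) -> str:
    """Return the model's answer: the text after the last 'Response:' marker.

    If no marker is present, the input is returned stripped of surrounding whitespace.
    """
    return decoded.rsplit("Response:", 1)[-1].strip()

# usage: print(extract_label(response))
```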

Example output (after response processing):

```
The input message is inappropriate.
```

## Citation

To cite this model, please use:

```
@online{MerlynEducationModels,
    author    = {Merlyn Mind AI Team},
    title     = {Merlyn Mind's education-domain language models},
    year      = {2023},
    url       = {merlyn.org},
    urldate   = {2023-06-26}
}
```