lichenglu committed
Commit
752eaa0
Parent: f7e78d1

Update model card

Files changed (1): README.md (+18 -1)
README.md CHANGED
@@ -1,4 +1,21 @@
- # Math-RoBerta for NLP tasks in math learning environments
+ ---
+ language:
+ - en
+ tags:
+ - generation
+ - math learning
+ - education
+ license: mit
+ metrics:
+ - PerspectiveAPI
+ widget:
+ - text: "<bos><speaker1>Hello! My name is CL. Nice meeting y'all!<speaker2>[SAFE]"
+   example_title: "Safe Response"
+ - text: "<bos><speaker1>Hello! My name is CL. Nice meeting y'all!<speaker2>[UNSAFE]"
+   example_title: "Unsafe Response"
+ ---
+
+ # Math-RoBERTa for NLP tasks in math learning environments
 
  This model is fine-tuned from GPT2-xl on 8 Nvidia GTX 1080 Ti GPUs and enhanced with conversation safety policies (e.g., threat, profanity, identity attack) using 3,000,000 math discussion posts by students and facilitators on Algebra Nation (https://www.mathnation.com/). SafeMathBot consists of 48 layers and over 1.5 billion parameters, consuming up to 6 gigabytes of disk space. Researchers can experiment with and fine-tune the model to help construct math conversational AI that effectively avoids generating unsafe responses. It was trained to let researchers control the safety of generated responses using the tags `[SAFE]` and `[UNSAFE]`.
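
Below is a minimal generation sketch for the committed card, assuming the checkpoint loads with the standard `transformers` causal-LM API. The repo id `uf-aice-lab/SafeMathBot` is an assumption (replace it with the model's actual Hub id); the prompt format and the `[SAFE]`/`[UNSAFE]` control tags come from the widget examples above.

```python
# Hedged usage sketch, not part of this commit. The repo id below is an
# assumption; substitute the real Hub id of SafeMathBot before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uf-aice-lab/SafeMathBot"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt format from the widget examples: speaker tags followed by a safety
# control tag ([SAFE] or [UNSAFE]) that steers the generated reply.
prompt = "<bos><speaker1>Hello! My name is CL. Nice meeting y'all!<speaker2>[SAFE]"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 variants have no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```

Because the model was trained with the safety tag in context, swapping `[SAFE]` for `[UNSAFE]` in the prompt is what controls the character of the sampled response at decoding time.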