  ## BART Scientific Definition Generation

This is a finetuned BART Large model from the paper:

"Generating Scientific Definitions with Controllable Complexity"

By Tal August, Katharina Reinecke, and Noah A. Smith

Abstract: Unfamiliar terminology and complex language can present barriers to understanding science. Natural language processing stands to help address these issues by automatically defining unfamiliar terms. We introduce a new task and dataset for defining scientific terms and controlling the complexity of generated definitions as a way of adapting to a specific reader’s background knowledge. We test four definition generation methods for this new task, finding that a sequence-to-sequence approach is most successful. We then explore the version of the task in which definitions are generated at a target complexity level. We introduce a novel reranking approach and find in human evaluations that it offers superior fluency while also controlling complexity, compared to several controllable generation baselines.

## Description

The model is finetuned on the task of generating definitions of scientific terms. We frame our task as generating an answer to the question “What is (are) X?” Along with the question, the model takes a support document of scientific abstracts related to the term being defined.
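
For example, for the term "surfactants", an input might look like the sketch below (the two abstract snippets are invented placeholders; the format matches the example in the How to use section):

```python
# A sketch of the input format: the question, then a context of related
# abstracts, each preceded by a <P> marker. The abstracts here are
# invented placeholders.
input_text = (
    "question: What is (are) surfactants? "
    "context: <P> Surfactants lower the surface tension between two liquids. "
    "<P> Pulmonary surfactant is a complex of phospholipids and proteins."
)
```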

## Intended use

The intended use of this model is to generate definitions of scientific terms. It is NOT intended for public deployment due to the risk of hallucinated information in model output. Strong supervision of definition factuality is important for any future deployment of such a system. While hallucinated information can be damaging in any generation context, incorrect scientific definitions could mislead readers and potentially contribute to broader scientific misinformation. The model is trained on data we believe is trustworthy (e.g., questions and answers from NIH websites); however, there is no guarantee that model output will be.

## Training data

The model is trained on data from two sources: Wikipedia science glossaries and a portion of the [MedQuAD dataset](https://github.com/abachaa/MedQuAD), which contains healthcare consumer questions and answers from NIH websites. For more information on these data sources, see the [GitHub repository](https://github.com/talaugust/definition-complexity) for the paper.

## How to use

Note that this model was trained and evaluated using transformers version 4.2.2.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and the finetuned model from the Hugging Face Hub.
bart_sci_def_tokenizer = AutoTokenizer.from_pretrained("talaugust/bart-sci-definition")
bart_sci_def_model = AutoModelForSeq2SeqLM.from_pretrained("talaugust/bart-sci-definition")

# The input pairs the question with a support document of related
# abstracts, each preceded by a <P> marker.
inputs = bart_sci_def_tokenizer(
    "question: What is (are) surfactants? context: <P> .... <P> ....",
    return_tensors="pt",
)

outputs = bart_sci_def_model.generate(
    **inputs,
    decoder_start_token_id=bart_sci_def_tokenizer.bos_token_id,
    num_return_sequences=1,
    num_beams=5,
    max_length=64,
    min_length=8,
    early_stopping=True,
    do_sample=True,
    top_k=50,
    top_p=0.9,
    no_repeat_ngram_size=3,
)

# `generate` returns one row of token ids per returned sequence;
# decode each row back into text.
answers = [
    bart_sci_def_tokenizer.decode(ans_ids, skip_special_tokens=True).strip()
    for ans_ids in outputs
]
```
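
The calls above can be wrapped in a small convenience function (a sketch under the same assumptions; `define_term` is a hypothetical name, not part of the released code). Truncating inputs to 1,024 tokens reflects BART's maximum input length:

```python
def define_term(term, support_abstracts):
    """Generate a definition of `term` from a list of related abstracts.

    Hypothetical convenience wrapper around the calls shown above.
    """
    context = " ".join("<P> " + abstract for abstract in support_abstracts)
    inputs = bart_sci_def_tokenizer(
        f"question: What is (are) {term}? context: {context}",
        truncation=True,
        max_length=1024,  # BART's maximum input length
        return_tensors="pt",
    )
    outputs = bart_sci_def_model.generate(
        **inputs,
        decoder_start_token_id=bart_sci_def_tokenizer.bos_token_id,
        num_beams=5,
        do_sample=True,
        top_k=50,
        top_p=0.9,
        max_length=64,
        min_length=8,
        no_repeat_ngram_size=3,
        early_stopping=True,
    )
    return bart_sci_def_tokenizer.decode(outputs[0], skip_special_tokens=True).strip()
```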

## Biases & Limitations

The goal of this model is to enable a wider audience of readers to understand and engage with scientific writing. A risk, though, is that such attempts might instead widen the gap to accessing scientific information. The texts in the datasets we train our models on are in General or Academic American English. Many people, especially those who have been historically underrepresented in STEM disciplines and medicine, may not be comfortable with this dialect of English. This risks further alienating the readers we hope to serve. An important and exciting direction in NLP is making models more flexible to dialects and low-resource languages.