Model Card for Rainier
Rainier is a knowledge introspection model for commonsense QA. See our paper at: https://arxiv.org/abs/2210.03078.
Model Details
Model Description
Given a commonsense question as input, Rainier generates a knowledge statement that is related to the question and (hopefully) helpful for answering it. By sampling multiple outputs, Rainier can produce a diverse set of related knowledge statements. The introspected knowledge can then be provided as additional context to a QA model, improving its prediction accuracy.
- Developed by: Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi
- Shared by: Jiacheng Liu
- Model type: Sequence-to-sequence Transformer language model (T5-based)
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: T5-large
Model Sources
- Repository: https://github.com/liujch1998/rainier
- Paper: https://arxiv.org/abs/2210.03078
- Demo: https://huggingface.co/spaces/liujch1998/rainier
Uses
Direct Use
Rainier is intended to generate knowledge statements that help answer a given commonsense question.
Out-of-Scope Use
Rainier is a research prototype and may generate incorrect or irrelevant knowledge, so do not use it for making critical decisions. It is intended to generate knowledge statements for commonsense questions and may be unreliable on inputs outside this scope.
Bias, Risks, and Limitations
See the Limitations section of our paper.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
How to Get Started with the Model
Use the code below to get started with the model.
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Rainier is finetuned from T5-large and uses the original T5 tokenizer
tokenizer = T5Tokenizer.from_pretrained('t5-large')
model = T5ForConditionalGeneration.from_pretrained('liujch1998/rainier-large')

# Question and answer choices, separated by a literal "\n" (UnifiedQA-style format)
question = "Sydney rubbed Addison's head because she had a horrible headache. " \
    "What will happen to Sydney? \\n " \
    "(A) drift to sleep (B) receive thanks (C) be reprimanded"
input_ids = tokenizer(question, return_tensors='pt').input_ids

# Sample multiple times to get a diverse set of knowledge statements
output_ids = model.generate(input_ids, do_sample=True, top_p=0.5, num_return_sequences=10)
knowledges = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(list(set(knowledges)))  # deduplicate before printing
Example outputs (generation is sampled, so your results may differ):
Sydney is a good friend to Addison.
Sydney is a kind person.
One should be thankful for the help of others.
Rubbed head is a good way to relieve headaches.
The head is a very sensitive area.
One should be grateful for the help of others.
The head is the most sensitive part of the body.
The person who rubs the head is a good person.
Sydney will be grateful.
The head is a sensitive area.
You may also refer to https://huggingface.co/spaces/liujch1998/rainier/blob/main/app.py#L16-L100 for a reference implementation.
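As noted in the Model Description, the introspected knowledge can be provided as additional context to a QA model. The sketch below, continuing from the example above, shows one way to do this with UnifiedQA (allenai/unifiedqa-t5-large) by appending a generated knowledge statement to the question as an extra "\n"-separated field. This is only an illustrative format; the exact prompting and answer aggregation used in the paper may differ, so refer to the repository and the demo's app.py for the actual pipeline.

# Illustrative sketch, not necessarily the paper's exact setup:
# answer the question with UnifiedQA, using one introspected knowledge
# statement as extra context. Reuses `question` and `knowledges` from above.
qa_tokenizer = T5Tokenizer.from_pretrained('allenai/unifiedqa-t5-large')
qa_model = T5ForConditionalGeneration.from_pretrained('allenai/unifiedqa-t5-large')

# Append one knowledge statement as an additional "\n"-separated field
qa_input = question + " \\n " + knowledges[0]
qa_input_ids = qa_tokenizer(qa_input, return_tensors='pt').input_ids
answer_ids = qa_model.generate(qa_input_ids)
print(qa_tokenizer.batch_decode(answer_ids, skip_special_tokens=True))

In the paper, multiple knowledge statements are generated per question and the QA model's predictions are aggregated across them; the single-statement call above is only meant to show the interface.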
Citation
BibTeX:
@article{Liu2022RainierRK,
  title={Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering},
  author={Jiacheng Liu and Skyler Hallinan and Ximing Lu and Pengfei He and Sean Welleck and Hannaneh Hajishirzi and Yejin Choi},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.03078},
  url={https://api.semanticscholar.org/CorpusID:252735191}
}
Model Card Contact
Jiacheng Liu