# Model Card for 🧑🏻‍🚀COSMO
🧑🏻‍🚀COSMO is a conversation agent with greater generalizability on both in- and out-of-domain chitchat datasets (e.g., DailyDialog, BlendedSkillTalk). It is trained on two datasets: SODA and ProsocialDialog. COSMO is especially aimed at modeling natural human conversations. It can accept situation descriptions as well as instructions on what role it should play in the situation.
## Model Description
- Repository: Code
- Paper: SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization
- Point of Contact: Hyunwoo Kim
## Model Training
🧑🏻‍🚀COSMO is trained on our two recent datasets: 🥤SODA and ProsocialDialog. The backbone model of COSMO is the lm-adapted T5.
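For reference, a minimal sketch of loading the backbone, assuming the publicly available `google/t5-xl-lm-adapt` checkpoint on the Hugging Face Hub is the lm-adapted T5 variant matching COSMO-XL's size:

```python
# Minimal sketch, assuming "google/t5-xl-lm-adapt" is the lm-adapted
# T5 backbone referenced above (an assumption, not stated in this card).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

backbone_tokenizer = AutoTokenizer.from_pretrained("google/t5-xl-lm-adapt")
backbone_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-xl-lm-adapt")
```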
## How to use
💡 Note: The Hugging Face inference API for Cosmo is not working correctly; we gently guide you to our repository to try out the demo code!
🚨 Disclaimer: We would like to emphasize that COSMO is trained on SODA and ProsocialDialog mainly for academic/research purposes. We discourage using COSMO in real-world applications or services as is. Model outputs should not be used as advice for humans, and may be offensive, problematic, or harmful. The model's output does not necessarily reflect the views and opinions of the authors and their associated affiliations.
Below is a simple code snippet to get Cosmo running :)
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("allenai/cosmo-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/cosmo-xl").to(device)

def set_input(situation_narrative, role_instruction, conversation_history):
    # Flatten all inputs into a single string:
    # "situation <sep> role instruction <sep> turn1 <turn> turn2 ..."
    input_text = " <turn> ".join(conversation_history)

    if role_instruction != "":
        input_text = "{} <sep> {}".format(role_instruction, input_text)

    if situation_narrative != "":
        input_text = "{} <sep> {}".format(situation_narrative, input_text)

    return input_text

def generate(situation_narrative, role_instruction, conversation_history):
    """
    situation_narrative: the description of situation/context with the characters included (e.g., "David goes to an amusement park")
    role_instruction: the perspective/speaker instruction (e.g., "Imagine you are David and speak to his friend Sarah").
    conversation_history: the previous utterances in the conversation in a list
    """
    input_text = set_input(situation_narrative, role_instruction, conversation_history)

    inputs = tokenizer([input_text], return_tensors="pt").to(device)
    # Decode with nucleus sampling (top-p = 0.95, temperature = 1.0)
    outputs = model.generate(inputs["input_ids"], max_new_tokens=128, temperature=1.0, top_p=.95, do_sample=True)
    response = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)

    return response

situation = "Cosmo had a really fun time participating in the EMNLP conference at Abu Dhabi."
instruction = "You are Cosmo and you are talking to a friend."  # You can also leave the instruction empty

conversation = [
    "Hey, how was your trip to Abu Dhabi?"
]

response = generate(situation, instruction, conversation)
print(response)
```
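To carry the dialogue over multiple turns, append each new utterance to the conversation history and call `generate` again. A minimal sketch continuing the example above (the follow-up user utterance is hypothetical):

```python
# Hypothetical follow-up turn: extend the history with Cosmo's reply
# and the user's next utterance, then generate again.
conversation.append(response)
conversation.append("That sounds amazing! What was your favorite part?")
follow_up = generate(situation, instruction, conversation)
print(follow_up)
```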
## Further Details, Social Impacts, Bias, and Limitations
Please refer to our paper. Cosmo is mostly trained on social chitchat, so we discourage using it for knowledge-intensive conversations (e.g., science, medical issues, law). Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. 2021 and Bender et al. 2021). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Additional Information
For a brief summary of our paper, please see this tweet.
## Citation
Please cite our work if you find the resources in this repository useful:
```bibtex
@article{kim2022soda,
    title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization},
    author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi},
    journal={ArXiv},
    year={2022},
    volume={abs/2212.10465}
}
```