
microsoft/DialogRPT-updown

Contributed by Microsoft

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")
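The snippet above only loads the tokenizer and model. A minimal scoring sketch is shown below; it assumes the input format used by the upstream DialogRPT scoring code (context and response joined by GPT-2's `<|endoftext|>` token), and `build_input` / `updown_score` are illustrative helper names, not part of the library:

```python
def build_input(context: str, response: str) -> str:
    # Assumption: context and response are joined with GPT-2's
    # end-of-text token, as in the upstream DialogRPT scoring code.
    return context + "<|endoftext|>" + response

def updown_score(context: str, response: str) -> float:
    # Imported lazily so the pure helper above stays usable
    # without torch/transformers installed.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
    model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")
    model.eval()
    ids = tokenizer.encode(build_input(context, response), return_tensors="pt")
    with torch.no_grad():
        logit = model(input_ids=ids).logits[0, 0]
    # The head emits a single logit; squash it into a 0-1
    # "likely to be upvoted" score.
    return torch.sigmoid(logit).item()

# Usage (downloads the checkpoint on first call):
# updown_score("I love NLP!", "Me too!")
```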

Demo

Please try this ➤➤➤ Colab Notebook Demo (click me!)

Context | Response | updown score
I love NLP! | Here’s a free textbook (URL) in case anyone needs it. | 0.613
I love NLP! | Me too! | 0.111

The updown score predicts how likely the response is to be upvoted.

DialogRPT-updown

Dialog Ranking Pretrained Transformers

How likely is a dialog response to be upvoted 👍 and/or get replies 💬?

This is what DialogRPT is trained to predict. It is a set of dialog response ranking models, proposed by the Microsoft Research NLP Group, trained on more than 100 million instances of human feedback data. It can be used to improve existing dialog generation models (e.g., DialoGPT) by re-ranking the generated response candidates.
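The re-ranking step described above amounts to scoring each candidate and returning them best-first. A minimal sketch, with hypothetical hard-coded scores standing in for DialogRPT outputs (the numbers are taken from the demo table on this page):

```python
def rerank(candidates, score_fn):
    """Sort candidate responses best-first by the ranker's score."""
    return sorted(candidates, key=score_fn, reverse=True)

# Hypothetical scores standing in for DialogRPT-updown outputs:
scores = {
    "Here's a free textbook (URL) in case anyone needs it.": 0.613,
    "Me too!": 0.111,
}
ranked = rerank(list(scores), scores.get)
# ranked[0] is the candidate most likely to be upvoted.
```

In practice `score_fn` would call the model once per (context, candidate) pair; with a generator such as DialoGPT, you would sample several candidates first and then pick `ranked[0]`.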

Quick Links:

We considered the following tasks and provide corresponding pretrained models. This page is for the updown task; the other model cards can be found in the table below.

Task | Description | Pretrained model
Human feedback: given a context and its two human responses, predict...
  updown | ... which gets more upvotes? | this model
  width | ... which gets more direct replies? | model card
  depth | ... which gets the longer follow-up thread? | model card
Human-like (human vs fake): given a context and one human response, distinguish it from...
  human_vs_rand | ... a random human response | model card
  human_vs_machine | ... a machine-generated response | model card
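Each task in the table has its own checkpoint on the Hub. A small lookup sketch; the ids are assumed to follow the `microsoft/DialogRPT-<task>` naming pattern of this page (verify each id on the Hub before relying on it):

```python
# Assumed Hub ids for the DialogRPT task heads, following the
# "microsoft/DialogRPT-<task>" pattern of this model card.
TASK_CHECKPOINTS = {
    "updown": "microsoft/DialogRPT-updown",
    "width": "microsoft/DialogRPT-width",
    "depth": "microsoft/DialogRPT-depth",
    "human_vs_rand": "microsoft/DialogRPT-human-vs-rand",
    "human_vs_machine": "microsoft/DialogRPT-human-vs-machine",
}

def checkpoint_for(task: str) -> str:
    """Map a task name from the table above to its checkpoint id."""
    return TASK_CHECKPOINTS[task]
```

Any of these ids can be passed to `AutoModelForSequenceClassification.from_pretrained` exactly as in the updown snippet above.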

Contact:

Please create an issue on our repo

Citation:

@inproceedings{gao2020dialogrpt,
    title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
    author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
    year={2020},
    booktitle={EMNLP}
}