
microsoft/DialogRPT-width



Contributed by Microsoft

How to use this model directly from the πŸ€—/transformers library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-width")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-width")
```


Please try this ➀➀➀ Colab Notebook Demo (click me!)

| Context | Response | width score |
| :------ | :------- | :---------- |
| I love NLP! | Can anyone recommend a nice review paper? | 0.701 |
| I love NLP! | Me too! | 0.029 |

The width score predicts how likely the response is to receive replies.
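As a sketch of how such a ranking score is typically derived: the classifier head emits a single logit for a (context, response) pair, and a sigmoid maps it to a score in (0, 1). The helper names below and the exact separator token are assumptions for illustration, not the library's confirmed API.

```python
import math

def make_model_input(context: str, response: str) -> str:
    # Hypothetical helper: DialogRPT-style models score a (context, response)
    # pair; joining the two strings with GPT-2's end-of-text token is an
    # assumption about the input format, not taken from this card.
    return f"{context}<|endoftext|>{response}"

def logit_to_score(logit: float) -> float:
    # Map the classifier's single logit to a probability-like ranking score.
    return 1.0 / (1.0 + math.exp(-logit))

print(make_model_input("I love NLP!", "Me too!"))
print(round(logit_to_score(0.0), 3))  # 0.5
```

A higher score would correspond to a response the model considers more likely to attract direct replies, as in the table above.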


Dialog Ranking Pretrained Transformers

How likely is a dialog response to be upvoted 👍 and/or to receive replies 💬?

This is what DialogRPT is trained to predict. It is a set of dialog response ranking models proposed by the Microsoft Research NLP Group, trained on more than 100 million human feedback examples. It can be used to improve existing dialog generation models (e.g., DialoGPT) by re-ranking the generated response candidates.
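The re-ranking idea can be sketched in a few lines: generate several candidate responses, score each with the ranker, and return the highest-scoring one. The stub scorer below simply reuses the example scores from this card in place of a real model call.

```python
def rerank(context, candidates, score_fn):
    # Sort candidate responses by the ranker's score, best first.
    return sorted(candidates, key=lambda r: score_fn(context, r), reverse=True)

# Stub standing in for a DialogRPT forward pass (scores taken from the
# example table on this card).
toy_scores = {
    "Can anyone recommend a nice review paper?": 0.701,
    "Me too!": 0.029,
}

ranked = rerank("I love NLP!", list(toy_scores), lambda c, r: toy_scores[r])
print(ranked[0])  # Can anyone recommend a nice review paper?
```

In practice `score_fn` would tokenize the pair and run the pretrained classifier; the rest of the pipeline is unchanged.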


We considered the following tasks and provided corresponding pretrained models.

| Task | Description | Pretrained model |
| :--- | :---------- | :--------------- |
| **Human feedback** | given a context and its two human responses, predict... | |
| `updown` | ... which gets more upvotes? | model card |
| `width` | ... which gets more direct replies? | this model |
| `depth` | ... which gets a longer follow-up thread? | model card |
| **Human-like** (human vs fake) | given a context and one human response, distinguish it from... | |
| `human_vs_rand` | ... a random human response | model card |
| `human_vs_machine` | ... a machine-generated response | model card |


Please create an issue on our repo if you have any questions or suggestions.


@inproceedings{gao2020dialogrpt,
    title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
    author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
    year={2020},
    booktitle={EMNLP}
}