

Please try this Colab Notebook Demo.

| Context | Response | updown score |
| :--- | :--- | :--- |
| I love NLP! | Here's a free textbook (URL) in case anyone needs it. | 0.613 |
| I love NLP! | Me too! | 0.111 |

The updown score predicts how likely the response is to get upvoted.
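A minimal sketch of how such a score can be computed with the Hugging Face `transformers` library, assuming the model is loaded as a sequence-classification head and the context and response are joined by the GPT-2 end-of-text token (the helper name `score` and the sigmoid mapping to [0, 1] are illustrative choices, not guaranteed to reproduce the exact numbers above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load DialogRPT-updown as a single-logit classification model.
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")
model.eval()

def score(context: str, response: str) -> float:
    # Join context and response with the GPT-2 end-of-text separator.
    inputs = tokenizer.encode(context + "<|endoftext|>" + response,
                              return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs).logits
    # Squash the raw logit into a [0, 1] updown score.
    return torch.sigmoid(logits).item()

print(score("I love NLP!", "Here's a free textbook (URL) in case anyone needs it."))
print(score("I love NLP!", "Me too!"))
```

Higher scores indicate responses the model expects to be more upvoted.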


# Dialog Ranking Pretrained Transformers

How likely is a dialog response to get upvoted 👍 and/or replied to 💬?

This is what DialogRPT is trained to predict. It is a set of dialog response ranking models proposed by the Microsoft Research NLP Group, trained on 100+ million human feedback data points. It can be used to improve existing dialog generation models (e.g., DialoGPT) by re-ranking the generated response candidates.
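The re-ranking step itself is simple: score every generated candidate against the context and keep the best. A minimal sketch, assuming some `scorer(context, response)` callable that returns a float (the `rerank` helper and the stand-in scorer below are illustrative, not part of the DialogRPT API):

```python
def rerank(context, candidates, scorer):
    """Sort generated candidates best-first by the given scoring function."""
    return sorted(candidates, key=lambda hyp: scorer(context, hyp), reverse=True)

# Stand-in scorer (prefers longer responses) so the sketch runs without
# downloading a model; in practice plug in a DialogRPT-based scorer.
demo_scorer = lambda ctx, hyp: float(len(hyp))

best = rerank("I love NLP!",
              ["Me too!", "Here's a free textbook in case anyone needs it."],
              demo_scorer)[0]
print(best)  # the candidate the scorer ranks highest
```

Because the scorer is pluggable, the same `rerank` helper works for any of the DialogRPT tasks below.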

Quick Links:

We considered the following tasks and provide corresponding pretrained models. This page is for the `updown` task; the other model cards can be found in the table below.

| Task | Description | Pretrained model |
| :--- | :--- | :--- |
| **Human feedback** | given a context and its two human responses, predict... | |
| `updown` | ... which gets more upvotes? | this model |
| `width` | ... which gets more direct replies? | model card |
| `depth` | ... which gets longer follow-up thread? | model card |
| **Human-like (human vs fake)** | given a context and one human response, distinguish it with... | |
| `human_vs_rand` | ... a random human response | model card |
| `human_vs_machine` | ... a machine generated response | model card |


Please create an issue on our repo if you have any questions.


@inproceedings{gao2020dialogrpt,
    title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
    author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
    year={2020},
    booktitle={EMNLP}
}