
microsoft/DialogRPT-depth

44 downloads in the last 30 days · pytorch · tf

Contributed by Microsoft

How to use this model directly from the πŸ€—/transformers library:

			
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-depth")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-depth")
```
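Once the tokenizer and model are loaded, a context/response pair can be scored. The sketch below is an assumption about the scoring convention, not code from this card: it follows DialogRPT's practice of joining context and response with the GPT-2 end-of-text token and applying a sigmoid to the single classification logit.

```python
# Hedged sketch: score a (context, response) pair with a DialogRPT model.
# The "<|endoftext|>" separator and sigmoid readout are assumptions based on
# DialogRPT's published usage, not taken from this model card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def score(context: str, response: str, tokenizer, model) -> float:
    """Return a probability-like score in [0, 1] for the given pair."""
    # Context and response are concatenated with the GPT-2 end-of-text token.
    ids = tokenizer.encode(context + "<|endoftext|>" + response,
                           return_tensors="pt")
    with torch.no_grad():
        logits = model(ids, return_dict=True).logits
    # Sigmoid maps the single logit to a probability-like score.
    return torch.sigmoid(logits).item()

# Usage (downloads the model weights on first run):
# tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-depth")
# model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-depth")
# score("I love NLP!", "Can anyone recommend a nice review paper?", tokenizer, model)
```

Higher scores correspond to responses the model expects to spark deeper follow-up threads, matching the example table below.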
Demo

Please try this ➀➀➀ Colab Notebook Demo (click me!)

| Context | Response | depth score |
| --- | --- | --- |
| I love NLP! | Can anyone recommend a nice review paper? | 0.724 |
| I love NLP! | Me too! | 0.032 |

The depth score predicts how likely the response is to receive a long follow-up discussion thread.

DialogRPT-depth

Dialog Ranking Pretrained Transformers

How likely is a dialog response to be upvoted πŸ‘ and/or to receive replies πŸ’¬?

This is what DialogRPT is trained to predict. DialogRPT is a set of dialog response ranking models proposed by the Microsoft Research NLP Group and trained on more than 100 million human feedback data points. It can be used to improve an existing dialog generation model (e.g., DialoGPT) by re-ranking the generated response candidates.
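The re-ranking step can be sketched in a few lines. This is an illustrative sketch, not code from the card: `rerank` and the toy scorer are hypothetical names, and in practice the scorer would be a DialogRPT model rather than the placeholder used here.

```python
# Hedged sketch of candidate re-ranking: any scorer mapping
# (context, response) -> float can stand in for a DialogRPT model.
from typing import Callable, List, Tuple

def rerank(context: str,
           candidates: List[str],
           scorer: Callable[[str, str], float]) -> List[Tuple[str, float]]:
    """Order generated response candidates by score, best first."""
    scored = [(cand, scorer(context, cand)) for cand in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Placeholder scorer for illustration only (favors longer responses);
# a real pipeline would call the DialogRPT model here.
toy_scorer = lambda ctx, resp: len(resp) / 100.0

print(rerank("I love NLP!",
             ["Me too!", "Can anyone recommend a nice review paper?"],
             toy_scorer))
```

In a full pipeline, a generator such as DialoGPT produces the candidate list and the top-ranked response is returned to the user.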

We considered the following tasks and provided corresponding pretrained models.

| Task | Description | Pretrained model |
| --- | --- | --- |
| **Human feedback** | given a context and its two human responses, predict... | |
| updown | ... which gets more upvotes? | model card |
| width | ... which gets more direct replies? | model card |
| depth | ... which gets the longer follow-up thread? | this model |
| **Human-like (human vs fake)** | given a context and one human response, distinguish it with... | |
| human_vs_rand | ... a random human response | model card |
| human_vs_machine | ... a machine-generated response | model card |

Contact:

Please create an issue on our repo

Citation:

@inproceedings{gao2020dialogrpt,
    title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
    author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
    year={2020},
    booktitle={EMNLP}
}