DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization.

Introduction

DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and is further pre-trained with a window-based denoising task on a large amount of long dialogue data. This is the large version of DialogLED; the input length was limited to 5,120 tokens during the pre-training phase.
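A minimal sketch of using this model for summarization with Hugging Face Transformers. The checkpoint ID below is an assumption for illustration; substitute the actual hub ID of this model.

```python
# Hypothetical checkpoint ID -- replace with this model's actual hub ID.
CHECKPOINT = "MingZhong/DialogLED-large-5120"
MAX_INPUT_TOKENS = 5120  # input length limit from the pre-training phase


def summarize(dialogue: str) -> str:
    """Summarize a long dialogue (downloads model weights on first use)."""
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
    model = AutoModelForSeq2SeqLM.from_pretrained(CHECKPOINT)

    # Truncate inputs to the pre-training length limit.
    inputs = tokenizer(
        dialogue,
        max_length=MAX_INPUT_TOKENS,
        truncation=True,
        return_tensors="pt",
    )
    summary_ids = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

For best results on specific datasets, fine-tune the model first as described below.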

Finetuning for Downstream Tasks

Please refer to our GitHub page.