
FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue

We present our dialogue pre-training model, FutureTOD, which distills future knowledge into the representation of the previous dialogue context using a self-training framework. Extensive experiments on diverse downstream dialogue tasks demonstrate the effectiveness of our model, especially its generalization, robustness, and ability to learn discriminative dialogue representations.

This paper has been accepted at the ACL 2023 Main Conference.

Usage

We release our futuretod-base-v1.0 model here. You can use this model for downstream TOD tasks by following the instructions in the FutureTOD repository.
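
Below is a minimal usage sketch for extracting a dialogue representation with this checkpoint. It assumes the model is BERT-compatible (the paper builds on a BERT-style encoder) and loadable via Hugging Face transformers; the repo id "FutureTOD/futuretod-base-v1.0" and the [USR]/[SYS] role markers are illustrative assumptions, not confirmed details of the release.

```python
# Minimal sketch: encode a dialogue context and take the [CLS]
# hidden state as the dialogue representation.
from transformers import AutoTokenizer, AutoModel

model_name = "FutureTOD/futuretod-base-v1.0"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Role markers here are an assumption about the input format;
# consult the FutureTOD repository for the exact convention.
dialogue = "[USR] I need a cheap restaurant in the centre. [SYS] What cuisine do you prefer?"
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
outputs = model(**inputs)

# [CLS] token embedding as a fixed-size context representation.
dialogue_repr = outputs.last_hidden_state[:, 0]
print(dialogue_repr.shape)  # torch.Size([1, 768]) for a base-size encoder
```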

Citation

If you find our work helpful, please consider citing the following paper.

@article{zeng2023futuretod,
  title={FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue},
  author={Zeng, Weihao and He, Keqing and Wang, Yejie and Zeng, Chen and Wang, Jingang and Xian, Yunsen and Xu, Weiran},
  journal={arXiv preprint arXiv:2306.10315},
  year={2023}
}