# COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining

This model card describes the COCO-LM model (base++ version) proposed in [this paper](https://arxiv.org/abs/2102.08473). The official implementation is available in the [COCO-LM GitHub repository](https://github.com/microsoft/COCO-LM).

## Citation

If you find this model useful for your research, please cite the following paper:

```bibtex
@inproceedings{meng2021coco,
  title={{COCO-LM}: Correcting and contrasting text sequences for language model pretraining},
  author={Meng, Yu and Xiong, Chenyan and Bajaj, Payal and Tiwary, Saurabh and Bennett, Paul and Han, Jiawei and Song, Xia},
  booktitle={NeurIPS},
  year={2021}
}
```