Description

Chinese T5-base model, initialized from Langboat/mengzi-t5-base and continuously pre-trained on 1.4GB of Chinese recipe text.

This model accompanies the paper DiNeR: A Large Realistic Dataset for Evaluating Compositional Generalization.

Usage

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Jumpy-pku/t5-recipe-continue-pretrained")
model = T5ForConditionalGeneration.from_pretrained("Jumpy-pku/t5-recipe-continue-pretrained")

Citation

If you find this model or the accompanying dataset useful, please cite the following paper.

@inproceedings{hu-etal-2023-diner,
    title = "{D}i{N}e{R}: A Large Realistic Dataset for Evaluating Compositional Generalization",
    author = "Hu, Chengang  and
      Liu, Xiao  and
      Feng, Yansong",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.924",
    doi = "10.18653/v1/2023.emnlp-main.924",
    pages = "14938--14947",
}