Chinese pre-trained dialogue model (CDial-GPT)

This project provides a large-scale Chinese GPT model pre-trained on the dataset LCCC.

We present a series of Chinese GPT models that are first pre-trained on a Chinese novel dataset and then post-trained on our LCCC dataset.

Similar to TransferTransfo, we concatenate all dialogue histories into one context sentence and use this sentence to predict the response. The input to our model consists of the word embedding, speaker embedding, and positional embedding of each token.
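The input layout described above can be sketched as plain Python. This is an illustrative reconstruction, not the authors' exact code: the special tokens `[CLS]`, `[speaker1]`, and `[speaker2]` and the per-character splitting follow the conventions of TransferTransfo-style Chinese dialogue models, and `build_input` is a hypothetical helper name.

```python
# Sketch: flatten a multi-turn dialogue into one token sequence with a
# parallel speaker-id sequence (used for the speaker embeddings).
# Assumption: turns alternate between [speaker1] and [speaker2], and the
# model is asked to continue the sequence as the responder.

def build_input(history, responder="[speaker2]"):
    """Concatenate dialogue turns into token and speaker-id sequences."""
    tokens, speakers = ["[CLS]"], ["[speaker1]"]
    for i, turn in enumerate(history):
        spk = "[speaker1]" if i % 2 == 0 else "[speaker2]"
        tokens += [spk] + list(turn)          # Chinese text splits per character
        speakers += [spk] * (len(turn) + 1)   # one speaker id per token
    # Append the responder token so generation continues as that speaker.
    tokens.append(responder)
    speakers.append(responder)
    return tokens, speakers

tokens, speakers = build_input(["你好", "你好，很高兴认识你"])
print(tokens[:4])  # ['[CLS]', '[speaker1]', '你', '好']
```

Positional embeddings are then simply the token index within this flattened sequence.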

Paper: A Large-Scale Chinese Short-Text Conversation Dataset

How to use

from transformers import OpenAIGPTLMHeadModel, GPT2LMHeadModel, BertTokenizer
import torch

# The GPT2-based checkpoint loads with GPT2LMHeadModel; the GPT-based
# checkpoints (e.g. thu-coai/CDial-GPT_LCCC-base) load with
# OpenAIGPTLMHeadModel instead.
tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
model = GPT2LMHeadModel.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
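Once loaded, the LM head is used autoregressively: at each step the model scores the next token and the chosen token is appended to the context. The toy sketch below shows only that decoding loop, with a hypothetical `toy_logits` function standing in for the real model's output (in practice you would call the model on tokenized input, or use its `generate` method).

```python
# Toy sketch of greedy autoregressive decoding. toy_logits is a stand-in
# for the language model's next-token scores over a tiny 5-token vocab.

def toy_logits(ids):
    # Pretend "model": always scores (last id + 1) % 5 highest.
    target = (ids[-1] + 1) % 5
    return [1.0 if i == target else 0.0 for i in range(5)]

def greedy_decode(prompt_ids, eos_id, max_new_tokens=10):
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = toy_logits(ids)
        next_id = max(range(len(logits)), key=logits.__getitem__)
        ids.append(next_id)          # extend the context with the new token
        if next_id == eos_id:        # stop at the end-of-sequence token
            break
    return ids

print(greedy_decode([0], eos_id=4))  # [0, 1, 2, 3, 4]
```

The released models are typically decoded with sampling (top-k / top-p) rather than pure greedy search; the loop structure is the same.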

For more details, please refer to our repo on GitHub.
