---
tags:
- conversational
license: mit
---

## Chinese pre-trained dialogue model (CDial-GPT)

This project provides a large-scale Chinese GPT model pre-trained on the [LCCC](https://huggingface.co/datasets/silver/lccc) dataset.

We present a series of Chinese GPT models that are first pre-trained on a Chinese novel dataset and then post-trained on our LCCC dataset.

Similar to [TransferTransfo](https://arxiv.org/abs/1901.08149), we concatenate all dialogue histories into one context sentence and use this sentence to predict the response. The input of our model consists of word embeddings, speaker embeddings, and positional embeddings for each token.
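To make the input construction concrete, here is a minimal sketch of how a dialogue history might be turned into word ids plus speaker (segment) ids. The `[speaker1]`/`[speaker2]` token names follow the convention used in the CDial-GPT repository and should be treated as an assumption of this sketch; positional embeddings are added by the model itself.

```python
from transformers import BertTokenizer

# Sketch only: the [speaker1]/[speaker2] markers follow the CDial-GPT
# repository's convention and may differ for other checkpoints.
tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT_LCCC-large")

history = ["你好", "你是谁"]  # alternating turns: speaker1, speaker2

input_ids = [tokenizer.cls_token_id]
token_type_ids = [tokenizer.cls_token_id]
for i, turn in enumerate(history):
    speaker = "[speaker1]" if i % 2 == 0 else "[speaker2]"
    speaker_id = tokenizer.convert_tokens_to_ids(speaker)
    turn_ids = tokenizer.encode(turn, add_special_tokens=False)
    # word ids: a speaker marker followed by the turn's tokens
    input_ids += [speaker_id] + turn_ids
    # speaker ids: every token in the turn is tagged with its speaker
    token_type_ids += [speaker_id] * (1 + len(turn_ids))
```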

Paper: [A Large-Scale Chinese Short-Text Conversation Dataset](https://arxiv.org/pdf/2008.03946.pdf)

### How to use

```python
from transformers import BertTokenizer, OpenAIGPTLMHeadModel

# Load the tokenizer and the GPT checkpoint from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT_LCCC-large")
model = OpenAIGPTLMHeadModel.from_pretrained("thu-coai/CDial-GPT_LCCC-large")
```
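A response can then be sampled with `generate`. The decoding settings below (top-p sampling, length cap, stopping at `[SEP]`) are illustrative choices for this sketch, not values prescribed by the paper:

```python
import torch

# Build the history ids as in the sketch above (speaker-tagged turns).
history = ["你好", "你是谁"]
ids = [tokenizer.cls_token_id]
for i, turn in enumerate(history):
    speaker = "[speaker1]" if i % 2 == 0 else "[speaker2]"
    ids += [tokenizer.convert_tokens_to_ids(speaker)]
    ids += tokenizer.encode(turn, add_special_tokens=False)
input_ids = torch.tensor([ids])

# Illustrative decoding settings; tune for your use case.
output = model.generate(
    input_ids,
    max_length=input_ids.shape[-1] + 30,
    do_sample=True,
    top_p=0.9,
    eos_token_id=tokenizer.sep_token_id,
    pad_token_id=tokenizer.pad_token_id,
)

# Decode only the newly generated tokens; BertTokenizer inserts spaces
# between Chinese characters, so remove them for display.
response = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response.replace(" ", ""))
```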

For more details, please refer to our [repo](https://github.com/thu-coai/CDial-GPT) on GitHub.