
This model was fine-tuned from the pretrained model uer/gpt2-chinese-cluecorpussmall.

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xmj2002/gpt2_tang_poetry")
model = AutoModelForCausalLM.from_pretrained("xmj2002/gpt2_tang_poetry")

# Prompt format: poet name followed by a poem title in 《》 brackets.
text = "白居易《远方》"
inputs = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(
    inputs, max_new_tokens=100, do_sample=True, top_k=100, top_p=0.95
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
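The `do_sample=True, top_k=100, top_p=0.95` arguments select stochastic decoding: at each step, candidate tokens are first restricted to the `top_k` most likely, then further trimmed to the smallest set whose cumulative probability reaches `top_p`, and the next token is sampled from that set. A minimal illustrative sketch of the filtering step (for intuition only; not the actual transformers implementation, and the toy logits are made up):

```python
import math

def top_k_top_p_filter(logits, top_k=100, top_p=0.95):
    """Return the indices of tokens that survive top-k then top-p filtering."""
    # Sort token indices from most to least likely.
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    # top-k: keep only the k most likely tokens.
    keep = order[:top_k]
    # Softmax over the kept tokens (subtract max for numerical stability).
    mx = max(logits[i] for i in keep)
    exps = [math.exp(logits[i] - mx) for i in keep]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top-p (nucleus): keep the smallest prefix whose cumulative
    # probability reaches top_p; sampling then happens over this set.
    kept, cum = [], 0.0
    for tok, p in zip(keep, probs):
        kept.append(tok)
        cum += p
        if cum >= top_p:
            break
    return kept

# Toy vocabulary of 5 "tokens" with made-up logits.
print(top_k_top_p_filter([2.0, 1.0, 0.1, -1.0, -3.0], top_k=3, top_p=0.9))
# → [0, 1]
```

Raising `top_p` or `top_k` widens the candidate pool and makes the generated poems more varied; lowering them makes output more conservative.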
