This model is fine-tuned from the pretrained model uer/gpt2-chinese-cluecorpussmall.

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xmj2002/gpt2_tang_poetry")
model = AutoModelForCausalLM.from_pretrained("xmj2002/gpt2_tang_poetry")

# Prompt format: author《title》; the model continues with the poem body.
text = "白居易《远方》"
inputs = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=100, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
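The prompt in the usage snippet follows the pattern `作者《标题》` (author《title》). A minimal sketch of building such prompts programmatically; the helper name `make_prompt` is hypothetical and not part of the model's API:

```python
def make_prompt(author: str, title: str) -> str:
    # Build a prompt in the "author《title》" form expected by this model;
    # generation then continues with the poem body.
    return f"{author}《{title}》"

print(make_prompt("白居易", "远方"))  # 白居易《远方》
```

Any author/title pair in this form can be used as a generation prompt.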
