---
license: apache-2.0
datasets:
  - xmj2002/tang_poems
language:
  - zh
---

The pretrained model used is uer/gpt2-chinese-cluecorpussmall, fine-tuned on the xmj2002/tang_poems dataset.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xmj2002/gpt2_tang_poetry")
model = AutoModelForCausalLM.from_pretrained("xmj2002/gpt2_tang_poetry")

# Prompt format: author name followed by a poem title in《》marks
text = "白居易《远方》"
inputs = tokenizer(text, return_tensors="pt").input_ids

# Sample up to 100 new tokens, restricted to the top 100 candidates
# and the top-p (nucleus) 95% of the probability mass
outputs = model.generate(
    inputs, max_new_tokens=100, do_sample=True, top_k=100, top_p=0.95
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
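For intuition, here is a minimal pure-Python sketch of what the `top_p` (nucleus) filtering step in `generate` does conceptually: keep the smallest set of highest-probability tokens whose cumulative probability reaches `p`, then renormalize and sample only from that set. The function name and the toy distribution are illustrative, not part of the transformers API.

```python
def top_p_filter(probs, p=0.95):
    """Keep the smallest set of tokens (by descending probability)
    whose cumulative probability reaches p, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += probs[i]
        if total >= p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

# Toy distribution over 4 token ids: with p=0.9, tokens 0 and 1 cover
# only 0.8 of the mass, so token 2 is also kept (0.95 >= 0.9).
filtered = top_p_filter([0.5, 0.3, 0.15, 0.05], p=0.9)
```

Lowering `top_p` makes the output more conservative; raising it (toward 1.0) admits more low-probability tokens and makes the sampled poems more varied.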