
A demo for generating text with the Tibetan RoBERTa causal language model:

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the model and tokenizer from the Hugging Face Hub
model_name = 'sangjeedondrub/tibetan-roberta-causal-base'
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a text-generation pipeline around the model
text_gen_pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)

# Tibetan prompt to start generation from
init_text = 'རིན་'

# Sample 10 continuations of up to 200 new tokens each
outputs = text_gen_pipe(
    init_text,
    do_sample=True,
    max_new_tokens=200,
    temperature=0.9,
    top_k=10,
    top_p=0.92,
    num_return_sequences=10,
    truncation=True
)

for idx, output in enumerate(outputs, start=1):
    print(idx)
    print(output['generated_text'])
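
The same generation step can also be run without the pipeline helper by calling model.generate directly. The sketch below is a minimal example, not part of the original demo; it assumes the model, tokenizer, and init_text variables defined above and mirrors the same sampling parameters.

import torch

# Tokenize the prompt and sample continuations directly from the model
inputs = tokenizer(init_text, return_tensors='pt')
with torch.no_grad():
    generated = model.generate(
        **inputs,
        do_sample=True,
        max_new_tokens=200,
        temperature=0.9,
        top_k=10,
        top_p=0.92,
        num_return_sequences=10,
        pad_token_id=tokenizer.pad_token_id
    )

# Decode each sampled sequence back into Tibetan text
for idx, sequence in enumerate(generated, start=1):
    print(idx)
    print(tokenizer.decode(sequence, skip_special_tokens=True))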

About

This model was trained and released by Sangjee Dondrub [sangjeedondrub at live dot com]. The sole purpose of these experiments is to improve my familiarity with the Transformers APIs.
