---
language: 
  - bo
tags:
- tibetan
- pretrained causal language model
- roberta
widget:
- text: "རིན་"
- text: "རྫོགས་པའི་"
- text: "ཆོས་ཀྱི་"
- text: "གངས་རིའི་"
- text: "བོད་ཀྱི་སྨན་"
license: "mit"
---

# A demo for generating text with the `Tibetan Roberta Causal Language Model`

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = 'sangjeedondrub/tibetan-roberta-causal-base'
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

text_gen_pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)

init_text = 'རིན་'

# Sample 10 continuations of up to 200 new tokens each
outputs = text_gen_pipe(init_text,
                        do_sample=True,
                        max_new_tokens=200,
                        temperature=0.9,
                        top_k=10,
                        top_p=0.92,
                        num_return_sequences=10,
                        truncation=True)

for idx, output in enumerate(outputs, start=1):
    print(idx)
    print(output['generated_text'])
```
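The sampling arguments above interact in a fixed order: logits are divided by `temperature`, converted to probabilities, restricted to the `top_k` most likely tokens, and then further restricted to the smallest prefix whose cumulative mass reaches `top_p` before a token is drawn. A minimal pure-Python sketch of that filtering step (no model required; `filter_logits` and the example logits are illustrative, not part of the Transformers API):

```python
import math

def filter_logits(logits, temperature=0.9, top_k=10, top_p=0.92):
    """Return the renormalised distribution a sampler would draw from."""
    # Temperature scaling followed by a numerically stable softmax
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # top-k: consider only the k most probable token indices
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    candidates = order[:top_k]

    # top-p (nucleus): keep the smallest high-probability prefix
    # whose cumulative mass reaches top_p
    nucleus, cum = [], 0.0
    for i in candidates:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # renormalise over the surviving tokens
    mass = sum(probs[i] for i in nucleus)
    return {i: probs[i] / mass for i in nucleus}

# Toy vocabulary of 4 tokens: only the nucleus survives filtering
dist = filter_logits([2.0, 1.0, 0.5, -1.0], top_k=3, top_p=0.9)
```

With a low `top_p` the distribution collapses toward the single most likely token, which is why raising `temperature` and `top_p` together makes the generated Tibetan text more varied.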

# About

This model was trained and released by Sangjee Dondrub [sangjeedondrub at live dot com]. These experiments were conducted purely to improve my familiarity with the Transformers API.