mrrano committed
Commit
e24bc52
1 Parent(s): c3c2888

Update README.md


from transformers import GPT2LMHeadModel, GPT2Tokenizer

# You can also use sberbank-ai/rugpt3xl
model_name_or_path = "sberbank-ai/rugpt3large_based_on_gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name_or_path)
model = GPT2LMHeadModel.from_pretrained(model_name_or_path).cpu()  # keep the model on CPU

text = "Иисус Христос родился в "  # "Jesus Christ was born in "
input_ids = tokenizer.encode(text, return_tensors="pt")
out = model.generate(input_ids)
generated_text = list(map(tokenizer.decode, out))[0]
print(generated_text)
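
The snippet above calls `generate` with its defaults, which yields a single, fairly short greedy continuation. As a rough sketch, you can pass standard `transformers` sampling arguments to get longer and more varied outputs; the parameter values below are illustrative and not taken from the model card:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name_or_path = "sberbank-ai/rugpt3large_based_on_gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name_or_path)
model = GPT2LMHeadModel.from_pretrained(model_name_or_path).cpu()

text = "Иисус Христос родился в "  # "Jesus Christ was born in "
input_ids = tokenizer.encode(text, return_tensors="pt")

# Sample several continuations instead of taking the single greedy one.
out = model.generate(
    input_ids,
    max_length=64,           # total length in tokens, prompt included
    do_sample=True,          # sample from the distribution instead of greedy decoding
    top_k=50,                # restrict sampling to the 50 most likely next tokens
    top_p=0.95,              # nucleus sampling threshold
    num_return_sequences=3,  # return three candidate continuations
)
for sequence in out:
    print(tokenizer.decode(sequence, skip_special_tokens=True))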

Files changed (1)
  1. README.md +11 -0
README.md CHANGED
@@ -14,3 +14,14 @@ Model was trained with 512 sequence length using [Deepspeed](https://github.com/
  Total training time was around 10 days on 256 GPUs.
  Final perplexity on test set is `12.05`.
  Model parameters: 1.3B.
+ from transformers import GPT2LMHeadModel, GPT2Tokenizer
+ # You can also use sberbank-ai/rugpt3xl
+ model_name_or_path = "sberbank-ai/rugpt3large_based_on_gpt2"
+ tokenizer = GPT2Tokenizer.from_pretrained(model_name_or_path)
+ model = GPT2LMHeadModel.from_pretrained(model_name_or_path).cpu()
+ text = "Иисус Христос родился в "
+ input_ids = tokenizer.encode(text, return_tensors="pt")
+ out = model.generate(input_ids)
+ generated_text = list(map(tokenizer.decode, out))[0]
+ print(generated_text)
+