---
language:
- ro
license: mit

tags:
- romanian
- text-generation
- causal-lm
- gpt-neo
---

# GPT-Neo Romanian 125M

This is a GPT-Neo transformer decoder model based on EleutherAI's replication of the GPT-3 architecture.

It was trained on a thoroughly cleaned and deduplicated Romanian corpus of about 40 GB, composed of OSCAR, OPUS, Wikipedia, literature, and various other text sources. Training took about a month, totaling 5.8M steps on a TPU v3 machine.

```python
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

model = GPTNeoForCausalLM.from_pretrained("iliemihai/gpt-neo-romanian-125m")
tokenizer = GPT2Tokenizer.from_pretrained("iliemihai/gpt-neo-romanian-125m")

prompt = "Cine a fost mihai eminescu"  # "Who was Mihai Eminescu"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# penalty_alpha together with top_k enables contrastive search decoding
output = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=64)
result = tokenizer.decode(output[0], skip_special_tokens=True)

print(result)
```
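
Alternatively, a minimal sketch using the Transformers `pipeline` API; the sampling parameters below are illustrative, not tuned values:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="iliemihai/gpt-neo-romanian-125m")

# sample with top-k / nucleus sampling; these parameter values are only examples
outputs = generator(
    "Cine a fost Mihai Eminescu?",  # "Who was Mihai Eminescu?"
    max_length=64,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```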

### Authors
* Dumitrescu Stefan
* Mihai Ilie

### Evaluation

Evaluation results will be added soon, and will also be published at [https://github.com/dumitrescustefan/Romanian-Transformers](https://github.com/dumitrescustefan/Romanian-Transformers).
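
In the meantime, a minimal sketch for a rough perplexity check; the sample sentence is illustrative and not part of any official benchmark:

```python
import torch
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

model = GPTNeoForCausalLM.from_pretrained("iliemihai/gpt-neo-romanian-125m")
tokenizer = GPT2Tokenizer.from_pretrained("iliemihai/gpt-neo-romanian-125m")
model.eval()

# illustrative sample text: "Mihai Eminescu was a Romanian poet."
text = "Mihai Eminescu a fost un poet român."
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    # passing labels=input_ids makes the model return the mean cross-entropy loss
    loss = model(input_ids, labels=input_ids).loss

# perplexity is the exponential of the average negative log-likelihood
print(f"Perplexity: {torch.exp(loss).item():.2f}")
```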

### Acknowledgements

Thanks to the [TPU Research Cloud](https://sites.research.google/trc/about/) for providing the TPU v3 machine used to train this model!