---
language:
- ro
license: mit
tags:
- romanian
- text-generation
- causal-lm
- gpt-neo
---

# GPT-Neo Romanian 125M

This is a GPT-Neo transformer decoder model, based on EleutherAI's replication of the GPT-3 architecture.

It was trained on a thoroughly cleaned corpus of Romanian text of about 40 GB, composed of OSCAR, OPUS, Wikipedia, literature and various other pieces of text, joined together and deduplicated. It was trained for about a month, totaling 5.8M steps, on a TPU v3 machine.

```python
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

# Load the model and its tokenizer from the Hugging Face Hub
model = GPTNeoForCausalLM.from_pretrained("iliemihai/gpt-neo-romanian-125m")
tokenizer = GPT2Tokenizer.from_pretrained("iliemihai/gpt-neo-romanian-125m")

prompt = "Cine a fost mihai eminescu"  # "Who was Mihai Eminescu"

# Tokenize the prompt and generate with contrastive search
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, penalty_alpha=0.6, top_k=4, max_length=64)

result = tokenizer.decode(output[0], skip_special_tokens=True)
print(result)
```

### Authors:

* Dumitrescu Stefan
* Mihai Ilie

### Evaluation

Evaluation results will be added soon, also on [https://github.com/dumitrescustefan/Romanian-Transformers](https://github.com/dumitrescustefan/Romanian-Transformers)

### Acknowledgements

Thanks to the [TPU Research Cloud](https://sites.research.google/trc/about/) for the TPUv3 machine needed to train this model!
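
### Alternative usage

The same checkpoint can also be driven through the high-level `pipeline` API. This is a minimal sketch; the generation parameters mirror the contrastive-search settings above and are illustrative, not tuned values from the authors:

```python
from transformers import pipeline

# The pipeline loads the model and tokenizer in one call
generator = pipeline("text-generation", model="iliemihai/gpt-neo-romanian-125m")

# penalty_alpha/top_k enable contrastive search, as in the example above;
# max_length=64 is an illustrative choice
outputs = generator(
    "Cine a fost mihai eminescu",
    penalty_alpha=0.6,
    top_k=4,
    max_length=64,
)
print(outputs[0]["generated_text"])
```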