
gaodrew/cicero is a language model pretrained from scratch on a dataset of Latin texts (Corpus Corporum), using the GPT-2 architecture with a 64-token context window. It was trained for 1 epoch over 492 million tokens, reaching a loss of 4.5. Its GPT-2 style tokenizer was trained on the same corpus with a min_frequency of 2000.
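A minimal sketch of how such a tokenizer could be trained with the Hugging Face `tokenizers` library; the corpus file path and vocab size are illustrative assumptions, only the min_frequency of 2000 comes from the card:

```python
from tokenizers import ByteLevelBPETokenizer

# GPT-2 uses a byte-level BPE tokenizer; this reproduces that style.
tokenizer = ByteLevelBPETokenizer()

# "latin_corpus.txt" and vocab_size=32000 are placeholder assumptions;
# min_frequency=2000 matches the value stated on the card.
tokenizer.train(
    files=["latin_corpus.txt"],
    vocab_size=32000,
    min_frequency=2000,
    special_tokens=["<|endoftext|>"],
)

# Writes vocab.json and merges.txt into an existing directory.
tokenizer.save_model("cicero-tokenizer")
```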

The model tends to get repetitive and is not very coherent, due to its small size and limited training data.
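Despite these limitations, the model can be loaded and sampled with the `transformers` library. A minimal sketch; the prompt and sampling settings are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gaodrew/cicero")
model = AutoModelForCausalLM.from_pretrained("gaodrew/cicero")

# The model was trained with a 64-token context, so keep prompts short.
inputs = tokenizer("Gallia est omnis divisa", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,   # stay within the 64-token context window
    do_sample=True,      # sampling settings here are illustrative
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```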

Model size: 99.3M params · Tensor type: F32 · Format: Safetensors
