The FinText models (base version), which are built on the RoBERTa architecture and contain 51.48 million parameters, have undergone further pre-training.
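
For context, the sketch below illustrates what further pre-training of a RoBERTa-style model typically looks like in practice: continued masked-language-model training on in-domain text with Hugging Face Transformers. This is a minimal, hedged example, not the authors' pipeline; the `roberta-base` checkpoint and the toy two-sentence corpus are placeholders, since the actual FinText weights and training data are not specified here.

```python
# Minimal sketch: continued masked-language-model (MLM) pre-training of a
# RoBERTa-based checkpoint with Hugging Face Transformers.
# NOTE: "roberta-base" and the toy corpus are ILLUSTRATIVE placeholders,
# not the actual FinText checkpoint or its financial-text training data.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Toy in-domain corpus; in practice this would be a large financial-text dataset.
corpus = Dataset.from_dict({
    "text": [
        "Quarterly earnings exceeded analyst expectations.",
        "The central bank left interest rates unchanged.",
    ]
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic token masking, as used by RoBERTa: 15% of tokens masked per batch.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="fintext-continued-pretraining",
    num_train_epochs=1,
    per_device_train_batch_size=2,
)

# Continued pre-training: the model resumes MLM training on the new corpus.
Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

The key design point is that further pre-training reuses the original self-supervised objective (masked-language modeling) on domain text, rather than a labeled downstream task, so the model adapts its representations before any fine-tuning.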