
DistilBERT model fine-tuned on 117,567 English-language tweets from a range of political institutions (EU_Commission, UN, OECD, IMF, ECB, Council of the European Union, UK government, Scottish government). Fine-tuning used a learning rate of 2e-5, a batch size of 16, a chunk_size of 50 tokens, and up to 100 epochs with early stopping (patience = 3 epochs) and 3 warmup epochs. More details can be found at: https://github.com/rbroc/eucomm-twitter. No evaluation was performed, as fine-tuning served only to provide checkpoints for a contextualized topic model.
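The chunk_size = 50 setting above refers to a common masked-language-model preprocessing step: tokenized tweets are concatenated and split into fixed-length chunks before training. The sketch below illustrates that step under these assumptions; the function name `group_texts` and the remainder-dropping behavior are illustrative, not taken from the linked repository.

```python
def group_texts(token_ids, chunk_size=50):
    """Concatenate lists of token ids and split them into fixed-size
    chunks, dropping the trailing remainder (a common MLM preprocessing
    convention)."""
    concatenated = [tid for ids in token_ids for tid in ids]
    total = (len(concatenated) // chunk_size) * chunk_size
    return [concatenated[i:i + chunk_size] for i in range(0, total, chunk_size)]

# Example with three short "tweets" of token ids and chunk_size=4:
chunks = group_texts([[1, 2, 3], [4, 5], [6, 7, 8, 9]], chunk_size=4)
# → [[1, 2, 3, 4], [5, 6, 7, 8]]; the leftover ninth token is dropped
```

With the model's actual chunk_size of 50, each training example is exactly 50 tokens long regardless of tweet boundaries.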
