---
annotations_creators:
- no-annotation
language:
- en
- es
- pt
- ja
- ar
- in
- ko
- tr
- fr
- tl
- ru
- it
- th
- de
- hi
- pl
- nl
- fa
- et
- ht
- ur
- sv
- ca
- el
- fi
- cs
- iw
- da
- vi
- zh
- ta
- ro
- no
- uk
- cy
- ne
- hu
- eu
- sl
- lv
- lt
- bn
- sr
- bg
- mr
- ml
- is
- te
- gu
- kn
- ps
- ckb
- si
- hy
- or
- pa
- am
- sd
- my
- ka
- km
- dv
- lo
- ug
- bo
language_creators:
- found
license:
- mit
multilinguality:
- multilingual
pretty_name: Bernice Pretrain Data
size_categories:
- 1B<n<10B
---

# Dataset Card for Bernice Pretrain Data

### Citation Information

Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, Philip Resnik, and Mark Dredze. 2022. Bernice: A Multilingual Pre-trained Encoder for Twitter. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6191–6205, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

### Contributions

Dataset uploaded by [@AADeLucia](https://github.com/AADeLucia).
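
### Loading the data

As a usage sketch, the snippet below shows one way to stream the corpus with the Hugging Face `datasets` library. The repository id is a placeholder for wherever this card is hosted, and streaming is an assumption motivated by the size category above; neither detail is specified by the card itself, and the fields of each example depend on the files actually uploaded.

```python
# Minimal sketch: iterate over the dataset in streaming mode so the full
# multi-billion-row corpus is never downloaded up front.
from datasets import load_dataset

dataset = load_dataset(
    "<namespace>/bernice-pretrain-data",  # placeholder repository id
    split="train",
    streaming=True,  # avoid materializing the whole corpus locally
)

# Peek at the first few records; available fields depend on the
# files in the repository.
for i, example in enumerate(dataset):
    print(example)
    if i >= 4:
        break
```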