added paper link to summary
README.md
### Dataset Summary

Tweet IDs for the 2.5 billion multilingual tweets used to train Bernice, a Twitter encoder.

Read the paper [here](https://preview.aclanthology.org/emnlp-22-ingestion/2022.emnlp-main.415).

The tweets are from the public 1% Twitter API stream from January 2016 to December 2021.
Twitter-provided language metadata is included with each tweet ID. The data contains 66 unique languages, identified by [ISO 639 language codes](https://www.wikiwand.com/en/List_of_ISO_639-1_codes), including `und` for undefined languages.
Tweets need to be re-gathered via the Twitter API. We suggest [Hydrator](https://github.com/DocNow/hydrator) or [tweepy](https://www.tweepy.org/).
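As a rough sketch, re-hydration with tweepy might look like the following. This is not part of the dataset card; the bearer token is a placeholder you must supply, and the helper names (`batch_ids`, `hydrate`) are illustrative. It assumes the Twitter API v2 tweet-lookup endpoint, which accepts up to 100 IDs per request:

```python
from typing import Iterable, Iterator, List


def batch_ids(ids: Iterable[str], size: int = 100) -> Iterator[List[str]]:
    """Yield tweet IDs in batches; the v2 lookup endpoint caps at 100 IDs per call."""
    batch: List[str] = []
    for tweet_id in ids:
        batch.append(tweet_id)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch


def hydrate(ids: Iterable[str], bearer_token: str):
    """Fetch full tweet objects for the given IDs via the Twitter API v2."""
    # Imported lazily so batch_ids() is usable even without tweepy installed.
    import tweepy

    client = tweepy.Client(bearer_token=bearer_token, wait_on_rate_limit=True)
    for batch in batch_ids(ids):
        # get_tweets silently skips IDs for deleted or protected tweets.
        response = client.get_tweets(ids=batch, tweet_fields=["lang", "created_at"])
        for tweet in response.data or []:
            yield tweet
```

Note that tweets deleted or made private since collection cannot be recovered, so the hydrated corpus will be somewhat smaller than 2.5 billion tweets.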