added loading demo
README.md CHANGED
@@ -132,6 +132,24 @@ The tweets are from the public 1% Twitter API stream from January 2016 to Decemb
 Twitter-provided language metadata is included with each tweet ID. The data contains 66 unique languages, as identified by [ISO 639 language codes](https://www.wikiwand.com/en/List_of_ISO_639-1_codes), including `und` for undefined languages.
 Tweets need to be re-gathered via the Twitter API. We suggest [Hydrator](https://github.com/DocNow/hydrator) or [tweepy](https://www.tweepy.org/).
 
+To load with HuggingFace `datasets`:
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("jhu-clsp/bernice-pretrain-data")
+
+for i, row in enumerate(dataset["train"]):
+    print(row)
+    if i > 10:
+        break
+```
+
+If you only want the Indic languages, use:
+
+```python
+dataset = load_dataset("jhu-clsp/bernice-pretrain-data", "indic")
+```
+
 
 ### Supported Tasks and Leaderboards
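As a note on the re-gathering step in the hunk above: the README recommends Hydrator or tweepy, but the basic pattern is just batched ID lookup. Below is a minimal, stdlib-only sketch of that pattern. The endpoint URL and the 100-ID-per-request limit come from Twitter's public v2 tweet-lookup docs; the `batches`, `lookup_url`, and `hydrate` helpers are illustrative names, not part of this dataset's tooling, and a real run requires a valid bearer token.

```python
# Hypothetical sketch of re-hydrating tweet IDs via the Twitter v2 lookup
# endpoint. Helper names and token handling are placeholders for illustration.
import json
import urllib.request

LOOKUP_URL = "https://api.twitter.com/2/tweets"
BATCH = 100  # the v2 tweet lookup accepts at most 100 IDs per request

def batches(ids, size=BATCH):
    """Split a list of tweet ID strings into request-sized chunks."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def lookup_url(id_batch):
    """Build a lookup URL requesting the language field for each tweet."""
    return f"{LOOKUP_URL}?ids={','.join(id_batch)}&tweet.fields=lang"

def hydrate(ids, bearer_token):
    """Yield tweet objects for the given IDs (requires valid credentials)."""
    for chunk in batches(ids):
        req = urllib.request.Request(
            lookup_url(chunk),
            headers={"Authorization": f"Bearer {bearer_token}"},
        )
        with urllib.request.urlopen(req) as resp:
            yield from json.load(resp).get("data", [])
```

Libraries like tweepy wrap the same endpoint with retry and rate-limit handling, which is why the README points to them for bulk re-hydration.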