Tasks: Other
Languages: Arabic
Multilinguality: monolingual
Size Categories: 1M<n<10M
Language Creators: found
Annotations Creators: no-annotation
Source Datasets: original
Tags: data-mining
albertvillanova committed
Commit 7b74c1d
1 Parent(s): ab307c1

Host data files and update dates (#1)

- Add data file (0f26a0cc23ea8aa19afcb11a79b0eba4630a2f42)
- Update loading script (f18514cd1fd20dbda5b98f28a5304fb452709064)
- Update metadata num_examples from 1090591 to 2541575 (f0059d7d03589a431143b09336c3f8210680ccd5)
- Update script to iterate data files for all dates (5b609e65a8b2842d4f0b8dbab2c70585fde7cf6d)
- Update metadata num_examples from 2541575 to 3140158 (86fb2fd673767d9dbcc967010320ea1453928272)
- Update metadata dates in dataset card (09034ad1097e28c418049ff188807a75978cc6d1)
- Delete legacy dataset_infos.json (4b119b723b16ee8bb7b020a558d3c7513f68c07e)

Files changed (4)
  1. README.md +22 -10
  2. ar_cov19.py +5 -4
  3. dataset-all_tweets.zip +3 -0
  4. dataset_infos.json +0 -1
README.md CHANGED
@@ -19,16 +19,16 @@ pretty_name: ArCOV19
 tags:
 - data-mining
 dataset_info:
+  config_name: ar_cov19
   features:
   - name: tweetID
     dtype: int64
-  config_name: ar_cov19
   splits:
   - name: train
-    num_bytes: 8724728
-    num_examples: 1090591
-  download_size: 54902390
-  dataset_size: 8724728
+    num_bytes: 25121264
+    num_examples: 3140158
+  download_size: 23678407
+  dataset_size: 25121264
 ---
 
 # Dataset Card for ArCOV19
@@ -67,7 +67,15 @@ dataset_info:
 
 ### Dataset Summary
 
-ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27th of January till 30th of April 2020. ArCOV-19 is the first publicly-available Arabic Twitter dataset covering COVID-19 pandemic that includes over 1M tweets alongside the propagation networks of the most-popular subset of them (i.e., most-retweeted and-liked). The propagation networks include both retweets and conversational threads (i.e., threads of replies). ArCOV-19 is designed to enable research under several domains including natural language processing, information retrieval, and social computing, among others. Preliminary analysis shows that ArCOV-19 captures rising discussions associated with the first reported cases of the disease as they appeared in the Arab world. In addition to the source tweets and the propagation networks, we also release the search queries and the language-independent crawler used to collect the tweets to encourage the curation of similar datasets.
+ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27th of January till 5th of May 2021.
+ArCOV-19 is the first publicly-available Arabic Twitter dataset covering COVID-19 pandemic that includes about 3.2M
+tweets alongside the propagation networks of the most-popular subset of them (i.e., most-retweeted and-liked).
+The propagation networks include both retweets and conversational threads (i.e., threads of replies).
+ArCOV-19 is designed to enable research under several domains including natural language processing, information
+retrieval, and social computing, among others. Preliminary analysis shows that ArCOV-19 captures rising discussions
+associated with the first reported cases of the disease as they appeared in the Arab world. In addition to the source
+tweets and the propagation networks, we also release the search queries and the language-independent crawler used to
+collect the tweets to encourage the curation of similar datasets.
 
 ### Supported Tasks and Leaderboards
 
@@ -155,12 +163,16 @@ No annotation was provided with the dataset.
 
 ### Citation Information
 
+```
 @article{haouari2020arcov19,
-  title={ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks},
-  author={Fatima Haouari and Maram Hasanain and Reem Suwaileh and Tamer Elsayed},
-  journal={arXiv preprint arXiv:2004.05861},
-  year={2020}
+  title={ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks},
+  author={Fatima Haouari and Maram Hasanain and Reem Suwaileh and Tamer Elsayed},
+  year={2021},
+  eprint={2004.05861},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
 }
+```
 
 ### Contributions
 
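The `dataset_info` front matter updated above records the new split statistics. As a quick standalone sketch (assuming PyYAML; the string simply mirrors the new front matter), parsing that block surfaces the updated counts:

```python
# Parse the updated dataset_info block from the README front matter.
# The embedded string mirrors the new metadata shown in the diff above.
import yaml

metadata = yaml.safe_load("""
dataset_info:
  config_name: ar_cov19
  features:
  - name: tweetID
    dtype: int64
  splits:
  - name: train
    num_bytes: 25121264
    num_examples: 3140158
  download_size: 23678407
  dataset_size: 25121264
""")

train = metadata["dataset_info"]["splits"][0]
print(train["num_examples"])  # 3140158
```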
ar_cov19.py CHANGED
@@ -48,7 +48,7 @@ _HOMEPAGE = "https://gitlab.com/bigirqu/ArCOV-19"
 # TODO: Add link to the official dataset URLs here
 # The HuggingFace dataset library don't host the datasets but only point to the original files
 # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
-_URL = "https://gitlab.com/bigirqu/ArCOV-19/-/archive/master/ArCOV-19-master.zip"
+_URL = "dataset-all_tweets.zip"
 # _URL="https://gitlab.com/bigirqu/ArCOV-19/-/archive/master/ArCOV-19-master.zip?path=dataset/all_tweets"
 
 
@@ -121,14 +121,15 @@ class ArCov19(datasets.GeneratorBasedBuilder):
         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
         data_dir = dl_manager.download_and_extract(_URL)
-        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"data_dir": data_dir})]
+        data_files = dl_manager.iter_files(data_dir)
+        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"data_files": data_files})]
 
-    def _generate_examples(self, data_dir):
+    def _generate_examples(self, data_files):
         """Yields examples."""
         # TODO: This method will receive as arguments the `gen_kwargs` defined in the previous `_split_generators` method.
         # It is in charge of opening the given file and yielding (key, example) tuples from the dataset
         # The key is not important, it's more here for legacy reason (legacy from tfds)
-        for fname in sorted(glob.glob(os.path.join(data_dir, "ArCOV-19-master/dataset/all_tweets/2020-*"))):
+        for fname in data_files:
 
             df = pd.read_csv(fname, names=["tweetID"])
             for id_, record in df.iterrows():
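The updated script iterates every file in the extracted archive and reads each one as a headerless CSV of tweet IDs. A minimal runnable sketch of that `_generate_examples` loop, using a throwaway file in place of the extracted archive (the file name `2020-01-27` and the tweet IDs are made-up stand-ins):

```python
# Standalone sketch mirroring the script's _generate_examples: each data
# file is a headerless, one-column CSV of tweet IDs, read with pandas.
import os
import tempfile

import pandas as pd


def generate_examples(data_files):
    """Yield (key, example) pairs, one example per tweet ID."""
    key = 0
    for fname in data_files:
        df = pd.read_csv(fname, names=["tweetID"])
        for _, record in df.iterrows():
            yield key, {"tweetID": record["tweetID"]}
            key += 1


# Simulate one extracted per-date dump (illustrative IDs, not real tweets).
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "2020-01-27")
with open(path, "w") as f:
    f.write("1221738291201933312\n1221747485410504704\n")

examples = list(generate_examples([path]))
print(len(examples))  # 2
```

Passing an iterator of file paths instead of globbing a hard-coded `2020-*` pattern is what lets the script pick up files for all dates, per the commit message.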
dataset-all_tweets.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c503fcf94f768847e59323aca91052bf22c3f102a01ece358ea421cf3abcbde
+size 23678407
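The three lines above are a Git LFS pointer: the archive itself lives in LFS storage, identified by its SHA-256 digest (`oid`) and byte size. A sketch of how a downloaded file could be checked against such a pointer, demonstrated on a throwaway file rather than the real archive:

```python
# Compute the (oid, size) pair a Git LFS pointer records for a file,
# hashing in chunks so large archives don't need to fit in memory.
import hashlib
import os
import tempfile


def lfs_fields(path):
    """Return (sha256 hex digest, size in bytes) for the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest(), os.path.getsize(path)


# Demo on a small temporary file (dataset-all_tweets.zip is not fetched here).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    demo_path = f.name

oid, size = lfs_fields(demo_path)
print(oid, size)
```

Verifying a real download would mean comparing these two values against the `oid sha256:` and `size` fields of the pointer.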
dataset_infos.json DELETED
@@ -1 +0,0 @@
-{"ar_cov19": {"description": "ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27th of January till 30th of April 2020. ArCOV-19 is designed to enable research under several domains including natural language processing, information retrieval, and social computing, among others\n", "citation": "@article{haouari2020arcov19,\n title={ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks},\n author={Fatima Haouari and Maram Hasanain and Reem Suwaileh and Tamer Elsayed},\n journal={arXiv preprint arXiv:2004.05861},\n year={2020}\n", "homepage": "https://gitlab.com/bigirqu/ArCOV-19", "license": "", "features": {"tweetID": {"dtype": "int64", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "ar_cov19", "config_name": "ar_cov19", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8724728, "num_examples": 1090591, "dataset_name": "ar_cov19"}}, "download_checksums": {"https://gitlab.com/bigirqu/ArCOV-19/-/archive/master/ArCOV-19-master.zip": {"num_bytes": 54902390, "checksum": "96211408035f8082c072a5eb4fbccf28ef4de00379abfc9523c552ab84646579"}}, "download_size": 54902390, "post_processing_size": null, "dataset_size": 8724728, "size_in_bytes": 63627118}}