robcaulk committed on
Commit
8638fb5
1 Parent(s): 6c55b5c

chore: write README

README.md CHANGED
@@ -18,41 +18,70 @@ This dataset card aims to be a base template for new datasets. It has been gener
18
 
19
 
20
 
21
- - **Curated by:** [More Information Needed]
22
- - **Funded by [optional]:** [More Information Needed]
23
- - **Shared by [optional]:** [More Information Needed]
24
- - **Language(s) (NLP):** [More Information Needed]
25
- - **License:** [More Information Needed]
26
 
27
- ### Dataset Sources [optional]
28
 
29
  <!-- Provide the basic links for the dataset. -->
30
 
31
- - **Repository:** [More Information Needed]
32
- - **Paper [optional]:** [More Information Needed]
33
  - **Demo [optional]:** [More Information Needed]
34
 
35
  ## Uses
36
 
37
  <!-- Address questions around how the dataset is intended to be used. -->
38
 
39
- ### Direct Use
40
 
41
- <!-- This section describes suitable use cases for the dataset. -->
42
 
43
- [More Information Needed]
44
-
45
- ### Out-of-Scope Use
46
-
47
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
48
-
49
- [More Information Needed]
50
 
51
  ## Dataset Structure
52
 
53
  <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
54
 
55
- [More Information Needed]
56
 
57
  ## Dataset Creation
58
 
@@ -60,57 +89,82 @@ This dataset card aims to be a base template for new datasets. It has been gener
60
 
61
  <!-- Motivation for the creation of this dataset. -->
62
 
63
- [More Information Needed]
64
 
65
  ### Source Data
66
 
67
  <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
68
 
 
 
69
  #### Data Collection and Processing
70
 
71
  <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
72
 
73
- [More Information Needed]
 
 
 
 
74
 
75
  #### Who are the source data producers?
76
 
77
  <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
78
 
79
- [More Information Needed]
80
-
81
- ### Annotations [optional]
82
 
83
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
84
 
85
  #### Annotation process
86
 
87
  <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
88
 
89
- [More Information Needed]
90
 
91
  #### Who are the annotators?
92
 
93
  <!-- This section describes the people or systems who created the annotations. -->
94
 
95
- [More Information Needed]
96
 
97
  #### Personal and Sensitive Information
98
 
99
  <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
100
 
101
- [More Information Needed]
102
 
103
  ## Bias, Risks, and Limitations
104
 
105
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
106
 
107
- [More Information Needed]
 
 
108
 
109
  ### Recommendations
110
 
111
  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
112
 
113
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
114
 
115
  ## Citation [optional]
116
 
@@ -124,20 +178,9 @@ Users should be made aware of the risks, biases and limitations of the dataset.
124
 
125
  [More Information Needed]
126
 
127
- ## Glossary [optional]
128
 
129
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
130
-
131
- [More Information Needed]
132
-
133
- ## More Information [optional]
134
-
135
- [More Information Needed]
136
-
137
- ## Dataset Card Authors [optional]
138
-
139
- [More Information Needed]
140
 
141
- ## Dataset Card Contact
 
142
 
143
- [More Information Needed]
 
18
 
19
 
20
 
21
+ - **Curated by:** [Emergent Methods](https://www.emergentmethods.ai/)
22
+ - **Funded by:** [Emergent Methods](https://www.emergentmethods.ai/)
23
+ - **Shared by:** [Emergent Methods](https://www.emergentmethods.ai/)
24
+ - **Language(s) (NLP):** English (en), plus translations into French, Spanish, German, Swedish, Italian, Arabic, Chinese, Norwegian, Danish, Portuguese, Dutch, Russian, and Ukrainian
25
+ - **License:** Apache 2.0
26
 
27
+ ### Dataset Sources
28
 
29
  <!-- Provide the basic links for the dataset. -->
30
 
31
+ - **Repository:** [AskNews API](https://docs.asknews.app)
32
+ - **Paper:** [More Information Needed]
33
  - **Demo [optional]:** [More Information Needed]
34
 
35
  ## Uses
36
 
37
  <!-- Address questions around how the dataset is intended to be used. -->
38
 
39
+ This dataset is intended for fine-tuning entity extractors to improve generalization, as well as accuracy on the latest news events. For example, we used this dataset to fine-tune `GLiNER-news`, a version of `GLiNER` geared toward improved entity extraction on news articles. The fine-tune improved performance on nearly all benchmarks (even beyond news).
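+
+ As a rough illustration of the intended use, a GLiNER-style model can be run with the `gliner` library; the checkpoint id below is a placeholder, not a model name confirmed by this card:
+
+ ```python
+ # Sketch only: substitute the actual fine-tuned GLiNER-news checkpoint.
+ from gliner import GLiNER
+
+ model = GLiNER.from_pretrained("EmergentMethods/gliner-news")  # hypothetical id
+
+ text = "The European Space Agency launched a new satellite from Kourou on Monday."
+ labels = ["organization", "location", "date"]
+
+ # predict_entities returns dicts with the matched text, label, and offsets.
+ for entity in model.predict_entities(text, labels):
+     print(entity["text"], "->", entity["label"])
+ ```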
40
 
 
41
42
 
43
  ## Dataset Structure
44
 
45
  <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
46
 
47
+ The dataset is structured as follows:
48
+
49
+ ```
50
+ 5049-formatted-summaries_llama3-dataset_splits.json
51
+ - train
52
+ - test
53
+ - validation
54
+ ```
55
+
56
+ Each split is a list of JSON objects, where each sample is structured as follows:
57
+
58
+ ```json
59
+ {
60
+ "metadata": {
61
+ "source_country": <country str>,
62
+ "article_language": <language str>,
63
+ "article_pubDate": <pub_date datetime>,
64
+ "topic-classification": [
65
+ <topic classification str>
66
+ ],
67
+ "articleId": <AskNews article uuid>
68
+ },
69
+ "tokenized_text": [
70
+ <word string>,
71
+ <word string>,
72
+ ...
73
+ ],
74
+ "ner": [
75
+ [
76
+ <Start word int>,
77
+ <Stop word int>,
78
+ <Entity str>
79
+ ],
80
+ ...
81
+ ]
82
+ },
83
+ ...
84
+ ```
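+
+ For concreteness, here is a minimal loading sketch, assuming the splits file sits in the working directory and that the `ner` spans use inclusive word indices (an assumption based on the schema above):
+
+ ```python
+ import json
+
+ with open("5049-formatted-summaries_llama3-dataset_splits.json") as f:
+     splits = json.load(f)
+
+ sample = splits["train"][0]
+
+ # Reconstruct each entity's surface form from its word-level span.
+ for start, stop, entity_type in sample["ner"]:
+     surface = " ".join(sample["tokenized_text"][start : stop + 1])
+     print(f"{entity_type}: {surface}")
+ ```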
85
 
86
  ## Dataset Creation
87
 
 
89
 
90
  <!-- Motivation for the creation of this dataset. -->
91
 
92
+ This dataset was created in an effort to improve the representation of underrepresented topics and entities in entity extractors, thereby improving entity extraction accuracy and generalization. The pre-processing pipeline for this dataset follows a strict set of steps:
93
+
94
+ [AskNews API](https://docs.asknews.app):
95
+ 1. Enforce diversity in the collection of news articles across countries, languages, and sources.
96
+ 2. Translate and summarize the articles with Llama2.
97
+ 3. Embed the summaries into vectors.
98
+
99
+ Present dataset curation:
100
+ 4. Cluster the embeddings by topic within each 4-hour bucket of articles spanning February 20 to March 30, 2024 (a sketch follows this list).
101
+ 5. Pull samples from clusters, distributing evenly across country of origin.
102
+ 6. Extract entities from each summary using Llama3.
103
+
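+ A minimal sketch of the clustering step (step 4), assuming pre-computed summary embeddings; the algorithm and parameters shown are assumptions, not the documented pipeline:
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import HDBSCAN  # requires scikit-learn >= 1.3
+
+ def cluster_bucket(embeddings: np.ndarray) -> np.ndarray:
+     """Group one 4-hour bucket of summary embeddings into topic clusters."""
+     # min_cluster_size is an assumed parameter; a label of -1 marks noise.
+     return HDBSCAN(min_cluster_size=5).fit_predict(embeddings)
+
+ labels = cluster_bucket(np.random.rand(200, 768))  # dummy embeddings
+ ```
+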
104
+ ![countries distribution](figures/countries_distribution.png)
105
+
106
+ The data was used to train `GLiNER-news`, a fine-tuned version of `GLiNER` geared toward improved entity extraction on news articles. The fine-tune improved performance on nearly all benchmarks (even beyond news):
107
+
108
+ ![topic distribution](figures/topics_fig_connected.png)
109
+
110
+ The entity types in the dataset are limited to the following:
111
+
112
+ ![entity-types](figures/entity-types_limited.png)
113
 
114
  ### Source Data
115
 
116
  <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
117
 
118
+ The synthetic data is pulled from the [AskNews API](https://docs.asknews.app), which uses Llama2/3 to generate news translations and summaries from open-web news content.
119
+
120
  #### Data Collection and Processing
121
 
122
  <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
123
 
124
+ The [AskNews API](https://docs.asknews.app) uses open-web news articles to generate synthetic data (news article summaries) with Llama2/3. This dataset was pulled from the API by querying 4-hour buckets of articles between February 20 and March 31, 2024. These buckets were then processed with steps 4-6 of the pipeline described above (a sketch of the bucketing loop follows the list):
125
+
126
+ 4. Cluster the embeddings by topic within each 4-hour bucket of articles.
127
+ 5. Pull samples from clusters, distributing evenly across country of origin.
128
+ 6. Extract entities from each summary using Llama3.
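+
+ A sketch of the 4-hour bucketing loop described above, using a hypothetical `fetch_articles` helper in place of the actual AskNews API client:
+
+ ```python
+ from datetime import datetime, timedelta
+
+ def fetch_articles(start: datetime, end: datetime) -> list:
+     """Hypothetical stand-in for an AskNews API query over [start, end)."""
+     raise NotImplementedError
+
+ start = datetime(2024, 2, 20)
+ end = datetime(2024, 3, 31)
+ bucket = timedelta(hours=4)
+
+ t = start
+ while t < end:
+     articles = fetch_articles(t, t + bucket)
+     # ... cluster, sample by country, and extract entities per the steps above
+     t += bucket
+ ```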
129
 
130
  #### Who are the source data producers?
131
 
132
  <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
133
 
134
+ The source data producer is the [AskNews API](https://docs.asknews.app), which uses open-web news articles to generate translations and summaries.
 
 
135
 
 
136
 
137
  #### Annotation process
138
 
139
  <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
140
 
141
+ The news translations and summaries are passed to Llama3 for entity extraction.
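+
+ The exact prompt and serving setup are not published; the following is purely illustrative of the shape of such an extraction step:
+
+ ```python
+ import json
+
+ # Hypothetical prompt; the real annotation prompt is not part of this card.
+ PROMPT = (
+     "Extract the named entities from the news summary below. "
+     "Return a JSON list of [start_word, stop_word, entity_type] spans.\n\n"
+     "Summary:\n{summary}"
+ )
+
+ def extract_entities(summary: str, llm) -> list:
+     """`llm` is any callable mapping a prompt string to a completion string."""
+     return json.loads(llm(PROMPT.format(summary=summary)))
+ ```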
142
 
143
  #### Who are the annotators?
144
 
145
  <!-- This section describes the people or systems who created the annotations. -->
146
 
147
+ [Emergent Methods](https://www.emergentmethods.ai/) built and oversaw the systems used to annotate the dataset.
148
 
149
  #### Personal and Sensitive Information
150
 
151
  <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
152
 
153
+ This dataset does not contain any information that is not publicly available on the open web.
154
 
155
  ## Bias, Risks, and Limitations
156
 
157
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
158
 
159
+ Although the goal of the dataset is to reduce bias and improve diversity, it is still biased toward Western languages and countries. This limitation stems from the capabilities of Llama2 for translation and summary generation. Further, any bias present in the Llama2 training data will be present in this dataset, since Llama2 was used to summarize the open-web articles. Likewise, any biases present in Llama3 will carry over into this dataset, since Llama3 was used to extract entities from the summaries.
160
+
161
+ ![topic distribution](figures/topics_fig_connected.png)
162
 
163
  ### Recommendations
164
 
165
  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
166
 
167
+ Carefully consider the dataset topic, country, and language distributions when implementing or training on this data.
168
 
169
  ## Citation [optional]
170
 
 
178
 
179
  [More Information Needed]
180
 
 
181
 
182
+ ## Dataset Card Authors
183
 
184
+ - Elin Törnquist, Emergent Methods (elin at emergentmethods.ai)
185
+ - Robert Caulk, Emergent Methods (rob at emergentmethods.ai)
186
 
 
figures/countries_distribution.png ADDED

Git LFS Details

  • SHA256: df265a6bcee74209fe23a443b80088ddafd8970c212acc785e013af45bbb7be4
  • Pointer size: 131 Bytes
  • Size of remote file: 398 kB
figures/entity-types_limited.png ADDED

Git LFS Details

  • SHA256: ddcc6f30e3367a995f08223a4bcc6b51544b68f99c2da31a2620d579f2518895
  • Pointer size: 131 Bytes
  • Size of remote file: 179 kB
figures/topics_fig_connected.png ADDED

Git LFS Details

  • SHA256: d3aecd2cdb60e2861ed82dfc69a33a2a73128e2d0b9dfe57707c3bcc6cb115dd
  • Pointer size: 131 Bytes
  • Size of remote file: 172 kB