---
license: apache-2.0
viewer: false
---
# Dataset Card for AskNews-NER-v0
<!-- Provide a quick summary of the dataset. -->
This dataset aims to improve the representation of underrepresented topics and entities in entity extractors, thereby improving extraction accuracy and generalization, especially on the latest news events (the dataset represents broad news coverage between February 20 and March 31, 2024). The dataset is a collection of news article summaries, translated and summarized with Llama2, with entities then extracted by Llama3. The distribution of data origin is as follows:
![countries distribution](figures/countries_distribution.png)
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [Emergent Methods](https://www.emergentmethods.ai/)
- **Funded by:** [Emergent Methods](https://www.emergentmethods.ai/)
- **Shared by:** [Emergent Methods](https://www.emergentmethods.ai/)
- **Language(s) (NLP):** English (en) (English texts and translations from Spanish (es), Portuguese (pt), German (de), Russian (ru), French (fr), Arabic (ar), Italian (it), Ukrainian (uk), Norwegian (no), Swedish (sv), Danish (da)).
- **License:** Apache 2.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [AskNews API](https://docs.asknews.app)
- **Paper:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
This dataset is intended to be used to fine-tune entity extractors for improved generalization, as well as higher accuracy on the latest news events. For example, we used this dataset to fine-tune `GLiNER-news`, a fine-tuned version of `GLiNER` geared toward improved entity extraction on news articles. The fine-tune improved performance on nearly all benchmarks (even beyond news).
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset is structured as follows:
```
5049-formatted-summaries_llama3-dataset_splits.json
- train
- test
- validation
```
Each split is a list of JSON samples, where each sample is structured as follows:
```json
{
"metadata": {
"source_country": <country str>,
"article_language": <language str>,
"article_pubDate": <pub_date datetime>,
"topic-classification": [
<topic classification str>
],
"articleId": <AskNews article uuid>
},
"tokenized_text": [
<word string>,
<word string>,
...
],
"ner": [
[
<Start word int>,
<Stop word int>,
<Entity str>
],
...
]
},
...
```
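The `ner` annotations reference word positions in `tokenized_text`. The following is a minimal sketch (with a hypothetical sample, not taken from the dataset) of how the surface text of each entity can be recovered, assuming the start/stop values are inclusive word indices as in the GLiNER training format:

```python
# Hypothetical sample mimicking the structure above.
sample = {
    "tokenized_text": ["Emergent", "Methods", "released", "a", "dataset",
                       "covering", "news", "from", "February", "2024", "."],
    "ner": [[0, 1, "Organization"], [8, 9, "Date"]],
}

def extract_spans(sample):
    """Return (surface_text, entity_type) pairs for every annotation,
    treating start/stop as inclusive word indices."""
    words = sample["tokenized_text"]
    return [(" ".join(words[start:stop + 1]), label)
            for start, stop, label in sample["ner"]]

print(extract_spans(sample))
# [('Emergent Methods', 'Organization'), ('February 2024', 'Date')]
```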
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
This dataset was created in an effort to improve the representation of underrepresented topics and entities in entity extractors, thereby improving entity extraction accuracy and generalization. The pre-processing pipeline for this dataset follows a strict set of steps:
[AskNews API](https://docs.asknews.app):
1. Enforce diversity in the collection of news articles across countries/languages/sources.
2. Translate and summarize the articles with Llama2.
3. Embed the summaries as vectors.
Present dataset curation:
4. Cluster the embeddings according to topic, for each 4-hour bucket of articles throughout February 20-March 30, 2024.
5. Pull samples from clusters, distributing evenly across country of origin.
6. Extract entities from each summary using Llama3.
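Step 5 above can be sketched as a round-robin interleave across countries. This is an illustrative implementation, not the exact curation code; the `sample_evenly` helper and the `source_country` key usage here are assumptions based on the sample schema:

```python
import itertools
from collections import defaultdict

def sample_evenly(articles, n):
    """Draw up to n articles, distributing evenly across country of origin.

    `articles` is a list of dicts carrying a "source_country" key, as in the
    sample metadata above.
    """
    by_country = defaultdict(list)
    for art in articles:
        by_country[art["source_country"]].append(art)
    # Interleave one article per country per round; countries that run out
    # of articles yield None placeholders, which are dropped.
    rounds = itertools.zip_longest(*by_country.values())
    interleaved = [a for rnd in rounds for a in rnd if a is not None]
    return interleaved[:n]
```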
The data was used to train `GLiNER-news`, which is a fine-tuned version of `GLiNER`, geared toward improved entity extraction on news articles. The fine-tune improved performance on nearly all benchmarks (even beyond news):
![topic distribution](figures/zeros-shot_20_table.png)
The entity types in the dataset are limited to the following:
![entity-types](figures/entity-types_limited.png)
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
The synthetic data is pulled from [AskNews API](https://docs.asknews.app), which generates news translations and summaries using Llama2/3 from open-web news content.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The [AskNews API](https://docs.asknews.app) uses open-web news articles to generate synthetic data (news article summaries) with Llama2/3. This dataset was pulled from the API by querying 4-hour buckets of articles between February 20 and March 31, 2024. These buckets were then processed with the following steps:
4. Cluster the embeddings according to topic, for 29 four-hour buckets of articles evenly dispersed throughout February 20-March 30, 2024.
5. Pull samples from clusters, distributing evenly across country of origin.
6. Extract entities from each summary using Llama3.
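The 4-hour bucketing described above can be sketched by flooring each article's publication time to the start of its window. This is an illustrative assumption about how the buckets were keyed, not the actual pipeline code:

```python
from datetime import datetime, timezone

def bucket_key(pub_date: datetime) -> datetime:
    """Floor a publication timestamp to the start of its 4-hour bucket."""
    return pub_date.replace(hour=(pub_date.hour // 4) * 4,
                            minute=0, second=0, microsecond=0)

ts = datetime(2024, 2, 20, 14, 37, tzinfo=timezone.utc)
print(bucket_key(ts))  # 2024-02-20 12:00:00+00:00
```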
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
The source data producer is the [AskNews API](https://docs.asknews.app), which uses open-web news articles to generate translations and summaries.
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
The news translations and summaries are passed to Llama3 for entity extraction.
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[Emergent Methods](https://www.emergentmethods.ai/) built and oversaw the systems used to annotate the dataset.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
This dataset does not contain any information that is not publicly available on the open-web.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Although the goal of the dataset is to reduce bias and improve diversity, it remains biased toward Western languages and countries. This limitation stems from the capabilities of Llama2 in translation and summary generation. Further, any biases present in the training data of Llama2 and Llama3 will also be present in this dataset, since Llama2 was used to summarize the open-web articles and Llama3 was used to extract entities from the summaries.
![topics distribution](figures/topics_fig_connected.png)
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Carefully consider the dataset topic, country, and language distributions when implementing or training on this data.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Dataset Card Authors
Elin Törnquist, Emergent Methods elin at emergentmethods.ai
Robert Caulk, Emergent Methods rob at emergentmethods.ai