---
license: mit
task_categories:
- text-classification
- zero-shot-classification
- feature-extraction
- text-generation
language:
- en
pretty_name: 'CHEW: A Dataset of CHanging Events in Wikipedia'
size_categories:
- 1K<n<10K
---
# Dataset Card for CHEW (Lexical Split)
This is the lexical/no-overlapping split of the CHEW dataset (CHEW: A Dataset of CHanging Events in Wikipedia).
## Dataset Details
### Dataset Description
This dataset is the lexical/no-overlapping split of the CHEW dataset, where CHEW stands for CHanging Events in Wikipedia. Each example contains a Wikipedia article title, the article text in two timestamped versions, and a binary label indicating Change (1) or No change (0). Change here means that the second version of the text contains an information change with respect to the first version. The dataset was created for detecting information change between two versions of a Wikipedia article.
The fields in the dataset are:
- Title: Wikipedia article title
- Text: the two versions of the text, separated by a comma (`,`). The format of this column is (see the loading sketch below):
  `<t> TITLE </t> <y> TIMESTAMP1 </y> TEXT1 , <t> TITLE </t> <y> TIMESTAMP2 </y> TEXT2`
- Label: binary label indicating
  - 0: no change / syntactic changes only
  - 1: information change
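
Below is a minimal sketch of how one might load this split and recover the two timestamped versions from the Text column. The repository id `hsuvaskakoty/chew_lexical`, the split name `train`, and the exact column casing are assumptions based on this card; the separator handling follows the format described above.

```python
from datasets import load_dataset

# Assumption: the dataset is hosted under this repository id with a "train" split.
dataset = load_dataset("hsuvaskakoty/chew_lexical", split="train")

example = dataset[0]

# The Text column holds both timestamped versions joined by the " , " separator
# shown in the format string above; split once to separate them.
version_1, version_2 = example["Text"].split(" , ", 1)

print(example["Title"])   # Wikipedia article title
print(version_1.strip())  # <t> TITLE </t> <y> TIMESTAMP1 </y> TEXT1
print(version_2.strip())  # <t> TITLE </t> <y> TIMESTAMP2 </y> TEXT2
print(example["Label"])   # 1 = information change, 0 = no change / syntactic changes
```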
- **Curated by:** Hsuvas Borkakoty
- **Language(s) (NLP):** English
- **License:** MIT
### Dataset Sources
- **Repository:** https://github.com/hsuvas/temporal_wikipedia
- **Paper:** CHEW: A Dataset of CHanging Events in Wikipedia (Borkakoty and Espinosa-Anke, 2024)