Languages: French
Size: n<1K

boudinfl committed
Commit 45d173d
1 Parent(s): 584174d

first blood

Files changed (4):
  1. .gitignore +4 -0
  2. README.md +42 -0
  3. test.jsonl +0 -0
  4. wikinews-fr-100.py +130 -0
.gitignore ADDED
@@ -0,0 +1,4 @@
+
+ **.DS_Store
+ .idea
+ src/
README.md ADDED
@@ -0,0 +1,42 @@
+ # Wikinews-fr-100 Benchmark Dataset for Keyphrase Generation
+
+ ## About
+
+ Wikinews-fr-100 is a dataset for benchmarking keyphrase extraction and generation models.
+ It is composed of 100 French news articles collected from [wikinews](https://fr.wikinews.org/wiki/Accueil).
+ Keyphrases were annotated by readers (students in computer science) in an uncontrolled setting (that is, not limited to thesaurus entries).
+ Details about the dataset can be found in the original paper [(Bougouin et al., 2013)][bougouin-2013].
+
+ Reference (reader-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme proposed in [(Boudin and Gallina, 2021)][boudin-2021].
+
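+ A rough sketch of this categorization, assuming keyphrases and documents are given as lists of lowercased, stemmed tokens (the stemming step is described below; the authoritative rules are in `prmu.py` and the paper):
+
+ ```python
+ def contains(phrase, text):
+     """True if the token list `phrase` occurs contiguously in `text`."""
+     return any(text[i:i + len(phrase)] == phrase for i in range(len(text) - len(phrase) + 1))
+
+ def prmu_category(phrase, text):
+     """Classify a keyphrase (list of stems) against a document (list of stems)."""
+     if contains(phrase, text):
+         return "P"  # Present: contiguous, in-order match
+     matched = sum(1 for word in phrase if word in text)
+     if matched == len(phrase):
+         return "R"  # Reordered: all words occur, but not contiguously
+     if matched > 0:
+         return "M"  # Mixed: only some of the words occur
+     return "U"  # Unseen: none of the words occur
+ ```
+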
+ Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
+ Stemming (the Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text.
+ Details about the process can be found in `prmu.py`.
+
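+ A minimal sketch of this pre-processing, assuming the standard spaCy approach of overriding the tokenizer's infix patterns to keep intra-word hyphens (`prmu.py` remains the reference implementation):
+
+ ```python
+ import spacy
+ from spacy.util import compile_infix_regex
+ from nltk.stem.snowball import SnowballStemmer
+
+ nlp = spacy.load("fr_core_news_sm")
+
+ # Drop the default infix rule that splits on hyphen characters so that,
+ # e.g., "graph-based" is kept as a single token.
+ infixes = [pattern for pattern in nlp.Defaults.infixes if "-|–|—" not in pattern]
+ nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer
+
+ stemmer = SnowballStemmer("french")
+ tokens = [token.text for token in nlp("une approche graph-based pour l'extraction")]
+ stems = [stemmer.stem(token.lower()) for token in tokens]
+ ```
+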
+ ## Content and statistics
+
+ The dataset contains a single (test) split:
+
+ | Split | # documents | # words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
+ | :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
+ | Test | 100 | | | | | | |
+
+ The following data fields are available:
+
+ - **id**: unique identifier of the document.
+ - **title**: title of the document.
+ - **abstract**: abstract of the document.
+ - **keyphrases**: list of reference keyphrases.
+ - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for the reference keyphrases.
+
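+ The empty percentage cells in the table above can be recomputed from the `prmu` field; a minimal sketch, assuming `test.jsonl` sits in the current directory and that the field uses the single-letter codes P/R/M/U:
+
+ ```python
+ import json
+ from collections import Counter
+
+ counts = Counter()
+ with open("test.jsonl", encoding="utf-8") as f:
+     for line in f:
+         counts.update(json.loads(line)["prmu"])
+
+ total = sum(counts.values())
+ for category in ("P", "R", "M", "U"):
+     print(category, round(100 * counts[category] / total, 2))
+ ```
+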
+ ## References
+
+ - (Bougouin et al., 2013) Adrien Bougouin, Florian Boudin, and Béatrice Daille. 2013.
+   [TopicRank: Graph-Based Topic Ranking for Keyphrase Extraction](https://aclanthology.org/I13-1062/).
+   In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 543–551, Nagoya, Japan. Asian Federation of Natural Language Processing.
+ - (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
+   [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
+   In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
+
+ [bougouin-2013]: https://aclanthology.org/I13-1062/
+ [boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
test.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
wikinews-fr-100.py ADDED
@@ -0,0 +1,130 @@
+ """Inspec benchmark dataset for keyphrase extraction an generation."""
2
+
3
+ import csv
4
+ import json
5
+ import os
6
+ import datasets
7
+
8
+ # TODO: Add BibTeX citation
9
+ # Find for instance the citation on arxiv or on the dataset repo/website
10
+ _CITATION = """\
11
+ @inproceedings{bougouin-etal-2013-topicrank,
12
+ title = "{T}opic{R}ank: Graph-Based Topic Ranking for Keyphrase Extraction",
13
+ author = "Bougouin, Adrien and
14
+ Boudin, Florian and
15
+ Daille, B{\'e}atrice",
16
+ booktitle = "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
17
+ month = oct,
18
+ year = "2013",
19
+ address = "Nagoya, Japan",
20
+ publisher = "Asian Federation of Natural Language Processing",
21
+ url = "https://aclanthology.org/I13-1062",
22
+ pages = "543--551",
23
+ }
24
+ """
25
+
+ _DESCRIPTION = """\
+ Wikinews-fr-100 benchmark dataset for keyphrase extraction and generation.
+ """
+
+ _HOMEPAGE = "https://aclanthology.org/I13-1062.pdf"
+
+ _LICENSE = "Apache 2.0 License"
+
+ # The HuggingFace Datasets library doesn't host the data; it only points to the original files.
+ # This can be an arbitrary nested dict/list of URLs (see the `_split_generators` method below).
+ _URLS = {
+     "test": "test.jsonl"
+ }
+
+ class WikinewsFr100(datasets.GeneratorBasedBuilder):
+     """Wikinews-fr-100 benchmark dataset for keyphrase extraction and generation."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     # A single configuration exposes the raw data; further sub-sets could be
+     # added here as extra BuilderConfig entries if needed.
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="raw", version=VERSION, description="This configuration covers the raw data."),
+     ]
+
+     DEFAULT_CONFIG_NAME = "raw"  # A default configuration is optional; use one if it makes sense.
+
+     def _info(self):
+         # Specifies the datasets.DatasetInfo object, which holds the description,
+         # feature types and metadata for the dataset.
+         if self.config.name == "raw":  # name of the configuration selected in BUILDER_CONFIGS above
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("int64"),
+                     "title": datasets.Value("string"),
+                     "abstract": datasets.Value("string"),
+                     "keyphrases": datasets.features.Sequence(datasets.Value("string")),
+                     "prmu": datasets.features.Sequence(datasets.Value("string")),
+                 }
+             )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,
+             # Homepage of the dataset for documentation.
+             homepage=_HOMEPAGE,
+             # License for the dataset.
+             license=_LICENSE,
+             # Citation for the dataset.
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # Downloads/extracts the data and defines the splits. dl_manager is a
+         # datasets.DownloadManager: it accepts any nested list/dict of URLs and
+         # returns the same structure with each URL replaced by a local file path.
+         data_dir = dl_manager.download_and_extract(_URLS)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples.
+                 gen_kwargs={
+                     "filepath": data_dir["test"],
+                     "split": "test"
+                 },
+             ),
+         ]
+
+     # Method parameters are unpacked from `gen_kwargs` as given in `_split_generators`.
+     def _generate_examples(self, filepath, split):
+         # Reads the JSON-lines file and yields (key, example) tuples; the key is
+         # kept for legacy (tfds) reasons and must be unique for each example.
+         with open(filepath, encoding="utf-8") as f:
+             for key, row in enumerate(f):
+                 data = json.loads(row)
+                 yield key, {
+                     "id": data["id"],
+                     "title": data["title"],
+                     "abstract": data["abstract"],
+                     "keyphrases": data["keyphrases"],
+                     "prmu": data["prmu"],
+                 }
+
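
A quick way to smoke-test the loading script (a sketch, assuming it is run from the repository root so that the relative `test.jsonl` URL resolves locally):

```python
from datasets import load_dataset

# Load the test split through the local loading script.
dataset = load_dataset("wikinews-fr-100.py", name="raw")
print(dataset["test"][0]["id"], dataset["test"][0]["title"])
```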