task_ids:
- open-domain-qa
- closed-domain-qa
viewer: true
---

# Dataset Card for germanDPR-beir

## Dataset Summary
This dataset has been used to evaluate a newly trained [bi-encoder model](https://huggingface.co/PM-AI/bi-encoder_msmarco_bert-base_german) with the [BEIR framework](https://github.com/beir-cellar/beir).
By default, the benchmark framework requires a particular dataset structure, which has been created locally and uploaded here.
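
The folder layout expected by BEIR (and produced by the download script in the "Dataset Usage" section below) looks like this:

```
germandpr-beir-dataset/
└── processed/           # or "original"
    └── train/           # or "test"
        ├── corpus.jsonl
        ├── queries.jsonl
        └── qrels/
            └── train.tsv
```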

Acknowledgement: The dataset was initially created as "[deepset/germanDPR](https://www.deepset.ai/germanquad)" by Timo Möller, Julian Risch, Malte Pietsch, Julian Gutsch, Tom Hersperger, Luise Köhler, Iuliia Mozhina, and Justus Peter, during work done at deepset.ai.

### Dataset Creation
First, the original dataset [deepset/germanDPR](https://huggingface.co/datasets/deepset/germandpr) was converted into three files for BEIR compatibility (example entries are sketched after this list):
- The first file, `queries.jsonl`, contains an ID and a question in each line.
- The second file, `corpus.jsonl`, contains an ID, a title, a text and some metadata in each line.
- The third file lives in the `qrels` folder. It connects every question from `queries.jsonl` (via `q_id`) with a relevant text/answer from `corpus.jsonl` (via `c_id`).
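
For illustration, entries in the three files look roughly like this (the IDs and values are invented examples; only the structure follows the files described above):

```
# queries.jsonl (one JSON object per line)
{"_id": "q1", "text": "Wie hoch ist die Verschuldung eines durchschnittlichen Haushalts?"}

# corpus.jsonl (one JSON object per line)
{"_id": "c42", "title": "Wirtschaft_der_Vereinigten_Staaten", "text": "== Verschuldung ==\nEin durchschnittlicher Haushalt (...)"}

# qrels/train.tsv (tab-separated: query ID, corpus ID, relevance score)
q1	c42	1
```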

This conversion has been done separately for the `train` and `test` splits of the original germanDPR dataset.
Approaching the dataset creation like this is necessary because queries AND corpus both differ between the splits in deepset's germanDPR dataset, and it could be confusing to change this specific split.
In conclusion, queries and corpus differ between the train and test split, not only the qrels data!
Note: If you want one big corpus, use `datasets.concatenate_datasets()`, as sketched below.
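
A minimal sketch of merging the train and test corpora with Hugging Face `datasets` (the configuration names are the same ones used by the download script further below):

```python
import datasets

# load the train and test corpora of the "processed" variant separately
corpus_train = datasets.load_dataset("PM-AI/germandpr-beir", "processed-corpus", split="train")
corpus_test = datasets.load_dataset("PM-AI/germandpr-beir", "processed-corpus", split="test")

# merge both splits into one big corpus
full_corpus = datasets.concatenate_datasets([corpus_train, corpus_test])
print(len(full_corpus))
```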

In the original dataset, each question comes with one passage containing the answer and three "wrong" passages.
During the creation of this customized dataset, all four passages are added to the corpus, but only if they are not already present; in other words, the corpus has been deduplicated.
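
Conceptually, the deduplication works like the following sketch (this is not the actual code from `create_dataset.py`; the field names are assumptions):

```python
seen_texts = set()
corpus = []

def add_passage(passage_id: str, title: str, text: str):
    # only add a passage whose text has not been seen before
    if text in seen_texts:
        return
    seen_texts.add(text)
    corpus.append({"_id": passage_id, "title": title, "text": text})
```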

It should be noted that BEIR combines `title` + `text` from `corpus.jsonl` into a new string, which may produce odd results:
The original germanDPR dataset does not always contain "classical" (i.e. short) titles; sometimes a title consists of whole sentences which are also present in the `text` field.
This results in very long passages as well as duplications.
In addition, both title and text contain specially formatted content.
For example, the words used in titles are often connected with underscores:

> `Apple_Magic_Mouse`

And texts begin with special characters to distinguish headings and subheadings:

> `Wirtschaft_der_Vereinigten_Staaten\n\n== Verschuldung ==\nEin durchschnittlicher Haushalt (...)`

Line breaks are also frequently found, as you can see.
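
The following sketch shows why the concatenation can duplicate content (joining title and text with a space mirrors what BEIR does conceptually; the exact separator is an assumption):

```python
# corpus entry taken from the example above
title = "Wirtschaft_der_Vereinigten_Staaten"
text = "Wirtschaft_der_Vereinigten_Staaten\n\n== Verschuldung ==\nEin durchschnittlicher Haushalt (...)"

# BEIR-style concatenation of title and text
passage = (title + " " + text).strip()
print(passage)  # the title-like string now appears twice in the passage
```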

Of course, whether these things become a problem depends on the application.
However, it was decided to release two variants of the original dataset:
- The `original` variant leaves the titles and texts as they are. There are no modifications.
- The `processed` variant removes the title completely and simplifies the texts by removing the special formatting.

The creation of both variants can be viewed in [create_dataset.py](https://huggingface.co/datasets/PM-AI/germandpr-beir/resolve/main/create_dataset.py).
In particular, the following parameters were used:
- `original`: `SPLIT=test/train, TEXT_PREPROCESSING=False, KEEP_TITLE=True`
- `processed`: `SPLIT=test/train, TEXT_PREPROCESSING=True, KEEP_TITLE=False`
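
To give a rough idea of what the simplification does, here is a hedged sketch (not the actual code from `create_dataset.py`; the regular expressions are assumptions derived from the formatting examples above):

```python
import re

def simplify_text(text: str) -> str:
    # drop heading markers such as "== Verschuldung =="
    text = re.sub(r"=+\s*[^=\n]+\s*=+", " ", text)
    # replace the underscores connecting words in title-like strings
    text = text.replace("_", " ")
    # collapse line breaks and repeated whitespace into single spaces
    return re.sub(r"\s+", " ", text).strip()

print(simplify_text("Wirtschaft_der_Vereinigten_Staaten\n\n== Verschuldung ==\nEin durchschnittlicher Haushalt"))
# -> "Wirtschaft der Vereinigten Staaten Ein durchschnittlicher Haushalt"
```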

One final thing to mention: the IDs for queries and the corpus must not match!
During the evaluation using BEIR, it was found that if a query ID equals a corpus ID, the result for that entry is completely removed.
This means some of the results are missing, and a correct calculation of the overall result is no longer possible.
Have a look at [BEIR's evaluation.py](https://github.com/beir-cellar/beir/blob/c3334fd5b336dba03c5e3e605a82fcfb1bdf667d/beir/retrieval/evaluation.py#L49) for further understanding.
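
A quick sanity check along these lines (a minimal sketch; it assumes the `queries.jsonl` and `corpus.jsonl` files produced by the download script below):

```python
import json

def load_ids(path):
    # collect the "_id" field from every line of a BEIR jsonl file
    with open(path, encoding="utf-8") as f:
        return {json.loads(line)["_id"] for line in f}

query_ids = load_ids("queries.jsonl")
corpus_ids = load_ids("corpus.jsonl")
assert query_ids.isdisjoint(corpus_ids), "query and corpus IDs overlap!"
```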

### Dataset Usage
As mentioned earlier, this dataset is intended to be used with the BEIR benchmark framework.
The file and folder structure required by BEIR can only be reproduced to a limited extent with Huggingface Datasets; otherwise it would be necessary to define multiple dataset repositories at once.
To make it easier, the [dl_dataset.py](https://huggingface.co/datasets/PM-AI/germandpr-beir/tree/main/dl_dataset.py) script is provided to download the dataset and to ensure the correct file and folder structure.

```python
# dl_dataset.py
import json
import os

import datasets
from beir.datasets.data_loader import GenericDataLoader

# ----------------------------------------
# This script downloads the BEIR-compatible germanDPR dataset from "Huggingface Datasets" to your local machine.
# Please see the dataset's description/readme to learn more about how the dataset was created.
# If you want to use deepset/germandpr without any changes, use TYPE "original".
# If you want to reproduce PM-AI/bi-encoder_msmarco_bert-base_german, use TYPE "processed".
# ----------------------------------------


TYPE = "processed"  # or "original"
SPLIT = "train"  # or "test"
DOWNLOAD_DIR = "germandpr-beir-dataset"
DOWNLOAD_DIR = os.path.join(DOWNLOAD_DIR, f'{TYPE}/{SPLIT}')
DOWNLOAD_QREL_DIR = os.path.join(DOWNLOAD_DIR, "qrels")

os.makedirs(DOWNLOAD_QREL_DIR, exist_ok=True)

# for BEIR compatibility we need queries, corpus and qrels all together
# ensure to always load these three based on the same type (all "processed" or all "original")
for subset_name in ["queries", "corpus", "qrels"]:
    subset = datasets.load_dataset("PM-AI/germandpr-beir", f'{TYPE}-{subset_name}', split=SPLIT)
    if subset_name == "qrels":
        # qrels become a tab-separated file, as expected by BEIR's GenericDataLoader
        out_path = os.path.join(DOWNLOAD_QREL_DIR, f'{SPLIT}.tsv')
        subset.to_csv(out_path, sep="\t", index=False)
    else:
        # queries and corpus become jsonl files with one JSON object per line
        if subset_name == "queries":
            _row_to_json = lambda row: json.dumps({"_id": row["_id"], "text": row["text"]}, ensure_ascii=False)
        else:
            _row_to_json = lambda row: json.dumps({"_id": row["_id"], "title": row["title"], "text": row["text"]}, ensure_ascii=False)

        with open(os.path.join(DOWNLOAD_DIR, f'{subset_name}.jsonl'), "w", encoding="utf-8") as out_file:
            for row in subset:
                out_file.write(_row_to_json(row) + "\n")


# GenericDataLoader is part of BEIR. If everything is working correctly we can now load the dataset
corpus, queries, qrels = GenericDataLoader(data_folder=DOWNLOAD_DIR).load(SPLIT)
print(f'{SPLIT} corpus size: {len(corpus)}\n'
      f'{SPLIT} queries size: {len(queries)}\n'
      f'{SPLIT} qrels: {len(qrels)}\n')

print("--------------------------------------------------------------------------------------------------------------\n"
      "Now you can use the downloaded files in BEIR framework\n"
      "Example: https://github.com/beir-cellar/beir/blob/v1.0.1/examples/retrieval/evaluation/dense/evaluate_sbert.py\n"
      "--------------------------------------------------------------------------------------------------------------")
```

Alternatively, the datasets can be downloaded directly:
- https://huggingface.co/datasets/PM-AI/germandpr-beir/resolve/main/data/original.tar.gz
- https://huggingface.co/datasets/PM-AI/germandpr-beir/resolve/main/data/processed.tar.gz
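
For example, one of the archives can be fetched and unpacked with the Python standard library (a small sketch; the local file name is arbitrary):

```python
import tarfile
import urllib.request

url = "https://huggingface.co/datasets/PM-AI/germandpr-beir/resolve/main/data/processed.tar.gz"
archive, _ = urllib.request.urlretrieve(url, "processed.tar.gz")

# unpack the archive into the current directory
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(".")
```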

Now you can use the downloaded files in the BEIR framework:
- For example: [evaluate_sbert.py](https://github.com/beir-cellar/beir/blob/v1.0.1/examples/retrieval/evaluation/dense/evaluate_sbert.py)
- Just set the variable `dataset` to `"germandpr-beir-dataset/processed/test"` or `"germandpr-beir-dataset/original/test"`.
- The same goes for `"train"`.

### Dataset Sizes

| Variant   | Split | `corpus` size | `queries` size | `qrels` size |
|-----------|-------|---------------|----------------|--------------|
| original  | train | 24009         | 9275           | 9275         |
| original  | test  | 2876          | 1025           | 1025         |
| processed | train | 23993         | 9275           | 9275         |
| processed | test  | 2875          | 1025           | 1025         |

### Languages

This dataset only supports German (language code `de`).

### Acknowledgment

The dataset was initially created as "[deepset/germanDPR](https://www.deepset.ai/germanquad)" by Timo Möller, Julian Risch, Malte Pietsch, Julian Gutsch, Tom Hersperger, Luise Köhler, Iuliia Mozhina, and Justus Peter, during work done at [deepset.ai](https://www.deepset.ai/).

This work is a collaboration between [Technical University of Applied Sciences Wildau (TH Wildau)](https://en.th-wildau.de/) and [sense.ai.tion GmbH](https://senseaition.com/).
You can contact us via:
* [Philipp Müller (M.Eng.)](https://www.linkedin.com/in/herrphilipps); Author
* [Prof. Dr. Janett Mohnke](mailto:icampus@th-wildau.de); TH Wildau
* [Dr. Matthias Boldt, Jörg Oehmichen](mailto:info@senseaition.com); sense.AI.tion GmbH

This work was funded by the European Regional Development Fund (EFRE) and the State of Brandenburg. Project: "ProFIT: Natürlichsprachliche Dialogassistenten in der Pflege".

<div style="display:flex">
  <div style="padding-left:20px;">
    <a href="https://efre.brandenburg.de/efre/de/"><img src="https://huggingface.co/datasets/PM-AI/germandpr-beir/resolve/main/res/EFRE-Logo_rechts_oweb_en_rgb.jpeg" alt="Logo of European Regional Development Fund (EFRE)" width="200"/></a>
  </div>
  <div style="padding-left:20px;">
    <a href="https://www.senseaition.com"><img src="https://senseaition.com/wp-content/uploads/thegem-logos/logo_c847aaa8f42141c4055d4a8665eb208d_3x.png" alt="Logo of senseaition GmbH" width="200"/></a>
  </div>
  <div style="padding-left:20px;">
    <a href="https://www.th-wildau.de"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f6/TH_Wildau_Logo.png/640px-TH_Wildau_Logo.png" alt="Logo of TH Wildau" width="180"/></a>
  </div>
</div>