AbrarHyder committed
Commit 756c9cf
1 Parent(s): b8a5a45
Added a description of the dataset
README.md CHANGED
- question-answering
language:
- de
---

## Original Dataset

The original dataset, [`deepset/germandpr`](https://huggingface.co/datasets/deepset/germandpr), contains:

- **9275 training examples**
- **1025 testing examples**

Each example is a question/answer pair, consisting of:

- One question
- One answer
- One positive context
- Three negative contexts

You can find the original dataset [here](https://huggingface.co/datasets/deepset/germandpr).

## Modifications

### Adding Easy Negative Examples

To enhance the dataset, an "easy negative example" was added to each row. The objective of this addition is to train the model to better distinguish between relevant and irrelevant contexts by also exposing it to clearly irrelevant information.
82 |
+
|
83 |
+
## Method
|
84 |
+
|
85 |
+
To identify easy negative examples, I used the L2 distance metric from [Faiss](https://github.com/facebookresearch/faiss) to find the most dissimilar vector relative to the positive context in each row. The context at that index was then selected as the easy negative example.