asahi417 committed on
Commit
a8807ea
1 Parent(s): d22f92b
README.md CHANGED
@@ -0,0 +1,121 @@
+ ---
+ license: cc-by-4.0
+ pretty_name: SQuADShifts for question generation
+ languages: en
+ multilinguality: monolingual
+ size_categories: 10K<n<100K
+ source_datasets: squadshifts
+ task_categories: question-generation
+ task_ids: question-generation
+ ---
+
+ # Dataset Card for "qg_squadshifts"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+ - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
+
+ ### Dataset Summary
+ A modified version of [SQuADShifts](https://modestyachts.github.io/squadshifts-website/index.html) for the question generation (QG) task.
+
+ ### Supported Tasks and Leaderboards
+ * `question-generation`: the dataset can be used to train a model for question generation. Success on this task is typically measured by a high BLEU-4, METEOR, or ROUGE-L score.
+
+ ### Languages
+ English (en)
+
+ ## Dataset Structure
+ ### Data Instances
+ #### plain_text
+ An example of 'test' looks as follows.
+ ```
+ {
+ "question": "How is book?",
+ "paragraph": "I am giving "Gone Girl" 3 stars, but only begrudgingly. In my mind, any book that takes me 3 months and 20 different tries to read is not worth 3 stars, especially a book written by an author I already respect. And I am not kidding, for me the first half of "Gone Girl" was a PURE TORTURE to read.Amy Dunn disappears on the day of her 5th wedding anniversary. All gradually uncovered evidence suggests that her husband, Nick, is somehow involved. Did he kill her? Was she kidnapped? What happened to Amy? One thing is clear, Nick and Amy's marriage wasn't as perfect as everybody thought.The first part of the novel is all about the investigation into Amy's disappearance, slow unraveling of Nick's dirty secrets, reminiscing about the troubled history of Nick and Amy's marriage as told in Amy's hidden diary. I strained and strained my brain trying to understand why this chunk of Gone Girl had no appeal to me whatsoever. The only answer I have is this: I am really not into reading about rich white people's problems. You want to whine to me about your dwindling trust fund? Losing your cushy New York job? Moving south and "only" renting a mansion there? Being unhappy because you have too much free time on your hands and you are used to only work as a hobby? You want to make fun of your lowly, un-posh neighbors and their casseroles? Well, I am not interested. I'd rather read about someone not necessarily likable, but at least worthy of my empathy, not waste my time on self-centered, spoiled, pathetic people who don't know what real problems are. Granted, characters in Flynn's previous novels ("Sharp Objects" and "Dark Places") are pretty pathetic and and at times revolting too, but I always felt some strange empathy towards them, not annoyance and boredom, like I felt reading about Amy and Nick's marriage voes.But then second part, with its wicked twist, changed everything. The story became much more exciting, dangerous and deranged. 
The main characters revealed sides to them that were quite shocking and VERY entertaining. I thought the Gillian Flynn I knew before finally unleashed her talent for writing utterly unlikable and crafty women. THEN I got invested in the story, THEN I cared.Was it too little too late though? I think it was. Something needed to be done to make "Gone Girl" a better read. Make it shorter? Cut out first part completely? I don't know. But because of my uneven experience with this novel I won't be able to recommend "Gone Girl" as readily as I did Flynn's earlier novels, even though I think this horror marriage story (it's not a true mystery, IMO) has some brilliantly written psycho goodness in it and an absolutely messed up ending that many loathed but I LOVED. I wish it didn't take so much time and patience to get to all of that...",
+ "answer": "any book that takes me 3 months and 20 different tries to read is not worth 3 stars",
+ "sentence": "In my mind, any book that takes me 3 months and 20 different tries to read is not worth 3 stars , especially a book written by an author I already respect.",
+ "paragraph_sentence": "I am giving "Gone Girl" 3 stars, but only begrudgingly. <hl> In my mind, any book that takes me 3 months and 20 different tries to read is not worth 3 stars , especially a book written by an author I already respect. <hl> And I am not kidding, for me the first half of "Gone Girl" was a PURE TORTURE to read. Amy Dunn disappears on the day of her 5th wedding anniversary. All gradually uncovered evidence suggests that her husband, Nick, is somehow involved. Did he kill her? Was she kidnapped? What happened to Amy? One thing is clear, Nick and Amy's marriage wasn't as perfect as everybody thought. The first part of the novel is all about the investigation into Amy's disappearance, slow unraveling of Nick's dirty secrets, reminiscing about the troubled history of Nick and Amy's marriage as told in Amy's hidden diary. I strained and strained my brain trying to understand why this chunk of Gone Girl had no appeal to me whatsoever. The only answer I have is this: I am really not into reading about rich white people's problems. You want to whine to me about your dwindling trust fund? Losing your cushy New York job? Moving south and "only" renting a mansion there? Being unhappy because you have too much free time on your hands and you are used to only work as a hobby? You want to make fun of your lowly, un-posh neighbors and their casseroles? Well, I am not interested. I'd rather read about someone not necessarily likable, but at least worthy of my empathy, not waste my time on self-centered, spoiled, pathetic people who don't know what real problems are. Granted, characters in Flynn's previous novels ("Sharp Objects" and "Dark Places") are pretty pathetic and and at times revolting too, but I always felt some strange empathy towards them, not annoyance and boredom, like I felt reading about Amy and Nick's marriage voes. But then second part, with its wicked twist, changed everything. The story became much more exciting, dangerous and deranged. 
The main characters revealed sides to them that were quite shocking and VERY entertaining. I thought the Gillian Flynn I knew before finally unleashed her talent for writing utterly unlikable and crafty women. THEN I got invested in the story, THEN I cared. Was it too little too late though? I think it was. Something needed to be done to make "Gone Girl" a better read. Make it shorter? Cut out first part completely? I don't know. But because of my uneven experience with this novel I won't be able to recommend "Gone Girl" as readily as I did Flynn's earlier novels, even though I think this horror marriage story (it's not a true mystery, IMO) has some brilliantly written psycho goodness in it and an absolutely messed up ending that many loathed but I LOVED. I wish it didn't take so much time and patience to get to all of that...",
+ "paragraph_answer": "I am giving "Gone Girl" 3 stars, but only begrudgingly. In my mind, <hl> any book that takes me 3 months and 20 different tries to read is not worth 3 stars <hl>, especially a book written by an author I already respect. And I am not kidding, for me the first half of "Gone Girl" was a PURE TORTURE to read.Amy Dunn disappears on the day of her 5th wedding anniversary. All gradually uncovered evidence suggests that her husband, Nick, is somehow involved. Did he kill her? Was she kidnapped? What happened to Amy? One thing is clear, Nick and Amy's marriage wasn't as perfect as everybody thought.The first part of the novel is all about the investigation into Amy's disappearance, slow unraveling of Nick's dirty secrets, reminiscing about the troubled history of Nick and Amy's marriage as told in Amy's hidden diary. I strained and strained my brain trying to understand why this chunk of Gone Girl had no appeal to me whatsoever. The only answer I have is this: I am really not into reading about rich white people's problems. You want to whine to me about your dwindling trust fund? Losing your cushy New York job? Moving south and "only" renting a mansion there? Being unhappy because you have too much free time on your hands and you are used to only work as a hobby? You want to make fun of your lowly, un-posh neighbors and their casseroles? Well, I am not interested. I'd rather read about someone not necessarily likable, but at least worthy of my empathy, not waste my time on self-centered, spoiled, pathetic people who don't know what real problems are. Granted, characters in Flynn's previous novels ("Sharp Objects" and "Dark Places") are pretty pathetic and and at times revolting too, but I always felt some strange empathy towards them, not annoyance and boredom, like I felt reading about Amy and Nick's marriage voes.But then second part, with its wicked twist, changed everything. The story became much more exciting, dangerous and deranged. 
The main characters revealed sides to them that were quite shocking and VERY entertaining. I thought the Gillian Flynn I knew before finally unleashed her talent for writing utterly unlikable and crafty women. THEN I got invested in the story, THEN I cared.Was it too little too late though? I think it was. Something needed to be done to make "Gone Girl" a better read. Make it shorter? Cut out first part completely? I don't know. But because of my uneven experience with this novel I won't be able to recommend "Gone Girl" as readily as I did Flynn's earlier novels, even though I think this horror marriage story (it's not a true mystery, IMO) has some brilliantly written psycho goodness in it and an absolutely messed up ending that many loathed but I LOVED. I wish it didn't take so much time and patience to get to all of that...",
+ "sentence_answer": "In my mind, <hl> any book that takes me 3 months and 20 different tries to read is not worth 3 stars <hl> , especially a book written by an author I already respect.",
+ }
+ ```
+ ### Data Fields
+ The data fields are the same among all splits.
+ #### plain_text
+ - `question`: a `string` feature.
+ - `paragraph`: a `string` feature.
+ - `answer`: a `string` feature.
+ - `sentence`: a `string` feature.
+ - `paragraph_answer`: a `string` feature, the same as `paragraph` but with the answer highlighted by the special token `<hl>`.
+ - `paragraph_sentence`: a `string` feature, the same as `paragraph` but with the sentence containing the answer highlighted by the special token `<hl>`.
+ - `sentence_answer`: a `string` feature, the same as `sentence` but with the answer highlighted by the special token `<hl>`.
+
+ Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model, with different conditioning information: `paragraph_answer` and `sentence_answer` are for answer-aware question generation, while `paragraph_sentence` is for sentence-aware question generation.
+
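The `<hl>` highlighting described above can be reproduced with a small helper; this sketch is hypothetical (not part of the repository's code) and simply wraps the first occurrence of a span in highlight tokens:

```python
def highlight(text: str, span: str, token: str = '<hl>') -> str:
    """Wrap the first occurrence of `span` in `text` with highlight tokens."""
    start = text.index(span)
    end = start + len(span)
    return f'{text[:start]}{token} {span} {token}{text[end:]}'

sentence = "In my mind, any book that takes me 3 months and 20 different tries to read is not worth 3 stars."
answer = "not worth 3 stars"
sentence_answer = highlight(sentence, answer)
```

Applying the same helper with the full paragraph and the answer (or the answer-bearing sentence) produces the `paragraph_answer` and `paragraph_sentence` variants.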
+ ### Data Splits
+
+ | name          | test |
+ |---------------|-----:|
+ | default (all) | 4437 |
+ | books         |  636 |
+ | movies        |  723 |
+ | grocery       |  686 |
+ | restaurants   |  822 |
+
+
+ ## Dataset Creation
+ ### Curation Rationale
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Source Data
+ #### Initial Data Collection and Normalization
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ #### Who are the source language producers?
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Annotations
+ #### Annotation process
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ #### Who are the annotators?
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Personal and Sensitive Information
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ## Considerations for Using the Data
+ ### Social Impact of Dataset
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Discussion of Biases
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Other Known Limitations
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ## Additional Information
+ ### Dataset Curators
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Licensing Information
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Citation Information
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
data/processed/amazon.test00.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/amazon.test01.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/amazon.test02.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/amazon.test03.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/amazon.test04.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/amazon.test05.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/processed/amazon.test06.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/processed/new_wiki.test00.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/new_wiki.test01.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/new_wiki.test02.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/new_wiki.test03.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/new_wiki.test04.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/processed/new_wiki.test05.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/processed/nyt.test00.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/nyt.test01.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/nyt.test02.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/nyt.test03.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/nyt.test04.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/nyt.test05.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/processed/nyt.test06.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/processed/reddit.test00.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/reddit.test01.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/reddit.test02.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/reddit.test03.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/reddit.test04.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
data/processed/reddit.test05.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
data/processed/reddit.test06.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
generate_reference_files.py CHANGED
@@ -0,0 +1,13 @@
+ import os
+ from datasets import load_dataset
+
+ os.makedirs('./reference_files', exist_ok=True)
+
+
+ for split in ['test']:
+     for domain in ["default", 'new_wiki', 'nyt', 'reddit', 'amazon']:
+         dataset = load_dataset('asahi417/qg_squadshifts', domain, split=split)
+         for data in ['question', 'answer', 'sentence', 'paragraph']:
+             with open('./reference_files/{}-{}.{}txt'.format(data, split, "" if domain == 'default' else f"{domain}."), 'w') as f:
+                 f.write('\n'.join(dataset[data]))
+
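The filename template in this script is easy to misread because the domain segment already carries its trailing dot. A standalone check of the same format expression (`reference_filename` is a hypothetical wrapper, not part of the script):

```python
def reference_filename(data: str, split: str, domain: str) -> str:
    # Same template as the script: the domain segment is empty for the
    # aggregate 'default' config, and '<domain>.' otherwise.
    return './reference_files/{}-{}.{}txt'.format(
        data, split, "" if domain == 'default' else f"{domain}.")

print(reference_filename('question', 'test', 'default'))  # ./reference_files/question-test.txt
print(reference_filename('question', 'test', 'amazon'))   # ./reference_files/question-test.amazon.txt
```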
process.py CHANGED
@@ -1,9 +1,9 @@
 """ Script to process raw SQuADshift file for Question Generation format
 cd data/processed
-gsplit -b 6M -d --additional-suffix=.jsonl new_wiki.test.jsonl new_wiki.test
-gsplit -b 6M -d --additional-suffix=.jsonl nyt.test.jsonl nyt.test
-gsplit -b 6M -d --additional-suffix=.jsonl reddit.test.jsonl reddit.test
-gsplit -b 6M -d --additional-suffix=.jsonl amazon.test.jsonl amazon.test
+gsplit -l 1500 -d --additional-suffix=.jsonl new_wiki.test.jsonl new_wiki.test
+gsplit -l 1500 -d --additional-suffix=.jsonl nyt.test.jsonl nyt.test
+gsplit -l 1500 -d --additional-suffix=.jsonl reddit.test.jsonl reddit.test
+gsplit -l 1500 -d --additional-suffix=.jsonl amazon.test.jsonl amazon.test

 rm -rf new_wiki.test.jsonl
 rm -rf nyt.test.jsonl
@@ -92,11 +92,12 @@ if __name__ == '__main__':
         tmp_dataset = dataset[_split]
         with open(f'{output}/{data_type}.{_split}.jsonl', 'w') as f:
             for single_data in tqdm(tmp_dataset):
+                question_str = single_data['question']  # .replace("\n", ".").replace('"', "'")
+                paragraph_str = single_data['context']  # .replace("\n", ".").replace('"', "'")
                 answer_str = single_data['answers']['text']
-                question_str = single_data['question']
-                paragraph_str = single_data['context']
                 if type(answer_str) == list:
                     answer_str = answer_str[0]
+                # answer_str = answer_str.replace("\n", ".").replace('"', "'")
                 assert type(answer_str) is str, answer_str
                 assert type(question_str) is str, question_str
                 assert type(paragraph_str) is str, paragraph_str
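The substantive change above is `gsplit -b 6M` to `gsplit -l 1500`: splitting by bytes can cut a JSON record in half at a chunk boundary, while splitting every 1500 lines keeps each JSONL record intact. A minimal Python sketch of the line-based chunking, with a made-up record count:

```python
import json

# 3200 hypothetical JSONL records, split into 1500-line chunks, mirroring
# `gsplit -l 1500 -d --additional-suffix=.jsonl amazon.test.jsonl amazon.test`
records = [json.dumps({'id': i}) for i in range(3200)]
chunk_size = 1500
chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

for idx, chunk in enumerate(chunks):
    # gsplit's -d numbering yields amazon.test00.jsonl, amazon.test01.jsonl, ...
    name = f'amazon.test{idx:02d}.jsonl'
    print(name, len(chunk))
```

Every chunk boundary falls between records, so each shard is itself valid JSONL.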
qg_squadshift.py → qg_squadshifts.py RENAMED
@@ -1,11 +1,20 @@
+""" python -c "from datasets import load_dataset;load_dataset('.')" """
 import json
+from itertools import chain
 import datasets

 logger = datasets.logging.get_logger(__name__)
 _DESCRIPTION = """[SQuAD Shifts](https://modestyachts.github.io/squadshifts-website/index.html) dataset for question generation (QG) task."""
 _URL = 'https://huggingface.co/datasets/asahi417/qg_squadshift/raw/main/data/processed'
-_DOMAINS = ['new_wiki', 'nyt', 'reddit', 'amazon']
-_FILESIZE = [4, 5, 5, 5]
+_FILES = {
+    datasets.Split.TEST:
+        {
+            'new_wiki': [f'{_URL}/new_wiki.test{i:02d}.jsonl' for i in range(4)],
+            'nyt': [f'{_URL}/nyt.test{i:02d}.jsonl' for i in range(4)],
+            'reddit': [f'{_URL}/reddit.test{i:02d}.jsonl' for i in range(4)],
+            'amazon': [f'{_URL}/amazon.test{i:02d}.jsonl' for i in range(4)]
+        }
+}


 class QGSQuADShiftsConfig(datasets.BuilderConfig):
@@ -22,7 +31,7 @@ class QGSQuADShiftsConfig(datasets.BuilderConfig):
 class QGSQuADShifts(datasets.GeneratorBasedBuilder):

     BUILDER_CONFIGS = [QGSQuADShiftsConfig(name="default", description="All domain.")]
-    BUILDER_CONFIGS += [QGSQuADShiftsConfig(name=i, description=i) for i in _DOMAINS]
+    BUILDER_CONFIGS += [QGSQuADShiftsConfig(name=i, description=i) for i in sorted(_FILES[datasets.Split.TEST].keys())]

     def _info(self):
         return datasets.DatasetInfo(
@@ -35,11 +44,7 @@ class QGSQuADShifts(datasets.GeneratorBasedBuilder):
                     "paragraph": datasets.Value("string"),
                     "sentence_answer": datasets.Value("string"),
                     "paragraph_answer": datasets.Value("string"),
-                    "paragraph_sentence": datasets.Value("string"),
-                    "paragraph_id": datasets.Value("string"),
-                    "question_subj_level": datasets.Value("int32"),
-                    "answer_subj_level": datasets.Value("int32"),
-                    "domain": datasets.Value("string"),
+                    "paragraph_sentence": datasets.Value("string")
                 }
             ),
             supervised_keys=None,
@@ -48,22 +53,10 @@ class QGSQuADShifts(datasets.GeneratorBasedBuilder):

     def _split_generators(self, dl_manager):
         if self.config.name == 'default':
-            downloaded_file = dl_manager.download_and_extract({
-                'train': [f"{_URL}/{i}.train.jsonl" for i in _DOMAINS],
-                'dev': [f"{_URL}/{i}.dev.jsonl" for i in _DOMAINS],
-                'test': [f"{_URL}/{i}.test.jsonl" for i in _DOMAINS]
-            })
+            downloaded_file = dl_manager.download_and_extract({k: list(chain(*list(v.values()))) for k, v in _FILES.items()})
         else:
-            downloaded_file = dl_manager.download_and_extract({
-                'train': [f"{_URL}/{self.config.name}.train.jsonl"],
-                'dev': [f"{_URL}/{self.config.name}.dev.jsonl"],
-                'test': [f"{_URL}/{self.config.name}.test.jsonl"]
-            })
-        return [
-            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": downloaded_file["train"]}),
-            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepaths": downloaded_file["dev"]}),
-            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepaths": downloaded_file["test"]})
-        ]
+            downloaded_file = dl_manager.download_and_extract({k: v[self.config.name] for k, v in _FILES.items()})
+        return [datasets.SplitGenerator(name=k, gen_kwargs={"filepaths": downloaded_file[k]}) for k in _FILES.keys()]

     def _generate_examples(self, filepaths):
         _key = 0
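The rewritten `_split_generators` flattens the per-domain shard lists with `itertools.chain` for the `default` config, and picks a single domain's shards otherwise. A standalone sketch of that flattening, using the same `_FILES` layout as the loader (with a plain `'test'` key in place of `datasets.Split.TEST`):

```python
from itertools import chain

_URL = 'https://huggingface.co/datasets/asahi417/qg_squadshift/raw/main/data/processed'
_FILES = {
    'test': {
        domain: [f'{_URL}/{domain}.test{i:02d}.jsonl' for i in range(4)]
        for domain in ['new_wiki', 'nyt', 'reddit', 'amazon']
    }
}

# 'default' config: one flat list of all shard URLs per split (4 domains x 4 shards)
default_files = {split: list(chain(*files.values())) for split, files in _FILES.items()}
# single-domain config, e.g. 'amazon': just that domain's shards
amazon_files = {split: files['amazon'] for split, files in _FILES.items()}
```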