Datasets: lmqg/qag_tweetqa
Modalities: Text
Languages: English
ArXiv: 2210.03992
Libraries: Datasets
License: cc-by-sa-4.0
parquet-converter committed
Commit 7ec3759
1 Parent(s): f807452

Update parquet files

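With the Parquet files in place, the dataset loads directly from the converted files, with no loading script required. A minimal sketch of loading it (assumes the `datasets` library is installed; the repo id `lmqg/qag_tweetqa` and the field names come from the dataset card below):

```python
# Minimal sketch: load the Parquet-converted dataset via the `datasets` library.
# Repo id and field names are taken from the dataset card shown further down.
from datasets import load_dataset

dataset = load_dataset("lmqg/qag_tweetqa")
print(dataset)                     # DatasetDict with train / validation / test splits
print(dataset["train"][0].keys())  # paragraph, paragraph_id, questions, answers, questions_answers
```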
.gitattributes DELETED
@@ -1,58 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
- data filter=lfs diff=lfs merge=lfs -text
- data/processed/test.jsonl filter=lfs diff=lfs merge=lfs -text
- data/processed/train.jsonl filter=lfs diff=lfs merge=lfs -text
- data/processed/validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,71 +0,0 @@
- ---
- license: cc-by-sa-4.0
- pretty_name: TweetQA for question generation
- language: en
- multilinguality: monolingual
- size_categories: 1K<n<10K
- source_datasets: tweet_qa
- task_categories:
- - text-generation
- task_ids:
- - language-modeling
- tags:
- - question-generation
- ---
-
- # Dataset Card for "lmqg/qag_tweetqa"
-
- ## Dataset Description
- - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
-
- ### Dataset Summary
- This is a question & answer generation dataset based on [tweet_qa](https://huggingface.co/datasets/tweet_qa). The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
-
- ### Supported Tasks and Leaderboards
- * `question-answer-generation`: The dataset is intended for training a model for question & answer generation.
-   Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
-
- ### Languages
- English (en)
-
- ## Dataset Structure
- An example from the 'train' split looks as follows.
- ```
- {
-     "paragraph": "I would hope that Phylicia Rashad would apologize now that @missjillscott has! You cannot discount 30 victims who come with similar stories.— JDWhitner (@JDWhitner) July 7, 2015",
-     "questions": [ "what should phylicia rashad do now?", "how many victims have come forward?" ],
-     "answers": [ "apologize", "30" ],
-     "questions_answers": "Q: what should phylicia rashad do now?, A: apologize Q: how many victims have come forward?, A: 30"
- }
- ```
- The data fields are the same among all splits.
- - `questions`: a `list` of `string` features.
- - `answers`: a `list` of `string` features.
- - `paragraph`: a `string` feature.
- - `questions_answers`: a `string` feature.
-
- ## Data Splits
-
- | train | validation | test |
- |------:|-----------:|-----:|
- |  4536 |        583 |  583 |
-
- ## Citation Information
-
- ```
- @inproceedings{ushio-etal-2022-generative,
-     title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
-     author = "Ushio, Asahi and
-       Alva-Manchego, Fernando and
-       Camacho-Collados, Jose",
-     booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
-     month = dec,
-     year = "2022",
-     address = "Abu Dhabi, U.A.E.",
-     publisher = "Association for Computational Linguistics",
- }
- ```
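The card's example shows how the parallel `questions` and `answers` lists flatten into the single `questions_answers` string. A small sketch of that pairing (field names are from the card; the "Q: ..., A: ..." template matches the card's example, while other versions of the data use a different separator, as process.py below shows):

```python
# Sketch: rebuild a questions_answers-style target string from the parallel lists.
# Field names come from the dataset card; the "Q: ..., A: ..." template matches
# the card's example, but process.py (below) uses a different format string.
example = {
    "questions": ["what should phylicia rashad do now?", "how many victims have come forward?"],
    "answers": ["apologize", "30"],
}
questions_answers = " ".join(
    f"Q: {q}, A: {a}" for q, a in zip(example["questions"], example["answers"])
)
print(questions_answers)
# Q: what should phylicia rashad do now?, A: apologize Q: how many victims have come forward?, A: 30
```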
process.py DELETED
@@ -1,43 +0,0 @@
1
- import json
2
- import os
3
- from random import seed, shuffle
4
- import re
5
- from tqdm import tqdm
6
- from typing import Dict
7
- from datasets import load_dataset
8
-
9
-
10
- SEP_TOKEN = " | "
11
-
12
-
13
- def create_data(hf_data):
14
- df = hf_data.to_pandas()
15
- output = []
16
- for tweet, g in df.groupby("Tweet"):
17
- example = {
18
- 'paragraph': tweet.replace(SEP_TOKEN, " "),
19
- "paragraph_id": '-'.join(g['qid']),
20
- 'questions': [_g.replace(SEP_TOKEN, " ") for _g in g['Question']],
21
- 'answers': [_g[0].replace(SEP_TOKEN, " ") for _g in g['Answer']],
22
- }
23
- example["questions_answers"] = SEP_TOKEN.join([f"question: {q}, answer: {a}" for q, a in zip(example["questions"], example["answers"])])
24
- output.append(example)
25
- return output
26
-
27
-
28
- if __name__ == '__main__':
29
- tweet_qa = load_dataset("tweet_qa")
30
- data_valid = create_data(tweet_qa['validation'])
31
- data_train = create_data(tweet_qa['train'])
32
- seed(1)
33
- test_len = len(data_valid)
34
- shuffle(data_train)
35
- data_test = data_train[:test_len]
36
- data_train = data_train[test_len:]
37
- data_all = {'train': data_train, 'validation': data_valid, 'test': data_test}
38
- output = './data/processed'
39
- os.makedirs(output, exist_ok=True)
40
- for k, _data in data_all.items():
41
- with open('{}/{}.jsonl'.format(output, k), 'w') as f:
42
- for single_data in tqdm(_data):
43
- f.write(json.dumps(single_data) + '\n')
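The script groups TweetQA rows by tweet, then samples a test split from the shuffled training data, sized to match validation. A quick sanity check of the regenerated JSONL files against the split sizes in the dataset card (assumes process.py has already been run; the 4536/583/583 counts come from the card's Data Splits table):

```python
# Sketch: check the regenerated JSONL splits against the dataset card's sizes.
# Assumes process.py has run and written its output under ./data/processed.
import json

expected = {"train": 4536, "validation": 583, "test": 583}  # from the dataset card
for split, n in expected.items():
    with open(f"./data/processed/{split}.jsonl") as f:
        rows = [json.loads(line) for line in f if line.strip()]
    assert len(rows) == n, f"{split}: got {len(rows)}, expected {n}"
    print(split, "ok:", len(rows), "examples")
```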
qag_tweetqa.py DELETED
@@ -1,80 +0,0 @@
1
- import json
2
- import datasets
3
-
4
- logger = datasets.logging.get_logger(__name__)
5
- _VERSION = "2.0.1"
6
- _NAME = "qag_tweetqa"
7
- _CITATION = """
8
- @inproceedings{ushio-etal-2022-generative,
9
- title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
10
- author = "Ushio, Asahi and
11
- Alva-Manchego, Fernando and
12
- Camacho-Collados, Jose",
13
- booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
14
- month = dec,
15
- year = "2022",
16
- address = "Abu Dhabi, U.A.E.",
17
- publisher = "Association for Computational Linguistics",
18
- }
19
- """
20
- _DESCRIPTION = """Question & answer generation dataset based on [TweetQA](https://huggingface.co/datasets/tweet_qa)."""
21
- _URL = "https://huggingface.co/datasets/lmqg/qag_tweetqa/resolve/main/data/processed"
22
- _URLS = {
23
- 'train': f'{_URL}/train.jsonl',
24
- 'test': f'{_URL}/test.jsonl',
25
- 'validation': f'{_URL}/validation.jsonl'
26
- }
27
-
28
-
29
- class QAGTweetQAConfig(datasets.BuilderConfig):
30
- """BuilderConfig"""
31
-
32
- def __init__(self, **kwargs):
33
- """BuilderConfig.
34
- Args:
35
- **kwargs: keyword arguments forwarded to super.
36
- """
37
- super(QAGTweetQAConfig, self).__init__(**kwargs)
38
-
39
-
40
- class QAGTweetQA(datasets.GeneratorBasedBuilder):
41
-
42
- BUILDER_CONFIGS = [
43
- QAGTweetQAConfig(name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION),
44
- ]
45
-
46
- def _info(self):
47
- return datasets.DatasetInfo(
48
- description=_DESCRIPTION,
49
- features=datasets.Features(
50
- {
51
- "answers": datasets.Sequence(datasets.Value("string")),
52
- "questions": datasets.Sequence(datasets.Value("string")),
53
- "paragraph": datasets.Value("string"),
54
- "paragraph_id": datasets.Value("string"),
55
- "questions_answers": datasets.Value("string")
56
- }
57
- ),
58
- supervised_keys=None,
59
- homepage="https://github.com/asahi417/lm-question-generation"
60
- )
61
-
62
- def _split_generators(self, dl_manager):
63
- downloaded_file = dl_manager.download_and_extract(_URLS)
64
- return [
65
- datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_file["train"]}),
66
- datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_file["validation"]}),
67
- datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_file["test"]}),
68
- ]
69
-
70
- def _generate_examples(self, filepath):
71
- _key = 0
72
- logger.info("generating examples from = %s", filepath)
73
- with open(filepath, encoding="utf-8") as f:
74
- _list = f.read().split('\n')
75
- if _list[-1] == '':
76
- _list = _list[:-1]
77
- for i in _list:
78
- data = json.loads(i)
79
- yield _key, data
80
- _key += 1
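The deleted loading script is superseded by the Parquet files, but code that still depends on it can pin the pre-conversion state of the repository. A sketch (f807452 is the parent commit from this commit's header; `revision` is a standard `load_dataset` argument, and depending on the `datasets` version, running the script may also require `trust_remote_code=True` or an older release that still supports script-based datasets):

```python
# Sketch: load the repository state from before the Parquet conversion,
# so the now-deleted qag_tweetqa.py loading script is still used.
# f807452 is the parent commit listed in this commit's header.
from datasets import load_dataset

dataset = load_dataset("lmqg/qag_tweetqa", revision="f807452")
```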
data/processed/validation.jsonl → qag_tweetqa/qag_tweetqa-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:17af18ec0a59ced874b64719b7b09169b26b7af59fd3561b082fb864dbedf981
- size 317972
+ oid sha256:7a41e003185d3349ba34e72df207cbd5015c8e45202cf66a960a68e8e034d373
+ size 211184
data/processed/train.jsonl → qag_tweetqa/qag_tweetqa-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6d53ee0b2c968056b1236508f65687e20b0e61218e75ea89c095a42934f1a531
- size 2713920
+ oid sha256:6557d9b667e20dde5bc7e1c74d06032af679dc10241432930e604f6311d7431f
+ size 1617334
data/processed/test.jsonl → qag_tweetqa/qag_tweetqa-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9cbd2af4000c47675677106d5cc5cc796ee70b1f70c2a34faa1fc76f9361f9ea
- size 349824
+ oid sha256:b247492aabb791bb81c6a713cdc1af3d64d832fcccfcc733428036ace8abc3e0
+ size 190659
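The converted Parquet files can also be read directly, bypassing `datasets` entirely. A sketch using pandas over the Hub filesystem (the path mirrors the renamed train file above; the `hf://` protocol requires `huggingface_hub` and `fsspec` alongside `pandas`):

```python
# Sketch: read one converted Parquet split directly with pandas.
# The path mirrors the renamed file above; hf:// needs huggingface_hub + fsspec.
import pandas as pd

df = pd.read_parquet("hf://datasets/lmqg/qag_tweetqa/qag_tweetqa/qag_tweetqa-train.parquet")
print(df.shape)
print(df.columns.tolist())
```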