maximedb committed
Commit 3f4bbb5
1 Parent(s): 6e72876

Update README.md

Files changed (1): README.md (+41 -24)
README.md CHANGED
@@ -62,7 +62,7 @@ task_ids:
 MQA is a multilingual corpus of questions and answers parsed from the [Common Crawl](https://commoncrawl.org/). Questions are divided between *Frequently Asked Questions (FAQ)* pages and *Community Question Answering (CQA)* pages.
 ```
 from datasets import load_dataset
-load_dataset("clips/mqa", language="en")
+all = load_dataset("clips/mqa", language="en")
 {
   "name": "the title of the question (if any)",
   "text": "the body of the question (if any)",
@@ -71,10 +71,12 @@ load_dataset("clips/mqa", language="en")
   "is_accepted": "true|false"
   }]
 }
+faq = load_dataset("clips/mqa", scope="faq", language="en")
+cqa = load_dataset("clips/mqa", scope="cqa", language="en")
 ```
 
 ## Languages
-We collected around 234M pairs of questions and answers in 39 different languages. To download a language specific subset you need to specify the language key as configuration. See below for an example.
+We collected around **234M pairs** of questions and answers in **39 languages**. To download a language-specific subset, specify the language key as configuration. See below for an example.
 ```
 load_dataset("clips/mqa", language="en") # replace "en" by any language listed below
 ```
@@ -104,26 +106,41 @@ load_dataset("clips/mqa", language="en") # replace "en" by any language listed b
 | Hungarian | hu | 8,598 | 1,264 |
 | Croatian | hr | 5,215 | 819 |
 
-## Data Fields
-#### Nested (per page - default)
-The data is organized by page. Each page contains a list of questions and answers.
-- **id**
-- **language**
-- **num_pairs**: the number of FAQs on the page
-- **domain**: source web domain of the FAQs
-- **qa_pairs**: a list of questions and answers
-  - **question**
-  - **answer**
-  - **language**
-
-#### Flattened
-The data is organized by pair (i.e. pages are flattened). You can access the flat version of any language by appending `_flat` to the configuration (e.g. `en_flat`). The data will be returned pair-by-pair instead of page-by-page.
-- **domain_id**
-- **pair_id**
-- **language**
-- **domain**: source web domain of the FAQs
-- **question**
-- **answer**
+## FAQ vs. CQA
+You can download the *Frequently Asked Questions* or the *Community Question Answering* part of the dataset by specifying the `scope` when building the dataset. Specify `all` to download both.
+
+```
+faq = load_dataset("clips/mqa", scope="faq")
+cqa = load_dataset("clips/mqa", scope="cqa")
+all = load_dataset("clips/mqa", scope="all")
+```
+
+## Nesting and Data Fields
+You can specify three different nesting levels: `question`, `page` and `domain`.
+#### Question
+```
+load_dataset("clips/mqa", level="question") # default
+```
+The default level is the question object:
+- **name**: the title of the question (if any)
+- **text**: the body of the question (if any)
+- **answers**: a list of answers
+  - **text**: the body of the answer
+  - **name**: the title of the answer (if any)
+  - **is_accepted**: true if the answer is selected
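A minimal usage sketch for these fields, assuming the English configuration and a `train` split (both assumptions, not stated in the card):
```
from datasets import load_dataset

# Load the English subset at the default (question) level.
mqa = load_dataset("clips/mqa", language="en")

for question in mqa["train"]:  # the split name "train" is an assumption
    title = question["name"]   # question title (may be empty)
    body = question["text"]    # question body (may be empty)
    # Keep only the answers flagged as accepted.
    accepted = [a for a in question["answers"] if a["is_accepted"]]
    if accepted:
        print(title or body, "->", accepted[0]["text"])
        break
```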
+
+#### Page
+This level returns a list of the questions present on the same page. This is mostly useful for FAQs, since CQAs already have one question per page.
+```
+load_dataset("clips/mqa", level="page")
+```
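The page-level schema is not spelled out in this card. Assuming each page record exposes its questions under a `questions` field (a hypothetical name), iteration could look like:
```
from datasets import load_dataset

pages = load_dataset("clips/mqa", language="en", level="page")

for page in pages["train"]:  # split name assumed
    # "questions" is a hypothetical field name for the list of
    # question objects on a page; the card does not specify it.
    for question in page["questions"]:
        print(question["name"] or question["text"])
    break
```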
+
+#### Domain
+This level returns a list of the pages present on the same web domain. This is a good way to cope with FAQ duplication: sample one page per domain at each epoch.
+
+```
+load_dataset("clips/mqa", level="domain")
+```
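A minimal sketch of that sampling strategy, assuming each domain record holds its pages under a hypothetical `pages` field:
```
import random
from datasets import load_dataset

domains = load_dataset("clips/mqa", language="en", level="domain")

def sample_epoch(dataset, seed):
    """Pick one random page per domain; reshuffle every epoch."""
    rng = random.Random(seed)
    # "pages" is a hypothetical field name; the card does not
    # specify the domain-level schema.
    return [rng.choice(record["pages"]) for record in dataset["train"]]

epoch_0 = sample_epoch(domains, seed=0)
epoch_1 = sample_epoch(domains, seed=1)
```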
 
 ## Source Data
 
@@ -131,10 +148,10 @@ This section was adapted from the source data description of [OSCAR](https://hug
 
 Common Crawl is a non-profit foundation which produces and maintains an open repository of web-crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected nofollow and robots.txt policies.
 
-To construct MFAQ, the WARC files of Common Crawl were used. We looked for `FAQPage` markup in the HTML and subsequently parsed the `FAQItem` from the page.
+To construct MQA, we used the WARC files of Common Crawl.
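The removed sentence describes how pages were selected: by their schema.org `FAQPage` markup. A minimal detection sketch, assuming JSON-LD markup and using BeautifulSoup rather than the authors' actual pipeline:
```
import json
from bs4 import BeautifulSoup

def find_faq_items(html):
    """Return the Question items of a schema.org FAQPage, if any."""
    soup = BeautifulSoup(html, "html.parser")
    for script in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(script.string or "")
        except json.JSONDecodeError:
            continue
        if isinstance(data, dict) and data.get("@type") == "FAQPage":
            # FAQ question/answer pairs live under "mainEntity".
            return data.get("mainEntity", [])
    return []
```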
 
 ## People
-This model was developed by [Maxime De Bruyn](https://www.linkedin.com/in/maximedebruyn/), Ehsan Lotfi, Jeska Buhmann and Walter Daelemans.
+This dataset was developed by [Maxime De Bruyn](https://maximedb.vercel.app), Ehsan Lotfi, Jeska Buhmann and Walter Daelemans.
 
 ## Licensing Information
 ```