Update README.md

README.md

This section was adapted from the source data description of [OSCAR](https://hug
Common Crawl is a non-profit foundation that produces and maintains an open repository of web crawl data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected nofollow and robots.txt policies.

To construct MFAQ, the WARC files of Common Crawl were used. We looked for `FAQPage` markup in the HTML and subsequently parsed the `FAQItem` entries from the page.
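
For illustration, here is a minimal Python sketch of this kind of extraction. It is not the authors' actual pipeline: it assumes the `FAQPage` markup is embedded as JSON-LD (schema.org also allows microdata and RDFa), and it maps what the paragraph calls an `FAQItem` onto schema.org's `Question`/`acceptedAnswer` structure.

```python
# Hedged sketch: extract FAQ question/answer pairs from schema.org FAQPage
# JSON-LD markup in raw HTML. Uses only the standard library. Real crawl data
# is messier (microdata, @graph wrappers, malformed JSON), so this is a
# simplified illustration, not the MFAQ extraction code.
import json
from html.parser import HTMLParser


class JSONLDCollector(HTMLParser):
    """Collects the text content of <script type="application/ld+json"> tags."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True
            self.blocks.append("")

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self.blocks[-1] += data


def extract_faq_pairs(html):
    """Yield (question, answer) pairs from any FAQPage JSON-LD in the page."""
    collector = JSONLDCollector()
    collector.feed(html)
    for block in collector.blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed JSON-LD is common in web crawls; skip it
        for doc in data if isinstance(data, list) else [data]:
            if not isinstance(doc, dict) or doc.get("@type") != "FAQPage":
                continue
            entities = doc.get("mainEntity", [])
            if isinstance(entities, dict):  # a single item may not be wrapped in a list
                entities = [entities]
            for item in entities:
                if not isinstance(item, dict):
                    continue
                question = item.get("name", "")
                answer = item.get("acceptedAnswer") or {}
                answer_text = answer.get("text", "") if isinstance(answer, dict) else ""
                if question and answer_text:
                    yield question, answer_text


# Tiny usage example on a synthetic page:
page = """<html><script type="application/ld+json">
{"@type": "FAQPage", "mainEntity": [{"@type": "Question",
  "name": "What is MFAQ?",
  "acceptedAnswer": {"@type": "Answer", "text": "A multilingual FAQ dataset."}}]}
</script></html>"""
print(list(extract_faq_pairs(page)))
# [('What is MFAQ?', 'A multilingual FAQ dataset.')]
```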
## Citation information

```
@InProceedings{mfaq_a_multilingual_dataset,
  title={MFAQ: a Multilingual FAQ Dataset},
  author={Maxime {De Bruyn} and Ehsan Lotfi and Jeska Buhmann and Walter Daelemans},
  year={2021}
}
```