albertvillanova (HF staff) committed
Commit 6a1b95a
1 Parent(s): 9a45e3f

Add dataset card

Files changed (1):
  1. README.md +254 -0

README.md CHANGED
@@ -1,4 +1,92 @@
  ---
+ language:
+ - ar
+ - as
+ - az
+ - ban
+ - be
+ - bg
+ - bn
+ - br
+ - bs
+ - ca
+ - cs
+ - cy
+ - da
+ - de
+ - el
+ - en
+ - eo
+ - es
+ - et
+ - eu
+ - fa
+ - fi
+ - fo
+ - fr
+ - gl
+ - gu
+ - he
+ - hi
+ - hr
+ - hu
+ - hy
+ - id
+ - is
+ - it
+ - ja
+ - jv
+ - kn
+ - ko
+ - la
+ - li
+ - lij
+ - lt
+ - mk
+ - ml
+ - mr
+ - nan
+ - nap
+ - nl
+ - no
+ - or
+ - pa
+ - pl
+ - pms
+ - pt
+ - ro
+ - ru
+ - sa
+ - sah
+ - sk
+ - sl
+ - sr
+ - su
+ - sv
+ - ta
+ - te
+ - th
+ - tr
+ - uk
+ - vec
+ - vi
+ - wa
+ - yi
+ - zh
+ license:
+ - cc-by-sa-3.0
+ - gfdl
+ size_categories:
+ - n<1K
+ - 1K<n<10K
+ - 10K<n<100K
+ - 100K<n<1M
+ task_categories:
+ - text-generation
+ - fill-mask
+ task_ids:
+ - language-modeling
+ - masked-language-modeling
  dataset_info:
  - config_name: 20231201.ar
  features:
@@ -1442,3 +1530,169 @@ configs:
  - split: train
    path: 20231201.zh/train-*
  ---
+
+ # Dataset Card for Wikimedia Wikisource
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://dumps.wikimedia.org
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ Wikisource dataset containing cleaned articles in all languages.
+
+ The dataset is built from the Wikisource dumps (https://dumps.wikimedia.org/)
+ with one subset per language, each containing a single train split.
+
+ Each example contains the content of one full Wikisource text, cleaned to strip
+ markup and unwanted sections (references, etc.).
+
+ All language subsets have already been processed for the most recent dump, and you can load them by date and language like this:
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("wikimedia/wikisource", "20231201.en")
+ ```
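+
+ For large subsets, you can also stream examples instead of downloading the whole split first. A minimal sketch, reusing the `20231201.en` config from the example above:
+ ```python
+ from datasets import load_dataset
+
+ # streaming=True yields examples lazily instead of materializing the split on disk
+ ds = load_dataset("wikimedia/wikisource", "20231201.en", streaming=True)
+ print(next(iter(ds["train"])))
+ ```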
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset is generally used for Language Modeling.
+
+ ### Languages
+
+ You can find the list of all languages here: https://meta.wikimedia.org/wiki/Wikisource#List_of_Wikisources
+
+ Note that the wiki code "www" contains multilingual texts. You can find the list of languages at the "www" Multilingual
+ Wikisource here: https://wikisource.org/wiki/Wikisource:Languages
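+
+ To see which date/language subsets this dataset exposes without downloading any data, you can list its config names. A small sketch using the `datasets` helper:
+ ```python
+ from datasets import get_dataset_config_names
+
+ # Each config name combines a dump date and a wiki code, e.g. "20231201.en"
+ configs = get_dataset_config_names("wikimedia/wikisource")
+ print(configs[:5])
+ ```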
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example looks as follows:
+ ```
+ {'id': '36',
+  'url': 'https://ca.wikisource.org/wiki/Comunicat%20de%20Berl%C3%ADn',
+  'title': 'Comunicat de Berlín',
+  'text': "\n\nPreàmbul \nEl 19 de juny de 1999, un any després de la Declaració de la Sorbona,..."
+ }
+ ```
+
+ ### Data Fields
+
+ The data fields are the same among all configurations (see the access sketch after the list):
+ - `id` (`str`): ID of the article.
+ - `url` (`str`): URL of the article.
+ - `title` (`str`): Title of the article.
+ - `text` (`str`): Text content of the article.
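+
+ A minimal access sketch for these fields, assuming the `20231201.en` subset from the loading example above:
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("wikimedia/wikisource", "20231201.en")
+ example = ds["train"][0]
+
+ # All four fields are plain strings
+ print(example["id"], example["url"])
+ print(example["title"])
+ print(example["text"][:200])  # first 200 characters of the text content
+ ```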
+
+ ### Data Splits
+
+ All configurations contain a single `train` split.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The dataset is built from the Wikisource dumps: https://dumps.wikimedia.org
+
+ You can find the full list of languages and dates here: https://dumps.wikimedia.org/backup-index.html
+
+ The articles have been parsed using the [`mwparserfromhell`](https://mwparserfromhell.readthedocs.io) tool.
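+
+ As an illustration of what such parsing does (not the exact pipeline used to build this dataset), here is a minimal sketch of stripping wikitext markup with `mwparserfromhell`:
+ ```python
+ import mwparserfromhell
+
+ raw = "'''Bold''' text with a [[link|piped link]]."
+ wikicode = mwparserfromhell.parse(raw)
+
+ # strip_code() removes wiki markup and keeps the readable text
+ print(wikicode.strip_code())  # -> Bold text with a piped link.
+ ```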
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Copyright licensing information: https://dumps.wikimedia.org/legal.html
+
+ All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL)
+ and the [Creative Commons Attribution-Share-Alike 3.0 License](https://creativecommons.org/licenses/by-sa/3.0/).
+ Some text may be available only under the Creative Commons license; see their [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details.
+ Text written by some authors may be released under additional licenses or into the public domain.
+
+ ### Citation Information
+
+ ```
+ @ONLINE{wikidump,
+     author = "Wikimedia Foundation",
+     title  = "Wikimedia Downloads",
+     url    = "https://dumps.wikimedia.org"
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@albertvillanova](https://huggingface.co/albertvillanova) for adding this dataset.