This dataset repo contains the data supporting the Parallelopedia project: files for the Wiki and GPT2 components.

Wiki

enwiki-20150205-pages-articles.xml

An entire Wikipedia export was downloaded on February 5, 2015, as the single file enwiki-20150205-pages-articles.xml. The file was split into 2GB chunks, each of which was then compressed with zstd at the highest compression level; see the split-enwiki-xml.sh script.

To reconstitute the file from its constituent *.zstd parts, run the join-enwiki-xml.sh file.

To load the entire file via mmap in Python (the flags and prot arguments shown are POSIX-only; on Windows you'd pass access=mmap.ACCESS_READ instead):

import mmap
wiki_xml_path = 'enwiki-20150205-pages-articles.xml'
wiki_xml_file = open(wiki_xml_path, 'rb')
xml = mmap.mmap(
    wiki_xml_file.fileno(),
    length=0,
    flags=mmap.MAP_SHARED,
    prot=mmap.PROT_READ,
    offset=0,
)
try:
    xml.madvise(mmap.MADV_RANDOM)
except AttributeError:
    # Ignore if platform doesn't support.
    pass
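The same pattern can be exercised end-to-end against a small temporary stand-in file, so you can verify the calls without the full dump; the mmap usage is identical for the real file:

```python
import mmap
import os
import tempfile

# A tiny stand-in for the dump so the pattern can actually be run;
# the mmap calls are identical for the real file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'<page>\n    <title>Python</title>\n')
    path = f.name

wiki_xml_file = open(path, 'rb')
xml = mmap.mmap(
    wiki_xml_file.fileno(),
    length=0,
    flags=mmap.MAP_SHARED,
    prot=mmap.PROT_READ,
)
try:
    xml.madvise(mmap.MADV_RANDOM)
except AttributeError:
    # Ignore if platform doesn't support.
    pass

# Slicing the mmap yields bytes, exactly as it would for the dump.
snippet = xml[7:18]
print(snippet)  # b'    <title>'

xml.close()
wiki_xml_file.close()
os.unlink(path)
```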

titles.trie

This is a single Python datrie file that maps every title occurring in the .xml file (i.e. every string between the XML elements <title>...</title>) to a 64-bit unsigned integer: the byte offset within the .xml file at which the line containing that <title> element starts. Here's what the file content looks like for the page with the title Python:

<page>\n    <title>Python</title>\n
        ^
        |
        |

The offset points to the start of the line that contains the <title> element. That line always starts with four spaces, then <title>, then the actual title string, and it is always preceded by <page>\n. Thus, to find the offset of the encompassing <page> element, you subtract 7 from the title offset, as len('<page>\n') == 7.
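To make that arithmetic concrete, here's a sketch against a tiny synthetic fragment (the offset below is illustrative, not a real one):

```python
# A synthetic fragment mimicking the dump's layout.
xml = b'  <page>\n    <title>Python</title>\n  </page>\n'

# Pretend the trie reported this offset: the start of the line
# containing the <title> element.
title_offset = xml.index(b'    <title>')

# Step back over the preceding '<page>\n' to reach the <page> element.
page_offset = title_offset - len('<page>\n')  # i.e. title_offset - 7
print(xml[page_offset:page_offset + 7])  # b'<page>\n'
```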

To load the trie:

import datrie
trie = datrie.Trie.load('titles.trie')

N.B. This will take a while; the trie is huge (~1.9GB).

titles_offsets.npy

This is a 1D NumPy array of signed 64-bit integers containing the sorted byte offsets of all values in the trie above. To find the byte range within the XML file for the page matching a given title, first obtain the title's offset via offset = trie[title][0]. If the offset is negative, it means the title you looked up was lowercase but the actual title has different casing, e.g.:

offset1 = trie['Python'][0]
offset2 = trie['python'][0]

print(f'offset1: {offset1}\noffset2: {offset2}')

That will print:

offset1: 33919833357
offset2: -33919833357

This improves the usefulness of the trie, allowing lookups via lowercase representations of titles rather than requiring the casing to match exactly.

So, in order to get the actual byte offset, you would wrap the offset code as follows:

offset = trie[title][0]
# Normalize to a positive value.
offset = offset if offset > 0 else -1 * offset

Assuming titles_offsets.npy has been loaded as follows:

import numpy as np
offsets = np.load('titles_offsets.npy')

Then, given the starting offset of a title, you can find where the next title begins by searching the array for the first offset greater than yours. Note that searchsorted returns an index, so you index back into the array to get the offset itself:

next_title_offset = offsets[offsets.searchsorted(offset, side='right')]
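A quick check of that bracketing search with a made-up offsets array (the values are illustrative, not real offsets):

```python
import numpy as np

# Synthetic stand-in for titles_offsets.npy: sorted title offsets.
offsets = np.array([100, 250, 400, 900], dtype=np.int64)

offset = 250  # offset of the title we looked up
# searchsorted gives the index of the first offset greater than ours;
# indexing back into the array yields the next title's byte offset.
next_title_offset = offsets[offsets.searchsorted(offset, side='right')]
print(next_title_offset)  # 400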

In order to bracket the entire containing <page>\n...</page> content, you subtract 7 bytes from the first offset and 10 bytes from the second. That gives you the exact byte range of the page content, including the opening <page> and closing </page> elements:

start = offset - 7            # len('<page>\n') == 7
end = next_title_offset - 10  # len('\n  <page>\n') == 10
page = xml[start:end]

print(page[:33])
print('...')
print(page[-20:])

This will print:

b'<page>\n    <title>Python</title>\n'
...
b'/revision>\n  </page>'
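Putting it all together, here is a hypothetical get_page helper (the name is mine, not part of the repo). To keep the sketch self-contained and runnable, a plain dict stands in for the trie and a bytes literal for the mmap; with the real files you'd substitute the datrie and mmap objects loaded above:

```python
import numpy as np

# Two-page synthetic dump mimicking the real layout.
xml = (
    b'  <page>\n    <title>Python</title>\n    <text>A language.</text>\n'
    b'  </page>\n'
    b'  <page>\n    <title>Ruby</title>\n    <text>A gem.</text>\n'
    b'  </page>\n'
)

# Stand-ins for titles.trie and titles_offsets.npy.
trie = {
    'Python': [xml.index(b'    <title>Python')],
    'Ruby': [xml.index(b'    <title>Ruby')],
}
offsets = np.array(sorted(v[0] for v in trie.values()), dtype=np.int64)

def get_page(title):
    """Return the <page>...</page> bytes for `title`."""
    offset = trie[title][0]
    offset = offset if offset > 0 else -1 * offset  # undo lowercase marker
    start = offset - 7  # len('<page>\n') == 7
    idx = offsets.searchsorted(offset, side='right')
    if idx < len(offsets):
        end = int(offsets[idx]) - 10  # len('\n  <page>\n') == 10
    else:
        # Last page in this synthetic data: just trim the trailing '\n'.
        end = len(xml) - 1
    return xml[start:end]

print(get_page('Python')[:7])   # b'<page>\n'
print(get_page('Python')[-7:])  # b'</page>'
```

Note the fallback for the final page is specific to this synthetic data; in the real dump the last <page> is followed by the closing </mediawiki> element rather than another page.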