nreimers committed on
Commit dec357d
1 Parent(s): 95a3e7a

Create README.md

Files changed (1): README.md +95 -0
README.md ADDED
This dataset contains a pre-processed version of Wikipedia suitable for semantic search.

You can load the dataset like this:

```python
from datasets import load_dataset

lang = 'en'
data = load_dataset("Cohere/wikipedia-22-12", lang, split='train', streaming=True)

for row in data:
    print(row)
    break
```

This loads the dataset in streaming mode (so you don't need to download the whole dataset) and lets you process it row by row.

The articles are split into paragraphs. Further, for each article we added statistics on its page views in 2022 as well as the number of other languages the article is available in.
The dataset is sorted by page views, so that the most popular Wikipedia articles come first. If you read, e.g., the top 100k rows, you get good coverage of topics that are broadly interesting to people.
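Because the rows are sorted by page views, you can stream just the first rows to work with the most popular articles. A minimal sketch (illustrative only; it does not assume any particular field names):

```python
from itertools import islice

from datasets import load_dataset

# Stream only the first 100k rows; since the dataset is sorted by page views,
# these are the paragraphs of the most popular Wikipedia articles.
data = load_dataset("Cohere/wikipedia-22-12", "en", split="train", streaming=True)
top_rows = list(islice(data, 100_000))

print(len(top_rows))
print(top_rows[0])  # inspect the available fields of a single row
```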

## Semantic Search Embeddings

We also provide versions where the documents have been embedded with the [Cohere multilingual embedding model](https://txt.cohere.ai/multilingual/),
e.g. [wikipedia-22-12-en-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings) contains the paragraphs and their respective embeddings for English.
You can find the embeddings for other languages in the datasets `wikipedia-22-12-{lang}-embeddings`.
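As an illustration of how these embeddings could be used for semantic search (a minimal sketch; the field names `emb` and `text` and the placeholder query vector are assumptions, so check the embeddings dataset card for the exact schema):

```python
from itertools import islice

import numpy as np
from datasets import load_dataset

# Stream a small sample of the English embeddings dataset.
docs = load_dataset("Cohere/wikipedia-22-12-en-embeddings", split="train", streaming=True)
sample = list(islice(docs, 1000))

# 'emb' and 'text' are assumed field names; verify them against the dataset card.
doc_embeddings = np.asarray([row["emb"] for row in sample], dtype=np.float32)

# The query embedding would come from the same Cohere multilingual model;
# here a random vector stands in as a placeholder.
query_embedding = np.random.rand(doc_embeddings.shape[1]).astype(np.float32)

# Dot-product similarity, highest scores first.
scores = doc_embeddings @ query_embedding
for idx in np.argsort(-scores)[:3]:
    print(float(scores[idx]), sample[idx]["text"][:100])
```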

## Dataset Creation

The [XML data dumps](https://dumps.wikimedia.org/backup-index.html) from December 20th, 2022 were downloaded and processed
with [wikiextractor](https://github.com/attardi/wikiextractor) (version 2.75) and the following command:

```
python WikiExtractor.py --json -s --lists ../dumps/dewiki-20210101-pages-articles.xml.bz2 -o text_de
```
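For orientation, a sketch of how the extracted files could be read (this assumes wikiextractor's `--json` mode writes one JSON document per line with `title` and `text` fields and the default `text_de/AA/wiki_00`-style layout; it is not the exact processing script):

```python
import glob
import json

# Read the wikiextractor output (one JSON document per line) and split each
# article text into non-empty paragraphs.
for filepath in glob.glob("text_de/*/wiki_*"):
    with open(filepath, encoding="utf8") as fIn:
        for line in fIn:
            article = json.loads(line)
            paragraphs = [p.strip() for p in article["text"].split("\n") if p.strip()]
            print(article["title"], len(paragraphs))
```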

To count in how many languages an article is available, we downloaded the SQL files with the language links from:

```
https://dumps.wikimedia.org/{lang}wiki/{datestr}/{filename}
```

We then processed these SQL files to read the outbound language links for each article.
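As a rough illustration of this step (not the script actually used; the dump file name and the regular expression for the `langlinks` rows are assumptions):

```python
import gzip
import re
from collections import defaultdict

# Rows of the langlinks table look like (ll_from, 'll_lang', 'll_title'):
# the source page id, the target language code, and the title in that language.
row_pattern = re.compile(r"\((\d+),'([^']*)','((?:[^'\\]|\\.)*)'\)")

page_languages = defaultdict(set)

# Assumed file name for the December 20th, 2022 English dump.
with gzip.open("enwiki-20221220-langlinks.sql.gz", "rt", encoding="utf8", errors="ignore") as fIn:
    for line in fIn:
        if line.startswith("INSERT INTO"):
            for page_id, ll_lang, _ in row_pattern.findall(line):
                page_languages[int(page_id)].add(ll_lang)

# Number of other languages each article (by page id) is available in.
num_languages = {page_id: len(langs) for page_id, langs in page_languages.items()}
```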

Page views were downloaded from:

```
https://dumps.wikimedia.org/other/pageviews/{year}/{year}-{month_str}/pageviews-{year}{month_str}{day_str}-{hour_str}0000.gz
```
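For illustration, one way to build one such URL per day of 2022 with a randomly chosen hour (a sketch; the actual download script is not part of this card):

```python
import calendar
import random

URL_TEMPLATE = (
    "https://dumps.wikimedia.org/other/pageviews/{year}/{year}-{month_str}/"
    "pageviews-{year}{month_str}{day_str}-{hour_str}0000.gz"
)

urls = []
for month in range(1, 13):
    num_days = calendar.monthrange(2022, month)[1]
    for day in range(1, num_days + 1):
        hour = random.randint(0, 23)  # one random hour per day
        urls.append(URL_TEMPLATE.format(
            year=2022,
            month_str=f"{month:02d}",
            day_str=f"{day:02d}",
            hour_str=f"{hour:02d}",
        ))

print(len(urls), urls[0])
```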

For each day, we downloaded the page views for one random hour. We then computed the harmonic mean of the page views; the harmonic mean addresses cases where an article receives a very high number of page views at, e.g., a certain point in time. We use log scores for the page views to increase numerical stability.

The code to compute the page views was:
```python
import gzip
import sys
from collections import Counter, defaultdict
import math
import tqdm
import json


title_views = {}

# Score: harmonic mean (View_Day_1 * View_Day_2 * View_Day_3)
# Add log for better numerical stability
# Add +1 to avoid log(0)
# Compare the sum, so that days without views are counted as 0 views
for filepath in tqdm.tqdm(sys.argv[1:]):
    with gzip.open(filepath, "rt") as fIn:
        for line in fIn:
            splits = line.strip().split()
            if len(splits) == 4:
                lang, title, views, _ = splits
                lang = lang.lower()

                if lang.endswith(".m"):  # Add mobile page scores to the main score
                    lang = lang[0:-2]

                if lang.count(".") > 0:
                    continue

                if lang not in title_views:
                    title_views[lang] = {}
                if title not in title_views[lang]:
                    title_views[lang][title] = 0.0

                title_views[lang][title] += math.log(int(views) + 1)


# Save results
for lang in title_views:
    with open(f"pageviews_summary/{lang}.json", "w") as fOut:
        fOut.write(json.dumps(title_views[lang]))
```

We filter out paragraphs that start with `BULLET::::`, `Section::::`, `<templatestyles`, or `[[File:`.
Further, we only include paragraphs with at least 100 characters (measured with Python's `len`).
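A compact sketch of this filtering rule (illustrative; the prefixes and the 100-character threshold are taken from the description above):

```python
# Paragraph filter as described above: drop markup-like prefixes and short paragraphs.
EXCLUDE_PREFIXES = ("BULLET::::", "Section::::", "<templatestyles", "[[File:")

def keep_paragraph(paragraph: str) -> bool:
    paragraph = paragraph.strip()
    if paragraph.startswith(EXCLUDE_PREFIXES):
        return False
    return len(paragraph) >= 100

print(keep_paragraph("Section:::: History."))   # False (excluded prefix)
print(keep_paragraph("Too short."))             # False (fewer than 100 characters)
print(keep_paragraph("A" * 120))                # True
```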