karmiq committed
Commit 159610c
1 Parent(s): dfb3e84

Update README

Files changed (1): README.md (+156 −18)

README.md CHANGED
---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: chunks
    sequence: string
  - name: embeddings
    sequence:
      sequence: float32
  splits:
  - name: train
    num_bytes: 5021489124
    num_examples: 534044
  download_size: 4750515911
  dataset_size: 5021489124

configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*

language:
- cs

size_categories:
- 100K<n<1M

task_categories:
- text-generation
- fill-mask

license:
- cc-by-sa-3.0
- gfdl
---

This dataset contains the Czech subset of the [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. Each page is divided into paragraphs, stored as a list in the `chunks` column. For every paragraph, embeddings are created using the [`intfloat/multilingual-e5-base`](https://huggingface.co/intfloat/multilingual-e5-base) model.
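
For reference, paragraph embeddings of this shape can be produced with the `sentence-transformers` library. The sketch below is an illustration only; the exact paragraph-splitting rule and any `passage:` prefixing used when this dataset was built are assumptions, not taken from this card.

```python
# Illustrative sketch: the chunking rule and the "passage: " prefix are
# assumptions, not necessarily what was used to build this dataset.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-base")

def embed_page(text: str) -> dict:
    # Naive split into paragraphs on blank lines.
    chunks = [p.strip() for p in text.split("\n\n") if p.strip()]
    # E5 models are typically fed "query: " / "passage: " prefixed inputs.
    embeddings = model.encode(["passage: " + chunk for chunk in chunks])
    return {"chunks": chunks, "embeddings": embeddings.tolist()}
```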

## Usage

Load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("karmiq/wikipedia-embeddings-cs-e5-base", split="train")
ds[1]
```

```
{
  'id': '1',
  'url': 'https://cs.wikipedia.org/wiki/Astronomie',
  'title': 'Astronomie',
  'chunks': [
    'Astronomie, řecky αστρονομία z άστρον ( astron ) hvězda a νόμος ( nomos )...',
    'Myšlenky Aristotelovy rozvinul ve 2. století našeho letopočtu Klaudios Ptolemaios...',
    ...,
  ],
  'embeddings': [
    [0.09006806463003159, -0.009814552962779999, ...],
    [0.10767366737127304, ...],
    ...
  ]
}
```
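
The `chunks` and `embeddings` columns are parallel lists: the i-th embedding corresponds to the i-th paragraph. A quick sanity check (the 768 dimensions are the output size of `multilingual-e5-base`):

```python
row = ds[1]

# One embedding per paragraph
assert len(row["chunks"]) == len(row["embeddings"])

# Each embedding is a 768-dimensional vector
print(len(row["embeddings"][0]))
```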

The structure makes it easy to use the dataset for implementing semantic search.

<details>
<summary>Load the data into Elasticsearch</summary>

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import parallel_bulk
from tqdm import tqdm

es = Elasticsearch("http://localhost:9200")  # adjust to your cluster
# The target index needs a mapping with `dense_vector` fields for the
# embeddings; a possible mapping is sketched after this section.

def doc_generator(data, batch_size=1000):
    for batch in data.with_format("numpy").iter(batch_size):
        for i, id in enumerate(batch["id"]):
            output = {"id": id}
            output["title"] = batch["title"][i]
            output["url"] = batch["url"][i]
            output["parts"] = [
                { "chunk": chunk, "embedding": embedding }
                for chunk, embedding in zip(batch["chunks"][i], batch["embeddings"][i])
            ]
            yield output

num_indexed, num_failed = 0, 0
progress = tqdm(total=ds.num_rows, unit="doc", desc="Indexing")

for ok, info in parallel_bulk(
    es,
    index="wikipedia-search",
    actions=doc_generator(ds),
    raise_on_error=False,
):
    if not ok:
        num_failed += 1
        print(f"ERROR {info['index']['status']}: "
              f"{info['index']['error']['type']}: {info['index']['error']['caused_by']['type']}: "
              f"{info['index']['error']['caused_by']['reason'][:250]}")
    else:
        num_indexed += 1

    progress.update(1)
```
</details>
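
The `wikipedia-search` index referenced above has to exist before indexing starts. The mapping below is a minimal sketch and an assumption; the field layout mirrors `doc_generator`, and the 768 dimensions match the `multilingual-e5-base` embeddings. Recent Elasticsearch versions can then run kNN queries against the nested `parts.embedding` field.

```python
# A possible mapping for the "wikipedia-search" index (a sketch, not part of
# the original card): nested chunk/embedding pairs with a dense_vector field.
es.indices.create(
    index="wikipedia-search",
    mappings={
        "properties": {
            "id": {"type": "keyword"},
            "title": {"type": "text"},
            "url": {"type": "keyword"},
            "parts": {
                "type": "nested",
                "properties": {
                    "chunk": {"type": "text"},
                    "embedding": {
                        "type": "dense_vector",
                        "dims": 768,
                        "index": True,
                        "similarity": "cosine",
                    },
                },
            },
        },
    },
)
```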

<details>
<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>

```python
import os
import textwrap

import sentence_transformers

model = sentence_transformers.SentenceTransformer("intfloat/multilingual-e5-base")

ds.set_format(type="torch", columns=["embeddings"], output_all_columns=True)

# Flatten the dataset: one row per chunk, with its embedding
def explode_sequence(batch):
    output = { "id": [], "url": [], "title": [], "chunk": [], "embedding": [] }

    for id, url, title, chunks, embeddings in zip(
        batch["id"], batch["url"], batch["title"], batch["chunks"], batch["embeddings"]
    ):
        output["id"].extend([id for _ in range(len(chunks))])
        output["url"].extend([url for _ in range(len(chunks))])
        output["title"].extend([title for _ in range(len(chunks))])
        output["chunk"].extend(chunks)
        output["embedding"].extend(embeddings)

    return output

ds_flat = ds.map(
    explode_sequence,
    batched=True,
    remove_columns=ds.column_names,
    num_proc=min(os.cpu_count(), 32),
    desc="Flatten")
ds_flat

query = "Čím se zabývá fyzika?"  # "What does physics study?"

hits = sentence_transformers.util.semantic_search(
    query_embeddings=model.encode(query),
    corpus_embeddings=ds_flat["embedding"],
    top_k=10)

for hit in hits[0]:
    title = ds_flat[hit['corpus_id']]['title']
    chunk = ds_flat[hit['corpus_id']]['chunk']
    print(f"[{hit['score']:0.2f}] {textwrap.shorten(chunk, width=100, placeholder='…')} [{title}]")

# [0.90] Fyzika částic ( též částicová fyzika ) je oblast fyziky, která se zabývá částicemi. V širším smyslu… [Fyzika částic]
# [0.89] Fyzika ( z řeckého φυσικός ( fysikos ): přírodní, ze základu φύσις ( fysis ): příroda, archaicky… [Fyzika]
# ...
```
</details>
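
As an alternative to `sentence_transformers.util.semantic_search`, the flattened dataset can also be searched through a FAISS index built by the `datasets` library itself. The following is a sketch that reuses `ds_flat` and `model` from the example above and requires the `faiss` package; it is not part of the original card.

```python
# Build a FAISS index over the flattened dataset (default index: exact search,
# scores are L2 distances, so lower is better).
ds_flat.add_faiss_index(column="embedding")

query_embedding = model.encode("Čím se zabývá fyzika?")  # "What does physics study?"
scores, examples = ds_flat.get_nearest_examples("embedding", query_embedding, k=10)

for score, title, chunk in zip(scores, examples["title"], examples["chunk"]):
    print(f"[{score:.2f}] {chunk[:100]} [{title}]")
```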

Generating the embeddings took about 2 hours on an NVIDIA A100 80GB GPU.

## License

See the license of the original dataset: <https://huggingface.co/datasets/wikimedia/wikipedia>.