---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: chunks
    sequence: string
  - name: embeddings
    sequence:
      sequence: float32
  splits:
  - name: train
    num_bytes: 5021489124
    num_examples: 534044
  download_size: 4750515911
  dataset_size: 5021489124
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- cs
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- fill-mask
license:
- cc-by-sa-3.0
- gfdl
---

This dataset contains the Czech subset of the [`wikimedia/wikipedia`](https://huggingface.co/datasets/wikimedia/wikipedia) dataset. Each page is divided into paragraphs, stored as a list in the `chunks` column. For every paragraph, embeddings are created using the [`intfloat/multilingual-e5-base`](https://huggingface.co/intfloat/multilingual-e5-base) model.

## Usage

Load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("karmiq/wikipedia-embeddings-cs-e5-base", split="train")
ds[1]
```

```
{
    'id': '1',
    'url': 'https://cs.wikipedia.org/wiki/Astronomie',
    'title': 'Astronomie',
    'chunks': [
        'Astronomie, řecky αστρονομία z άστρον ( astron ) hvězda a νόμος ( nomos )...',
        'Myšlenky Aristotelovy rozvinul ve 2. století našeho letopočtu Klaudios Ptolemaios...',
        ...,
    ],
    'embeddings': [
        [0.09006806463003159, -0.009814552962779999, ...],
        [0.10767366737127304, ...],
        ...
    ]
}
```

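Each item in `chunks` has a corresponding vector in `embeddings`, and each vector has 768 dimensions (the hidden size of `multilingual-e5-base`). A quick sanity check, as a minimal sketch:

```python
# Every chunk should have exactly one 768-dimensional embedding
row = ds[1]
assert len(row["chunks"]) == len(row["embeddings"])
assert all(len(vector) == 768 for vector in row["embeddings"])
```
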
This structure makes it easy to use the dataset for semantic search.

<details>
<summary>Load the data into Elasticsearch</summary>

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import parallel_bulk
from tqdm import tqdm

es = Elasticsearch("http://localhost:9200")  # adjust to your cluster

def doc_generator(data, batch_size=1000):
    # Yield one document per page, with chunk/embedding pairs stored in `parts`
    for batch in data.with_format("numpy").iter(batch_size):
        for i, id in enumerate(batch["id"]):
            output = {"id": id}
            output["title"] = batch["title"][i]
            output["url"] = batch["url"][i]
            output["parts"] = [
                {"chunk": chunk, "embedding": embedding}
                for chunk, embedding in zip(batch["chunks"][i], batch["embeddings"][i])
            ]
            yield output

num_indexed, num_failed = 0, 0
progress = tqdm(total=ds.num_rows, unit="doc", desc="Indexing")

for ok, info in parallel_bulk(
    es,
    index="wikipedia-search",
    actions=doc_generator(ds),
    raise_on_error=False,
):
    if ok:
        num_indexed += 1
    else:
        num_failed += 1
        print(f"ERROR {info['index']['status']}: "
              f"{info['index']['error']['type']}: {info['index']['error']['caused_by']['type']}: "
              f"{info['index']['error']['caused_by']['reason'][:250]}")
    progress.update(1)
```

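The snippet above stores each page as a single document with a `parts` array of chunk/embedding pairs. Searching those vectors assumes an index where `parts` is mapped as `nested` and `parts.embedding` as a 768-dimensional `dense_vector`. One possible mapping, given as an assumption rather than the original setup:

```python
# Hypothetical index mapping; the original card does not include one
es.indices.create(
    index="wikipedia-search",
    mappings={
        "properties": {
            "title": {"type": "text"},
            "url": {"type": "keyword"},
            "parts": {
                "type": "nested",
                "properties": {
                    "chunk": {"type": "text"},
                    "embedding": {
                        "type": "dense_vector",
                        "dims": 768,
                        "index": True,
                        "similarity": "cosine",
                    },
                },
            },
        }
    },
)
```
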
</details>

<details>
<summary>Use <code>sentence_transformers.util.semantic_search</code></summary>

```python
import os
import textwrap

import sentence_transformers

model = sentence_transformers.SentenceTransformer("intfloat/multilingual-e5-base")

ds.set_format(type="torch", columns=["embeddings"], output_all_columns=True)

# Flatten the dataset: one row per chunk instead of one row per page
def explode_sequence(batch):
    output = {"id": [], "url": [], "title": [], "chunk": [], "embedding": []}

    for id, url, title, chunks, embeddings in zip(
        batch["id"], batch["url"], batch["title"], batch["chunks"], batch["embeddings"]
    ):
        output["id"].extend([id for _ in range(len(chunks))])
        output["url"].extend([url for _ in range(len(chunks))])
        output["title"].extend([title for _ in range(len(chunks))])
        output["chunk"].extend(chunks)
        output["embedding"].extend(embeddings)

    return output

ds_flat = ds.map(
    explode_sequence,
    batched=True,
    remove_columns=ds.column_names,
    num_proc=min(os.cpu_count(), 32),
    desc="Flatten")
ds_flat

query = "Čím se zabývá fyzika?"  # "What does physics study?"

hits = sentence_transformers.util.semantic_search(
    query_embeddings=model.encode(query),
    corpus_embeddings=ds_flat["embedding"],
    top_k=10)

for hit in hits[0]:
    title = ds_flat[hit['corpus_id']]['title']
    chunk = ds_flat[hit['corpus_id']]['chunk']
    print(f"[{hit['score']:0.2f}] {textwrap.shorten(chunk, width=100, placeholder='…')} [{title}]")

# [0.90] Fyzika částic ( též částicová fyzika ) je oblast fyziky, která se zabývá částicemi. V širším smyslu… [Fyzika částic]
# [0.89] Fyzika ( z řeckého φυσικός ( fysikos ): přírodní, ze základu φύσις ( fysis ): příroda, archaicky… [Fyzika]
# ...
```

</details>

Generating the embeddings took about 2 hours on an NVIDIA A100 80GB GPU.

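The generation script itself is not included in this card. As a rough, hypothetical sketch, per-chunk embeddings could be recomputed with the same model (using plain `encode` without a prefix, matching the query usage above); here it is applied to the already-loaded `ds` purely for illustration:

```python
import sentence_transformers

model = sentence_transformers.SentenceTransformer("intfloat/multilingual-e5-base", device="cuda")

def embed_chunks(batch):
    # Encode all paragraphs in the batch at once, then regroup them per page
    flat_chunks = [chunk for chunks in batch["chunks"] for chunk in chunks]
    vectors = iter(model.encode(flat_chunks, batch_size=256))
    batch["embeddings"] = [
        [next(vectors).tolist() for _ in chunks] for chunks in batch["chunks"]
    ]
    return batch

# Recomputes the `embeddings` column from `chunks`; in practice this would run
# on the chunked pages before any embeddings exist
ds_reembedded = ds.map(embed_chunks, batched=True, batch_size=64)
```
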
## License

See the license of the original dataset: <https://huggingface.co/datasets/wikimedia/wikipedia>.