---
dataset_info:
  features:
    - name: type_
      dtype: string
    - name: block
      struct:
        - name: html_tag
          dtype: string
        - name: id
          dtype: string
        - name: order
          dtype: int64
        - name: origin_type
          dtype: string
        - name: text
          struct:
            - name: embedding
              sequence: float64
            - name: text
              dtype: string
  splits:
    - name: train
      num_bytes: 2266682282
      num_examples: 260843
  download_size: 2272790159
  dataset_size: 2266682282
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for "es_indexing_benchmark"
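
Each record has a `type_` string and a nested `block` struct (with `html_tag`, `id`, `order`, `origin_type`, and a `text` struct holding the raw text and its embedding), as declared in the metadata above. A minimal sketch for loading the train split and inspecting these fields on one record:

```python
import datasets

# Load the train split (the only split declared in the metadata).
ds = datasets.load_dataset("stellia/es_indexing_benchmark", split="train")

# Inspect the first record; the field names follow the schema above.
row = ds[0]
print(row["type_"])                            # record type string
print(row["block"]["html_tag"])                # HTML tag of the source block
print(row["block"]["id"])                      # block id, reused as the ES document id below
print(len(row["block"]["text"]["embedding"]))  # embedding dimensionality
print(row["block"]["text"]["text"][:200])      # first characters of the text
```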

Here is example code showing how to pull this dataset and index it into Elasticsearch:

```python
import datasets
from tqdm import tqdm

from src.store.es.search import ESBaseClient
from src.store.es.model import ESNode

# Pull the train split; ignore_verifications skips checksum verification.
ds = datasets.load_dataset('stellia/es_indexing_benchmark', split='train', ignore_verifications=True)
client = ESBaseClient()

index_name = "tmp_es_index"

# Convert each row into an ESNode document, using the block id as the ES document id.
nodes = []
for row in tqdm(ds):
    esnode = ESNode(**row)
    esnode.meta.id = esnode.block.id
    nodes.append(esnode)

# Recreate the index, then bulk-save in batches; refresh=False defers the index refresh for speed.
client.delete_index(index_name)
client.init_index(index_name)

batch_size = 5000
for i in tqdm(range(0, len(nodes), batch_size)):
    client.save(index_name, nodes[i:i+batch_size], refresh=False)
```
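
If the project-specific `ESBaseClient`/`ESNode` helpers are not available in your environment, a rough equivalent using the official `elasticsearch` Python client's bulk helper might look like the sketch below. The local Elasticsearch URL and the flat document shape are assumptions, not part of this repository; only the use of `block.id` as the document id mirrors the snippet above.

```python
import datasets
from elasticsearch import Elasticsearch, helpers
from tqdm import tqdm

ds = datasets.load_dataset("stellia/es_indexing_benchmark", split="train")

es = Elasticsearch("http://localhost:9200")  # assumed local ES instance
index_name = "tmp_es_index"

def actions():
    # Stream bulk-index actions; block.id is reused as the document _id.
    for row in tqdm(ds):
        yield {
            "_index": index_name,
            "_id": row["block"]["id"],
            "_source": row,
        }

helpers.bulk(es, actions())
```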

If you have problems loading the dataset, consider emptying the local datasets cache with `rm -rf ~/.cache/huggingface/datasets`.
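
Alternatively, you can force a fresh download without deleting the whole cache directory by passing `download_mode` to `load_dataset`:

```python
import datasets

# Re-download and re-prepare the dataset instead of reusing the local cache.
ds = datasets.load_dataset(
    "stellia/es_indexing_benchmark",
    split="train",
    download_mode="force_redownload",
)
```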