---
license: cc-by-nc-4.0
---

# Baidu ULTR Dataset - Baidu BERT-12l-12h

## Setup

1. Install huggingface [datasets](https://huggingface.co/docs/datasets/installation)
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. If you cannot install `pyarrow >= 14.0.1`, you may need to install the [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix)
4. You can now use the dataset as described below.

## Load train / test click dataset:

```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="clicks",
    split="train",  # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```

## Load expert annotations:

```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```

## Example PyTorch collate function

Each sample in the dataset is a single query with all of its documents. The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by padding all rankings in a batch to the length of the longest ranking:

```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader


def collate_clicks(samples: List):
    batch = defaultdict(list)

    # Collect the features of each query; each feature is a tensor
    # with a leading dimension of n documents for that query.
    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    # Pad all queries in the batch to the longest ranking;
    # "n" keeps the true (unpadded) number of documents per query.
    return {
        "query_document_embedding": pad_sequence(batch["query_document_embedding"], batch_first=True),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }


loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
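Since all queries in a batch are padded to the longest ranking, downstream models typically need to ignore the padded positions, e.g., when computing a loss. The following is a minimal sketch, not part of the dataset API, of how a padding mask can be derived from `n` while iterating over the `loader` defined above:

```Python
# Minimal sketch (assumes the "loader" from the collate example above):
# derive a boolean padding mask from "n" to ignore padded documents.
for batch in loader:
    n_batch, n_results = batch["click"].shape

    # mask[i, j] is True for real documents of query i and False for padding.
    mask = torch.arange(n_results).unsqueeze(0) < batch["n"].unsqueeze(1)

    print(batch["query_document_embedding"].shape)  # (n_batch, n_results, embedding_dim)
    print(batch["click"].shape)                     # (n_batch, n_results)
    print(mask.shape)                               # (n_batch, n_results)
    break
```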
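The annotation split can be batched the same way. The sketch below assumes the annotation split exposes a `label` column (expert relevance judgments) in place of `click`; verify the exact feature names against the dataset card before use:

```Python
# Sketch under the assumption that annotations contain a "label" feature
# instead of "click"; check the dataset card for the exact column names.
def collate_annotations(samples: List):
    batch = defaultdict(list)

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["label"].append(sample["label"])
        batch["n"].append(sample["n"])

    return {
        "query_document_embedding": pad_sequence(batch["query_document_embedding"], batch_first=True),
        "label": pad_sequence(batch["label"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }
```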