---
license: cc-by-nc-4.0
viewer: false
---
# Baidu ULTR Dataset - UvA BERT-12l-12h
Query-document vectors and clicks for a subset of the Baidu Unbiased Learning to Rank dataset.
This dataset uses a BERT cross-encoder with 12 layers, trained on a Masked Language Modeling (MLM) and click-through-rate (CTR) prediction task, to compute 768-dimensional query-document vectors. The model is available under `model/`.
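If you want to fetch the encoder files themselves, they can be downloaded from the `model/` directory of this dataset repository. A minimal sketch using `huggingface_hub` (assumed to be installed; the exact file layout under `model/` is not described here, so treat the download pattern as an assumption):

```Python
from huggingface_hub import snapshot_download

# Download only the model/ directory from this dataset repository.
# Assumption: the encoder files live under model/ as stated above.
local_dir = snapshot_download(
    repo_id="philipphager/baidu-ultr_uva-mlm-ctr",
    repo_type="dataset",
    allow_patterns=["model/*"],
)
print(local_dir)  # local path containing the downloaded model/ files
```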
## Setup

- Install huggingface `datasets`: `pip install datasets`
- Install pandas and pyarrow: `pip install pandas pyarrow`
- Optionally, you might need to install `pyarrow-hotfix` if you cannot install `pyarrow >= 14.0.1` (see the sketch after this list).
- You can now use the dataset as described below.
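If you are stuck on an older pyarrow, the hotfix is applied by importing the package (a minimal sketch; only needed when `pyarrow < 14.0.1` and after `pip install pyarrow-hotfix`):

```Python
# Only needed if pyarrow < 14.0.1 is installed (pip install pyarrow-hotfix):
import pyarrow_hotfix  # noqa: F401  # applies the hotfix on import
```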
### Load train / test click dataset

```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_uva-mlm-ctr",
    name="clicks",
    split="train",  # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
### Load expert annotations

```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_uva-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
## Available features

Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):
### Click dataset

| name | dtype | description |
|---|---|---|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of the query text |
| url_md5 | List[string] | MD5 hash of the document URL, the most reliable document identifier |
| text_md5 | List[string] | MD5 hash of the document title and abstract |
| query_document_embedding | Tensor[float16] | BERT CLS token |
| click | Tensor[int32] | Click / no click on a document |
| n | int32 | Number of documents for the current query, useful for padding |
| position | Tensor[int32] | Position in ranking (does not always match the original item position) |
| media_type | Tensor[int32] | Document type (label encoding recommended as ids do not occupy a continuous integer range; see the sketch after this table) |
| displayed_time | Tensor[float32] | Seconds a document was displayed on screen |
| serp_height | Tensor[int32] | Pixel height of a document on screen |
| slipoff_count_after_click | Tensor[int32] | Number of times a document was scrolled off screen after previously clicking on it |
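As noted for `media_type`, the raw ids do not form a contiguous integer range. A minimal sketch of re-encoding them before use in an embedding layer (in practice the id vocabulary should be built over the full training split, not a single query):

```Python
import torch

sample = dataset[0]

# Map the sparse media_type ids of this query to a dense 0..K-1 range.
unique_ids, dense_media_type = torch.unique(sample["media_type"], return_inverse=True)
embedding = torch.nn.Embedding(num_embeddings=len(unique_ids), embedding_dim=16)
media_type_features = embedding(dense_media_type)  # (number of documents, 16)
```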
### Expert annotation dataset

| name | dtype | description |
|---|---|---|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of the query text |
| text_md5 | List[string] | MD5 hash of the document title and abstract |
| query_document_embedding | Tensor[float16] | BERT CLS token |
| label | Tensor[int32] | Relevance judgment on a scale from 0 (bad) to 4 (excellent); see the evaluation sketch after this table |
| n | int32 | Number of documents for the current query, useful for padding |
| frequency_bucket | int32 | Monthly frequency of the query (bucket) from 0 (high frequency) to 9 (low frequency) |
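Since the annotations provide graded relevance labels, they are typically used for offline evaluation. A minimal sketch of DCG@k for a single query, assuming `scores` come from a hypothetical ranker (the exponential-gain DCG variant is used here):

```Python
import torch


def dcg_at_k(labels: torch.Tensor, scores: torch.Tensor, k: int = 10) -> torch.Tensor:
    # Rank documents by score and compute DCG over the top-k graded labels.
    ranking = torch.argsort(scores, descending=True)[:k]
    gains = 2.0 ** labels[ranking].float() - 1.0
    discounts = torch.log2(torch.arange(2, len(ranking) + 2, dtype=torch.float32))
    return (gains / discounts).sum()


sample = dataset[0]
scores = torch.randn(int(sample["n"]))  # placeholder scores from a hypothetical ranker
print(dcg_at_k(sample["label"], scores))
```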
## Example PyTorch collate function

Each sample in the dataset is a single query with multiple documents. The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:

```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader


def collate_clicks(samples: List):
    batch = defaultdict(lambda: [])

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    return {
        "query_document_embedding": pad_sequence(batch["query_document_embedding"], batch_first=True),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }


loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
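A short follow-up sketch of consuming a batch; documents beyond a query's `n` are padding, so a mask can be derived from `n`:

```Python
batch = next(iter(loader))

# Mask out padded documents: True for real documents, False for padding.
max_docs = batch["click"].shape[1]
mask = torch.arange(max_docs)[None, :] < batch["n"][:, None]

print(batch["query_document_embedding"].shape)  # (batch size, max documents, 768)
print(mask.shape)                               # (batch size, max documents)
```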