---
license: cc-by-nc-4.0
viewer: false
---
# Baidu ULTR Dataset - Baidu BERT-12l-12h
Query-document vectors and user clicks for a subset of the [Baidu Unbiased Learning to Rank
dataset](https://arxiv.org/abs/2207.03051).
The dataset uses Baidu's 12-layer BERT cross-encoder, released in the [official starter-kit](https://github.com/ChuXiaokai/baidu_ultr_dataset/), to compute 768-dimensional query-document vectors.
## Setup
1. Install Hugging Face [datasets](https://huggingface.co/docs/datasets/installation): `pip install datasets`
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. If you cannot upgrade to `pyarrow >= 14.0.1`, you may need to install the [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix) (see the sketch after this list)
4. You can now use the dataset as described below.
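If you installed the hotfix in step 3, importing it once before loading any data is enough; a minimal sketch, assuming you are pinned to `pyarrow < 14.0.1`:

```Python
# Minimal sketch: the import itself applies the pyarrow-hotfix patch.
# Only needed when pinned to pyarrow < 14.0.1.
import pyarrow_hotfix  # noqa: F401
```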
## Load train / test click dataset:
```Python
from datasets import load_dataset
dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="clicks",
    split="train",  # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)
dataset.set_format("torch") # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
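With `set_format("torch")` as above, each row comes back as tensors. A quick sanity check, sketched here using the feature names documented in the tables below:

```Python
# Sketch: inspect the first query of the click split loaded above.
sample = dataset[0]
print(sample["query_id"], sample["n"])           # query id and number of documents
print(sample["query_document_embedding"].shape)  # (number of documents, 768)
```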
## Load expert annotations:
```Python
from datasets import load_dataset
dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)
dataset.set_format("torch") # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
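The annotations can, for example, be restricted by query frequency with the standard `datasets` `filter` method; a sketch, with `frequency_bucket` documented in the feature tables below:

```Python
# Sketch: keep only the most frequent queries (bucket 0).
# int() keeps the comparison robust to the torch output format set above.
frequent = dataset.filter(lambda x: int(x["frequency_bucket"]) == 0)
print(len(dataset), len(frequent))
```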
## Available features
Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):
### Click dataset
| name | dtype | description |
|------------------------------|----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of query text |
| query | List[int32] | List of query tokens |
| query_length | int32 | Number of query tokens |
| n | int32 | Number of documents for current query, useful for padding |
| url_md5 | List[string] | MD5 hash of document URL, most reliable document identifier |
| text_md5 | List[string] | MD5 hash of document title and abstract |
| title | List[List[int32]] | List of tokens for document titles |
| abstract | List[List[int32]] | List of tokens for document abstracts |
| query_document_embedding | Tensor[Tensor[float16]]| BERT CLS token embedding of each query-document pair (768 dims) |
| click | Tensor[int32] | Click / no click on a document |
| position | Tensor[int32] | Position in ranking (does not always match original item position) |
| media_type | Tensor[int32] | Document type (label encoding recommended as IDs do not occupy a continuous integer range) |
| displayed_time | Tensor[float32]| Seconds a document was displayed on the screen |
| serp_height | Tensor[int32] | Pixel height of a document on the screen |
| slipoff_count_after_click | Tensor[int32] | Number of times a document was scrolled off the screen after previously clicking on it |
| bm25 | Tensor[float32] | BM25 score for documents |
| bm25_title | Tensor[float32] | BM25 score for document titles |
| bm25_abstract | Tensor[float32] | BM25 score for document abstracts |
| tf_idf | Tensor[float32] | TF-IDF score for documents |
| tf | Tensor[float32] | Term frequency for documents |
| idf | Tensor[float32] | Inverse document frequency for documents |
| ql_jelinek_mercer_short | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.1) |
| ql_jelinek_mercer_long | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.7) |
| ql_dirichlet | Tensor[float32] | Query likelihood score for documents using Dirichlet smoothing (lambda = 128) |
| document_length | Tensor[int32] | Length of documents |
| title_length | Tensor[int32] | Length of document titles |
| abstract_length | Tensor[int32] | Length of document abstracts |
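Since `position` and `click` are per-document features, they can, for instance, be aggregated into click-through rates per rank to eyeball position bias. A hedged sketch, assuming the click split with torch format from above (a full pass over the split, for illustration only):

```Python
from collections import Counter

# Sketch: click-through rate per result position across the click split.
clicks, impressions = Counter(), Counter()

for sample in dataset:
    for position, click in zip(sample["position"].tolist(), sample["click"].tolist()):
        impressions[position] += 1
        clicks[position] += int(click)

ctr = {position: clicks[position] / impressions[position] for position in sorted(impressions)}
```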
### Expert annotation dataset
| name | dtype | description |
|------------------------------|----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of query text |
| query | List[int32] | List of query tokens |
| query_length | int32 | Number of query tokens |
| frequency_bucket | int32 | Monthly frequency of query (bucket) from 0 (high frequency) to 9 (low frequency) |
| n | int32 | Number of documents for current query, useful for padding |
| url_md5 | List[string] | MD5 hash of document URL, most reliable document identifier |
| text_md5 | List[string] | MD5 hash of document title and abstract |
| title | List[List[int32]] | List of tokens for document titles |
| abstract | List[List[int32]] | List of tokens for document abstracts |
| query_document_embedding | Tensor[Tensor[float16]] | BERT CLS token embedding of each query-document pair (768 dims) |
| label | Tensor[int32] | Relevance judgments on a scale from 0 (bad) to 4 (excellent) |
| bm25 | Tensor[float32] | BM25 score for documents |
| bm25_title | Tensor[float32] | BM25 score for document titles |
| bm25_abstract | Tensor[float32] | BM25 score for document abstracts |
| tf_idf | Tensor[float32] | TF-IDF score for documents |
| tf | Tensor[float32] | Term frequency for documents |
| idf | Tensor[float32] | Inverse document frequency for documents |
| ql_jelinek_mercer_short | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.1) |
| ql_jelinek_mercer_long | Tensor[float32] | Query likelihood score for documents using Jelinek-Mercer smoothing (alpha = 0.7) |
| ql_dirichlet | Tensor[float32] | Query likelihood score for documents using Dirichlet smoothing (lambda = 128) |
| document_length | Tensor[int32] | Length of documents |
| title_length | Tensor[int32] | Length of document titles |
| abstract_length | Tensor[int32] | Length of document abstracts |
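The expert labels combine naturally with the lexical features for quick baseline checks. A hedged sketch, not part of the dataset API, computing DCG@10 of a BM25 ordering on one annotated query (annotations split with torch format as above; `dcg_at_k` is a hypothetical helper):

```Python
import torch

# Sketch: DCG@10 of a BM25 ordering against the 0-4 expert labels.
def dcg_at_k(labels: torch.Tensor, k: int = 10) -> torch.Tensor:
    labels = labels[:k].float()
    ranks = torch.arange(1, labels.shape[0] + 1, dtype=torch.float32)
    return ((2 ** labels - 1) / torch.log2(ranks + 1)).sum()

sample = dataset[0]
order = torch.argsort(sample["bm25"], descending=True)  # rank docs by BM25
print(dcg_at_k(sample["label"][order], k=10))
```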
## Example PyTorch collate function
Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:
```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader
def collate_clicks(samples: List):
    batch = defaultdict(lambda: [])

    # Collect the per-query tensors of each sample.
    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    # Pad all queries to the maximum number of documents in the batch.
    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }


loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
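To verify the padding, pull a single batch (assuming the click split and the loader from above):

```Python
batch = next(iter(loader))
print(batch["query_document_embedding"].shape)  # (16, max documents in batch, 768)
print(batch["n"])                               # true per-query document counts before padding
```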