---
license: cc-by-nc-4.0
---

# Baidu ULTR Dataset - Baidu BERT-12l-12h

## Setup
1. Install Hugging Face [datasets](https://huggingface.co/docs/datasets/installation)
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. Optionally, you might need to install the [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix) if you cannot install `pyarrow >= 14.0.1` (see the sketch after this list)
4. You can now use the dataset as described below.
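
If you are stuck on an older `pyarrow`, the hotfix package is designed to patch `pyarrow` as a side effect of being imported; a minimal sketch, assuming it was installed with `pip install pyarrow-hotfix`:

```Python
# Importing the module applies the patch; do this before loading any data.
import pyarrow_hotfix  # noqa: F401

from datasets import load_dataset  # subsequent loads run against the patched pyarrow
```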

## Load train / test click dataset:
```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="clicks",
    split="train",  # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
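
As a quick sanity check after loading, you can inspect a single query; a minimal sketch, assuming the per-document fields share a leading dimension of length `n` (the field names are the ones used in the collate function below):

```Python
sample = dataset[0]

print(sample["n"])                               # number of documents for this query
print(sample["position"].shape)                  # (n,)
print(sample["click"].shape)                     # (n,)
print(sample["query_document_embedding"].shape)  # (n, embedding_dim)
```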

## Load expert annotations:
```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_baidu-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
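
Since `set_format` also accepts `"pandas"` (see the comment above), one way to eyeball the annotations without hard-coding any field names is to slice a pandas-formatted view, which returns a `DataFrame`; a minimal sketch:

```Python
dataset.set_format("pandas")

df = dataset[:5]   # slicing a pandas-formatted dataset yields a DataFrame
print(df.columns)  # lists the annotation fields
print(df.head())
```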

## Example PyTorch collate function
Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:

```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader


def collate_clicks(samples: List):
    # Gather the per-query tensors into one list per field.
    batch = defaultdict(list)

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    # Pad the variable-length document lists to the longest query in the batch;
    # "n" keeps the true document count so padding can be masked out later.
    return {
        "query_document_embedding": pad_sequence(batch["query_document_embedding"], batch_first=True),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }


loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
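
Iterating over the loader then yields padded batches; a short sketch of the resulting shapes, continuing from the block above (`max_docs` and `embedding_dim` are illustrative names for the longest document list in the batch and the embedding width):

```Python
batch = next(iter(loader))

print(batch["query_document_embedding"].shape)  # (16, max_docs, embedding_dim)
print(batch["click"].shape)                     # (16, max_docs)

# Mask out padded positions, e.g. before computing a loss.
mask = torch.arange(batch["click"].shape[1])[None, :] < batch["n"][:, None]
```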