philipphager committed
Commit 5ad864e • Parent(s): 80548d2
Update README.md

README.md CHANGED

@@ -2,7 +2,6 @@
license: cc-by-nc-4.0
viewer: false
---
# Baidu ULTR Dataset - Baidu BERT-12l-12h
## Setup
1. Install huggingface [datasets](https://huggingface.co/docs/datasets/installation)

@@ -38,6 +37,37 @@ dataset = load_dataset(
dataset.set_format("torch") # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
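
The hunk above elides the arguments of `load_dataset`. As a minimal sketch of what loading a partition can look like, with the repository id and config name below being illustrative placeholders rather than values taken from this README:

```python
from datasets import load_dataset

# Hypothetical repository id, config, and split: the real values are in the
# elided part of the README snippet above.
dataset = load_dataset(
    "philipphager/baidu-ultr-example",  # placeholder repo id
    name="clicks",                      # placeholder config name
    split="train",
)
dataset.set_format("torch")  # return PyTorch tensors instead of Python lists
```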

## Available features
Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see the collate example below):

### Click dataset
| name | dtype | description |
|------------------------------|-----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of the query text |
| url_md5 | List[string] | MD5 hash of the document URL, the most reliable document identifier |
| text_md5 | List[string] | MD5 hash of the document title and abstract |
| query_document_embedding | Tensor[float16] | BERT CLS token |
| click | Tensor[int32] | Click / no click on a document |
| n | int32 | Number of documents for the current query, useful for padding |
| position | Tensor[int32] | Position in the ranking (does not always match the original item position) |
| media_type | Tensor[int32] | Document type (label encoding is recommended, as the ids do not occupy a continuous integer range) |
| displayed_time | Tensor[float32] | Seconds a document was displayed on screen |
| serp_height | Tensor[int32] | Pixel height of a document on screen |
| slipoff_count_after_click | Tensor[int32] | Number of times a document was scrolled off screen after previously being clicked |
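
Because the raw `media_type` ids do not form a contiguous integer range, the table above recommends label encoding. A minimal sketch of one way to build such an encoding (assuming `dataset` is the loaded click partition; all names are illustrative):

```python
import torch

# Collect all raw media_type ids once, then map each to a dense index
# in [0, num_types), e.g. for use with an embedding layer.
unique_ids = sorted({int(t) for sample in dataset for t in sample["media_type"]})
id_to_index = {raw_id: idx for idx, raw_id in enumerate(unique_ids)}

def encode_media_type(media_type: torch.Tensor) -> torch.Tensor:
    # Map a tensor of raw ids to their dense labels.
    return torch.tensor([id_to_index[int(t)] for t in media_type])
```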

### Expert annotation dataset
| name | dtype | description |
|------------------------------|-----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of the query text |
| text_md5 | List[string] | MD5 hash of the document title and abstract |
| query_document_embedding | Tensor[float16] | BERT CLS token |
| label | Tensor[int32] | Relevance judgment on a scale from 0 (bad) to 4 (excellent) |
| n | int32 | Number of documents for the current query, useful for padding |
| frequency_bucket | int32 | Monthly frequency of the query (bucketed) from 0 (high frequency) to 9 (low frequency) |
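
As an alternative to selecting features in a custom `collate_fn`, `set_format` also accepts a `columns` argument that restricts which features each sample returns. A small sketch, assuming `annotations` holds the loaded expert annotation partition (an illustrative variable name):

```python
# Keep only the features needed for evaluation; all other columns
# are dropped from the returned samples.
annotations.set_format("torch", columns=["query_document_embedding", "label", "n"])
```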

## Example PyTorch collate function
Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:

@@ -60,7 +90,9 @@ def collate_clicks(samples: List):
        batch["n"].append(sample["n"])

    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
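
The diff shows only the tail of `collate_clicks`. For reference, a complete sketch consistent with the fragment above, with the loop body reconstructed (the exact feature selection in the original is elided, so treat it as illustrative), plus a hypothetical `DataLoader` call:

```python
from collections import defaultdict
from typing import List

import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader


def collate_clicks(samples: List):
    # Gather each feature into a per-query list, then pad all queries in the
    # batch to the length of the longest one; "n" keeps the true lengths.
    batch = defaultdict(list)

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }


# Hypothetical usage: iterate over padded batches of 16 queries.
loader = DataLoader(dataset, batch_size=16, collate_fn=collate_clicks)
```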