itsjhuang committed on
Commit a2c830f · verified · 1 Parent(s): 425829e

Add dataset files and documentation

Add train/validation/test CSV splits (400 examples total, 200 per class), dataset card (README.md), and metadata (dataset_info.json).

Binary classification dataset derived from ibm-research/watsonxDocsQA.
Labels: conceptual (0), how-to (1). Split: 70/15/15, stratified.

Files changed (5)
  1. README.md +93 -0
  2. dataset_info.json +66 -0
  3. test.csv +0 -0
  4. train.csv +0 -0
  5. validation.csv +0 -0
README.md ADDED
@@ -0,0 +1,93 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language:
+ - en
+ license:
+ - other
+ pretty_name: Watsonx Docs Document Type Classification
+ size_categories:
+ - n<1K
+ source_datasets:
+ - ibm-research/watsonxDocsQA
+ task_categories:
+ - text-classification
+ task_ids:
+ - document-classification
+ ---
+
+ # Watsonx Docs Document Type Classification
+
+ This dataset is a balanced binary document-level classification subset derived
+ from `ibm-research/watsonxDocsQA`.
+
+ ## Task
+
+ Classify IBM Watsonx documentation pages by their dominant user-facing purpose:
+
+ - `conceptual`: documents primarily used to understand or look up information.
+ - `how-to`: documents primarily used to complete a procedure or fix a problem.
+
+ ## Splits
+
+ | Split | conceptual | how-to | Total |
+ |---|---:|---:|---:|
+ | train | 140 | 140 | 280 |
+ | validation | 30 | 30 | 60 |
+ | test | 30 | 30 | 60 |
+
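The 70/15/15 stratified split above can be reproduced in outline as follows. This is a minimal stdlib sketch under stated assumptions (the actual build script is not published); `rows` is assumed to be a list of dicts with a `label` key.

```python
import random

def stratified_split(rows, label_key="label", seed=42):
    """Shuffle each class with a fixed seed, then cut 70/15/15."""
    rng = random.Random(seed)
    by_label = {}
    for row in rows:
        by_label.setdefault(row[label_key], []).append(row)
    train, validation, test = [], [], []
    for label_rows in by_label.values():
        rng.shuffle(label_rows)
        n_train = int(len(label_rows) * 0.70)
        n_val = int(len(label_rows) * 0.15)
        train += label_rows[:n_train]
        validation += label_rows[n_train:n_train + n_val]
        test += label_rows[n_train + n_val:]
    return train, validation, test
```

With 200 examples per class this yields 140/30/30 per class, matching the table.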
+ ## Fields
+
+ - `doc_id`: original document ID from the source dataset.
+ - `url`: source documentation URL.
+ - `title`: documentation page title.
+ - `text`: model input text, constructed as `title + "\n" + first 800 words of document`. The title is preserved in full; the document body is truncated to keep inputs manageable for embedding-based classifiers.
+ - `label`: string label, either `conceptual` or `how-to`.
+ - `label_id`: numeric label ID, where `conceptual = 0` and `how-to = 1`.
+ - `split`: dataset split name (`train`, `validation`, or `test`).
+
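The `text` construction rule can be sketched as a small helper. This is a hypothetical illustration of the stated rule (title, newline, first 800 words), not the original preprocessing code; in particular, rejoining words with single spaces is an assumption about whitespace handling.

```python
def build_text(title, document, max_words=800):
    # Keep the full title, then at most the first 800 whitespace-separated
    # words of the body. Rejoining with single spaces is an assumption;
    # the original script's whitespace handling is not documented.
    body = " ".join(document.split()[:max_words])
    return f"{title}\n{body}"
```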
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ data_files = {
+     "train": "train.csv",
+     "validation": "validation.csv",
+     "test": "test.csv",
+ }
+
+ dataset = load_dataset("csv", data_files=data_files)
+ ```
+
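Since the CSVs ship both `label` and `label_id`, a quick stdlib-only consistency check can confirm the documented mapping (`conceptual = 0`, `how-to = 1`) without pulling in `datasets`. This is an illustrative snippet, not part of the dataset's tooling.

```python
import csv

LABEL_TO_ID = {"conceptual": 0, "how-to": 1}

def check_split(path):
    """Assert that label and label_id agree with the documented mapping."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            assert LABEL_TO_ID[row["label"]] == int(row["label_id"]), row
```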
+ ## Curation Notes
+
+ IBM technical documentation has traditionally been structured around DITA
+ (Darwin Information Typing Architecture), which classifies documents into four
+ types: `task`, `concept`, `reference`, and `troubleshooting`. This dataset
+ adapts that taxonomy into two classes: `conceptual` merges `concept` and
+ `reference` (both primarily information-seeking); `how-to` merges `task` and
+ `troubleshooting` (both action- or fix-oriented). The binary schema was chosen
+ because `troubleshooting` was too rare to form a reliable standalone class, and
+ `reference` and `concept` were difficult to separate consistently.
+
+ Annotation followed a semi-automatic process. Labelling rules were first defined
+ based on IBM Writing Style guidelines, then applied by a heuristic script to
+ generate candidate labels. Each candidate was assigned a confidence tier:
+ `title_high` (strong title signal), `body_medium` (body-text signal only, no
+ strong title match), or `default_low` (no strong signal in either title or
+ body). All rows were manually reviewed except the `body_medium` how-to subset
+ (333 rows), which was left unreviewed because the remaining manually checked
+ data was sufficient to construct a balanced 400-example dataset; retaining
+ unreviewed borderline rows would have introduced noise without benefit.
+
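The tiered pre-annotation could look roughly like this. The actual title and body patterns are not published, so the regexes below are hypothetical stand-ins; only the tier precedence (`title_high` before `body_medium` before `default_low`) mirrors the process described above.

```python
import re

# Hypothetical patterns; the real rules derive from IBM Writing Style guidelines.
TITLE_HOWTO = re.compile(
    r"^(installing|configuring|creating|adding|troubleshooting)\b", re.I
)
BODY_HOWTO = re.compile(
    r"\b(complete the following steps|click|procedure)\b", re.I
)

def pre_annotate(title, body):
    """Return (candidate_label, confidence_tier) for one document."""
    if TITLE_HOWTO.search(title):
        return "how-to", "title_high"
    if BODY_HOWTO.search(body):
        return "how-to", "body_medium"
    return "conceptual", "default_low"
```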
+ Rows marked `X` during manual review were removed because the source document
+ was incomplete or too ambiguous to label reliably. Rows marked `?` were
+ interpreted as belonging to the opposite binary class.
+
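The review-marker semantics (blank = accept, `?` = flip, `X` = remove) reduce to a small resolution function. This is a sketch of the stated rules, not the review tooling itself.

```python
OPPOSITE = {"conceptual": "how-to", "how-to": "conceptual"}

def resolve_marker(heuristic_label, marker):
    """Apply one manual-review marker; None means the row is dropped."""
    if marker == "":
        return heuristic_label            # blank: accept the heuristic label
    if marker == "?":
        return OPPOSITE[heuristic_label]  # flip to the opposite class
    if marker == "X":
        return None                       # remove the document
    raise ValueError(f"unknown review marker: {marker!r}")
```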
+ The final subset contains 400 examples, sampled with random seed `42` after
+ manual correction and filtering.
+
+ License follows the terms of the source dataset `ibm-research/watsonxDocsQA`.
+ Please refer to the original dataset for licensing details.
dataset_info.json ADDED
@@ -0,0 +1,66 @@
+ {
+   "dataset_name": "watsonx-docs-document-type",
+   "version": "1.0.0",
+   "date_created": "2026-05",
+   "source_dataset": "ibm-research/watsonxDocsQA",
+   "task": "binary document-level technical documentation type classification",
+   "labels": {
+     "conceptual": 0,
+     "how-to": 1
+   },
+   "text_field": "text",
+   "text_construction": "title + '\\n' + first 800 words of document",
+   "label_field": "label",
+   "label_id_field": "label_id",
+   "random_seed": 42,
+   "source_rows": 1144,
+   "annotation_process": {
+     "step_1": "heuristic pre-annotation using title and body patterns based on IBM Writing Style guidelines",
+     "step_2": "human review of all rows except suggested_how_to_body_medium subset",
+     "confidence_tiers": {
+       "title_high": "strong title signal",
+       "body_medium": "body-text signal only, no strong title match",
+       "default_low": "no strong signal in either title or body"
+     },
+     "manual_markers": {
+       "empty": "accept heuristic label",
+       "?": "flip to opposite binary class",
+       "X": "remove document"
+     }
+   },
+   "excluded": {
+     "suggested_how_to_body_medium": {
+       "count": 333,
+       "reason": "not manually reviewed; weak body-only heuristic signal considered unreliable"
+     },
+     "final_label_X": {
+       "count": 17,
+       "reason": "document was incomplete, low-quality, or too ambiguous to label reliably"
+     }
+   },
+   "usable_after_exclusion": {
+     "how-to": 231,
+     "conceptual": 563
+   },
+   "selected": {
+     "conceptual": 200,
+     "how-to": 200
+   },
+   "splits": {
+     "train": {
+       "conceptual": 140,
+       "how-to": 140,
+       "total": 280
+     },
+     "validation": {
+       "conceptual": 30,
+       "how-to": 30,
+       "total": 60
+     },
+     "test": {
+       "conceptual": 30,
+       "how-to": 30,
+       "total": 60
+     }
+   }
+ }
test.csv ADDED
The diff for this file is too large to render. See raw diff
 
train.csv ADDED
The diff for this file is too large to render. See raw diff
 
validation.csv ADDED
The diff for this file is too large to render. See raw diff