# RAB-Cred
RAB-Cred is a text classification dataset where the task is to identify the presence and sentiment of credibility assessments in Danish asylum decision texts. The three classes are:
- No credibility assessment: ABSENT
- Positive credibility assessment: POSITIVE
- Negative credibility assessment: NEGATIVE
The RAB-Cred dataset features gold-standard expert annotations and valuable metadata such as annotator confidence and asylum case outcome. Decision texts were obtained from the Danish Refugee Appeals Board (RAB) website. For more information, see the paper *LLMs as annotators of credibility assessment in Danish asylum decisions: evaluating classification performance and errors beyond aggregated metrics*, to be presented at the 20th Linguistic Annotation Workshop (LAW-XX) @ ACL 2026 (arXiv link coming soon).
The paper's source code and instructions to reproduce the experiments are available here: https://github.com/glhr/RAB-Cred
## Configs and splits
The validation set was jointly annotated by two domain experts (H1 and H2), so each text has a single agreed-upon label. The test set was independently labeled by H1 and H2, and disagreements between them were resolved by a third annotator (H3).
We provide two versions of the dataset:
- In the `labels` configuration, only the 3-class label (`ABSENT`, `POSITIVE`, or `NEGATIVE`) is provided for each text.
- The `metadata` configuration includes the raw annotations provided by the expert annotators. Each annotation is decomposed into two questions, and each question has its own confidence field (see the sketch after the table below for how the two questions map to the 3-class label):
| Item | Values / Meaning |
|---|---|
| Presence question (Q1) | `Y` (credibility assessment present) / `N` (absent) |
| Sentiment question (Q2) | `POSITIVE`, `NEGATIVE`, `-` |
| Confidence fields | `HIGH`, `MEDIUM`, `LOW`, `-` |
| Meaning of `-` in sentiment columns | No sentiment annotation because no credibility assessment is present |
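The 3-class label follows directly from the two questions. A minimal sketch of the implied mapping (the function name is ours, and the mapping itself is our reading of the table above rather than code from the paper):

```python
def derive_label(q1: str, q2: str) -> str:
    """Map a two-question annotation to the 3-class label.

    Assumption based on the table above: Q1 == "N" means no
    credibility assessment (ABSENT, with Q2 == "-"); otherwise
    Q2 carries the sentiment.
    """
    if q1 == "N":
        return "ABSENT"
    return q2  # "POSITIVE" or "NEGATIVE"


assert derive_label("N", "-") == "ABSENT"
assert derive_label("Y", "POSITIVE") == "POSITIVE"
```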
## Column descriptions by config
### `labels_val` (validation split)

| Column | Description |
|---|---|
| `Text/Decision` | Decision text. |
| `Year` | Decision year. |
| `link` | Source URL. |
| `archive_link` | Web archive URL for reproducibility. |
| `CASE OUTCOME` | Asylum case outcome (`rejection_upheld`, `rejection_reversed`, or `remanded`). |
| `CRED LABEL` | 3-class credibility label (jointly annotated by H1 and H2). |
| `Index` | Case identifier. |
### `labels_test` (test split)

| Column | Description |
|---|---|
| `Text/Decision` | Decision text. |
| `Year` | Decision year. |
| `link` | Source URL. |
| `archive_link` | Web archive URL for reproducibility. |
| `CASE OUTCOME` | Asylum case outcome (`rejection_upheld`, `rejection_reversed`, or `remanded`). |
| `CRED LABEL (MAJORITY)` | Final test label from majority vote over H1/H2/H3 (see the sketch below the table). |
| `CRED LABEL (H1)` | Annotator H1's 3-class label. |
| `CRED LABEL (H2)` | Annotator H2's 3-class label. |
| `CRED LABEL (H3)` | Annotator H3's 3-class label. |
| `Index` | Case identifier. |
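The majority label can be recomputed from the three per-annotator columns. A minimal sketch, assuming a simple 2-of-3 vote (at least two annotators agree on every case, which holds by construction since H3 resolved H1/H2 disagreements):

```python
from collections import Counter


def majority_vote(labels):
    """Return the most common of the three annotator labels."""
    return Counter(labels).most_common(1)[0][0]


assert majority_vote(["NEGATIVE", "NEGATIVE", "ABSENT"]) == "NEGATIVE"
assert majority_vote(["POSITIVE", "POSITIVE", "POSITIVE"]) == "POSITIVE"
```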
### `metadata_val` (validation split)

| Column | Description |
|---|---|
| `Text/Decision` | Decision text. |
| `Year` | Decision year. |
| `link` | Source URL. |
| `archive_link` | Web archive URL for reproducibility. |
| `CASE OUTCOME` | Asylum case outcome (`rejection_upheld`, `rejection_reversed`, or `remanded`). |
| `Q1: Credibility assessment presence` | Presence annotation (`Y`/`N`). |
| `Q2: Credibility assessment sentiment` | Sentiment annotation (`POSITIVE`/`NEGATIVE`/`-`). |
| `Confidence Q1` | Confidence in the presence annotation (`HIGH`/`MEDIUM`/`LOW`/`-`). |
| `Confidence Q2` | Confidence in the sentiment annotation (`HIGH`/`MEDIUM`/`LOW`/`-`). |
| `Index` | Case identifier. |
### `metadata_test` (test split)

| Column | Description |
|---|---|
| `Text/Decision` | Decision text. |
| `Year` | Decision year. |
| `link` | Source URL. |
| `archive_link` | Web archive URL for reproducibility. |
| `CASE OUTCOME` | Asylum case outcome (`rejection_upheld`, `rejection_reversed`, or `remanded`). |
| `Q1: Credibility assessment presence (H1)` | H1 presence annotation (`Y`/`N`). |
| `Confidence Q1 (H1)` | H1 confidence for Q1 (`HIGH`/`MEDIUM`/`LOW`/`-`). |
| `Q1: Credibility assessment presence (H2)` | H2 presence annotation (`Y`/`N`). |
| `Confidence Q1 (H2)` | H2 confidence for Q1 (`HIGH`/`MEDIUM`/`LOW`/`-`). |
| `Q1: Credibility assessment presence (H3)` | H3 presence annotation (`Y`/`N`). |
| `Q2: Credibility assessment sentiment (H1)` | H1 sentiment annotation (`POSITIVE`/`NEGATIVE`/`-`). |
| `Confidence Q2 (H1)` | H1 confidence for Q2 (`HIGH`/`MEDIUM`/`LOW`/`-`). |
| `Q2: Credibility assessment sentiment (H2)` | H2 sentiment annotation (`POSITIVE`/`NEGATIVE`/`-`). |
| `Confidence Q2 (H2)` | H2 confidence for Q2 (`HIGH`/`MEDIUM`/`LOW`/`-`). |
| `Q2: Credibility assessment sentiment (H3)` | H3 sentiment annotation (`POSITIVE`/`NEGATIVE`/`-`). |
| `Index` | Case identifier. |
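The per-annotator columns also make it easy to quantify inter-annotator agreement before resolution. A sketch computing raw H1/H2 agreement on the presence question, assuming the column names above; the split key is looked up rather than hardcoded, since split names are not documented here:

```python
from datasets import load_dataset

ds = load_dataset("XAI-CRED/RAB-Cred", "metadata_test")
split = ds[list(ds.keys())[0]]  # take the first (expected only) split

h1 = split["Q1: Credibility assessment presence (H1)"]
h2 = split["Q1: Credibility assessment presence (H2)"]

# Fraction of cases where H1 and H2 gave the same presence annotation.
agreement = sum(a == b for a, b in zip(h1, h2)) / len(h1)
print(f"Raw H1/H2 agreement on Q1: {agreement:.1%}")
```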
## Usage

The dataset can be loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("XAI-CRED/RAB-Cred", "labels_test")
```
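A quick way to confirm what was loaded is to print the dataset object and count the labels. A minimal sketch, using the `CRED LABEL (MAJORITY)` column documented above and taking the split key from the loaded object since split names are not documented here:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("XAI-CRED/RAB-Cred", "labels_test")
print(ds)  # shows splits, columns, and row counts

split = ds[list(ds.keys())[0]]
print(Counter(split["CRED LABEL (MAJORITY)"]))  # class distribution
```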
Alternatively, you can clone the dataset repository directly and access the CSV files:

```bash
# Make sure git-xet is installed (https://hf.co/docs/hub/git-xet)
curl -sSfL https://hf.co/git-xet/install.sh | sh

git clone https://huggingface.co/datasets/XAI-CRED/RAB-Cred
```
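After cloning, the CSV files can be read with any CSV reader; a sketch with pandas (the file name below is hypothetical and used only for illustration; list the cloned repository to find the actual paths):

```python
import pandas as pd

# NOTE: "labels_test.csv" is a hypothetical file name; check the
# repository layout for the actual CSV paths.
df = pd.read_csv("RAB-Cred/labels_test.csv")
print(df.columns.tolist())  # inspect the available columns
```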
Or use the Hugging Face CLI:

```bash
# Make sure the hf CLI is installed
curl -LsSf https://hf.co/cli/install.sh | bash

hf download XAI-CRED/RAB-Cred --repo-type dataset
```
## Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{rab-cred_2026,
  title     = "LLMs as annotators of credibility assessment in Danish asylum decisions: evaluating classification performance and errors beyond aggregated metrics",
  author    = "Galadrielle Humblot-Renaux and Mohammad Naser Sabet Jahromi and Rohat Bakuri-Jørgensen and Marieke Anne Heyl and Asta S. Stage Jarlner and Maria Vlachou and Anna Murphy Høgenhaug and Desmond Elliott and Thomas Gammeltoft-Hansen and Thomas B. Moeslund",
  booktitle = "Proceedings of the 20th Linguistic Annotation Workshop (LAW-XX)",
  year      = "2026",
  publisher = "Association for Computational Linguistics"
}
```