---
license: cc-by-4.0
language:
- en
- de
multilinguality:
- multilingual
source_datasets:
- extended|deutsche-telekom/NLU-Evaluation-Data-en-de
size_categories:
- n<1K
---
# NLU Few-shot Benchmark - English and German

This is a few-shot training dataset from the domain of human-robot interaction. It contains German and English texts covering 64 different utterance classes. Each class has exactly 20 samples in the training set, for a total of 1,280 training samples.

With this publication, we build on our deutsche-telekom/NLU-Evaluation-Data-en-de dataset.
## Processing Steps

- drop `NaN` values
- drop duplicates in `answer_de` and `answer`
- delete all rows where `answer_de` has more than 70 characters
- add column `label`: `df["label"] = df["scenario"] + "_" + df["intent"]`
- remove classes (`label`) with fewer than 25 samples:
  - `audio_volume_other`
  - `cooking_query`
  - `general_greet`
  - `music_dislikeness`
- random selection for the train set: exactly 20 samples per class (`label`); the remaining rows form the test set
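The steps above can be sketched with pandas. Only the column names `answer`, `answer_de`, `scenario`, `intent`, and `label` come from the text; the toy DataFrame, its values, and the random seed are illustrative assumptions.

```python
import pandas as pd

# Toy data with the columns named in the text (values are illustrative):
# 3 classes x 30 samples each, so every class survives the >= 25 filter.
rows = [
    {
        "answer": f"{scenario} {intent} sample {k}",
        "answer_de": f"{scenario} {intent} Beispiel {k}",
        "scenario": scenario,
        "intent": intent,
    }
    for scenario, intent in [
        ("audio", "volume_up"),
        ("general", "greet"),
        ("play", "music"),
    ]
    for k in range(30)
]
df = pd.DataFrame(rows)

df = df.dropna()                                   # drop NaN values
df = df.drop_duplicates(subset=["answer_de"])      # drop duplicates in answer_de
df = df.drop_duplicates(subset=["answer"])         # ... and in answer
df = df[df["answer_de"].str.len() <= 70]           # cap answer_de at 70 characters
df["label"] = df["scenario"] + "_" + df["intent"]  # add the label column

# remove classes with fewer than 25 samples
counts = df["label"].value_counts()
df = df[df["label"].isin(counts[counts >= 25].index)]

# train set: exactly 20 random samples per class; the rest is the test set
train = df.groupby("label").sample(n=20, random_state=42)
test = df.drop(train.index)
```

Note that `GroupBy.sample` requires pandas >= 1.1; on older versions a per-group `apply` with `DataFrame.sample` achieves the same split.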
## Copyright
Copyright (c) the authors of xliuhw/NLU-Evaluation-Data
Copyright (c) 2022 Philip May, Deutsche Telekom AG
All data is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).