---
license: cc-by-nc-sa-4.0
language:
- de
tags:
- embeddings
- clustering
- benchmark
size_categories:
- 10K<n<100K
---
|
This dataset can be used as a benchmark for clustering word embeddings for **German**.
|
|
|
The dataset contains news article titles and is based on the [One Million Posts Corpus](https://ofai.github.io/million-post-corpus/) and [10kGNAD](https://github.com/tblock/10kGNAD). It comprises 10'267 unique samples, 10 splits with 1'436 to 9'962 samples each, and 9 unique classes. Splits are built similarly to MTEB's [TwentyNewsgroupsClustering](https://huggingface.co/datasets/mteb/twentynewsgroups-clustering).
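As a minimal sketch of the MTEB-style evaluation protocol this benchmark follows (cluster each split with k-means, setting k to the number of gold labels, and score with the V-measure), the snippet below uses random class-separated vectors as a stand-in for embeddings from a real German sentence-embedding model; the split sizes and embedding dimension are illustrative, not taken from the dataset:

```python
"""Sketch of an MTEB-style clustering evaluation: k-means with k equal to the
number of gold classes, scored with the V-measure. The embeddings are random
stand-ins, not output of a real embedding model."""
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score


def evaluate_split(embeddings: np.ndarray, labels: np.ndarray, seed: int = 0) -> float:
    """Cluster one split and return the V-measure against the gold labels."""
    n_classes = len(set(labels.tolist()))
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
    pred = km.fit_predict(embeddings)
    return v_measure_score(labels, pred)


rng = np.random.default_rng(0)

# Toy stand-in for one split: 300 "titles" in 9 classes, 384-dim embeddings,
# shifted by class so the clusters are actually recoverable.
labels = rng.integers(0, 9, size=300)
embeddings = rng.normal(size=(300, 384)) + 3.0 * labels[:, None]

score = evaluate_split(embeddings, labels)
print(f"V-measure: {score:.3f}")
```

For the full benchmark, this evaluation would be repeated over all 10 splits and the scores averaged, as MTEB does for its clustering tasks.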
|
|
|
Have a look at the German Text Embedding Clustering Benchmark ([GitHub](https://github.com/ClimSocAna/tecb-de), [Paper](https://arxiv.org/abs/2401.02709)) for more information, additional datasets, and evaluation results.