---
license: cc-by-nc-sa-4.0
language:
- de
tags:
- embeddings
- clustering
- benchmark
size_categories:
- 10K<n<100K
---
This dataset can be used as a benchmark for clustering word embeddings for **German**.

The dataset contains news article titles and is based on the [One Million Posts Corpus](https://ofai.github.io/million-post-corpus/) and [10kGNAD](https://github.com/tblock/10kGNAD). It comprises 10'267 unique samples, 10 splits with 1'436 to 9'962 samples each, and 9 unique classes. Splits are built similarly to MTEB's [TwentyNewsgroupsClustering](https://huggingface.co/datasets/mteb/twentynewsgroups-clustering).
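Following the MTEB clustering setup, a split can be scored by embedding the titles, fitting k-means with one cluster per class, and computing the V-measure against the gold labels. A minimal sketch, with random vectors standing in for real sentence embeddings (the embedding model and exact evaluation loop are assumptions, not part of this card):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score

# Toy stand-ins: in practice, `embeddings` would come from a German
# sentence-embedding model applied to the article titles of one split,
# and `labels` would be the split's 9 gold class labels.
rng = np.random.default_rng(0)
labels = rng.integers(0, 9, size=500)
embeddings = rng.normal(size=(500, 64)) + labels[:, None]

# One cluster per gold class, as in MTEB-style clustering evaluation.
kmeans = KMeans(n_clusters=len(np.unique(labels)), n_init=10, random_state=0)
pred = kmeans.fit_predict(embeddings)

# V-measure: harmonic mean of homogeneity and completeness, in [0, 1].
score = v_measure_score(labels, pred)
print(f"V-measure: {score:.3f}")
```

In the benchmark, this score is averaged over the 10 splits to reduce variance from the k-means initialization and split composition.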

Have a look at the German Text Embedding Clustering Benchmark ([GitHub](https://github.com/ClimSocAna/tecb-de), [Paper](https://arxiv.org/abs/2401.02709)) for more information, further datasets, and evaluation results.