Add task categories, link to paper.
#2
by nielsr - opened

README.md CHANGED
@@ -2,6 +2,8 @@
 license: mit
 language:
 - en
+task_categories:
+- image-text-to-text
 tags:
 - embedding
 - multimodal
@@ -42,7 +44,6 @@ configs:
 
 # XTD Multimodal Multilingual Data With Instruction
 
-
 This dataset contains datasets (**with English instruction**) used for evaluating the multilingual capability of a multimodal embedding model, including seven languages:
 - **it**, **es**, **ru**, **zh**, **pl**, **tr**, **ko**
 
@@ -52,7 +53,6 @@ This dataset contains datasets (**with English instruction**) used for evaluatin
 - The instruction on the document side is: "Represent the given image."
 - Each example contains a query and a set of targets. The first one in the candidate list is the groundtruth target.
 
-
 ## Image Preparation
 
 First, you should prepare the images used for evaluation:
@@ -67,7 +67,6 @@ wget https://huggingface.co/datasets/Haon-Chen/XTD-10/resolve/main/XTD10_dataset
 tar -I "pigz -d -p 8" -xf XTD10_dataset.tar.gz
 ```
 
-
 ### Image Organization
 
 ```
@@ -80,10 +79,12 @@ You can refer to the image paths in each subset to view the image organization.
 
 You can also customize your image paths by altering the image_path fields.
 
-
 ## Citation
 
-If you use this dataset in your research, feel free to cite the original paper of XTD and mmE5 paper.
+If you use this dataset in your research, feel free to cite the original paper of XTD and the mmE5 paper.
+
+[mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data](https://huggingface.co/papers/2502.08468)
+
 ```
 @article{chen2025mmE5,
 title={mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data},