# MMTIT-Bench

**A Multilingual and Multi-Scenario Benchmark with Cognition–Perception–Reasoning Guided Text-Image Machine Translation**

<p align="center">
  <a href="README_ZH.md">中文版</a> •
  <a href="https://arxiv.org/abs/2603.23896">Paper</a>
</p>

## Overview

**MMTIT-Bench** is a human-verified benchmark for end-to-end Text-Image Machine Translation (TIMT). It contains **1,400 images** spanning **14 non-English and non-Chinese languages** across diverse real-world scenarios, with bilingual (Chinese & English) translation annotations.

We also propose **CPR-Trans** (Cognition–Perception–Reasoning for Translation), a reasoning-oriented data paradigm that unifies scene cognition, text perception, and translation reasoning within a structured chain-of-thought framework.

<p align="center">
  <img src="assets/overview.png" width="90%" alt="MMTIT-Bench Overview">
</p>

## Benchmark Statistics

| Item | Details |
|------|---------|
| Total Images | 1,400 |
| Languages | 14 (AR, DE, ES, FR, ID, IT, JA, KO, MS, PT, RU, TH, TR, VI) |
| Translation Directions | Other→Chinese, Other→English |
| Scenarios | Documents, Menus, Books, Attractions, Posters, Commodities, etc. |
| Annotation | Human-verified OCR + Bilingual translations |

## Data Format

### Directory Structure

```
MMTIT-Bench/
├── README.md
├── README_ZH.md
├── annotation.jsonl        # Benchmark annotations
├── images.zip              # Benchmark images
├── eval_comet_demo.py      # COMET evaluation script
└── prediction_demo.jsonl   # Example prediction file
```

### Annotation (`annotation.jsonl`)

Each line is a JSON object:

```json
{
  "image_id": "Korea_Menu_20843.jpg",
  "parsing_anno": "멜로우스트리트\n\n위치: 서울특별시 관악구...",
  "translation_zh": "梅尔街\n\n位置:首尔特别市 冠岳区...",
  "translation_en": "Mellow Street\n\nLocation: 1st Floor, 104 Gwanak-ro..."
}
```

| Field | Description |
|-------|-------------|
| `image_id` | Image filename, formatted as `{Language}_{Scenario}_{ID}.jpg` |
| `parsing_anno` | OCR text parsing annotation (source language) |
| `translation_zh` | Chinese translation |
| `translation_en` | English translation |

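Because `image_id` encodes the language and scenario, the benchmark can be sliced per-language or per-scenario without extra metadata. A minimal sketch (the helper name `parse_image_id` is our own, not part of the release):

```python
# Split an image_id of the form {Language}_{Scenario}_{ID}.jpg
# into its three components.
def parse_image_id(image_id: str) -> tuple[str, str, str]:
    stem = image_id.rsplit(".", 1)[0]          # drop the .jpg extension
    language, scenario, sample_id = stem.split("_", 2)
    return language, scenario, sample_id

print(parse_image_id("Korea_Menu_20843.jpg"))  # → ('Korea', 'Menu', '20843')
```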
### Prediction File

Your prediction file should be a JSONL with the following fields:

```json
{"image_id": "Korea_Menu_20843.jpg", "pred": "Your model's translation output"}
```
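Producing this file is a matter of iterating over `annotation.jsonl` and emitting one JSON object per image. A hedged sketch (`translate` stands in for your model's inference call and is not part of the release):

```python
import json

def write_predictions(annotation_path: str, output_path: str, translate) -> None:
    """Write a prediction JSONL: one {"image_id", "pred"} object per line."""
    with open(annotation_path, encoding="utf-8") as fin, \
         open(output_path, "w", encoding="utf-8") as fout:
        for line in fin:
            record = json.loads(line)
            pred = translate(record["image_id"])  # your model goes here
            fout.write(json.dumps(
                {"image_id": record["image_id"], "pred": pred},
                ensure_ascii=False) + "\n")
```

`ensure_ascii=False` keeps non-Latin translations (Chinese references, for instance) human-readable in the output file.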

## Evaluation

We use [COMET](https://github.com/Unbabel/COMET) (`Unbabel/wmt22-comet-da`) as the automatic evaluation metric.

### Install

```bash
pip install unbabel-comet
```

### Run

```bash
# Other → Chinese
python eval_comet_demo.py \
    --prediction your_prediction.jsonl \
    --annotation annotation.jsonl \
    --direction other2zh \
    --batch_size 16 --gpus 0

# Other → English
python eval_comet_demo.py \
    --prediction your_prediction.jsonl \
    --annotation annotation.jsonl \
    --direction other2en \
    --batch_size 16 --gpus 1
```
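Before scoring, the evaluation has to join predictions with sources and references by `image_id`. A sketch of that alignment step, under our assumptions about what `eval_comet_demo.py` does internally (the function name and join logic are illustrative, not taken from the script; field names follow the formats documented above):

```python
import json

def build_comet_inputs(prediction_path: str, annotation_path: str,
                       direction: str) -> list[dict]:
    """Pair each prediction with its source text and reference translation."""
    ref_key = "translation_zh" if direction == "other2zh" else "translation_en"
    preds = {}
    with open(prediction_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            preds[record["image_id"]] = record["pred"]
    samples = []
    with open(annotation_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["image_id"] in preds:
                samples.append({
                    "src": record["parsing_anno"],    # source-language OCR text
                    "mt": preds[record["image_id"]],  # model output
                    "ref": record[ref_key],           # human reference
                })
    return samples
```

The resulting list of `{"src", "mt", "ref"}` dictionaries is the input shape COMET's `predict` method expects.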

### Arguments

| Argument | Default | Description |
|----------|---------|-------------|
| `--prediction` | *(required)* | Path to your prediction JSONL |
| `--annotation` | `annotation.jsonl` | Path to benchmark annotations |
| `--direction` | *(required)* | `other2zh` or `other2en` |
| `--batch_size` | `16` | Batch size for inference |
| `--gpus` | `0` | Number of GPUs (`0` = CPU) |
| `--output` | `comet_results_{direction}.jsonl` | Output path for per-sample scores |

## Citation

```bibtex
@misc{li2026mmtitbench,
      title={MMTIT-Bench: A Multilingual and Multi-Scenario Benchmark with Cognition-Perception-Reasoning Guided Text-Image Machine Translation},
      author={Gengluo Li and Chengquan Zhang and Yupu Liang and Huawen Shen and Yaping Zhang and Pengyuan Lyu and Weinong Wang and Xingyu Wan and Gangyan Zeng and Han Hu and Can Ma and Yu Zhou},
      year={2026},
      eprint={2603.23896},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2603.23896},
}
```

## License

This benchmark is released for **research purposes only**.