Update README.md
We introduce Parallel Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation.
This variant is based on Amused-VQ and trained from Lumina-DiMOO, with better quality and robustness.

[Paper](https://arxiv.org/abs/2511.09611) | [Code](https://github.com/tyfeld/MMaDA-Parallel)

# Citation

```
@article{tian2025mmadaparallel,
  title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},
  author={Tian, Ye and Yang, Ling and Yang, Jiongfan and Wang, Anran and Tian, Yu and Zheng, Jiani and Wang, Haochen and Teng, Zhiyang and Wang, Zhuochen and Wang, Yinjie and Tong, Yunhai and Wang, Mengdi and Li, Xiangtai},
  journal={arXiv preprint arXiv:2511.09611},
  year={2025}
}
```