tyfeld committed on
Commit 6f2b19f · verified · 1 Parent(s): 1ebf0a0

Update README.md

Update README.md

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -7,13 +7,14 @@ We introduce Parallel Multimodal Large Diffusion Language Models for Thinking-Aw
 
  This variant is based on Amused-VQ, trained from Lumina-DiMOO, with better quality and robustness.
 
- [Paper](https://arxiv.org/abs/2505.15809) | [Code](https://github.com/tyfeld/MMaDA-Parallel)
+ [Paper](https://arxiv.org/abs/2511.09611) | [Code](https://github.com/tyfeld/MMaDA-Parallel)
 
  # Citation
  ```
  @article{tian2025mmadaparallel,
- title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},
- author={Tian, Ye and Yang, Ling and Yang, Jiongfan and Wang, Anran and Tian, Yu and Zheng, Jiani and Wang, Haochen and Teng, Zhiyang and Wang, Zhuochen and Wang, Yinjie and Tong, Yunhai and Wang, Mengdi and Li, Xiangtai},
- journal={Preprint},
- year={2025}}
+ title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},
+ author={Tian, Ye and Yang, Ling and Yang, Jiongfan and Wang, Anran and Tian, Yu and Zheng, Jiani and Wang, Haochen and Teng, Zhiyang and Wang, Zhuochen and Wang, Yinjie and Tong, Yunhai and Wang, Mengdi and Li, Xiangtai},
+ journal={arXiv preprint arXiv:2511.09611},
+ year={2025}
+ }
  ```