ZYMPKU committed
Commit a179a6a · 1 Parent(s): 8f4f8b2

update readme
Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +148 -1
  3. teaser.png +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+ teaser.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,150 @@
---
- license: mit
+ library_name: diffusers
+ inference: true
+ tags:
+ - lora
+ - text-to-image
+ - stable-diffusion
+ - flux
+ base_model: black-forest-labs/FLUX.1-dev
---
+
+ # ART - Anonymous Region Transformer for Variable Multi-Layer Transparent Image Generation
+
+ - Paper: [arXiv](https://arxiv.org/abs/2502.18364)
+ - Official Repository: [ART](https://github.com/microsoft/art-msra)
+ - Project Page: https://art-msra.github.io/
+ - Demo: [🤗 demo](http://20.65.136.27:8060/)
+
+ ![teaser](./teaser.png)
+
+ ## News 🔥🔥🔥
+
+ * Feb 26, 2025: 💥💥💥 Weights uploaded.
+
+ ## Abstract
+
+ Multi-layer image generation is a fundamental task that enables users to isolate, select, and edit specific image layers, thereby revolutionizing interactions with generative models. In this paper, we introduce the Anonymous Region Transformer (ART), which facilitates the direct generation of variable multi-layer transparent images based on a global text prompt and an anonymous region layout.
+
+ Inspired by Schema theory, this anonymous region layout allows the generative model to autonomously determine which set of visual tokens should align with which text tokens, in contrast to the previously dominant semantic layout for the image generation task.
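+
+ Concretely, an anonymous region layout is just a list of boxes plus one global caption; no individual box carries its own text. The sketch below contrasts the two layout styles (the `wholecaption`/`layout` keys match the usage script further down; the `semantic_layout` structure is our own illustration, not any particular library's API):
+
+ ```python
+ # Anonymous region layout (ART): one global prompt plus unlabeled boxes.
+ # The model itself decides which text tokens each region should align with.
+ anonymous_layout = {
+     "wholecaption": "Floral wedding invitation with a circular leaf border and title text.",
+     "layout": [(0, 0, 512, 512), (0, 0, 512, 352), (160, 192, 352, 432)],  # (x0, y0, x1, y1)
+ }
+
+ # Semantic layout (prior work): every box is pinned to its own caption in advance.
+ semantic_layout = [
+     {"box": (0, 0, 512, 352), "caption": "green leaves and white flowers"},
+     {"box": (160, 192, 352, 432), "caption": "a cursive title"},
+ ]
+ ```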
+
+ In addition, the layer-wise region crop mechanism, which selects only the visual tokens belonging to each anonymous region, significantly reduces attention computation costs and enables the efficient generation of images with numerous distinct layers (e.g., 50+). Compared to the full-attention approach, our method is over 12 times faster and exhibits fewer layer conflicts.
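+
+ To see where the savings come from: under full attention, every layer's tokens attend over the entire multi-layer token sequence, whereas region crop lets each layer pay only for the tokens inside its own box. Below is a minimal sketch of the token-selection step; it is our illustration of the idea, not the repository's implementation, and it assumes a 16-pixel patch grid (matching the coordinate rounding in the usage script):
+
+ ```python
+ import torch
+
+ def region_token_indices(box, image_size=512, patch=16):
+     """Flat indices of the patch tokens covered by a pixel-space box."""
+     x0, y0, x1, y1 = (v // patch for v in box)  # pixel coords -> patch-grid coords
+     grid = image_size // patch                  # tokens per side: 32 for 512 px
+     rows = torch.arange(y0, y1)
+     cols = torch.arange(x0, x1)
+     return (rows[:, None] * grid + cols[None, :]).flatten()
+
+ # A small region keeps a small fraction of the 32 x 32 = 1024 canvas tokens:
+ idx = region_token_indices((160, 192, 352, 432))
+ print(idx.numel(), "of 1024 tokens")  # 180 of 1024 tokens
+ ```
+
+ Attention for that layer then runs over `tokens[idx]` rather than the full sequence, which is what keeps 50+ layers tractable.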
+
+ Furthermore, we propose a high-quality multi-layer transparent image autoencoder that supports direct, joint encoding and decoding of the transparency of variable multi-layer images.
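+
+ "Joint" here means that all layers of an image pass through the autoencoder together rather than one at a time, so the model can keep transparency boundaries consistent across layers. A toy, shape-level sketch of that contract (entirely illustrative; the actual `transp_vae` loaded below is a transformer autoencoder with its own architecture):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ToyJointLayerAE(nn.Module):
+     """Encode/decode a variable-length stack of RGBA layers in one pass."""
+     def __init__(self, dim=64):
+         super().__init__()
+         self.enc = nn.Linear(4 * 8 * 8, dim)  # one token per layer (toy 8x8 RGBA layers)
+         self.mix = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
+         self.dec = nn.Linear(dim, 4 * 8 * 8)
+
+     def forward(self, layers):                # layers: (L, 4, 8, 8), L may vary
+         tokens = self.enc(layers.flatten(1)).unsqueeze(0)  # (1, L, dim)
+         tokens = self.mix(tokens)             # layers exchange information here
+         return self.dec(tokens).squeeze(0).view_as(layers)
+
+ recon = ToyJointLayerAE()(torch.rand(5, 4, 8, 8))  # 5 RGBA layers in, 5 out
+ ```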
+
+ By enabling precise control and scalable layer generation, ART establishes a new paradigm for interactive content creation.
+
+ ## Usage
+ Please clone our repository first, then run the code below to try it out!
+ ```python
+ import os
+ import math
+ import torch
+ import argparse
+ from PIL import Image
+ from multi_layer_gen.custom_model_mmdit import CustomFluxTransformer2DModel
+ from multi_layer_gen.custom_model_transp_vae import AutoencoderKLTransformerTraining as CustomVAE
+ from multi_layer_gen.custom_pipeline import CustomFluxPipelineCfg
+
+ def test_sample(pipeline, transp_vae, batch, args):
+
+     def adjust_coordinate(value, floor_or_ceil, k=16, min_val=0, max_val=1024):
+         # Round the value down ("floor") or up ("ceil") to the nearest multiple of k,
+         # e.g. adjust_coordinate(137, "floor") == 128, adjust_coordinate(137, "ceil") == 144
+         if floor_or_ceil == "floor":
+             rounded_value = math.floor(value / k) * k
+         else:
+             rounded_value = math.ceil(value / k) * k
+         # Clamp the result between min_val and max_val
+         return max(min_val, min(rounded_value, max_val))
+
+     this_index = batch["index"]  # subdirectory name for this sample's outputs
+     validation_prompt = batch["wholecaption"]
+     validation_box_raw = batch["layout"]
+     # Snap each box outward onto the 16-pixel grid: floor the top-left corner, ceil the bottom-right
+     validation_box = [
+         (
+             adjust_coordinate(rect[0], floor_or_ceil="floor"),
+             adjust_coordinate(rect[1], floor_or_ceil="floor"),
+             adjust_coordinate(rect[2], floor_or_ceil="ceil"),
+             adjust_coordinate(rect[3], floor_or_ceil="ceil"),
+         )
+         for rect in validation_box_raw
+     ]
+     # Cap the number of layers at 52
+     if len(validation_box) > 52:
+         validation_box = validation_box[:52]
+
+     generator = torch.Generator(device=torch.device("cuda", index=args.gpu_id)).manual_seed(args.seed) if args.seed else None
+     output, rgba_output, _, _ = pipeline(
+         prompt=validation_prompt,
+         validation_box=validation_box,
+         generator=generator,
+         height=args.resolution,
+         width=args.resolution,
+         num_layers=len(validation_box),
+         guidance_scale=args.cfg,
+         num_inference_steps=args.steps,
+         transparent_decoder=transp_vae,
+     )
+     images = output.images  # list of PIL images, one per layer; index 0 is the merged preview
+     rgba_images = [Image.fromarray(arr, 'RGBA') for arr in rgba_output]
+
+     os.makedirs(os.path.join(args.save_dir, this_index), exist_ok=True)
+     for frame_idx, frame_pil in enumerate(images):
+         frame_pil.save(os.path.join(args.save_dir, this_index, f"layer_{frame_idx}.png"))
+         if frame_idx == 0:
+             frame_pil.save(os.path.join(args.save_dir, this_index, "merged.png"))
+     # Re-composite the transparent layers on top of the background (layer 1)
+     merged_pil = images[1].convert('RGBA')
+     for frame_idx, frame_pil in enumerate(rgba_images):
+         if frame_idx < 2:
+             frame_pil = images[frame_idx].convert('RGBA')  # merged preview and background
+         else:
+             merged_pil = Image.alpha_composite(merged_pil, frame_pil)
+         frame_pil.save(os.path.join(args.save_dir, this_index, f"layer_{frame_idx}_rgba.png"))
+
+     merged_pil = merged_pil.convert('RGB')
+     merged_pil.save(os.path.join(args.save_dir, this_index, "merged_rgba.png"))
+
+
+ args = dict(
+     save_dir="output/",
+     resolution=512,
+     cfg=4.0,
+     steps=28,
+     seed=41,
+     gpu_id=1,  # index of the GPU to run on
+ )
+ args = argparse.Namespace(**args)
+
+ transformer = CustomFluxTransformer2DModel.from_pretrained("ART-Release/ART_v1.0", subfolder="transformer", torch_dtype=torch.bfloat16)
+ transp_vae = CustomVAE.from_pretrained("ART-Release/ART_v1.0", subfolder="transp_vae", torch_dtype=torch.float32)
+ pipeline = CustomFluxPipelineCfg.from_pretrained(
+     "black-forest-labs/FLUX.1-dev",
+     transformer=transformer,
+     torch_dtype=torch.bfloat16,
+ ).to(torch.device("cuda", index=args.gpu_id))
+ pipeline.enable_model_cpu_offload(gpu_id=args.gpu_id)  # saves GPU memory
+
+ sample = {
+     "index": "reso512_3",
+     "wholecaption": 'Floral wedding invitation: green leaves, white flowers; circular border. Center: "JOIN US CELEBRATING OUR WEDDING" (cursive), "DONNA AND HARPER" (bold), "03 JUNE 2023" (small bold). White, green color scheme, elegant, natural.',
+     "layout": [(0, 0, 512, 512), (0, 0, 512, 512), (0, 0, 512, 352), (144, 384, 368, 448), (160, 192, 352, 432), (368, 0, 512, 144), (0, 0, 144, 144), (128, 80, 384, 208), (128, 448, 384, 496), (176, 48, 336, 80)],
+ }
+
+ test_sample(pipeline=pipeline, transp_vae=transp_vae, batch=sample, args=args)
+
+ del pipeline
+ if torch.cuda.is_available():
+     torch.cuda.empty_cache()
+ ```
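+
+ Running this writes the outputs for the sample to `output/reso512_3/`: `layer_{i}.png` (the decoded layers, with `layer_0.png` also saved as `merged.png`), `layer_{i}_rgba.png` (the transparent versions), and `merged_rgba.png` (the RGBA layers re-composited over the background). Note that the first two boxes in the sample's `layout` span the full canvas: they correspond to the merged preview and the background layer, and only the remaining layers are alpha-composited.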
+
+ ## Citation
+
+ ```bibtex
+ @article{pu2025art,
+   title={ART: Anonymous Region Transformer for Variable Multi-Layer Transparent Image Generation},
+   author={Yifan Pu and Yiming Zhao and Zhicong Tang and Ruihong Yin and Haoxing Ye and Yuhui Yuan and Dong Chen and Jianmin Bao and Sirui Zhang and Yanbin Wang and Lin Liang and Lijuan Wang and Ji Li and Xiu Li and Zhouhui Lian and Gao Huang and Baining Guo},
+   journal={arXiv preprint arXiv:2502.18364},
+   year={2025},
+ }
+ ```
teaser.png ADDED

Git LFS Details

  • SHA256: 7c5a6689212c393dfa7db8f5da2766d9076e89bf5663135852a4c1cbaea12620
  • Pointer size: 132 Bytes
  • Size of remote file: 2.34 MB