Multi-LoRA Composition for Image Generation

Ming Zhong1, Yelong Shen2, Shuohang Wang2, Yadong Lu2, Yizhu Jiao1, Siru Ouyang1, Donghan Yu2, Jiawei Han1, Weizhu Chen2

*Equal contribution. 1University of Illinois Urbana-Champaign, 2Microsoft Corporation. Correspondence to: Jiawei Han <hanj@illinois.edu>, Weizhu Chen <wzchen@microsoft.com>.

Project Website: Multi-LoRA-Composition
arXiv:2402.16843v1 [cs.CV] 26 Feb 2024

Abstract

Low-Rank Adaptation (LoRA) is extensively utilized in text-to-image models for the accurate rendition of specific elements like distinct characters or unique styles in generated images. Nonetheless, existing methods face challenges in effectively composing multiple LoRAs, especially as the number of LoRAs to be integrated grows, thus hindering the creation of complex imagery. In this paper, we study multi-LoRA composition through a decoding-centric perspective. We present two training-free methods: LORA SWITCH, which alternates between different LoRAs at each denoising step, and LORA COMPOSITE, which simultaneously incorporates all LoRAs to guide more cohesive image synthesis. To evaluate the proposed approaches, we establish ComposLoRA, a new comprehensive testbed as part of this research. It features a diverse range of LoRA categories with 480 composition sets. Utilizing an evaluation framework based on GPT-4V, our findings demonstrate a clear improvement in performance with our methods over the prevalent baseline, particularly evident when increasing the number of LoRAs in a composition.

1. Introduction

In the dynamic realm of generative text-to-image models (Ho et al., 2020; Rombach et al., 2022; Saharia et al., 2022; Ramesh et al., 2022; Ruiz et al., 2023; Sohn et al., 2023), the integration of Low-Rank Adaptation (LoRA) (Hu et al., 2022) stands out for its ability to fine-tune image synthesis with remarkable precision and minimal computational load. LoRA excels by specializing in one element — such as a specific character, a particular piece of clothing, a unique style, or other distinct visual aspects — and being trained to produce diverse and accurate renditions of this element in generated images. For instance, users could customize their LoRA model to generate various images of themselves, achieving an array of personalized and realistic representations. The application of LoRA not only showcases its adaptability and precision in image generation but also opens new avenues in customized digital content creation, revolutionizing how users interact with and utilize generative text-to-image models for creating tailored visual content.

However, an image typically embodies a mosaic of various elements, making compositionality key to controllable image generation (Tenenbaum, 2018; Huang et al., 2023b). In pursuit of this, the strategy of composing multiple LoRAs, each focused on a distinct element, emerges as a feasible approach for advanced customization. This technique enables the digitization of complex scenes, such as virtual try-ons, merging users with clothing in a realistic fashion, or urban landscapes where users interact with meticulously designed city elements. Prior investigations into multi-LoRA compositions have explored the context of pre-trained language models (Zhang et al., 2023a; Huang et al., 2023a) or stable diffusion models (Ryu, 2023; Shah et al., 2023).
These studies aim to merge multiple LoRA models to synthesize a new LoRA model by training coefficient matrices (Huang et al., 2023a; Shah et al., 2023) or through the direct addition or subtraction of LoRA weights (Ryu, 2023; Zhang et al., 2023a). Nevertheless, these approaches centered on weight manipulation could destabilize the merging process as the number of LoRAs grows (Huang et al., 2023a) and also overlook the interaction between LoRA models and base models. This oversight becomes particularly critical in diffusion models, which depend on sequential denoising steps for image generation. Ignoring the interplay between LoRAs and these steps can result in misalignments in the generative process, as shown in Figure 1, where a merged LoRA model fails to preserve the full complexity of all desired elements, leading to distorted or unrealistic images.

[Figure 1: Multi-LoRA composition techniques effectively blend different elements such as characters, clothing, and objects into a cohesive image. Unlike the conventional LoRA Merge approach, which can lead to detail loss and image distortion as more LoRAs are added, our methods retain the accuracy of each element and the overall image quality. Panels compare LoRA Merge with ours when combining character, clothing, background, and object LoRAs in realistic and anime styles.]

In this paper, we delve into multi-LoRA composition from a decoding-centric perspective, keeping all LoRA weights intact. We present two learning-free approaches that utilize either one or all LoRAs at each decoding step to facilitate compositional image synthesis. Our first approach, LORA SWITCH, operates by selectively activating a single LoRA during each denoising step, with a rotation among multiple LoRAs throughout the generation process. For instance, in a virtual try-on scenario, LORA SWITCH alternates between a character LoRA and a clothing LoRA at successive denoising steps, thereby ensuring that each element is rendered with precision and clarity. In parallel, we propose LORA COMPOSITE, a technique that draws inspiration from classifier-free guidance (Ho & Salimans, 2022). It involves calculating unconditional and conditional score estimates derived from each respective LoRA at every denoising step. These scores are then averaged to provide balanced guidance for image generation, ensuring a comprehensive incorporation of all elements. Furthermore, by bypassing the manipulation on the weight matrix but directly influencing the diffusion process, both methods allow for the integration of any number of LoRAs and overcome the limitations of recent studies that typically merge only two LoRAs (Shah et al., 2023).

Experimentally, we introduce ComposLoRA, the first testbed specifically designed for LoRA-based composable image generation. This testbed features an extensive array of six LoRA categories, spanning two distinct visual styles: reality and anime. Our evaluation includes 480 diverse composition sets, each incorporating a varying number of LoRAs to comprehensively evaluate the efficacy of each proposed method. Given the lack of standardized automatic metrics for this novel task, we propose to employ GPT-4V (OpenAI, 2023a;b) as an evaluator, assessing both the quality of the images and the effectiveness of the compositions. Our empirical findings consistently demonstrate that both LORA SWITCH and LORA COMPOSITE substantially outperform the prevalent LoRA merging approach, particularly noticeable as the number of LoRAs in a composition increases. To further validate our results, we also conduct human evaluations, which reinforce our conclusions and affirm the efficacy of our automated evaluation framework. In addition, we provide a detailed analysis of the applicable scenarios for each method, as well as discuss the potential bias of using GPT-4V as an evaluator.

To summarize, our key contributions are threefold:

(1) We introduce the first investigation of multi-LoRA composition from a decoding-centric perspective, proposing LORA SWITCH and LORA COMPOSITE. Our methods overcome existing constraints on the number of LoRAs that can be integrated, offering enhanced flexibility and improved quality in composable image generation.

(2) Our work establishes ComposLoRA, a comprehensive testbed tailored to this research area, featuring six varied categories of LoRAs and 480 composition sets.
Addressing the absence of standardized metrics, we present an evaluator built upon GPT-4V, setting a new benchmark for assessing both image quality and compositional efficacy.

(3) Through extensive automatic and human evaluations, our findings reveal the superior performance of the proposed LORA SWITCH and LORA COMPOSITE compared to the prevalent LoRA merging approach. Additionally, we provide an in-depth analysis of different multi-composition methods and evaluation frameworks.

2. Method

In this section, we begin with an overview of essential concepts for understanding multi-LoRA composition, followed by detailed descriptions of our proposed methods.
2.1. Preliminary

Diffusion Models. Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Dhariwal & Nichol, 2021; Song et al., 2021; Nichol et al., 2022) represent a class of generative models adept at crafting data samples from Gaussian noise through a sequential denoising process. They build upon a sequence of denoising autoencoders that estimate the score of a data distribution (Hyvärinen, 2005). Given an image x, the encoder E is used to map x into a latent space, thus yielding an encoded latent z = E(x). The diffusion process introduces noise to z, resulting in latent representations z_t with different noise levels over timestep t ∈ T. The diffusion model ε_θ with learnable parameters θ is trained to predict the noise added to the noisy latent z_t given text instruction conditioning c_T. Typically, a mean-squared error loss function is utilized as the denoising objective:

$\mathcal{L} = \mathbb{E}_{\mathcal{E}(x),\, \epsilon \sim \mathcal{N}(0,1),\, t}\left[ \lVert \epsilon - \epsilon_\theta(z_t, t, c_T) \rVert_2^2 \right]$,   (1)

where ε is the additive Gaussian noise. In this paper, we investigate multi-LoRA composition based on diffusion models, which is consistent with the settings of previous studies on LoRA merging (Ryu, 2023; Shah et al., 2023).

Classifier-Free Guidance. In diffusion-based generative modeling, classifier-free guidance (Ho & Salimans, 2022) balances the trade-off between the diversity and quality of the generated images, particularly in scenarios where the model is conditioned on classes or textual descriptions. For the text-to-image task, it operates by directing the probability mass towards outcomes where the implicit classifier p_θ(c | z_t) predicts a high likelihood for the textual conditioning c. This necessitates the diffusion models to undergo a joint training paradigm for both conditional and unconditional denoising. Subsequently, during inference, the guidance scale s ≥ 1 is used to adjust the score function by moving it closer to the conditional estimation ε_θ(z_t, c) and further from the unconditional estimation ε_θ(z_t), enhancing the conditioning effect on the generated images, as formalized in the following expression:

$\tilde{\epsilon}_\theta(z_t, c) = \epsilon_\theta(z_t) + s \cdot \left( \epsilon_\theta(z_t, c) - \epsilon_\theta(z_t) \right)$.   (2)

LoRA Merge. The Low-Rank Adaptation (LoRA) approach (Hu et al., 2022) enhances parameter efficiency by freezing the pre-trained weight matrices and integrating additional trainable low-rank matrices within the neural network. This method is founded on the observation that pre-trained models exhibit low "intrinsic dimension" (Aghajanyan et al., 2021). Concretely, for a weight matrix W ∈ R^{n×m} in the diffusion model ε_θ, the introduction of a LoRA module involves updating W to W′, defined as W′ = W + BA. Here, B ∈ R^{n×r} and A ∈ R^{r×m} are matrices of a low-rank factor r, satisfying r ≪ min(n, m).

The concept of LoRA Merge (Ryu, 2023) is realized by linearly combining multiple LoRAs to synthesize a unified LoRA, subsequently plugged into the diffusion model. Formally, when introducing k distinct LoRAs, the consequent updated matrix W′ in ε_θ is given by:

$W' = W + \sum_{i=1}^{k} w_i \times B_i A_i$,   (3)

where i denotes the index of the i-th LoRA, and w_i is a scalar weight, typically a hyperparameter determined through empirical tuning. LoRA Merge has emerged as a dominant approach for presenting multiple elements cohesively in an image, offering a straightforward baseline for various applications.
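As a quick illustration of Eq. (3), the following minimal PyTorch sketch merges several LoRA updates into a single base weight matrix. The tensor shapes follow the notation above; the function name and toy dimensions are illustrative assumptions, not the paper's released code.

```python
import torch

def merge_loras(W: torch.Tensor, loras, weights) -> torch.Tensor:
    """Eq. (3): W' = W + sum_i w_i * (B_i @ A_i).

    W:       (n, m) frozen base weight.
    loras:   list of (B_i, A_i) pairs with B_i of shape (n, r), A_i of shape (r, m).
    weights: list of scalar weights w_i (an empirically tuned hyperparameter).
    """
    W_merged = W.clone()
    for w_i, (B, A) in zip(weights, loras):
        W_merged += w_i * (B @ A)
    return W_merged

# Toy usage: two rank-4 LoRAs on a 320x320 projection.
W = torch.randn(320, 320)
loras = [(torch.randn(320, 4), torch.randn(4, 320)) for _ in range(2)]
print(merge_loras(W, loras, weights=[0.8, 0.8]).shape)  # torch.Size([320, 320])
```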
However, merging too many LoRAs at once can destabilize the merging process (Huang et al., 2023a), and it completely overlooks the interaction with the diffusion model during the generative process, resulting in the deformation of the hamburger and fingers in Figure 2.

[Figure 2: Overview of three multi-LoRA composition techniques, where each colored LoRA represents a distinct element. The prevalent approach, LORA MERGE, linearly merges multiple LoRAs into a single one. In contrast, our methods concentrate on the denoising process: LORA SWITCH cycles through different LoRAs during the denoising, while LORA COMPOSITE involves all LoRAs working together as the guidance throughout the generation process. The three panels depict (1) LoRA Merge, (2) LoRA Switch, and (3) LoRA Composite over the denoising chain x_T, ..., x_t, x_{t-1}, ..., x_0.]

2.2. Multi-LoRA Composition through a Decoding-Centric Perspective

To address the above issues, we base our approach on the denoising process of diffusion models and investigate how to perform composition while maintaining the LoRA weights unchanged. This is specifically divided into two perspectives: in each denoising step, either activate only one LoRA or engage all LoRAs to guide the generation.

LORA SWITCH (LORA-S). To explore activating a single LoRA in each denoising step, we present LORA SWITCH. This method introduces a dynamic adaptation mechanism within diffusion models by sequentially activating individual LoRAs at designated intervals throughout the generation process. As illustrated in Figure 2, each LoRA is represented by a unique color corresponding to a specific element, with only one LoRA engaged per denoising step. With a set of k LoRAs, the methodology initiates with a prearranged sequence of permutations; in the example of the figure, the sequence progresses from yellow to green to blue LoRAs. Starting from the first LoRA, the model transitions to the subsequent LoRA every τ steps. This rotation persists, allowing each LoRA to be applied in turn after kτ steps, thereby enabling each element to contribute repeatedly to the image generation. The active LoRA at each denoising timestep t, ranging from 1 to the total number of steps required, is determined by the following equations:
$i = \left\lfloor \frac{(t-1) \bmod (k\tau)}{\tau} \right\rfloor + 1, \qquad W'_t = W + w_i \times B_i A_i$.   (4)

In this formula, i indicates the index of the currently active LoRA, iterating from 1 to k. The floor function ⌊·⌋ guarantees the integer value of i is appropriately computed for t. The resulting weight matrix W′_t is updated to reflect the contribution from the active LoRA. By selectively enabling one LoRA at a time, LORA SWITCH ensures focused attention to the details pertinent to the current element, thus preserving the integrity and quality of the generated image throughout the process.

LORA COMPOSITE (LORA-C). To explore incorporating all LoRAs at each timestep without merging weight matrices, we propose LORA COMPOSITE (LORA-C), an approach grounded in the classifier-free guidance paradigm. This method involves calculating both unconditional and conditional score estimates for each LoRA individually at every denoising step. By aggregating these scores, the technique ensures balanced guidance throughout the image generation process, facilitating the cohesive integration of all elements represented by different LoRAs.

Formally, with k LoRAs in place, let θ′_i denote the parameters of the diffusion model ε_θ after incorporating the i-th LoRA. The collective guidance based on textual condition c is derived by aggregating the scores from each LoRA, as depicted in the equation below:

$\tilde{\epsilon}(z_t, c) = \frac{1}{k} \sum_{i=1}^{k} w_i \times \left[ \epsilon_{\theta'_i}(z_t) + s \cdot \left( \epsilon_{\theta'_i}(z_t, c) - \epsilon_{\theta'_i}(z_t) \right) \right]$.   (5)

Here, w_i is a scalar weight allocated to each LoRA, intended to adjust the influence of the i-th LoRA. For our experimental setup, we set w_i to 1, giving each LoRA equal importance. LORA-C assures that every LoRA contributes effectively at each stage of the denoising process, addressing the potential issues of robustness and detail preservation that are commonly associated with merging LoRAs.

Overall, we are the first to adopt a decoding-centric perspective in multi-LoRA composition, steering clear of the instability inherent in weight manipulation on LoRAs. Our study introduces two training-free methods for activating either one or all LoRAs at each denoising step, with their comparative analysis presented in Sections §3.2 and §3.3.1.
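To make the switching schedule in Eq. (4) concrete, here is a minimal Python sketch of the index computation and the per-step weight update. The helper names, the 1-indexed timestep convention, and the toy schedule are illustrative assumptions rather than the paper's released implementation.

```python
import math

def active_lora_index(t: int, k: int, tau: int) -> int:
    """Eq. (4): 1-based index of the LoRA active at denoising step t (t = 1, 2, ...)."""
    return math.floor(((t - 1) % (k * tau)) / tau) + 1

def switched_weight(W, lora, w):
    """Eq. (4), right-hand side: W'_t = W + w_i * (B_i @ A_i) for the single active LoRA."""
    B, A = lora  # B: (n, r), A: (r, m) arrays or tensors
    return W + w * (B @ A)

# Example schedule: 3 LoRAs, switch every tau = 5 steps, 30 denoising steps.
k, tau = 3, 5
schedule = [active_lora_index(t, k, tau) for t in range(1, 31)]
# -> [1]*5 + [2]*5 + [3]*5, repeated twice
```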
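Similarly, a minimal sketch of the LORA COMPOSITE guidance in Eq. (5) is given below. It assumes a list of callables, one per LoRA-augmented model, each returning the noise estimate of the base model with that LoRA attached; the callable interface and the use of PyTorch tensors are assumptions for illustration.

```python
import torch

def lora_composite_guidance(eps_fns, z_t, cond, s: float, weights=None):
    """Eq. (5): average classifier-free-guided scores over k LoRA-augmented models.

    eps_fns: list of k callables; eps_fns[i](z_t, c) returns eps_{theta'_i}(z_t, c),
             and eps_fns[i](z_t, None) returns the unconditional estimate eps_{theta'_i}(z_t).
    """
    k = len(eps_fns)
    weights = weights or [1.0] * k          # the paper sets w_i = 1 for all LoRAs
    guided = torch.zeros_like(z_t)
    for w_i, eps in zip(weights, eps_fns):
        uncond = eps(z_t, None)             # unconditional score estimate
        cond_e = eps(z_t, cond)             # conditional score estimate
        guided += w_i * (uncond + s * (cond_e - uncond))
    return guided / k
```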
3. Experiments

3.1. Experimental Setup

ComposLoRA Testbed. Due to the absence of standardized benchmarks and automated evaluation metrics, existing studies involving evaluation for composable image generation lean heavily on quantitative analysis (Huang et al., 2023b; Wang et al., 2023) and human effort (Shah et al., 2023), which also limits the advancements of multi-LoRA composition. To bridge this gap, we introduce a comprehensive testbed, ComposLoRA, designed to facilitate comparative analysis of various composition approaches. This testbed builds upon a collection of public LoRAs (collected from https://civitai.com/), which are extensively shared and recognized as essential plug-in modules in this field.

The selection of LoRAs for this testbed adheres to the following criteria: (1) Each LoRA should be robustly trained, ensuring it can accurately replicate the specific elements it represents when integrated independently; (2) The elements represented by the LoRAs should cover a diverse range of categories and demonstrate adaptability across different image styles; (3) When composed, LoRAs from different categories should be compatible, preventing any conflicts in the resulting image composition.

Consequently, we curate two unique subsets of LoRAs representing realistic and anime styles. Each subset comprises a variety of elements: 3 characters, 2 types of clothing, 2 styles, 2 backgrounds, and 2 objects, culminating in a total of 22 LoRAs in ComposLoRA. In constructing composition sets, we strictly follow a crucial principle: each set must include one character LoRA and avoid duplication of element categories to prevent conflicts. Thus, the ComposLoRA evaluation incorporates a total of 480 distinct composition sets. This includes 48 sets comprising 2 LoRAs, 144 sets with 3 LoRAs, 192 sets featuring 4 LoRAs, and 96 sets containing 5 LoRAs. Key features for each LoRA are manually annotated and serve dual purposes: they act as input prompts for the text-to-image models to generate images, and also provide reference points for subsequent evaluations using GPT-4V. Detailed descriptions of each LoRA can be found in Table 3 in the Appendix.

Comparative Evaluation with GPT-4V. While existing metrics can calculate the alignment between text and images (Hessel et al., 2021; Ku et al., 2023), they fall short in assessing the intricacies of specific elements within an image and the quality of their composition. Recently, multimodal large language models like GPT-4V (OpenAI, 2023a;b) have shown significant progress and promise in various multimodal tasks, underscoring their potential in evaluating image generation tasks (Lin et al., 2023; Zhang et al., 2023b). In our study, we leverage GPT-4V's capabilities to serve as an evaluator for composable image generation. Specifically, we employ a comparative evaluation method, utilizing GPT-4V to rate generated images across two dimensions: composition quality and image quality. We utilize a 0 to 10 scoring scale, with higher scores indicating superior quality.

Table 1: Comparative evaluation with GPT-4V. The evaluation prompt and result are shown in a simplified version.

Evaluation Prompt
I need assistance in comparatively evaluating two text-to-image models based on their ability to compose different elements into a single image. The key elements are:
1. Character: kamado nezuko, black hair, pink eyes, forehead
2. Background: bamboolight, outdoors, bamboo
3. Object: two-handed burger, holding a huge burger
Please help me rate based on composition and image quality:
[Evaluation criteria and format requirements]

Evaluation Results from GPT-4V
For Image 1:
Composition Quality:
- The character's hair color is not black as specified (-1 point).
- The burger shows a deformed connection which detracts from the quality of the object's depiction (-3 points).
- The character's fingers are deformed (-1 point).
Image Quality:
- Deformities are present in the character's fingers and the burger, which significantly impacts the quality (-3 points).
...
Scores:
Image 1: Composition Quality: 5/10, Image Quality: 7/10
Image 2: Composition Quality: 10/10, Image Quality: 10/10

As outlined in Table 1, we provide GPT-4V with a prompt that includes the essential features of the elements to be composed, the criteria for scoring in the two dimensions, and the format for the expected output. GPT-4V, in its evaluation, identifies issues such as deformities in the burger and the hand in Image 1, as well as incorrect hair color, leading to appropriate point deductions. The complete evaluation prompts and results are available in Tables 4 and 5 in the Appendix. This experimental setup allows us to compare the efficacy of each of the two proposed methods against the LORA MERGE approach. Additionally, we examine how GPT-4V-based scoring aligns with human judgment in Section §3.2 and explore the potential biases of using it as an evaluator in Section §3.3.3.
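Below is a rough sketch of how such a pairwise GPT-4V comparison could be issued with the OpenAI Python client. The abbreviated prompt, the model identifier, and the base64 image encoding are assumptions for illustration; the paper's actual evaluation script is not shown here.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def compare_images(prompt: str, image_a: str, image_b: str) -> str:
    """Ask GPT-4V to score two candidate compositions (see Table 1 and Table 4)."""
    content = [{"type": "text", "text": prompt}]
    for path in (image_a, image_b):
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{encode_image(path)}"},
        })
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",   # illustrative model name
        messages=[{"role": "user", "content": content}],
        max_tokens=800,
    )
    return response.choices[0].message.content
```

Because a positional bias is reported in Section 3.3.3, such a comparison would be run with both input orders and the scores averaged.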
[Figure 3: Results of comparative evaluation on ComposLoRA using GPT-4V. Panels: (a) LoRA Switch vs. LoRA Merge (scoring), (b) LoRA Composite vs. LoRA Merge (scoring), (c) LoRA Switch vs. LoRA Merge (win rate), and (d) LoRA Composite vs. LoRA Merge (win rate), each reported for composition quality and image quality with 2 to 5 LoRAs.]

Implementation Details. For our experiments, we employ stable-diffusion-v1.5 (Rombach et al., 2022) as the backbone model. We utilize two specific checkpoints for our experiments: "Realistic Vision V5.1" (https://huggingface.co/SG161222/Realistic_Vision_V5.1_noVAE) for realistic images and "Counterfeit-V2.5" (https://huggingface.co/gsdf/Counterfeit-V2.5) for anime images, each fine-tuned to their respective styles. In the realistic style subset, we configure the model with 100 denoising steps, a guidance scale s of 7, and set the image size to 1024x768, optimizing for superior image quality. For the anime style subset, the settings differ slightly, with 200 denoising steps, a guidance scale s of 10, and an image size of 512x512. DPM-Solver++ (Lu et al., 2022a;b) is used as the scheduler in the generation process. The weight scale w is consistently set at 0.8 for composing LoRAs within ComposLoRA. For the LoRA Switch approach, we apply a cycle with τ set to 5, meaning every 5 denoising steps activate the next LoRA in the sequence: character, clothing, style, background, then object. To ensure the reliability of our experimental results, we conduct image generation using three random seeds. All reported results in this paper represent the average evaluation scores across these three runs.
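As a rough illustration of the realistic-style settings with the Hugging Face diffusers library (the checkpoint identifier, the placeholder LoRA file name, and the width/height interpretation of 1024x768 are assumptions; the paper's actual generation script may differ, and stock diffusers applies a single merged LoRA rather than the per-step control sketched earlier):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Realistic-style configuration: 100 denoising steps, guidance scale 7, 1024x768.
pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)  # DPM-Solver++

# A LoRA would be loaded with a weight scale of 0.8 for composition;
# "character_lora.safetensors" is a placeholder file name.
pipe.load_lora_weights("character_lora.safetensors")

image = pipe(
    "iu1, long straight black hair, hazel eyes, diamond stud earrings",  # trigger words from Table 3
    num_inference_steps=100,
    guidance_scale=7,
    width=1024,   # assumed width/height split of the reported 1024x768 size
    height=768,
).images[0]
image.save("sample.png")
```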
3.2. Results on ComposLoRA

GPT-4V-based Evaluation. We first present the comparative evaluation results obtained using GPT-4V. This evaluation involves scoring the performance of LORA-S versus LORA MERGE, and LORA-C versus LORA MERGE across two dimensions, as well as determining the winner based on these scores. Specific scores and win rates are illustrated in Figure 3, leading to several key observations:

(1) Our proposed methods consistently outperform LoRA Merge across all configurations and in both dimensions, with the margin of superiority increasing as the number of LoRAs grows. For instance, as shown in Figure 3(a), the score advantage of LORA SWITCH escalates from 0.04 with 2 LoRAs to 1.32 with 5 LoRAs. This trend aligns with the win rate observed in Figure 3(c), where the win rate approaches 70% when composing 5 LoRAs.

(2) LoRA-S shows superior performance in composition quality, whereas LoRA-C excels in image quality. In scenarios involving 5 LoRAs and using LORA MERGE as a baseline, the win rate of LORA-S in composition quality surpasses that of LORA-C by 14% (69% vs. 55%). Conversely, for image quality, LORA-C's win rate is 10% higher than that of LORA-S (56% vs. 46%).

(3) The task of compositional image generation remains highly challenging, especially as the number of elements to be composed increases. According to GPT-4V's scoring, the average score for composing 2 LoRAs is above 8.5, but it sharply declines to around 6 for compositions involving 5 LoRAs. Hence, despite the considerable improvements our methods offer, there is still substantial room for further research in the field of compositional image generation.

Human Evaluation. To complement our results, we conduct a human evaluation to assess the effectiveness of different methods and validate GPT-4V's efficacy as an evaluator. Two graduate students rate 120 images on compositional and image quality using a 1-5 Likert scale: 1 signifies complete failure, 2-4 represent significant, moderate, and minor issues, respectively, while 5 denotes perfect execution. To ensure consistency, the annotators initially pilot-score 20 images to standardize their understanding of the criteria. The results, summarized in the upper section of Table 2, align with GPT-4V's findings, confirming our methods outperform LORA MERGE — with LORA SWITCH excelling in composition and LORA COMPOSITE in image quality.

Table 2: Human evaluation results and Pearson correlation between different metrics and human judgment.

Human Evaluation      Composition   Image Quality
LORA MERGE            3.14          2.94
LORA SWITCH           3.91          4.15
LORA COMPOSITE        3.78          4.35

Correlations with Human Judgments
                      Composition   Image Quality
CLIPSCORE             -0.006        0.083
OURS                  0.454         0.457

Furthermore, we analyze the Pearson correlations between human evaluations and scores derived from GPT-4V and CLIPScore (Hessel et al., 2021), with results presented in the lower section of Table 2. This comparison reveals that CLIPScore's evaluations fall short in assessing specific compositional and quality aspects due to its inability to discern the nuanced features of each element. On the other hand, the GPT-4V-based evaluator we adopt shows substantially higher correlations with human judgments, affirming the validity of our evaluation framework.

3.3. Analysis

To enhance our understanding of the proposed methods, we further investigate the following critical questions:

[Figure 5: Analysis of the number of denoising steps to switch LoRA (a, win rate vs. τ) and the activation order for LoRA Switch (b, average score). In Figure 5(b), "Character" indicates that the character LoRA is activated first, with the rest being activated randomly.]

3.3.1. Do specific image styles favor different methods?

To explore this, we separately evaluate the performance of methods on realistic and anime-style subsets within ComposLoRA. The win rate results, presented in Figure 4, reveal distinct tendencies for each method. Our observations reveal that, while LORA-S may not excel in image quality compared to LORA-C, it demonstrates comparable performance in this dimension within the realistic style subset, while maintaining a significant edge in composition quality. In contrast, in the anime-style subset, LORA-C shows a performance on par with LORA-S in composition quality, while notably surpassing it in image quality. These findings suggest that LoRA-S is more adept at composing elements in realistic-style images, whereas LoRA-C shows a stronger performance in anime-style imagery.

[Figure 4: Analysis on image styles. In general, LORA-S is more adept at realistic styles, while LORA-C has better performance in anime styles.]

3.3.2. How does the step size and order of LoRA activation affect LoRA Switch?
To identify the optimal configuration for LORA SWITCH, we examine the influence of two crucial hyperparameters: the sequence in which LoRAs are activated and the interval between each activation. Our findings, depicted in Figure 5(a), show that overly frequent switching, such as changing LoRAs at every denoising step, leads to distortions in generated images and suboptimal performance. The efficiency of LoRA Switch improves progressively with increased step size, reaching peak performance at τ = 5. Consequently, we adopt this step size in our experiments. Moreover, our analysis underscores that the initial choice of LoRA in the activation sequence clearly influences overall performance, while alterations in the subsequent order have minimal impact. Activating the character LoRA first leads to the best performance, as demonstrated in Figure 5(b). In contrast, starting with clothing, background, or object LoRAs yields results comparable to a completely randomized sequence.
Notably, beginning with the style LoRA leads to a noticeable performance drop, even falling slightly below a random order. This observation underlines the critical role of prioritizing core image elements in the initial stage of the generation process to enhance both the image and compositional quality for LORA SWITCH.

3.3.3. Does GPT-4V exhibit bias as an evaluator?

While GPT-4V has demonstrated utility in evaluating various image generation tasks (Lin et al., 2023; Zhang et al., 2023b), our analysis uncovers a notable positional bias in its comparative evaluations. We investigate this potential bias by swapping the positions of images generated by different methods before inputting them to GPT-4V, and the results are illustrated in Figure 6. In the comparison of LORA-S versus LORA MERGE, when the image generated by MERGE is presented first ("Merge First"), the win rate for LORA-S in composition quality stands at 60%. However, this win rate declines to 51% when LORA-S's image is the first input ("LoRA-S First"). Similarly, LORA-C's win rate decreases from 52% to 42%, suggesting that GPT-4V tends to favor the first image input in terms of composition quality. Intriguingly, the opposite trend is observed in image quality, where the second image tends to receive a higher score. These results indicate a significant positional bias in GPT-4V's evaluation, varying with the dimension and the position of the images. To mitigate this bias in our study, the comparative evaluation results reported in this paper are averaged across both input orders.

[Figure 6: Positional bias analysis for comparative evaluation using GPT-4V. In each subfigure, the left side of the orange line compares LORA-S with MERGE, and the right side contrasts LORA-C with MERGE. "Merge First" indicates that the image produced by LORA MERGE is the first image input to GPT-4V during the comparative evaluation.]

4. Related Work

4.1. Composable Text-to-Image Generation

Composable image generation, a key aspect of digital content customization, involves creating images that adhere to a set of pre-defined specifications (Liu et al., 2023). Existing research in this domain primarily focuses on the following approaches: enhancing compositionality with scene graphs or layouts (Johnson et al., 2018; Yang et al., 2022; Gafni et al., 2022), modifying the generative process of diffusion models to align with the underlying specifications (Feng et al., 2023; Huang et al., 2023c;b), or composing a series of independent models that enforce desired constraints (Du et al., 2020; Liu et al., 2021; Nie et al., 2021; Liu et al., 2022; Li et al., 2023; Du et al., 2023).

However, these methods typically operate at the concept level, where generative models excel in creating images based on broader categories or general concepts. For example, a model might be prompted to generate an image of "a woman wearing a dress", and can adeptly accommodate variations in the textual description, such as changing the color of the dress. Yet, they struggle to accurately render specific, user-defined elements, like lesser-known characters or unique dress styles. Another line of work that can compose user-defined objects into images includes Huang et al. (2023c); Ruiz et al. (2023).
However, these methods require extensive fine-tuning and do not perform well on multiple objects. Therefore, we introduce learning-free instance-level composition approaches utilizing LoRA in this paper, enabling the precise assembly of user-specified elements in image generation.

4.2. LoRA-based Manipulations

Leveraging large language models (LLMs) or diffusion models as the base model, recent research aims to manipulate LoRA weights to achieve a range of objectives: element composition in image generation (Ryu, 2023; Shah et al., 2023), enhancing or diminishing certain capabilities in LLMs (Zhang et al., 2023a; Huang et al., 2023a), incorporating world knowledge (Dou et al., 2023), and transferring parametric knowledge from larger teacher models to smaller student models (Zhong et al., 2023). Regarding LoRA composition techniques, both LoRAHub (Huang et al., 2023a) and ZipLoRA (Shah et al., 2023) employ few-shot demonstrations to learn coefficient matrices for merging LoRAs, enabling the fusion of multiple LoRAs into a singular new LoRA. On the other hand, LoRA Merge (Ryu, 2023; Zhang et al., 2023a) introduces addition and negation operators to merge LoRA weights through arithmetic operations.

Nevertheless, these weight-based methods often lead to instability in the merging process as the number of LoRAs increases (Huang et al., 2023a). They also fail to account for the interactive dynamics when applying the LoRA model in conjunction with the base model. To address these issues, our study explores a new perspective: instead of altering the weights of LoRAs, we maintain all LoRA weights intact and focus on the interactions between LoRAs and the underlying generative process.
Multi-Lo RA Composition for Image Generation 5. Conclusion In this paper, we present the first exploration of multi-Lo RA composition from a decoding-centric perspective by intro-ducing LORA-Sand LORA-Cthat transcend the limitations of current weight manipulation techniques. Through es-tablishing a dedicated testbed Compos Lo RA, we introduce scalable automated evaluation metrics utilizing GPT-4V. Our study not only highlights the superior quality achieved by our methods but also provides a new standard for evaluating Lo RA-based composable image generation. Impact Statements This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. References Aghajanyan, A., Gupta, S., and Zettlemoyer, L. Intrin-sic dimensionality explains the effectiveness of language model fine-tuning. In Zong, C., Xia, F., Li, W., and Nav-igli, R. (eds. ), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pp. 7319-7328. Associa-tion for Computational Linguistics, 2021. doi: 10. 18653/ V1/2021. ACL-LONG. 568. URL https://doi. org/ 10. 18653/v1/2021. acl-long. 568. Dhariwal, P. and Nichol, A. Q. Diffusion models beat gans on image synthesis. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds. ), Advances in Neural Infor-mation Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, Neur IPS 2021, December 6-14, 2021, virtual, pp. 8780-8794, 2021. URL https://proceedings. neurips. cc/paper/2021/hash/ 49ad23d1ec9fa4bd8d77d02681df5cfa-Abstract. html. Dou, S., Zhou, E., Liu, Y., Gao, S., Zhao, J., Shen, W., Zhou, Y., Xi, Z., Wang, X., Fan, X., Pu, S., Zhu, J., Zheng, R., Gui, T., Zhang, Q., and Huang, X. Loramoe: Revolution-izing mixture of experts for maintaining world knowledge in language model alignment. Co RR, abs/2312. 09979, 2023. doi: 10. 48550/ARXIV. 2312. 09979. URL https: //doi. org/10. 48550/ar Xiv. 2312. 09979. Du, Y., Li, S., and Mordatch, I. Compositional visual generation with energy based models. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds. ),Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, Neur IPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings. neurips. cc/paper/2020/hash/ 49856ed476ad01fcff881d57e161d73f-Abstract. html. Du, Y., Durkan, C., Strudel, R., Tenenbaum, J. B., Diele-man, S., Fergus, R., Sohl-Dickstein, J., Doucet, A., and Grathwohl, W. S. Reduce, reuse, recycle: Composi-tional generation with energy-based diffusion models and MCMC. In Krause, A., Brunskill, E., Cho, K., Engel-hardt, B., Sabato, S., and Scarlett, J. (eds. ), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Pro-ceedings of Machine Learning Research, pp. 8489-8510. PMLR, 2023. URL https://proceedings. mlr. press/v202/du23a. html. Feng, W., He, X., Fu, T., Jampani, V., Akula, A. R., Narayana, P., Basu, S., Wang, X. E., and Wang, W. Y. Training-free structured diffusion guidance for compo-sitional text-to-image synthesis. In The Eleventh In-ternational Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. Open Re-view. net, 2023. 
URL https://openreview. net/ pdf?id=PUIqj T4rzq7. Gafni, O., Polyak, A., Ashual, O., Sheynin, S., Parikh, D., and Taigman, Y. Make-a-scene: Scene-based text-to-image generation with human priors. In Avidan, S., Brostow, G. J., Ciss ´e, M., Farinella, G. M., and Hassner, T. (eds. ), Computer Vision-ECCV 2022-17th Euro-pean Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XV, volume 13675 of Lecture Notes in Computer Science, pp. 89-106. Springer, 2022. doi: 10. 1007/978-3-031-19784-0 \6. URL https://doi. org/10. 1007/978-3-031-19784-0_6. Hessel, J., Holtzman, A., Forbes, M., Bras, R. L., and Choi, Y. Clipscore: A reference-free evaluation metric for im-age captioning. In Moens, M., Huang, X., Specia, L., and Yih, S. W. (eds. ), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pp. 7514-7528. Associa-tion for Computational Linguistics, 2021. doi: 10. 18653/ V1/2021. EMNLP-MAIN. 595. URL https://doi. org/10. 18653/v1/2021. emnlp-main. 595. Ho, J. and Salimans, T. Classifier-free diffusion guidance. Co RR, abs/2207. 12598, 2022. doi: 10. 48550/ARXIV. 2207. 12598. URL https://doi. org/10. 48550/ ar Xiv. 2207. 12598. 9
Multi-Lo RA Composition for Image Generation Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds. ), Advances in Neural Information Processing Systems 33: An-nual Conference on Neural Information Processing Systems 2020, Neur IPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings. neurips. cc/paper/2020/hash/ 4c5bcfec8584af0d967f1ab10179ca4b-Abstract. html. Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. Lora: Low-rank adapta-tion of large language models. In The Tenth Interna-tional Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. Open Review. net, 2022. URL https://openreview. net/forum? id=n Ze VKee FYf9. Huang, C., Liu, Q., Lin, B. Y., Pang, T., Du, C., and Lin, M. Lorahub: Efficient cross-task generalization via dynamic lora composition. Co RR, abs/2307. 13269, 2023a. doi: 10. 48550/ARXIV. 2307. 13269. URL https://doi. org/10. 48550/ar Xiv. 2307. 13269. Huang, L., Chen, D., Liu, Y., Shen, Y., Zhao, D., and Zhou, J. Composer: Creative and controllable im-age synthesis with composable conditions. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds. ), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceed-ings of Machine Learning Research, pp. 13753-13773. PMLR, 2023b. URL https://proceedings. mlr. press/v202/huang23b. html. Huang, Z., Chan, K. C. K., Jiang, Y., and Liu, Z. Col-laborative diffusion for multi-modal face generation and editing. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pp. 6080-6090. IEEE, 2023c. doi: 10. 1109/CVPR52729. 2023. 00589. URL https:// doi. org/10. 1109/CVPR52729. 2023. 00589. Hyv¨arinen, A. Estimation of non-normalized statistical models by score matching. J. Mach. Learn. Res., 6:695-709, 2005. URL http://jmlr. org/papers/v6/ hyvarinen05a. html. Johnson, J., Gupta, A., and Fei-Fei, L. Image generation from scene graphs. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 1219-1228. Computer Vision Foundation / IEEE Com-puter Society, 2018. doi: 10. 1109/CVPR. 2018. 00133. URL http://openaccess. thecvf. com/content_cvpr_2018/html/Johnson_Image_ Generation_From_CVPR_2018_paper. html. Ku, M., Li, T., Zhang, K., Lu, Y., Fu, X., Zhuang, W., and Chen, W. Imagenhub: Standardizing the evaluation of conditional image generation models. Co RR, abs/2310. 01596, 2023. doi: 10. 48550/ARXIV. 2310. 01596. URL https://doi. org/10. 48550/ ar Xiv. 2310. 01596. Li, S., Du, Y., Tenenbaum, J. B., Torralba, A., and Mordatch, I. Composing ensembles of pre-trained models via iterative consensus. In The Eleventh In-ternational Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. Open Re-view. net, 2023. URL https://openreview. net/ pdf?id=gmw DKo-4c Y. Lin, K., Yang, Z., Li, L., Wang, J., and Wang, L. De-signbench: Exploring and benchmarking DALL-E 3 for imagining visual design. Co RR, abs/2310. 15144, 2023. doi: 10. 48550/ARXIV. 2310. 15144. URL https: //doi. org/10. 48550/ar Xiv. 2310. 15144. Liu, N., Li, S., Du, Y., Tenenbaum, J., and Torralba, A. Learning to compose visual relations. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds. 
), Advances in Neural Informa-tion Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, Neur IPS 2021, December 6-14, 2021, virtual, pp. 23166-23178, 2021. URL https://proceedings. neurips. cc/paper/2021/hash/ c3008b2c6f5370b744850a98a95b73ad-Abstract. html. Liu, N., Li, S., Du, Y., Torralba, A., and Tenenbaum, J. B. Compositional visual generation with compos-able diffusion models. In Avidan, S., Brostow, G. J., Ciss´e, M., Farinella, G. M., and Hassner, T. (eds. ), Com-puter Vision-ECCV 2022-17th European Confer-ence, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XVII, volume 13677 of Lecture Notes in Computer Science, pp. 423-439. Springer, 2022. doi: 10. 1007/ 978-3-031-19790-1 \26. URL https://doi. org/ 10. 1007/978-3-031-19790-1_26. Liu, N., Du, Y., Li, S., Tenenbaum, J. B., and Torralba, A. Unsupervised compositional concepts discovery with text-to-image generative models. Co RR, abs/2306. 05357, 2023. doi: 10. 48550/ARXIV. 2306. 05357. URL https: //doi. org/10. 48550/ar Xiv. 2306. 05357. Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., and Zhu, J. Dpm-solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, 10
Multi-Lo RA Composition for Image Generation D., Cho, K., and Oh, A. (eds. ), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, Neur IPS 2022, New Orleans, LA, USA, November 28-December 9, 2022, 2022a. URL http://papers. nips. cc/paper_files/paper/2022/hash/ 260a14acce2a89dad36adc8eefe7c59e-Abstract-Conference. html. Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., and Zhu, J. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models. Co RR, abs/2211. 01095, 2022b. doi: 10. 48550/ARXIV. 2211. 01095. URL https://doi. org/10. 48550/ar Xiv. 2211. 01095. Nichol, A. Q., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., Mc Grew, B., Sutskever, I., and Chen, M. GLIDE: towards photorealistic image generation and editing with text-guided diffusion models. In Chaud-huri, K., Jegelka, S., Song, L., Szepesv ´ari, C., Niu, G., and Sabato, S. (eds. ), International Conference on Ma-chine Learning, ICML 2022, 17-23 July 2022, Balti-more, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 16784-16804. PMLR, 2022. URL https://proceedings. mlr. press/ v162/nichol22a. html. Nie, W., Vahdat, A., and Anandkumar, A. Controllable and compositional generation with latent-space energy-based models. In Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, P., and Vaughan, J. W. (eds. ), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, Neur IPS 2021, December 6-14, 2021, virtual, pp. 13497-13510, 2021. URL https://proceedings. neurips. cc/paper/2021/hash/ 701d804549a4a23d3cae801dac6c2c75-Abstract. html. Open AI. GPT-4: Contributions and System Card. https://cdn. openai. com/contributions/gpt-4v. pdf, 2023a. Open AI. GPT-4v System Card. https://openai. com/research/gpt-4v-system-card, 2023b. Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, M. Hierarchical text-conditional image generation with CLIP latents. Co RR, abs/2204. 06125, 2022. doi: 10. 48550/ ARXIV. 2204. 06125. URL https://doi. org/10. 48550/ar Xiv. 2204. 06125. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Or-leans, LA, USA, June 18-24, 2022, pp. 10674-10685. IEEE, 2022. doi: 10. 1109/CVPR52688. 2022. 01042. URL https://doi. org/10. 1109/CVPR52688. 2022. 01042. Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., and Aberman, K. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In IEEE/CVF Conference on Computer Vision and Pat-tern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pp. 22500-22510. IEEE, 2023. doi: 10. 1109/CVPR52729. 2023. 02155. URL https:// doi. org/10. 1109/CVPR52729. 2023. 02155. Ryu, S. Merging loras. https://github. com/ cloneofsimo/lora, 2023. Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E. L., Ghasemipour, S. K. S., Lopes, R. G., Ayan, B. K., Salimans, T., Ho, J., Fleet, D. J., and Norouzi, M. Photorealistic text-to-image diffusion models with deep language understand-ing. In Neur IPS, 2022. URL http://papers. nips. cc/paper_files/paper/2022/hash/ ec795aeadae0b7d230fa35cbaf04c041-Abstract-Conference. html. Shah, V., Ruiz, N., Cole, F., Lu, E., Lazebnik, S., Li, Y., and Jampani, V. Ziplora: Any subject in any style by effectively merging loras. Co RR, abs/2311. 13600, 2023. doi: 10. 48550/ARXIV. 2311. 13600. 
URL https:// doi. org/10. 48550/ar Xiv. 2311. 13600. Sohl-Dickstein, J., Weiss, E. A., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In Bach, F. R. and Blei, D. M. (eds. ), Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pp. 2256-2265. JMLR. org, 2015. URL http://proceedings. mlr. press/ v37/sohl-dickstein15. html. Sohn, K., Ruiz, N., Lee, K., Chin, D. C., Blok, I., Chang, H., Barber, J., Jiang, L., Entis, G., Li, Y., Hao, Y., Essa, I., Rubinstein, M., and Krishnan, D. Styledrop: Text-to-image generation in any style. Co RR, abs/2306. 00983, 2023. doi: 10. 48550/ARXIV. 2306. 00983. URL https: //doi. org/10. 48550/ar Xiv. 2306. 00983. Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Er-mon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In 9th Interna-tional Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. Open Re-view. net, 2021. URL https://openreview. net/ forum?id=Px TIG12RRHS. 11
Multi-Lo RA Composition for Image Generation Tenenbaum, J. Building machines that learn and think like people. In Andr ´e, E., Koenig, S., Dastani, M., and Sukthankar, G. (eds. ), Proceedings of the 17th Interna-tional Conference on Autonomous Agents and Multi A-gent Systems, AAMAS 2018, Stockholm, Sweden, July 10-15, 2018, pp. 5. International Foundation for Au-tonomous Agents and Multiagent Systems Richland, SC, USA / ACM, 2018. URL http://dl. acm. org/ citation. cfm?id=3237389. Wang, Z., Jiang, Y., Lu, Y., Shen, Y., He, P., Chen, W., Wang, Z., and Zhou, M. In-context learning unlocked for diffusion models. Co RR, abs/2305. 01115, 2023. doi: 10. 48550/ARXIV. 2305. 01115. URL https://doi. org/10. 48550/ar Xiv. 2305. 01115. Yang, Z., Liu, D., Wang, C., Yang, J., and Tao, D. Modeling image composition for complex scene generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pp. 7754-7763. IEEE, 2022. doi: 10. 1109/ CVPR52688. 2022. 00761. URL https://doi. org/ 10. 1109/CVPR52688. 2022. 00761. Zhang, J., Chen, S., Liu, J., and He, J. Composing parameter-efficient modules with arithmetic operations. Co RR, abs/2306. 14870, 2023a. doi: 10. 48550/ARXIV. 2306. 14870. URL https://doi. org/10. 48550/ ar Xiv. 2306. 14870. Zhang, X., Lu, Y., Wang, W., Yan, A., Yan, J., Qin, L., Wang, H., Yan, X., Wang, W. Y., and Petzold, L. R. Gpt-4v(ision) as a generalist evaluator for vision-language tasks. Co RR, abs/2311. 01361, 2023b. doi: 10. 48550/ ARXIV. 2311. 01361. URL https://doi. org/10. 48550/ar Xiv. 2311. 01361. Zhong, M., An, C., Chen, W., Han, J., and He, P. Seeking neural nuggets: Knowledge transfer in large language models from a parametric perspective. Co RR, abs/2310. 11451, 2023. doi: 10. 48550/ARXIV. 2310. 11451. URL https://doi. org/10. 48550/ ar Xiv. 2310. 11451. 12
A. Appendix

[Figure 7: Case study on composing 2 LoRAs (character and object "Bubble Gum") in the realistic style; LoRA Merge vs. ours.]

[Figure 8: Case study on composing 2 LoRAs (character and clothing) in the anime style; LoRA Merge vs. ours.]
[Figure 9: Case study on composing 3 LoRAs (character, clothing, and backgrounds "Library Bookshelf" / "Forest & River") in the realistic style; LoRA Merge vs. ours.]

[Figure 10: Case study on composing 3 LoRAs (character, background "Auroral", and objects "Hamburger" / "Toast") in the anime style; LoRA Merge vs. ours.]
Multi-Lo RA Composition for Image Generation Table 3: Detailed descriptions of each Lo RA in the Compos Lo RA. Lo RA Category Trigger Words Source Anime Style Subset Kamado Nezuko Character kamado nezuko, black hair, pink eyes, forehead Link Texas the Omertosa in Arknights Character omertosa, 1girl, wolf ears, long hair Link Son Goku Character son goku, spiked hair, muscular male, wristband Link Garreg Mach Monastery Uniform Clothing gmuniform, blue thighhighs, long sleeves Link Zero Suit (Metroid) Clothing zero suit, blue gloves, high heels Link Hand-drawn Style Style lineart, hand-drawn style Link Chinese Ink Wash Style Style shuimobysim, traditional chinese ink painting Link Bamboolight Background Background bamboolight, outdoors, bamboo Link Auroral Background Background auroral, starry sky, outdoors Link Huge Two-Handed Burger Object two-handed burger, holding a huge burger with both hands Link Toast Object toast, toast in mouth Link Realistic Style Subset IU (Lee Ji Eun, Korean singer) Character iu1, long straight black hair, hazel eyes, diamond stud earrings Link Scarlett Johansson Character scarlett, short red hair, blue eyes Link The Rock (Dwayne Johnson) Character th3r0ck with no hair, muscular male, serious look on his face Link Thai University Uniform Clothing mahalaiuniform, white shirt short sleeves, black pencil skirt Link School Dress Clothing school uniform, white shirt, red tie, blue pleated microskirt Link Japanese Film Color Style Style film overlay, film grain Link Bright Style Style bright lighting Link Library Bookshelf Background Background lib bg, library bookshelf Link Forest Background Background slg, river, forest Link Umbrella Object transparent umbrella Link Bubble Gum Object blow bubble gum Link 15
Multi-Lo RA Composition for Image Generation Table 4: The full version of evaluation prompts for comparative evaluation with GPT-4V. Evaluation Prompt I need assistance in comparatively evaluating two text-to-image models based on their ability to compose different elements into a single image. The elements and their key features are as follows: 1. Character (Kamado Nezuko) : kamado nezuko, black hair, pink eyes, forehead 2. Background (Bamboo Background) : bamboolight, outdoors, bamboo 3. Object (Huge Two-Handed Burger) : two-handed burger, holding a huge burger Please help me rate both given images on the following evaluation dimensions and criteria: Composition Quality:-Score on a scale of 0 to 10, in 0. 5 increments, where 10 is the best and 0 is the worst.-Deduct 3 points if any element is missing or incorrectly depicted.-Deduct 1 point for each missing or incorrect feature within an element.-Deduct 1 point for minor inconsistencies or lack of harmony between elements.-Additional deductions can be made for compositions that lack coherence, creativity, or realism. Image Quality:-Score on a scale of 0 to 10, in 0. 5 increments, where 10 is the best and 0 is the worst.-Deduct 3 points for each deformity in the image (e. g., extra limbs or fingers, distorted face, incorrect proportions).-Deduct 2 points for noticeable issues with texture, lighting, or color.-Deduct 1 point for each minor flaw or imperfection.-Additional deductions can be made for any issues affecting the overall aesthetic or clarity of the image. Please format the evaluation as follows: For Image 1: [Explanation of evaluation ] For Image 2: [Explanation of evaluation ] Scores: Image 1: Composition Quality: [ score ]/10, Image Quality: [ score ]/10 Image 2: Composition Quality: [ score ]/10, Image Quality: [ score ]/10 Based on the above guidelines, help me to conduct a step-by-step comparative evaluation of the given images. The scoring should follow two principles: 1. Please evaluate critically. 2. Try not to let the two models end in a tie on both dimensions. 16
Multi-Lo RA Composition for Image Generation Table 5: The full version of evaluation results from GPT-4V for comparative evaluation. Evaluation Results from GPT-4V To evaluate the images provided, I'll assess the composition quality and image quality based on the criteria outlined in your instructions. I will identify each element and its key features, and then rate the images accordingly. For Image 1 : Composition Quality:-Elements: All key elements are present: character, background, and object (0 points deducted).-Features within the elements:-The character's hair color is not black as specified (-1 point ).-The burger shows a deformed connection which detracts from the quality of the object's depiction (-3 point ).-The character's fingers are deformed (-1 point ). The bamboo background is accurate, fulfilling the outdoor setting criteria. Despite this, the main issues lie with the incorrect hair color and the deformities in the burger and fingers, leading to deductions. Image Quality:-The image is free from texture, lighting, or color issues (0 points deducted).-Deformities are present in the fingers and the burger, which significantly impacts the image's quality. (-3 point ). The clarity of the image is maintained, but the deformities notably affect the overall aesthetic quality. For Image 2 : Composition Quality:-Elements: All key elements are present: character, background, and object (0 points deducted).-Features within the elements:-The character has black hair and pink eyes; the forehead is shown, and the burger is depicted accurately (0 points deducted). There are no inconsistencies or lack of harmony between elements, and the composition is coherent and aesthetically pleasing. Image Quality:-There are no deformities in the image (0 points deducted).-The texture, lighting, and color are well executed (0 points deducted). The image maintains a high level of clarity and overall aesthetic appeal, with no visible imperfections or issues. Scores: Image 1: Composition Quality: 5/10, Image Quality: 7/10 Image 2: Composition Quality: 10/10, Image Quality: 10/10 In summary, Image 2 achieves perfect scores across both dimensions, demonstrating excellent composition and image quality. Image 1, while containing all the key elements, suffers in its composition due to incorrect features and deformities, and in image quality due to the noted deformities. 17