Linaqruf committed on
Commit
5d85572
1 Parent(s): 83b822e

Update README.md

Files changed (1):
  1. README.md +261 -8
README.md CHANGED
@@ -53,13 +53,6 @@ widget:
   margin-top: 2em;
   }
 
- .custom-table {
- table-layout: fixed;
- width: 100%;
- border-collapse: collapse;
- margin-top: 2em;
- }
-
   .custom-table td {
   width: 50%;
   vertical-align: top;
@@ -99,6 +92,40 @@ widget:
   .custom-image-container:hover .nsfw-filter {
   filter: none; /* Remove the blur effect on hover */
   }
+
+ .overlay {
+   position: absolute;
+   bottom: 0;
+   left: 0;
+   right: 0;
+   color: white;
+   width: 100%;
+   height: 40%;
+   display: flex;
+   flex-direction: column;
+   justify-content: center;
+   align-items: center;
+   font-size: 1.5vw;
+   font-weight: bold;
+   text-align: center;
+   opacity: 0;
+   /* Hidden by default; revealed on hover */
+   background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%);
+   transition: opacity .5s;
+ }
+ .custom-image-container:hover .overlay {
+   opacity: 1;
+   /* Show the overlay when the image is hovered */
+ }
+ .overlay-text {
+   background: linear-gradient(45deg, #7ed56f, #28b485);
+   -webkit-background-clip: text;
+   color: transparent;
+   /* Transparent text color lets the gradient show through */
+   text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7);
+   /* Enhanced text shadow for better legibility */
+ }
+
  </style>
 
  <h1 class="title">

@@ -132,6 +159,232 @@ widget:
   </tr>
   </table>
 
- <hr>
+ ## Overview
+
+ **Animagine XL 2.0** is a latent text-to-image diffusion model specializing in high-resolution, aesthetically rich, and detailed anime images. It improves on its predecessor, Animagine XL 1.0, by building on Stable Diffusion XL 1.0. Fine-tuned on a comprehensive anime-style image dataset, Animagine XL 2.0 captures the wide range of styles found in anime art, raising both image quality and artistic expression.
+
+ ## Model Details
+
+ - **Developed by:** [Linaqruf](https://github.com/Linaqruf)
+ - **Model type:** Diffusion-based text-to-image generative model
+ - **Model Description:** This is a model that excels in creating detailed and high-quality anime images from text descriptions. It's fine-tuned to understand and interpret a wide range of descriptive prompts, turning them into stunning visual art.
+ - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
+ - **Finetuned from model:** [Stable Diffusion XL 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
+
+ ## LoRA Collection
+
+ <table class="custom-table">
+ <tr>
+ <td>
+ <div class="custom-image-container">
+ <a href="https://huggingface.co/Linaqruf/style-enhancer-xl-lora">
+ <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/7k2c5pW6zMpOiuW9kVsrs.png" alt="sample1">
+ <div class="overlay"> Style Enhancer </div>
+ </a>
+ </div>
+ </td>
+ <td>
+ <div class="custom-image-container">
+ <a href="https://huggingface.co/Linaqruf/anime-detailer-xl-lora">
+ <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/2yAWKA84ux1wfzaMD3cNu.png" alt="sample1">
+ <div class="overlay"> Anime Detailer </div>
+ </a>
+ </div>
+ </td>
+ <td>
+ <div class="custom-image-container">
+ <a href="https://huggingface.co/Linaqruf/sketch-style-xl-lora">
+ <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/Iv6h6wC4HTq0ue5UABe_W.png" alt="sample1">
+ <div class="overlay"> Sketch Style </div>
+ </a>
+ </div>
+ </td>
+ <td>
+ <div class="custom-image-container">
+ <a href="https://huggingface.co/Linaqruf/pastel-style-xl-lora">
+ <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/0Bu6fj33VHC2rTXoD-anR.png" alt="sample1">
+ <div class="overlay"> Pastel Style </div>
+ </a>
+ </div>
+ </td>
+ <td>
+ <div class="custom-image-container">
+ <a href="https://huggingface.co/Linaqruf/anime-nouveau-xl-lora">
+ <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/Mw_U_1VcrcBGt-i6Lu06d.png" alt="sample1">
+ <div class="overlay"> Anime Nouveau </div>
+ </a>
+ </div>
+ </td>
+ </tr>
+ </table>
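
These adapters are published as separate repositories. As a rough sketch (assuming the LoRA weights load with `diffusers`' `load_lora_weights`, and with an illustrative adapter choice, prompt, and scale), one of them can be applied on top of the base model:

```py
import torch
from diffusers import StableDiffusionXLPipeline

# Base Animagine XL 2.0 pipeline.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Linaqruf/animagine-xl-2.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16"
).to("cuda")

# Apply one LoRA from the collection above (here: Style Enhancer).
pipe.load_lora_weights("Linaqruf/style-enhancer-xl-lora")

image = pipe(
    "masterpiece, best quality, 1girl, green hair, sweater, looking at viewer",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality, low quality",
    cross_attention_kwargs={"scale": 0.6},  # assumed LoRA strength, tune to taste
).images[0]
```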
+
+ ## Gradio & Colab Integration
+
+ Animagine XL is accessible via [Gradio](https://github.com/gradio-app/gradio) Web UI and Google Colab, offering user-friendly interfaces for image generation:
+
+ - **Gradio Web UI**: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/Linaqruf/Animagine-XL)
+ - **Google Colab**: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https%3A//huggingface.co/Linaqruf/animagine-xl/blob/main/Animagine_XL_demo.ipynb)
+
+ ## 🧨 Diffusers Installation
+
+ Ensure the installation of the latest `diffusers` library, along with other essential packages:
+
+ ```bash
+ pip install diffusers --upgrade
+ pip install transformers accelerate safetensors
+ ```
+
+ The following Python script demonstrates how to run inference with Animagine XL 2.0. The default scheduler in the model config is EulerAncestralDiscreteScheduler, but it can be defined explicitly for clarity.
+
+ ```py
+ import torch
+ from diffusers import (
+     StableDiffusionXLPipeline,
+     EulerAncestralDiscreteScheduler,
+     AutoencoderKL
+ )
+
+ # Load VAE component
+ vae = AutoencoderKL.from_pretrained(
+     "madebyollin/sdxl-vae-fp16-fix",
+     torch_dtype=torch.float16
+ )
+
+ # Configure the pipeline
+ pipe = StableDiffusionXLPipeline.from_pretrained(
+     "Linaqruf/animagine-xl-2.0",
+     vae=vae,
+     torch_dtype=torch.float16,
+     use_safetensors=True,
+     variant="fp16"
+ )
+ pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
+ pipe.to('cuda')
+
+ # Define prompts and generate image
+ prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck"
+ negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
+
+ image = pipe(
+     prompt,
+     negative_prompt=negative_prompt,
+     width=1024,
+     height=1024,
+     guidance_scale=12,
+     num_inference_steps=50
+ ).images[0]
+ ```
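
The call returns a standard PIL image. A small follow-up sketch (the file name is arbitrary, and the offload call is an optional `diffusers` feature rather than part of the original example) for saving the result and easing VRAM pressure on smaller GPUs:

```py
# Persist the generated PIL image to disk.
image.save("output.png")

# Optional: on GPUs with limited VRAM, offload submodules to CPU between steps
# instead of keeping the whole pipeline on the GPU (use in place of `pipe.to('cuda')`).
# pipe.enable_model_cpu_offload()
```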
+
+ ## Usage Guidelines
+
+ ### Prompt Guidelines
+
+ Animagine XL 2.0 responds effectively to natural language descriptions for image generation. For example:
+ ```
+ A girl with mesmerizing blue eyes looks at the viewer. Her long, white hair is adorned with blue butterfly hair ornaments.
+ ```
+
+ However, to achieve optimal results, it's recommended to use Danbooru-style tagging in your prompts, as the model was trained on images labeled with these tags. For instance:
+ ```
+ 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck
+ ```
+
+ Quality and rating modifiers were incorporated into the captions during dataset processing, so including them in a prompt steers generation toward the corresponding criteria:
+
+ ### Quality Modifiers
+
+ | Quality Modifier | Score Criterion |
+ | ---------------- | --------------- |
+ | masterpiece      | > 150           |
+ | best quality     | 100 to 150      |
+ | high quality     | 75 to 100       |
+ | medium quality   | 25 to 75        |
+ | normal quality   | 0 to 25         |
+ | low quality      | -5 to 0         |
+ | worst quality    | < -5            |
+
+ ### Rating Modifiers
+
+ | Rating Modifier | Rating Criterion |
+ | --------------- | ---------------- |
+ | (none)          | general          |
+ | (none)          | sensitive        |
+ | nsfw            | questionable     |
+ | nsfw            | explicit         |
+
+ To guide the model towards generating high-aesthetic images, use negative prompts like:
+
+ ```
+ lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
+ ```
+
+ For higher quality outcomes, prepend prompts with:
+
+ ```
+ masterpiece, best quality
+ ```
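
Combining these guidelines, here is a minimal sketch of prompt assembly (the helper variable names are illustrative, and `pipe` is the pipeline configured in the Diffusers example above):

```py
# Prepend the quality modifiers to Danbooru-style tags.
quality_prefix = "masterpiece, best quality"
tags = "1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck"
prompt = f"{quality_prefix}, {tags}"

# Recommended negative prompt from above.
negative_prompt = (
    "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, "
    "fewer digits, cropped, worst quality, low quality, normal quality, "
    "jpeg artifacts, signature, watermark, username, blurry"
)

image = pipe(prompt, negative_prompt=negative_prompt).images[0]
```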
+
+ ### Multi Aspect Resolution
+
+ This model supports generating images at the following dimensions:
+
+ | Dimensions      | Aspect Ratio    |
+ |-----------------|-----------------|
+ | 1024 x 1024     | 1:1 Square      |
+ | 1152 x 896      | 9:7             |
+ | 896 x 1152      | 7:9             |
+ | 1216 x 832      | 19:13           |
+ | 832 x 1216      | 13:19           |
+ | 1344 x 768      | 7:4 Horizontal  |
+ | 768 x 1344      | 4:7 Vertical    |
+ | 1536 x 640      | 12:5 Horizontal |
+ | 640 x 1536      | 5:12 Vertical   |
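
Any of these sizes can be passed directly as `width` and `height`. For example, a portrait sketch at 832 x 1216 (reusing `pipe`, `prompt`, and `negative_prompt` from the examples above):

```py
# Generate at a supported portrait bucket (13:19) instead of the 1:1 default.
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    guidance_scale=12,
    num_inference_steps=50
).images[0]
```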
+
+ ## Training and Hyperparameters
+
+ - **Animagine XL** was trained on a single A100 GPU with 80GB of memory. The training process encompassed two stages:
+   - **Feature Alignment Stage**: Utilized 170k images to acquaint the model with basic anime concepts.
+   - **Aesthetic Tuning Stage**: Employed an 83k-image, high-quality synthetic dataset to refine the model's art style.
+
+ ### Hyperparameters
+
+ - Global Epochs: 20
+ - Learning Rate: 1e-6
+ - Batch Size: 32
+ - Train Text Encoder: True
+ - Image Resolution: 1024 (2048 x 512)
+ - Mixed-Precision: fp16
+
+ *Note: The model's training configuration is subject to future enhancements.*
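
For quick reference, the same reported settings expressed as a plain Python mapping (illustrative only; the key names are ad hoc and this is not a runnable training script):

```py
# Reported training configuration for Animagine XL 2.0, restated from the list above.
training_config = {
    "hardware": "1x A100 80GB",
    "global_epochs": 20,
    "learning_rate": 1e-6,
    "batch_size": 32,
    "train_text_encoder": True,
    "resolution": 1024,
    "mixed_precision": "fp16",
}
```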
+
+ ## Direct Use
+
+ The Animagine XL 2.0 model, with its advanced text-to-image diffusion capabilities, is highly versatile and can be applied in various fields:
+
+ - **Art and Design:** This model is a powerful tool for artists and designers, enabling the creation of unique and high-quality anime-style artworks. It can serve as a source of inspiration and a means to enhance creative processes.
+ - **Education:** In educational contexts, Animagine XL 2.0 can be used to develop engaging visual content, assisting in teaching concepts related to art, technology, and media.
+ - **Entertainment and Media:** The model's ability to generate detailed anime images makes it ideal for use in animation, graphic novels, and other media production, offering a new avenue for storytelling.
+ - **Research:** Academics and researchers can leverage Animagine XL 2.0 to explore the frontiers of AI-driven art generation, study the intricacies of generative models, and assess the model's capabilities and limitations.
+ - **Personal Use:** Anime enthusiasts can use Animagine XL 2.0 to bring their imaginative concepts to life, creating personalized artwork based on their favorite genres and styles.
+
+ ## Limitations
+
+ The Animagine XL 2.0 model, while advanced in its capabilities, has certain limitations that users should be aware of:
+
+ - **Style Bias:** The model exhibits a bias towards a specific art style, as it was fine-tuned using approximately 80,000 images with a similar aesthetic. This may limit the diversity in the styles of generated images.
+ - **Rendering Challenges:** There are occasional inaccuracies in rendering hands or feet, which may not always be depicted with high fidelity.
+ - **Realism Constraint:** Animagine XL 2.0 is not designed for generating realistic images, given its focus on anime-style content.
+ - **Natural Language Limitations:** The model may not perform optimally when prompted with natural language descriptions, as it is tailored more towards anime-specific terminologies and styles.
+ - **Dataset Scope:** Currently, the model is primarily effective in generating content related to the 'Honkai' series and 'Genshin Impact' due to the dataset's scope. Expansion to include more diverse concepts is planned for future iterations.
+ - **NSFW Content Generation:** The model is not proficient in generating NSFW content, as this was not a focus during training, in line with the intention to promote safe and appropriate content generation.
+
+ ## Acknowledgements

+ We extend our gratitude to:

+ - **Chai AI:** For the open-source grant ([Chai AI](https://www.chai-research.com/)) supporting our research.
+ - **Kohya SS:** For providing the essential training script.
+ - **Camenduru Server Community:** For invaluable insights and support.
+ - **NovelAI:** For inspiring the Quality Tags feature.
+ - **Waifu Diffusion Team:** For inspiring the optimal training pipeline with larger datasets.
+ - **Shadow Lilac:** For the image classification model ([Hugging Face - shadowlilac/aesthetic-shadow](https://huggingface.co/shadowlilac/aesthetic-shadow)), crucial to our quality assessment process.