---
library_name: diffusers
license: openrail++
language:
- en
tags:
  - text-to-image
  - stable-diffusion
  - safetensors
  - stable-diffusion-xl
base_model: stabilityai/stable-diffusion-xl-base-1.0
widget:
- text: face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck
  parameters:
    negative_prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
  output:
    url: https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/cR_r0k0CSapphAaFrkN1h.png
  example_title: 1girl
- text: face focus, bishounen, masterpiece, best quality, 1boy, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck
  parameters:
    negative_prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
  output:
    url: https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/EteXoZZN4SwlkqfbPpNak.png  
  example_title: 1boy
---

<style>
  .title-container {
    display: flex;
    justify-content: center;
    align-items: center;
    height: 100vh; /* Adjust this value to position the title vertically */
  }
  
  .title {
    font-size: 2.5em;
    text-align: center;
    color: #333;
    font-family: 'Helvetica Neue', sans-serif;
    text-transform: uppercase;
    letter-spacing: 0.1em;
    padding: 0.5em 0;
    background: transparent;
  }
  
  .title span {
    background: -webkit-linear-gradient(45deg, #7ed56f, #28b485);
    -webkit-background-clip: text;
    -webkit-text-fill-color: transparent;
  }
  
  .custom-table {
    table-layout: fixed;
    width: 100%;
    border-collapse: collapse;
    margin-top: 2em;
  }
  
  .custom-table td {
    width: 50%;
    vertical-align: top;
    padding: 10px;
    box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
  }

  .custom-image-container {
    position: relative;
    width: 100%;
    margin-bottom: 0em;
    overflow: hidden;
    border-radius: 10px;
    transition: transform .7s;
    /* Smooth transition for the container */
  }

  .custom-image-container:hover {
    transform: scale(1.05);
    /* Scale the container on hover */
  }

  .custom-image {
    width: 100%;
    height: auto;
    object-fit: cover;
    border-radius: 10px;
    transition: transform .7s;
    margin-bottom: 0em;
  }

  .nsfw-filter {
    filter: blur(8px); /* Apply a blur effect */
    transition: filter 0.3s ease; /* Smooth transition for the blur effect */
  }

  .custom-image-container:hover .nsfw-filter {
    filter: none; /* Remove the blur effect on hover */
  }
  
  .overlay {
    position: absolute;
    bottom: 0;
    left: 0;
    right: 0;
    color: white;
    width: 100%;
    height: 40%;
    display: flex;
    flex-direction: column;
    justify-content: center;
    align-items: center;
    font-size: 1vw;
    font-weight: bold;
    text-align: center;
    opacity: 0;
    /* Hidden by default; revealed on hover */
    background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%);
    transition: opacity .5s;
  }
  .custom-image-container:hover .overlay {
    opacity: 1;
    /* Reveal the overlay on hover */
  }
  .overlay-text {
    background: linear-gradient(45deg, #7ed56f, #28b485);
    -webkit-background-clip: text;
    color: transparent;
    /* Fallback for browsers that do not support this effect */
    text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7);
    /* Enhanced text shadow for better legibility */
  }
  .overlay-subtext {
    font-size: 0.75em;
    margin-top: 0.5em;
    font-style: italic;
  }
    
  .overlay,
  .overlay-subtext {
    text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5);
  }
    
</style>

<h1 class="title">
  <span>Animagine XL 2.0</span>
</h1>
<table class="custom-table">
  <tr>
    <td>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/fmkK9WYAPgwbrDcKOybBZ.png" alt="sample1">
      </div>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/TFaH_13XbFh0_NSn4Tzav.png" alt="sample4">
      </div>
    </td>
    <td>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/twkZ4xvmUBTWZZ88DG0v-.png" alt="sample2">
      </div>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/5LyRRqLwt73u-eOy1HZ_7.png" alt="sample3">
      </div>
    </td>
    <td>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/f8aLXc_Slewo7iVxlE246.png" alt="sample1">
      </div>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/PYI5I7VR_zdEZUidn8fIr.png" alt="sample4">
      </div>
    </td>
  </tr>
</table>

## Overview 

**Animagine XL 2.0** is an advanced latent text-to-image diffusion model that specializes in generating high-resolution, aesthetically rich, and detailed anime images. It builds on its predecessor, Animagine XL 1.0, by incorporating advancements from Stable Diffusion XL 1.0. Fine-tuned on a comprehensive anime-style image dataset, Animagine XL 2.0 captures the wide range of styles found in anime art, significantly elevating both image quality and artistic expression.

## Model Details

- **Developed by:** [Linaqruf](https://github.com/Linaqruf)
- **Model type:** Diffusion-based text-to-image generative model
- **Model Description:** This is a model that excels in creating detailed and high-quality anime images from text descriptions. It's fine-tuned to understand and interpret a wide range of descriptive prompts, turning them into stunning visual art.
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Finetuned from model:** [Stable Diffusion XL 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)

## LoRA Collection

<table class="custom-table">
  <tr>
    <td>
      <div class="custom-image-container">
        <a href="https://huggingface.co/Linaqruf/style-enhancer-xl-lora">
          <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/7k2c5pW6zMpOiuW9kVsrs.png" alt="sample1">
          <div class="overlay"> Style Enhancer </div>
        </a>
      </div>
    </td>
    <td>
      <div class="custom-image-container">
        <a href="https://huggingface.co/Linaqruf/anime-detailer-xl-lora">
          <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/2yAWKA84ux1wfzaMD3cNu.png" alt="sample1">
          <div class="overlay"> Anime Detailer </div>
        </a>
      </div>
    </td>
    <td>
      <div class="custom-image-container">
        <a href="https://huggingface.co/Linaqruf/sketch-style-xl-lora">
          <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/Iv6h6wC4HTq0ue5UABe_W.png" alt="sample1">
          <div class="overlay"> Sketch Style </div>
        </a>
      </div>
    </td>
    <td>
      <div class="custom-image-container">
        <a href="https://huggingface.co/Linaqruf/pastel-style-xl-lora">
          <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/0Bu6fj33VHC2rTXoD-anR.png" alt="sample1">
          <div class="overlay"> Pastel Style </div>
        </a>
      </div>
    </td>
    <td>
      <div class="custom-image-container">
        <a href="https://huggingface.co/Linaqruf/anime-nouveau-xl-lora">
          <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/Mw_U_1VcrcBGt-i6Lu06d.png" alt="sample1">
          <div class="overlay"> Anime Nouveau </div>
        </a>
      </div>
    </td>
  </tr>
</table>

## Gradio & Colab Integration

Animagine XL is accessible via [Gradio](https://github.com/gradio-app/gradio) Web UI and Google Colab, offering user-friendly interfaces for image generation:

- **Gradio Web UI**: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/Linaqruf/Animagine-XL)
- **Google Colab**: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https%3A//huggingface.co/Linaqruf/animagine-xl/blob/main/Animagine_XL_demo.ipynb)

## 🧨 Diffusers Installation

Make sure the latest `diffusers` library is installed, along with the other required packages:

```bash
pip install diffusers --upgrade
pip install transformers accelerate safetensors
```

The following Python script shows how to run inference with Animagine XL 2.0. The model config already defaults to `EulerAncestralDiscreteScheduler`, but the example sets it explicitly for clarity.

```py
import torch
from diffusers import (
    StableDiffusionXLPipeline, 
    EulerAncestralDiscreteScheduler,
    AutoencoderKL
)

# Load VAE component
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", 
    torch_dtype=torch.float16
)

# Configure the pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Linaqruf/animagine-xl-2.0", 
    vae=vae,
    torch_dtype=torch.float16, 
    use_safetensors=True, 
    variant="fp16"
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to('cuda')

# Define prompts and generate image
prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"

image = pipe(
    prompt, 
    negative_prompt=negative_prompt, 
    width=1024,
    height=1024,
    guidance_scale=12,
    num_inference_steps=50
).images[0]

# Save the generated image (the pipeline returns a PIL image)
image.save("output.png")
```

## Usage Guidelines

### Prompt Guidelines

Animagine XL 2.0 responds effectively to natural language descriptions for image generation. For example:
```
A girl with mesmerizing blue eyes looks at the viewer. Her long, white hair is adorned with blue butterfly hair ornaments.
```

However, to achieve optimal results, it's recommended to use Danbooru-style tagging in your prompts, as the model is trained with images labeled using these tags. For instance:
```
1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck
```

During dataset processing, quality and rating modifiers were added to the image captions, so including these tags in a prompt steers generation toward the corresponding criteria:


### Quality Modifiers

| Quality Modifier | Score Criterion |
| ---------------- | --------------- |
| masterpiece      | > 150           |
| best quality     | 100 to 150      |
| high quality     | 75 to 100       |
| medium quality   | 25 to 75        |
| normal quality   | 0 to 25         |
| low quality      | -5 to 0         |
| worst quality    | < -5            |

### Rating Modifiers

| Rating Modifier | Rating Criterion |
| --------------- | ---------------- |
| -               | general          |
| -               | sensitive        |
| nsfw            | questionable     |
| nsfw            | explicit         |
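
For a rough sense of how these modifiers were applied, the sketch below maps an aesthetic score and a content rating to the tags in the two tables above. This is only an illustration: the exact labeling pipeline is not published, and the function names and boundary handling are hypothetical.

```py
from typing import Optional


def quality_tag(score: float) -> str:
    """Map an aesthetic score to a quality modifier, following the table above."""
    if score > 150:
        return "masterpiece"
    if score >= 100:
        return "best quality"
    if score >= 75:
        return "high quality"
    if score >= 25:
        return "medium quality"
    if score >= 0:
        return "normal quality"
    if score >= -5:
        return "low quality"
    return "worst quality"


def rating_tag(rating: str) -> Optional[str]:
    """Map a content rating to a tag; 'general' and 'sensitive' get none."""
    return "nsfw" if rating in ("questionable", "explicit") else None


print(quality_tag(120))            # best quality
print(rating_tag("questionable"))  # nsfw
```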

To guide the model towards generating high-aesthetic images, use negative prompts like:

```
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
```
For higher quality outcomes, prepend prompts with:

```
masterpiece, best quality
```
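
Putting these pieces together, a full prompt for the Diffusers pipeline shown earlier could be assembled as in the sketch below. It reuses the `pipe` object from the Diffusers example; the subject tags are simply the sample prompt from above.

```py
# Assumes `pipe` is the StableDiffusionXLPipeline loaded in the Diffusers example above.
quality_tags = "masterpiece, best quality"
subject_tags = (
    "1girl, green hair, sweater, looking at viewer, upper body, "
    "beanie, outdoors, night, turtleneck"
)
prompt = f"{quality_tags}, {subject_tags}"

negative_prompt = (
    "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, "
    "fewer digits, cropped, worst quality, low quality, normal quality, "
    "jpeg artifacts, signature, watermark, username, blurry"
)

image = pipe(prompt, negative_prompt=negative_prompt, width=1024, height=1024).images[0]
image.save("tagged_prompt.png")
```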

<table class="custom-table">
  <tr>
    <td>
      <div class="custom-image-container">
        <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/m6BGzrJgYTb9QrZprVAqZ.png" alt="sample1">
        <div class="overlay" style="font-size: 3vw;"> Twilight Contemplation <div class="overlay-subtext" style="font-size: 0.75em; font-style: italic;">"Stelle, Amidst Shooting Stars and Mountain Silhouettes"</div>
        </div>
      </div>
    </td>
  </tr>
</table>

### Multi Aspect Resolution

This model supports generating images at the following dimensions:
| Dimensions      | Aspect Ratio    |
|-----------------|-----------------|
| 1024 x 1024     | 1:1 Square      |
| 1152 x 896      | 9:7             |
| 896 x 1152      | 7:9             |
| 1216 x 832      | 19:13           |
| 832 x 1216      | 13:19           |
| 1344 x 768      | 7:4 Horizontal  |
| 768 x 1344      | 4:7 Vertical    |
| 1536 x 640      | 12:5 Horizontal |
| 640 x 1536      | 5:12 Vertical   |
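
To generate at one of these non-square sizes, pass the corresponding width and height pair to the pipeline. The sketch below reuses `pipe`, `prompt`, and `negative_prompt` from the earlier examples and picks the 832 x 1216 portrait option purely as an illustration.

```py
# Assumes `pipe`, `prompt`, and `negative_prompt` from the earlier examples.
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=832,     # 13:19 portrait, one of the supported resolutions
    height=1216,
).images[0]
image.save("portrait_832x1216.png")
```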

## Examples 


## Training and Hyperparameters

- **Animagine XL 2.0** was trained on a single A100 GPU with 80GB of memory. The training process consisted of two stages:
  - **Feature Alignment Stage**: Used 170k images to familiarize the model with basic anime concepts.
  - **Aesthetic Tuning Stage**: Used 83k high-quality synthetic images to refine the model's art style.

### Hyperparameters

- Global Epochs: 20
- Learning Rate: 1e-6
- Batch Size: 32
- Train Text Encoder: True
- Image Resolution: 1024 (bucket resolution: 2048 x 512)
- Mixed-Precision: fp16

*Note: The model's training configuration is subject to future enhancements.*

## Model Comparison (Animagine XL 1.0 vs Animagine XL 2.0)

### Image Comparison

In the second iteration (Animagine XL 2.0), we have addressed the 'broken neck' issue prevalent in poses like "looking back" and "from behind". Now, characters are consistently "looking at viewer" by default, enhancing the naturalism and accuracy of the generated images.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6365c8dbf31ef76df4042821/oSssetgmuLEV6RlaSC5Tr.png)

### Training Config

| Configuration Item    | Animagine XL 1.0   | Animagine XL 2.0        |
|-----------------------|--------------------|--------------------------|
| **GPU**               | A100 40G           | A100 80G                 |
| **Dataset**           | 8000 images        | 170k + 83k images        |
| **Global Epochs**     | Not Applicable     | 20                       |
| **Learning Rate**     | 4e-7               | 1e-6                     |
| **Batch Size**        | 16                 | 32                       |
| **Train Text Encoder**| False              | True                     |
| **Train Special Tags**| False              | True                     |
| **Image Resolution**  | 1024               | 1024                     |
| **Bucket Resolution** | 1024 x 256         | 2048 x 512               |
| **Caption Dropout**   | 0.5                | 0                        |

## Direct Use

The Animagine XL 2.0 model, with its advanced text-to-image diffusion capabilities, is highly versatile and can be applied in various fields:

- **Art and Design:** This model is a powerful tool for artists and designers, enabling the creation of unique and high-quality anime-style artworks. It can serve as a source of inspiration and a means to enhance creative processes.
- **Education:** In educational contexts, Animagine XL 2.0 can be used to develop engaging visual content, assisting in teaching concepts related to art, technology, and media.
- **Entertainment and Media:** The model's ability to generate detailed anime images makes it ideal for use in animation, graphic novels, and other media production, offering a new avenue for storytelling.
- **Research:** Academics and researchers can leverage Animagine XL 2.0 to explore the frontiers of AI-driven art generation, study the intricacies of generative models, and assess the model's capabilities and limitations.
- **Personal Use:** Anime enthusiasts can use Animagine XL 2.0 to bring their imaginative concepts to life, creating personalized artwork based on their favorite genres and styles.

## Limitations

The Animagine XL 2.0 model, while advanced in its capabilities, has certain limitations that users should be aware of:

- **Style Bias:** The model exhibits a bias towards a specific art style, as it was fine-tuned using approximately 80,000 images with a similar aesthetic. This may limit the diversity in the styles of generated images.
- **Rendering Challenges:** There are occasional inaccuracies in rendering hands or feet, which may not always be depicted with high fidelity.
- **Realism Constraint:** Animagine XL 2.0 is not designed for generating realistic images, given its focus on anime-style content.
- **Natural Language Limitations:** The model may not perform optimally when prompted with natural language descriptions, as it is tailored more towards anime-specific terminologies and styles.
- **Dataset Scope:** Currently, the model is primarily effective in generating content related to the 'Honkai' series and 'Genshin Impact' due to the dataset's scope. Expansion to include more diverse concepts is planned for future iterations.
- **NSFW Content Generation:** The model is not proficient in generating NSFW content, as it was not a focus during the training process, aligning with the intention to promote safe and appropriate content generation.

## Acknowledgements

We extend our gratitude to:

- **Chai AI:** For the open-source grant ([Chai AI](https://www.chai-research.com/)) supporting our research.
- **Kohya SS:** For providing the essential training script.
- **Camenduru Server Community:** For invaluable insights and support.
- **NovelAI:** For inspiring the Quality Tags feature.
- **Waifu Diffusion Team:** For inspiring the optimized training pipeline with larger datasets.
- **Shadow Lilac:** For the image classification model ([shadowlilac/aesthetic-shadow](https://huggingface.co/shadowlilac/aesthetic-shadow)) crucial in our quality assessment process.

<h1 class="title">
  <span>Anything you can Imagine!</span>
</h1>