---
datasets:
- huggan/few-shot-aurora
tags:
- aurora
- pytorch
- diffusers
- unconditional-image-generation
---

<center>

![Aurora](https://huggingface.co/li-yan/diffusion-aurora-256/resolve/main/doc/Aurora.gif)

![Aurora Photo](https://huggingface.co/li-yan/diffusion-aurora-256/resolve/main/doc/Aurora-by-Li-Yan.jpg)

</center>

# Description

Have you ever seen an aurora with your own eyes? Check out the picture above, which I took in Alaska in winter. Beautiful, right?

However, auroras are so rare that you can hardly see them even in far-northern places like Alaska.

Don't worry. Now we have generative models! Here are some pictures generated by this model:



| ![sample 1](https://huggingface.co/li-yan/diffusion-aurora-256/resolve/main/doc/sample_1.png) | ![sample 2](https://huggingface.co/li-yan/diffusion-aurora-256/resolve/main/doc/sample_2.png) | ![sample 3](https://huggingface.co/li-yan/diffusion-aurora-256/resolve/main/doc/sample_3.png) | ![sample 4](https://huggingface.co/li-yan/diffusion-aurora-256/resolve/main/doc/sample_4.png) |
|--|--|--|--|
| ![sample 5](https://huggingface.co/li-yan/diffusion-aurora-256/resolve/main/doc/sample_5.png) | ![sample 6](https://huggingface.co/li-yan/diffusion-aurora-256/resolve/main/doc/sample_6.png) | ![sample 7](https://huggingface.co/li-yan/diffusion-aurora-256/resolve/main/doc/sample_7.png) | ![sample 8](https://huggingface.co/li-yan/diffusion-aurora-256/resolve/main/doc/sample_8.png) |



# Model Details

This model generates 256 × 256 pixel pictures of auroras.

It was trained on the [huggan/few-shot-aurora](https://huggingface.co/datasets/huggan/few-shot-aurora) dataset.
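
If you want to look at the training data yourself, the dataset can be loaded with the `datasets` library. A minimal sketch (the `train` split and `image` column names are assumptions based on typical `huggan` few-shot datasets; check the dataset card if they differ):

```python
from datasets import load_dataset

# Load the few-shot aurora images from the Hugging Face Hub.
dataset = load_dataset("huggan/few-shot-aurora", split="train")

print(dataset)        # row count and column names
dataset[0]["image"]   # first training image as a PIL image (assumed "image" column)
```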

The training method is adapted from this diffusers training [example](https://colab.sandbox.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb).

You can check my training source code here: [<img src="https://colab.research.google.com/assets/colab-badge.svg">](https://colab.sandbox.google.com/github/Li-Yan/Diffusion-Model/blob/main/li_yan_diffusers_training_accelerate.ipynb)
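
For reference, the core of that training recipe looks roughly like the sketch below. It is a simplified outline of the standard `diffusers` DDPM training loop (a UNet2DModel predicting the added noise with an MSE loss); the model configuration, batch size, learning rate, and epoch count here are illustrative assumptions, not the exact values from my notebook:

```python
import torch
import torch.nn.functional as F
from datasets import load_dataset
from torchvision import transforms
from diffusers import UNet2DModel, DDPMScheduler

# Preprocess: resize to 256x256 and scale pixels to [-1, 1].
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

dataset = load_dataset("huggan/few-shot-aurora", split="train")
dataset.set_transform(
    lambda batch: {"images": [preprocess(img.convert("RGB")) for img in batch["image"]]}
)
train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True)

model = UNet2DModel(sample_size=256, in_channels=3, out_channels=3)
noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for epoch in range(10):
    for batch in train_dataloader:
        clean_images = batch["images"]
        noise = torch.randn_like(clean_images)
        timesteps = torch.randint(
            0, noise_scheduler.config.num_train_timesteps, (clean_images.shape[0],)
        )

        # Forward diffusion: corrupt the clean images at random timesteps.
        noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

        # The UNet learns to predict the noise that was added.
        noise_pred = model(noisy_images, timesteps).sample
        loss = F.mse_loss(noise_pred, noise)

        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```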

# Usage

## Option 1 (Slow)
```python
from diffusers import DDPMPipeline

# Load the full DDPM pipeline; sampling runs the default 1000 denoising steps, so it is slow.
pipeline = DDPMPipeline.from_pretrained('li-yan/diffusion-aurora-256')
image = pipeline().images[0]
image
```
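
If you want reproducible samples, you can pass a seeded generator and save the result (an optional addition; the seed and filename are arbitrary):

```python
import torch

# Fix the random seed so the same sample is produced each run.
generator = torch.manual_seed(42)
image = pipeline(generator=generator).images[0]
image.save("aurora_sample.png")
```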

## Option 2 (Fast)
```python
from diffusers import DiffusionPipeline, DDIMScheduler

# Swap in a DDIM scheduler so sampling needs far fewer denoising steps.
scheduler = DDIMScheduler.from_pretrained('li-yan/diffusion-aurora-256')
scheduler.set_timesteps(num_inference_steps=40)

pipeline = DiffusionPipeline.from_pretrained(
    'li-yan/diffusion-aurora-256', scheduler=scheduler)

# 40 DDIM steps instead of the 1000-step DDPM default.
images = pipeline(num_inference_steps=40).images
images[0]
```
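
Option 2 is fast because the DDIM scheduler samples in about 40 steps instead of the 1000-step DDPM loop that Option 1 runs. To generate several samples at once and tile them, something like the sketch below works; `make_image_grid` ships with recent `diffusers` releases (on older versions, assemble the grid with PIL instead):

```python
from diffusers import DiffusionPipeline, DDIMScheduler
from diffusers.utils import make_image_grid

scheduler = DDIMScheduler.from_pretrained('li-yan/diffusion-aurora-256')
pipeline = DiffusionPipeline.from_pretrained(
    'li-yan/diffusion-aurora-256', scheduler=scheduler)

# Generate four samples in one batch with 40 DDIM steps and tile them 2x2.
images = pipeline(batch_size=4, num_inference_steps=40).images
grid = make_image_grid(images, rows=2, cols=2)
grid.save("aurora_grid.png")
```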