---
license: creativeml-openrail-m
base_model: kyujinpy/KO-anything-v4-5
training_prompt: A bear is playing guitar
tags:
- tune-a-video
- text-to-video
- diffusers
- korean
inference: false
---

# Tune-A-VideKO-anything
GitHub: [Kyujinpy/Tune-A-VideKO](https://github.com/KyujinHan/Tune-A-VideKO)

## Model Description
- Base model: [kyujinpy/KO-anything-v4-5](https://huggingface.co/kyujinpy/KO-anything-v4-5)
- Training prompt: A bear is playing guitar
![sample-train](bear.gif)

## Samples

![sample-500](video1.gif)
Test prompt: 1์†Œ๋…€๋Š” ๊ธฐํƒ€๋ฅผ ์—ฐ์ฃผํ•˜๊ณ  ์žˆ๋‹ค, ํฐ ๋จธ๋ฆฌ, ์ค‘๊ฐ„ ๋จธ๋ฆฌ, ๊ณ ์–‘์ด ๊ท€, ๊ท€์—ฌ์šด, ์Šค์นดํ”„, ์žฌํ‚ท, ์•ผ์™ธ, ๊ฑฐ๋ฆฌ, ์†Œ๋…€ (English: "1girl is playing the guitar, white hair, medium hair, cat ears, cute, scarf, jacket, outdoors, street, girl")

![sample-500](video2.gif)
Test prompt: 1์†Œ๋…€๊ฐ€ ๊ธฐํƒ€ ์—ฐ์ฃผ๋ฅผ ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค, ๋ฐ”๋‹ค, ๋ˆˆ์„ ๊ฐ์Œ, ๊ธด ๋จธ๋ฆฌ, ์นด๋ฆฌ์Šค๋งˆ (English: "1girl is playing the guitar, ocean, eyes closed, long hair, charisma")

![sample-500](video3.gif)
Test prompt: 1์†Œ๋…„, ๊ธฐํƒ€ ์—ฐ์ฃผ, ์ž˜์ƒ๊น€, ์•‰์•„์žˆ๋Š”, ๋นจ๊ฐ„์ƒ‰ ๊ธฐํƒ€, ํ•ด๋ณ€ (English: "1boy, playing guitar, handsome, sitting, red guitar, beach")

## Usage
Clone the [Tune-A-Video](https://github.com/showlab/Tune-A-Video) GitHub repo:
```bash
git clone https://github.com/showlab/Tune-A-Video.git
```
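Then install the repo's dependencies (this assumes the `requirements.txt` shipped with the Tune-A-Video repo; `xformers` is additionally required by the memory-efficient attention call in the snippet below):

```bash
cd Tune-A-Video
pip install -r requirements.txt
```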

Then run the inference code:

```python
import torch

from tuneavideo.models.unet import UNet3DConditionModel
from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline
from tuneavideo.util import save_videos_grid

pretrained_model_path = "kyujinpy/KO-anything-v4-5"  # base text-to-image weights
unet_model_path = "kyujinpy/Tune-A-VideKO-anything"  # fine-tuned video UNet

# Load the fine-tuned 3D UNet and plug it into the Tune-A-Video pipeline.
unet = UNet3DConditionModel.from_pretrained(unet_model_path, subfolder="unet", torch_dtype=torch.float16).to("cuda")
pipe = TuneAVideoPipeline.from_pretrained(pretrained_model_path, unet=unet, torch_dtype=torch.float16).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

# English: "1girl is playing the guitar, white hair, medium hair, cat ears,
# cute, scarf, jacket, outdoors, street, girl"
prompt = "1์†Œ๋…€๋Š” ๊ธฐํƒ€๋ฅผ ์—ฐ์ฃผํ•˜๊ณ  ์žˆ๋‹ค, ํฐ ๋จธ๋ฆฌ, ์ค‘๊ฐ„ ๋จธ๋ฆฌ, ๊ณ ์–‘์ด ๊ท€, ๊ท€์—ฌ์šด, ์Šค์นดํ”„, ์žฌํ‚ท, ์•ผ์™ธ, ๊ฑฐ๋ฆฌ, ์†Œ๋…€"
video = pipe(prompt, video_length=8, height=512, width=512, num_inference_steps=50, guidance_scale=12.5).videos

save_videos_grid(video, f"./{prompt}.gif")
```
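Note that the snippet builds the output path from the raw prompt, which contains spaces and commas; if that is awkward on your filesystem, you can sanitize it first. A minimal sketch (the `safe_filename` helper is illustrative, not part of the repo):

```python
import re

def safe_filename(prompt: str, ext: str = ".gif", max_len: int = 80) -> str:
    """Turn a free-form prompt into a filesystem-friendly file name."""
    # Collapse any run of characters that is not a word character or hyphen
    # into a single underscore; Korean text is kept since \w is Unicode-aware.
    name = re.sub(r"[^\w\-]+", "_", prompt).strip("_")
    return name[:max_len] + ext

# save_videos_grid(video, f"./{safe_filename(prompt)}")
```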

## Related Papers
- [Tune-A-Video](https://arxiv.org/abs/2212.11565): One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
- [Stable Diffusion](https://arxiv.org/abs/2112.10752): High-Resolution Image Synthesis with Latent Diffusion Models