CiaraRowles committed
Commit 8c223b1
1 Parent(s): 4a1207e

Update README.md

Files changed (1)
  1. README.md +15 -8
README.md CHANGED
@@ -8,23 +8,30 @@ base_model: runwayml/stable-diffusion-v1-5
---
Introducing the Beta Version of TemporalNet

- TemporalNet is a ControlNet model designed to enhance the temporal consistency of generated outputs, as demonstrated in this example: https://twitter.com/CiaraRowles1/status/1637486561917906944. While it does not eliminate all flickering, it significantly reduces it, particularly at higher denoise levels. For optimal results, it is recommended to use TemporalNet in combination with other methods.

- Instructions for Use:

- 1) Add the model "diff_control_sd15_temporalnet_fp16.safetensors" to your models folder in the ControlNet extension in Automatic1111's Web UI.

- 2) Create a folder that contains:

- - A subfolder named "Input_Images" with the input frames
- A PNG file called "init.png" that is pre-stylized in your desired style
- The "temporalvideo.py" script

- 3) Customize the "temporalvideo.py" script according to your preferences, such as the image resolution, prompt, and control net settings.

- 4) Launch Automatic1111's Web UI with the --api setting enabled.

- 5) Execute the Python script.

*Please note that the "init.png" image will not significantly influence the style of the output video. Its primary purpose is to prevent a drastic change in aesthetics during the first few frames.*

---
Introducing the Beta Version of TemporalNet

+ TemporalNet is a ControlNet model designed to enhance the temporal consistency of generated outputs.

+ TemporalNet 2 is an evolution of the concept: the generated outputs are guided by both the last frame *and* an optical flow map between the frames, improving generation consistency.
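
As a rough picture of what that flow map is, the sketch below estimates dense optical flow between two consecutive frames with OpenCV's Farneback method and renders it using the common HSV encoding (hue = direction, brightness = magnitude). The exact preprocessing TemporalNet 2 expects is defined by the repository linked below, so treat the encoding and filenames here as illustrative assumptions, not the model's own pipeline.

```python
import cv2
import numpy as np

def flow_map(prev_path: str, curr_path: str) -> np.ndarray:
    """Dense optical flow between two frames, rendered as a color image."""
    prev = cv2.cvtColor(cv2.imread(prev_path), cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(cv2.imread(curr_path), cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*prev.shape, 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2   # direction -> hue
    hsv[..., 1] = 255                     # full saturation
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # speed -> brightness
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# Example filenames; use whatever your frame sequence is named.
cv2.imwrite("flow_0001.png", flow_map("frame_0001.png", "frame_0002.png"))
```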

+ This took some modification of the original ControlNet code, so you'll have to do a few extra things. If you just want to run a Gradio example or look at the modified ControlNet code, that's here: https://github.com/CiaraStrawberry/TemporalNet. Just drop the model from this directory into that repository's model folder and make sure the gradio_temporalnet.py script points at the model.

+ To use it with Stable Diffusion, you can either go through TemporalKit, or access the base API directly via the temporalvideo.py script (a minimal sketch of the underlying API call appears at the end of this README):

+ 1) Move your ControlNet WebUI install to this branch: https://github.com/CiaraStrawberry/sd-webui-controlnet-TemporalNet-API
+
+ 2) Add the model "diff_control_sd15_temporalnet_fp16.safetensors" to your models folder in the ControlNet extension in Automatic1111's Web UI.
+
+ 3) Check you have:
+
+ - A folder named "Input_Images" with the input frames (a frame-extraction sketch appears at the end of this README)
- A PNG file called "init.png" that is pre-stylized in your desired style
- The "temporalvideo.py" script

+ 4) Customize the "temporalvideo.py" script according to your preferences, such as the image resolution, prompt, and ControlNet settings.

+ 5) Launch Automatic1111's Web UI with the --api setting enabled.

+ 6) Execute the Python script.

  *Please note that the "init.png" image will not significantly influence the style of the output video. Its primary purpose is to prevent a drastic change in aesthetics during the first few frames.*
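
For orientation, here is a minimal sketch of the kind of img2img call temporalvideo.py issues against the WebUI. The endpoint and payload shape follow the public Automatic1111 and ControlNet extension API; the exact fields, including how the optical flow image is passed on the modified branch above, are defined by the script itself, so treat the values below as assumptions.

```python
import base64
import requests

WEBUI = "http://127.0.0.1:7860"  # default address; the UI must be launched with --api

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

# Hypothetical single-frame request; temporalvideo.py loops over Input_Images
# and feeds each newly generated frame back in as the next control image.
payload = {
    "init_images": [b64("Input_Images/frame_0002.png")],
    "prompt": "your prompt here",
    "denoising_strength": 0.6,
    "width": 512,
    "height": 512,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": b64("init.png"),  # previous stylized frame
                "model": "diff_control_sd15_temporalnet_fp16",  # append the "[hash]" suffix shown in the WebUI dropdown
                "weight": 1.0,
            }]
        }
    },
}

resp = requests.post(f"{WEBUI}/sdapi/v1/img2img", json=payload, timeout=600)
resp.raise_for_status()
with open("output_0002.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```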
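
And since step 3 assumes "Input_Images" is already populated, this small sketch splits a source clip into numbered frames. The video filename and zero-padded naming scheme are illustrative; match whatever temporalvideo.py expects.

```python
import os
import cv2

# Split a source clip into numbered PNG frames for "Input_Images".
# "source.mp4" and the naming scheme are illustrative assumptions.
os.makedirs("Input_Images", exist_ok=True)
cap = cv2.VideoCapture("source.mp4")
count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(f"Input_Images/frame_{count:04d}.png", frame)
    count += 1
cap.release()
print(f"wrote {count} frames to Input_Images/")
```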