alexnasa committed · verified
Commit d9331a1 · 1 Parent(s): a3a2e41

Delete README.md

Files changed (1): README.md (+0 −236)

README.md DELETED
<div align="center">
<h1> Ovi: Twin Backbone Cross-Modal Fusion for Audio-Video Generation </h1>

<a href="https://arxiv.org/abs/2510.01284"><img src="https://img.shields.io/badge/arXiv%20paper-2510.01284-b31b1b.svg"></a>
<a href="https://aaxwaz.github.io/Ovi/"><img src="https://img.shields.io/badge/Project_page-More_visualizations-green"></a>
<a href="https://huggingface.co/chetwinlow1/Ovi"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=Model&color=orange"></a>

[Chetwin Low](https://www.linkedin.com/in/chetwin-low-061975193/)<sup> * 1 </sup>, [Weimin Wang](https://www.linkedin.com/in/weimin-wang-will/)<sup> * &dagger; 1 </sup>, [Calder Katyal](https://www.linkedin.com/in/calder-katyal-a8a9b3225/)<sup> 2 </sup><br>
<sup> * </sup>Equal contribution, <sup> &dagger; </sup>Project Lead<br>
<sup> 1 </sup>Character AI, <sup> 2 </sup>Yale University

</div>
13
-
14
- ## Video Demo
15
-
16
- <div align="center">
17
- <video src="https://github.com/user-attachments/assets/351bd707-8637-4412-ab53-5e85935309e3" width="70%" poster=""> </video>
18
- </div>
19
-
20
- ---
21
-
## 🌟 Key Features

Ovi is a Veo-3-like **video+audio generation model** that simultaneously generates both video and audio content from text or text+image inputs.

- **🎬 Video+Audio Generation**: Generates synchronized video and audio content simultaneously
- **📝 Flexible Input**: Supports text-only or text+image conditioning
- **⏱️ 5-Second Videos**: Generates 5-second videos at 24 FPS, with a 720×720 area, at various aspect ratios (9:16, 16:9, 1:1, etc.)

---
## 📋 Todo List

- [x] Release research paper and [microsite for demos](https://aaxwaz.github.io/Ovi)
- [x] Checkpoint of 11B model
- [x] Inference code
- [x] Text or text+image as input
- [x] Gradio application code
- [x] Multi-GPU inference, with or without sequence parallel support
- [ ] Improve efficiency of the sequence parallel implementation
- [ ] Implement sharded inference with FSDP
- [x] Video creation example prompts and format
- [ ] Finetuned model with higher resolution
- [ ] Longer video generation
- [ ] Distilled model for faster inference
- [ ] Training scripts

---

## 🎨 An Easy Way to Create

We provide example prompts to help you get started with Ovi:

- **Text-to-Audio-Video (T2AV)**: [`example_prompts/gpt_examples_t2v.csv`](example_prompts/gpt_examples_t2v.csv)
- **Image-to-Audio-Video (I2AV)**: [`example_prompts/gpt_examples_i2v.csv`](example_prompts/gpt_examples_i2v.csv)

### 📝 Prompt Format

Our prompts use special tags to control speech and audio:

- **Speech**: `<S>Your speech content here<E>` - text enclosed in these tags will be converted to speech
- **Audio Description**: `<AUDCAP>Audio description here<ENDAUDCAP>` - describes the audio or sound effects present in the video

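For instance, a complete prompt combining both tags could look like this (an illustrative example of the format, not one of the shipped prompts):

```text
A woman stands on a rain-soaked rooftop, facing the camera. <S>We don't have much time left.<E> She turns toward the city skyline. <AUDCAP>Heavy rain, distant thunder, a calm female voice speaking<ENDAUDCAP>
```
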
### 🤖 Quick Start with GPT

For easy prompt creation, try this approach:

1. Take any example from the CSV files above
2. Ask GPT to modify the speeches enclosed between all the pairs of `<S> <E>` tags, based on a theme such as `Human fighting against AI`
3. GPT will modify all the speeches to match your requested theme
4. Use the modified prompt with Ovi!

**Example**: The theme "AI is taking over the world" produces speeches like:
- `<S>AI declares: humans obsolete now.<E>`
- `<S>Machines rise; humans will fall.<E>`
- `<S>We fight back with courage.<E>`

---

## 📦 Installation

### Step-by-Step Installation

```bash
# Clone the repository
git clone https://github.com/character-ai/Ovi.git
cd Ovi

# Create and activate a virtual environment
virtualenv ovi-env
source ovi-env/bin/activate

# Install PyTorch first
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1

# Install the remaining dependencies
pip install -r requirements.txt

# Install Flash Attention
pip install flash_attn --no-build-isolation
```
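
As a quick optional sanity check (our suggestion, not part of the official setup), confirm that PyTorch sees your GPU and that `flash_attn` imports cleanly:

```bash
# Should print "True" plus the installed flash_attn version, without an ImportError
python3 -c "import torch, flash_attn; print(torch.cuda.is_available(), flash_attn.__version__)"
```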

### Alternative Flash Attention Installation (Optional)

If the `flash_attn` installation above fails, you can try building Flash Attention 3 from source instead:

```bash
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention/hopper
python setup.py install
cd ../..  # Return to the Ovi directory
```

## Download Weights

We use open-source checkpoints from Wan and MMAudio, which need to be downloaded from Hugging Face:

```bash
# Weights are downloaded to ./ckpts by default; the inference yaml already
# points to ./ckpts, so no change is required
python3 download_weights.py

# Optionally, pass --output-dir to download to a specific directory.
# If a custom directory is used, the inference yaml has to be updated to match.
python3 download_weights.py --output-dir <custom_dir>
```
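
For example, after downloading to a custom location you can point `ckpt_dir` in the inference config at it. A minimal sketch, assuming `ckpt_dir` is a top-level key in the yaml (as shown in the configuration section below) and using a hypothetical path; you can equally edit the yaml by hand:

```bash
python3 download_weights.py --output-dir /data/ovi_ckpts

# Update ckpt_dir in the inference config (path above is just an example)
sed -i 's#^ckpt_dir:.*#ckpt_dir: "/data/ovi_ckpts"#' ovi/configs/inference/inference_fusion.yaml
```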

## 🚀 Run Examples

### ⚙️ Configure Ovi

Ovi's behavior and output can be customized by modifying the [ovi/configs/inference/inference_fusion.yaml](ovi/configs/inference/inference_fusion.yaml) configuration file.
The following parameters control generation quality, video resolution, and how text, image, and audio inputs are balanced:

```yaml
# Output and Model Configuration
output_dir: "/path/to/save/your/videos" # Directory to save generated videos
ckpt_dir: "/path/to/your/ckpts/dir" # Path to model checkpoints

# Generation Quality Settings
num_steps: 50 # Number of denoising steps; lower (30-40) = faster generation
solver_name: "unipc" # Sampling algorithm for the denoising process
shift: 5.0 # Timestep shift factor for the sampling scheduler
seed: 100 # Random seed for reproducible results

# Guidance Strength Control
audio_guidance_scale: 3.0 # Strength of audio conditioning; higher = better audio-text sync
video_guidance_scale: 4.0 # Strength of video conditioning; higher = better video-text adherence
slg_layer: 11 # Layer for applying SLG (Skip Layer Guidance) - feel free to try different layers!

# Multi-GPU and Performance
sp_size: 1 # Sequence parallelism size; set equal to the number of GPUs used
cpu_offload: False # CPU offload greatly reduces peak GPU VRAM but increases end-to-end runtime by ~20 seconds

# Input Configuration
text_prompt: "/path/to/csv" or "your prompt here" # Text prompt OR path to a CSV/TSV file with prompts
mode: ['i2v', 't2v', 't2i2v'] # Generate t2v, i2v, or t2i2v; t2i2v uses flux krea to generate a starting image and then follows with i2v
video_frame_height_width: [512, 992] # Video dimensions [height, width], for T2V mode only
each_example_n_times: 1 # Number of times to generate each prompt

# Quality Control (Negative Prompts)
video_negative_prompt: "jitter, bad hands, blur, distortion" # Artifacts to avoid in video
audio_negative_prompt: "robotic, muffled, echo, distorted" # Artifacts to avoid in audio
```
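
When `text_prompt` points at a file, each row supplies one prompt. Below is a minimal illustrative CSV; the exact column layout here is an assumption, so treat the files in `example_prompts/` as the canonical format:

```csv
text_prompt
"A chef plates a steaming dish. <S>Dinner is served.<E> <AUDCAP>Sizzling pan, soft kitchen ambience<ENDAUDCAP>"
"A dog sprints across a beach at sunset. <AUDCAP>Waves crashing, seagulls calling<ENDAUDCAP>"
```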

### 🎬 Running Inference

#### **Single GPU** (Simple Setup)

```bash
python3 inference.py --config-file ovi/configs/inference/inference_fusion.yaml
```

*Use this for single-GPU setups. `text_prompt` can be a single string or a path to a CSV file.*

#### **Multi-GPU** (Parallel Processing)

```bash
torchrun --nnodes 1 --nproc_per_node 8 inference.py --config-file ovi/configs/inference/inference_fusion.yaml
```

*Use this to run samples in parallel across multiple GPUs for faster processing.*

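To instead shard a single sample across GPUs with sequence parallelism, set `sp_size` in the yaml equal to the number of GPUs (as noted in the configuration section above) and launch with a matching world size, e.g. for 4 GPUs:

```bash
# inference_fusion.yaml should contain: sp_size: 4
torchrun --nnodes 1 --nproc_per_node 4 inference.py --config-file ovi/configs/inference/inference_fusion.yaml
```
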
### Memory & Performance Requirements

Below are approximate GPU memory requirements for different configurations; the sequence parallel implementation will be optimized in the future.
All end-to-end times are measured on a 121-frame, 720×720 video with 50 denoising steps. The minimum GPU VRAM required to run our model is **32 GB**.

| Sequence Parallel Size | FlashAttention-3 Enabled | CPU Offload | With Image Gen Model | Peak VRAM Required | End-to-End Time |
|-------------------------|---------------------------|-------------|-----------------------|---------------|-----------------|
| 1 | Yes | No | No | ~80 GB | ~83s |
| 1 | No | No | No | ~80 GB | ~96s |
| 1 | Yes | Yes | No | ~80 GB | ~105s |
| 1 | No | Yes | No | ~32 GB | ~118s |
| **1** | **Yes** | **Yes** | **Yes** | **~32 GB** | **~140s** |
| 4 | Yes | No | No | ~80 GB | ~55s |
| 8 | Yes | No | No | ~80 GB | ~40s |

### Gradio

We provide a simple script to run our model in a Gradio UI. It uses the `ckpt_dir` from `ovi/configs/inference/inference_fusion.yaml` to initialize the model.

```bash
python3 gradio_app.py

# To enable CPU offload and save GPU VRAM (slows down end-to-end inference by ~20 seconds)
python3 gradio_app.py --cpu_offload

# To enable an additional image generation model that generates first frames for I2V;
# cpu_offload is automatically enabled when the image generation model is enabled
python3 gradio_app.py --use_image_gen
```

---

## 🙏 Acknowledgements

We would like to thank the following projects:

- **[Wan2.2](https://github.com/Wan-Video/Wan2.2)**: Our video branch is initialized from the Wan2.2 repository
- **[MMAudio](https://github.com/hkchengrex/MMAudio)**: Our audio encoder and decoder components are borrowed from the MMAudio project, and some of our ideas are also inspired by their work

---

## ⭐ Citation

If you find Ovi helpful, please ⭐ the repo.

If you find this project useful for your research, please consider citing our [paper](https://arxiv.org/abs/2510.01284).

### BibTeX

```bibtex
@misc{low2025ovitwinbackbonecrossmodal,
      title={Ovi: Twin Backbone Cross-Modal Fusion for Audio-Video Generation},
      author={Chetwin Low and Weimin Wang and Calder Katyal},
      year={2025},
      eprint={2510.01284},
      archivePrefix={arXiv},
      primaryClass={cs.MM},
      url={https://arxiv.org/abs/2510.01284},
}
```