benleader committed
Commit
4e764ec
1 Parent(s): e368ace

Update README.md

Files changed (1)
  1. README.md +3 −3
README.md CHANGED
@@ -28,16 +28,16 @@ Wenjiang Zhou

  **[project](coming soon)** **Technical report (coming soon)**


- We have pursued the world simulator vision since March 2023, believing that diffusion models can simulate the world. `MuseV` was a milestone achieved around July 2023. Amazed by the progress of Sora, we decided to open-source `MuseV`, hoping it will benefit the community. Next we will move on to the promising diffusion+transformer scheme.
+ We have pursued **the world simulator vision since March 2023, believing that diffusion models can simulate the world**. `MuseV` was a milestone achieved around **July 2023**. Amazed by the progress of Sora, we decided to open-source `MuseV`, hoping it will benefit the community. Next we will move on to the promising diffusion+transformer scheme.

  We will soon release `MuseTalk`, a diffusion-based lip-sync model, which can be combined with `MuseV` as a complete virtual-human generation solution. Please stay tuned!

  # Intro
  `MuseV` is a diffusion-based virtual human video generation framework, which
- 1. supports infinite-length generation using a novel Parallel Denoising scheme.
+ 1. supports **infinite-length** generation using a novel **Parallel Denoising scheme**.
  2. provides a checkpoint for virtual human video generation, trained on a human dataset.
  3. supports Image2Video, Text2Image2Video, and Video2Video.
- 4. is compatible with the Stable Diffusion ecosystem, including `base_model`, `lora`, `controlnet`, etc.
+ 4. is compatible with the **Stable Diffusion ecosystem**, including `base_model`, `lora`, `controlnet`, etc.
  5. supports multi-reference-image technology, including `IPAdapter`, `ReferenceOnly`, `ReferenceNet`, and `IPAdapterFaceID`.
  6. training code (coming very soon).