Commit 1d8c239 (verified) by svjack · Parent: df5a32a

Create README.md

Files changed (1): README.md (+151 -0)
## Installation

### Prerequisites

Before you begin, ensure you have the following installed:

- `git-lfs`
- `cbm`
- `ffmpeg`

You can install these prerequisites using the following command:

```bash
sudo apt-get update && sudo apt-get install git-lfs cbm ffmpeg
```
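
Optionally, confirm the tools are available before continuing. This is a minimal sanity check, assuming the packages above put their usual binaries on your `PATH`:

```bash
# Each command should print a version string or a path; a missing tool fails loudly.
git lfs version
ffmpeg -version | head -n 1
command -v cbm
```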

### Installation Steps

1. **Install `comfy-cli`:**

```bash
pip install comfy-cli
```

2. **Initialize ComfyUI:**

```bash
comfy --here install
```

3. **Clone and Install ComfyScript:**

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/Chaoses-Ib/ComfyScript.git
cd ComfyScript
pip install -e ".[default,cli]"
pip uninstall aiohttp
pip install -U aiohttp
```
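
If you want to make sure ComfyScript installed cleanly before moving on, a quick import check is enough (the `comfy_script` module name is the same one used in step 5 below):

```bash
python -c "import comfy_script; print('ComfyScript import OK')"
```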

4. **Clone and Install ComfyUI-HunyuanVideoWrapper:**

```bash
cd ../
git clone https://github.com/svjack/ComfyUI-HunyuanVideoWrapper
cd ComfyUI-HunyuanVideoWrapper
pip install -r requirements.txt
```

5. **Load ComfyScript Runtime:**

```python
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *
```
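
If you already have a ComfyUI server running (for example one started with `comfy launch`), ComfyScript can attach to it instead; as far as the ComfyScript API goes, the server address can be passed to `load()`. The URL below is only the usual local default and may need adjusting:

```python
from comfy_script.runtime import *

# Assumption: load() accepts the address of an already-running ComfyUI server;
# adjust host/port to match your setup.
load('http://127.0.0.1:8188/')

from comfy_script.runtime.nodes import *
```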

6. **Install Example Dependencies:**

```bash
cd examples
comfy node install-deps --workflow='hunyuanvideo lora Walking Animation Share.json'
```

7. **Update ComfyUI Dependencies:**

```bash
cd ../../ComfyUI
pip install --upgrade torch torchvision torchaudio -r requirements.txt
```

8. **Transpile Example Workflow:**

```bash
python -m comfy_script.transpile hyvideo_t2v_example_01.json
```
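
The transpiler prints the generated ComfyScript code to standard output, so you can redirect it into a file and use that as the starting point for your own script (the output filename below is just an example):

```bash
python -m comfy_script.transpile hyvideo_t2v_example_01.json > hyvideo_t2v_example_01.py
```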

9. **Download and Place Model Files:**

Download the required model files from Hugging Face:

```bash
huggingface-cli download Kijai/HunyuanVideo_comfy --local-dir ./HunyuanVideo_comfy
```

Copy the downloaded files to the appropriate directories:

```bash
cp -r HunyuanVideo_comfy/ .
cp HunyuanVideo_comfy/hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors ComfyUI/models/diffusion_models
cp HunyuanVideo_comfy/hunyuan_video_vae_bf16.safetensors ComfyUI/models/vae
```
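
To confirm the weights landed where the wrapper expects them, list the two model folders (paths assume you run this from the directory that contains `ComfyUI/`):

```bash
ls ComfyUI/models/diffusion_models   # expect hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors
ls ComfyUI/models/vae                # expect hunyuan_video_vae_bf16.safetensors
```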

### Genshin Impact Character XiangLing LoRA Example (early tuned version)

10. **Download the XiangLing LoRA Model:**

Download the XiangLing LoRA model from Hugging Face:

- [Xiangling Character LoRA](https://huggingface.co/svjack/Genshin_Impact_XiangLing_HunyuanVideo_lora_early)

The file you need is:

```bash
xiangling_test_epoch4.safetensors
```
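
If you prefer the command line, the same file can be fetched with `huggingface-cli`; this sketch assumes the safetensors file is stored at the root of that repository:

```bash
huggingface-cli download svjack/Genshin_Impact_XiangLing_HunyuanVideo_lora_early \
  xiangling_test_epoch4.safetensors --local-dir .
```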

Copy the model to the `loras` directory:

```bash
cp xiangling_test_epoch4.safetensors ComfyUI/models/loras
```

11. **Run the Workflow:**

Create a Python script `run_t2v_xiangling_lora.py`:

```python
#### character do something (seed 42)
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

with Workflow():
    # Load the HunyuanVideo VAE and the XiangLing LoRA
    vae = HyVideoVAELoader(r'hunyuan_video_vae_bf16.safetensors', 'bf16', None)
    lora = HyVideoLoraSelect('xiangling_test_epoch4.safetensors', 2.0, None, None)
    # Base model with the LoRA applied (fp8 weights, sdpa attention, offloading)
    model = HyVideoModelLoader(r'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors', 'bf16', 'fp8_e4m3fn', 'offload_device', 'sdpa', None, None, lora)
    # Text encoders and prompt encoding (positive prompt, then negative prompt)
    hyvid_text_encoder = DownloadAndLoadHyVideoTextEncoder('Kijai/llava-llama-3-8b-text-encoder-tokenizer', 'openai/clip-vit-large-patch14', 'fp16', False, 2, 'disabled')
    hyvid_embeds = HyVideoTextEncode(hyvid_text_encoder, "solo,Xiangling, cook rice in a pot genshin impact ,1girl,highres,", 'bad quality video', 'video', None, None, None)
    # Sample the video latents (seed 42, matching the heading above), then decode to frames
    samples = HyVideoSampler(model, hyvid_embeds, 478, 512, 85, 30, 6, 9, 42, 1, None, 1, None)
    images = HyVideoDecode(vae, samples, True, 64, 256, True)
    # _ = VHSVideoCombine(images, 24, 0, 'HunyuanVideo', 'video/h264-mp4', False, True, None, None, None)
    # Combine frames into an h264 mp4 at 24 fps with the 'HunyuanVideo' filename prefix
    _ = VHSVideoCombine(images, 24, 0, 'HunyuanVideo', 'video/h264-mp4', False, True, None, None, None,
                        pix_fmt='yuv420p', crf=19, save_metadata=True, trim_to_audio=False)
```

Run the script:

```bash
python run_t2v_xiangling_lora.py
```
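
`VHSVideoCombine` writes its result through ComfyUI's normal output folder, so assuming the default output location and the `HunyuanVideo` filename prefix used above, the finished clip should show up under `ComfyUI/output/`:

```bash
ls ComfyUI/output/ | grep HunyuanVideo   # e.g. HunyuanVideo_00001.mp4 (exact name may differ)
```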

<br/>

- prompt = "solo,Xiangling, cook rice in a pot genshin impact ,1girl,highres,"

https://github.com/user-attachments/assets/f09a7bfc-08d2-41ea-86a0-85e5d048e4fe

<br/>