CodeGoat24 committed
Commit 2d2a14a
1 Parent(s): d1fa6ec

Update README.md

Files changed (1)
  1. README.md +47 -3
README.md CHANGED
@@ -1,3 +1,47 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ language:
+ - en
+ base_model:
+ - Efficient-Large-Model/VILA1.5-13b
+ pipeline_tag: video-text-to-text
+ ---
+
+ # LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment
+
+ LiFT-Critic is a novel video-text-to-text reward model for evaluating synthesized videos.
+
+ ## 🔧 Installation
+
+ 1. Clone the GitHub repository and navigate to the LiFT folder:
+ ```bash
+ git clone https://github.com/CodeGoat24/LiFT.git
+ cd LiFT
+ ```
+ 2. Install the required packages (a quick environment check follows this list):
+ ```bash
+ bash ./environment_setup.sh lift
+ ```
+
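+ A minimal sanity check for the fresh environment (a sketch; it assumes `environment_setup.sh` creates a conda environment named after its `lift` argument):
+ ```bash
+ # Activate the environment created by environment_setup.sh
+ # (the env name "lift" is assumed from the script argument above)
+ conda activate lift
+ # Confirm PyTorch imports and CUDA is visible
+ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
+ ```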
+
+ ## 🚀 Inference
+
+ ### Run
+ Please download the public [LiFT-Critic-13b-lora](https://huggingface.co/Fudan-FUXI/LiFT-Critic-13b-lora) checkpoint, for example as sketched below.
+
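+ One way to fetch the checkpoint (a sketch assuming the `huggingface_hub` CLI is available; the local directory name simply matches the path used in the run command below):
+ ```bash
+ # Install the Hub CLI if needed: pip install -U "huggingface_hub[cli]"
+ huggingface-cli download Fudan-FUXI/LiFT-Critic-13b-lora --local-dir ./LiFT-Critic-13b-lora
+ ```
+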
+ We provide some synthesized videos in the `./demo` directory for quick inference.
+
+ ```bash
+ python LiFT-Critic/test/run_critic_13b.py --model-path ./LiFT-Critic-13b-lora
+ ```
+
+ ## Citation
+
+ If you find LiFT useful for your research and applications, please cite using this BibTeX:
+ ```bibtex
+ @article{LiFT,
+   title={LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment},
+   author={Wang, Yibin and Tan, Zhiyu and Wang, Junyan and Yang, Xiaomeng and Jin, Cheng and Li, Hao},
+   journal={arXiv preprint arXiv:2412.04814},
+   year={2024}
+ }
+ ```