Fudan-FUXI committed cd4d187 (parent: 52405fe): Update README.md
---
license: mit
language:
- en
base_model:
- Efficient-Large-Model/VILA1.5-40b
pipeline_tag: video-text-to-text
---

# LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment

## Summary
This is the model checkpoint proposed in our paper "LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment". LiFT-Critic is a novel video-text-to-text reward model for evaluating synthesized videos.

Project page: https://codegoat24.github.io/LiFT/

Code: https://github.com/CodeGoat24/LiFT
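
As background on how a critic/reward model such as LiFT-Critic is typically applied, here is a generic best-of-N selection sketch (illustrative only, not code from this repository): generate several candidate videos for a prompt, score each with the critic, and keep the highest-scored one.

```python
def best_of_n(candidates, scores):
    """Generic best-of-N selection: return the candidate with the
    highest reward-model score. `candidates` and `scores` are
    parallel lists of equal length."""
    best_idx = max(range(len(scores)), key=scores.__getitem__)
    return candidates[best_idx]


# Example: three candidate videos with hypothetical critic scores.
picked = best_of_n(["video_a.mp4", "video_b.mp4", "video_c.mp4"],
                   [0.12, 0.87, 0.41])
```

Reward-weighted selection or fine-tuning (as explored in the paper) builds on the same scoring step.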
## 🔧 Installation

1. Clone the GitHub repository and navigate to the LiFT folder:
```bash
git clone https://github.com/CodeGoat24/LiFT.git
cd LiFT
```
2. Install the required packages:
```bash
bash ./environment_setup.sh lift
```

32
+ ## 🚀 Inference
33
+
34
+ ### Run
35
+ Please download this public [LiFT-Critic-40b-lora-v1.5](https://huggingface.co/Fudan-FUXI/LiFT-Critic-13b-lora-v1.5) checkpoints.
36
+
37
+ We provide some synthesized videos for quick inference in `./demo` directory.
38
+
39
+ ```bash
40
+ python LiFT-Critic/test/run_critic_40b.py --model-path ./LiFT-Critic-40b-lora-v1.5
41
+ ```
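
The script above prints the critic's evaluation as text. If you need the numeric rating programmatically, a small post-processing helper along these lines can extract it; note that the `Score: <number>` output format assumed here is hypothetical, so adapt the pattern to the script's actual output.

```python
import re


def parse_critic_score(output):
    """Extract a numeric score from critic output text.
    Assumes a line like 'Score: 3.5' (hypothetical format);
    returns None if no score is found."""
    m = re.search(r"[Ss]core\s*[:=]\s*([0-9]+(?:\.[0-9]+)?)", output)
    return float(m.group(1)) if m else None
```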

## 🖊️ Citation

If you find our work helpful, please cite our paper.
```bibtex
@article{LiFT,
  title={LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment},
  author={Wang, Yibin and Tan, Zhiyu and Wang, Junyan and Yang, Xiaomeng and Jin, Cheng and Li, Hao},
  journal={arXiv preprint arXiv:2412.04814},
  year={2024}
}
```