Add paper link to connect the model to the paper on Daily Papers page (#2)
Add paper link to connect the model to the paper on Daily Papers page (ffd354ba21f6e8f5de04ba2ea1303638039a2555)
Co-authored-by: Adina Yakefu <AdinaY@users.noreply.huggingface.co>

README.md CHANGED
@@ -1,41 +1,44 @@
---
inference: false
datasets:
- ShareGPT4Video/ShareGPT4Video
---
<br>
<br>

# ShareCaptioner-Video Model Card

## Model details

**Model type:**
ShareCaptioner-Video is an open-source captioner fine-tuned on GPT4V-assisted [ShareGPT4Video](https://huggingface.co/datasets/Lin-Chen/ShareGPT4Video) detailed caption data, supporting videos of various durations, aspect ratios, and resolutions. ShareCaptioner-Video is based on the [InternLM-Xcomposer2-4KHD](https://github.com/InternLM/InternLM-XComposer) model.

ShareCaptioner-Video features 4 roles:

- **Fast Captioning:** The model employs an image-grid format for direct video captioning, providing rapid generation speeds that are ideal for short videos. In practice, we concatenate all the keyframes of a video into a vertically elongated image and train the model on the captioning task (see the first sketch after this list).
- **Sliding Captioning:** The model supports streaming captioning in a differential sliding-window format, yielding high-quality captions that are suitable for long videos. We take two adjacent keyframes alongside the previous differential caption as input, and train the model to describe the events occurring between them (see the second sketch after this list).
- **Clip Summarizing:** The model can swiftly summarize any clip from ShareGPT4Video, or any video that has undergone the differential sliding-window captioning process, without needing to re-process frames. We use all the differential descriptions as input, and the output is the video caption.
- **Prompt Re-Captioning:** The model can rephrase prompts input by users who prefer specific video generation areas, ensuring that text-to-video models (T2VMs) trained on high-quality video-caption data see prompts at inference time that are aligned in format with their training data. In practice, we use GPT-4 to generate Sora-style prompts for our dense captions, and we train the re-captioning task in reverse, i.e., using the generated prompt as input and the dense caption as the training target.
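The image-grid format used for Fast Captioning can be illustrated with a minimal sketch (a hedged illustration, not the actual preprocessing code; the grid width, keyframe loading, and the `captioner.generate` call are assumptions made for the example):

```python
# Illustrative sketch only (not from the ShareGPT4Video codebase): stack sampled
# keyframes into one vertically elongated image, as used by the "Fast Captioning"
# role. The target width and the way keyframes are obtained are assumptions.
from typing import List
from PIL import Image

def build_vertical_grid(keyframes: List[Image.Image], width: int = 490) -> Image.Image:
    """Resize each keyframe to a common width, then concatenate top-to-bottom."""
    resized = [
        frame.resize((width, max(1, round(frame.height * width / frame.width))))
        for frame in keyframes
    ]
    grid = Image.new("RGB", (width, sum(frame.height for frame in resized)))
    y = 0
    for frame in resized:
        grid.paste(frame, (0, y))
        y += frame.height
    return grid

# Hypothetical usage: the grid plus a prompt goes to the captioner in one call.
# keyframes = [Image.open(p) for p in ["frame_00.jpg", "frame_01.jpg", "frame_02.jpg"]]
# grid = build_vertical_grid(keyframes)
# caption = captioner.generate(image=grid, prompt="Describe this video in detail.")
```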
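A rough outline of the sliding-window loop follows, again as a hedged sketch in which `describe_interval` stands in for the real inference call:

```python
# Illustrative outline of differential sliding-window captioning (the "Sliding
# Captioning" role). Each step sees two adjacent keyframes plus the previous
# differential caption and describes only what happens between them.
# `describe_interval` is a hypothetical stand-in for the actual model call.
from typing import Callable, List, Sequence

def sliding_caption(
    keyframes: Sequence[object],
    describe_interval: Callable[[object, object, str], str],
) -> List[str]:
    differential_captions: List[str] = []
    previous = ""  # no prior context before the first interval
    for prev_frame, next_frame in zip(keyframes, keyframes[1:]):
        caption = describe_interval(prev_frame, next_frame, previous)
        differential_captions.append(caption)
        previous = caption
    return differential_captions

# The "Clip Summarizing" role then takes all differential captions as input and
# produces one caption for the whole clip, with no need to re-process frames.
```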
**Model date:**
ShareCaptioner-Video was trained in May 2024.

**Paper or resources for more information:**
[[Project](https://ShareGPT4Video.github.io/)] [[Paper]()] [[Code](https://github.com/ShareGPT4Omni/ShareGPT4Video)]

## Intended use

**Primary intended uses:**
The primary intended use of ShareCaptioner-Video is to produce high-quality video captions.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Finetuning dataset

- 40K GPT4V-generated video-caption pairs
- 40K differential sliding-window captioning conversations
- 40K prompt-to-caption textual data

## Paper
arxiv.org/abs/2406.04325