Update README.md
README.md (changed):
    path: data/train-*
---

# Visual-Intelligence

## Links

- [💾 GitHub Repo](https://github.com/Entroplay/Visual-Intelligence)
- [🤗 HF Dataset](https://huggingface.co/datasets/Entroplay/Visual-Intelligence)
- [Blog](https://entroplay.ai/research/video-intelligence)

## Dataset Introduction

### Dataset Schema

- **id**: Unique sample identifier.
- **input**: Ordered list describing the input context.
  - **type**: Either "image" or "text".
  - **content**: For "image", a relative path to the first-frame image; for "text", the prompt text.
- **output**: Generated candidates and final selections, keyed by model.
  - **veo3**: Relative paths to videos generated by the Veo 3 pipeline.
  - **framepack**: Relative paths to videos generated by FramePack across multiple runs.
  - **hunyuan**: Relative paths to videos generated by Hunyuan across multiple runs.
  - **wan2.2-5b**: Relative paths to videos generated by Wan 2.2 5B across multiple runs.
  - **wan2.2-14b**: Relative paths to videos generated by Wan 2.2 14B across multiple runs.
  - **framepack_seleted_video**: Selected best video among the FramePack candidates.
  - **hunyuan_seleted_video**: Selected best video among the Hunyuan candidates.
  - **wan2.2-5b_seleted_video**: Selected best video among the Wan 2.2 5B candidates.
  - **wan2.2-14b_seleted_video**: Selected best video among the Wan 2.2 14B candidates.

### Data Format

```json
{
  "id": 1,
  "input": [
    { "type": "image", "content": "thumbnails/mp4/keypoint_localization.jpg" },
    { "type": "text", "content": "Add a bright blue dot at the tip of the branch on which the macaw is sitting. ..." }
  ],
  "output": {
    "veo3": ["videos/mp4/keypoint_localization.mp4"],
    "framepack": [
      "videos/1_framepack_1.mp4",
      "videos/1_framepack_2.mp4"
    ],
    "hunyuan": [
      "videos/1_hunyuan_1.mp4",
      "videos/1_hunyuan_2.mp4"
    ],
    "wan2.2-5b": [
      "videos/1_wan2.2-5b_1.mp4",
      "videos/1_wan2.2-5b_2.mp4"
    ],
    "wan2.2-14b": [
      "videos/1_wan2.2-14b_1.mp4",
      "videos/1_wan2.2-14b_2.mp4"
    ],
    "framepack_seleted_video": "videos/1_framepack_1.mp4",
    "hunyuan_seleted_video": "videos/1_hunyuan_1.mp4",
    "wan2.2-5b_seleted_video": "videos/1_wan2.2-5b_1.mp4",
    "wan2.2-14b_seleted_video": "videos/1_wan2.2-14b_1.mp4"
  }
}
```
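
Since the YAML header points the data files at `data/train-*`, the dataset should be loadable with the 🤗 `datasets` library. The sketch below is illustrative rather than part of the dataset card: the `train` split name and the assumption that `input` and `output` come back as nested Python lists/dicts matching the JSON example above may need adjusting to the actual Parquet feature layout.

```python
# Minimal loading sketch: split name and nested field layout are assumptions
# based on the `data/train-*` config and the JSON example above.
from datasets import load_dataset

ds = load_dataset("Entroplay/Visual-Intelligence", split="train")
sample = ds[0]

# The ordered input context: image entries carry a relative path to the
# first-frame image, text entries carry the prompt.
for item in sample["input"]:
    print(item["type"], "->", item["content"])

# Candidate videos and the selected best video for one open-source model.
print(sample["output"]["framepack"])
print(sample["output"]["framepack_seleted_video"])
```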

## About the Project

Google's Veo 3 shows great promise in visual intelligence, demonstrating strong visual commonsense and reasoning in video generation. We aim to build a fully open-source evaluation suite that measures the current progress of video generative intelligence across various dimensions, spanning several state-of-the-art proprietary and open-source models.