matybohacek committed e816036 (parent: d6aeb80): Update README.md

README.md CHANGED
@@ -139,7 +139,9 @@ dataset = load_dataset("faridlab/deepaction_v1", trust_remote_code=True)
 
 ## Data
 
-The data is structured into
+The data is structured into seven folders corresponding to text-to-video AI models, each with 100 subfolders corresponding to human action classes. All videos in a given subfolder were generated using the same prompt (see the list of prompts <a href=''>here</a>).
+
+Included below are example videos generated using the prompt "a person taking a selfie". Note that, since each text-to-video AI model generates videos with different aspect ratios and resolutions, these videos were normalized to 512x512.
 
 <table class="video-table">
   <tr>
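
The seven-model by 100-class layout described in the new paragraph can be sketched as path construction; this is a minimal illustration only, and the model and action-class folder names below are placeholders, not the dataset's actual names:

```python
from pathlib import PurePosixPath

# Hypothetical sketch of the folder layout: seven text-to-video models,
# each with 100 action-class subfolders; every video inside one subfolder
# was generated from the same prompt. Names here are placeholders.
MODELS = [f"model_{i}" for i in range(7)]
CLASSES = [f"action_{j:03d}" for j in range(100)]

def subfolder(model: str, action_class: str) -> PurePosixPath:
    """Subfolder holding all videos for one (model, action-class) pair."""
    return PurePosixPath(model) / action_class

all_dirs = [subfolder(m, c) for m in MODELS for c in CLASSES]
print(len(all_dirs))  # 7 models x 100 classes = 700 subfolders
```

In practice the dataset is loaded with `load_dataset("faridlab/deepaction_v1", trust_remote_code=True)` as shown above, which handles this layout internally.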