# Altogether-FT
(EMNLP 2024) Altogether-FT is a dataset that transforms and re-aligns Internet-scale alt-texts into dense captions. It does not caption images from scratch, which tends to yield naive captions of little value to the average user (e.g., "a dog is walking in the park" offers minimal utility to users who are not blind). Instead, it complements and completes alt-texts into dense captions, while preserving the supervision contributed to alt-texts by expert humans and agents around the world (who describe images that average annotators do not understand).
It contains 15,448 examples for training and 500 examples for evaluation, drawn from [WIT](https://arxiv.org/abs/2103.01913) and [DataComp](https://arxiv.org/abs/2304.14108).
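As a rough illustration of how such alt-text/dense-caption pairs might be handled, here is a minimal loader sketch. The record layout (`url`, `alt_text`, `dense_caption`, `source` fields) and the JSON-lines file format are assumptions for illustration, not the released dataset's actual schema:

```python
import json

# Hypothetical record layout for one Altogether-FT example; the actual
# field names in the released dataset may differ.
sample = {
    "url": "https://example.com/image.jpg",
    "alt_text": "a dog is walking in the park",
    "dense_caption": "A golden retriever walking on a gravel path in a park.",
    "source": "DataComp",
}

def load_examples(path):
    """Load examples from a JSON-lines file (one record per line)."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Round-trip the sample record to illustrate the assumed format.
record = json.loads(json.dumps(sample))
print(record["source"])
```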
We use these re-aligned captions to train MetaCLIPv2.
![Altogether](altogether.png)