Tasks: Image-to-Text
Sub-tasks: image-captioning
Formats: parquet
Languages: English
Size: 1M - 10M
Commit 3964a83
Parent(s): 0477f73
Make code for image downloading from image urls cacheable (#4218)

* Make code for image downloading from image urls cacheable
* Minor improvement in RedCaps card
* Minor fixes in formatting

Commit from https://github.com/huggingface/datasets/commit/6a201a6f8fb8837f37925e38c4cc69e92155120f
README.md CHANGED

```diff
@@ -75,13 +75,16 @@ from datasets import load_dataset
 from datasets.utils.file_utils import get_datasets_user_agent
 
 
+USER_AGENT = get_datasets_user_agent()
+
+
 def fetch_single_image(image_url, timeout=None, retries=0):
     for _ in range(retries + 1):
         try:
             request = urllib.request.Request(
                 image_url,
                 data=None,
-                headers={"user-agent": get_datasets_user_agent()},
+                headers={"user-agent": USER_AGENT},
             )
             with urllib.request.urlopen(request, timeout=timeout) as req:
                 image = PIL.Image.open(io.BytesIO(req.read()))
@@ -177,19 +180,19 @@ From the paper:
 #### Initial Data Collection and Normalization
 
 From the homepage:
->For Conceptual Captions, we developed a fully automatic pipeline that extracts, filters, and transforms
+>For Conceptual Captions, we developed a fully automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. Because no human annotators are involved, the Conceptual Captions dataset generation process is highly scalable.
 >
->To generate this dataset, we started with a Flume pipeline that processes billions of Internet webpages,
+>To generate this dataset, we started with a Flume pipeline that processes billions of Internet webpages, extracting, filtering, and processing candidate image and caption pairs, and keeping those that pass through several filters.
 >
->We first screen for certain properties like size, aspect ratio, adult content scores. These filters discard
+>We first screen for certain properties like size, aspect ratio, adult content scores. These filters discard more than 65% of the candidates. Next, we use Alt-Texts for text-based filtering, removing captions with non-descriptive text (such as SEO tags or hashtags); we also discard texts with high sentiment polarity or adult content scores, resulting in just 3% of the incoming candidates passing through.
 >
->In the next step, we filter out candidates for which none of the text tokens can be mapped to the visual
+>In the next step, we filter out candidates for which none of the text tokens can be mapped to the visual content of the image. We use image classifiers (e.g., Google Cloud Vision APIs) to assign class labels to images and match these labels against the candidate text (allowing morphological transformations), discarding around 60% of the candidates that reach this stage.
 >
->The candidates passing the above filters tend to be good Alt-text image descriptions. However, a large
+>The candidates passing the above filters tend to be good Alt-text image descriptions. However, a large majority of these use proper names (for people, venues, locations, etc.), brands, dates, quotes, etc. This creates two distinct problems. First, some of these cannot be inferred based on the image pixels alone. This is problematic because unless the image has the necessary visual information it is not useful for training. Second, even if the proper names could be inferred from the image it is extremely difficult for a model to learn to perform both fine-grained classification and natural-language descriptions simultaneously. We posit that if automatic determination of names, locations, brands, etc. is needed, it should be done as a separate task that may leverage image meta-information (e.g. GPS info), or complementary techniques such as OCR.
 >
->We address the above problems with the insight that proper names should be replaced by words that represent
+>We address the above problems with the insight that proper names should be replaced by words that represent the same general notion, i.e., by their concept. For example, we remove locations (“Crowd at a concert in Los Angeles“ becomes “Crowd at a concert”), names (e.g., “Former Miss World Priyanka Chopra on the red carpet” becomes “actor on the red carpet”), proper noun modifiers (e.g., “Italian cuisine” becomes just “cuisine”) and noun phrases (e.g., “actor and actor” becomes “actors”). Around 20% of the samples are discarded during this transformation because it can leave sentences too short, or otherwise inconsistent.
 >
->Finally, we perform another round of filtering to identify concepts with low-count. We cluster all resolved
+>Finally, we perform another round of filtering to identify concepts with low-count. We cluster all resolved entities (e.g., “actor”, “dog”, “neighborhood”, etc.) and keep only the candidate types which have a count of over 100 mentions. This retains around 16K entity concepts such as: “person”, “actor”, “artist”, “player” and “illustration”. The less frequent ones that we dropped include “baguette”, “bridle”, “deadline”, “ministry” and “funnel”.
 
 #### Who are the source language producers?
 
```
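The diff above hoists the `get_datasets_user_agent()` lookup into a module-level `USER_AGENT` constant, so the download function no longer calls the helper on every request and its body stays a stable, hashable unit for `datasets`' cache fingerprinting. A minimal self-contained sketch of the same pattern; `get_user_agent` and `fetch_single_image_bytes` are hypothetical stand-ins (the real helper lives in `datasets.utils.file_utils`, and the real function decodes the bytes with PIL, which is omitted here to avoid a third-party dependency):

```python
import urllib.request

# Hypothetical stand-in for datasets.utils.file_utils.get_datasets_user_agent(),
# which returns a version-stamped user-agent string.
def get_user_agent() -> str:
    return "datasets/0.0.0; python/0.0.0"

# Computed once at import time, mirroring the commit: the function below closes
# over a plain string constant instead of calling a helper on every request.
USER_AGENT = get_user_agent()

def fetch_single_image_bytes(image_url, timeout=None, retries=0):
    """Return the raw bytes behind image_url, or None if every attempt fails."""
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                return req.read()
        except Exception:
            continue  # retry until the attempt budget is exhausted
    return None
```

As in the README's version, `retries=0` still makes one attempt; `retries=n` allows `n + 1` attempts before giving up and returning `None`.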
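The hypernymization step quoted above (“Crowd at a concert in Los Angeles” becomes “Crowd at a concert”) can be illustrated with a toy substitution table. The real pipeline resolves entities with NLP tooling; `conceptualize` and `CONCEPT_MAP` below are purely illustrative names seeded with the paper's own examples:

```python
# Illustrative mapping from proper names to the general concept they instantiate.
CONCEPT_MAP = {
    "Former Miss World Priyanka Chopra": "actor",  # names -> role concepts
    " in Los Angeles": "",                         # locations are dropped
    "Italian cuisine": "cuisine",                  # proper-noun modifiers removed
}

def conceptualize(caption: str, concept_map=CONCEPT_MAP) -> str:
    """Replace each known proper name in the caption with its concept."""
    for name, concept in concept_map.items():
        caption = caption.replace(name, concept)
    return caption
```

The quoted text notes that around 20% of samples are discarded at this stage because the substitution can leave captions too short or inconsistent; a real implementation would add that length/consistency check after the rewrite.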
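The final low-count filtering round also lends itself to a short sketch: count mentions of each resolved concept across the corpus and keep only those above a threshold (the paper keeps concepts with over 100 mentions; the toy data below uses a smaller cutoff). `frequent_concepts` is an illustrative helper, not part of the dataset's loading script:

```python
from collections import Counter

def frequent_concepts(concept_lists, min_mentions):
    """Keep concepts mentioned more than min_mentions times overall.

    concept_lists: one list of resolved concepts per caption.
    """
    counts = Counter(c for concepts in concept_lists for c in concepts)
    return {c for c, n in counts.items() if n > min_mentions}
```

Per the quote, this retains around 16K concepts such as “person” and “actor”, while rare ones like “baguette” fall below the cutoff and are dropped.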