---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M
---

One contribution is our technique for the automatic collection of this new dataset: performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods, and it produces surprisingly effective results.

### Source Data

The source images come from Flickr.

#### Initial Data Collection and Normalization

One key contribution of our paper is a novel web-scale database of photographs with associated descriptive text. To enable effective captioning of novel images, this database must be good in two ways: 1) it must be large, so that image-based matches to a query are reasonably similar; 2) the captions associated with the database photographs must be visually relevant, so that transferring captions between pictures is useful. To achieve the first requirement we query Flickr using a huge number of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very large, but noisy, initial set of photographs with associated text.

#### Who are the source language producers?

The Flickr users.

### Annotations

#### Annotation process

Text descriptions associated with the images are inherited as annotations/captions.

#### Who are the annotators?

The Flickr users.

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

Vicente Ordonez, Girish Kulkarni and Tamara L. Berg.

### Licensing Information

Not specified.
### Citation Information

```bibtex
@inproceedings{NIPS2011_5dd9db5e,
 author = {Ordonez, Vicente and Kulkarni, Girish and Berg, Tamara},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {J. Shawe-Taylor and R. Zemel and P. Bartlett and F. Pereira and K.Q. Weinberger},
 pages = {},
 publisher = {Curran Associates, Inc.},
 title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
 url = {https://proceedings.neurips.cc/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf},
 volume = {24},
 year = {2011}
}
```

### Contributions

Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset.
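As an illustration, the collection process described under "Initial Data Collection and Normalization" (generating pairs of query terms, then filtering noisy results down to visually relevant captions) can be sketched roughly as follows. The term lists, length thresholds, and relevance check below are illustrative placeholder assumptions, not the authors' actual vocabularies or filtering pipeline:

```python
from itertools import product

# Hypothetical term lists; the paper draws pairs from much larger
# vocabularies of objects, attributes, actions, stuff, and scenes.
OBJECTS = ["dog", "car", "house"]
ATTRIBUTES = ["red", "small", "old"]


def make_query_pairs(objects, attributes):
    """Build 'attribute object' query strings for Flickr search."""
    return [f"{a} {o}" for a, o in product(attributes, objects)]


def is_visually_relevant(caption, query_terms, min_len=3, max_len=20):
    """Keep captions of reasonable length that mention a query term.

    A simplified stand-in for the paper's filtering step, which
    discards captions unlikely to describe visual content.
    """
    words = caption.lower().split()
    if not (min_len <= len(words) <= max_len):
        return False
    return any(term in words for term in query_terms)


queries = make_query_pairs(OBJECTS, ATTRIBUTES)  # 3 x 3 = 9 query pairs

# Mock captions standing in for noisy Flickr search results.
candidates = [
    "a small red dog runs across the yard",
    "IMG_0123",
    "my favourite photo ever",
]
kept = [c for c in candidates if is_visually_relevant(c, {"dog", "red", "car"})]
# Only the first caption survives: it is long enough and mentions query terms.
```

In the real pipeline, the queries would be issued against the Flickr search API and the filter applied to the returned title/description text; only image-caption pairs passing the relevance filter enter the final 1-million-image collection.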