Papers
arXiv:2306.14824

Kosmos-2: Grounding Multimodal Large Language Models to the World

Published on Jun 26, 2023
· Submitted by akhaliq on Jun 27, 2023
#1 Paper of the day
Authors:
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
Abstract

We introduce Kosmos-2, a Multimodal Large Language Model (MLLM) with new capabilities for perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent referring expressions as links in Markdown, i.e., `[text span](bounding boxes)`, where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct a large-scale dataset of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension and phrase grounding; (ii) multimodal referring, such as referring expression generation; (iii) perception-language tasks; and (iv) language understanding and generation. This work lays the foundation for the development of Embodied AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Data, demo, and pretrained models are available at https://aka.ms/kosmos-2.
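To make the link representation concrete, here is a minimal sketch (not the paper's implementation) of how a referring expression could be serialized as a Markdown-style link whose target is a pair of location tokens. The grid size `P` and the `<loc_i>` token naming below are illustrative assumptions, not details taken from this page.

```python
# Illustrative sketch only: serialize a grounded phrase as ``[text span](location tokens)``.
# Assumption: the image is discretized into a P x P grid of bins, and each bin
# is written as a token like <loc_i>. The exact values used by Kosmos-2 may differ.

P = 32  # number of location bins per image side (assumed for illustration)

def box_to_location_tokens(box, p=P):
    """Map a normalized box (x0, y0, x1, y1) in [0, 1] to the location tokens
    of its top-left and bottom-right bins."""
    x0, y0, x1, y1 = box

    def bin_index(x, y):
        col = min(int(x * p), p - 1)
        row = min(int(y * p), p - 1)
        return row * p + col

    return f"<loc_{bin_index(x0, y0)}>", f"<loc_{bin_index(x1, y1)}>"

def ground_span(text_span, box):
    """Format a grounded text span as a Markdown-style link."""
    tl, br = box_to_location_tokens(box)
    return f"[{text_span}]({tl}{br})"

# Example: ground the phrase "a snowman" to a box on the left side of the image.
print(ground_span("a snowman", (0.05, 0.10, 0.45, 0.90)))
# -> [a snowman](<loc_97><loc_910>)
```

Written this way, grounded captions remain ordinary token sequences, so the same language-model decoder can emit both text and box locations.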

Community

Is the GrIT dataset publicly available?

Paper author

Yes, it is now available on Hugging Face.
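For anyone who wants to try it, a minimal sketch of streaming the data with the `datasets` library is below; the repository id used here is an assumption, so check the links on this page for the exact dataset name.

```python
# Hedged sketch: stream the grounded image-text pairs from the Hugging Face Hub.
# The repository id "zzliang/GRIT" is an assumption, not confirmed by this page.
from datasets import load_dataset

# The corpus is large, so stream it rather than downloading everything up front.
grit = load_dataset("zzliang/GRIT", split="train", streaming=True)

sample = next(iter(grit))
print(sample.keys())  # inspect the available fields (caption, boxes, image URL, ...)
```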


Kosmos-2: Bridging Text and Vision with Grounded AI

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 0


Datasets citing this paper 1

Spaces citing this paper 8

Collections including this paper 5