{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:07.468876Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Welcome to the Fourth Workshop on Visually Grounded Interaction and Language (ViGIL).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Language is neither learned nor used in a vacuum, but rather grounded within a rich, embodied experience rife with physical groundings (vision, audition, touch) and social influences (pragmatic reasoning about interlocutors, commonsense reasoning, learning from interaction). For example, studies of language acquisition in children show a strong interdependence between perception, motor control, and language understanding. Yet, AI research has traditionally carved out individual components of this multimodal puzzle-perception (computer vision, audio processing, haptics), interaction with the world or other agents (robotics, reinforcement learning), and natural language processing-rather than adopting an interdisciplinary approach.",
"cite_spans": [
{
"start": 135,
"end": 160,
"text": "(vision, audition, touch)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "This fractured lens makes it difficult to address key language understanding problems that future agents will face in the wild. For example, describing \"a bird perched on the lowest branch singing in a high pitch trill\" requires grounding to perception. Likewise, providing the instruction to \"move the jack to the left so it pushes on the frame of the car\" requires not only perceptual grounding, but also physical understanding. For these reasons, language, perception, and interaction should be learned and bootstrapped together. In the last several years, efforts to merge subsets of these areas have gained popularity through tasks like instruction-guided navigation in 3D environments, audio-visual navigation, video descriptions, question-answering, and language-conditioned robotic control, though these primarily study disembodied problems via static datasets. As such, there remains considerable scientific uncertainty around how to bridge the gap from current monolithic systems to holistic agents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "What are the tasks? The environments? How to design and train such models? To transfer knowledge between modalities? To perform multimodal reasoning? To deploy language agents in the wild?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "As in past incarnations, the goal of this fourth ViGIL workshop is to support and promote this research direction by bringing together scientists from diverse backgrounds - natural language processing, machine learning, computer vision, robotics, neuroscience, cognitive science, psychology, and philosophy - to share their perspectives on language grounding, embodiment, and interaction. ViGIL provides a unique opportunity for interdisciplinary discussion. We intend to use this variety of perspectives to foster new ideas about how to define, evaluate, learn, and leverage language grounding. This one-day session will enable in-depth conversations on understanding the boundaries of current work and establishing promising avenues for future work, with the overall aim of bridging the scientific fields of human cognition and machine learning. This year, ViGIL will be co-located with the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). We accepted twenty-seven non-archival papers for presentation at the workshop, on topics including instruction following, image captioning, emergent communication, interactive learning, and semantic parsing, among others. The workshop features eight invited speakers with a diverse set of perspectives on language grounding, whose research spans cognitive science, robotics, computer vision, psycholinguistics, and core natural language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": ") focuses on infant language acquisition and development of concepts and language, and the relation between the two",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra Waxman (Professor, Department of Psychology, Northwestern University) focuses on infant language acquisition and development of concepts and language, and the relation between the two.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "UC Berkeley) focuses on computer vision, language, machine learning, graphics, and perception-based human computer interfaces",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Darrell (Professor, Electrical Engineering and Computer Sciences, UC Berkeley) focuses on computer vision, language, machine learning, graphics, and perception-based human computer interfaces.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": ") focuses on the implementation of biologically realistic neural-network in language, memory and visual perception",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max Garagnani (Lecturer, Department of Computing, University of London) focuses on the implementation of biologically realistic neural-network in language, memory and visual perception.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Massachusetts Institute of Technology) focuses on understanding the cognitive underpinning of natural language processing and acquisition",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Levy (Associate Professor, Department of Brain and Cognitive Science, Massachusetts Institute of Technology) focuses on understanding the cognitive underpinning of natural language processing and acquisition.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Allen Institute for Artificial Intelligence) works at the intersection of natural language and machine learning, with interests in computer vision and digital humanities",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen Institute for Artificial Intelligence) works at the intersection of natural language and machine learning, with interests in computer vision and digital humanities.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": ") focuses on constructing robots that seamlessly use natural language to communicate with humans",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefanie Tellex (Associate Professor, Department of Computer Science, Brown University) focuses on constructing robots that seamlessly use natural language to communicate with humans.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Department of Machine Learning, Carnegie Mellon University) explores building machines that understand the stories that videos portray and, using videos to teach machines about the world",
"authors": [
{
"first": "Katerina",
"middle": [],
"last": "Fragkiadaki",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katerina Fragkiadaki (Assistant Professor, Department of Machine Learning, Carnegie Mellon University) explores building machines that understand the stories that videos portray and, using videos to teach machines about the world.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Visiting Researcher at Facebook AI Research) focuses on visual reasoning, vision and language, image generation, and 3D reasoning using deep neural networks",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin Johnson (Assistant Professor, Department of Electrical Engineering and Computer Science, University of Michigan; Visiting Researcher at Facebook AI Research) focuses on visual reasoning, vision and language, image generation, and 3D reasoning using deep neural networks.",
"links": null
}
},
"ref_entries": {}
}
}