|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:29:34.810835Z" |
|
}, |
|
"title": "Bridging Multi-disciplinary Collaboration Challenges in ML Development Workflow via Domain Knowledge Elicitation", |
|
"authors": [ |
|
{ |
|
"first": "Soya", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "soya@mit.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Building a machine learning model in a sophisticated domain is a time-consuming process, partially due to the steep learning curve of domain knowledge for data scientists. We introduce Ziva, an interface for supporting domain knowledge from domain experts to data scientists in two ways: (1) a concept creation interface where domain experts extract important concept of the domain and (2) five kinds of justification elicitation interfaces that solicit elicitation how the domain concept are expressed in data instances.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Building a machine learning model in a sophisticated domain is a time-consuming process, partially due to the steep learning curve of domain knowledge for data scientists. We introduce Ziva, an interface for supporting domain knowledge from domain experts to data scientists in two ways: (1) a concept creation interface where domain experts extract important concept of the domain and (2) five kinds of justification elicitation interfaces that solicit elicitation how the domain concept are expressed in data instances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In recent decades, machine learning (ML) technologies have been sought out by an increasing number of professionals to automate their work tasks or augment their decision-making (Yang et al., 2019) . Broad areas of applications are benefiting from integration of ML, such as healthcare (Cai et al., 2019a,b) , finance (Culkin and Das, 2017) , employment (Manyika et al., 2017) , and so on. However, building an ML model in a specialized domain is still expensive and time-consuming for at least two reasons. First, a common bottleneck in developing modern ML technologies is the requirement of a large quantity of labeled data. Second, many steps in an ML development pipeline, from problem definition to feature engineering to model debugging, necessitate an understanding of domain-specific knowledge and requirements . Data scientists therefore often require input from domain experts to obtain labeled data, to understand model requirements, to inspire feature engineering, and to get feedback on model behavior. In practice, such knowledge transfer between domain experts and data scientists is very much ad-hoc, with few standardized practices or proven effective approaches, and requires significant direct interaction between data scientists and domain experts. Building a high-quality legal, medical, or financial model will inevitably require a data scientist to consult with professionals in such domains. In practice, these are often costly and frustrating iterative conversations and labeling exercises that can go on for weeks and months, which usually still do not yield output in a form readily consumable by a model development pipeline.", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 197, |
|
"text": "(Yang et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 307, |
|
"text": "(Cai et al., 2019a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 340, |
|
"text": "(Culkin and Das, 2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 376, |
|
"text": "(Manyika et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we set out to develop methods and interfaces that facilitate knowledge sharing from domain experts to data scientists for model development. We developed a domain-knowledge acquisition interface Ziva (With Zero knowledge, How do I deVelop A machine learning model?). Instead of a data-labeling tool, Ziva intends to provide a diverse set of elicitation methods to gather knowledge from domain experts, then present the results as a repository to data scientists to serve their domain understanding needs and to build ML models for specialized domains. Ziva scaffolds the knowledge sharing in desired formats and allows asynchronous exchange between domain experts and data scientists. It also allows flexible re-use of the knowledge repository for different modeling tasks in the domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Specifically, Ziva focuses on eliciting key concepts in the text data of a domain (concept creation), and rationale justifying a label that a domain expert gives to a representative data instance (justification elicitation). In the current version of Ziva, we provide five different justification elicitation methods -bag of words, simplification, perturbation, concept bag of words, and concept annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Creating a taxonomy is an effective way of organizing information (Laniado et al., 2007; Chilton et al., 2013) . Ziva provides an interface where SMEs can raw input", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 88, |
|
"text": "(Laniado et al., 2007;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 89, |
|
"end": 110, |
|
"text": "Chilton et al., 2013)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concept creation", |
|
"sec_num": "2.1" |
|
}, |
|
{

"text": "Figure 1: To facilitate domain knowledge sharing, Ziva curates representative instances from raw input and presents them, together with review interfaces, to domain experts; a domain expert (1) extracts a taxonomy from the instances and (2) explains the rationale for each label, and data scientists then use these reviews to understand the domain and build a model.",

"cite_spans": [],

"ref_spans": [

{

"start": 0,

"end": 8,

"text": "Figure 1",

"ref_id": "FIGREF0"

}

],

"eq_spans": [],

"section": "Concept creation",

"sec_num": "2.1"

},
|
{ |
|
"text": "extract domain concepts. Users are asked to categorize each example instance, presented as a card, via a card-sorting activity. Users first group cards by topic (general concepts of the domain such as atmosphere, food, service, price). Cards in each topic are then further divided cards into descriptions referencing specific attributes for a topic (e.g., cool, tasty, kind, high).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Knowledge extraction:", |
|
"sec_num": null |
|
}, |
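
{

"text": "As a minimal illustrative sketch (not part of the original system description, with hypothetical field names), a single concept produced by this card-sorting step could be stored in the knowledge repository as {\"topic\": \"food\", \"descriptions\": [\"tasty\"], \"example_instances\": [\"The red velvet is rich and moist!\"]}; the interface itself only commits to concepts consisting of topics and their attribute descriptions.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Concept creation",

"sec_num": "2.1"

},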
|
{ |
|
"text": "Once a domain expert finishes the concept extraction, they review each instance using one of elicitation interfaces, which ask the domain expert to justify an instance's label (this information is then intended for consumption by data scientists ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Justification-elicitation interface", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The justification elicitation interfaces were designed through an iterative process of paper prototyping, starting with initial designs inspired by our preliminary interviews. As we conducted paper prototyping, we examined if (1) the answers from different participants were consistent and (2) the information from participants' answers were useful to data scientists.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Justification-elicitation interface", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Bag of words. This base condition reflects the most common current approach. Given an instance and a label (e.g., positive, negative), the domain experts are asked to highlight the text snippets that justify the label assignment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Justification-elicitation interface", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Instance perturbation. Inspired by one of our data scientists in the formative study, this condition asks a domain expert to perturb (edit) the instance such that the assigned label is no longer justifiable by the resulting text. For example, in the restaurant domain, \"our server was kind\", can be modified to no longer convey a positive sentiment by either negating an aspect (e.g., \"our server was not kind\") or altering it (e.g., \"our server was rude\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Justification-elicitation interface", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Instance simplification. This condition asks domain experts to shorten an instance as much as possible, leaving only text that justifies the assigned label of the original instance. For example, \"That's right. The red velvet cake... ohhhh.. it was rich and moist\", can be simplified to \"The cake was rich and moist\", as the rest of the content does not convey any sentiment, and can therefore be judged irrelevant to the sentiment analysis task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Justification-elicitation interface", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Concept bag of words. This condition incorporates the concept extracted in the prior step. Similar to the Bag of words condition, domain experts are asked to highlight relevant text within each instance to justify the assigned label; however, each highlight must be grouped into one of the concepts. If, during Concept creation, the domain expert copied a card to assign multiple topics and descriptions, then the interface prompts multiple times to highlight relevant text for each one. For example, if they classified the instance, \"That's right. The red velvet cake... ohhhh.. it was rich and moist\", into the concept \"food is tasty\", they can select rich, moist and cake as being indicative words for that concept.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Justification-elicitation interface", |
|
"sec_num": "2.2" |
|
}, |
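
{

"text": "To make the elicited data concrete, a minimal sketch of a single record under this condition (with hypothetical field names, not the system's actual storage format) could be {\"instance\": \"That's right. The red velvet cake... ohhhh.. it was rich and moist\", \"label\": \"positive\", \"concept\": \"food is tasty\", \"highlights\": [\"rich\", \"moist\", \"cake\"]}.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Justification-elicitation interface",

"sec_num": "2.2"

},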
|
{ |
|
"text": "Concept annotation. This condition is similar to the above Concept bag of words condition. However, when annotating the instance text, domain experts are directed to distinguish between words relevant to the topic and words relevant to the description. Given the above sample instance, the domain expert would need to indicate which part of the sentence applies to food (e.g., cake) and which to tasty (e.g., rich and moist). Both this and the previous concept condition are motivated by the well-established knowledge that a variety of NLP tasks, such as relation extraction, question answering, clustering and text generation can benefit from tapping into the the conceptual relationship present in the hierarchies of human knowledge (Zhang et al., 2016) . Learning taxonomies from text corpora is a significant NLP research direction, especially for long-tailed and domain-specific knowledge acquisition (Wang et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 736, |
|
"end": 756, |
|
"text": "(Zhang et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 907, |
|
"end": 926, |
|
"text": "(Wang et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Justification-elicitation interface", |
|
"sec_num": "2.2" |
|
}, |
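
{

"text": "Under the same illustrative assumptions, a concept-annotation record could separate the two kinds of evidence, e.g., {\"instance\": \"That's right. The red velvet cake... ohhhh.. it was rich and moist\", \"label\": \"positive\", \"topic\": \"food\", \"topic_words\": [\"cake\"], \"description\": \"tasty\", \"description_words\": [\"rich\", \"moist\"]}; the field names are hypothetical and only illustrate the distinction between topic-level and description-level annotations.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Justification-elicitation interface",

"sec_num": "2.2"

},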
|
{ |
|
"text": "Details of the interface design and the evaluation can be found in .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Justification-elicitation interface", |
|
"sec_num": "2.2" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "hello ai\": Uncovering the onboarding needs of medical practitioners for human-ai collaborative decision-making", |
|
"authors": [ |
|
{ |
|
"first": "Carrie", |
|
"middle": [ |
|
"Jun" |
|
], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samantha", |
|
"middle": [], |
|
"last": "Winter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Steiner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lauren", |
|
"middle": [], |
|
"last": "Wilcox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Terry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carrie Jun Cai, Samantha Winter, David Steiner, Lau- ren Wilcox, and Michael Terry. 2019a. \"hello ai\": Uncovering the onboarding needs of medical practi- tioners for human-ai collaborative decision-making.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Human-centered tools for coping with imperfect algorithms during medical decision-making", |
|
"authors": [ |
|
{ |
|
"first": "Carrie Jun", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carrie Jun Cai et al. 2019b. Human-centered tools for coping with imperfect algorithms during medical decision-making.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Cascade: Crowdsourcing taxonomy creation", |
|
"authors": [ |
|
{

"first": "Lydia",

"middle": [

"B"

],

"last": "Chilton",

"suffix": ""

},

{

"first": "Greg",

"middle": [],

"last": "Little",

"suffix": ""

},

{

"first": "Darren",

"middle": [],

"last": "Edge",

"suffix": ""

},

{

"first": "Daniel",

"middle": [

"S"

],

"last": "Weld",

"suffix": ""

},

{

"first": "James",

"middle": [

"A"

],

"last": "Landay",

"suffix": ""

}
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1999--2008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lydia B Chilton, Greg Little, Darren Edge, Daniel S Weld, and James A Landay. 2013. Cascade: Crowd- sourcing taxonomy creation. In Proceedings of the SIGCHI Conference on Human Factors in Comput- ing Systems, pages 1999-2008.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Machine learning in finance: The case of deep learning for option pricing", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Culkin", |
|
"suffix": "" |
|
}, |
|
{

"first": "Sanjiv",

"middle": [

"R"

],

"last": "Das",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "Journal of Investment Management", |
|
"volume": "15", |
|
"issue": "4", |
|
"pages": "92--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Culkin and Sanjiv R Das. 2017. Machine learn- ing in finance: The case of deep learning for op- tion pricing. Journal of Investment Management, 15(4):92-100.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Using wordnet to turn a folksonomy into a hierarchy of concepts", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Laniado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Davide", |
|
"middle": [], |
|
"last": "Eynard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Colombetti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Semantic Web Application and Perspectives-Fourth Italian Semantic Web Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "192--201", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Laniado, Davide Eynard, Marco Colombetti, et al. 2007. Using wordnet to turn a folksonomy into a hierarchy of concepts. In Semantic Web Ap- plication and Perspectives-Fourth Italian Semantic Web Workshop, pages 192-201.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A future that works: Ai, automation, employment, and productivity", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Manyika", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Chui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehdi", |
|
"middle": [], |
|
"last": "Miremadi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Manyika, Michael Chui, Mehdi Miremadi, et al. 2017. A future that works: Ai, automation, employ- ment, and productivity. McKinsey Global Institute Research, Tech. Rep, 60.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Facilitating knowledge sharing from domain experts to data scientists for building nlp models", |
|
"authors": [ |
|
{ |
|
"first": "Soya", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "April", |
|
"middle": [ |
|
"Yi" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ban", |
|
"middle": [], |
|
"last": "Kawas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Liao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Piorkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marina", |
|
"middle": [], |
|
"last": "Danilevsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "26th International Conference on Intelligent User Interfaces", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "585--596", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soya Park, April Yi Wang, Ban Kawas, Q Vera Liao, David Piorkowski, and Marina Danilevsky. 2021. Facilitating knowledge sharing from domain experts to data scientists for building nlp models. In 26th International Conference on Intelligent User Inter- faces, pages 585-596.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "How ai developers overcome communication challenges in a multidisciplinary team: A case study", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Piorkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soya", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "April", |
|
"middle": [ |
|
"Yi" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dakuo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Portnoy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Piorkowski, Soya Park, April Yi Wang, Dakuo Wang, Michael Muller, and Felix Portnoy. 2021. How ai developers overcome communication chal- lenges in a multidisciplinary team: A case study.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A short survey on taxonomy learning from text corpora: Issues, resources and recent advances", |
|
"authors": [ |
|
{ |
|
"first": "Chengyu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaofeng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aoying", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1190--1203", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1123" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chengyu Wang, Xiaofeng He, and Aoying Zhou. 2017. A short survey on taxonomy learning from text cor- pora: Issues, resources and recent advances. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 1190- 1203. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Unremarkable ai: Fitting intelligent decision support into critical, clinical decision-making processes", |
|
"authors": [ |
|
{ |
|
"first": "Qian", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Steinfeld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Zimmerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qian Yang, Aaron Steinfeld, and John Zimmerman. 2019. Unremarkable ai: Fitting intelligent decision support into critical, clinical decision-making pro- cesses. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1- 11.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning concept taxonomies from multi-modal data", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1791--1801", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1169" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Zhang et al. 2016. Learning concept taxonomies from multi-modal data. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics, pages 1791-1801. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "the day was clam chowder and it was not incredible." |
|
} |
|
} |
|
} |
|
} |