{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:29:34.181644Z" }, "title": "A Computational Model for Interactive Transcription", "authors": [ { "first": "William", "middle": [], "last": "Lane", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles Darwin University", "location": {} }, "email": "" }, { "first": "Steven", "middle": [], "last": "Bettinson", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles Darwin University", "location": {} }, "email": "" }, { "first": "", "middle": [], "last": "Bird", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles Darwin University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Transcribing low resource languages can be challenging in the absence of a comprehensive lexicon and proficient transcribers. Accordingly, we seek a way to enable interactive transcription, whereby the machine amplifies human efforts. This paper presents a computational model for interactive transcription, supporting multiple modes of interactivity and increasing the likelihood of finding tasks that stimulate local participation. The approach also supports other applications which are useful in low resource contexts, including spoken document retrieval and language learning.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Transcribing low resource languages can be challenging in the absence of a comprehensive lexicon and proficient transcribers. Accordingly, we seek a way to enable interactive transcription, whereby the machine amplifies human efforts. This paper presents a computational model for interactive transcription, supporting multiple modes of interactivity and increasing the likelihood of finding tasks that stimulate local participation. The approach also supports other applications which are useful in low resource contexts, including spoken document retrieval and language learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Understanding the \"transcription challenge\" is a prerequisite to designing effective solutions, minimizing bottlenecks (Himmelmann, 2018) . We must face realities such as the lack of a good lexicon, the short supply of transcribers, and the difficulty of engaging people in arduous work. Sparse transcription is an approach to transcribing speech in these low-resource situations, an approach which is well suited to places where there is limited capacity for transcription. Sparse transcription admits multi-user workflows built around shared data, for human-in-the-loop transcriptional practices, or \"interactive transcription\" (Bird, 2020b; Le Ferrand et al., 2020) .", "cite_spans": [ { "start": 119, "end": 137, "text": "(Himmelmann, 2018)", "ref_id": "BIBREF9" }, { "start": 630, "end": 643, "text": "(Bird, 2020b;", "ref_id": "BIBREF2" }, { "start": 644, "end": 668, "text": "Le Ferrand et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Sparse transcription is 'sparse' because we do not produce contiguous transcriptions up front. Instead, we transcribe what we can, and lean on computational support to amplify those efforts across the corpus. 
This is not suggested as an alternative to contiguous transcription, but as a more efficient way to produce it, especially in those situations where linguists and speakers are \"learning to transcribe\" (Bird, 2020b, page 716) . Sparse transcription relies on word spotting. Wordforms that occur frequently in the transcribed portion of a corpus are used to spot forms in the untranscribed portion.", "cite_spans": [ { "start": 410, "end": 433, "text": "(Bird, 2020b, page 716)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These are presented for manual verification, speeding up the contiguous transcription work while indexing the entire corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Sparse transcription accepts the realities of early transcription: we lack a good lexicon; we need to grow the lexicon as we go; and we do not have a ready workforce of transcribers. Moreover, in the context of language documentation, transcription is iterative and interactive. Linguists and speakers leverage complementary skills to accomplish the task (Crowley, 2007; Austin, 2007; Rice, 2009) .", "cite_spans": [ { "start": 355, "end": 370, "text": "(Crowley, 2007;", "ref_id": "BIBREF3" }, { "start": 371, "end": 384, "text": "Austin, 2007;", "ref_id": "BIBREF0" }, { "start": 385, "end": 396, "text": "Rice, 2009)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Sparse transcription leverages the kind of work speakers are motivated to do. For example, when it comes to recordings, speakers tend to engage with the content more than the particular form of expression (Maddieson, 2001, page 215) . Identifying key words and clarifying their meanings is often more engaging than puzzling over the transcription of unclear passages (Bird, 2020b) . An indexed corpus can be searched to identify additional high-value recordings for transcription.", "cite_spans": [ { "start": 205, "end": 232, "text": "(Maddieson, 2001, page 215)", "ref_id": null }, { "start": 367, "end": 380, "text": "(Bird, 2020b)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We report on a computational model for interactive transcription in low-resource situations. We discuss the kinds of interactivity which the sparse transcription model enables, and propose an extension which provides real-time word discovery in a sparse transcription system. For concreteness we also present a user interface which provides real-time suggestions as the user enters words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We work with speakers of Kunwinjku (ISO gup), a polysynthetic Indigenous language of northern Australia. Members of this community have expressed interest in using technology to support their own language goals. Through this work we hope to support language learning and corpus indexing, and produce locally meaningful results that help to decolonize the practice of language technology (Bird, 2020a) .", "cite_spans": [ { "start": 384, "end": 397, "text": "(Bird, 2020a)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows. Section 2 gives an overview of the sparse transcription model. 
Section 3 describes a particular use case of sparse transcription: interactive transcription. In Section 4 we describe the system architecture and the design decisions which enable an interactive human-computer workflow. Section 5 describes the user interface and shows screenshots of the implementation. We conclude with a summary in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Following Bird (2020b), we understand transcription to be the task of identifying meaningful units in connected speech. These units belong to a growing inventory (the glossary, or lexicon); their orthographic representation is generally not settled. We add each new meaningful unit to the glossary as it is encountered, initializing the entry with a form and a gloss. Thus, a transcriptional token is a pairing of a locus in the speech stream with a glossary entry. We are agnostic about the size of this unit; it could be a morpheme, word, or multi-word expression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Sparse Transcription Model", "sec_num": "2" }, { "text": "Transcription begins with a lexicon. There is always a word list, since this is what is used for establishing the distinct identity of a language. There may also be some historical transcriptions, and these words can be included in the initial lexicon. From this point on, transcription involves growing the lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Sparse Transcription Model", "sec_num": "2" }, { "text": "The speech stream is broken up into 'breath groups' which we use as manageable chunks for transcription. In the course of transcription, it is a natural thing for a non-speaker linguist to attempt to repeat any new word and have a speaker say it correctly and give a meaning. Thus, the process is interactive in the interpersonal sense. We hear and confirm the word in context, and record it in the lexicon with a lexical identifier and a pointer to where it occurs in the media. In the background, a sparse transcription system uses this confirmed glossary entry to spot more instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Sparse Transcription Model", "sec_num": "2" }, { "text": "Word spotting is an automatic task which discovers putative tokens of glossary entries. Glossary entries are already stored with pointers to occurrences in particular breath groups. Discovering new instances through word spotting then becomes a retrieval task, where each breath group is seen as a mini-document. Breath groups which are determined to contain the exemplar lexical entry are queued for speaker confirmation. Confirmed spottings are updated with pointers to their respective breath groups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Sparse Transcription Model", "sec_num": "2" }, { "text": "Word spotting proceeds iteratively and interactively, continually expanding the lexicon while transcribing more speech. As we focus on completing the contiguous transcription of a particular text, we grow the lexicon and the system attempts to discover other instances across the wider corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Sparse Transcription Model", "sec_num": "2" }, { "text": "As the system calls our attention to untranscribed regions, which may be difficult to complete for a variety of reasons, we effectively marshal the whole corpus to help us. 
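Concretely, the core objects of this model can be sketched as follows. This is our own illustration of the data model described above, not the system's actual schema; all class and field names are invented for exposition.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BreathGroup:
    """A VAD-detected chunk of speech, treated as a mini spoken document."""
    media_id: str
    start: float  # offset into the recording, in seconds
    end: float

@dataclass
class GlossaryEntry:
    """A meaningful unit: a morpheme, word, or multi-word expression.
    Its orthographic form is a working representation, not yet settled."""
    entry_id: str
    form: str
    gloss: str

@dataclass
class Token:
    """A transcriptional token: a locus in speech paired with an entry."""
    entry_id: str
    breath_group: BreathGroup
    confirmed: bool = False  # True once a speaker has verified the spotting

@dataclass
class Glossary:
    entries: Dict[str, GlossaryEntry] = field(default_factory=dict)
    tokens: List[Token] = field(default_factory=list)

    def occurrences(self, entry_id: str) -> List[BreathGroup]:
        """Pointers from a glossary entry to where it occurs in the media."""
        return [t.breath_group for t in self.tokens
                if t.entry_id == entry_id and t.confirmed]
```

In this view, word spotting is a retrieval operation over the untranscribed breath groups: putative new Token instances are queued for speaker confirmation, and each confirmed token extends the entry's pointers into the corpus.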
A sparse transcription system is a form of computer-supported cooperative work, in that it alleviates productivity bottlenecks via automation and asynchronous workflows (Greif, 1988; Hanke, 2017) . The sparse transcription model, organized around a growing glossary of entries with pointers to instances in speech, can underlie a variety of special-purpose apps which support various tasks in the transcription workflow. For example, Le Ferrand et al. (2020) demonstrate the use of a word confirmation app based on word-spotted data for the purpose of confirming automatically generated hypotheses.", "cite_spans": [ { "start": 345, "end": 358, "text": "(Greif, 1988;", "ref_id": "BIBREF5" }, { "start": 359, "end": 371, "text": "Hanke, 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "The Sparse Transcription Model", "sec_num": "2" }, { "text": "We have prototyped a system which implements the core functionalities described in this section, and which includes a user interface supporting interactive transcription. Figure 2 gives a schematic view of the sparse transcription model. 1", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 183, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The Sparse Transcription Model", "sec_num": "2" }, { "text": "A linguist, learning to transcribe, is capable of listening to audio and quickly transcribing the lexemes they recognize. As lexemes are recorded, they are added to the transcriber's personal glossary. Entries in this glossary may be morphs, words, or other longer units such as multi-word expressions. The record-keeping of the glossary helps manage the linguist's uncertainty in an accountable way, as they give the task their best first pass. As is the standard behavior in sparse transcription, a glossary is updated with links from glossary entries to the segment of audio in which they were found.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Transcribe", "sec_num": "3" }, { "text": "Speakers of the language can access a view of the linguist's glossary entries, and confirm entry tokens for admission to the global glossary. The design decision to maintain personal glossaries for individual users and postpone adjudication with a shared, canonical glossary is an extension of the concept defined in the sparse transcription model. Figure 1 : Word spotting in the sparse transcription model begins when the user confirms the existence of a glossary entry in the audio. A token is created for that instance of the glossary entry, and can be used to spot similar instances in other breath groups across the corpus. 
Figure 2 : The Sparse Transcription Model: Audio is segmented into breath groups, each one a mini spoken document where words may be spotted (with given probability); interpretations span one or more breath groups (Bird, 2020b) .", "cite_spans": [ { "start": 844, "end": 857, "text": "(Bird, 2020b)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 349, "end": 357, "text": "Figure 1", "ref_id": null }, { "start": 630, "end": 638, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Learning to Transcribe", "sec_num": "3" }, { "text": "Multiple transcribers can contribute to the shared glossary, initializing their own project with the current state of the global lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Transcribe", "sec_num": "3" }, { "text": "Confirmed glossary entries can be used to spot similar entries across the whole corpus, maximizing the efforts of the learner, and providing more pointers from a glossary entry to breath groups where it occurs. Over time, this process leads to more contiguous transcriptions as the transcriber revisits and revises their lexicon in the course of their transcription work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Transcribe", "sec_num": "3" }, { "text": "However, there is an opportunity here to get more immediate feedback from the system. A sparsely transcribed breath group (whether system or human transcribed) provides signal about the breath group as a whole. Because the human is already engaged in entering hypotheses, we can provide system suggestions conditioned on the sparsely transcribed data, updated interactively as the user types. Anchored at the locus of a known lexeme, and conditioned on additional available signal, i.e., a predicted phone sequence, the system posits suggestions for untranscribed regions. We can refer to this as 'local word discovery' (Fig. 3) .", "cite_spans": [], "ref_spans": [ { "start": 652, "end": 660, "text": "(Fig. 3)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Learning to Transcribe", "sec_num": "3" }, { "text": "Working together with the system, a linguist can queue hypotheses for confirmation in the same way that word spotting queues hypotheses for speaker confirmation. Simultaneously, the transcriber leverages a model to get immediate feedback on the connections between what they hear and what a model encodes about the language, potentially aiding language learning (Hermes and Engman, 2017) .", "cite_spans": [ { "start": 369, "end": 394, "text": "(Hermes and Engman, 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Learning to Transcribe", "sec_num": "3" }, { "text": "Up to this point, we have established the interactive nature of transcription on three levels. First, it is interpersonally interactive, as a linguist works with speakers to associate forms with meanings. Second, sparse transcription is interactive in the sense that it attempts to amplify the effort of transcribers by propagating lexical entries across the whole corpus via word spotting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Transcribe", "sec_num": "3" }, { "text": "Finally, the implementation of local word discovery is interactive in the context of the \"learning to transcribe\" use case. 
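As a rough illustration of this third, keystroke-level form of interactivity, consider the following self-contained sketch. The function name and the candidate forms are invented for illustration; the point is that the expensive discovery query runs once per breath group, while narrowing the hints as the user types is a cheap local operation.

```python
def filter_hints(candidates: set, typed: str) -> list:
    """Narrow the candidate set on every keystroke; no server round trip
    is needed, since discovery already ran for the current breath group."""
    prefix = typed.strip().lower()
    return sorted(w for w in candidates if w.startswith(prefix))

# Hypothetical candidate set posited for one breath group (illustrative forms).
candidates = {"kabolkbebme", "karridjalbebme", "karribolkbawon"}
for typed in ("k", "ka", "karri"):
    print(typed, "->", filter_hints(candidates, typed))
```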
Local word discovery occupies a distinct niche with a smaller feedback loop than word spotting: transcription hints are polled from the model and filtered with every keystroke (Figs. 6-8 ). It is improved by word spotting because contiguous transcriptions reduce uncertainty in the input to the local word discovery model. It allows a linguist to prepare and prioritize work for the interpersonally interactive task of confirming entries with a speaker. ", "cite_spans": [], "ref_spans": [ { "start": 282, "end": 292, "text": "(Figs. 6-8", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Learning to Transcribe", "sec_num": "3" }, { "text": "The interactive transcription use case calls for a variety of computational agents. Some agents service computationally-expensive batch tasks, while others are coupled with user events down to the level of keystrokes. Agents are implemented as containerized services, some corresponding to long-running tasks, e.g. media processing, while others are integral to the user interface, e.g. phone alignment. The implementation supports RESTful endpoints and a real-time websocket-based API.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "4" }, { "text": "The API layer responds to events in the client, and endpoints support the methods in the data model. There are three main kinds of operation: simple CRUD operations like uploading media, data model operations such as adding a token to a glossary, and real-time queries such as word discovery. Data validation is distributed across the client and the server, for performance reasons and to mitigate the effects of network dropouts. The client replicates a subset of the server data model, storing this in the browser's database and synchronizing it with the server opportunistically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "4" }, { "text": "We utilise a continuous websocket session to relay user input to the server, fetching and displaying results in real time. Commonly seen in web search, this is a form of distributed user interface where computational resources are distributed across platforms and architectures (Elmqvist, 2011) . This is achieved via asynchronous programming with observable streams, via implementations of the ReactiveX pattern for JavaScript (rxjs) on the client and Python (rxpy) on the server. Input events from the browser are filtered, debounced and piped through a websocket transport to a session handler on the back end. Similarly, components of the client subscribe to session event streams coming from the back end, such as aligning user input to a phone stream, and presenting a series of word completions.", "cite_spans": [ { "start": 279, "end": 295, "text": "(Elmqvist, 2011)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "4" }, { "text": "The system makes use of several agents whose implementation may vary across contexts or evolve over time. We have implemented the following agents:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "4" }, { "text": "Audio pre-processing. When a user adds an audio file to a transcription project, the audio is preprocessed and we store metadata and alternative representations which are useful for downstream tasks. For example, the pipeline includes voice activity detection (VAD), which identifies breath groups. Next, we calculate peaks (acoustic amplitude values), which we use to visualize speech activity over time. 
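A minimal sketch of these first two steps follows, assuming 16 kHz, 16-bit mono PCM and the webrtcvad and numpy packages; the production pipeline may differ in its choice of VAD and parameters.

```python
import numpy as np
import webrtcvad

def speech_flags(pcm16: bytes, rate: int = 16000, frame_ms: int = 30) -> list:
    """Label each 30 ms frame as speech or silence; runs of speech
    frames become candidate breath groups."""
    vad = webrtcvad.Vad(2)                  # aggressiveness 0-3
    step = int(rate * frame_ms / 1000) * 2  # bytes per 16-bit mono frame
    return [vad.is_speech(pcm16[i:i + step], rate)
            for i in range(0, len(pcm16) - step + 1, step)]

def peaks(samples: np.ndarray, bins: int = 800) -> np.ndarray:
    """Amplitude envelope for the waveform display: max |sample| per bin.
    Assumes len(samples) >= bins."""
    trimmed = samples[: len(samples) // bins * bins]
    return np.abs(trimmed.reshape(bins, -1)).max(axis=1)
```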
Finally, the audio is resampled and sent to the phone recognition agent, and the results are displayed beneath the waveform as extra information to support transcription.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "4" }, { "text": "Phone recognition. Allosaurus is a universal phone recognizer trained on over 2,000 languages (Li et al., 2020) . The model can be used as-is to provide phones from a universal set, or it can be fine-tuned with language-specific phonemic transcriptions. The model we currently deploy is fine-tuned on 68 minutes of Kunwinjku speech across 5 speakers. We calculated a 25.6% phone error rate on 10 minutes of speech from a hold-out speaker.", "cite_spans": [ { "start": 94, "end": 111, "text": "(Li et al., 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "4" }, { "text": "Word spotting. Word spotting is traditionally audio exemplar matching against spans of raw audio (Myers et al., 1980) . It has been shown to be feasible in low resource scenarios using neural approaches (Menon et al., 2018b,a) . Le Ferrand et al. (2020) describe several plausible speech representations suited for low-resource word spotting.", "cite_spans": [ { "start": 97, "end": 117, "text": "(Myers et al., 1980)", "ref_id": "BIBREF18" }, { "start": 203, "end": 226, "text": "(Menon et al., 2018b,a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "4" }, { "text": "Local word discovery. This is distinct from word spotting, which locates more tokens of existing glossary entries. Local word discovery attempts to fill in untranscribed regions between existing tokens. This agent provides transcription hints via a smaller feedback loop, the third kind of interactivity discussed in Section 3. The system retrieves the potentially large set of suggested words, and filters it down interactively as the transcriber types. The model is free to favor recall, because the raw suggestions do not need to be immediately revealed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "4" }, { "text": "We implement local word discovery using a finite state analyzer for Kunwinjku (Lane and Bird, 2019) , modified to recognize possible word-forms given a stream of phones and the offsets of known lexemes. We use PanPhon to estimate articulatory distances between lexemes and phone subsequences to obtain rough alignments (Mortensen et al., 2016) .", "cite_spans": [ { "start": 78, "end": 99, "text": "(Lane and Bird, 2019)", "ref_id": "BIBREF10" }, { "start": 319, "end": 343, "text": "(Mortensen et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "System Architecture", "sec_num": "4" }, { "text": "The user interface (Fig. 5) is inspired by minimalist design, motivated by the need for an inclusive agenda in language work (cf. Hatton, 2013) . In the left column is a waveform which has been automatically segmented into breath groups. Below the waveform is a map of waveform peaks, to facilitate navigation across long audio files. Useful context is also displayed, including the transcript of the preceding breath group, followed by the sequence of phones produced from the audio, with user transcriptions aligned roughly to the phone sequence. 
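This rough alignment could be computed with articulatory feature distances, as in the following sketch, which slides a lexeme over the recognized phone sequence and keeps the best-scoring offset. We assume PanPhon's distance module; the function name is ours, and the one-IPA-symbol-per-phone simplification is something a real implementation would refine.

```python
from panphon.distance import Distance

dst = Distance()

def best_offset(lexeme_ipa: str, phones: list) -> int:
    """Return the start index in the phone sequence where the lexeme's
    articulatory feature edit distance is smallest.
    Assumes len(phones) >= len(lexeme_ipa)."""
    width = len(lexeme_ipa)  # crude: one phone per IPA symbol
    best = min(
        (dst.feature_edit_distance(lexeme_ipa, "".join(phones[i:i + width])), i)
        for i in range(len(phones) - width + 1)
    )
    return best[1]
```

Because every hypothesis is later confirmed by a speaker, a cheap score of this kind is enough to anchor known lexemes in the phone stream.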
Beneath the phone sequence is the input box, scoped to the current breath group, where users enter lexemes, with occasional suggestions offered by the local word discovery module, filtered interactively with each keystroke (Figs. 6-8 ).", "cite_spans": [ { "start": 130, "end": 143, "text": "Hatton, 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 19, "end": 27, "text": "(Fig. 5)", "ref_id": "FIGREF2" }, { "start": 758, "end": 768, "text": "(Figs. 6-8", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "User Interface", "sec_num": "5" }, { "text": "In the right column, there is a running transcript of the audio file, with the text of the transcript for the current breath group shown in bold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Interface", "sec_num": "5" }, { "text": "The user interface is designed to be navigable entirely through the keyboard, to support ergonomic transcription (cf. Luz et al., 2008) .", "cite_spans": [ { "start": 118, "end": 135, "text": "Luz et al., 2008)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "User Interface", "sec_num": "5" }, { "text": "Transcription is especially challenging when we lack a good lexicon and trained transcribers. Consequently, we seek to bring all available resources to bear, including the knowledge of speakers, linguists, and a system, all of whom are \"learning to transcribe.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We presented a use case for interactive transcription and showed how this can be supported within the sparse transcription model. In designing and implementing a sparse transcription system for a specific use case, we elaborated on some concepts presented in (Bird, 2020b) . We examined various kinds of interactivity in low-resource language transcription, and we proposed local word discovery as a grammatically informed complement to word spotting. This allows individual users to manage their local lexicon independently of the task of curating a canonical lexicon, enabling multi-user workflows.", "cite_spans": [ { "start": 259, "end": 272, "text": "(Bird, 2020b)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Finally, we reported on the architecture and implementation of an interactive transcription system. It enables a transcriber to take care of much of the arduous transcription task up front, and to allocate more meaningful work for speakers. The product of interaction with the system is an expanded lexicon, which can be used to index the corpus for information retrieval, thus supporting the community goal of access to knowledge locked up in many hours of recorded audio. Additionally, we anticipate that support for growing personal lexicons will be a valuable resource for the language learning that takes place alongside transcription. In short, the system is designed to produce the content that language communities care about, in a way that leverages the kind of language work that people are willing to do.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Operationalizing the sparse transcription model makes it possible to streamline field-based transcriptional practices, and is expected to lead to further implementations of special-purpose interfaces that support transcription of low-resource languages. 
Figure 7 : As the user continues typing, the list of suggestions is filtered down to those which remain compatible. Figure 8 : Thus, the user is guided to grammatically valid transcriptions which can be added to their lexicon.", "cite_spans": [], "ref_spans": [ { "start": 359, "end": 367, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The system prototype and a reference implementation of the sparse transcription model can both be found at https://cdu-tell.gitlab.io/tech-resources/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful for the support of the Warddeken Rangers of West Arnhem. This work was covered by a research permit from the Northern Land Council, and was sponsored by the Australian government through a PhD scholarship, and grants from the Australian Research Council and the Indigenous Language and Arts Program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Training for language documentation: Experiences at the School of Oriental and African Studies", "authors": [ { "first": "Peter", "middle": [], "last": "Austin", "suffix": "" } ], "year": 2007, "venue": "Documenting and Revitalizing Austronesian Languages, number 1 in Language Documentation and Conservation Special Issue", "volume": "", "issue": "", "pages": "25--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Austin. 2007. Training for language documentation: Experiences at the School of Oriental and African Studies. In Victoria Rau and Margaret Florey, editors, Documenting and Revitalizing Austronesian Languages, number 1 in Language Documentation and Conservation Special Issue, pages 25-41. University of Hawai'i Press.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Decolonising speech and language technology", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3504--3519", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird. 2020a. Decolonising speech and language technology. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3504-19, Barcelona, Spain.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Sparse transcription", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" } ], "year": 2020, "venue": "", "volume": "46", "issue": "", "pages": "713--744", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird. 2020b. Sparse transcription. Computational Linguistics, 46:713-744.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Field Linguistics: A Beginner's Guide", "authors": [ { "first": "Terry", "middle": [], "last": "Crowley", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Crowley. 2007. Field Linguistics: A Beginner's Guide. Oxford University Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Distributed user interfaces: State of the art", "authors": [ { "first": "Niklas", "middle": [], "last": "Elmqvist", "suffix": "" } ], "year": 2011, "venue": "Distributed User Interfaces", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niklas Elmqvist. 2011. Distributed user interfaces: State of the art. In Distributed User Interfaces, pages 1-12. Springer.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Computer-Supported Cooperative Work: A Book of Readings", "authors": [ { "first": "Irene", "middle": [], "last": "Greif", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irene Greif. 1988. Computer-Supported Cooperative Work: A Book of Readings. Morgan Kaufmann.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Computer-Supported Cooperative Language Documentation", "authors": [ { "first": "Florian", "middle": [], "last": "Hanke", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florian Hanke. 2017. Computer-Supported Cooperative Language Documentation. Ph.D. thesis, University of Melbourne.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "SayMore: Language documentation productivity", "authors": [ { "first": "John", "middle": [], "last": "Hatton", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Hatton. 2013. SayMore: Language documentation productivity. Presentation at International Conference on Language Documentation and Conservation.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Resounding the clarion call: Indigenous language learners and documentation", "authors": [ { "first": "Mary", "middle": [], "last": "Hermes", "suffix": "" }, { "first": "Mel", "middle": [], "last": "Engman", "suffix": "" } ], "year": 2017, "venue": "", "volume": "14", "issue": "", "pages": "59--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mary Hermes and Mel Engman. 2017. Resounding the clarion call: Indigenous language learners and documentation. Language Documentation and Description, 14:59-87.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Meeting the transcription challenge", "authors": [ { "first": "Nikolaus", "middle": [ "P" ], "last": "Himmelmann", "suffix": "" } ], "year": 2018, "venue": "Reflections on Language Documentation 20 Years after Himmelmann 1998", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikolaus P. Himmelmann. 2018. Meeting the transcription challenge. In Reflections on Language Documentation 20 Years after Himmelmann 1998, volume 15 of Language Documentation and Conservation Special Publication, pages 33-40. University of Hawai'i Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Towards a robust morphological analyzer for Kunwinjku", "authors": [ { "first": "William", "middle": [], "last": "Lane", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Lane and Steven Bird. 2019. Towards a robust morphological analyzer for Kunwinjku. In Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association, pages 1-9.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Enabling interactive transcription in an Indigenous community", "authors": [ { "first": "\u00c9ric", "middle": [], "last": "Le Ferrand", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Besacier", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3422--3428", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u00c9ric Le Ferrand, Steven Bird, and Laurent Besacier. 2020. Enabling interactive transcription in an Indigenous community. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3422-28. International Committee on Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Universal phone recognition with a multilingual allophone system", "authors": [ { "first": "Xinjian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Dalmia", "suffix": "" }, { "first": "Juncheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Jiali", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Mortensen", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Black", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Metze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "8249--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinjian Li, Siddharth Dalmia, Juncheng Li, Matthew Lee, Patrick Littell, Jiali Yao, Antonios Anastasopoulos, David R. Mortensen, Graham Neubig, Alan Black, and Florian Metze. 2020. Universal phone recognition with a multilingual allophone system. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, pages 8249-53. IEEE.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Interface design strategies for computer-assisted speech transcription", "authors": [ { "first": "Saturnino", "middle": [], "last": "Luz", "suffix": "" }, { "first": "Masood", "middle": [], "last": "Masoodian", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Deering", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat", "volume": "", "issue": "", "pages": "203--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saturnino Luz, Masood Masoodian, Bill Rogers, and Chris Deering. 2008. Interface design strategies for computer-assisted speech transcription. In Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat, pages 203-10.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Phonetic fieldwork", "authors": [ { "first": "Ian", "middle": [], "last": "Maddieson", "suffix": "" } ], "year": 2001, "venue": "Linguistic Fieldwork", "volume": "", "issue": "", "pages": "211--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Maddieson. 2001. Phonetic fieldwork. In Paul Newman and Martha Ratliff, editors, Linguistic Fieldwork, pages 211-229. Cambridge University Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Fast ASR-free and almost zero-resource keyword spotting using DTW and CNNs for humanitarian monitoring", "authors": [ { "first": "Raghav", "middle": [], "last": "Menon", "suffix": "" }, { "first": "Herman", "middle": [], "last": "Kamper", "suffix": "" }, { "first": "John", "middle": [], "last": "Quinn", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Niesler", "suffix": "" } ], "year": 2018, "venue": "Interspeech", "volume": "", "issue": "", "pages": "3475--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raghav Menon, Herman Kamper, John Quinn, and Thomas Niesler. 2018a. Fast ASR-free and almost zero-resource keyword spotting using DTW and CNNs for humanitarian monitoring. In Interspeech, pages 3475-79.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "ASR-free CNN-DTW keyword spotting using multilingual bottleneck features for almost zero-resource languages", "authors": [ { "first": "Raghav", "middle": [], "last": "Menon", "suffix": "" }, { "first": "Herman", "middle": [], "last": "Kamper", "suffix": "" }, { "first": "Emre", "middle": [], "last": "Yilmaz", "suffix": "" }, { "first": "John", "middle": [], "last": "Quinn", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Niesler", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 6th International Workshop on Spoken Language Technologies for Under-Resourced Languages", "volume": "", "issue": "", "pages": "182--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raghav Menon, Herman Kamper, Emre Yilmaz, John Quinn, and Thomas Niesler. 2018b. ASR-free CNN-DTW keyword spotting using multilingual bottleneck features for almost zero-resource languages. In Proceedings of the 6th International Workshop on Spoken Language Technologies for Under-Resourced Languages, pages 182-186.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "PanPhon: A resource for mapping IPA segments to articulatory feature vectors", "authors": [ { "first": "David", "middle": [ "R" ], "last": "Mortensen", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Akash", "middle": [], "last": "Bharadwaj", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Levin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 26th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3475--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "David R. Mortensen, Patrick Littell, Akash Bharadwaj, Kartik Goyal, Chris Dyer, and Lori Levin. 2016. PanPhon: A resource for mapping IPA segments to articulatory feature vectors. In Proceedings of the 26th International Conference on Computational Linguistics, pages 3475-84. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An investigation of the use of dynamic time warping for word spotting and connected speech recognition", "authors": [ { "first": "Cory", "middle": [], "last": "Myers", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Rabiner", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Rosenberg", "suffix": "" } ], "year": 1980, "venue": "Proceedings of the International Conference on Acoustics, Speech, and Signal Processing", "volume": "5", "issue": "", "pages": "173--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cory Myers, Lawrence Rabiner, and Andrew Rosenberg. 1980. An investigation of the use of dynamic time warping for word spotting and connected speech recognition. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, volume 5, pages 173-177. IEEE.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Must there be two solitudes? Language activists and linguists working together", "authors": [ { "first": "Keren", "middle": [], "last": "Rice", "suffix": "" } ], "year": 2009, "venue": "Indigenous language revitalization: Encouragement, guidance, and lessons learned", "volume": "", "issue": "", "pages": "37--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keren Rice. 2009. Must there be two solitudes? Language activists and linguists working together. In Jon Reyhner and Louise Lockard, editors, Indigenous language revitalization: Encouragement, guidance, and lessons learned, pages 37-59. Northern Arizona University.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Sparsely transcribed input can be leveraged for local word discovery methods which are complementary to word spotting.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "The system architecture.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "The transcription user interface connects to the data model, which facilitates word spotting and local word discovery agents.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "Local word discovery predicts possible words in the audio, conditioned on known lexemes and a flexible interpretation of the surrounding sounds.", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "Figure 7: As the user continues typing, the list of suggestions is filtered down to those which remain compatible.", "uris": null, "type_str": "figure", "num": null } } } }