_id: string (length 36)
text: string (200 to 328k characters)
label: string class (5 values)
4829cc93-6d75-4604-91ba-9e5a41948113
This perceived quality level was analyzed in RQ3 with respect to user expertise, reader accuracy, and the interaction protocol. As expected, the difference in explanations' quality between the human-first and AI-first interaction protocols was not significant and was associated with only a negligible effect: indeed, as mentioned previously in regard to trust, readers were assigned randomly to the two interaction protocols, which, aside from when the AI advice was shown, were essentially equivalent in terms of the given explanations. By contrast, even though the difference in explanations' perceived quality with respect to readers' baseline accuracy was similarly non-significant, the relationship between the two variables was associated with a medium-to-large effect size. Moreover, the difference in explanations' quality between novice and expert readers was associated with a large effect size and was also statistically significant. These results highlight how readers' proficiency in the ECG reading task (as measured either by self-reported expertise or, more quantitatively, by basal accuracy) might have a significant effect on the perception of explanatory advice. A possible explanation for this observation, already mentioned above in reference to trust, might be a greater acquaintance with AI and XAI systems among the less expert readers (who presumably were also less accurate). In addition, less expert readers (e.g., the students and residents) might have judged the explanations' quality higher because of their perceived usefulness in helping them identify characteristics of interest in an ECG that they were not able to interpret alone: more experienced or more accurate readers, who by definition were better versed in the interpretation of ECG diagrams, might have missed this novelty element of the explanations, which might have led to a lower perceived quality on average. Furthermore, explanations' quality was weakly correlated with the readers' basal trust in AI-based support systems, but significantly and strongly correlated with final trust. Importantly, it appears that the users' initial attitude had little influence on the perceived quality of explanations: this can be interpreted as a consequence of the fact that the explanations were evaluated for their intrinsic value, rather than for a halo effect (due to trust) [1]}. As for final trust, by contrast, the observed improvement is likely due to a simple reason: the support was perceived as a worthy addition to the decision-making process. Even though, due to how the user experiment was constructed, we cannot decouple the contribution of XAI from that of plain AI, given that the accuracy of the decision support was higher than that of the readers but overall not particularly high (equalling 70%), we conjecture that the explanation was probably the main factor in the trust increase, for its novelty and appropriability [2]} compared to simple categorical advice. Since the increase in final trust depends strongly, and significantly so, on explanation quality, this finding reinforces the idea that explanations do influence trust. We believe that this relevant finding also provides an alternative and complementary explanation of the observed effect of readers' expertise on final trust and trust difference.
Indeed, this latter effect could be explained as arising from the fact that less experienced readers rated the quality of explanations more favorably than the more expert readers did: in light of the strong relationship between explanations' quality and final trust, this might explain why we observed a larger increase in trust for novices than for experts.
d
703bbe50-f3d8-43e1-abef-b951a8177d32
The two final research questions - RQ4 and RQ5 - further delve into the effect of explanations by adopting the lens of the theory of technology dominance [1]}, [2]}, through which we investigated possible correlations between the readers' susceptibility to this phenomenon (which, in this article, was operationalized as the rate of decision change following exposure to the output of an AI system) and the explanations' perceived quality and actual correctness.
d
a44db747-b28c-49c7-9bbd-84d682424660
As clearly shown in Figure REF, explanation quality influences dominance, and especially positive dominance, strongly and significantly. A further confirmation of this effect can be traced back to the observations in Figure REF: indeed, both quality and dominance increased when an explanation was correct and pertinent to the case at hand. We believe this finding to be of particular interest since it confirms that high-quality explanations increase the persuasive potential of the XAI system - and especially so for the better (see, in particular, the rightmost panel in Figure REF). Nonetheless, as can be seen in the rightmost panel of Figure REF, quality can also influence users for the worse, as highlighted by the fact that explanations' quality was moderately associated with negative dominance (that is, with opinion changes from a correct to an incorrect diagnosis). A possible explanation for this effect is the readers' imperfect ability to discriminate a correct explanation from a wrong one in terms of perceived quality (see Figure REF). In turn, such an effect could favor the emergence of biases and cognitive effects associated with automation, especially those directly related to the role of XAI, as in the case of the white-box paradox [1]}, [2]}. Despite the relevance of these results, we remark that further research should address whether this correlation also holds in the case of placebic information (i.e., information that is neither semantically sensible nor structurally consistent), as described by Langer [3]}, translating this effort from the interpersonal dimension to that of Human-Computer Interaction.
d
942cb381-ff62-47ad-8e1c-30b0cb37e328
This study is exploratory, and its main limitations regard the relatively small sample of cases considered, if not the number of readers involved. In fact, with regard to participation, this study could leverage the perceptions and opinions of tens of cardiologists of different competences and levels of expertise. However, the study regards a serious game in which the doctors involved knew that no harm could be caused to real patients. That said, two main areas where further research could extend similar studies regard the stratification by explanation type and the analysis of the impact of explanations on the readers' confidence. On one hand, explanations should be distinguished according to a reference taxonomy, for instance those recently proposed in [1]}, [2]}, [3]}, [4]}, to see whether different types of explanations can have different effects on decision making: we recall that in this study we focused on textual explanations of a justificatory and causal kind [4]}. Moreover, explanations can be wrong in different ways: for instance, an explanation can be wrong because it does not regard (or is badly fitted to) either the case at hand or the machine's advice, or because it expresses a wrong way of reasoning. This macro distinction reflects the typology proposed by Reason for human error [6]}, in which lapses regard errors in perception or attention, while mistakes regard errors in reasoning and in the application of domain knowledge. The explanations provided in the study presented in this paper were of various kinds, depending on the case at hand and the ECG to read.
d
c5dc0660-c192-4fea-a306-044fc6626922
On the other hand, as mentioned above, we did not collect confidence scores at each human decision step (i.e., HD1, HD2 and FHD, see Figure REF); for this reason, we cannot address the research question of whether explanations improve confidence in the reported decision. However, we addressed this question in another study , where preliminary results suggest that (visual) explanations may paradoxically make users (slightly) less confident in their final decision. For this reason, further research should also investigate the confidence construct and its relationship with perceived user experience and satisfaction.
d
ab4533b2-7a9c-44d7-812b-464c6c8c74f9
The current interest in XAI is ostensibly and programmatically motivated by the need to make artificial intelligence (AI) systems more transparent, understandable, and thus usable. However, in light of some empirically grounded findings and of the literature on naturalistic decision making [1]}, [2]}, this interest appears to be more instrumental to the rising prevalence and diffusion of automated decision making (ADM) systems, especially when their use is anticipated in contexts for which the main legislative frameworks (e.g., the EU GDPR) require these systems to also provide reasons for their output whenever the latter can have legal effects. This addresses a requirement for justification rather than explanation, although these two concepts are often conflated (for an argument about the difference between explanation and justification, see [3]}).
d
015b889c-4cac-4923-8359-8d866dbdc660
In this light, it is important to notice that, metaphorically speaking, providing AI with explainability, that is, the capability to properly explain its own output, is more akin to painting the black box of inscrutable algorithms (such as deep learning or ensemble models) white than to making it transparent. What we mean by this metaphorical statement is that XAI explanations do not necessarily explain (by definition or ontological status) but rather describe the main output of systems aimed at supporting (or making) decisions: this is why we described XAI explanations as a meta-output. As such, explanations can fail to make the output they relate to more comprehensible, or to make its reasons explicit; they can even be wrong.
d
b4eb4c7c-446e-4dd6-a249-e33ed2fda069
This trivial observation is seldom emphasized, and it should motivate researchers to further investigate what happens when these failures occur, not only in terms of AI effectiveness (that is, the accuracy of hybrid decision making [1]}), but also in terms of efficiency (e.g., “does providing explanations make decision making more time consuming, or just more difficult?”) and satisfaction (“do the decision makers find the additional information useful, or at least do they feel more confident in their decision after consuming an explanation?”).
d
16d83b15-bff0-4bfc-b566-37f3e1a73ddc
Our findings suggest that we should not take the perceived and actual utility of explainable systems for granted: these qualities should be assessed in an ongoing manner, and in vivo rather than in labo, ensuring that quantitative measures of performance such as error rates, throughput and execution times do not take undue precedence over the evaluation of user experience [1]}. Rather, we should ground the design choice of making AI systems explainable (that is, capable of supplying explanations) on empirical evidence about the fit (or cognitive congruence) between the user and the artifact (i.e., trust and expectations), the user and the task (i.e., skill-difficulty match, expertise), and the artifact and the task to support, as proposed in the theory of technology dominance [2]}, [3]}.
d
1eeb0053-6550-49a9-9220-be2bf39fa46d
By evaluating the quality and usefulness of explanations in relation to user perception and performance, rather than in isolation, our study brought to light some paradoxical effects related to the introduction of explanations into diagnostic AI systems; it thus aims to contribute to the discussion around the necessity of a relational approach to AI design and evaluation. Following Virginia Dignum [1]}, we also call for greater attention to the dynamics of decision-making settings, as well as to how humans and machines come to interact, and even “collaborate”, in so-called Hybrid Human-Artificial Intelligence ensembles [2]}, [3]}. This leads to our belief that, even more so than from Engineering and Computer Science, the greatest advances for AI are likely to emerge from a multidisciplinary effort gathering relevant contributions from the scholarly fields of Cognitive Ergonomics [4]}, [5]}, Social Psychology [6]}, Human-Computer Interaction [7]}, Computer-Supported Cooperative Work [8]}, [9]}, and Human Factors [10]}, [11]}.
d
8327d0fe-ce68-42c2-a93a-67b4b6c50dd6
Remixing, the task of manipulating the levels and/or effects of individual instruments to create a derivative recording, is widely used in audio and music production applications such as music content creation (e.g., DJ performances), audio-visual post-production, remastering, podcasting, and more. Music remixing, in particular, is of critical interest and can be used to modify an original version of a song into a different version to suit a specific genre, e.g., from country to rock, or to alter the sound stage, e.g., to re-position an instrument's stereophonic location from the center to the left. In addition, there are several interactive applications of music remixing. These include various education scenarios: a learner may first want to boost the volume of the guitar solo to familiarize herself with it, and then suppress it so that she can play along with the music, but without the guitar solo. Another example could be an amateur band that wants to adjust the level or stereophonic image of all the instruments when those multiple sound sources were recorded together as a single mixture.
i
c86a095a-3143-421b-8bb5-64abe0e1e86b
For these applications and more, separate tracks of the original music are required, which are not always available. Hence, source separation algorithms are commonly used to estimate individual instrument tracks, allowing users to manipulate estimates of the separated instruments. Because of this, remixing systems rely heavily on the separation performance and work well only if a nearly perfect separation is achieved. Such high-quality results, however, are challenging even for state-of-the-art source separation systems, especially when the number of sources becomes large and the separation process is unsupervised [1]}, [2]}. <FIGURE>
i
f8251bac-3d2f-4d3a-a08c-78553294f73f
In this work, we aim to minimize the separation artifacts found in remixing systems by learning to remix directly. To do so, we first justify our problem formulation with an analysis based on commonly used source separation evaluation metrics. Then, we extend a state-of-the-art source separation model, Conv-TasNet [1]}, and propose two adaptations toward end-to-end remixing: (a) one that applies the remixing weights to the source estimates, and (b) one that controls a latent variable by applying the weights to the bottleneck feature of Conv-TasNet, which is why we chose it as the baseline model. Both are regularized so that the separator is informed of the remixing target. We evaluate the proposed methods on two music source separation datasets, Slakh [2]} and MUSDB [3]}, and analyze the behavior of the models in various remixing scenarios.
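To make adaptation (a) concrete, the following PyTorch sketch (ours, not the paper's code) wraps a generic separator, applies user-specified remixing gains to the per-source estimates, and sums them, with an auxiliary term that keeps the separator informed of the remixing target. The class names, the toy separator, and the MSE-based regularizer are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySeparator(nn.Module):
    """Stand-in for a Conv-TasNet-like separator: mixture -> per-source estimates."""
    def __init__(self, n_sources=4):
        super().__init__()
        self.conv = nn.Conv1d(1, n_sources, kernel_size=1)

    def forward(self, mixture):                  # mixture: (batch, time)
        return self.conv(mixture.unsqueeze(1))   # (batch, n_sources, time)

class WeightedRemixer(nn.Module):
    """Adaptation (a): apply remixing gains directly to the source estimates."""
    def __init__(self, separator):
        super().__init__()
        self.separator = separator

    def forward(self, mixture, gains):           # gains: (batch, n_sources)
        est_sources = self.separator(mixture)                    # (batch, n_sources, time)
        remix = (gains.unsqueeze(-1) * est_sources).sum(dim=1)   # (batch, time)
        return remix, est_sources

def remix_loss(remix, target_remix, est_sources, target_sources, alpha=0.5):
    """Remixing loss plus a separation regularizer (illustrative weighting)."""
    return F.mse_loss(remix, target_remix) + alpha * F.mse_loss(est_sources, target_sources)

# Toy usage: remix a batch of mixtures with per-source gains.
model = WeightedRemixer(ToySeparator(n_sources=4))
mixture = torch.randn(2, 16000)
gains = torch.tensor([[1.0, 0.5, 2.0, 0.0], [1.0, 1.0, 1.0, 1.0]])
remix, sources = model(mixture, gains)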
i
7ac77157-9554-49a7-8e8f-d640d6b40344
In the music information retrieval literature, remixing has been studied from various perspectives, e.g., amplifying single instruments [1]}, [2]}, user interface system design [3]}, and hearing health [4]}. The methodologies in previous work fall into two categories: feature-based methods and source separation-based methods. When the aim is to adjust one or a limited number of specific musical instruments among others (usually drums), remixing often relies on features of the target sources. Yoshii et al. apply template matching to localize the single tones of the drum in polyphonic music and to detect their onsets and offsets, aiming to adjust the volume and timbre of drum and bass sources [2]}. Similarly, in [1]} a band-wise harmonic/noise decomposition utilizes the stochastic part of the signal in different frequency bands to detect drum events. These drum-specific remixing systems report successful performance in both attenuation (\(-10\) and \(-6\) dB) and amplification (10 dB) tasks. However, their restriction to a limited set of instruments is an obvious downside. The multi-source control scenario therefore necessitates another methodology—remixing based on music source separation (MSS).
w
6aaae20c-6d9d-4cf7-a052-60221263507a
When a remixing system uses estimated sources, it inevitably suffers from imperfect separation quality (discussed in detail in Sec. REF). In earlier years, side information was often provided to aid the less powerful separation models. Woodruff et al. proposed a remixing system that works with stereophonic signals [1]}: they define an informed source separation system that makes use of knowledge of spatial information and musical scores to assist the separation decision. Itoyama et al. describe a user interface system for music remixing, where the MSS step integrates harmonic and inharmonic models and relies on synchronized MIDI files [2]}.
w
a087dc8f-1d09-4b7c-83c7-04072f2ea342
MSS performance has increased significantly since deep learning was introduced to the field. By utilizing deep neural networks' capability of learning complex representations, researchers have explored a variety of methods and architectures for MSS, such as joint optimization of masks and recurrent neural networks [1]}, network blending [2]}, unsupervised learning [3]}, and manipulation of latent variables [4]}, [5]}. Compared to conventional signal processing methods, deep learning also shows its strength in waveform-domain processing [6]}, [7]}, [8]}, which to a large extent preserves the phase information. Recent research shows a trend toward taking into account more auxiliary information, e.g., musicology [9]} or content information [10]}. Open-Unmix nicely summarizes various open-source MSS implementations [11]}. <FIGURE>
w
5e23e303-064f-47b9-bc45-47eb084a5eef
The advancements in deep learning-based MSS also influence research on remixing [1]}, [2]}. In [3]} an evaluation mechanism is proposed to investigate the influence of five source separation algorithms on remixing, four of which involve deep neural networks to some extent, either convolutional or recurrent. The best result reported in that paper is an increase of the vocal level by up to 6 dB.
w
c99f73d8-1d61-4c1f-ab65-30cc869cd7ed
It is noticeable that all the remixing systems in the literature treat source separation and remixing as two independent processes, where remixing serves as a post-processor that relies completely on the separation result. In contrast, our proposed neural remixing systems take mixture signals as input and perform separation and remixing in a single inference pass, i.e., in an end-to-end manner. To the best of our knowledge, this is the first attempt to integrate remixing and source separation in a single model. Furthermore, it has been shown that manipulating the hidden variables of a neural network provides additional benefits in a source separation system [1]}. In this work, we also investigate the potential of controlling the estimated sources in the latent representation. This kind of interactive system has been proven useful in source separation systems [2]}.
w
ff770ae2-2675-4e64-a090-9e178c231d0a
This paper introduced a neural remixing model to tackle the remix use case where the separated source tracks are not available. As an alternative to the conventional separator-remixer workflow, we integrated the two processes by adding an extra regularizer to the proposed end-to-end neural remixer. We evaluated the model on two music source separation datasets, Slakh and MUSDB. Results on both datasets show that the regularization mechanism greatly reduces the artifacts produced in the source separation process. As a result, our models achieved a significant improvement in remix quality. From the perspective of user interaction, we demonstrated that the estimated remix correlates reasonably well with the intended one, as opposed to the remix induced by the baseline model. Sound examples and source code are available at https://saige.sice.indiana.edu/research-projects/neural-remixer.
d
d3b4d62d-c3e7-450d-b5d1-ca54e5334447
The inherent structure found in documents – paragraphs, sentences, and tokens – and their inter-dependence is vital to document-level sentiment, as rhetorical devices and anaphoric relationships disperse the sentiment signal across the various sub-components [1]}. This also means that not all sub-components contribute equally towards identifying the overall polarity of a document [2]}, [3]}, and models that are able to take these relationships into account should theoretically perform better.
i
7cad50ed-9397-4d16-b3da-d502e69e8964
Recently, two divergent research directions have shown promise for document classification: on the one hand, transfer learning [1]}, [2]}, [3]}, and on the other, hierarchical modeling [4]}, [5]}, [6]}. Transfer learning (in its current form) attempts to take advantage of large amounts of unlabeled text in order to improve contextualized representations of tokens, while ignoring the structure of documents. Hierarchical models, on the other hand, attempt to take document structure into account by first building up representations for sentences and then aggregating them to create document representations.
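As a concrete illustration of the hierarchical idea (not the exact architectures compared in the paper), the following PyTorch sketch first encodes tokens into sentence vectors and then aggregates sentence vectors into a document vector before classification; mean pooling stands in for the attention used in HAN-style models, and all layer sizes are arbitrary assumptions.

import torch
import torch.nn as nn

class HierarchicalDocClassifier(nn.Module):
    """Illustrative hierarchical model: tokens -> sentence vectors -> document vector -> label."""
    def __init__(self, vocab_size, emb_dim=100, hid=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.sent_rnn = nn.GRU(emb_dim, hid, batch_first=True, bidirectional=True)
        self.doc_rnn = nn.GRU(2 * hid, hid, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hid, n_classes)

    def forward(self, docs):
        # docs: (batch, n_sents, n_tokens) of token ids
        b, s, t = docs.shape
        tok = self.emb(docs.view(b * s, t))             # (b*s, t, emb)
        sent_h, _ = self.sent_rnn(tok)                  # (b*s, t, 2*hid)
        sent_vec = sent_h.mean(dim=1).view(b, s, -1)    # (b, s, 2*hid) sentence vectors
        doc_h, _ = self.doc_rnn(sent_vec)               # (b, s, 2*hid)
        doc_vec = doc_h.mean(dim=1)                     # (b, 2*hid) document vector
        return self.out(doc_vec)

# Toy usage: two documents, six sentences each, twenty tokens per sentence.
logits = HierarchicalDocClassifier(vocab_size=5000)(torch.randint(1, 5000, (2, 6, 20)))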
i
353fa21d-65c0-4b83-8ab2-d820b7a2fc0c
While the two approaches are complementary, in the sense that one could use pretrained LMs for transfer learning also in hierarchical models, we here focus on isolating their relative strengths and weaknesses. In this paper we empirically show that methods which explicitly incorporate the structure of documents outperform those that do not, and we further examine the influence of data characteristics such as document length and training set size on the choice of architecture. Finally, we release the code to reproduce the results of our study: https://github.com/jerbarnes/hier_vs_transfer
i
4327fe9d-f6a9-4e65-a67f-aba472e35b84
The main research questions we seek to address in this section are: for document-level sentiment classification, are there systematic performance differences between flat, hierarchical, and LM-pretrained models, and do any of these approaches offer consistent improvements? Further, we investigate how performance is affected by several relevant data characteristics. Besides testing on the original data, we also test on documents where the order of the sentences is shuffled (shuffled), to determine sensitivity to sentence order.
m
0e137ebc-a874-4a2f-af62-227c1490d4a9
Table REF shows the accuracy (\(\text{F}_1\) results are similar) of the seven models for all five languages and their average, as well as statistical significance (we calculate statistical significance with approximate randomization testing [1]} with 10000 runs). The Bow model performs well across all experiments, achieving an average accuracy of 64.2, and ties with Han on the German dataset (73.2). The Cnn performs worse than the Bow across all experiments except Japanese (an average loss of 1.2 percentage points (pp)). An has the second best overall performance (66.4).
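For reference, here is a minimal sketch of a paired approximate randomization test of the kind cited above (our illustrative implementation, not the authors' code): the two systems' per-example outcomes are randomly swapped, and the p-value estimates how often the resulting accuracy gap is at least as large as the observed one.

import random

def approx_randomization_test(correct_a, correct_b, n_runs=10000, seed=0):
    """Paired approximate randomization test on per-example 0/1 correctness lists."""
    rng = random.Random(seed)
    n = len(correct_a)
    observed = abs(sum(correct_a) - sum(correct_b)) / n
    at_least_as_extreme = 0
    for _ in range(n_runs):
        diff = 0
        for a, b in zip(correct_a, correct_b):
            if rng.random() < 0.5:     # swap the two systems' outcomes for this example
                a, b = b, a
            diff += a - b
        if abs(diff) / n >= observed:
            at_least_as_extreme += 1
    return (at_least_as_extreme + 1) / (n_runs + 1)    # p-value estimate

# Example: correctness of two models on the same (tiny) test set.
p = approx_randomization_test([1, 1, 0, 1, 0, 1, 1, 0], [1, 0, 0, 1, 0, 0, 1, 0])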
r
d2c3b28d-5157-42ed-8643-985bd2dd0545
Regarding the hierarchical models, the Hcnn performs on par with the flat Cnn (avg. 63.6), while the Han model is the best on Norwegian (61.4) and the best overall (avg. 66.4). This seems to indicate that while it is useful to explicitly model hierarchical structure using the Han model, the Hcnn is not as well suited to the task.
r
198f2e89-fad4-43fe-abb0-567e9e5b5492
ULMFiT performs better than Bow on Norwegian and Japanese (0.1 / 3.1 pp), but 0.4 pp worse overall. mBert is the best model on English, French, and German (73.4, 65.6, 74.6 respectively), but performs poorly on Norwegian (54.4) and Japanese (56.3).
r
40c3a811-bf0d-4fc8-9608-1ebe28911547
Shuffling the test data hurt performance in 22 of 30 experiments and had the highest impact on the Norwegian and Japanese data – an average drop of 5.6 pp – both of which have the longest documents. The Cnn seems most sensitive to sentence order, losing 8.3 pp overall, followed by ULMFiT (3.3) and mBert (2.0). Surprisingly, the hierarchical models are more robust to changes in sentence order.
r
01420fb2-6549-4a04-94a1-bc04f188e598
We have compared flat, hierarchical, and transfer learning models for document-level sentiment classification in five different languages and have shown that the best model depends on the characteristics of the data. We also found that hierarchical models perform similarly to transfer learning approaches even in low-resource scenarios, contrary to expectation.
d
6fc0ea09-cf58-4cf2-a6c8-512cbbace447
Given a specific problem, does there exist the “fastest” algorithm for it? Does there exist a proof system possessing the “shortest” proofs of the positive solutions to the problem? Although the first result in this direction was obtained by Levin [1]} in the 1970s, these important questions are still open for most interesting languages, for example, the language of propositional tautologies.
i
777d3edb-c10b-4a21-9323-1670dded7eea
Version 4.04 of the OCaml programming language, released in November 2016, introduced the possibility to unbox single-constructor variants and single-immutable-field records. In other words, a value inhabiting such a datatype is exactly represented at runtime as the value that it contains, rather than as a pointer to a block containing a tag for the constructor and the contained value. The removal of this indirection is called constructor unboxing, or unboxing for short, and it allows for a slight improvement in speed and memory usage.
i
f884661b-fa1a-4e6f-a20f-c79780735598
One of the main interests of unboxing is that it allows the incorporation of semantic type distinctions without losing runtime efficiency – the mythical zero-cost abstraction. For example, values of type uid are distinct from those of type int for the type checker, but they have the same runtime representation. Unboxing resolves a tension between software engineering and performance.
i
3d0ec1eb-6f1c-4919-95ef-773ce8cf6ed7
Unboxing becomes even more interesting when it is combined with advanced features such as existential types (with GADTs) or higher-rank polymorphism (with polymorphic record fields), which are otherwise only accessible in boxed form. For example:

type 'a data = { name : string; data : 'a }
type anydata = Anydata : 'a data -> anydata [@@unboxed]
i
75c3df80-6260-4c38-ab04-728c7c2bd377
The memory representation of values used in OCaml and similar languages finds its origin in Lisp-like languages [1]}. Despite the advantage of allowing every value to have the same, single-word representation, this approach also has obvious limitations in terms of performance due to the introduction of indirections. As a consequence, ways of lowering this overhead in certain scenarios have been investigated, one possibility being to mix tagged and untagged representations [2]}, [1]}, [4]}. Another idea that has been investigated is to consider unboxed values as first-class citizens, although distinguished by their types [5]}.
w
c4cade30-377a-4ab1-b1f8-4b0e7c79e2d0
Due to increasing traffic in urban environments and changing customer demands, self-driving vehicles are one of the most discussed and promising technologies in the car and robotics industry. Still, no system has yet been presented that can localize robustly under all light, weather and environmental conditions. However, precise localization is a vital feature for any autonomous driving task, since a wrong pose estimate may lead to accidents. Especially in urban environments, safety margins on the position of the car are small due to crowded traffic and other traffic participants (e.g., pedestrians, cyclists). Because of multi-path effects or satellite blockage, GPS sensors cannot be used reliably under those urban conditions. Thus, other sensors have to be used for localization. For this purpose, mainly LiDARs and cameras have been used in recent years. Appearance changes in urban environments challenge visual localization approaches. However, such driving scenarios contain structures that persist even under those appearance changes. Curbstones are one such feature. Curbstones are used to protect pedestrians from cars and to separate the sidewalk from the street. As they delimit the street, they also provide information about the area in which the car is allowed to be. Detecting their position relative to the car thus makes it possible to localize within the lane. In contrast to other geometric shapes such as poles and road markings, curbstone measurements are found more frequently in urban environments and yield a reliable, continuous lateral constraint for pose refinement. Due to their shape and, in many cases, their contrasting color with respect to the pavement, they can be detected both in camera images and in LiDAR pointclouds. <FIGURE>
i
8acf750b-dbfb-4c41-a1dc-0bb57c18f924
Our pipeline, named MOZARD, therefore extends our previous visual localization system, VIZARD [1]}, with additional geometric features from LiDAR data, for the self-driving cars used in the UP-Drive project (a research endeavor funded by the European Commission, aiming at advancing research and development towards fully autonomous cars in urban environments; see www.up-drive.eu). In a thorough evaluation of our proposed localization system using our long-term outdoor dataset collection, we investigate key performance metrics such as localization accuracy and recall, and we demonstrate possible failure scenarios in a case study.
i
c1ea274e-953f-451f-8842-d273629135df
Our contributions are the following: (i) a semantic extension of our key-point-based localization pipeline, based on the extraction of curbstone information, which allows us to bridge sparse key-point scenarios in visual localization; (ii) a thorough evaluation on the long-term dataset collection UP-Drive, demonstrating reliable localization performance across different appearance conditions in urban outdoor environments, in which we compare our results to our vision-based localization pipeline and demonstrate significant performance increases; and (iii) a computational performance analysis showing that our proposed algorithm exhibits real-time capability and better scalability.
i
9afd2d46-f505-479f-8f3d-3dbd89ca840d
Since our localization system is a multi-modal semantic extension of our previous work, we concentrate the related work on frameworks that exploit semantic features using either a single modality or a fusion of multiple modalities. These works can therefore be subdivided according to their specific sensor setups. For general related work on our prior visual key-point-based system, we refer to our previous work [1]}.
w
7a1c6d03-bcc2-4824-95a1-c9e3a6e04430
A schematic overview of MOZARD can be found in Figure REF . Since our work extends the VIZARD framework, we refer to the general methodology from Buerki et al. [1]}. We assume that our visual localization pipeline already created a map by tracking and triangulating local 2D features extracted along a trajectory.
m
a60467ea-c3cc-4766-9e2a-52bb81e90967
In the following section, the performance of the proposed pipeline is evaluated and compared against the VIZARD pipeline as a benchmark. Long-term experiments in an urban scenario are performed under varying weather and appearance conditions. A special focus is placed on how curbstone map tracking influences localization accuracy and recall. Example cases are presented where localization gaps in the VIZARD pipeline could be bridged using curbstone localization. The sensor set-up of the UP-Drive vehicle and the datasets used in the experiments are described in the next section.
m
d6d37caa-cf65-4f69-b756-93c01321f1db
We presented MOZARD, a geometric extension to our visual localization system for urban outdoor environments. Through our evaluation on 8 datasets, including several kilometers of real-world driving conditions, we demonstrated the benefits of using curbstone information for localization and mapping. The datasets used in the experiments contain challenging appearance conditions such as seasonal changes, wet road surfaces and sun reflections. A comparison with our prior work demonstrated that we can achieve a higher recall while using fewer datasets during the mapping process, whereas the prior pipeline would fail due to sparse keypoint scenarios. Our run-time analysis shows that our approach exhibits real-time capabilities. Although the curbstone detection stack of MOZARD takes on average more computing time than VIZARD, it should be noted that an object segmentation/detection algorithm has to be deployed on a self-driving car for environmental perception regardless of whether localization takes place. Even taking into account the total computational time, our approach still runs at \(10Hz\) while needing up to four times less data and achieving the same localization performance. We also showed specific cases where both of our pipelines would fail due to occlusions and/or curbstone misalignment, giving suggestions for future work such as the extension of our approach to poles and road markings. Our findings showed that by extending a keypoint-based visual localization approach with geometric features - curbstones in our case - an improvement in robustness with consistently high localization accuracy is obtained.
d
31537821-7c08-4947-ba9d-6f4372d2c737
Verse-chorus song form is a very common structure for popular music. In it, verses alternate with choruses, with the lyrics of the verses varying and the choruses repeating more strictly and more frequently. The authors of [1]} cite other generalizations used to define choruses, including that they are the `most prominent' and `most catchy' sections of a piece. These traits make it desirable to detect choruses automatically, whether for generating “thumbnails” [2]}, [3]}, [4]}, for finding the emotional “highlights” of a piece [5]}, or for enabling convenient navigation based on the song structure [6]}.
i
9001fa92-1762-4ba3-bca3-eb220da910dd
However, most previous approaches to chorus detection and thumbnailing [1]}, [2]}, [3]}, [4]} are unsupervised. They begin with an observation about what typifies chorus sections and search for them on this basis: e.g., finding the loudest, most frequently repeated, and/or most homogeneous section. Since the definition of `chorus' is a generalization that does not apply in all cases, even a perfectly designed system of this type will fail to detect the chorus in many songs. A better approach may be to let a model learn what defines `chorusness' from labeled examples; this would allow a system to leverage the timbral and spectral features identified by [5]} in a study of which acoustic features differentiate choruses.
i
de309614-b651-4f49-a185-9d17b2d16657
This approach, when applied to the related task of music boundary detection by [1]}, led to a huge leap in the state of the art. Prior segmentation algorithms would generally focus on a definable proxy task (e.g., detecting points of change or onsets of repetitions), assisted by sensible heuristics (e.g., rounding boundary estimates to the nearest downbeat). In [1]}, by contrast, a convolutional neural network (CNN) was trained to detect whether the center of a 16-second input is a boundary; when post-processed with an appropriate threshold, this approach demonstrated a 10% improvement in f-measure over the state of the art.
i
e09547e0-b8bc-484f-8efa-bbdbf4b7a9f3
We propose a similar approach: train a neural network to predict the “chorusness” of an excerpt directly from the audio, without the context of the rest of the song. We train a binary classifier to predict the “chorusness” of each point in a window, and slide this window throughout the song to obtain a chorus probability curve. However, this leaves the problem of finding an appropriate threshold for post-processing. To ease this, we propose to jointly model the chorus activation and boundary activation curves, so that the loss on the signals around the boundaries is naturally emphasized. At the inference phase, this also eases the process of converting the raw probability curve into a binary output for a song.
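The sliding-window inference step can be sketched as follows (illustrative only; the window length, hop size, and the stand-in scoring function are assumptions, not the paper's settings): any model that maps a fixed-length window of features to per-frame "chorusness" probabilities is slid over the song, and the overlapping predictions are averaged into a single probability curve.

import numpy as np

def chorus_probability_curve(features, predict_window, win_len=1024, hop=256):
    """Slide a fixed-length window over frame-level features and average the
    overlapping per-frame 'chorusness' predictions into a single curve.
    `predict_window` is any callable mapping (win_len, n_feats) -> (win_len,) probabilities."""
    n_frames = features.shape[0]
    prob_sum = np.zeros(n_frames)
    counts = np.zeros(n_frames)
    for start in range(0, max(n_frames - win_len, 0) + 1, hop):
        window = features[start:start + win_len]
        probs = predict_window(window)
        prob_sum[start:start + win_len] += probs
        counts[start:start + win_len] += 1
    counts[counts == 0] = 1          # frames not covered by any window keep probability 0
    return prob_sum / counts

# Toy usage: a stand-in "model" that scores frames by relative energy.
feats = np.abs(np.random.randn(5000, 80))
curve = chorus_probability_curve(feats, lambda w: w.mean(axis=1) / w.mean(axis=1).max())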
i
d7e0fa10-b169-455d-a9cb-6ff9e696d061
Chorus detection is clearly related to two tasks with a long tradition of MIR research: thumbnailing and music structure analysis (MSA) [1]}. The objective of thumbnailing is to find a short excerpt of a song that would be an effective preview. However, there is no definition of what makes a good preview; [2]} cited several. In practice, thumbnailing systems are evaluated by testing how often they select all or part of a chorus [3]}, or whichever segment is repeated most often [4]}. Recently, [5]} proposed a novel, related objective—to find the emotional highlights of pop songs—and evaluated their system based on whether it captured the choruses, which were assumed to correspond to the highlights, but their system used a neural network trained to detect emotion, not choruses.
i
ea80f309-33f1-4799-a2aa-4d91ff01b390
In music structure analysis, it is assumed that one family of segments corresponds to the chorus, but predicting which one is only rarely attempted. We are aware of three prior systems: [1]}, who assumed a highly restricted template for song structures and used heuristics to predict labels; [2]}, who paired a standard structure analysis system with an HMM trained to label the sections; and [3]}, published very recently, who proposed a hierarchical generative model (with section parts generating chord progressions, and these in turn generating observed feature sequences). This last model benefits from supervision, but still relies on a hand-set strategy of detecting homogeneity and repetitions, based on handcrafted features (chroma and MFCCs).
i
28297f19-c01e-49af-a58e-a58e56c7806e
The lack of attention paid to chorus detection may be due to the difficulty of obtaining sufficient training data. SALAMI [1]} contains 1446 songs, but these come from diverse genres, so it may be difficult to learn a coherent notion of “chorusness” from it. Introduced in 2019, the Harmonix Set [2]} contains 912 songs, 888 with “chorus” sections; it is the most frequent label, with over 3100 choruses altogether, which is 41% more than the “verse” instances. We also have the annotated chorus locations for an internal dataset (denoted as In-House) of 2480 Asian pop songs. We use these three sources to train or evaluate our system. Since the data sources all have different properties, we investigate the cross-dataset performance of our system.
i
bb8cb62f-a8bc-45ec-942c-44f5c9524d41
In addition to the usefulness of detecting choruses for other applications, the annotations of choruses (that we depend on) seem more reliable than for other sections. In SALAMI, we observed that if one annotator perceives a segment starting at time \(t\) , there is a 66% chance that the other annotator placed a boundary at the same time (within 0.5 seconds)—but this probability rises to 78% if the boundary marks the start of a `chorus'. This greater agreement could be the result of choruses having more salient beginnings than other section types [1]}. Therefore, the reliability of the annotations makes a supervised system more feasible.
i
56c77009-906f-4efa-b807-802be992edd1
We have presented a supervised approach to detecting choruses in music audio. In experiments, our systems performed better than several existing ones, even when trained on other datasets. With this promising result, we believe that more types of segment labels, such as verse, bridge and solo, can be detected with supervised learning, and with less dependence on context. The current model is relatively simple: it only considers the local context of audio signals. It could be improved if we use features and techniques to inform it of a greater context, such as structure features [1]}, recurrent architecture and attention modelling [2]}.
d
77ff5b4e-8228-4333-9b33-269502f682f4
For 32-bit numbers, random hashing with good theoretical guarantees can be just as fast as popular alternatives [1]}. In turn, these guarantees ensure the reliability of various algorithms and data structures: frequent-item mining [2]}, count estimation [3]}, [4]}, and hash tables [5]}, [6]}. We want to show that we can also get good theoretical guarantees over larger objects (such as strings) without sacrificing speed. For example, we consider variable-length strings made of 32-bit characters: all data structures can be represented as such strings, up to some padding.
i
468fd830-4cea-47a8-bd45-933b39ef4876
We restrict our attention to hash functions mapping strings to \(L\) -bit integers, that is, integers in \([0,2^L)\) for some positive integer \(L\) . In random hashing, we select a hash function at random from a family [1]}, [2]}. The hash function can be chosen whenever the software is initialized. While random hashing is not yet commonplace, it can have significant security benefits [3]} in a hash table: without randomness, an attacker can more easily exploit the fact that adding \(n\) keys hashing to the same value typically takes quadratic time (\(\Theta (n^2)\) ). For this reason, random hashing was adopted in the Ruby language as of version 1.9 [4]} and in the Perl language as of version 5.8.1.
i
0871b814-ed10-4c5e-a98f-fff221450875
A family of hash functions is \(k\) -wise independent if the hash values of any \(k\)  distinct elements are independent. For example, a family is pairwise independent—or strongly universal—if given any two distinct elements \(s\) and \(s^{\prime }\) , their hash values \(h(s)\) and \(h(s^{\prime })\) are independent: \(P(h(s)=y | h(s^{\prime })=y^{\prime }) = P(h(s)=y)\)
i
ee49a992-f9d2-4530-a1dd-7234e3f6b8a4
for any two hash values \(y,y^{\prime }\) . When a hashing family is not strongly universal, it can still be universal if the probability of a collision is no larger than if it were strongly universal: \(P(h(s)=h(s^{\prime }))\le 1/2^L\) when \(2^L\) is the number of hash values. If the collision probability is merely bounded by some \(\epsilon \) larger than \(1/2^L\) but smaller than 1 (\(P(h(s)=h(s^{\prime }))\le \epsilon <1\) ), we have an almost universal family. However, strong universality might be more desirable than universality or almost universality:
i
5e9325e6-e9be-42ba-9909-3b4277e683e6
We say that a family is uniform if all hash values are equiprobable (\(P(h(s)=y)=1/2^L\) for all \(y\) and \(s\) ): strongly universal families are uniform, but universal or almost universal families may fail to be uniform. To see that universality fails to imply uniformity, consider the family made of the two functions over 1-bit integers (0,1): the identity and a function mapping all values to zero. The probability of a collision between two distinct values is exactly \(1/2\) which ensures universality even though we do not have uniformity since \(P(h(0)=0)=1\) . Moreover, if we have strong universality over \(L\)  bits, then we also have it over any subset of bits. The corresponding result may fail for universal and almost universal families: we might have universality over \(L\)  bits, but fail to have almost universality over some subset of bits. Consider the non-uniform but universal family \(\lbrace h(x)= x\rbrace \) over \(L\) -bit integers: if we keep only the least significant \(L^{\prime }\)  bits (\(0<L^{\prime }<L\) ), universality is lost since \(h(0)\bmod {2^{L^{\prime }}} = h(2^{L^{\prime }}) \bmod {2^{L^{\prime }}}\) .
i
62b99b14-cda7-4593-8d48-2de78a20ce57
There is no need to use slow operations such as modulo operations, divisions or operations in finite fields to have strong universality. In fact, for short strings having few distinct characters, Zobrist hashing requires nothing more than table look-ups and bitwise exclusive-or operations, and it is more than strongly universal (3-wise independent) [1]}, [2]}. Unfortunately, it becomes prohibitive for long strings as it requires the storage of \(n c\)  random numbers where \(n\) is the maximal length of a string and \(c\) is the number of distinct characters.
i
8e0ab08f-54e1-4a89-8c45-cee686180b5a
A more practical approach to strong universality is Multilinear hashing (§ ). Unfortunately, it normally requires that the computations be executed in a finite field. Some processors have instructions for finite fields (§ ), or such arithmetic can be emulated with a software library (§ REF ). However, if we are willing to double the number of random bits, we can implement Multilinear hashing using regular integer arithmetic. Indeed, using an idea from Dietzfelbinger [1]}, we implement it using only one multiplication and one addition per character (§ ). We further attempt to speed it up by reducing the number of multiplications by half. We believe that these families are the fastest strongly universal hashing families on current computers. We evaluate these hash families experimentally (§ ):
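As an illustration of the integer-arithmetic variant, here is a small Python sketch (ours; the word sizes and the final shift follow the multiply-shift idea credited to Dietzfelbinger and may differ from the exact scheme in the paper): random 64-bit multipliers, one multiplication and one addition per 32-bit character, with the most significant 32 bits of the accumulator kept as the hash value.

import random

def random_multilinear_key(max_len, seed=0, word_bits=64):
    """Random 64-bit multipliers m_0..m_n (one more than the longest string)."""
    rng = random.Random(seed)
    return [rng.getrandbits(word_bits) for _ in range(max_len + 1)]

def multilinear_hash(chars, key, word_bits=64, out_bits=32):
    """Sketch of Multilinear hashing over 32-bit characters: accumulate
    m_0 + sum_i m_i * c_i modulo 2^64 (one multiplication and one addition
    per character) and keep the most significant 32 bits as the hash value."""
    mask = (1 << word_bits) - 1
    acc = key[0]
    for i, c in enumerate(chars):
        acc = (acc + key[i + 1] * c) & mask
    return acc >> (word_bits - out_bits)

# Toy usage: hash a short string of 32-bit characters.
key = random_multilinear_key(max_len=1024)
h = multilinear_hash([0x48656C6C, 0x6F2C2077, 0x6F726C64], key)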
i
679ad974-73e0-41ad-bbc3-d605d021771d
Using fewer multiplications has often improved performance, especially on low-power processors [1]}. Yet, according to our experiments—which include low-power processors—trading away multiplications fails to improve (and may even degrade) performance on several processors. However, reducing the number of multiplications is beneficial on other processors (e.g., AMD), even server-class processors. We also find that strongly universal hashing may be computationally inexpensive compared to common hashing functions, as long as we ignore the overhead of generating long strings of random numbers. In effect—if memory is abundant compared to the length of the strings—the strongly universal Multilinear family is faster than many of the commonly used alternatives. We also consider hash functions designed for hardware-supported carry-less multiplications (§ ). This support should drastically improve the speed of some operations over binary finite fields (\(GF(2^L)\) ). Unfortunately, we find that the carry-less hash functions fail to be competitive (§ REF ).
i
5424eee3-937f-4d92-856d-07ec550ddefe
It is best to implement Multilinear with loop unrolling. With this optimization, Multilinear is just as fast (on Intel processors) as Multilinear-HM, even though it has twice the number of multiplications. In general, processor microarchitectural differences are important in determining which method is fastest. (§ REF ) In the absence of processor support for carry-less multiplication (see § ), hashing using Multilinear over binary finite fields is an order of magnitude slower than Multilinear even when using a highly optimized library. (§ REF ) Even with hardware support for carry-less multiplication, hashing using Multilinear over binary finite fields remains nearly an order of magnitude slower than Multilinear. (§ REF ) Given a 64-bit processor, it is noticeably faster to use a word size of 64 bits even though a larger word size (128 bits) uses fewer random bits (33% less). Use of multiprecision arithmetic libraries can further reduce the overhead from accessing random bits, but they are also not competitive with respect to speed, though they can halve the number of required random bits. (§ REF ) Multilinear is generally faster than popular string-hashing algorithms. (§ REF )
m
e32a2477-5113-4f0f-a560-05f2954707c2
We evaluated the hashing functions on the platforms shown in Table REF . Our software is freely available online [1]}. For Intel and AMD processors, we used the processor's time stamp counter (rdtsc instruction [2]}) to estimate the number of cycles required to hash each byte. Unfortunately, the ARM instruction set does not provide access to such a counter. Hence, for ARM processors (Apple A4 and Nvidia Tegra), we estimated the number of cycles required by dividing the wall-clock time by the documented processor clock rate (1 GHz). <TABLE>
m
76caedc9-2f9a-4ff7-9347-514b171e7d70
For the 64-bit machines, 64-bit executables were produced and all operations were executed using 64-bit arithmetic except where noted. All timings were repeated three times. For the 32-bit processors, 32-bit operations were used to process 16-bit strings. Therefore, results between 32- and 64-bit processors are not directly comparable. Good optimization flags were found by a trial-and-error process. We note that using profile-guided optimizations did not improve this code any more than simply enabling loop unrolling (-funroll-loops). With (only) versions 4.4 and higher of GCC, it was important for speed to forbid use of SSE2 instructions when compiling Multilinear and Multilinear-HM.
m
e05b4012-8100-4f13-bb5f-4f46dc9fc756
We found that the speed is insensitive to the content of the string: in our tests we hashed randomly generated strings. We reuse the same string for all tests. Unless otherwise specified, we hash randomly generated 32-bit strings of 1024 characters.
m
fe040672-09f0-4c4d-9082-1ecbc4946224
Over moderately long 32-bit strings (\(\approx \) 1024 characters), current desktop processors can achieve strongly universal hashing with no more than 0.5 CPU cycle per byte, and sometimes as little as 0.2 CPU cycle per byte. Meanwhile, at least twice as many cycles are required for Rabin-Karp hashing even though it is not even universal.
d
49ed3cbf-f85f-4a69-8821-3e001b99b480
While it uses half the number of multiplications, we have found that Multilinear-HM is often no faster than Multilinear on Intel processors. Clearly, Intel's pipelining architecture has some benefits.
d
96f96fc4-ded8-4630-b2d0-f60c603e1f22
For AMD processors, Multilinear-HM was faster (\(\approx \) 33%), as expected because it uses fewer multiplications. Yet another alternative, Multilinear (2-by-2), was slightly faster (\(\approx \) 15%) for 32-bit hashing on the mobile ARM-based processors, even though it requires twice as many multiplications as Multilinear-HM. These mobile ARM-based processors also computed 32-bit Rabin-Karp hashing with fewer cycles per byte than many desktop processors. We believe that this is related to the presence of a multiply-accumulate instruction in the ARM instruction set.
d
69fd8354-3d64-4fdc-8b6f-10f002da087a
Despite the impressive accuracy of deep convolutional neural networks (CNNs), their computational cost and memory consumption have made them difficult to apply on resource-limited devices. Methods including pruning and quantization have been widely studied to reduce model sizes [1]}, [2]}, [3]}, [4]}. Later model compression methods tend to preserve the model structure for better hardware utilization [5]} [6]} [7]}. One of the structure-preserving model compression approaches is low-rank approximation [8]}, [9]}, [10]}, [11]}, which has the potential to reveal latent relations among underlying structures and to achieve a better compression ratio.
i
37a8e4b9-fad7-48b2-a146-8bcfdac9de5b
However, existing model compression methods for convolution layers based on low-rank approximation face many computational challenges [1]}, [2]}, which have hindered this direction. First, existing methods usually perform the low-rank approximation layer by layer, so it is hard to achieve global optimization for model compression. When the number of layers increases, those methods suffer from a large accuracy loss and long processing times. Second, the compressed model usually requires a lot of retraining to recover the original accuracy, which makes it hard to compress a large model. Last, the rank of a higher-order tensor is not well defined [3]}. For model compression, the situation becomes even more complex, because the elements of the network change over the training epochs.
i
75eb7c23-3c4d-4040-8221-7f41046a6f0d
In this paper, we propose a model reduction method that compresses a pre-trained network by applying low-rank approximation to its convolution layers. The major difference distinguishing our work from previous ones is that our method considers the global optimization of all layers during model compression. In addition, unlike existing methods, which trim the network after tensor decomposition and re-train the model to retain the original accuracy, our method factorizes the network first and attaches a regularization gate to each factor of the decomposed tensor. After that, our method trains the factorized model to attain a higher accuracy than the original model. The pruning process is then based on the values of the regularization gates, removing the factors with small gate values. Last, to avoid the difficulty of selecting a proper rank, we designed a new regularization function, called the funnel function, which better separates the desired and undesired factors.
i
41f355f7-cadc-4121-9189-1e866b3365e1
We have conducted experiments to evaluate the effectiveness of our method. The results show that our method can remove more model parameters than other tensor compression methods for various models. For ResNet18 on the ImageNet dataset, our method achieves more than a two-times speed-up in terms of GMAC with similar Top-1 accuracy. Figure REF compares the results of our algorithm with other methods in terms of compression ratio (speed-up of GMAC) and Top-1 accuracy. Our method outperforms most of the methods except the stable low-rank method [1]}, whose values are reported from their paper.
i
7ced8139-f7e4-4fd9-858f-220cc3f80781
The rest of this paper is organized as follows. Section introduces related work for low-rank approximation and rank selection. Section illustrates our method. Section presents the experimental results. The conclusion and the future work are given in the last Section. <FIGURE>
i
18431ab3-c2bc-4f1b-88b8-b432cde40c17
CNNs mainly consist of convolution layers and fully-connected layers, where the convolution layers dominate the computational cost of inference. Misha Denil [1]} showed that most model parameters can be predicted from a small subset of them, which indicates the possibility of removing redundant model parameters. Emily Denton [2]} applied truncated singular value decomposition (SVD) to the weight matrices of fully-connected layers, which does not cause a significant drop in accuracy. After that, various methods were proposed [3]} [4]} [5]} [6]}, showing better compression capability than SVD.
w
b69f40d3-d977-4fff-86dd-722e48e69e05
Several methods based on low-rank decomposition of the convolution kernel tensor have also been considered to accelerate convolution operations. A tensor is defined as a multi-dimensional array. Canonical Polyadic Decomposition (CPD) [1]} [2]} [3]} and Tucker Decomposition (TKD) [4]} [5]} [6]} are the two most popular tensor decomposition methods. Vadim Lebedev [7]} used CPD to speed up CNNs; due to the instability of CPD, only results on a single layer are reported. This problem of CPD has been addressed in [8]}, in which a first-order perturbation is used to stabilize CPD. Yong-Deok Kim [9]} used TKD to speed up CNNs, where the ranks of the convolution kernels are determined by the solution of Variational Bayesian Matrix Factorization (VBMF) [10]} on the tensor unfolded along different modes. However, the rank determined by VBMF is too aggressive to recover the original accuracy. In MUSCO [11]}, the authors proposed the concepts of weakened rank and extreme rank, where the extreme rank is determined by VBMF; the ranks of the network are then gradually reduced from the weakened rank to the extreme rank. In [8]}, the authors presented a stabilization method for CPD to achieve a better compression ratio. However, it still uses a brute-force method for rank selection.
w
73957eea-8719-4961-ae46-ffb543828206
In this paper, we employed TKD for model compression, since it generally performs better than CPD for data compression [1]}. As an extension of SVD, TKD computes orthonormal bases for the different modes of a tensor. The core tensor obtained by TKD can be seen as a compressed form of the original tensor. However, the TKD of a tensor is not unique, and there are various methods to compute the TKD of a given tensor \(T\) [2]} [3]} [4]} [5]}. Here we adopt HOSVD [6]} to calculate the TKD of a tensor \(T\) , where the factor matrix of each mode is formed by the \(R^n\) leading left singular vectors of the mode-n unfolding of \(T\) . The value of \(R^n\) is the approximation rank of the mode-n unfolding of \(T\) [1]}. When \(R^n\) is smaller than the rank of the mode-n unfolding of \(T\) for at least one mode, the decomposition is called a truncated HOSVD, which is not optimal in terms of the norm difference.
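A minimal NumPy sketch of the truncated HOSVD just described (illustrative, not the authors' implementation): each factor matrix is taken from the leading left singular vectors of the corresponding mode-n unfolding, and the core tensor is obtained by multiplying the original tensor by the transposed factors along every mode.

import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M (shape (r, T.shape[mode])) along `mode`."""
    Tm = np.moveaxis(T, mode, 0)
    out = np.tensordot(M, Tm, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def truncated_hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from the leading left singular vectors of
    each mode-n unfolding; core = T multiplied by the transposed factors on every mode."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return core, factors

# Example: compress a 4-way convolution kernel of shape (out, in, kh, kw).
W = np.random.randn(64, 32, 3, 3)
core, factors = truncated_hosvd(W, ranks=(16, 8, 3, 3))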
Other tensor decomposition methods, such as block tensor decomposition [1]} and Tensor Train decomposition [2]}, are not considered. Block tensor decomposition requires selecting a proper block size, which may complicate the problem, and Tensor Train decomposition is more suitable for higher-order tensors, whereas the tensors in CNNs are usually of order at most four.
Conventional model compression methods based on low-rank approximation usually take an iterative approach to reduce the model size [1]}: they decompose one or a few layers, prune the factors to a lower rank, and then retrain the model to recover accuracy, repeating this process until convergence. Such iterative methods have several drawbacks. First, the pruning is based on the ranks of the original model, whose parameters and their values change after decomposition and retraining. Second, performing low-rank approximation layer by layer overlooks the interconnections among different layers, making global optimization hard to achieve.
Our compression flow consists of four steps. (1) Decomposition: decompose each layer of the model with full rank. (2) Training: train the decomposed model to obtain higher accuracy. (3) Compression: apply a regularization technique to measure the importance of each decomposed factor and remove the redundant ones (a minimal sketch of this step is given below). (4) Fine-tuning: fine-tune the whole model to achieve higher accuracy and lower computation cost.
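Below is a minimal, hypothetical sketch of the Compression step. It assumes that a factor column's importance is approximated by its L2 norm and that columns below a pruning threshold are removed, which lowers the Tucker rank of the corresponding mode; it does not reproduce the funnel-function regularization used in our method, and the threshold is chosen only for this toy example.

```python
import torch

def prune_factor(factor: torch.Tensor, threshold: float) -> torch.Tensor:
    """Keep only the columns of a factor matrix whose L2 norm exceeds `threshold`.
    In a full pipeline, the matching slices of the core tensor would be removed too."""
    col_norms = factor.norm(dim=0)        # importance proxy per rank-1 component
    keep = col_norms > threshold
    return factor[:, keep]

# Example: a mode-0 factor of a decomposed layer (64 output channels, rank 48),
# with some columns driven toward zero during training.
factor = torch.randn(64, 48) * torch.linspace(1.0, 0.0, 48)
pruned = prune_factor(factor, threshold=4.0)
print(factor.shape, "->", pruned.shape)   # rank of this mode is reduced
```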
Our experiments use the ImageNet-2012 dataset and run on a single NVIDIA Tesla V100 GPU. The platform is PyTorch version 20.08 with Python version 3.3. We evaluated our method on ResNet18, ResNet50, and DenseNet121. Unless specified otherwise, the tensor decomposition used is TKD, the learning rate used for decomposition is \(10^{-3}\), and the parameter \(c\) used in the funnel function decays exponentially from 1 to \(10^{-4}\). The pruning threshold is \(10^{-3}\) for ResNet18 and \(10^{-4}\) for ResNet50 and DenseNet121.
Accuracy is measured as Top-1 accuracy. Another performance measure is the theoretical speed-up, defined as \(\mbox{Speed up}=\frac{\mbox{GMAC of pre-trained model}}{\mbox{GMAC of compressed model}}.\)
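As a hypothetical numerical illustration (these numbers are not from our experiments): a pre-trained model requiring \(2.0\) GMAC that is compressed to \(0.9\) GMAC yields \(\mbox{Speed up} = 2.0/0.9 \approx 2.2\).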
In this paper, we presented a model compression method based on low-rank approximation. Unlike previous methods, our method does not compress the network layer by layer, but uses an optimization method to determine the pruning strategy for all layers simultaneously. Moreover, we proposed a new compression flow, which trains the decomposed model before pruning. Last, a new regularization function, called the funnel function, was proposed to make rank selection easier. Experiments on various models show the effectiveness of the proposed method: compared to other existing methods, our algorithm achieves a good speed-up with the smallest accuracy drop.
There are many directions to explore in the future. First, we could combine our method with the stabilization approach [1]}, which still uses a brute-force method for rank selection. Second, the proposed method could be extended to other types of layers that can be formulated as tensors, or to other types of networks, such as Transformers [2]} or GPT-3 [3]}. Third, our experiments demonstrated the usefulness of the funnel function, but its properties require further study, for example how to adjust the parameter \(c\) adaptively. Last, a theoretical analysis of the trade-off between model size and model accuracy is an important research direction, since it would give us solid ground for developing better model compression methods.
Despite increasing awareness of accessibility in the software industry over the past few years, many mainstream software products fall short of meeting accessibility criteria [1]}, [2]}, [3]}, [4]}, [5]}. Software professionals attribute this gap in implementing accessibility to a lack of relevant knowledge and skills in computing education [6]}. According to computing (CS) faculty, the main barriers to teaching accessibility include the lack of clear learning objectives about accessibility and their own lack of knowledge about accessibility [7]}, [8]}. Moreover, most instructors who teach accessibility have a background in human-computer interaction (HCI) and related fields and teach these topics in specialized electives [7]}. This indicates that other instructors may find it challenging to build their expertise and incorporate accessibility topics into “mainstream” CS courses. Our work contributes to the literature that aims to bridge this gap by developing relevant course material, resources, and assignments that other instructors can readily integrate into their courses without much prior knowledge about accessibility and without diluting the “core” learning objectives.
We report on our experience integrating accessibility topics into a mobile app development elective for junior/senior undergraduate CS students. All resources, including slides, video lectures, assignments, starter code, and exam questions, are available online at https://swaroopjoshi.in/project/sugamyata/. In the rest of this paper, we describe the course (Sec. ), the instruments we used to evaluate the effects of integrating these topics and our findings (Sec. ), and discuss the results (Sec. ) and directions for future work (Sec. ).
Educators have reported on teaching accessibility in special-topics courses such as assistive technology [1]} and universal access [2]}, and on developing specialized modules for graduate programs [3]}. Others have integrated accessibility topics into existing courses such as web development [4]}, [5]}, [6]}, artificial intelligence [7]}, software engineering [8]}, [9]}, [10]}, HCI [11]}, [12]}, and introduction to programming (CS1/CS2) [13]}, [10]}, [15]}. Such courses mainly focus on teaching (i) accessibility awareness, (ii) technical knowledge such as tools for accessibility testing, (iii) empathy, and (iv) potential endeavours, using various instructional methods such as in-class activities, projects, lectures, assignments, videos, simulated disabilities, interaction with people with disabilities, guest lectures, and research [16]}. Other instructor resources include materials by the AccessComputing initiative [17]}, accessibility guidelines such as WCAG 2.0 [18]}, the Accessibility Learning Labs [19]}, the Mobile inclusive learning kit [20]}, and games for accessibility awareness [21]}.
Our work differs from the existing literature in that, to our knowledge, this is the first report on a mobile app development course with accessibility as an underlying theme [1]}, with multiple assignments and evaluative components assessing the accessibility knowledge of students throughout the semester. Another contribution is the Inclusive Thinking Questionnaire, which can be used in other contexts to assess whether the respondents consider various inclusivity criteria when designing software.
This work reports on our experience integrating accessibility into a regular mobile app development course that uses Java-based Android programming. Our findings demonstrate that: (a) students' awareness of considering accessibility when designing, developing, and testing software increased after completing the course; (b) students acquired technical knowledge about accessibility guidelines, tools for testing accessibility features, and best practices for implementing accessibility in Android apps; and (c) empathy-creating exercises, such as interacting with a mobile device blindfolded using a screen reader, increased empathy for the challenges faced by persons with disabilities when using inaccessible software. Other educators can teach accessibility topics in similar courses without diluting their core learning objectives and without much prior knowledge about accessibility. All course material and resources are publicly available at https://swaroopjoshi.in/project/sugamyata/. This work is supported by Birla Institute of Technology and Science, Pilani under grants BPGC/RIG/2020-21/04-2021/02 and GOA/ACG/2021-2022/Nov/05.
Topic segmentation is a fundamental NLP task whose goal is to separate textual documents into coherent segments (consisting of one or more sentences), following the document's underlying topical structure. The structural knowledge obtained from topic segmentation has been shown to play a vital role in key downstream NLP tasks, such as document summarization [1]}, [2]}, [3]}, question answering [4]}, [5]}, and dialogue modeling [6]}, [7]}. The aim of topic segmentation makes it tightly connected to research areas that seek to understand the latent structure of long and potentially complex text. <FIGURE>
Specifically, understanding the semantic and pragmatic underpinnings of a document can arguably support the task of separating continuous text into topical segments. To this end, discourse analysis and discourse parsing provide the means to understand and infer the semantic and pragmatic relationships underlying complete documents, which are well aligned with local text coherence and highly correlated with inter-sentential topical consistency, as shown in [1]} and [2]}. Among the variety of linguistic theories proposed in the past, such as the Rhetorical Structure Theory (RST) [3]}, the lexicalized discourse framework [4]} (underlying PDTB), and the Segmented Discourse Representation Theory (SDRT) [5]}, [6]}, we follow the RST framework in this work (1) because we focus on monologue text (as opposed to dialogue frameworks, such as SDRT) and (2) because RST postulates complete discourse trees spanning whole documents, directly aligned with the topical structure of complete documents [7]}.
We further motivate the synergistic relationship between topic segmentation and discourse analysis/parsing in Figure REF , which shows anecdotal evidence of the alignment between a document's topical structure and the corresponding RST-style discourse dependency graph. Starting from a sequence of sentences, topic segmentation addresses the problem of splitting the given Wikipedia article into an ordered set of topically coherent fragments (here: T1, T2, and T3) by predicting topical boundaries. As shown in the example, the document's discourse tree is indicative of its topical structure, as discourse dependencies occur considerably more often within a topic segment than across topic segments.
Given its significant influence on a variety of real-world tasks, topic segmentation is an active research area in NLP. Modern neural methods for monologue topic segmentation formulate the task as a sentence-level sequence labeling problem, trained and evaluated on the large-scale Wikipedia dataset [1]}, [2]}, [3]}, [4]}. Wikipedia articles are well suited for topic segmentation, since they provide natural section marks that can reasonably be used as ground-truth segment boundaries [5]}, [6]}, superseding previously proposed unsupervised methods [7]}, [8]}, [9]}, [10]}. Despite the significant improvements achieved by neural supervised topic segmentation models, it remains unclear whether these segmenters effectively learn to cluster sentences into topically coherent pieces based on (document-level) topical consistency, or merely exploit superficial patterns (e.g., simple linguistic cues) in the training domain.
To address this challenge, we propose a more discourse-aware neural topic segmentation model, injecting above-sentence discourse structures into a basic topic segmenter to encourage the model to base its topic boundary predictions more explicitly on the topical consistency between sentences. More specifically, we exploit a discourse dependency parser pre-trained on out-of-domain data to induce inter-sentential discourse dependency trees. We then convert each dependency tree into a directed discourse graph with sentences as nodes and discourse dependencies as edges. Given the generated discourse graph, a Graph Attention Network (GAT) [1]} encodes sentences as discourse-contextualized representations by aggregating information from neighboring sentence nodes in the graph. Finally, the discourse-infused sentence representations are concatenated with standard encodings for segment boundary prediction.
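To illustrate this pipeline, here is a minimal sketch, not our actual implementation, of building the discourse graph and encoding sentences with a GAT layer via torch_geometric; the sentence dimension, the hard-coded dependency edges, and the layer sizes are assumptions made purely for illustration.

```python
import torch
from torch_geometric.nn import GATConv

SENT_DIM = 768                                  # assumed sentence-encoder dimension
num_sents = 6
sent_emb = torch.randn(num_sents, SENT_DIM)     # standard sentence encodings

# Inter-sentential discourse dependencies (head -> dependent) from an external
# parser; hard-coded here for illustration only.
dep_edges = [(0, 1), (0, 2), (2, 3), (3, 4), (3, 5)]
edge_index = torch.tensor(dep_edges, dtype=torch.long).t()   # shape [2, E]

gat = GATConv(in_channels=SENT_DIM, out_channels=128, heads=4, concat=True)
disc_emb = gat(sent_emb, edge_index)            # discourse-contextualized representations

# Concatenate with the standard encodings before boundary prediction.
boundary_features = torch.cat([sent_emb, disc_emb], dim=-1)
print(boundary_features.shape)                  # [6, 768 + 128*4]
```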
In our empirical study on English evaluation datasets, we show that: (\(i\) ) injecting discourse structures can substantially improve the performance of the basic neural topic segmentation model on three datasets; (\(ii\) ) our discourse-enhanced topic segmenter is more robust than the basic neural model in settings requiring domain transfer, showing superior performance on four challenging real-world test sets and confirming its improved domain independence; (\(iii\) ) even though our proposal has lower accuracy than a state-of-the-art segmenter sharing the same basic architecture, it achieves significantly better efficiency, measured by parameter count and by training and inference speed, which makes it potentially more favorable in real-world use.
As shown in Figure REF , our proposed discourse-aware neural topic segmentation model comprises two components, the Hierarchical Topic Segmenter and Discourse Graph Modeling, highlighted in green and red, respectively. Discourse Graph Modeling further consists of a Discourse Graph Construction and a Graph Modeling component.
To quantitatively evaluate the effectiveness, generality, and efficiency of our proposal, we conduct three sets of experiments comparing our topic segmentation approach against a variety of baselines and previous models: we assess Intra-Domain Segment Inference Performance and Domain Transfer Segment Inference Performance, and conduct an additional Efficiency Analysis.
In this paper, we present a neural topic segmentation model that injects above-sentence discourse dependency structures inferred by a state-of-the-art discourse dependency parser. Different from previously proposed methods, our segmenter leverages the discourse signal by encoding the topical consistency between sentences from a more global and interpretable point of view. Experiments in multiple settings (intra-domain, domain transfer, and efficiency comparison) show that our system achieves performance comparable to one of the current top-performing topic segmenters, with a much smaller increase in model size and much less speed degradation.
In the near future, we plan to investigate the synergy between topic segmentation and discourse parsing more comprehensively, by incorporating the types of inter-sentential rhetorical relations and analyzing whether and how this discourse knowledge can enhance supervised topic segmentation frameworks. In the long run, we intend to explore the possibility of using discourse parsing to benefit segment topic labeling, another important task usually coupled with topic segmentation to provide coarse-grained structural information for documents. In particular, we believe discourse parsing can enhance the key-phrase extraction step in segment topic labeling, given the significant improvement it brings to the related task of named entity recognition (NER) [1]}.
Transfer learning (TL) refers to pre-training a model on a rich dataset and then fine-tuning it, or using feature-based transfer, for a domain-specific task [1]}, [2]}, [3]}. The role of TL in NLP has grown significantly because target domains of interest often lack sufficiently large datasets. In such cases, transferring knowledge from other domains is known to mitigate overfitting and yield strong model performance at prediction time [4]}, [5]}.
Earlier studies using traditional kernel-based and feature-based models proposed various methods for domain adaptation, such as structural correspondence learning and instance weighting [1]}, [2]}.
In recent years, the use of TL for deep neural networks has increased significantly because of their greater ability to learn non-linear features compared with traditional methods [1]}, [2]}. This ability, however, makes such networks prone to overfitting, which is where TL becomes important: thanks to their incremental learning nature, neural networks can be initialized with transferred knowledge rather than trained from scratch.
Alongside all the benefits that TL brings, many works have shown that it can hurt performance compared with purely supervised training on in-target data. This negative impact is widely termed negative transfer. It is generally observed when knowledge is transferred between two weakly related datasets, for example when an English part-of-speech (POS) tagger [1]}, [2]} is applied to a Hindi corpus. However, it remains unclear when and how TL should be combined with different in-target data to obtain the best model performance.
In this study, we investigate how TL affects the performance of popular pre-trained models such as BERT, RoBERTa, and XLNet when knowledge is transferred across different domains and languages on three NLP tasks: text classification, sentiment analysis, and sentence similarity.
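As a minimal, hypothetical sketch of this setup, the snippet below fine-tunes a pre-trained encoder (BERT via Hugging Face transformers) on a toy binary text-classification batch; the example sentences, model choice, and hyperparameters are illustrative assumptions rather than our actual experimental configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"   # could equally be roberta-base or xlnet-base-cased
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["The movie was wonderful.", "A complete waste of time."]   # toy target-domain data
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
optimizer.zero_grad()
out = model(**batch, labels=labels)   # pre-trained knowledge is adapted to the target task
out.loss.backward()
optimizer.step()
print(float(out.loss))
```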
Several works have shown the effectiveness of TL across various domains, ranging from image segmentation to several NLP tasks [1]}, [2]}, [3]}. Felbo et al. modeled emotion, sentiment, and sarcasm in text from various domains by transferring knowledge learned from a huge data source to models making predictions on the target domains [4]}. They observed improved predictions compared with model performance when training only on the target dataset.