_id: stringlengths 36–36
text: stringlengths 200–328k
label: stringclasses (5 values)
84ea59ab-1d3e-4ce3-8524-77fee97f4d43
It is tempting to analyze complex scenarios through visualizations of observational data that contrast positive scenarios (red dots) with negative ones (the absence of red dots). However, observational data consists mostly of positive examples, and if we cannot draw insights from failures or negative scenarios, our analysis may be compromised. <FIGURE>
i
6337ce2d-3317-434b-8a21-cc5200ebe19f
By observational data, we refer in this paper to data collected over time from a reactive system [1]} - a system that responds to changing actions and conditions, influenced by logical rules and structures. The collected data consists of sequences of inputs and outputs of the system, or sometimes just the outputs, which we refer to as the logs of the system. We assume an associated formal mathematical model, sometimes referred to simply as the model, and we assume that this formal model encompasses all of the rules and constraints of the reactive system.
i
b12706b1-9d54-42a5-bf89-a11ffc6ada05
We consider that both the real reactive system and the model are capable of generating log data. However, the formal model is, by construction, a conceptual representation of the real world, and the real world may impose further restrictions on how the reactive system behaves over time. For example, in the real world there may be limitations on what inputs a reactive system can see in the future based on its previous outputs, as in the case of intelligent agents [1]}.
i
bfc4c88d-9160-485e-80e3-65b7845727d3
In numerous Machine Learning and Deep Learning based studies of log data [1]}, [2]}, [3]}, [4]}, it is common for a user who wants to better understand the underlying behavior of the system, or to automatically analyze its input-output responses, to seek actionable insights from the collected logs. However, by construction, these logs give rise to survivorship bias [5]}, simply because they reflect observable behaviors of the reactive system over a limited time and a limited set of scenarios, containing only normal cases or at most a very small number of rare cases [6]}, [7]}. Representative examples where validation and analysis based on observational logs can lead to catastrophic consequences are the Pentium bug [8]} and the crash of flight AF-447 [9]}.
i
bc80541a-8910-4861-a925-dae677aa08c0
When analyzing logs, several techniques have been used in the past to reduce the effect of this bias in observable data, such as constrained random simulation [1]} or the generation of synthetic training data [2]}. However, it eventually becomes too hard, if not impossible, to collect data on certain rare scenarios, and even creating synthetic data may become a complex task [3]}.
i
479871e4-fa17-47df-9cef-c0bd1cdb82ac
Logs can consist of vast amounts of data, and it is usually useful to determine whether anomalies have been seen in the logs, typically as a post-processing step [1]} or during data collection [2]}. Machine learning models are useful for summarizing the data and finding anomalies in a probabilistic way, but because the data is biased at generation time, a user may instead want to determine whether a scenario can ever occur. Although this question may not be suitable for a machine learning or deep learning model, it can be answered by a symbolic proof system based on a formal model of the reactive system [3]}, [4]}, [5]}, [6]}.
i
49b9bf16-ee4a-4340-b6d5-a6fe9af6e24d
This paper presents Log2NS (Log-to-neural-symbolic), a framework that enables users to reason about logs from a reactive system by using machine learning and formal techniques. The solution is best represented in Fig. REF, a revisit of the figure originally presented in [1]}, and it can be seen as an instance of a neural-symbolic system [2]}, [3]}. <FIGURE>
i
f79ae00f-c2b4-4111-ba32-7f34e0502a4b
The framework first analyzes logs by computing embedding vectors for the entities in the logs. It then combines these vectors into representations of log entries and uses these representations to create visualizations. A powerful query language enables users to query the entire system for positive examples, by statically searching the logs or by computing correlations on the vector embeddings, and it uses formal engines to generate negative examples, or even positive examples when the query is only partially expressed or expressed as a complex set of constraints. In this way, the biased insights from observational data are complemented by queries to the formal engine, providing users with comprehensive insights from which to draw conclusions. The rest of the paper is organized as follows: Section 2 outlines related work. Section 3 introduces the necessary background on representation learning using embeddings and its applications, as well as formal methods for security policy. In Section 4, we present our approach. Finally, in Section 5, we conclude the paper by presenting our experimental evaluation on real data and outlining areas for future research.
i
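For illustration, the following is a minimal sketch of the embedding-combination and similarity-query step described above, assuming entity vectors have already been trained (for example with a word2vec-style model); the entity names, the mean-pooling choice and the cosine-similarity query are illustrative assumptions rather than the exact Log2NS implementation.

```python
import numpy as np

# Hypothetical pre-trained entity embeddings (e.g., from a word2vec-style model).
entity_vectors = {
    "src=10.0.0.5": np.array([0.12, -0.40, 0.33]),
    "dst_port=443": np.array([0.80, 0.10, -0.05]),
    "action=allow": np.array([-0.21, 0.65, 0.14]),
}

def embed_log_entry(entities):
    """Combine entity vectors into a single log-entry vector (here: mean pooling)."""
    return np.mean([entity_vectors[e] for e in entities], axis=0)

def query_similar(entry_vec, corpus_vecs, top_k=5):
    """Return indices of the most similar log entries by cosine similarity."""
    corpus = np.stack(corpus_vecs)
    sims = corpus @ entry_vec / (
        np.linalg.norm(corpus, axis=1) * np.linalg.norm(entry_vec) + 1e-12)
    return np.argsort(-sims)[:top_k]
```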
b083f644-a7ea-42ac-8f08-47d83f753379
Machine learning has proved very powerful but suffers from limited interpretability. Symbolic AI was the main direction taken by AI research for a long time. Neuro-symbolic AI is a relatively new approach that combines machine learning and symbolic AI; it can be seen as largely working within the framework of symbolic AI, using machine learning to infer and build the formal model upon which symbolic AI can reason [1]}. Our approach differs from neuro-symbolic AI in that we begin with a formal model and its observational data and move forward from there. Rather than building a formal model from the ML model, our approach treats the ML model and the formal model as collaborative players in a framework that accommodates complex real-world rule sets to enhance our understanding of the data flows they govern.
w
b3aac1d4-bd61-4d90-a6f7-b660e2882a41
The formal model studied in this paper begins with firewall configuration components and policy rules. These rules are built by people over time; they grow and accrete as the concerns of the underlying networks change. Maintenance of policy sets is challenging given network complexity and the changing landscape of threats. Questions of high importance include: do these rules keep my network secure? Is a particular rule letting in unwanted traffic? Are my rules too strict, disallowing necessary traffic? Are the rules creating bottlenecks? Have route or network changes rendered my rules obsolete? Bodei et al. have developed methods to assess firewall configurations using a formal model [1]}. In Log2NS, a machine-learned model of the observational data and a formal model of the firewall configuration work together to increase understanding of possible network problems and of potential flaws or inconsistencies in the firewall configuration.
w
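To illustrate the kind of question a formal engine can answer here, the sketch below encodes two hypothetical firewall rules with the z3-solver Python bindings and asks whether any packet from an untrusted subnet can still be accepted; the rule set, the integer field encoding and the variable names are assumptions made for this example, not the actual configuration model used in Log2NS.

```python
from z3 import Int, Solver, And, Or, sat

# Packet fields, encoded as integers (source network as a /24 prefix for brevity).
src_net, dst_port = Int("src_net"), Int("dst_port")

# Hypothetical policy: rule 1 allows the trusted subnet 10 on any port,
# rule 2 allows any source on port 443.
rule1 = And(src_net == 10, dst_port >= 0, dst_port <= 65535)
rule2 = dst_port == 443
accepted = Or(rule1, rule2)

# Query: can traffic from an untrusted subnet (anything other than 10) be accepted?
s = Solver()
s.add(accepted, src_net != 10)
if s.check() == sat:
    print("Counterexample (unwanted traffic accepted):", s.model())
else:
    print("No packet outside subnet 10 is ever accepted.")
```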
3e44a478-3f79-41c4-956e-7f9fab253cae
We demonstrate the Log2NS framework by building an ML model from the observational data and a symbolic model from the firewall configuration. We show the effectiveness of our approach in analyzing access patterns and the associated security rules.
m
12e21d5f-77e6-4286-ac15-07042a47e6f4
In this paper, we addressed the problem of enhancing insights from observational log data by using formal engines. We showed use cases where logs can suffer from survivorship bias, and where, as a consequence, insights derived from them will be inherently biased if the analysis is not complemented by other methods.
d
589bc6ed-c3cb-45ca-b252-00ed04d83fbb
We introduced Log2NS, a framework that enables users to reason about logs and to understand a complex system by applying deep learning techniques to those logs. We captured system behavior using embeddings of log entries, which were later combined, and clustering techniques were used to surface interesting scenarios. Because log data is biased, we enhanced the logs using formal technology: formal engines and a constraint language that accepts partially specified constraints and negation enable a user to query the framework to understand rare conditions or negative examples. We showed how the framework was used to extract insights in a network security environment, where logs were obtained from a firewall network and its security configuration. A corresponding formal model was used to determine whether traffic could be accepted or denied and to explore optimizations of the security rules.
d
ade002cd-12d4-45b9-b595-b00b4d969ce8
Morphology is a powerful tool for languages to form new words out of existing ones through inflection, derivation and compounding. It is also a compact way of packing a whole lot of information into a single word such as in the case of the Finnish word hatussanikinko (in my hat as well?). This complexity, however, poses challenges for NLP systems, and in the work concerning endangered languages, morphology is one of the first NLP problems people address.
i
6581f8c6-2c79-4748-89d3-4e107168d503
The GiellaLT infrastructure [1]} has HFST-based [2]} finite-state transducers (FSTs) for several morphologically rich (and mostly Uralic) languages. These FSTs are capable of lemmatization, morphological analysis and morphological generation of different words.
i
351f1723-7f51-4c1b-a34f-55b02fdb7a43
These transducers are at the core of this infrastructure, and they are in use in many higher-level NLP tasks, such as rule-based [1]} and neural disambiguation [2]}, dependency parsing [3]} and machine translation [4]}. The transducers are also in constant use in several real-world applications such as online dictionaries [5]}, spell checkers [6]}, online creative writing tools [7]}, automated news generation [8]}, language learning tools [9]} and documentation of endangered languages [10]}, [11]}. As an additional important application we can mention the wide use of FSTs in the creation of Universal Dependencies treebanks for low-resource languages, at least for Erzya [12]}, Northern Saami [13]}, Karelian [14]} and Komi-Zyrian [15]}.
i
cc27e8d7-b264-46ce-9d5b-3b423629b94e
Especially in the context of endangered languages, accuracy is a virtue. Rule-based methods serve not only as NLP tools but also as a way of documenting languages in a machine-readable fashion. Members of language communities do not benefit, for example, from a neural spell checker that works to a degree on a closed test set but fails miserably in real-world usage. On the other hand, a rule-based description of morphology can only go so far. New words appear and disappear all the time in a language, and keeping up with that pace is a never-ending job. This is where neural models come in, as they can learn to generalize rules for out-of-vocabulary words as well. Pirinen pirinen2019a also showed recently that, at least with Finnish, neural models do outperform rule-based models. That said, Finnish is already a larger language, so the experience does not necessarily translate to a low-resource scenario (see [1]}).
i
a7a68ccf-6e24-46d8-945a-01166d48caf1
The purpose of this paper is to propose neural models for the three different tasks the GiellaLT FSTs can handle: morphological analysis (i.e. given a form such as kissan, produce the morphological reading +N+Sg+Gen), morphological generation (i.e. given a lemma and a morphology, generate the desired form, e.g. kissa+N+Sg+Gen to kissan) and lemmatization (i.e. given a form, produce the lemma, e.g. kissan to kissa `a cat'). The goal is not to replace the FSTs, but to produce neural fallback models that can be used for words an FST does not cover. This way, mistakes of the neural models can easily be fixed by fixing the FST, while the overall coverage of the system increases because the neural model can stand in for the FST.
i
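A minimal sketch of this fallback behaviour, assuming the hfst Python bindings for loading a GiellaLT transducer and a hypothetical neural analyzer object with an analyze() method; the file name and the neural interface are placeholders, not the released library's API.

```python
import hfst  # assumes the hfst Python bindings are installed

# Load an analyser FST (path and format are illustrative).
analyser = hfst.HfstInputStream("fin-analyser.hfst").read()

def analyse(word, neural_model):
    """Use the FST first; fall back to the neural model for unknown words."""
    fst_results = analyser.lookup(word)   # e.g. (("kissa+N+Sg+Gen", 0.0),)
    if fst_results:
        return [analysis for analysis, weight in fst_results]
    # Out-of-vocabulary word: hypothetical neural fallback.
    return neural_model.analyze(word)
```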
3bc33d60-6722-46e1-95d9-fb4e15dba980
The main goal of this paper is not to propose a state-of-the-art solution for neural morphology. The goal is, first, to build the resources needed to train such neural models so that they follow the same morphological tags as the GiellaLT FSTs, and second, to train models that can be used together with the FSTs. All of the trained models will be made publicly available in a Python library that supports the use of the neural models and the FSTs simultaneously. The dataset built in this paper and the exact train, validation and test splits used have been made publicly available for others to use on the permanent archiving platform Zenodo.
i
036d5bc6-cdf6-40de-bcce-436bc86ce18e
We report the performance of the models in terms of accuracy, meaning how many results were fully right (entirely correct lemma, entirely correctly generated form and entirely correct morphological analysis). In addition, we report CER (character error rate) for the lemmatizers and generators, and a MER (morphological error rate) for the analyzers. These values indicate how close the model got to the correct result even if some of the results were a bit erroneous. <TABLE>
r
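CER here is the Levenshtein edit distance between the predicted and gold strings divided by the gold length, and MER is presumably computed analogously over sequences of morphological tags rather than characters. A small self-contained sketch of the character-level metric (generic code, not taken from the paper):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(predicted: str, gold: str) -> float:
    """Character error rate: edit distance normalised by the gold length."""
    return edit_distance(predicted, gold) / max(len(gold), 1)

# cer("kisa", "kissa") -> 0.2
```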
5b471feb-2433-4efa-9e1b-19d3d9d397f5
The results can be seen in Table REF ; models reaching an accuracy of over 80% are highlighted in bold. The results indicate that lemmatization is the easiest task for the models to learn, followed by generation. Morphological analysis is the most difficult task, as it receives lower scores than generation or lemmatization. Some results are exceptionally good for specific languages such as Erzya (myv) and Western Mari (mrj), while they are not good for others like Finnish (fin) and German (deu). This calls for closer investigation of the results. <FIGURE>
r
e612e806-6be4-4111-b601-917872facdee
Figure REF shows the accuracy of each model based on the morphological complexity of the input. The complexity is measured by the number of morphological tags in the FST-produced data. The complexity axis of the plots shows a relative complexity for each language, meaning that 1.0 has the maximum number of tags, 0.8 shows results for input having 80% of the maximum number of tags, and so on. The maximum complexity is shown in brackets after the language ISO code. Analyzers seem to have a lower accuracy for most of the languages when the complexity is small. This is probably due to the fact that shorter word forms tend to have more ambiguity to begin with and might be analyzed as a word different from the one in the gold standard. For many languages, the accuracy increases towards the average complexity and drops again for the most complex forms. It should be remembered that these accuracies are also affected by the peculiarities of the transducers themselves and their tagging conventions.
r
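As a sketch of how such a relative-complexity axis can be derived, the snippet below counts '+'-delimited tags in each FST analysis, normalises by the per-language maximum and averages accuracy per bin; the data layout is an assumption for illustration.

```python
from collections import defaultdict

def tag_count(analysis: str) -> int:
    """Number of morphological tags in an FST analysis, e.g. 'kissa+N+Sg+Gen' -> 3."""
    return analysis.count("+")

def accuracy_by_relative_complexity(examples, n_bins=5):
    """examples: iterable of (fst_analysis, is_correct) pairs for one language."""
    max_tags = max(tag_count(a) for a, _ in examples)
    bins = defaultdict(list)
    for analysis, correct in examples:
        rel = tag_count(analysis) / max_tags          # 0.0 .. 1.0
        bins[round(rel * n_bins) / n_bins].append(correct)
    return {k: sum(v) / len(v) for k, v in sorted(bins.items())}
```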
a0662111-5c59-4a18-bb68-e1437471e43a
Lemmatizers seem to follow the pattern of the analyzers, but do so more clearly. Lemmatization of morphologically simple forms is not as easy as that of more complex forms. However, as the complexity increases, the lemmatization accuracy does not drop for most of the languages. This probably has something to do with the fact that, unlike morphological tags, the word forms follow clearer patterns, as they are not subject to the same degree of subjectivity in the tagging decisions introduced by the different linguists working on these transducers.
r
1ef48917-2a69-40b4-a0da-fc987b1bd6d8
Generators are very even for most of the languages, in the sense that they consistently produce around the same accuracy regardless of the morphological complexity. Some of the languages, however, follow a more analyzer-like pattern, generating incorrect forms at both small and large morphological complexity. <TABLE>
r
6742efc0-63c2-4f91-9533-9c33af6441d1
Table REF shows the most difficult tags for the analyzers. The missing predictions column shows the most frequent tags the analyzer did not predict even though they were in the gold data, and the wrong predictions column shows the most frequent tags the analyzer predicted that were not in the gold data. We can see that many of the most challenging tags are shared by different languages. In various Uralic languages, for example, connegatives and imperatives, or connegatives and infinitives, are homonymous and cannot be predicted correctly from the surface form alone. Similarly, cases such as the illative and inessive are homonymous in many complex forms in the Permic languages, which surfaces in the missing predictions of all these languages. In the languages where transitivity is a feature coded into the FST, there are regular problems in predicting these categories correctly. Similarly, in many Indo-European languages gender is primarily a lexical category, and in many instances the model cannot predict it correctly when the surface form presented does not show the gender. In Section REF we go through such instances in more detail, for example in relation to the purely lexically determined Komi-Zyrian stem consonants.
r
e24355f8-e571-412e-b991-bc0da025a3c6
Table REF shows, in their respective columns, the morphological constructions that were the most difficult for the models to lemmatize and generate correctly. For instance, for Erzya (myv) generation, the translative with subsequent possessive-suffix marking is the most problematic. If it had been lemmatization, the explanation would point to the extreme infrequency of these translative forms and the fact that there is an ambiguity with genitive and nominative forms of derivations in ks. Lemmatization for Erzya, however, appears to have no issues with ambiguity at all. The same difficulties are not shared by other languages, but seem to be language specific. Eastern and Meadow Mari (mhr), for example, appears to have difficulties with generation and lemmatization of nearly the same tag set, namely the illative plural with a third person plural possessive suffix (ordered: possessive, plural and finally case marker). Looking at the sibling language Western Mari (mrj), we note that a different tagging strategy is in use, but here as well there seems to be an intersection where the same forms present problems for both generation and lemmatization.
r
9b3f6531-921c-4fc8-a77e-aa156bb30187
This could be seen as a type of sanity test whereby simple flaws in the transducers might be detected. The Latvian (lav) transducer is a blatant example of inconsistencies in transducer development. The problem, which has now been addressed and corrected, was in the multiple exponence of part-of-speech tags, i.e. there were double +V and +N tags due to the introduction of automated part-of-speech tagging in the XML-dictionary-to-FST-formalism transformation without removing the part-of-speech tagging in the subsequent continuation lexica of the rule-based transducer. Development of the Mari pair might be greatly enhanced through the introduction of a segment-ordering tag in Western or Hill Mari (mrj), which would bring it closer to the strategy followed in the Eastern and Meadow Mari (mhr) use of +So/PNC. These questions of tag and suffix ordering also appear as an important factor in Komi-Zyrian morphological generation, as discussed in Section REF . <TABLE>
r
87381100-1cb4-43e2-8611-3dacbc53159f
In this paper, we have presented a method for automatically extracting inflectional forms from FST transducers by composing a regular expression transducer for each word with an existing FST transducer. This way, we have been able to gather very large morphological training data for analysis, lemmatization and generation for 22 languages, 17 out of which are endangered and fighting for their survival. We have used this dataset to train neural models for each language. Because the data follows the tags and conventions used in the GiellaLT infrastructure, these neural models can be used directly side by side with the FST transducers in many of the applications that depend on them.
d
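A rough sketch of this extraction idea, assuming the hfst Python bindings: a regular-expression transducer restricting the input side to one lemma is composed with the generator, and the resulting paths are enumerated. The calls (hfst.regex, compose, extract_paths) follow the HFST Python documentation, but the exact arguments may differ, and the snippet is illustrative rather than the authors' extraction script.

```python
import hfst  # assumes the hfst Python bindings

# A generator FST mapping analyses such as 'kissa+N+Sg+Gen' to surface forms
# (file name is illustrative).
generator = hfst.HfstInputStream("fin-generator.hfst").read()

def forms_for_lemma(lemma: str):
    """Enumerate (analysis, surface form) path pairs for one lemma."""
    # Restrict the input side to this lemma followed by any tag sequence.
    restrict = hfst.regex(f"{{{lemma}}} ?*")
    restrict.compose(generator)
    # extract_paths enumerates input/output string pairs of the composed FST;
    # cyclic transducers (compounding, derivation) would need a path/cycle limit.
    return restrict.extract_paths(output="dict")
```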
c1230879-a6a9-4009-9e2c-af6928db3c9b
The results look very good for some languages, while being more modest for others. Analysis seems to be the hardest problem of the three, and its training also took the longest time. Despite this, many models reached an accuracy of over 80% in their tasks. This is rather good given that the evaluation was conducted entirely on out-of-vocabulary words.
d
3e40e828-03f7-43c4-94ca-c908b02c44c3
The accuracies reported in this paper are somewhat lower than what they could be. This is because we ran the evaluation by producing only one result for each input with the neural models and compared that output directly to the one in the test data. As we saw in our analysis, many of the inputs in the test data were ambiguous, which caused the neural model to produce an output that is correct, but not the one in the test data. The right way to overcome this problem would be to research how to deal with ambiguity. The neural models we trained can already produce the N best candidates for each input.
d
d3846f14-ce69-44aa-9a5f-d856ac592544
It is probable that within those N best candidates, the models actually cater for the ambiguity and produce other results that are correct as well. For instance, the Finnish word noita can be an accusative singular noun meaning `witch' or a partitive of nuo meaning `them'. Knowing how to maximize the number of forms the neural model produces while minimizing the number of incorrect forms is a question for another paper, although some methods could already be used with the models trained in this paper by introducing simple modifications to how the results are predicted [1]}.
d
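A sketch of the relaxed evaluation hinted at here: count a prediction as correct if the gold answer appears among the model's N best candidates. The predict_n_best() call is a placeholder for whatever beam-search interface the trained models expose.

```python
def accuracy_at_n(model, test_pairs, n=5):
    """test_pairs: list of (input_string, gold_output) tuples."""
    hits = 0
    for source, gold in test_pairs:
        candidates = model.predict_n_best(source, n)  # hypothetical N-best API
        hits += gold in candidates
    return hits / len(test_pairs)
```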
5b83c0e4-e4de-4f84-96d6-b1cc15c6fe93
Even though we aimed for real-world-scale morphological tag complexity by querying all possibilities from the FSTs, there are still a couple of morphological categories we did not tackle, for practical reasons. One of them is the use of clitics. The problem with these is that they can be attached to almost any kind of word regardless of its part-of-speech and inflectional form. On top of this, multiple clitics can be added one after another. To give an idea of the scale, with clitics, Finnish has 9425 unique forms for nouns (instead of 850), 216 for adverbs (instead of 16), 14794 for adjectives (instead of 1244) and a whopping 88044 forms for verbs (instead of 6667). This means that clitics need to be handled by a different approach than the one we took. One could, for example, introduce some forms with different combinations of clitics here and there in the training data, in which case the question arises of how many forms need to appear with clitics in order for the model to generalize their usage.
d
22b22e08-c450-4f7a-84ac-769c4ac541c1
Compounds and derivations could not be included because of how the FSTs were implemented. If you ask an FST for compounds and derivations, you will surely get them! Even in such quantities that your computer will run out of RAM and swap memory for the forms of a single word, as there is no limit to how many words can be written together to form a compound or how many times one can derive a new word from another. We people might have our cognitive limits for that, but the FSTs will not. Take, for instance, this derivational Skolt Sami word produced by the FST: Piân'njatõõvvõlltâsttiatemesvuõt'tsážvuõðtõvvstõlškuät'tteškuättõõlstõlstââststõõstčâtttömâs for piânnai+N+Der/Dimin+Der/N2A+Der/toovvyd+Der/oollyd+Der/jed+V+Der/Caus+Der/Dimin+Der/NomAg+N+Der/Dimin+Der/N2A+Der/teqm+A+Attr+Der/vuott+N+Der/sazh+A+Err/Orth+Attr+Der/vuott+N+Der/toovvyd+Err/Orth+Der/stoollyd+V+Der/shkueqtted+Der/jed+V+Der/Caus+Der/shkueqtted+Der/oollyd+Der/stoollyd+Der/Dimin+V+Der/Dimin+Der/Dimin+V+Der/Dimin+Der/ched+Der/Caus+Der/t+A+Superl+Attr. The problem of compounds is probably best left for a separate model to solve, as there are already methods for predicting word boundaries [1]}, [2]}. The compound splits produced by such methods could then be fed into the neural models trained in this paper. As for derivations, some of them could be included in the training data, but the question of how many forms are needed would still require further research.
d
bf26442c-9ac8-41d3-b7a9-cc4ec58ebf27
ACM's consolidated article template, introduced in 2017, provides a consistent style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific templates have been examined, and their unique features incorporated into this single new template.
i
f9b3e3c4-de1f-47ed-a616-e845424d8cfb
If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template.
i
e0f5ed37-fed6-40a7-94b9-6009d5443334
The “acmart” document class can be used to prepare articles for any ACM publication — conference or journal, and for any stage of publication, from review to final “camera-ready” copy, to the author's own version, with very few changes to the source.
i
a16d945d-e6e5-424e-8979-66c35bba42b8
Data derived from social media has been successfully used to facilitate the detection of influenza epidemics [1]}, [2]}. In addition, [3]} provides a thorough review of the use of Twitter in public health surveillance for the purpose of monitoring, detecting and forecasting influenza-like illnesses. Since the start of the COVID-19 pandemic, a number of mobile app-based self-reported symptom tools have emerged to track novel symptoms [4]}. The mobile application in [5]} applied Logistic Regression (LR) to predict the percentage of probable infected cases among the total app users in the US and UK combined. The authors in [6]} performed a statistical analysis on primary care Electronic Health Records (EHR) data to find the longitudinal dynamics of symptoms prior to and throughout the infection.
w
07319665-cbfb-410f-b125-383faefd201f
At an individual diagnostic level, Zimmerman et al. [1]} previously have applied Classification and Regression Trees to determine the likelihood of symptom severity of influenza in clinical settings. Moreover, machine learning algorithms, such as decision trees, have shown promising results in detecting COVID-19 from blood test analyses [2]}. Here, we focus on features extracted from a textual source to triage and diagnose COVID-19 for the purpose of providing population level statistics in the context of public health surveillance. Studies related to our work deploy features obtained from online portals, telehealth visits, and structured and unstructured patient/doctors notes from EHR. In general, COVID-19 clinical prediction models can broadly be categorised into risk, diagnosis and prognosis models [3]}.
w
31d3a5bd-3dc4-47b0-ba55-8496d683ff2d
In [1]}, a portal-based COVID-19 self-triage and self-scheduling tool was employed to segment patients into four risk categories: emergent, urgent, non-urgent and self-care. In contrast, the online telemedicine system in [2]} used LR to predict low-, moderate- and high-risk patients, utilising demographic information, clinical symptoms, blood tests and computed tomography (CT) scan results.
w
fe56e14a-5b15-42a0-ae80-ebfbbcc697da
In [1]}, various machine learning models were developed to predict patient outcomes from clinical, laboratory and demographic features found in EHR [2]}. They reported that Gradient Boosting (XGB), Random Forest and SVM are the best-performing models for predicting COVID-19 test results, hospital admissions and ICU admissions for positive patients, respectively. A detailed list of clinical and laboratory features can be found in [3]}, where predictive models for inpatient mortality in Wuhan were developed using an ensemble of XGB models. Similarly, in [4]}, mortality and critical events for patients were predicted using XGB classifiers. Finally, a critical review of various diagnostic and prognostic models of COVID-19 used in clinical settings can be found in [5]}.
w
8519bba2-2624-497a-87f2-f723d7a82990
In [1]}, COVID-19 symptoms were extracted from unstructured clinical notes in the EHR of patients subjected to COVID-19 PCR testing. In addition, COVID-19 SignSym [2]} was designed to automatically extract symptoms and related attributes from free text. Furthermore, the study in [3]} utilises radiological text reports from lung CT scans to diagnose COVID-19. Similar to our approach, Lopez et al. [3]} first extracted concepts using a popular medical ontology [5]} and then constructed a document representation using word embeddings [6]} and concept vectors [3]}. However, our methodology differs from theirs with respect to the extraction of relations between concepts; moreover, our data set, comprising posts obtained from medical social media, is more challenging to work with, since social media posts exhibit greater heterogeneity in language than radiological text reports.
w
cccdba77-c6a4-4fca-ac53-69d17cef0550
A schematic of our methodology to triage and diagnose patients from their social posts is shown in Figure REF . Here, the circles denote the steps followed in the pipeline. We now detail each of these steps.
m
a822ff05-14a6-40cc-abae-67fae6df9036
We evaluate the performance of the CRF and SVM classification algorithms using the standard measures of precision (\(P\)), recall (\(R\)) and macro- and micro-averaged \(F_1\) scores [1]}. Macro-averaged scores are computed by considering the score independently for each class and then taking the average, while micro-averaged scores are computed by considering all the classes together.
m
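For concreteness, a minimal example of the macro- versus micro-averaged \(F_1\) computation with scikit-learn; the label values are invented for illustration.

```python
from sklearn.metrics import f1_score

y_true = ["COVID", "NO_COVID", "COVID", "NO_COVID", "COVID"]
y_pred = ["COVID", "COVID", "COVID", "NO_COVID", "NO_COVID"]

# Macro: average the per-class F1 scores; micro: pool all decisions first.
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print("micro F1:", f1_score(y_true, y_pred, average="micro"))
```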
2fad4698-e8e6-44c8-aa62-5e8d5ce50e6a
As our data set is sufficiently balanced between the COVID and NO_COVID classes, as can be seen in Figure REF , we report micro-averaged scores for the SVR classification. In the case of concept extraction, on the other hand, the Other class dominates, so we report macro-averaged scores for the CRF classification results.
m
2e11748f-57bc-4082-a414-a0a67fdde28d
For the CRF, we report 3-fold cross-validated macro-averaged results. Specifically, we trained each fold using a Python wrapper [1]} for CRFsuite [2]}. For relation extraction, we ran our unsupervised rule-based algorithm on the 500 posts and calculated the \(F_1\) scores for varying distances, considering the two cases with and without stop words.
m
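A condensed sketch of the 3-fold cross-validated CRF training, assuming the sklearn-crfsuite wrapper for CRFsuite; the hyperparameters and the feature/label layout are placeholders rather than the paper's exact configuration.

```python
import numpy as np
import sklearn_crfsuite
from sklearn_crfsuite import metrics
from sklearn.model_selection import KFold

def cv_macro_f1(X, y, n_splits=3):
    """X: list of token-feature-dict sequences, y: list of label sequences."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                                   max_iterations=100)
        crf.fit([X[i] for i in train_idx], [y[i] for i in train_idx])
        y_pred = crf.predict([X[i] for i in test_idx])
        scores.append(metrics.flat_f1_score([y[i] for i in test_idx], y_pred,
                                            average="macro"))
    return np.mean(scores)
```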
4de0b93b-aafe-431e-bcb0-18292fa9050c
We constructed SVM binary classifiers, SVM Classifier 1 and SVM Classifier 2, using the Python wrapper for LIBSVM [1]} implemented in Sklearn [2]} with both Linear and Gaussian Radial Basis Function (RBF) kernels [3]}. Similarly, the SVR [4]}, implemented using LIBSVM, is built with both Linear and RBF kernels. The hyperparameters (\(C=10\) for the penalty, \(\gamma =0.01\) for the RBF kernel, and \(\epsilon =0.5\) for the threshold) were discovered using grid search [2]}.
m
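A condensed sketch of this setup using scikit-learn's LIBSVM-backed SVC and SVR with a grid search over linear and RBF kernels; the feature matrix is a random placeholder, and any grid values beyond the stated hyperparameters (C=10, gamma=0.01, epsilon=0.5) are assumptions.

```python
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))          # placeholder document feature vectors
y_cls = rng.integers(0, 2, size=100)    # COVID / NO_COVID labels
y_reg = rng.uniform(0, 1, size=100)     # e.g. severity scores for the SVR

param_grid = {"kernel": ["linear", "rbf"], "C": [1, 10, 100], "gamma": [0.01, 0.1]}

svm_clf = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y_cls)
svr = GridSearchCV(SVR(), {**param_grid, "epsilon": [0.1, 0.5, 1.0]}, cv=5).fit(X, y_reg)

print(svm_clf.best_params_, svr.best_params_)   # e.g. C=10, gamma=0.01, epsilon=0.5
```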
45c18216-7cc3-4deb-9ce2-2d73716ac6ab
We simulated two cases for COVID-19 triage and diagnosis. First, SVM and SVR models trained on the ground truth examine the predictive performance when they are deployed as stand-alone applications. Second, when trained on the predictions from the CRF and the RB classifier, they resemble an end-to-end NLP application. To obtain comparable results, the models were always tested on the ground truth. As a measure of performance, we report macro- and micro-averaged \(F_1\) scores for the SVM classifiers and the SVR, respectively. <FIGURE><FIGURE>
m
4c8b9ac9-f74a-4bc3-8bc3-040273bc7245
The coronavirus pandemic has drawn a spotlight on the need to develop automated processes to provide additional information to researchers, health professionals and decision-makers. Medical social media comprises a rich resource of timely information that could fit this purpose. We have demonstrated that it is possible to take an approach which aims at the detection of COVID-19 using an automated triage and diagnosis system, in order to augment public health surveillance systems, despite the heterogeneous nature of typical social media posts. The outputs from such an approach could be used to indicate the severity and estimate the prevalence of the disease in the population.
d
4a813290-091f-4c16-8529-da63f4c360be
Modern deep neural networks (DNNs) have expressive power and high capacity, resulting in accurate modeling and promising generalization. However, a big challenge in the age of deep learning is distribution shift (DS), where training and test data come from two different distributions: the training data are drawn from \(p_s({\mathbf {x}}, y)\) , the test data are drawn from \(p_t({\mathbf {x}}, y)\) , and \(p_s({\mathbf {x}}, y) \ne p_t({\mathbf {x}}, y)\) . Under DS, supervised deep learning can lead to deep classifiers biased to the training data whose performance may significantly drop on the test data. In this work, we focus on learning with noisy labels, one of the most popular DS settings where \(p_s({\mathbf {x}}) = p_t({\mathbf {x}})\) and \(p_s(y \vert {\mathbf {x}}) \ne p_t(y \vert {\mathbf {x}})\) . [1]}, [2]} <TABLE>
i
b0a21796-ce1d-44bc-bbf0-75bc1564b21c
As DNNs can essentially memorize any (even random) labeling of data, noisy labels have a drastic effect on the generalization performance of DNNs. Many approaches have been proposed to improve robustness for learning with noisy labels. Robustness to label noise can be pursued by identifying noisy samples to reduce their contribution to the loss [1]}, [2]}, by correcting their labels [3]}, [4]}, or by utilizing robust loss functions [5]}, [6]} and regularization [7]}. While these existing methods are effective in mitigating label noise, their criterion for identifying noisy examples uses the posterior information of the corrupted network, so the network may suffer from undesirable bias; recent studies point out that existing methods utilize biased information from corrupted networks [8]}. <FIGURE>
i
64601fe0-1a97-4ab8-9355-b1cadaa63135
While previous regularization-based learning frameworks [1]}, [2]} alleviate this problem, these methods require substantial computational resources and are difficult to apply in practice. To mitigate the impracticality of previous research, we provide a label-noise-aware learning framework with an efficient regularization method. Building on the theoretical analysis that LS implicitly causes Lipschitz regularization, our proposed framework, coined ALASCA, applies LS to sub-classifiers with a regularization strength that varies per data point. The strength is larger for low-confidence instances, which include most of the noisy instances. Unlike previous regularization-based approaches, ALASCA is a practical application of Lipschitz regularization on intermediate layers, achieved through adaptive LS and sub-classifiers. We show that ALASCA is a universal framework by combining it with and comparing it against existing learning-with-noisy-labels (LNL) methods, and we validate that it consistently improves generalization.
i
9f10cda9-f741-417f-9ac6-0563b9cd7661
To verify the superiority of ALASCA, we combine our proposed method with, and compare it against, various existing LNL methods and find that ours consistently improves generalization in the presence of noisy labels. The related work, experimental setups, and additional results are detailed in the Appendix.
m
c148e1a0-67aa-46c2-bac3-7ff6bb9f86d2
This paper introduces a label-noise-aware learning framework, ALASCA, which regularizes more strongly on data points with noisy labels on intermediate layers. Based on our main finding that LS implicitly incurs LR with theoretical guarantees, we suggested to conduct ALS on sub-classifiers with negligible additional computation. Furthermore, our experimental results on benchmark-simulated and real-world datasets demonstrate the superiority of the ALASCA. We hope that our contribution will lower the risk of label noise and develop robust models. Appendix: Rethinking Label Smoothing for Deep Learning Under Label Noise Related Works Learning with Noise Label [1]} empirically showed that any convolutional neural networks trained using stochastic gradient methods easily fit a random labeling of the training data. To tackle this issue, numerous works have examined the classification task with noisy labels. Existing methods address this problem by (1) filtering out the noisy examples and only using clean examples for training [2]}, [3]}, [4]}, [5]} or (2) relabeling the noisy examples by the model itself or another model trained only on a clean dataset [6]}, [7]}, [8]}, [9]}, [10]}. Some approaches focus on designing loss functions that have robust behaviors and provable tolerance to label noise [11]}, [12]}, [13]}, [14]}, [15]}, [16]}. Regularization-based Methods. Another line of works has attempted to design regularization based techniques. For example, some studies have stated that the early-stopped model can prevent the memorization phenomenon for noisy labels [17]}, [18]} and theoretically analyzed it. Based on this intuition, [10]} proposed an ELR loss function to prohibit memorizing noisy data by leveraging the semi-supervised learning (SSL) techniques. [20]} clarified which neural network parameters cause memorization and proposed a robust training strategy for these parameters. Efforts have been made to develop regularizations on the prediction level by smoothing the one-hot vector [21]}, using linear interpolation between data instances [22]}, and distilling the rescaled prediction of other models [23]}, [24]}. Recently, [25]} proposed heteroskedastic adaptive regularization which applies stronger regularization to noisy instances. Noise-Cleansing based Approaches. Existing methods address this problem by (1) filtering out the noisy data and only using the clean data for training or (2) relabeling the noisy data by the model during training or by another model trained only on a clean dataset. Firstly, for sample selection approaches, [2]} suggested decoupling method that trains two networks simultaneously, and then updates models only using the instances that have different predictions from two networks. [3]} proposed a noisy detection approaches, named co-teaching, that utilizes two networks, extracts subsets of instances with small losses from each network, and trains each network with subsets of instances filtered by another network. Recently, new noisy detector with theoretical support have been developed. [4]} introduced an algorithm that selects subsets of clean instances that provide an approximately low-rank Jacobian matrix and proved that gradient decent applied to the subsets prevent overfitting to noisy labels. [5]} proposed detecting framework, termed FINE which utilize the high-order topological information of data in latent space by using eigen decomposition of their covariance matrix. 
Secondly, for label correction approaches, [30]} provided a semi-supervised learning framework that facilitates small sets of clean instances and an additional label cleaning network to correct the massive sets of noisy labels. [6]} proposed a joint optimization framework that optimizes the network parameters and class labels using an alternative strategy. [7]} introduced CleanNet, which learns a class-embedding vector and a query-embedding vector with a similarity matching constraint to identify the noise instances that have less similar embedding vectors with their class-embedding vectors. [8]} provided a probabilistic learning framework that utilizes label distribution updated by back-propagation to correct the noisy labels. Recently, [9]} modeled the per-sample loss distribution and divide it into a labeled set with celan samples and an unlabeled set with noisy samples, and they leverage the noisy samples through the well-known semi-supervised learning technique MixMatch [35]}. Robust Loss Functions. Some approaches focus on designing loss functions that have robust behaviors and provable tolerance to label noise. [11]} theoretically proved that the mean absolute error (MAE) might be robust against noisy labels while commonly used cross-entropy loss are not. [12]} argued that MAE performed poorly with DNNs and proposed a GCE loss function, which can be seen as a generalization of MAE and CE. [13]} introduced the reverse version of the cross-entropy term (RCE) and suggested the SCE loss function as a weighted sum of CE and RCE. [14]} showed that normalized version of any loss would be robust to noisy labels. Recently, [15]} argued that symmetric condition of existing robust loss function is overly restrictive and proposed asymmetric loss function families which alleviate symmetric condition while maintain noise-tolerant of noise robust loss function. Label Smoothing Smoothing the label \(\mathbf {y}\) is a common method for improving the performance of DNNs by preventing the over-confident predictions [41]}. LSR is a technique that facilitates the generalization by replacing a ground-truth one-hot vector \(\mathbf {y}\) with a weighted mixture of hard targets \(\bar{\mathbf {y}}\) : \(\bar{\mathbf {y}}_{k} = {\left\lbrace \begin{array}{ll}(1 - \alpha ) + \frac{a}{L} &\text{if $\mathbf {y}_{k}=1$} \\ \frac{\alpha }{L} &\text{otherwise,} \end{array}\right.}\) where \(k\) denotes the index, \(L\) denotes number of classes, and \(\alpha \) is a constant. [21]} argued that LS denoise label noise by causing label correction effect and weight shrinkage regularization effect. [43]} demonstrated that KD might be a category of LS by using the adaptive noise (i.e., KD is a label regularization method). Recently, [44]} theoretically show that the LS incurs over-smooth of classifiers and leads to performance degradation under severe noise rates. Knowledge Distillation. Several works have attempted to view knowledge distillation (KD) as learned LS. The goal of KD [45]} is to effectively train a simpler network called student network by transferring the knowledge of a pretrained complex network. Although KD used in a wide of range, however, there are two distinct limitations: 1) KD requires pre-training of the complex teacher model, and 2) variation of the teacher networks will result in different performances with same student network. 
To mitigate these issues, self-distillation approaches have been proposed to enhances the effectiveness of training a student network by utilizing is own knowledge without a teacher network. [46]} introduced self-distillation method which utilize auxiliary weak classifier networks that classify the output with the feature from the middle-hidden layers. Recently, [47]} view self-distillation as instance-specific LS. Theoretical Motivation for ALASCA Framework Proof of Theorem 1 Definition B.1 (Lipschitzness) A continuously differentiable function \(\mathbf {f}\) is called Lipschitz continuous with a Lipschitz constant \(L_{\mathbf {f}} \in [0, \infty )\) if \(\left\Vert \mathbf {f}({\mathbf {x}}_{i}) - \mathbf {f}({\mathbf {x}}_{j}) \right\Vert \le L_{\mathbf {f}} \left\Vert {\mathbf {x}}_i - {\mathbf {x}}_j \right\Vert \) for all \({\mathbf {x}}_{i}, {\mathbf {x}}_{j} \in \text{dom} \, \mathbf {f}\) Definition B.2 (Lipschitz Regularization) For \(\mathcal {F}: \mathbb {R}^{D} \rightarrow \mathbb {R}^{L}\) be a twice-differentiable model family. The definition of Lipschitz regularization if aiming to optimize the function with smoothness penalty as follows: \(\min _{\mathbf {f} \in \mathcal {F}} \frac{1}{N} \sum _{n=1}^{N} \ell (\mathbf {f} ({\mathbf {x}}_{n}), {\mathbf {y}}_{n}) + \lambda \left\Vert \mathbf {J}_{\mathbf {f}}({\mathbf {x}}_n) \right\Vert _F,\) where \(\lambda \) is the regularization coefficient, \(N\) is the number of training data points, and \(\mathbf {J}_{\mathbf {f}}\) is the Jacobian matrix of \(\mathbf {f}\) . Lemma B.3 Define a function \(\Phi := \Omega \circ \mathbf {g}\) , i.e. \(\Phi (\mathbf {z}) = \Omega (\mathbf {g}(\mathbf {z}))= \Omega (\mathbf {f})\) . Then, \(\Phi (\mathbf {z})\) is twice differentiable and also convex. Let \(Q\) denote the dimension of \(\mathbf {z}\) , i.e. \(\mathbf {z} \in \mathbb {R}^{Q}\) . That is, \(\Phi \) is a function from \(\mathbb {R}^{Q}\) to \(\mathbb {R}\) . The gradient and Hessian of \(\Phi (\mathbf {z})\) is as follows: \(\begin{split}\nabla \Phi (\mathbf {z})= \frac{\partial \Phi (\mathbf {z})}{\partial \mathbf {z}} &= \frac{\partial }{\partial \mathbf {z}} \left[ L \cdot \log \left[ \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \right] - \sum _{i=1}^{L} \left< \mathbf {W}_{i}, \mathbf {z} \right> \right] \\&= L \cdot \frac{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \, \mathbf {W}_{i}}{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>}} - \sum _{i=1}^{L} \mathbf {W}_{i},\end{split}\) \(\begin{split}\nabla ^2 \Phi (\mathbf {z})&= L \cdot \frac{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \, \mathbf {W}_{i} \mathbf {W}_{i}^{\intercal } \cdot \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} - \left( \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \, \mathbf {W}_{i}\right)\left( \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \, \mathbf {W}_{i}^{\intercal }\right) }{\left(\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>}\right)^2}.\end{split}\) Now, let's show that \(\nabla ^2 \Phi (\mathbf {z})\) is positive semi-definite for all \(z\in \mathbb {R}^{Q}\) . 
For any \(\mathbf {v} \in \mathbb {R}^{Q}\) , \(\begin{split}& \qquad \mathbf {v}^{\intercal } \left( \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \, \mathbf {W}_{i} \mathbf {W}_{i}^{\intercal } \cdot \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} - \left( \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \, \mathbf {W}_{i}\right)\left( \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \, \mathbf {W}_{i}^{\intercal }\right)\right) \mathbf {v} \\& = \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} (\mathbf {W}_{i}^{\intercal } \mathbf {v} )^2 \cdot \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} - \left( \sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \, \mathbf {W}_{i}^{\intercal } \mathbf {v} \right)^2 \ge 0\end{split}\) by Cauchy-Schwarz Inequality. Thus, \(\mathbf {v}^{\intercal } \nabla ^2 \Phi (\mathbf {z}) \mathbf {v}\ge 0\) for all \(\mathbf {v} \in \mathbb {R}^{Q}\) and this proves that \(\nabla ^2 \Phi (\mathbf {z})\) is positive semi-definite for all \(\mathbf {z}\in \mathbb {R}^{Q}\) and \(\Phi (\mathbf {z})\) is convex. Assumption 1. \(\mathbf {W}_1, \mathbf {W}_2, \cdots , \mathbf {W}_L\) is an affine basis of \(\mathbb {R}^{Q}\) , where \(\mathbf {W}_i\) is the \(i\) -th column of \(\mathbf {W}\) . (i.e., \( \mathbf {W}_2-\mathbf {W}_1, \mathbf {W}_3-\mathbf {W}_1, \cdots , \mathbf {W}_L-\mathbf {W}_1\) are linearly independent.) Assumption 2. Each gradient \(\nabla h_i(\mathbf {x})\) is Lipschitz continuous with a Lipschitz constant \(L_h\) for all \(i\) , where \(\mathbf {h}(\mathbf {x})=(h_1(\mathbf {x}), h_2(\mathbf {x}), \cdots , h_Q(\mathbf {x}))\) . Note that \(\mathbf {W}_2-\mathbf {W}_1, \mathbf {W}_3-\mathbf {W}_1, \cdots , \mathbf {W}_L-\mathbf {W}_1 \) being linearly independent is equivalent to \( \mathbf {W}_2-\mathbf {W}_1, \mathbf {W}_3-\mathbf {W}_2, \cdots , \mathbf {W}_L-\mathbf {W}_{L-1}\) being linearly independent in Assumption REF . Both of these will be used in the proof of method:theorem1. Theorem 1. \(\Omega (\mathbf {f})\) is minimized when \(\mathbf {h}(\mathbf {x})=\mathbf {0}\) for all \(\mathbf {x}\) (*) and if weight matrix \(\mathbf {W}\) satisfies Assumption REF , (*) is unique minimizer. Furthermore, with Assumption REF , \(\left\Vert \mathbf {J_f}(\mathbf {x})\right\Vert _F \rightarrow \mathbf {0}\) as \(N \rightarrow \infty \) holds, where \(\mathbf {J_f}\) is the Jacobian matrix of \(\mathbf {f}\) . We specify the domain and codomain of each function: \(\mathbf {g}: \mathbb {R}^{Q} \rightarrow \mathbb {R}^{L}, \mathbf {h}: \mathcal {X} \rightarrow \mathbb {R}^{Q}\) where \(\mathcal {X} \subset \mathbb {R}^{D}\) . Then, \(\left.\frac{\partial \Omega (\mathbf {f})}{\partial \mathbf {z}} \right|_{\mathbf {z}=\mathbf {0}} = \left. L \cdot \frac{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \, \mathbf {W}_{i}}{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>}} - \sum _{i=1}^{L} \mathbf {W}_{i} \right|_{\mathbf {z}=\mathbf {0}} = \mathbf {0}.\) Since \(\Omega (\mathbf {f})\) is convex respect to \(\mathbf {z}\) by Lemma REF , \(\mathbf {z}=\mathbf {h}(\mathbf {x})=\mathbf {0}\) is the global minimizer of \(\Omega (\mathbf {f})\) . Now, by Assumption REF , \(\mathbf {W}_2-\mathbf {W}_1, \mathbf {W}_3-\mathbf {W}_1, \cdots , \mathbf {W}_L-\mathbf {W}_1\) are linearly independent and is a basis for \(\mathbb {R}^{Q}\) . 
We express \(\frac{\partial \Omega (\mathbf {f})}{\partial \mathbf {z}}\) with the basis as follows: \(\begin{split}\frac{\partial \Omega (\mathbf {f})}{\partial \mathbf {z}} &= L \cdot \frac{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>} \, \mathbf {W}_{i}}{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i}, \mathbf {z} \right>}} - \sum _{i=1}^{L} \mathbf {W}_{i} \\&= L \cdot \frac{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i} - \mathbf {W}_{1}, \mathbf {z} \right>} \, \mathbf {W}_{i}}{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i} - \mathbf {W}_{1}, \mathbf {z} \right>}} - \sum _{i=1}^{L} \mathbf {W}_{i} \\& = L \cdot \frac{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i} - \mathbf {W}_{1}, \mathbf {z} \right>} \, (\mathbf {W}_{i} - \mathbf {W}_{1})}{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i} - \mathbf {W}_{1}, \mathbf {z} \right>}} - \sum _{i=1}^{L} (\mathbf {W}_{i} - \mathbf {W}_{1}).\end{split}\) Since \(\lbrace \mathbf {W}_{j} - \mathbf {W}_{1}: 2 \le j \le L \rbrace \) forms a basis, we have \(\begin{split}\frac{\partial \Omega (\mathbf {f})}{\partial \mathbf {z}} = \mathbf {0} & \iff L \cdot \frac{ e^{\left< \mathbf {W}_{j} - \mathbf {W}_{1}, \mathbf {z} \right>} }{\sum _{i=1}^{L} e^{\left< \mathbf {W}_{i} - \mathbf {W}_{1}, \mathbf {z} \right>}} - 1 = 0 \quad \text{ for all } \;1\le j \le L \\& \iff \left< \mathbf {W}_{j} - \mathbf {W}_{1}, \mathbf {z} \right> = 0 \quad \text{ for all } \;2\le j \le L \\& \iff \left< \mathbf {W}_{i}, \mathbf {z} \right> = \left< \mathbf {W}_{j}, \mathbf {z} \right> \quad \text{ for all } \;1\le i, j \le L .\end{split}\) We can find the optimal value \(\mathbf {z}\) which minimize \(\Omega (\mathbf {f})\) by solving the \(\frac{\partial \Omega (\mathbf {f})}{\partial \mathbf {z}} = \mathbf {0}\) . Using Equation REF , we have \(\left< \mathbf {W}_{1} - \mathbf {W}_{2}, \mathbf {z} \right> = \left< \mathbf {W}_{2} - \mathbf {W}_{3}, \mathbf {z} \right> = \cdots = \left< \mathbf {W}_{L-1} - \mathbf {W}_{L}, \mathbf {z} \right>= 0\) and \(\begin{pmatrix}\mathbf {W}_{1} - \mathbf {W}_{2}\\\mathbf {W}_{2} - \mathbf {W}_{3}\\\vdots \\\mathbf {W}_{L-1} - \mathbf {W}_{L}\end{pmatrix}^{\intercal } \mathbf {z} = \mathbf {0}.\) Again by Assumption REF , the left multiplied matrix is a full rank sqaure matrix. Hence, we obtain \(\frac{\partial \Omega (\mathbf {f})}{\partial \mathbf {z}} = \mathbf {0} \Leftrightarrow \mathbf {z} = \mathbf {0}\) , which implies that \(\mathbf {z}=\mathbf {h}(\mathbf {x})=\mathbf {0}\) is the unique minimzer of \(\Omega (\mathbf {f})\) . Lastly, we show that when \(\mathbf {h}(\mathbf {x})=\mathbf {0}\) for all \(\mathbf {x}\) and with Assumption REF , \(\left\Vert \mathbf {J_f}(\mathbf {x})\right\Vert _F \rightarrow \mathbf {0}\) holds as the number of training sample \(N\) goes to \(\infty \) . Let \(\mathbf {x}_1, \mathbf {x}_2, \cdots , \mathbf {x}_N\) be the data points used in our training. For all \(i\) and \(j\) , because we have \(\mathbf {h}(\mathbf {x}_i) = \mathbf {h}(\mathbf {x}_{i+1}) = \mathbf {0}, \;\exists \; \mathbf {c}_{i,j} \in \overline{\mathbf {x}_i \mathbf {x}_{i+1}} \;\,s.t. \;\nabla h_j(\mathbf {c}_{i,j}) = \mathbf {0}\) by the Mean Value Theorem. 
Then, using Assumption REF , we obtain \(\left\Vert \nabla h_j(\mathbf {x}_{i}) \right\Vert \le L_h \left\Vert \mathbf {x}_{i} - \mathbf {c}_{i,j} \right\Vert \le L_h \left\Vert \mathbf {x}_{i} - \mathbf {x}_{i+1} \right\Vert .\) Therefore, \(\left\Vert \mathbf {J_h}(\mathbf {x}_i)\right\Vert _F = \sqrt{\sum _{j=1}^{Q} \left\Vert \nabla h_j(\mathbf {x}_{i}) \right\Vert ^2} \le L_h \sqrt{Q} \left\Vert \mathbf {x}_{i} - \mathbf {x}_{i+1} \right\Vert \) and for sufficiently large \(N\) and any \(\epsilon >0\) , we can have the training set satisfy \(\left\Vert \mathbf {x}_{i} - \mathbf {x}_{i+1} \right\Vert \le \frac{\epsilon }{L_h \sqrt{Q}}\) . This leads to \(\left\Vert \mathbf {J_h}(\mathbf {x}_i)\right\Vert _F \le \epsilon \) which implies that \(\left\Vert \mathbf {J_h}(\mathbf {x})\right\Vert _F \rightarrow \mathbf {0}\) as \(N\) goes to \(\infty \) . Finally, since \(\mathbf {f}\) is a composite function of \(\mathbf {g}\) and \(\mathbf {h}\) , \(\left\Vert \mathbf {J_f}(\mathbf {x})\right\Vert _F \rightarrow \mathbf {0}\) as \(N \rightarrow \infty \) also holds. Hence, we have that the label smoothing regularizer encourages Lipschitz regularization of deep neural network functions. Additional Theorem We consider a binary classification task where \(\mathcal {Y}=\lbrace -1,1\rbrace \) . We decompose MSE loss with LSR into MSE loss with one-hot and regularization term, and consequently show that the regularization term encourages Lipschitz regularization. Theorem B.4 Assume that \(f: [0,1]^{D} \rightarrow \mathbb {R}\) is differentiable and the range includes \([-1,1]\) . Let loss function \(\ell : \mathbb {R} \times \mathbb {R} \rightarrow \mathbb {R}_{+}\) be a square loss, i.e. \(\ell (f(\mathbf {x}), y) = ( f(\mathbf {x}) - y)^2\) . Then, minimizing the square loss with LSR \(\ell (f(\mathbf {x}), \bar{y})\) encourages Lipschitz regularization. We can express the label smoothing term as \(\bar{y} = (1-\alpha )y\) in a binary classification problem with label -1 and 1. We decompose MSE loss with LSR as follows: \(\begin{split}\ell (f(\mathbf {x}), \bar{y}) & = (f(\mathbf {x}) - \bar{y})^2 = (f(\mathbf {x}) - (1-\alpha )y)^2 \\& =\left( (1-\alpha )(f(\mathbf {x})-y)+\alpha f(\mathbf {x})\right)^2 \\& = (1-\alpha )^2 \ell (f(\mathbf {x}), y) + \alpha (2-\alpha ) f(\mathbf {x})^2 - 2\alpha (1-\alpha ) yf(\mathbf {x}) \\& = (1-\alpha )^2 \ell (f(\mathbf {x}), y) + \alpha (2-\alpha ) \left( f(\mathbf {x}) - \frac{\bar{y}}{2-\alpha } \right)^2 + c.\end{split}\) Since \(f\) is continuous and the range includes \([-1,1]\) , there exists \(\mathbf {x}_0\) such that \(f(\mathbf {x}_0) = \frac{\bar{y}}{2-\alpha }\) . 
Then, by Mean Value Theorem and Cauchy-Schwarz Inequality, we obtain an upper bound of MSE loss with LSR: \(\begin{split}\ell (f(\mathbf {x}), \bar{y}) & = (1-\alpha )^2 \ell (f(\mathbf {x}), y) + \alpha (2-\alpha ) \left( f(\mathbf {x}) - f(\mathbf {x}_0) \right)^2 + c\\& \approx (1-\alpha )^2 \ell (f(\mathbf {x}), y) + \alpha (2-\alpha ) \left( \nabla f(\mathbf {x}) \cdot (\mathbf {x}-\mathbf {x_0}) \right)^2 + c \\& \le \ell (f(\mathbf {x}), y) + \alpha (2-\alpha ) \left\Vert \nabla f(\mathbf {x})\right\Vert ^2 \left\Vert \mathbf {x}-\mathbf {x_0}\right\Vert ^2 + c\\& \le \ell (f(\mathbf {x}), y) + \alpha (2-\alpha ) \left\Vert \nabla f(\mathbf {x})\right\Vert ^2 D + c.\end{split}\) Since \(\alpha (2-\alpha )\) is an increasing function of \(\alpha \) when \(0 \le \alpha \le 1\) , we can say that Lipschitz regularization term is included in the upper bound of \(\ell (f(\mathbf {x}), \bar{y})\) . Thus, we conclude that minimizing the square loss with LSR encourages Lipschitz regularization. <FIGURE><FIGURE> Validation of the Norm of Jacobian According to [48]}, [25]}, existing studies replace Lipschitz regularization in intermediate layer of r-layer DNN by regularizing \(R(\mathbf {x}) = \left( \sum _{j=1}^{r} \left\Vert J^{(j)}(\mathbf {x}) \right\Vert ^{2}_{F} \right)^{1/2}\) with \(j\) -th hidden layer of network \(h^{(j)}\) , where \(J^{(j)}(\mathbf {x}) \frac{\partial }{\partial h^{(j)}} \ell (\mathbf {f}(\mathbf {x}), \mathbf {y})\) is Jacobian of the loss with respect to \(h^{(j)}\) . To validate our perspective, we compare the norm of Jacobian matrix for last layer of all residual blocks of ResNet34 trained on CIFAR-10 with symmetric 60% noise across different LS methods. For ALS, we used warm-up epochs as 20, for gaining an accurate regularization power coefficient \(\beta \) during total training of 120 epochs. We can observe that ALS conduct stronger regularization on noisy instances and weaker regularization on clean instances in app:fig1. Additionally, in app:fig2, conducting ALS on sub-classifier also regularize the smoothness in instance-dependent manner. Supplementary Descriptions for ALASCA Algorithm <FIGURE>Overall Description Motivated by [46]}, [51]}, [52]}, We apply sub-classifier for regularizing feature extractor, while do not affect to main classifiers. As can be seen in fig:concept, the sub-classifiers can be located everywhere of networks. In our experiments, we observe that LS leads to over-smooth of main classifier and can hurt the generalization of networks under high noise rates that previously argued in [44]}. Although ALS in ALASCA is similar to Tf-KD\(_\text{reg}\) in [43]}, our motivation is different from Tf-KD\(_\text{reg}\) . Additionally, our most important contribution is conducting ALS on sub-classifiers to incur LR but not over-smoothing. The loss function of ALASCA is as follows: \(\mathcal {L} = \ell _{LNL} ({\mathbf {q}}^{C}, {\mathbf {y}}) + \lambda \cdot \sum _{i=1}^{C-1} \ell ({\mathbf {q}}^{i}, \gamma \cdot \hat{{\mathbf {y}}} + (1-\gamma ) \cdot \mathbf {1}),\) where \(\ell _{LNL}\) is the loss function for the main classifier which can be any loss function of existing LNL methods (e.g., GCE, Co-teaching) and \(\ell \) is the cross entropy loss function for sub-classifiers. Note that \({\mathbf {q}}^{i}\) denotes the softmax vector for \(i\) -th classifier, and \(\alpha \) is EMA confidence and \(\hat{{\mathbf {y}}}\) is corrected label. We set the hyperparameter for overall regularization strength \(\lambda \) as 2.0 for all experiments. 
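A compact PyTorch sketch of the loss above: any LNL loss on the main classifier (plain cross-entropy stands in for \(\ell _{LNL}\) here) plus adaptively smoothed cross-entropy on each sub-classifier, with a per-instance smoothing factor \(\gamma \) ; interpreting the all-ones vector as uniform smoothing normalised by the number of classes is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def alasca_loss(main_logits, sub_logits_list, targets, gamma, lam=2.0):
    """main_logits: (B, L); sub_logits_list: list of (B, L) sub-classifier outputs;
    targets: (B,) corrected hard labels; gamma: (B,) per-instance confidences."""
    num_classes = main_logits.size(1)
    loss = F.cross_entropy(main_logits, targets)          # stands in for l_LNL

    one_hot = F.one_hot(targets, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    # Adaptive label smoothing: stronger smoothing for low-confidence instances.
    soft_targets = gamma.unsqueeze(1) * one_hot + (1 - gamma).unsqueeze(1) * uniform

    for sub_logits in sub_logits_list:
        log_probs = F.log_softmax(sub_logits, dim=1)
        loss = loss + lam * (-(soft_targets * log_probs).sum(dim=1)).mean()
    return loss
```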
EMA Confidence

As seen in fig:fig3 and app:fig3, the instantaneous confidence of each example has high variance across training epochs. This can yield an incorrect smoothing factor for ALS, which drives the feature extractor to a sub-optimal solution. To obtain a correct smoothing factor for each instance during training, we compute the ALS confidence with an exponential moving average (EMA). The computation procedure for the EMA confidence is as follows:

Average: We apply an exponential moving average to the logit values. With the EMA logit \({\mathbf {z}}_{\text{EMA}}\) and the instantaneous logit at the \(t\)-th epoch \({\mathbf {z}}_{t}\) , the update is \({\mathbf {z}}_{\text{EMA}} = w_{\text{EMA}} \cdot {\mathbf {z}}_{\text{EMA}} + (1 - w_{\text{EMA}}) \cdot {\mathbf {z}}_{t}.\) In our experiments, we initialize the EMA logits as all-zero vectors and set the EMA weight \(w_{\text{EMA}}\) to 0.7.

Sharpen: The averaged logits are over-smoothed, which can incur weak regularization for noisy instances and strong regularization for clean instances. Hence, we sharpen the EMA logits by scaling them with a sharpening temperature \(\tau \) . The final smoothing factor is \(\gamma = \hat{{\mathbf {y}}}^{\intercal } \cdot \texttt {Softmax} ({\mathbf {z}}_{\text{EMA}} \cdot \tau ),\) where \(\hat{{\mathbf {y}}}\) is the corrected label obtained as described in app:LCA. We set the sharpening temperature to 3.0. By applying sharpening, we can assign very different smoothing factors to clean and noisy instances, as shown in fig:fig3 and app:fig3. <FIGURE>

Label Correction with Agreement

One approach to training a robust feature extractor is explicit label correction. Compared to LS approaches, self-knowledge distillation methods [55]}, [46]} implicitly correct the corrupted labels, since they transfer the predictions of the main classifier to the sub-classifiers. While this procedure is simple yet effective, these predictions can be inaccurate because they are posterior information of networks corrupted by noisy labels. To mitigate this problem, we propose LCA (Label Correction with Agreement), which reduces the transfer of inaccurate information. The overall procedure for LCA is as follows:

Case 1: \(\hat{{\mathbf {y}}} = \operatornamewithlimits{arg\,max}_{i \in L} {\mathbf {q}}^{C}_{i}\) , if \(\operatornamewithlimits{arg\,max}_{i \in L} {\mathbf {q}}^{C}_{i} = \operatornamewithlimits{arg\,max}_{i \in L} {\mathbf {q}}^{C-1}_{i}\) and \(\tilde{{\mathbf {y}}} \ne \operatornamewithlimits{arg\,max}_{i \in L} {\mathbf {q}}^{C}_{i}\) (after the warm-up epochs).

Case 2: \(\hat{{\mathbf {y}}} = \tilde{{\mathbf {y}}}\) , otherwise.

As shown in fig:fig4, we observe that LCA lowers the risk of the feature extractor being corrupted by spurious information from the main classifier. Since the classifiers are under-fitted to the training data in the early phase, we set the warm-up period for label correction to 20 epochs.

Sub-Classifier

Following [46]}, we define each sub-classifier as a combination of a bottleneck layer and a fully connected layer. The sub-classifiers can be positioned at any part of the feature extractor. For a fair comparison with other self-knowledge distillation methods, we attach the sub-classifiers to the end of every residual block.
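Putting the EMA confidence, sharpening, and LCA rules above together, the following is a minimal PyTorch-style sketch; the buffer handling, names, and warm-up flag are illustrative assumptions rather than the exact implementation.

```python
# Minimal sketch of the EMA confidence and LCA rules; names are illustrative.
import torch
import torch.nn.functional as F

def update_ema_logits(ema_logits, logits, w_ema=0.7):
    # Average step: exponential moving average of the per-sample logits.
    return w_ema * ema_logits + (1.0 - w_ema) * logits

def smoothing_factor(ema_logits, corrected_onehot, tau=3.0):
    # Sharpen step: gamma = \hat{y}^T Softmax(tau * z_EMA), per instance.
    probs = F.softmax(tau * ema_logits, dim=1)
    return (corrected_onehot * probs).sum(dim=1, keepdim=True)

def correct_labels(main_logits, last_sub_logits, noisy_labels, after_warmup=True):
    # LCA: after warm-up, adopt the main classifier's prediction only when the
    # last sub-classifier agrees with it and both differ from the given
    # (possibly noisy) label; otherwise keep the given label.
    if not after_warmup:
        return noisy_labels
    main_pred = main_logits.argmax(dim=1)
    sub_pred = last_sub_logits.argmax(dim=1)
    agree = (main_pred == sub_pred) & (main_pred != noisy_labels)
    return torch.where(agree, main_pred, noisy_labels)
```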
Supplementary Descriptions for Main Results

Experimental Configurations

Noisy Label Generation

We inject uniform randomness into a fraction of labels for symmetric noise, and flip a label only to a specific class for asymmetric noise. For example, we generate asymmetric noise by mapping TRUCK \(\rightarrow \) AUTOMOBILE, BIRD \(\rightarrow \) AIRPLANE, DEER \(\rightarrow \) HORSE, and CAT \(\rightarrow \) DOG for CIFAR-10. For CIFAR-100, we divide the dataset into 20 super-classes of five classes each and generate asymmetric noise by changing each class to the next class within its super-class. We also generate instance-dependent noise following the state-of-the-art method of [58]} and compare performance with existing LNL methods. Define the noise rate (the global flipping rate) as \(\epsilon \) . First, to control \(\epsilon \) without constraining all instances to have the same flip rate, we sample per-instance flip rates from a truncated normal distribution \(\mathbf {N}(\epsilon , 0.1^{2}, \left[0, 1 \right])\) , where \(\left[0, 1 \right]\) indicates the range of the truncated distribution. Second, we sample the parameters \(W\) from the standard normal distribution; the size of \(W\) is \(S \times K\) , where \(S\) denotes the length of each feature.
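For illustration, here is a minimal NumPy sketch of the symmetric and asymmetric (CIFAR-10) corruption described above. The class indices assume the usual CIFAR-10 ordering, the choice of whether a symmetric flip may keep the original label is an assumption, and the instance-dependent procedure (which additionally needs the truncated-normal flip rates and the projection \(W\) from [58]}) is omitted.

```python
import numpy as np

# CIFAR-10 asymmetric mapping: TRUCK->AUTOMOBILE, BIRD->AIRPLANE,
# DEER->HORSE, CAT->DOG (indices follow the standard CIFAR-10 ordering).
ASYM_MAP = {9: 1, 2: 0, 4: 7, 3: 5}

def symmetric_noise(labels, noise_rate, num_classes=10, seed=0):
    # Flip a `noise_rate` fraction of labels to a uniformly random class.
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(noisy)) < noise_rate
    noisy[flip] = rng.integers(0, num_classes, size=int(flip.sum()))
    return noisy

def asymmetric_noise(labels, noise_rate, seed=0):
    # Flip a `noise_rate` fraction of each source class to its paired class.
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    for src, dst in ASYM_MAP.items():
        idx = np.where(noisy == src)[0]
        chosen = rng.random(len(idx)) < noise_rate
        noisy[idx[chosen]] = dst
    return noisy
```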
Hyperparameter Setups for Main Results

Noise Robust Loss Functions. We conduct experiments with CE, GCE, SCE, and ELR as mentioned in the main text, following all experimental settings presented in [10]}. We use ResNet34 models and train them with the standard PyTorch SGD optimizer with momentum 0.9 and a batch size of 128 for all experiments. We use a weight decay of \(5 \times 10^{-4}\) , set the initial learning rate to 0.02, and reduce it by a factor of 100 after 40 and 80 epochs for the CIFAR datasets (120 epochs in total). For the noise-robust loss functions, we linearly increase the \(\alpha \) value from 0.1 to 0.7 over the first half of the training epochs.

Sample-Selection Methods. In this experiment, we use ResNet34 models (also as the backbone with ALASCA) and train them with the standard PyTorch SGD optimizer with an initial learning rate of 0.02 and momentum of 0.9. We use a batch size of 128 and a weight decay of \(5 \times 10^{-4}\) for all experiments. Unlike the noise-robust loss functions, we train all baselines under our own settings and report those results. When using ALASCA, we linearly increase \(\alpha \) from 0.1 to 0.7 over the first half of the training epochs. For Decoupling and Co-teaching, we use 30 warm-up epochs, and for CRUST we set the coreset size to 0.5.

Semi-Supervised Approaches. Extending the experiments in the original papers [9]}, [10]}, we conduct experiments under various noise settings. Our experimental settings follow [9]}, and we use the hyperparameter settings reported in each paper. Similar to the above, when using ALASCA we linearly increase \(\alpha \) from 0.1 to 0.9 over the first half of the training epochs.

Detailed Results

Intuitively, when the representation from the feature extractor is sufficient, the classifier's decision boundary can remain robust despite strong noise. To verify the superiority of our framework, we combine ALASCA with various existing LNL methods and show that it consistently improves generalization in the presence of noisy data: noise-robust loss functions, sample selection approaches, and semi-supervised learning (SSL) approaches.

Noise Robust Loss Functions

Noise-robust loss functions aim to achieve a small risk on unseen clean data even when noisy labels exist in the training data. Here, we report the effect of combining ALASCA with various noise-robust loss functions: generalized cross entropy (GCE) [12]}, symmetric cross entropy (SCE) [13]}, and early-learning regularization (ELR) [10]}. (Since ELR is similar to the other robust loss functions in that its additive term regularizes the output of the network, we classify ELR as a robust loss function and do not compare against it in Section 4.1; instead, we use ELR and ALASCA together in this section and obtain better performance than the original results in [10]} for all experimental settings.) tab:robust-loss shows that ALASCA enhances generalization when applied together with noise-robust loss functions under severe noise rates.

Sample Selection Methods

Sample selection, which selects clean-sample candidates from the whole training dataset, is one of the most popular approaches to LNL. Widely used selection strategies for DNN training pick samples with small training loss [2]}, [3]} or small weight gradients [4]}. In this section, we combine our proposed method with the following sample selection approaches: (1) Decoupling [2]}, (2) Co-teaching [3]}, and (3) CRUST [4]}. tab:sample-selection summarizes the performance of the different sample selection approaches on various noise distributions and datasets. We observe that ALASCA consistently improves performance compared to training with the naive sample selection methods. Specifically, fig:SEDAN-selection shows that a network trained robustly with ALASCA selects clean examples better than the baseline: the F-scores of A-Co-teaching are consistently higher than those of the original Co-teaching. <FIGURE>

Semi-Supervised Approaches

SSL approaches [9]}, [10]} divide the training data into clean and noisy subsets, treated as labeled and unlabeled examples respectively, and use both in semi-supervised learning. Recently, methods in this category have shown the best performance among the various LNL methods, and they can train robust networks even under extremely high noise rates. However, pseudo-labels generated by corrupted networks may drive the network to a suboptimal solution. We combine ALASCA with existing semi-supervised approaches and compare the performance with the baselines. As shown in tab:semisupervised, we achieve consistently higher performance than the baselines since ALASCA enables robust training of the feature extractor. Interestingly, as fig:SEDAN-selection shows, clean and noisy data are well separated in A-DivideMix even in the extreme noise case (CIFAR-10 with 90% symmetric noise). <TABLE><TABLE><TABLE>
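Both the sample-selection baselines and the clean/noisy split used by the SSL approaches rely on a small-loss criterion. The following is a minimal sketch of that idea, with an illustrative keep-ratio threshold; it is not the exact selection rule of any particular baseline.

```python
import torch

def small_loss_split(per_sample_loss, keep_ratio):
    # Keep the `keep_ratio` fraction of samples with the smallest loss as the
    # presumed-clean set; the remainder is treated as noisy (e.g., used as
    # unlabeled data in SSL-style methods).
    num_keep = int(keep_ratio * per_sample_loss.numel())
    sorted_idx = torch.argsort(per_sample_loss)
    clean = torch.zeros_like(per_sample_loss, dtype=torch.bool)
    clean[sorted_idx[:num_keep]] = True
    return clean, ~clean
```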
Additional Results

Comparison with Existing Regularization Methods

To verify our implicit regularization framework ALASCA, we compare its performance with standard training and existing regularization methods. We compare our framework with the following regularization-based approaches: (1) uniform LSR [21]}, which replaces each one-hot label with a smoothed label using \(\alpha = 0.5\) ; (2) Mixup [22]}, which trains the network on convex combinations of pairs of examples and their labels; (3) HAR [25]}, which explicitly and adaptively regularizes the norm of the Jacobian matrices more heavily for data points in higher-uncertainty, lower-density regions; and (4) CDR [20]}, which identifies and regularizes the non-critical parameters that tend to fit noisy labels and generalize poorly. The latter two methods are similar to ours in that they regularize intermediate layers; however, they are explicit regularization methods that must locate noisy data points or non-critical parameters in order to regularize them. tab:regularize-methods reports the performance of the different regularization-based methods on various noise distributions and datasets. For the efficiency test, since the training stages differ across methods, we measure computational memory and training time over the entire training procedure: for example, HAR needs to train the network twice (once to estimate the regularization power and once to train the re-initialized network with regularization), and CDR requires additional computation to identify non-critical parameters. We observe that ALASCA consistently outperforms these competitive regularization-based methods over the various noise types and rates, while also being efficient in terms of computational memory and training time. <TABLE>
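For reference, minimal sketches of the two simplest baselines above, uniform LSR and Mixup, are given below; the Beta parameter for Mixup and the tensor interfaces are illustrative assumptions.

```python
# Illustrative sketches of the uniform LSR and Mixup baselines.
import torch
import torch.nn.functional as F

def uniform_ls_targets(labels, num_classes, alpha=0.5):
    # (1) Uniform LSR: blend the one-hot label with the uniform distribution.
    one_hot = F.one_hot(labels, num_classes).float()
    return (1.0 - alpha) * one_hot + alpha / num_classes

def mixup(x, y_onehot, beta_alpha=1.0):
    # (2) Mixup: convex combinations of pairs of examples and their labels.
    lam = torch.distributions.Beta(beta_alpha, beta_alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```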
Robustness to Hyperparameter Selection of LNL Methods

Existing LNL methods show large performance differences depending on their hyperparameters, and with improperly chosen hyperparameters their performance can fall below that of standard training. Moreover, the optimal hyperparameter values vary with the network architecture and dataset. Despite its importance, many works do not focus on hyperparameter selection, and it is challenging to reproduce the performance improvements reported in the original papers on real-world datasets. We combine ALASCA with existing LNL methods under various hyperparameter settings: (1) standard training with different weight decay coefficients, (2) ELR [10]} with different regularization coefficients, (3) Co-teaching [3]} with different numbers of warm-up epochs, and (4) CRUST [4]} with different coreset sizes. The detailed experimental setups are as follows:

Standard Training. Yu et al. [83]} argue that LSR mitigates label noise by encouraging weight shrinkage in DNNs. Motivated by this view, we find that network performance varies with the weight decay factor used for training. We train the networks with weight decay factors of 1e-4 (S1), 5e-4 (S2), and 1e-3 (S3).

Early Learning Regularization. ELR [10]} uses two hyperparameters: the temporal ensembling parameter \(\beta \) and the regularization coefficient \(\lambda \) . The authors tune them on the CIFAR datasets via grid search; the selected values are \(\beta =0.7\) and \(\lambda =3\) (S1) for CIFAR-10 with symmetric noise, \(\beta =0.9\) and \(\lambda =1\) (S2) for CIFAR-10 with asymmetric noise, and \(\beta =0.9\) and \(\lambda =7\) (S3) for CIFAR-100. We train the network on CIFAR-10 under 40% asymmetric noise with hyperparameter settings S1, S2, and S3. <TABLE>

Co-teaching. The number of warm-up epochs is very important for Co-teaching [3]}. If it is too small, the network starts filtering noisy examples from clean ones with under-fitted parameters; if it is too large, the network overfits the noisy examples. While the original paper uses 30 warm-up epochs, here we train the network with 10 (S1), 20 (S2), and 30 (S3) warm-up epochs.

CRUST. The coreset size is important for CRUST, since the network risks overfitting noisy examples with a large coreset and underfitting clean examples with a small one. The original CRUST paper [4]} selects coresets of 50% of the dataset size. In our experiments, we train the networks with coreset sizes of 30% (S1), 50% (S2), and 70% (S3) of the dataset.

The results are shown in tab:hyperparameter-selection. Since the optimal hyperparameters vary with the experimental setting, we conduct the experiments on various networks and noise rates. While the baseline performances differ widely depending on the hyperparameter settings, the performance with ALASCA remains robust even when different hyperparameters are used.

Components Analysis

<TABLE>We analyze each component of our method, namely the use of EMA confidence (EMAC) and label correction with agreement (LCA), by comparing test accuracy. Our experiments are conducted on CIFAR-10 under various noise distributions. The results in tab:ablation demonstrate that each component is indeed effective, as performance improves step by step with the addition of each component.