{"cells":[{"source":"<a href=\"https://www.kaggle.com/code/tamlhp/machine-unlearning-the-right-to-be-forgotten?scriptVersionId=136509735\" target=\"_blank\"><img align=\"left\" alt=\"Kaggle\" title=\"Open in Kaggle\" src=\"https://kaggle.com/static/images/open-in-kaggle.svg\"></a>","metadata":{},"cell_type":"markdown"},{"cell_type":"markdown","id":"9f593017","metadata":{"papermill":{"duration":0.009844,"end_time":"2023-07-12T05:38:47.069248","exception":false,"start_time":"2023-07-12T05:38:47.059404","status":"completed"},"tags":[]},"source":["# Machine Unlearning: The Right to be Forgotten\n","\n","<div class=\"figure*\">\n","<figure>\n","<img src=\"https://raw.githubusercontent.com/tamlhp/awesome-machine-unlearning/main/framework.png\" alt=\"image\" style=\"max-width: 100%;\"/>\n","<figcaption aria-hidden=\"true\">A Typical Machine Unlearning Process</figcaption>\n","</figure>\n","</div>\n","\n","## Abstract\n","\n","In today's digital landscape, computer systems store vast amounts of personal data, enabling breakthroughs in artificial intelligence (AI), particularly machine learning. However, the abundance of data poses risks to user privacy and can erode the trust between humans and AI. To address these concerns, recent regulations, commonly known as \"the right to be forgotten\", mandate the removal of private user information from computer systems and machine learning models. While erasing data from back-end databases is relatively straightforward, it is insufficient in the context of AI since machine learning models often retain memories of the old data. Additionally, recent adversarial attacks on trained models have demonstrated the ability to identify whether instances or attributes belonged to the training data. This necessitates a new approach called *machine unlearning* to make machine learning models forget specific data. 
However, existing works on machine unlearning have yet to fully solve the problem due to the lack of standardized frameworks and resources.\n","\n","This survey aims to provide a comprehensive examination of machine unlearning, including its concepts, scenarios, methods, and applications. By assembling a collection of cutting-edge studies, our intention is to serve as a valuable resource for researchers and practitioners seeking an introduction to machine unlearning. We cover various aspects of the topic, including formulation, design criteria, removal requests, algorithms, and practical applications. Furthermore, we aim to identify key findings, current trends, and unexplored research areas that could greatly benefit from the application of machine unlearning. Our ultimate goal is to contribute to the advancement of privacy technologies and provide insights for machine learning researchers and innovators.\n","\n","For the convenience of the readers, all the resources related to this survey can be accessed publicly on our GitHub repository at <https://github.com/tamlhp/awesome-machine-unlearning>. This survey supersedes [our 2022 version](https://arxiv.org/abs/2209.02299) by including 2022-2023 scientific articles and revising important sections.\n","\n","As the importance of privacy and data protection grows in the AI era, the concept of machine unlearning emerges as a critical tool to ensure user privacy and trust. This comprehensive survey provides an overview of the fundamental principles, methodologies, and real-world applications of machine unlearning. By bridging the gap in existing research and highlighting untapped potential, we hope to inspire further exploration and innovation in the field. 
We believe this survey will serve as a valuable resource for researchers and practitioners seeking to advance the understanding and implementation of machine unlearning in the context of privacy preservation.\n","\n","\n","\n"]},{"cell_type":"markdown","id":"d8001da8","metadata":{"papermill":{"duration":0.011324,"end_time":"2023-07-12T05:38:47.089836","exception":false,"start_time":"2023-07-12T05:38:47.078512","status":"completed"},"tags":[]},"source":["<h1 id=\"sec:intro\">1. Introduction</h1>\n","<p>Computer systems in today's digital landscape store vast amounts of personal data, facilitated by significant advancements in data storage and transfer technologies. The proliferation of data production, recording, and processing has skyrocketed, exemplified by the staggering number of daily YouTube video views, which exceeds four billion&#xA0;<a href=\"#ref-sari2020learning\">(Sari et al.\n","2020)</a>. These online personal data encompass digital footprints left by internet users, reflecting their behaviors, interactions, and communication patterns in the real world&#xA0;<a href=\"#ref-nguyen2019debunking\">(Nguyen et al., 2019)</a>. Additionally, personal data originates from various sources, including user-generated digital content such as product reviews, blog posts (e.g., Medium), social media status updates (e.g., Instagram), and knowledge sharing platforms (e.g., Wikipedia)&#xA0;<a href=\"#ref-nguyen2021judo\">(Nguyen et al. 2021)</a>. Furthermore, the scope of personal data has expanded to encompass information collected from wearable devices&#xA0;<a href=\"#ref-ren2022prototype\">(Ren et al., 2022)</a>. While this abundance of data has propelled advancements in artificial intelligence (AI), it also poses threats to user privacy and has contributed to numerous data breaches&#xA0;<a href=\"#ref-cao2015towards\">(Cao and Yang, 2015)</a>. 
Consequently, some users may desire complete removal of their data from systems, particularly in sensitive domains like finance or healthcare&#xA0;<a href=\"#ref-ren2022prototype\">(Ren et al., 2022)</a>. Recent regulations now mandate organizations to provide users with the \"right to be forgotten,\" granting them the ability to request the deletion of all or part of their data from a system&#xA0;<a href=\"#ref-dang2021right\">(Dang et al., 2021)</a>.\n","</p>\n","\n","<p>\n","Although complying with regulations by removing data from back-end databases is a step in the right direction, it falls short in the AI context due to the persistent &#x2018;memory&#x2019; of old data by machine learning models. During the training phase of these models, millions, if not billions, of users' data items are processed. However, unlike humans who grasp general patterns, machine learning models tend to function more like lossy data compression mechanisms&#xA0;<a href=\"#ref-schelter2020amnesia\">(Schelter, 2020)</a>, and some become overly specialized to their training data. Notably, the success of deep learning models has recently been attributed to the compression of training data&#xA0;<a href=\"#ref-tishby2000information tishby2015deep\">(Tishby et al. 2000; Tishby and Zaslavsky 2015)</a>. The existence of this memorization behavior is further substantiated by studies on adversarial attacks, which have demonstrated that private information about specific target data can be extracted from trained models&#xA0;<a href=\"#ref-ren2020generating chang2022example ren2020enhancing\">(Z.\n","Ren, Baird, et al. 2020; Chang et al. 2022; Z. Ren, Han, et al.\n","2020)</a>. However, it is important to recognize that the parameters of a trained model generally lack a clear connection to the data used for training&#xA0;<a href=\"#ref-shwartz2017opening\">(Shwartz-Ziv and Tishby 2017)</a>. 
Consequently, removing information associated with a particular data item from a machine learning model presents a challenge, making it difficult to ensure that the model forgets a user's data.\n","</p>\n","\n","<p>\n","Addressing the challenge of enabling users to have the option and flexibility to completely delete their data from a machine learning model necessitates a novel approach known as <em>machine unlearning</em>&#xA0;<a href=\"#ref-nguyen2022markov baumhauer2020machine tahiliani2021machine\">(Q. P. Nguyen et al. 2022; Baumhauer, Sch&#xF6;ttle, and Zeppelzauer 2020;\n","Tahiliani et al. 2021)</a>. Ideally, a machine unlearning mechanism would remove the data from the model without requiring a complete retraining process&#xA0;<a href=\"#ref-nguyen2022markov\">(Q. P. Nguyen et al. 2022)</a>. By adhering to users' right to be forgotten, this approach would protect the model owner from repetitive and costly retraining procedures.\n","</p>\n","\n","<p>\n","Researchers have already initiated investigations into various aspects of machine unlearning, including the removal of specific training data and the subsequent analysis of model predictions&#xA0;<a href=\"#ref-nguyen2022markov thudi2021necessity\">(Q. P. Nguyen et al. 2022; Thudi, Jia, et al. 2022)</a>. However, it has become evident that this problem cannot be fully resolved due to the absence of standardized frameworks and resources&#xA0;<a href=\"#ref-villaronga2018humans veale2018algorithms shintre2019making schelter2020amnesia\">(Villaronga,\n","Kieseberg, and Li 2018; Veale, Binns, and Edwards 2018; Shintre et al.\n","2019; Schelter 2020)</a>. In order to lay the groundwork in this emerging field, we have undertaken a comprehensive survey of machine unlearning, encompassing its definitions, scenarios, mechanisms, and applications. 
Our survey findings and accompanying resources are publicly accessible at&#xA0;<a href=\"#fn1\"\n","class=\"footnote-ref\" id=\"fnref1\"\n","role=\"doc-noteref\"><sup>1</sup></a>.\n","</p>\n","\n","<h2 id=\"reasons-for-machine-unlearning\">1.1. Reasons for Machine\n","Unlearning</h2>\n","<p>There are various reasons why users might want to delete their data from a system. We can categorize these reasons into four main groups: security, privacy, usability, and fidelity. Each of these reasons is discussed in more detail below.</p>\n","\n","<p><strong>Security.</strong> Deep learning models have recently revealed vulnerabilities to external attacks, particularly adversarial attacks <a href=\"#ref-ren2020adversarial\">(K. Ren et al., 2020)</a>. In an adversarial attack, the attacker generates adversarial data that closely resembles the original data to the point where human perception cannot distinguish between the real and fake data. This adversarial data is purposely crafted to manipulate deep learning models, causing them to generate inaccurate predictions, often leading to significant consequences. For instance, in healthcare, an erroneous prediction could result in misdiagnosis, inappropriate treatment, or even loss of life. Therefore, it is imperative to detect and remove adversarial data to ensure the security of the model. Once an attack is identified, the model must be capable of deleting the adversarial data through a machine unlearning mechanism <a href=\"#ref-cao2015towards marchant2022hard\">(Y. 
Cao and Yang 2015; Marchant, Rubinstein, and Alfeld 2022)</a>.</p>\n","\n","<p><strong>Privacy.</strong> Numerous privacy-preserving regulations have recently been implemented, encompassing the right to be forgotten <a href=\"#ref-bourtoule2021machine dang2021right\">(Bourtoule et al., 2021; Dang, 2021)</a>, such as the General Data Protection Regulation (GDPR) of the European Union <a href=\"#ref-magdziarczyk2019right\">(Mantelero, 2013)</a> and the California Consumer Privacy Act <a href=\"#ref-pardau2018california\">(Pardau, 2018)</a>. These regulations grant users the right to request the deletion of their data and related information in order to safeguard their privacy. Such legislation has emerged in response to instances of privacy breaches. For example, cloud systems can inadvertently expose user data due to multiple copies stored by various entities, backup policies, and replication strategies <a href=\"#ref-singh2017data\">(A. Singh and Anand, 2017)</a>. In another scenario, machine learning techniques used in genetic data processing have been found to unintentionally disclose patients' genetic markers <a href=\"#ref-fredrikson2014privacy wang2009learning\">(Fredrikson et al., 2014; R. Wang et al., 2009)</a>. Hence, it is unsurprising that users seek to remove their data to mitigate the risks of data leaks <a href=\"#ref-cao2015towards\">(Y. Cao and Yang, 2015)</a>.</p>\n","\n","<p><strong>Usability.</strong> People have diverse preferences when it comes to online applications and services, particularly recommender systems. An application's recommendations can be inconvenient if it fails to completely remove incorrect data (e.g., noise, malicious data, out-of-distribution data) associated with a user. 
For instance, if someone unintentionally searches for an illegal product on their laptop and continues to receive recommendations for that product on their phone, even after clearing their web browser history, the result is poor usability <a href=\"#ref-cao2015towards\">(Y. Cao and Yang, 2015)</a>. Such persistent data retention not only results in inaccurate predictions but also reduces user satisfaction and engagement.</p>\n","\n","<p><strong>Fidelity.</strong> Biased machine learning models can prompt requests for unlearning. Despite recent advancements, machine learning models are still susceptible to bias, resulting in outputs that unfairly discriminate against specific groups of people <a href=\"#ref-mehrabi2021survey\">(Mehrabi et al., 2021)</a>. For instance, COMPAS, a software employed by courts to determine parole cases, demonstrates a higher tendency to assign elevated risk scores to African-American offenders compared to Caucasians, even when ethnicity information is not included as input <a href=\"#ref-zou2018ai\">(Zou and Schiebinger, 2018)</a>. Similar instances of bias have been observed in beauty contests judged by AI, which exhibited prejudice against participants with darker skin tones, as well as facial recognition AI systems that inaccurately recognized Asian facial features <a href=\"#ref-feuerriegel2020fair\">(Feuerriegel, Dolata, and Schwabe, 2020)</a>.</p>\n","\n","<p>The origin of these biases can often be traced back to the data itself. For instance, AI systems trained on public datasets that predominantly feature individuals of white ethnicity, such as ImageNet, are more prone to making errors when processing images of individuals with black ethnicity. 
Similarly, in an application screening system, the machine learning model might unintentionally acquire inappropriate features, such as gender or race information, during the learning process <a href=\"#ref-dinsdale2021deep dinsdale2020unlearning\">(Dinsdale, Jenkinson, and Namburete, 2021; Dinsdale, Jenkinson, et al., 2020)</a>. Consequently, there is a necessity to unlearn such data, which involves discarding the associated features and affected data items.</p>\n","\n","<h2 id=\"challenges-in-machine-unlearning\">1.2. Challenges in Machine\n","Unlearning</h2>\n","\n","<p>Before we can fully achieve machine unlearning, there are several challenges that must be addressed in order to selectively remove specific portions of the training data. These challenges can be summarized as follows:</p>\n","<p><strong>Stochasticity of training.</strong> Due to the stochastic nature of the training process, we lack precise knowledge of the impact of each data point encountered during training on the machine learning model <a href=\"#ref-bourtoule2021machine\">(Bourtoule et al., 2021)</a>. For example, neural networks are typically trained on random mini-batches consisting of a fixed number of data samples, and the order of these training batches is also randomized <a href=\"#ref-bourtoule2021machine\">(Bourtoule et al., 2021)</a>. This stochasticity poses challenges for machine unlearning since the specific data sample to be removed would need to be eliminated from all batches consistently.</p>\n","<p><strong>Incrementality of training.</strong> The training process of a model is incremental, meaning that updates made to the model based on a particular data sample will impact the model's performance on subsequent data samples <a href=\"#ref-bourtoule2021machine\">(Bourtoule et al., 2021)</a>. Additionally, a model's performance on a given data sample is influenced by prior data samples. 
Finding a way to nullify the effects of a targeted training sample on the model's future performance poses a challenge for machine unlearning.</p>\n","<p><strong>Catastrophic unlearning.</strong> Typically, an unlearned model performs worse than a model retrained on the remaining data <a href=\"#ref-nguyen2020variational nguyen2022markov\">(Q. P. Nguyen, Low, and Jaillet, 2020; Q. P. Nguyen et al., 2022)</a>. However, the degradation can become exponential when more data is unlearned, leading to what is commonly known as catastrophic unlearning <a href=\"#ref-nguyen2020variational\">(Q. P. Nguyen, Low, and Jaillet, 2020)</a>. While various studies <a href=\"#ref-du2019lifelong golatkar2020eternal\">(Du, Chen, et al., 2019; Golatkar, Achille, and Soatto, 2020a)</a> have explored techniques to mitigate catastrophic unlearning through the design of specialized loss functions, finding natural ways to prevent catastrophic unlearning remains an open question.</p>"]},{"cell_type":"markdown","id":"0e72b55b","metadata":{"papermill":{"duration":0.008716,"end_time":"2023-07-12T05:38:47.108479","exception":false,"start_time":"2023-07-12T05:38:47.099763","status":"completed"},"tags":[]},"source":["<h2 id=\"contributions-of-this-survey\">1.3. Contributions of this survey</h2>\n","<p>The aim of this paper is to provide a comprehensive examination of research\n","studies on machine unlearning as well as a discussion of potential new\n","research directions. The contributions of our\n","survey can therefore be summarized as follows.\n","First, we show how to design an unlearning framework. We discuss the\n","design requirements, different types of unlearning requests, and how to\n","verify the unlearned model.\n","Second, we show how to define an unlearning problem in machine\n","learning systems. 
This includes the formulation of exact unlearning and\n","approximate unlearning as well as the definition of indistinguishability\n","metrics to compare two given models (i.e., the unlearned model and the\n","retrained model).\n","Third, we introduce a unified taxonomy that categorizes the machine\n","unlearning approaches into three branches: model-agnostic methods,\n","model-intrinsic methods, and data-driven methods.\n","Finally, we highlight the findings, trends, and forthcoming research\n","directions according to our survey results.\n","</p>\n","\n","<h2 id=\"differences-between-this-and-previous-surveys\">1.4. Differences\n","between this and previous surveys</h2>\n","<p>The table below\n","summarizes the differences between our survey and existing efforts to\n","unify the field. In particular, machine unlearning is different\n","from data deletion&#xA0;<a href=\"#ref-garg2020formalizing\">(Garg, Goldwasser, and Vasudevan\n","2020)</a>. Both topics concern the right to be forgotten legislated\n","and exercised across the world&#xA0;<a href=\"#ref-magdziarczyk2019right\">(Mantelero 2013)</a>. However, the\n","latter focuses only on the data perspective following the General Data\n","Protection Regulation (GDPR)&#xA0;<a href=\"#ref-voigt2017eu\">(Voigt and Von dem Bussche 2017)</a>, while\n","machine unlearning also addresses privacy problems from a model\n","perspective.\n","On the other hand, there are some other concepts that might be mistaken for machine\n","unlearning, such as data redaction that aims to poison the label\n","information of the data to be forgotten inside the model&#xA0;<a\n","href=\"#ref-felps2020class\">(Felps et al. 2020)</a>.\n","In other words, it forces the model to make wrong predictions about the\n","forgotten data. 
Although applicable in some settings, this approach is\n","not fully compatible with machine unlearning as the forgotten data has\n","to be known a priori when the original model is trained&#xA0;<a\n","href=\"#ref-felps2020class\">(Felps et al.\n","2020)</a>.\n"," It is also noteworthy that this survey supersedes our 2022 version&#xA0;<a\n","href=\"#ref-nguyen2022survey\">(Nguyen et al.\n","2022)</a> by including 2022-2023 scientific articles and revising important sections.\n","</p>\n","\n","<div class=\"figure*\">\n","<figure>\n","<img src=\"https://raw.githubusercontent.com/tamlhp/awesome-machine-unlearning/main/kaggle/surveys.png\" alt=\"image\" style=\"max-width: 60%;\"/>\n","<figcaption aria-hidden=\"true\">Comparison between existing surveys on machine unlearning</figcaption>\n","</figure>\n","</div>"]},{"cell_type":"markdown","id":"58eea81e","metadata":{"papermill":{"duration":0.008871,"end_time":"2023-07-12T05:38:47.126751","exception":false,"start_time":"2023-07-12T05:38:47.11788","status":"completed"},"tags":[]},"source":["<h1 id=\"sec:framework\">2. Unlearning Framework</h1>\n","<h2 id=\"unlearning-workflow\">2.1. Unlearning Workflow</h2>\n","\n","<p>The unlearning framework in <a href=\"#fig:unlearning_workflow\"\n","data-reference-type=\"autoref\"\n","data-reference=\"fig:unlearning_workflow\">[fig:unlearning_workflow]</a>\n","presents the typical workflow of a machine learning model in the\n","presence of a data removal request. In general, a model is trained on\n","some data and is then used for inference. Upon a removal request, the\n","data-to-be-forgotten is unlearned from the model. The unlearned model is\n","then verified against privacy criteria, and, if these criteria are not\n","met, i.e., if the model still leaks some information about the forgotten\n","data, the model is retrained. There are two main components to\n","this process: the <em>learning component</em> (left) and the\n","<em>unlearning component</em> (right). 
The learning component involves\n","the current data, a learning algorithm, and the current model. In the\n","beginning, the initial model is trained from the whole dataset using the\n","learning algorithm. The unlearning component involves an unlearning\n","algorithm, the unlearned model, optimization requirements, evaluation\n","metrics, and a verification mechanism. Upon a data removal request, the\n","current model will be processed by an unlearning algorithm to forget the\n","corresponding information of that data inside the model. The unlearning\n","algorithm might take several requirements into account such as\n","completeness, timeliness, and privacy guarantees. The outcome is an\n","unlearned model, which will be evaluated against different performance\n","metrics (e.g., accuracy, ZRF score, anamnesis index). However, to\n","provide a privacy certificate for the unlearned model, a verification\n","(or audit) is needed to prove that the model actually forgot the\n","requested data and that there are no information leaks. This audit might\n","include a feature injection test, a membership inference attack,\n","forgetting measurements, etc.</p>\n","<div class=\"figure*\">\n","<figure>\n","<img src=\"https://raw.githubusercontent.com/tamlhp/awesome-machine-unlearning/main/framework.png\" alt=\"image\" style=\"max-width: 80%;\"/>\n","<figcaption aria-hidden=\"true\">A Typical Machine Unlearning Process</figcaption>\n","</figure>\n","</div>"]},{"cell_type":"markdown","id":"a1591616","metadata":{"papermill":{"duration":0.009315,"end_time":"2023-07-12T05:38:47.145108","exception":false,"start_time":"2023-07-12T05:38:47.135793","status":"completed"},"tags":[]},"source":["<p>If the unlearned model passes the verification, it becomes the new\n","model for downstream tasks (e.g., inference, prediction, classification,\n","recommendation). 
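As an illustration of what this verification gate might look like, the sketch below implements a toy membership-style loss check; the function name, threshold, and inputs are all hypothetical, not a specific audit from the literature:

```python
import numpy as np

def passes_forgetting_audit(losses_forgotten, losses_heldout, tol=0.0):
    """Toy membership-inference-style audit (illustrative heuristic).

    If the unlearned model's loss on the forgotten points is systematically
    lower than on comparable held-out points, the model likely still
    'remembers' them, so the audit fails and retraining is triggered.
    """
    lf = np.asarray(losses_forgotten, dtype=float)
    lh = np.asarray(losses_heldout, dtype=float)
    return bool(lf.mean() >= lh.mean() - tol)

# Forgotten points behave like unseen data -> audit passes.
print(passes_forgetting_audit([1.05, 0.98, 1.10], [1.00, 1.08, 0.95]))  # True
# Forgotten points still have suspiciously low loss -> audit fails.
print(passes_forgetting_audit([0.02, 0.03, 0.01], [1.00, 1.08, 0.95]))  # False
```

A real audit would instead rely on the stronger tools named above, such as full membership inference attacks or feature injection tests.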
If the model does not pass verification, the remaining\n","data, i.e., the original data excluding the data to be forgotten, needs\n","to be used to retrain the model. Either way, the unlearning component\n","will be called repeatedly upon a new removal request.</p>\n","\n","<h2 id=\"unlearning-requests\">2.2. Unlearning Requests</h2>\n","<p><strong>Item Removal.</strong> Requests to remove certain\n","items/samples from the training data are the most common requests in\n","machine unlearning&#xA0;<a href=\"#ref-bourtoule2021machine\">(Bourtoule et al. 2021)</a>. The\n","techniques used to unlearn these data are described in detail in <a\n","href=\"#sec:algorithms\" data-reference-type=\"autoref\"\n","data-reference=\"sec:algorithms\">[sec:algorithms]</a>.</p>\n","<p><strong>Feature Removal.</strong> In many scenarios, privacy leaks\n","might originate not only from a single data item but also from a group of\n","data with similar features or labels&#xA0;<a href=\"#ref-warnecke2021machine\">(Warnecke et al. 2021)</a>. For\n","example, a poisoned spam filter might misclassify malicious addresses\n","that are present in thousands of emails. Thus, unlearning suspicious\n","emails alone might not be enough. Similarly, in an application screening system,\n","inappropriate features, such as the gender or race of applicants, might\n","need to be unlearned for thousands of affected applications.</p>\n","<p>In such cases, naively unlearning the affected data items\n","sequentially is imprudent as repeated retraining is computationally\n","expensive. Moreover, unlearning too many data items can inherently\n","reduce the performance of the model, regardless of the unlearning\n","mechanism used. Thus, there is a need for unlearning data at the feature\n","or label level with an arbitrary number of data items.</p>\n","<p>Warnecke et al.&#xA0;<a href=\"#ref-warnecke2021machine\">(Warnecke et al. 
2021)</a> proposed\n","a technique for unlearning a group of training data based on influence\n","functions. More precisely, the effect of training data on model\n","parameter updates is estimated and formalized in closed form. As a\n","result of this formulation, the influence of the data to be removed can be\n","reverted with a compact update instead of iteratively solving an\n","optimization problem (e.g., loss minimization). First-order and second-order derivatives are\n","the keys to computing this update effectively&#xA0;<a href=\"#ref-warnecke2021machine\">(Warnecke et al. 2021)</a>.</p>\n","<p>Guo et al.&#xA0;<a href=\"#ref-guo2022efficient\">(T.\n","Guo et al. 2022)</a> proposed another technique to unlearn a feature\n","in the data based on disentangled representation. The core idea is to\n","learn the correlation between features from the latent space as well as\n","the effects of each feature on the output space. Using this information,\n","certain features can be progressively detached from the learnt model\n","upon request, while the remaining features are still preserved to\n","maintain good accuracy. However, this method is mostly applicable to\n","deep neural networks in the image domain, in which the deeper\n","convolutional layers become smaller and can therefore identify abstract\n","features that match real-world data attributes.</p>\n","<p><strong>Class Removal.</strong> There are many scenarios where the\n","data to be forgotten belongs to one or multiple classes of a trained\n","model. For example, in face recognition applications, each class is a\n","person&#x2019;s face so there could potentially be thousands or millions of\n","classes. However, when a user opts out of the system, their face\n","information must be removed without using a sample of their face.</p>\n","<p>Similar to feature removal, class removal is more challenging than\n","item removal because retraining solutions can incur many unlearning\n","passes. 
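The derivative-based, closed-form update behind the influence-style unlearning described above can be illustrated on ordinary least squares, where the Hessian is constant and a single Newton step from the old parameters reproduces retraining exactly (a minimal sketch under a squared-loss assumption, not Warnecke et al.'s implementation):

```python
import numpy as np

def fit(X, y):
    # Ordinary least squares: theta = (X^T X)^{-1} X^T y
    return np.linalg.solve(X.T @ X, X.T @ y)

def unlearn(theta, X, y, idx):
    """Closed-form removal of rows `idx` via one Newton step.

    For squared loss the Hessian is constant, so the single step
    theta - H^{-1} g, evaluated on the remaining data, is exact.
    """
    Xr = np.delete(X, idx, axis=0)
    yr = np.delete(y, idx)
    H = Xr.T @ Xr                    # Hessian of the remaining-data loss
    g = Xr.T @ (Xr @ theta - yr)     # gradient at the old parameters
    return theta - np.linalg.solve(H, g)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
theta = fit(X, y)
theta_unlearned = unlearn(theta, X, y, [0, 1, 2])
theta_retrained = fit(np.delete(X, [0, 1, 2], axis=0), np.delete(y, [0, 1, 2]))
print(np.allclose(theta_unlearned, theta_retrained))  # True
```

For non-quadratic losses the same update is only approximate, which is where the formal guarantees discussed later become relevant.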
Even though each pass might only come at a small computational\n","cost due to data partitioning, the expense mounts up. However,\n","partitioning data by class itself does not help the model&#x2019;s training in\n","the first place, as learning the differences between classes is the core\n","of many learning algorithms&#xA0;<a href=\"#ref-tanha2020boosting\">(Tanha et al. 2020)</a>. Although some\n","of the above techniques for feature removal can be applied to class\n","removal&#xA0;<a href=\"#ref-warnecke2021machine\">(Warnecke et al. 2021)</a>, it is\n","not always the case as class information might be implicit in many\n","scenarios.</p>\n","<p>Tarun et al.&#xA0;<a href=\"#ref-tarun2021fast\">(Tarun\n","et al. 2021)</a> proposed an unlearning method for class removal\n","based on data augmentation. The basic concept is to introduce noise into\n","the model such that the classification error is maximized for the target\n","class(es). The model is updated by training on this noise without the\n","need to access any samples of the target class(es). Since such an impair\n","step may disturb the model weights and degrade the classification\n","performance for the remaining classes, a repair step is needed to train\n","the model for one or a few more epochs on the remaining data. Their\n","experiments show that the method can be efficient for large-scale\n","multi-class problems (100 classes). 
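To make the impair and repair steps concrete, here is a minimal sketch on a linear softmax classifier. The error-maximizing "noise" is built directly from the forgotten class's weight vector, which is only one crude stand-in for the learned noise of Tarun et al., so treat every choice below as illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(W, X, Y, lr=0.2, steps=400):
    # Plain gradient descent on cross-entropy for a linear classifier.
    for _ in range(steps):
        P = softmax(X @ W)
        W = W - lr * X.T @ (P - Y) / len(X)
    return W

def accuracy(W, X, labels):
    return float((np.argmax(X @ W, axis=1) == labels).mean())

# Three well-separated Gaussian classes; class 2 is to be forgotten.
centers = np.array([[4.0, 0.0], [-4.0, 0.0], [0.0, 4.0]])
X = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in centers])
labels = np.repeat([0, 1, 2], 100)
Y = np.eye(3)[labels]
W = train(np.zeros((2, 3)), X, Y)

# Impair: build error-maximizing "anti-samples" for class 2 (points the
# current model strongly rejects as class 2), label them as class 2, and
# keep training -- no genuine class-2 samples are needed.
w2 = W[:, 2]
noise = -4.0 * w2 / np.linalg.norm(w2) + rng.normal(0.0, 0.3, size=(100, 2))
W = train(W, noise, np.eye(3)[np.full(100, 2)], steps=150)

# Repair: a few more passes on the remaining classes only.
keep = labels != 2
W = train(W, X[keep], Y[keep], steps=150)

print(accuracy(W, X[keep], labels[keep]))    # stays high
print(accuracy(W, X[~keep], labels[~keep]))  # collapses: class 2 forgotten
```

The repair step restores the retained classes while the forgotten class's decision region stays collapsed, mirroring the impair/repair structure described above.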
Further, the method worked\n","especially well with face recognition tasks because the deep neural\n","networks were originally trained on triplet loss and negative samples so\n","the difference between the classes was quite significant&#xA0;<a href=\"#ref-masi2018deep\">(Masi et al.\n","2018)</a>.</p>\n","<p>Baumhauer et al.&#xA0;<a href=\"#ref-baumhauer2020machine\">(Baumhauer, Sch&#xF6;ttle, and Zeppelzauer\n","2020)</a> proposed an unlearning method for class removal based on a\n","linear filtration operator that proportionally shifts the classification\n","of the samples of the class to be forgotten to other classes. However,\n","the approach is only applicable to class removal due to the\n","characteristics of this operator.</p>\n","<p><strong>Task Removal.</strong> Today, machine learning models are not\n","only trained for a single task but also for multiple tasks. This\n","paradigm, also known as continual learning or lifelong learning&#xA0;<a href=\"#ref-parisi2019continual\">(Parisi et al.\n","2019)</a>, is motivated by the human brain, in which learning\n","multiple tasks can benefit each other due to their correlations. This\n","technique is also used to overcome data sparsity or cold-start problems\n","where there is not enough data to train a single task effectively.</p>\n","<p>However, in these settings too, there can be a need to remove private\n","data related to a specific task. For example, consider a robot that is\n","trained to assist a patient at home during their medical treatment. This\n","robot may be asked to forget this assistance behaviour after the patient\n","has recovered&#xA0;<a href=\"#ref-liu2022continual\">(B.\n","Liu, Liu, et al. 2022)</a>. To this end, temporarily learning a task\n","and forgetting it in the future has become a need for lifelong learning\n","models.</p>\n","<p>In general, unlearning a task is uniquely challenging as continual\n","learning might depend on the order of the learned tasks. 
Therefore,\n","removing a task might create a catastrophic unlearning effect, where the\n","overall performance of multiple tasks is degraded in a\n","domino effect&#xA0;<a href=\"#ref-liu2022continual\">(B.\n","Liu, Liu, et al. 2022)</a>. Mitigating this problem requires the\n","model to be aware that the task may potentially be removed in the future.\n","Liu et al.&#xA0;<a href=\"#ref-liu2022continual\">(B. Liu,\n","Liu, et al. 2022)</a> explain that this requires users to explicitly\n","define which tasks will be learned permanently and which tasks will be\n","learned only temporarily.</p>\n","<p><strong>Stream Removal.</strong> Handling data streams where a huge\n","amount of data arrives online requires some mechanisms to retain or\n","ignore certain data while maintaining limited storage&#xA0;<a href=\"#ref-nguyen2017retaining\">(Tam et al.\n","2017)</a>. In the context of machine unlearning, however, handling\n","data streams is more about dealing with a stream of removal\n","requests.</p>\n","<p>Gupta et al.&#xA0;<a href=\"#ref-gupta2021adaptive\">(Gupta et al. 2021)</a> proposed a\n","streaming unlearning setting involving a sequence of data removal\n","requests. This is motivated by the fact that many users can be involved\n","in a machine learning system and decide to delete their data\n","sequentially. Such is also the case when the training data has been\n","poisoned in an adversarial attack and the data needs to be deleted\n","gradually to recover the model&#x2019;s performance. These streaming requests\n","can be either non-adaptive or adaptive. A non-adaptive request means\n","that the removal sequence does not depend on the intermediate results of\n","each unlearning request, whereas an adaptive request means that the\n","data to be removed depends on the current unlearned model. 
In other\n","words, after the poisoned data is detected, the model is unlearned\n","gradually so as to decide which data item is most beneficial to unlearn\n","next.</p>\n","\n","<h2 id="design-requirements">2.3. Design Requirements</h2>\n","<p><strong>Completeness (Consistency).</strong> A good unlearning\n","algorithm should be complete&#xA0;<a href="#ref-cao2015towards">(Y. Cao and Yang 2015)</a>, i.e.&#xA0;the\n","unlearned model and the retrained model make the same predictions about\n","any possible data sample (whether right or wrong). One way to measure\n","this consistency is to compute the percentage of matching prediction\n","results on a test set. This requirement can be designed as an\n","optimization objective in an unlearning definition (<a\n","href="#sec:exact_unlearning" data-reference-type="autoref"\n","data-reference="sec:exact_unlearning">[sec:exact_unlearning]</a>) by\n","formulating the difference between the output space of the two models.\n","Many works on adversarial attacks can help with this formulation&#xA0;<a href="#ref-sommer2022athena chen2021machine">(Sommer\n","et al. 2022; M. Chen et al. 2021b)</a>.</p>\n","<p><strong>Timeliness.</strong> In general, retraining can fully solve\n","any unlearning problem. However, retraining is time-consuming,\n","especially when the distribution of the data to be forgotten is\n","unknown&#xA0;<a href="#ref-cao2015towards bourtoule2021machine">(Y. Cao and Yang 2015;\n","Bourtoule et al. 2021)</a>. As a result, there needs to be a\n","trade-off between completeness and timeliness. Unlearning techniques\n","that do not use retraining might be inherently incomplete, i.e., they\n","may lead to some privacy leaks, even though some provable guarantees are\n","provided for special cases&#xA0;<a href="#ref-GuoGHM20 marchant2022hard neel2021descent">(C. Guo et al.\n","2020; Marchant, Rubinstein, and Alfeld 2022; Neel, Roth, and\n","Sharifi-Malvajerdi 2021)</a>. 
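</p>

<p>The consistency and timeliness criteria can be turned into simple empirical measurements. A minimal sketch (our illustration, not code from any cited work) that computes the percentage of matching predictions between an unlearned and a retrained model, and the speed-up of unlearning over retraining:</p>

```python
import numpy as np

def consistency(unlearned_preds, retrained_preds):
    # Completeness/consistency: fraction of test samples on which the
    # unlearned and retrained models make the same prediction.
    unlearned_preds = np.asarray(unlearned_preds)
    retrained_preds = np.asarray(retrained_preds)
    return float(np.mean(unlearned_preds == retrained_preds))

def speedup(retrain_seconds, unlearn_seconds):
    # Timeliness: how many times faster unlearning is than retraining.
    return retrain_seconds / unlearn_seconds

print(consistency([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
print(speedup(3600.0, 60.0))                    # 60.0
```

<p>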
To measure timeliness, we can measure\n","the speed-up of unlearning over retraining after an unlearning request\n","is invoked.</p>\n","<p>It is also worth recognizing the cause of this trade-off between\n","retraining and unlearning. When there is not much data to be forgotten,\n","unlearning is generally more beneficial as the effects on model accuracy\n","are small. However, when there is a large amount of data to forget, retraining might\n","be better, as unlearning many times, even bounded, may catastrophically\n","degrade the model&#x2019;s accuracy&#xA0;<a href="#ref-cao2015towards">(Y. Cao and Yang 2015)</a>.</p>\n","<p><strong>Accuracy.</strong> An unlearned model should be able to\n","predict test samples correctly, or at least its accuracy should be\n","comparable to that of the retrained model. However, as retraining is\n","computationally costly, retrained models are not always available for\n","comparison. To address this issue, the accuracy of the unlearned model\n","is often measured on a new test set, or it is compared with that of the\n","original model before unlearning&#xA0;<a href="#ref-he2021deepobliviate">(He et al. 2021)</a>.</p>\n","<p><strong>Light-weight.</strong> To prepare for the unlearning process,\n","many techniques need to store model checkpoints, historical model\n","updates, training data, and other temporary data&#xA0;<a href="#ref-he2021deepobliviate bourtoule2021machine liu2020federated">(He\n","et al. 2021; Bourtoule et al. 2021; G. Liu et al. 2020)</a>. A good\n","unlearning algorithm should be light-weight and scale to big data. Any\n","other computational overhead besides unlearning time and storage cost\n","should be reduced as well&#xA0;<a href="#ref-bourtoule2021machine">(Bourtoule et al. 2021)</a>.</p>\n","<p><strong>Provable guarantees.</strong> With the exception of\n","retraining, any unlearning process might be inherently approximate. 
It\n","is therefore desirable for an unlearning method to provide a provable guarantee on\n","the unlearned model. To this end, many works have designed unlearning\n","techniques with bounded approximations of retraining&#xA0;<a href="#ref-GuoGHM20 marchant2022hard neel2021descent">(C. Guo et al.\n","2020; Marchant, Rubinstein, and Alfeld 2022; Neel, Roth, and\n","Sharifi-Malvajerdi 2021)</a>. Nonetheless, these approaches are\n","founded on the premise that models with comparable parameters will have\n","comparable accuracy.</p>\n","<p><strong>Model-agnostic.</strong> An unlearning process should be\n","generic across different learning algorithms and machine learning\n","models&#xA0;<a href="#ref-bourtoule2021machine">(Bourtoule et al. 2021)</a>,\n","ideally with provable guarantees. However, as machine\n","learning models differ widely, and so do their learning algorithms,\n","designing a model-agnostic unlearning framework could be\n","challenging.</p>\n","<p><strong>Verifiability.</strong> Beyond unlearning requests, another\n","demand by users is to verify that the unlearned model now protects their\n","privacy. To this end, a good unlearning framework should provide\n","end-users with a verification mechanism. For example, backdoor attacks\n","can be used to verify unlearning by injecting backdoor samples into the\n","training data&#xA0;<a href="#ref-sommer2020towards">(Sommer et al. 2020)</a>. If the\n","backdoor can be detected in the original model while not detected in the\n","unlearned model, then verification is considered to be a success.\n","However, such verification might be too intrusive for a trustworthy\n","machine learning system, and the verification might still introduce false\n","positives due to the inherent uncertainty in backdoor detection.</p>\n","\n","<h2 id="unlearning-verification">2.4. 
Unlearning Verification</h2>\n","<p>The goal of unlearning verification methods is to certify that one\n","cannot easily distinguish between the unlearned models and their\n","retrained counterparts&#xA0;<a href=\"#ref-thudi2021necessity\">(Thudi, Jia, et al. 2022)</a>. While\n","the evaluation metrics (<a href=\"#sec:metrics\"\n","data-reference-type=\"autoref\"\n","data-reference=\"sec:metrics\">[sec:metrics]</a>) are theoretical criteria\n","for machine unlearning, unlearning verification can act as a certificate\n","for an unlearned model. They also include best practices for validating\n","the unlearned models efficiently.</p>\n","<p>It is noteworthy that while unlearning metrics (in <a\n","href=\"#sec:formulation\" data-reference-type=\"autoref\"\n","data-reference=\"sec:formulation\">[sec:formulation]</a>) and verification\n","metrics share some overlaps, the big difference is that the former can\n","be used for optimization or to provide a bounded guarantee, while the\n","latter is used for evaluation only.</p>\n","<p><strong>Feature Injection Test.</strong> The goal of this test is to\n","verify whether the unlearned model has adjusted the weights\n","corresponding to the removed data samples based on data\n","features/attributes&#xA0;<a href=\"#ref-izzo2021approximate\">(Izzo et al. 2021)</a>. The idea is\n","that if the set of data to be forgotten has a very distinct feature\n","distinguishing it from the remaining set, it gives a strong signal for\n","the model weights. However, this feature needs to be correlated with the\n","labels of the set to be forgotten, otherwise the model might not learn\n","anything from this feature.</p>\n","<p>More precisely, an extra feature is added for each data item such\n","that it is equal to zero for the remaining set and is perfectly\n","correlated with the labels of the set to forget. 
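</p>

<p>A minimal construction of this injected feature (our illustration on synthetic data; the dataset sizes and the &#xB1;1 encoding are arbitrary choices). The extra column is zero on the retained set and perfectly correlated with the labels on the forget set:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # original features
y = rng.integers(0, 2, size=100)        # binary labels
forget = np.zeros(100, dtype=bool)
forget[:10] = True                      # the set to be forgotten

# Injected feature: 0 on the retained set, +1/-1 matching the
# labels on the forget set (perfect correlation with those labels).
extra = np.where(forget, 2 * y - 1, 0.0)
X_injected = np.column_stack([X, extra])
```

<p>A linear model trained on <code>X_injected</code> should then place a clearly nonzero weight on the last column, and that weight is expected to return to (near) zero after unlearning the forget set.</p>

<p>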
Izzo et al.&#xA0;<a href="#ref-izzo2021approximate">(Izzo et al.\n","2021)</a> applied this idea to linear classifiers, where the weight\n","associated with this extra feature is expected to be significantly\n","different from zero after training. After the model is unlearned, this\n","weight is expected to become zero. As a result, the change in this\n","weight can be plotted before and after unlearning as a measure of the\n","effectiveness of the unlearning process.</p>\n","<p>One limitation of this verification method is that the current\n","solution is only applicable to linear and logistic models&#xA0;<a href="#ref-izzo2021approximate">(Izzo et al.\n","2021)</a>. This is because these models have explicit weights\n","associated with the injected feature, whereas, for other models such as\n","deep learning, injecting such a feature as a strong signal is\n","non-trivial, even when the set to be forgotten is small. Another\n","limitation of these methods is that an injected version of the\n","data needs to be created so that the model can be trained on it (either from\n","scratch or incrementally, depending on the type of the model).</p>\n","<p><strong>Forgetting Measuring.</strong> Even after the data to be\n","forgotten has been unlearned from the model, it is still possible for\n","the model to carry detectable traces of those samples&#xA0;<a href="#ref-jagielski2022measuring">(Jagielski et al.\n","2022)</a>. Jagielski et al.&#xA0;<a href="#ref-jagielski2022measuring">(Jagielski et al. 2022)</a>\n","proposed a formal way to measure the forgetfulness of a model via\n","privacy attacks. 
More precisely, a model is said to <span\n","class="math inline"><em>&#x3B1;</em></span>-forget a training sample if a\n","privacy attack (e.g., a membership inference attack) on that sample achieves a\n","success rate no greater than <span class="math inline"><em>&#x3B1;</em></span>.\n","This definition is more flexible than differential privacy because a\n","training algorithm is differentially private only if it immediately\n","forgets every sample it learns. As a result, this definition allows a\n","sample to be temporarily learned, and measures how long until it is\n","forgotten by the model.</p>\n","<p><strong>Information Leakage.</strong> Many machine learning models\n","inherently leak information during the model updating process&#xA0;<a href="#ref-chen2021machine">(M. Chen et al.\n","2021b)</a>. Recent works have exploited this phenomenon by comparing\n","the model before and after unlearning to measure the information\n","leakage. More precisely, Salem et al.&#xA0;<a href="#ref-salem2020updates">(Salem et al. 2020)</a> proposed an\n","adversarial attack in the image domain that could reconstruct a removed\n","sample after a classifier unlearns that sample. Zanella-B&#xE9;guelin et\n","al.&#xA0;<a href="#ref-zanella2020analyzing">(Zanella-B&#xE9;guelin et al. 2020)</a>\n","suggested a similar approach for the text domain. Chen et al.&#xA0;<a href="#ref-chen2021machine">(M. Chen et al.\n","2021b)</a> introduced a membership inference attack to detect whether\n","a removed sample belongs to the training set. Compared to previous\n","works&#xA0;<a href="#ref-Salem0HBF019 shokri2017membership">(Salem et al. 2019;\n","Shokri et al. 2017)</a>, their approach additionally makes use of the\n","posterior output distribution of the original model, besides that of the\n","unlearned model. Chen et al.&#xA0;<a href="#ref-chen2021machine">(M. Chen et al. 
2021b)</a> also proposed\n","two leakage metrics, namely the degradation count and the degradation\n","rate.</p>\n","<div class="compactitem">\n","<p>The <em>degradation count</em> is defined as the ratio between the\n","number of target samples whose membership can be inferred by the\n","proposed attack with higher confidence compared to traditional attacks\n","and the total number of samples.</p>\n","<p>The <em>degradation rate</em> is defined as the average improvement\n","rate of the confidence of the proposed attack compared to traditional\n","attacks.</p>\n","</div>\n","<p><strong>Membership Inference Attacks.</strong> This kind of attack is\n","designed to detect whether a target model leaks data&#xA0;<a href="#ref-shokri2017membership thudi2022bounding chen2021machine">(Shokri\n","et al. 2017; Thudi, Shumailov, et al. 2022; M. Chen et al.\n","2021b)</a>. Specifically, an inference model is trained to distinguish\n","new data samples from the training data used to optimize the target\n","model. In&#xA0;<a href="#ref-shokri2017membership">(Shokri et al. 2017)</a>, a set of\n","shadow models were trained on a new set of data items different from\n","the one that the target model was trained on. The attack model was then\n","trained to predict whether a data item belonged to the training data\n","based on the predictions made by the shadow models for training as well as\n","testing data. The training sets for the shadow and attack models share a\n","similar data distribution with that of the target model. Membership inference\n","attacks are helpful for detecting data leaks. Hence, they are useful for\n","verifying the effectiveness of machine unlearning&#xA0;<a href="#ref-chen2021machine">(M. Chen et al.\n","2021b)</a>.</p>\n","<p><strong>Backdoor attacks.</strong> Backdoor attacks were proposed to\n","inject backdoors into the data for deceiving a machine learning\n","model&#xA0;<a href="#ref-wang2019neural">(B. Wang et al.\n","2019)</a>. 
The deceived model makes correct predictions on clean\n","data, but for poisoned data carrying a backdoor trigger, it predicts the\n","attacker&#x2019;s target class. Backdoor attacks were used to verify the\n","effectiveness of machine unlearning in&#xA0;<a href="#ref-sommer2020towards sommer2022athena">(Sommer et al. 2020,\n","2022)</a>. Specifically, the setting begins with training a model\n","on a mixture of clean and poisoned data items across all users. Some\n","of the users want their data deleted. If the users&#x2019; data are not\n","successfully deleted, the poison samples will be predicted as the target\n","class. Otherwise, the model will not predict the poison samples as the\n","target class. However, there is no absolute guarantee that this rule is\n","always correct, although one can increase the number of poison samples\n","to make this rule less likely to fail.</p>\n","<p><strong>Slow-down attacks.</strong> Some studies focus on the\n","theoretical guarantee of indistinguishability between unlearned and\n","retrained models. However, the practical bounds on computation costs are\n","largely neglected in these papers&#xA0;<a href="#ref-marchant2022hard">(Marchant, Rubinstein, and Alfeld\n","2022)</a>. As a result, a new threat has been introduced to machine\n","unlearning, in which poisoning attacks are used to slow down the unlearning\n","process. Formally, let <span\n","class="math inline"><em>h</em><sub>0</sub>&#x2004;=&#x2004;<em>A</em>(<em>D</em>)</span>\n","be an initial model trained by a learning algorithm <span\n","class="math inline"><em>A</em></span> on a dataset <span\n","class="math inline"><em>D</em></span>. 
The goal of the attacker is to\n","poison a subset <span\n","class="math inline"><em>D</em><sub><em>p</em><em>o</em><em>i</em><em>s</em><em>o</em><em>n</em></sub>&#x2004;&#x2282;&#x2004;<em>D</em></span>\n","so as to maximize the computation cost of removing <span\n","class="math inline"><em>D</em><sub><em>p</em><em>o</em><em>i</em><em>s</em><em>o</em><em>n</em></sub></span>\n","from <span class="math inline"><em>h</em><sub>0</sub></span> using an unlearning\n","algorithm <span class="math inline"><em>U</em></span>. Marchant et\n","al.&#xA0;<a href="#ref-marchant2022hard">(Marchant,\n","Rubinstein, and Alfeld 2022)</a> defined and estimated the\n","computation cost of certified removal methods. However, generalizing\n","this computation cost to different unlearning methods is still an open\n","research direction.</p>\n","<p><strong>Interclass Confusion Test.</strong> The idea of this test is\n","to investigate whether information from the data to be forgotten can\n","still be inferred from an unlearned model&#xA0;<a href="#ref-goel2022evaluating">(Goel, Prabhu, and Kumaraguru\n","2022)</a>. Different from traditional approximate unlearning\n","definitions that focus on the indistinguishability between unlearned and\n","retrained models in the parameter space, this test focuses on the output\n","space. More precisely, the test involves randomly selecting a set of\n","samples <span class="math inline"><em>S</em>&#x2004;&#x2282;&#x2004;<em>D</em></span> from\n","two chosen classes in the training data <span\n","class="math inline"><em>D</em></span> and then randomly swapping the\n","label assignment between the samples of different classes to result in a\n","confused set <span class="math inline"><em>S</em>&#x2032;</span>. 
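</p>

<p>A sketch of constructing such a confused set (our toy illustration; the class choices and swap count are arbitrary):</p>

```python
import numpy as np

def confuse_labels(y, class_a, class_b, n_swap, seed=0):
    # Build a confused set S' by swapping the labels of n_swap samples
    # from each of the two chosen classes.
    rng = np.random.default_rng(seed)
    y = np.array(y)
    idx_a = rng.choice(np.where(y == class_a)[0], n_swap, replace=False)
    idx_b = rng.choice(np.where(y == class_b)[0], n_swap, replace=False)
    y[idx_a] = class_b
    y[idx_b] = class_a
    return y, np.concatenate([idx_a, idx_b])  # new labels, indices of S'

labels, swapped = confuse_labels([0, 0, 0, 0, 1, 1, 1, 1], 0, 1, 2)
```

<p>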
Together\n","<span class="math inline"><em>S</em>&#x2032;</span> and <span\n","class="math inline"><em>D</em>&#x2005;\\&#x2005;<em>S</em></span> form a new training\n","dataset <span class="math inline"><em>D</em>&#x2032;</span>, resulting in a new\n","trained model. <span class="math inline"><em>S</em>&#x2032;</span> is\n","considered to be the forgotten data. From this, Goel et al.&#xA0;<a href="#ref-goel2022evaluating">(Goel, Prabhu, and\n","Kumaraguru 2022)</a> compute a forgetting score from a confusion\n","matrix generated by the unlearned model. A lower forgetting score means\n","a better unlearned model.</p>\n","<p><strong>Federated verification.</strong> Unlearning verification in\n","federated learning is uniquely challenging. First, the participation of\n","one or a few clients in the federation may subtly change the global\n","model&#x2019;s performance, making verification in the output space\n","challenging. Second, verification using adversarial attacks is not\n","applicable in the federated setting because it might introduce new\n","security threats to the infrastructure&#xA0;<a href="#ref-gao2022verifi">(X. Gao et al. 2022)</a>. As a result, Gao\n","et al.&#xA0;<a href="#ref-gao2022verifi">(X. Gao et al.\n","2022)</a> propose a verification mechanism that uses a few\n","communication rounds for clients to verify their data in the global\n","model. This approach is compatible with federated settings because the\n","model is trained in the same way, with the clients communicating with the\n","server over several rounds.</p>\n","<p><strong>Cryptographic proofs.</strong> Since most existing\n","verification frameworks do not provide any theoretical guarantee,\n","Eisenhofer et al.&#xA0;<a href="#ref-eisenhofer2022verifiable">(Eisenhofer et al. 
2022)</a>\n","proposed a cryptography-informed protocol to compute two proofs,\n","i.e.&#xA0;proof of update (the model was trained on a particular dataset\n","<span class="math inline"><em>D</em></span>) and proof of unlearning\n","(the forget item <span class="math inline"><em>d</em></span> is not a\n","member of <span class="math inline"><em>D</em></span>). The core idea of\n","the proof of update is to use a SNARK&#xA0;<a href="#ref-bitansky2012extractable">(Bitansky et al. 2012)</a> data\n","structure to commit to a hash whenever the model is updated (learned or\n","unlearned) while ensuring that: (i) the model was obtained from the\n","remaining data, (ii) the remaining data does not contain any forget\n","items, (iii) the previous forget set is a subset of the current forget\n","set, and (iv) the forget items are never re-added into the training\n","data. The core idea of the proof of unlearning is to use a Merkle tree\n","to maintain the order of data items in the training data so that an\n","unlearned item cannot be added to the training data again. While the\n","approach is demonstrated on SISA (efficient retraining)&#xA0;<a href="#ref-bourtoule2021machine">(Bourtoule et al.\n","2021)</a>, it is applicable to any unlearning method.</p>\n","\n"]},{"cell_type":"markdown","id":"89b43000","metadata":{"papermill":{"duration":0.00913,"end_time":"2023-07-12T05:38:47.163691","exception":false,"start_time":"2023-07-12T05:38:47.154561","status":"completed"},"tags":[]},"source":["<h1 id="sec:problem">3. Unlearning Definition</h1>\n","<p>While the application of machine unlearning can originate from\n","security, usability, fidelity, and privacy concerns, it is often\n","formulated as a privacy-preserving problem where users can ask for the\n","removal of their data from computer systems and machine learning\n","models&#xA0;<a href="#ref-sekhari2021remember ginart2019making bourtoule2021machine garg2020formalizing">(Sekhari\n","et al. 2021; Ginart et al. 
2019; Bourtoule et al. 2021; Garg,\n","Goldwasser, and Vasudevan 2020)</a>. The forgetting request can be\n","motivated by security and usability reasons as well. For example, the\n","models can be attacked by adversarial data and produce wrong outputs.\n","Once these types of attacks are detected, the corresponding adversarial\n","data has to be removed as well, without harming the model&#x2019;s predictive\n","performance.</p>\n","<p>When fulfilling a removal request, the computer system needs to\n","remove all of the user&#x2019;s data and &#x2018;forget&#x2019; any influence on the models that\n","were trained on those data. As removing data from a database is\n","considered trivial, the literature mostly concerns how to unlearn data\n","from a model&#xA0;<a href="#ref-GuoGHM20 izzo2021approximate neel2021descent ullah2021machine">(C.\n","Guo et al. 2020; Izzo et al. 2021; Neel, Roth, and Sharifi-Malvajerdi\n","2021; Ullah et al. 2021)</a>.</p>\n","<p>To properly formulate an unlearning problem, we need to introduce a\n","few concepts. First, let us denote <span class="math inline">&#x1D4B5;</span> as\n","an example space, i.e., a space of data items or examples (called\n","samples). Then, the set of all possible training datasets is denoted as\n","<span class="math inline">&#x1D4B5;<sup>*</sup></span>. One can argue that <span\n","class="math inline">&#x1D4B5;<sup>*</sup>&#x2004;=&#x2004;2<sup>&#x1D4B5;</sup></span> but that is not\n","important, as a particular training dataset <span\n","class="math inline"><em>D</em>&#x2004;&#x2208;&#x2004;&#x1D4B5;<sup>*</sup></span> is often\n","given as input. Given <span class="math inline"><em>D</em></span>, we\n","want to obtain a machine learning model from a hypothesis space <span\n","class="math inline">&#x210B;</span>. In general, the hypothesis space <span\n","class="math inline">&#x210B;</span> covers the parameters and the meta-data of\n","the models. 
Sometimes, it is modeled as <span\n","class="math inline">&#x1D4B2;&#x2005;&#xD7;&#x2005;<em>&#x398;</em></span>, where <span\n","class="math inline">&#x1D4B2;</span> is the parameter space and <span\n","class="math inline"><em>&#x398;</em></span> is the metadata/state space. The\n","process of training a model on <span\n","class="math inline"><em>D</em></span> in the given computer system is\n","enabled by a learning algorithm, denoted by a function <span\n","class="math inline"><em>A</em>&#x2004;:&#x2004;&#x1D4B5;<sup>*</sup>&#x2004;&#x2192;&#x2004;&#x210B;</span>, with the\n","trained model denoted as <span\n","class="math inline"><em>A</em>(<em>D</em>)</span>.</p>\n","<p>To support forgetting requests, the computer system needs to have an\n","unlearning mechanism, denoted by a function <span\n","class="math inline"><em>U</em></span>, that takes as input a training\n","dataset <span\n","class="math inline"><em>D</em>&#x2004;&#x2208;&#x2004;&#x1D4B5;<sup>*</sup></span>, a forget\n","set <span\n","class="math inline"><em>D</em><sub><em>f</em></sub>&#x2004;&#x2282;&#x2004;<em>D</em></span>\n","(data to forget) and a model <span\n","class="math inline"><em>A</em>(<em>D</em>)</span>. It returns a\n","sanitized (or unlearned) model <span\n","class="math inline"><em>U</em>(<em>D</em>,<em>D</em><sub><em>f</em></sub>,<em>A</em>(<em>D</em>))&#x2004;&#x2208;&#x2004;&#x210B;</span>.\n","The unlearned model is expected to be the same or similar to a retrained\n","model <span\n","class="math inline"><em>A</em>(<em>D</em>\<em>D</em><sub><em>f</em></sub>)</span>\n","(i.e., a model as if it had been trained on the remaining data). 
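</p>

<p>In this notation, retraining on the remaining data is the trivial (exact but expensive) instantiation of the unlearning mechanism: U(D, D<sub>f</sub>, A(D)) = A(D \ D<sub>f</sub>). A toy sketch of the notation (ours; the "learning algorithm" is deliberately simplistic and not a method from the literature):</p>

```python
def A(D):
    # Toy 'learning algorithm': the model is just the mean of the data.
    return sum(D) / len(D)

def U(D, D_f, model):
    # Trivial unlearning mechanism: discard the model and retrain on D \ D_f.
    remaining = [z for z in D if z not in D_f]
    return A(remaining)

D = [1.0, 2.0, 3.0, 4.0]
print(U(D, [4.0], A(D)))  # 2.0  (== A([1.0, 2.0, 3.0]))
```

<p>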
Note\n","that <span class="math inline"><em>A</em></span> and <span\n","class="math inline"><em>U</em></span> are assumed to be randomized\n","algorithms, i.e., the output is non-deterministic and can be modelled as\n","a conditional probability distribution over the hypothesis space given\n","the input data&#xA0;<a href="#ref-marchant2022hard">(Marchant, Rubinstein, and Alfeld\n","2022)</a>. This assumption is reasonable as many learning algorithms\n","are inherently stochastic (e.g., SGD) and some floating-point operations\n","involve randomness in computer implementations&#xA0;<a href="#ref-bourtoule2021machine">(Bourtoule et al. 2021)</a>.\n","Another note is that we do not define the function <span\n","class="math inline"><em>U</em></span> precisely beforehand, as its\n","definition varies with different settings.</p>\n","\n"]},{"cell_type":"markdown","id":"fcec9a18","metadata":{"papermill":{"duration":0.008787,"end_time":"2023-07-12T05:38:47.181975","exception":false,"start_time":"2023-07-12T05:38:47.173188","status":"completed"},"tags":[]},"source":["<h1 id="sec:algorithms">4. Unlearning Algorithms</h1>\n","<p>As mentioned in Section&#xA0;<a href="#sec:intro"\n","data-reference-type="ref" data-reference="sec:intro">1</a>, machine\n","unlearning can remove data and data linkages without retraining the\n","machine learning model from scratch, saving time and computational\n","resources&#xA0;<a href="#ref-wang2022federated chen2021machinegan">(J. Wang, Guo, et al.\n","2022; K. Chen, Huang, et al. 2021)</a>. 
The specific approaches of\n","machine unlearning can be categorized into model-agnostic,\n","model-intrinsic, and data-driven approaches.</p>\n","\n","<div class=\"figure*\">\n","<figure>\n","<img src=\"https://raw.githubusercontent.com/tamlhp/awesome-machine-unlearning/main/kaggle/algorithms.png\" alt=\"image\" style=\"max-width: 80%;\"/>\n","<figcaption aria-hidden=\"true\">Comparison of Unlearning Methods</figcaption>\n","</figure>\n","</div>"]},{"cell_type":"markdown","id":"2626bb57","metadata":{"papermill":{"duration":0.00877,"end_time":"2023-07-12T05:38:47.199777","exception":false,"start_time":"2023-07-12T05:38:47.191007","status":"completed"},"tags":[]},"source":["<h2 id=\"sec:model-agnostic\">4.1. Model-Agnostic Approaches</h2>\n","\n","\n","\n","<p>Model-agnostic machine unlearning methodologies include unlearning\n","processes or frameworks that are applicable to different models.\n","However, in some cases, theoretical guarantees are only provided for a\n","class of models (e.g., linear models). Nonetheless, they are still\n","considered to be model-agnostic as their core ideas are applicable to\n","complex models (e.g.&#xA0;deep neural networks) with practical results.</p>\n","\n","<div class=\"figure*\">\n","<figure>\n","<img src=\"https://raw.githubusercontent.com/tamlhp/awesome-machine-unlearning/main/figs/model-agnostic.png\" alt=\"https://arxiv.org/abs/2209.02299\" style=\"max-width: 70%;\"/>\n","</figure>\n","</div>\n","\n","<p><strong>Differential Privacy.</strong> Differential privacy was first\n","proposed to bound a data sample&#x2019;s influence on a machine learning\n","model&#xA0;<a href=\"#ref-dwork2008differential\">(Dwork\n","2008)</a>. 
<span class=\"math inline\"><em>&#x3F5;</em></span>-differential\n","privacy unlearns a data sample by setting <span\n","class=\"math inline\"><em>&#x3F5;</em>&#x2004;=&#x2004;0</span>, where <span\n","class=\"math inline\"><em>&#x3F5;</em></span> bounds the level of change in any\n","model parameters affected by that data sample&#xA0;<a href=\"#ref-bourtoule2021machine thudi2022unrolling\">(Bourtoule et al.\n","2021; Thudi, Deza, et al. 2022)</a>. However, Bourtoule et al.&#xA0;<a href=\"#ref-bourtoule2021machine\">(Bourtoule et al.\n","2021)</a> notes that the algorithm cannot learn from the training\n","data in such a case. Gupta et el.&#xA0;<a href=\"#ref-gupta2021adaptive\">(Gupta et al. 2021)</a> proposed a\n","differentially private unlearning mechanism for streaming data removal\n","requests. These requests are adaptive as well, meaning the data to be\n","removed depends on the current unlearned model. The idea, which is based\n","on differential privacy, can be roughly formulated as: <span\n","class=\"math display\">Pr&#x2006;(<em>U</em>(<em>D</em>,<em>s</em>,<em>A</em>(<em>D</em>))&#x2208;&#x1D4AF;)&#x2004;&#x2264;&#x2004;<em>e</em><sup><em>&#x3F5;</em></sup><em>P</em><em>r</em>(<em>A</em>(<em>D</em>\\<em>s</em>)&#x2208;&#x1D4AF;)&#x2005;+&#x2005;<em>&#x3B2;</em></span>\n","for all adaptive removal sequences <span\n","class=\"math inline\"><em>s</em>&#x2004;=&#x2004;(<em>z</em><sub>1</sub>,&#x2026;,<em>z</em><sub><em>k</em></sub>)</span>.\n","One weakness of this condition is that it only guarantees the upper\n","bound of the unlearning scheme compared to full retraining. However, its\n","strength is that it supports a user&#x2019;s belief that the system has engaged\n","in full retraining. Finally, an unlearning process is developed by a\n","notion of differentially private publishing functions and a theoretical\n","reduction from adaptive to non-adaptive sequences. 
Differentially\n","private publishing functions guarantee that the models before and after\n","an unlearning request do not differ too much.</p>\n","<p><strong>Certified Removal Mechanisms.</strong> Unlearning algorithms\n","falling into this category are the ones following the original\n","approximate definition of machine unlearning&#xA0;<a href="#ref-GuoGHM20 golatkar2020eternal">(C. Guo et al. 2020; Golatkar,\n","Achille, and Soatto 2020a)</a>. While Guo et al.&#xA0;<a href="#ref-GuoGHM20">(C. Guo et al. 2020)</a> focus\n","on theoretical guarantees for linear models and convex losses, Golatkar\n","et al.&#xA0;<a href="#ref-golatkar2020eternal">(Golatkar, Achille, and Soatto\n","2020a)</a> introduce a computable upper bound for SGD-based learning\n","algorithms, especially deep neural networks. The core idea is based on\n","the notion of perturbation (noise) to mask the small residue incurred by\n","the gradient-based update (e.g., a one-step Newton update&#xA0;<a href="#ref-koh2017understanding">(Koh et al.\n","2017)</a>). The idea is applicable to other cases, although no\n","theoretical guarantees are provided&#xA0;<a href="#ref-bourtoule2021machine">(Bourtoule et al. 2021)</a>.</p>\n","<p>More precisely, certified removal mechanisms mainly accommodate\n","linear models that minimize a regularized empirical risk, i.e., the\n","sum of a convex loss function that measures the distance between the\n","predicted and expected values, plus a regularization term&#xA0;<a href="#ref-marchant2022hard">(Marchant, Rubinstein, and Alfeld\n","2022)</a>. However, one has to rely on a customized learning\n","algorithm that optimizes a perturbed version of the regularized\n","empirical risk, where the added noise is drawn from a standard normal\n","distribution. 
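</p>

<p>A rough sketch of such a perturbed objective for a linear model (our illustration; the logistic loss and L2 regularizer are assumed choices, and <code>b</code> stands for the noise vector drawn from a normal distribution):</p>

```python
import numpy as np

def perturbed_risk(w, X, y, lam, b):
    # Regularized empirical risk (logistic loss, labels y in {-1, +1})
    # plus a random linear perturbation b.w; the noise later masks the
    # residue left by a gradient-based unlearning update.
    z = X @ w
    loss = np.mean(np.log1p(np.exp(-y * z)))
    return loss + 0.5 * lam * np.dot(w, w) + np.dot(b, w) / len(y)

rng = np.random.default_rng(0)
b = rng.normal(size=2)                      # standard normal noise
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, -1.0])
value = perturbed_risk(np.zeros(2), X, y, 0.1, b)  # scalar objective
```

<p>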
This normally distributed noise allows conventional convex\n","optimization techniques to solve the learning problem with perturbation.\n","As a result, an unlearning request can be served by computing the model\n","perturbation with respect to the regularized empirical risk on the remaining\n","data. The final trick is that this perturbation can be approximated by\n","the influence function&#xA0;<a href="#ref-koh2017understanding">(Koh et al. 2017)</a>, which is\n","computed by inverting the Hessian on the training data and the gradient of\n","the data to be forgotten&#xA0;<a href="#ref-marchant2022hard">(Marchant, Rubinstein, and Alfeld\n","2022)</a>. However, the error in the model parameters in such a\n","computation can be so large that the added noise cannot mask it.\n","Therefore, if the provided theoretical upper bound exceeds a certain\n","threshold, the unlearning algorithm resorts to retraining from\n","scratch&#xA0;<a href="#ref-marchant2022hard">(Marchant,\n","Rubinstein, and Alfeld 2022)</a>.</p>\n","<p>Following this idea, Neel et al.&#xA0;<a href="#ref-neel2021descent">(Neel, Roth, and Sharifi-Malvajerdi\n","2021)</a> provided further extensions, namely regularized perturbed\n","gradient descent and distributed perturbed gradient descent, to support\n","weakly convex losses and provide theoretical guarantees on\n","indistinguishability, accuracy, and unlearning times.</p>\n","<p>Ullah et al.&#xA0;<a href="#ref-ullah2021machine">(Ullah et al. 2021)</a> continued\n","studying machine unlearning in the context of SGD and streaming removal\n","requests. 
They define the notion of total variation stability for a\n","learning algorithm: <span\n","class=\"math display\">sup<sub><em>D</em>,&#x2006;<em>D</em>&#x2032;&#x2004;:&#x2004;|<em>D</em>\<em>D</em>&#x2032;|&#x2005;+&#x2005;|<em>D</em>&#x2032;\<em>D</em>|&#x2004;=&#x2004;1</sub><em>&#x394;</em>(<em>A</em>(<em>D</em>),<em>A</em>(<em>D</em>&#x2032;))&#x2004;&#x2264;&#x2004;<em>&#x3C1;</em></span>\n","where <span class=\"math inline\"><em>&#x394;</em>(.)</span> is the largest\n","possible difference between the probabilities that the two distributions\n","assign to the same event, i.e., the total variation distance&#xA0;<a href=\"#ref-verdu2014total\">(Verd&#xFA; 2014)</a>. This\n","is also a special case of the optimal transportation cost between two\n","probability distributions&#xA0;<a href=\"#ref-lei2019geometric\">(Lei et al. 2019)</a>. In other words,\n","a learning algorithm <span class=\"math inline\"><em>A</em>(.)</span> is\n","said to be <span class=\"math inline\"><em>&#x3C1;</em></span>-TV-stable if,\n","given any two training datasets <span\n","class=\"math inline\"><em>D</em></span> and <span\n","class=\"math inline\"><em>D</em>&#x2032;</span> that differ in exactly one\n","data item, the cost of transporting from the model distribution <span\n","class=\"math inline\"><em>A</em>(<em>D</em>)</span> to <span\n","class=\"math inline\"><em>A</em>(<em>D</em>&#x2032;)</span> is bounded by <span\n","class=\"math inline\"><em>&#x3C1;</em></span>. 
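As a concrete check of this definition, the total variation distance between two discrete output distributions equals half their L1 distance. A toy illustration (the probability vectors below are made up):

```python
def tv_distance(p, q):
    # Total variation distance between two discrete distributions:
    # the largest gap in probability they can assign to any event,
    # which equals half the L1 distance between the vectors.
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Hypothetical distributions over the models produced by A(D) and A(D').
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
rho = tv_distance(p, q)
assert abs(rho - 0.1) < 1e-12   # A would be 0.1-TV-stable for this pair
```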
For any <span\n","class=\"math inline\">1/<em>n</em>&#x2004;&#x2264;&#x2004;<em>&#x3C1;</em>&#x2004;&lt;&#x2004;&#x221E;</span>, Ullah et\n","al.&#xA0;<a href=\"#ref-ullah2021machine\">(Ullah et al.\n","2021)</a> proved that there exists an unlearning process that\n","satisfies exact unlearning at any time in the streaming removal request,\n","while the model accuracy and the unlearning time are bounded w.r.t.\n","<span class=\"math inline\"><em>&#x3C1;</em></span>.</p>\n","<p><strong>Statistical Query Learning.</strong> Statistical query\n","learning is a form of machine learning that trains models by querying\n","aggregate statistics over the training data rather than the individual\n","data items&#xA0;<a href=\"#ref-cao2015towards\">(Y. Cao and Yang\n","2015)</a>. In this form, a data sample can be forgotten efficiently\n","by recomputing the statistics over the remaining data&#xA0;<a href=\"#ref-bourtoule2021machine\">(Bourtoule et al.\n","2021)</a>. More precisely, statistical query learning assumes that\n","many learning algorithms can be represented as a sum of some\n","efficiently computable transformations, called statistical queries&#xA0;<a href=\"#ref-kearns1998efficient\">(Kearns 1998)</a>.\n","These statistical queries are basically requests to an oracle (e.g., a\n","ground truth) to estimate a statistical function over all training data.\n","Cao and Yang&#xA0;<a href=\"#ref-cao2015towards\">(Y. Cao\n","and Yang 2015)</a> showed that this formulation can generalize many\n","algorithms for machine learning, such as the Chi-square test, naive\n","Bayes, and linear regression. For example, in naive Bayes, these\n","statistical queries are indicator functions that return 1 when the\n","output is a target label and zero otherwise&#xA0;<a href=\"#ref-cao2015towards\">(Y. Cao and Yang 2015)</a>. In the\n","unlearning process, these queries are simply recomputed over the\n","remaining data. 
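The naive Bayes case can be sketched with cached count statistics: forgetting a sample just decrements the counts, with no pass over the remaining data. This is a toy illustration of the idea, not Cao and Yang's implementation; the data and helper names are made up:

```python
from collections import Counter

# Toy naive Bayes over binary features, stored as summation statistics
# (counts). Unlearning decrements the cached counts and re-derives the
# probabilities from them.
data = [((1, 0), "spam"), ((1, 1), "spam"), ((0, 1), "ham"), ((0, 0), "ham")]

class_counts = Counter(label for _, label in data)
feat_counts = Counter((label, i, v) for feats, label in data
                      for i, v in enumerate(feats))

def cond_prob(label, i, v):
    # Pr(feature_i = v | class = label), derived from the cached counts.
    return feat_counts[(label, i, v)] / class_counts[label]

def forget(sample):
    # Unlearning = subtracting the sample's contribution to each statistic.
    feats, label = sample
    class_counts[label] -= 1
    for i, v in enumerate(feats):
        feat_counts[(label, i, v)] -= 1

forget(data[0])                            # unlearn ((1, 0), "spam")
assert class_counts["spam"] == 1
assert feat_counts[("spam", 0, 1)] == 1    # only the (1, 1) spam sample remains
assert cond_prob("spam", 1, 1) == 1.0
```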
The approach is efficient as these statistical functions\n","are computationally efficient in the first place. Moreover, statistical\n","query learning also supports adaptive statistical queries, which are\n","computed based on the prior state of the learning models, including\n","k-means, SVM, and gradient descent. Although the unlearning\n","update now leaves the model unconverged, only a few learning\n","iterations (adaptive statistical queries) are needed since the model\n","starts from an almost-converged state. Moreover, if the old results of\n","the summations are cached, say, via dynamic programming, then the\n","speedup might be even higher.</p>\n","<p>The limitation of this approach is that it does not scale to\n","complex models such as deep neural networks. Indeed, in complex models,\n","the number of statistical queries could become exponentially large&#xA0;<a href=\"#ref-bourtoule2021machine\">(Bourtoule et al.\n","2021)</a>, making both the unlearning and relearning steps less\n","efficient.</p>\n","<p>In general, statistical query learning supports item removal and can\n","be partially applied to stream removal&#xA0;<a href=\"#ref-gupta2021adaptive\">(Gupta et al. 2021)</a> as well,\n","although the streaming updates to the summations could be unbounded. It\n","supports exact unlearning when the statistical queries are non-adaptive,\n","but only approximate unlearning when they are adaptive. It also partially supports zero-shot\n","unlearning, because only the statistics over the data need to be\n","accessed, not the individual training data items.</p>\n","<p><strong>Decremental Learning.</strong> Decremental learning\n","algorithms were originally designed to remove redundant samples and\n","reduce the training load on the processor for support vector machines\n","(SVM)&#xA0;<a href=\"#ref-chen2019novel cauwenberghs2000incremental tveit2003multicategory tveit2003incremental romero2007incremental duan2007decremental\">(Y.\n","Chen et al. 2019; Cauwenberghs et al. 
2000; Tveit et al. 2003; Tveit,\n","Hetland, and Engum 2003; Romero, Barrio, and Belanche 2007; Duan et al.\n","2007)</a> and linear classification&#xA0;<a href=\"#ref-karasuyama2009multiple karasuyama2010multiple tsai2014incremental\">(Karasuyama\n","and Takeuchi 2009, 2010; Tsai, Lin, and Lin 2014)</a>. As such, they\n","focus on accuracy rather than the completeness of the machine\n","unlearning.</p>\n","<p>Ginart et al.&#xA0;<a href=\"#ref-ginart2019making\">(Ginart et al. 2019)</a> developed\n","decremental learning solutions for <span\n","class=\"math inline\"><em>k</em></span>-means clustering based on\n","quantization and data partitioning. The idea of quantization is to ensure\n","that small changes in the data do not change the model. Quantization\n","helps to avoid unnecessary unlearning so that accuracy is not\n","catastrophically degraded. However, it is only applicable when there are\n","few model parameters compared to the size of the dataset. The idea\n","behind the data partitioning is to restrict the data&#x2019;s influence on the\n","model parameters to only a few specific data partitions. This process\n","helps to pinpoint the effects of unlearning to a few partitions. But,\n","again, the approach is only effective with a small number of features\n","compared to the size of the dataset. Notably, data privacy and data\n","deletion are not equivalent&#xA0;<a href=\"#ref-ginart2019making\">(Ginart et al. 2019)</a>. Data privacy\n","does not have to ensure data deletion (e.g., differential privacy), and\n","data deletion does not have to ensure data privacy.</p>\n","<p><strong>Knowledge Adaptation.</strong> Knowledge adaptation\n","selectively removes to-be-forgotten data samples&#xA0;<a href=\"#ref-chundawat2022can\">(Chundawat et al. 2022a)</a>. In this\n","approach&#xA0;<a href=\"#ref-chundawat2022can\">(Chundawat\n","et al. 
2022a)</a>, one trains two neural networks as teachers\n","(competent and incompetent) and one neural network as a student. The\n","competent teacher is trained on the complete dataset, while the\n","incompetent teacher is randomly initialised. The student is initialised\n","with the competent teacher&#x2019;s model parameters. The student is trained to\n","mimic both teachers via a loss function that combines the\n","KL divergences between the student&#x2019;s outputs and those of each of the\n","two teachers. Notably, the competent teacher guides the student on the retained data,\n","while the incompetent teacher deals with the forgotten data.</p>\n","<p>Beyond Chundawat et al.&#xA0;<a href=\"#ref-chundawat2022can\">(Chundawat et al. 2022a)</a>, machine\n","learning models have been adapted quickly and accurately by\n","reconstructing past gradients with knowledge-adaptation priors\n","in&#xA0;<a href=\"#ref-khan2021knowledge\">(Khan et al.\n","2021)</a>. Ideas similar to knowledge-adaptation priors were also\n","investigated in&#xA0;<a href=\"#ref-ginart2019making wu2020priu\">(Ginart et al. 2019; Y. Wu,\n","Tannen, and Davidson 2020)</a>. In general, knowledge adaptation is\n","applicable to a wide range of unlearning requests and scenarios.\n","However, it is difficult to provide a theoretical guarantee for this\n","approach.</p>\n","<p><strong>MCMC Unlearning (Parameter Sampling).</strong> Sampling-based\n","machine unlearning has also been suggested as a way to train a standard\n","machine learning model to forget data samples from the training\n","data&#xA0;<a href=\"#ref-nguyen2022markov\">(Q. P. Nguyen\n","et al. 2022)</a>. The idea is to sample the distribution of model\n","parameters using Markov chain Monte Carlo (MCMC). It is assumed that the\n","forgetting set is often significantly smaller than the training data\n","(otherwise retraining might be a better solution). 
Thus, the parameter\n","distribution <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>w</em><sub><em>r</em></sub>)</span>\n","of the retrained models should not differ much from that of the original\n","model <span class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>)</span>.\n","In other words, the posterior density <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>w</em><sub><em>r</em></sub>|<em>D</em>)</span>\n","should be sufficiently large for sampling&#xA0;<a href=\"#ref-nguyen2022markov\">(Q. P. Nguyen et al. 2022)</a>. More\n","precisely, the posterior distribution from the retrained parameters can\n","be defined as: <span\n","class=\"math display\"><em>P</em><em>r</em>(<em>w</em><sub><em>r</em></sub>|<em>D</em>)&#x2004;&#x2248;&#x2004;<em>P</em><em>r</em>(<em>w</em>|<em>D</em>)&#x2004;&#x221D;&#x2004;<em>P</em><em>r</em>(<em>D</em>|<em>w</em>)<em>P</em><em>r</em>(<em>w</em>)</span>\n","Here, the prior distribution <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>)</span> is often\n","available from the learning algorithm, which means the stochasticity of\n","learning via sampling can be estimated. The likelihood <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>D</em>|<em>w</em>)</span>\n","is the prediction output of the model itself, which is also available\n","after training. From this relation, we only\n","know that the density function of <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>|<em>D</em>)</span>\n","is proportional to a function <span\n","class=\"math inline\"><em>f</em>(<em>w</em>)&#x2004;=&#x2004;<em>P</em><em>r</em>(<em>D</em>|<em>w</em>)<em>P</em><em>r</em>(<em>w</em>)</span>,\n","which means <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>|<em>D</em>)</span>\n","cannot be directly sampled. 
This is where MCMC comes into play, as it\n","can still generate the next samples using a proposal density <span\n","class=\"math inline\"><em>g</em>(<em>w</em>&#x2032;|<em>w</em>)</span>&#xA0;<a href=\"#ref-nguyen2022markov\">(Q. P. Nguyen et al.\n","2022)</a>. In practice, <span\n","class=\"math inline\"><em>g</em>(<em>w</em>&#x2032;|<em>w</em>)</span> is assumed\n","to be a Gaussian distribution centered on the current sample (the\n","sampling process can be initialized with the original model).</p>\n","<p>As a result, a candidate set of model parameters is constructed by\n","sampling from <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>w</em><sub><em>r</em></sub>|<em>D</em>)</span>,\n","and the unlearning output is\n","calculated by simply maximizing the posterior probability <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>|<em>D</em><sub><em>r</em></sub>)</span>,\n","i.e.: <span\n","class=\"math display\"><em>w</em><sub><em>r</em></sub>&#x2004;=&#x2004;arg&#x2006;max<sub><em>w</em></sub><em>P</em><em>r</em>(<em>w</em>|<em>D</em><sub><em>r</em></sub>)</span>\n","The benefit of such sampling-based unlearning is that no access to the\n","forgetting set is required.</p>\n","\n","| **Paper Title** | **Year** | **Author** | **Venue** | **Model** | **Code** | **Type** |\n","| --------------- | :----: | ---- | :----: | :----: | :----: | :----: |\n","| [Towards Adversarial Evaluations for Inexact Machine Unlearning](https://arxiv.org/abs/2201.06640) | 2023 | Goel et al. | _arXiv_ | EU-k, CF-k | [[Code]](https://github.com/shash42/Evaluating-Inexact-Unlearning) | - |\n","| [On the Trade-Off between Actionable Explanations and the Right to be Forgotten](https://openreview.net/pdf?id=HWt4BBZjVW) | 2023 | Pawelczyk et al. | _arXiv_ | - | - |  |\n","| [Towards Unbounded Machine Unlearning](https://arxiv.org/pdf/2302.09880) | 2023 | Kurmanji et al. 
| _arXiv_ | SCRUB | [[Code]](https://github.com/Meghdad92/SCRUB) | approximate unlearning |\n","| [Netflix and Forget: Efficient and Exact Machine Unlearning from Bi-linear Recommendations](https://arxiv.org/abs/2302.06676) | 2023 | Xu et al. | _arXiv_ | Unlearn-ALS | - | Exact Unlearning |\n","| [To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods](https://arxiv.org/abs/2302.03350) | 2023 | Zhang et al. | _arXiv_ | - | [[Code]](https://github.com/cleverhans-lab/machine-unlearning) | |\n","| [Sequential Informed Federated Unlearning: Efficient and Provable Client Unlearning in Federated Optimization](https://arxiv.org/abs/2211.11656) | 2022 | Fraboni et al. | _arXiv_ | SIFU | - | |\n","| [Certified Data Removal in Sum-Product Networks](https://arxiv.org/abs/2210.01451) | 2022 | Becker and Liebig | _ICKG_ | UNLEARNSPN | [[Code]](https://github.com/ROYALBEFF/UnlearnSPN) | Certified Removal Mechanisms |\n","| [Learning with Recoverable Forgetting](https://arxiv.org/abs/2207.08224) | 2022 | Ye et al.  | _ECCV_ | LIRF | - |  |\n","| [Continual Learning and Private Unlearning](https://arxiv.org/abs/2203.12817) | 2022 | Liu et al. | _CoLLAs_ | CLPU | [[Code]](https://github.com/Cranial-XIX/Continual-Learning-Private-Unlearning) | |\n","| [Verifiable and Provably Secure Machine Unlearning](https://arxiv.org/abs/2210.09126) | 2022 | Eisenhofer et al. | _arXiv_ | - | [[Code]](https://github.com/cleverhans-lab/verifiable-unlearning) |  Certified Removal Mechanisms |\n","| [VeriFi: Towards Verifiable Federated Unlearning](https://arxiv.org/abs/2205.12709) | 2022 | Gao et al. | _arXiv_ | VERIFI | - | Certified Removal Mechanisms |\n","| [FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information](https://arxiv.org/abs/2210.10936) | 2022 | Cao et al. | _S&P_ | FedRecover | - | recovery method |\n","| [Fast Yet Effective Machine Unlearning](https://arxiv.org/abs/2111.08947) | 2022 | Tarun et al. 
| _arXiv_ | UNSIR | - |  |\n","| [Membership Inference via Backdooring](https://arxiv.org/abs/2206.04823) | 2022 | Hu et al.  | _IJCAI_ | MIB | [[Code]](https://github.com/HongshengHu/membership-inference-via-backdooring) | Membership Inferencing |\n","| [Forget Unlearning: Towards True Data-Deletion in Machine Learning](https://arxiv.org/abs/2210.08911) | 2022 | Chourasia et al. | _ICLR_ | - | - | noisy gradient descent |\n","| [Zero-Shot Machine Unlearning](https://arxiv.org/abs/2201.05629) | 2022 | Chundawat et al. | _arXiv_ | - | - |  |\n","| [Efficient Attribute Unlearning: Towards Selective Removal of Input Attributes from Feature Representations](https://arxiv.org/abs/2202.13295) | 2022 | Guo et al. | _arXiv_ | attribute unlearning | - |  |\n","| [Few-Shot Unlearning](https://download.huan-zhang.com/events/srml2022/accepted/yoon22fewshot.pdf) | 2022 | Yoon et al.   | _ICLR_ | - | - |  |\n","| [Federated Unlearning: How to Efficiently Erase a Client in FL?](https://arxiv.org/abs/2207.05521) | 2022 | Halimi et al. | _UpML Workshop_ | - | - | federated learning |\n","| [Machine Unlearning Method Based On Projection Residual](https://arxiv.org/abs/2209.15276) | 2022 | Cao et al. | _DSAA_ | - | - |  Projection Residual Method |\n","| [Hard to Forget: Poisoning Attacks on Certified Machine Unlearning](https://ojs.aaai.org/index.php/AAAI/article/view/20736) | 2022 | Marchant et al. | _AAAI_ | - | [[Code]](https://github.com/ngmarchant/attack-unlearning) | Certified Removal Mechanisms |\n","| [Athena: Probabilistic Verification of Machine Unlearning](https://web.archive.org/web/20220721061150id_/https://petsymposium.org/popets/2022/popets-2022-0072.pdf) | 2022 | Sommer et al. | _PoPETs_ | ATHENA | - | |\n","| [FP2-MIA: A Membership Inference Attack Free of Posterior Probability in Machine Unlearning](https://link.springer.com/chapter/10.1007/978-3-031-20917-8_12) | 2022 | Lu et al. 
| _ProvSec_ | FP2-MIA | - | inference attack |\n","| [Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning](https://arxiv.org/abs/2202.03460) | 2022 | Gao et al. | _PETS_ | - | - |  |\n","| [Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization](https://openreview.net/pdf?id=ue4gP8ZKiWb) | 2022 | Zhang et al.   | _NeurIPS_ | PCMU | - | Certified Removal Mechanisms |\n","| [The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining](https://arxiv.org/abs/2203.07320) | 2022 | Liu et al. | _INFOCOM_ | - | [[Code]](https://github.com/yiliucs/federated-unlearning) |  |\n","| [Backdoor Defense with Machine Unlearning](https://arxiv.org/abs/2201.09538) | 2022 | Liu et al. | _INFOCOM_ | BAERASER | - | Backdoor defense |\n","| [Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten](https://dl.acm.org/doi/abs/10.1145/3488932.3517406) | 2022 | Nguyen et al. | _ASIA CCS_ | MCU | - | MCMC Unlearning  |\n","| [Federated Unlearning for On-Device Recommendation](https://arxiv.org/abs/2210.10958) | 2022 | Yuan et al. | _arXiv_ | - | - |  |\n","| [Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher](https://arxiv.org/abs/2205.08096) | 2022 | Chundawat et al. | _arXiv_ | - | - | Knowledge Adaptation |\n","| [ Efficient Two-Stage Model Retraining for Machine Unlearning](https://openaccess.thecvf.com/content/CVPR2022W/HCIS/html/Kim_Efficient_Two-Stage_Model_Retraining_for_Machine_Unlearning_CVPRW_2022_paper.html) | 2022 | Kim and Woo | _CVPR Workshop_ | - | - |  |\n","| [Learn to Forget: Machine Unlearning Via Neuron Masking](https://ieeexplore.ieee.org/abstract/document/9844865?casa_token=_eowH3BTt1sAAAAA:X0uCpLxOwcFRNJHoo3AtA0ay4t075_cSptgTMznsjusnvgySq-rJe8GC285YhWG4Q0fUmP9Sodw0) | 2021 | Ma et al. 
| _IEEE_ | Forsaken | - | Mask Gradients |\n","| [Adaptive Machine Unlearning](https://proceedings.neurips.cc/paper/2021/hash/87f7ee4fdb57bdfd52179947211b7ebb-Abstract.html) | 2021 | Gupta et al. | _NeurIPS_ | - | [[Code]](https://github.com/ChrisWaites/adaptive-machine-unlearning) | Differential Privacy |\n","| [Descent-to-Delete: Gradient-Based Methods for Machine Unlearning](https://proceedings.mlr.press/v132/neel21a.html) | 2021 | Neel et al. | _ALT_ | - | - | Certified Removal Mechanisms |\n","| [Remember What You Want to Forget: Algorithms for Machine Unlearning](https://arxiv.org/abs/2103.03279) | 2021 | Sekhari et al. | _NeurIPS_ | - | - |  |\n","| [FedEraser: Enabling Efficient Client-Level Data Removal from Federated Learning Models](https://ieeexplore.ieee.org/abstract/document/9521274) | 2021 | Liu et al. | _IWQoS_ | FedEraser | - |  |\n","| [Federated Unlearning](https://arxiv.org/abs/2012.13891) | 2021 | Liu et al. | _IWQoS_ | FedEraser | [[Code]](https://www.dropbox.com/s/1lhx962axovbbom/FedEraser-Code.zip?dl=0) |  |\n","| [Machine Unlearning via Algorithmic Stability](https://proceedings.mlr.press/v134/ullah21a.html) | 2021 | Ullah et al. | _COLT_ | TV | - | Certified Removal Mechanisms |\n","| [EMA: Auditing Data Removal from Trained Models](https://link.springer.com/chapter/10.1007/978-3-030-87240-3_76) | 2021 | Huang et al. | _MICCAI_ | EMA | [[Code]](https://github.com/Hazelsuko07/EMA) | Certified Removal Mechanisms |\n","| [Knowledge-Adaptation Priors](https://proceedings.neurips.cc/paper/2021/hash/a4380923dd651c195b1631af7c829187-Abstract.html) | 2021 | Khan and Swaroop | _NeurIPS_ | K-prior | [[Code]](https://github.com/team-approx-bayes/kpriors) | Knowledge Adaptation |\n","| [PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models](https://dl.acm.org/doi/abs/10.1145/3318464.3380571) | 2020 | Wu et al. 
| _SIGMOD_ | PrIU | - | Knowledge Adaptation |\n","| [Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks](https://arxiv.org/abs/1911.04933) | 2020 | Golatkar et al. | _CVPR_ | - | - | Certified Removal Mechanisms |\n","| [Learn to Forget: User-Level Memorization Elimination in Federated Learning](https://www.researchgate.net/profile/Ximeng-Liu-5/publication/340134612_Learn_to_Forget_User-Level_Memorization_Elimination_in_Federated_Learning/links/5e849e64a6fdcca789e5f955/Learn-to-Forget-User-Level-Memorization-Elimination-in-Federated-Learning.pdf) | 2020 | Liu et al. | _arXiv_ | Forsaken | - |  |\n","| [Certified Data Removal from Machine Learning Models](https://proceedings.mlr.press/v119/guo20c.html) | 2020 | Guo et al. | _ICML_ | - | - | Certified Removal Mechanisms |\n","| [Class Clown: Data Redaction in Machine Unlearning at Enterprise Scale](https://arxiv.org/abs/2012.04699) | 2020 | Felps et al. | _arXiv_ | - | - | Decremental Learning |\n","| [A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine](https://link.springer.com/article/10.1007/s10586-018-1772-4) | 2019 | Chen et al. | _Cluster Computing_ | - | - | Decremental Learning  |\n","| [Making AI Forget You: Data Deletion in Machine Learning](https://papers.nips.cc/paper/2019/hash/cb79f8fa58b91d3af6c9c991f63962d3-Abstract.html) | 2019 | Ginart et al. | _NeurIPS_ | - | - | Decremental Learning  |\n","| [Lifelong Anomaly Detection Through Unlearning](https://dl.acm.org/doi/abs/10.1145/3319535.3363226) | 2019 | Du et al. | _CCS_ | - | - |  |\n","| [Learning Not to Learn: Training Deep Neural Networks With Biased Data](https://openaccess.thecvf.com/content_CVPR_2019/html/Kim_Learning_Not_to_Learn_Training_Deep_Neural_Networks_With_Biased_CVPR_2019_paper.html) | 2019 | Kim et al. 
| _CVPR_ | - | - |  |\n","| [Efficient Repair of Polluted Machine Learning Systems via Causal Unlearning](https://dl.acm.org/citation.cfm?id=3196517) | 2018 | Cao et al. | _ASIACCS_ | KARMA | [[Code]](https://github.com/CausalUnlearning/KARMA) |  |\n","| [Understanding Black-box Predictions via Influence Functions](https://proceedings.mlr.press/v70/koh17a.html) | 2017 | Koh et al. | _ICML_ | - | [[Code]](https://github.com/kohpangwei/influence-release) | Certified Removal Mechanisms |\n","| [Towards Making Systems Forget with Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/7163042) | 2015 | Cao and Yang | _S&P_ | - | - | Statistical Query Learning  |\n","| [Incremental and decremental training for linear classification](https://dl.acm.org/doi/10.1145/2623330.2623661) | 2014 | Tsai et al. | _KDD_ | - | [[Code]](https://www.csie.ntu.edu.tw/~cjlin/papers/ws/) | Decremental Learning  |\n","| [Multiple Incremental Decremental Learning of Support Vector Machines](https://dl.acm.org/doi/10.5555/2984093.2984196) | 2009 | Karasuyama et al. | _NIPS_ | - | - | Decremental Learning  |\n","| [Incremental and Decremental Learning for Linear Support Vector Machines](https://dl.acm.org/doi/10.5555/1776814.1776838) | 2007 | Romero et al. | _ICANN_ | - | - | Decremental Learning  |\n","| [Decremental Learning Algorithms for Nonlinear Lagrangian and Least Squares Support Vector Machines](https://www.semanticscholar.org/paper/Decremental-Learning-Algorithms-for-Nonlinear-and-Duan-Li/312c677f0882d0dfd60bfd77346588f52aefd10f) | 2007 | Duan et al. | _OSB_ | - | - | Decremental Learning  |\n","| [Multicategory Incremental Proximal Support Vector Classifiers](https://link.springer.com/chapter/10.1007/978-3-540-45224-9_54) | 2003 | Tveit et al. 
| _KES_ | - | - | Decremental Learning  |\n","| [Incremental and Decremental Proximal Support Vector Classification using Decay Coefficients](https://link.springer.com/chapter/10.1007/978-3-540-45228-7_42) | 2003 | Tveit et al. | _DaWak_ | - | - | Decremental Learning  |\n","| [Incremental and Decremental Support Vector Machine Learning](https://dl.acm.org/doi/10.5555/3008751.3008808) | 2000 | Cauwenberghs et al. | _NeurIPS_ | - | - | Decremental Learning  |\n","----------"]},{"cell_type":"markdown","id":"13a76a2f","metadata":{"papermill":{"duration":0.008677,"end_time":"2023-07-12T05:38:47.217888","exception":false,"start_time":"2023-07-12T05:38:47.209211","status":"completed"},"tags":[]},"source":["<h2 id=\"model-intrinsic-approaches\">4.2. Model-Intrinsic Approaches</h2>\n","\n","<p>The model-intrinsic approaches include unlearning methods designed\n","for a specific type of model. Although they are model-intrinsic, their\n","applications are not necessarily narrow, as many machine learning models\n","can share the same type.</p>\n","\n","\n","<div class=\"figure*\">\n","<figure>\n","<img src=\"https://raw.githubusercontent.com/tamlhp/awesome-machine-unlearning/main/figs/model-intrinsic.png\" alt=\"https://arxiv.org/abs/2209.02299\" style=\"max-width: 70%;\"/>\n","</figure>\n","</div>\n","\n","\n","<p><strong>Unlearning for softmax classifiers (logit-based\n","classifiers).</strong> Softmax (or logit-based) classifiers are\n","classification models <span\n","class=\"math inline\"><em>M</em>&#x2004;:&#x2004;&#x1D4B5;&#x2004;&#x2192;&#x2004;&#x211D;<sup><em>K</em></sup></span> that\n","output a vector of logits <span\n","class=\"math inline\"><em>l</em>&#x2004;&#x2208;&#x2004;&#x211D;<sup><em>K</em></sup></span>, where\n","<span class=\"math inline\"><em>K</em></span> is the number of classes,\n","for each data sample <span class=\"math inline\"><em>x</em>&#x2004;&#x2208;&#x2004;&#x1D4B5;</span>.\n","The core task of <span class=\"math 
<em>M</em>(<em>x</em>)</span>">
inline\"><em>M</em>(<em>x</em>)</span>\n","is to estimate the probability distribution <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>X</em>,<em>Y</em>)</span>,\n","where <span class=\"math inline\"><em>X</em></span> is the random variable\n","in <span class=\"math inline\">&#x1D4B5;</span>, and <span\n","class=\"math inline\"><em>Y</em></span> is the random variable in <span\n","class=\"math inline\">1,&#x2006;&#x2026;,&#x2006;<em>K</em></span>, such that: <span\n","class=\"math display\"><em>P</em><em>r</em>(<em>Y</em>=<em>i</em>|<em>X</em>=<em>x</em>)&#x2004;&#x2248;&#x2004;<em>&#x3C3;</em>(<em>l</em><sub><em>i</em></sub>)</span>\n","Here, <span class=\"math inline\">$\\sigma(l_i) =\n","\\frac{\\exp(l_i)}{\\sum_{j=1}^{K} \\exp(l_j)}$</span> is the softmax\n","function. This formulation applies to logistic regression and deep\n","neural networks with a densely connected output layer using softmax\n","activations&#xA0;<a href=\"#ref-baumhauer2020machine\">(Baumhauer, Sch&#xF6;ttle, and Zeppelzauer\n","2020)</a>. Baumhauer et al.&#xA0;<a href=\"#ref-baumhauer2020machine\">(Baumhauer, Sch&#xF6;ttle, and Zeppelzauer\n","2020)</a> proposed an unlearning method for softmax classifiers based\n","on a linear filtration operator to proportionally shift the\n","classification of the to-be-forgotten class samples to other classes.\n","However, this approach only works for class removal.</p>\n","<p><strong>Unlearning for linear models.</strong> Izzo et al.&#xA0;<a href=\"#ref-izzo2021approximate\">(Izzo et al.\n","2021)</a> proposed an approximate unlearning method for linear and\n","logistic models based on influence functions. They approximated a\n","Hessian matrix computation with a projective residual update&#xA0;<a href=\"#ref-izzo2021approximate cao2022machine\">(Izzo\n","et al. 2021; Z. Cao et al. 2022)</a> that combines gradient methods\n","with synthetic data. It is suitable for forgetting small groups of\n","points out of a learned model. 
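<em>M</em>(<em>x</em>)</span>">
For intuition, an influence-function update is exact for ridge regression: one Newton step on the remaining-data risk recovers the retrained model. A minimal numpy sketch under made-up data (not Izzo et al.'s projective residual update, and without the certified-removal noise):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 50, 3, 0.1
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

def fit(X, y):
    # Minimizer of 1/2 ||Xw - y||^2 + lam/2 ||w||^2 (regularized empirical risk).
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = fit(X, y)

f = 7                          # index of the sample to forget (arbitrary)
keep = np.arange(n) != f
Xr, yr = X[keep], y[keep]

# One Newton step on the remaining-data risk, starting from the full model.
H = Xr.T @ Xr + lam * np.eye(d)        # Hessian of the remaining-data risk
g = Xr.T @ (Xr @ w - yr) + lam * w     # its gradient at the full-data model
w_unlearned = w - np.linalg.solve(H, g)

# Exact here because the loss is quadratic; for general convex losses the
# residual gradient norm bounds the error that added noise must mask.
assert np.allclose(w_unlearned, fit(Xr, yr))
```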
Some other studies consider an online\n","setting for machine unlearning (aka online data deletion)&#xA0;<a href=\"#ref-ginart2019making li2020online\">(Ginart et\n","al. 2019; Li, Wang, and Cheng 2021)</a>, in which the removal request\n","is a sequence of entries that indicates which data item is to be\n","unlearned. In general, this setting is more challenging than the normal\n","setting because indistinguishability must hold for any entry and for the\n","end of the deletion sequence. The goal is to achieve a lower bound on\n","amortized computation time&#xA0;<a href=\"#ref-ginart2019making li2020online\">(Ginart et al. 2019; Li,\n","Wang, and Cheng 2021)</a>.</p>\n","<p>Li et al.&#xA0;<a href=\"#ref-li2020online\">(Li, Wang,\n","and Cheng 2021)</a> formulated a special case of the online setting\n","where data is only accessible for a limited time, so there is no full\n","training process in the first place. More precisely, the system is\n","allowed a constant memory to store historical data or a data sketch, and\n","it has to make predictions within a bounded period of time. Although the\n","data to be forgotten can be unlearned from a model on-the-fly using a\n","regret scheme on the memory, this particular unlearning process is only\n","applicable to ordinary linear regression&#xA0;<a href=\"#ref-li2020online\">(Li, Wang, and Cheng 2021)</a>.</p>\n","<p><strong>Unlearning for Tree-based Models.</strong> Tree-based models\n","are classification techniques that partition the feature space\n","recursively, where the features and cut-off thresholds to split the data\n","are determined by some criterion, such as information gain&#xA0;<a href=\"#ref-schelter2021hedgecut\">(Schelter,\n","Grafberger, and Dunning 2021)</a>. There is a class of tree-based\n","models, called extremely randomized trees&#xA0;<a href=\"#ref-geurts2006extremely\">(Geurts, Ernst, et al. 2006)</a>,\n","that are built as an ensemble of decision trees. 
These are very\n","efficient because the candidate set of split features and cut-off\n","thresholds are randomly generated. The best candidate is selected by a\n","reduction in Gini impurity, which avoids the heavy computation of\n","logarithms.</p>\n","<p>Schelter et al.&#xA0;<a href=\"#ref-schelter2021hedgecut\">(Schelter, Grafberger, and Dunning\n","2021)</a> proposed an unlearning solution for extremely randomized\n","trees by measuring the robustness of the split decisions. A split\n","decision is robust if removing <span\n","class=\"math inline\"><em>k</em></span> data items does not reverse that\n","split. Note that <span class=\"math inline\"><em>k</em></span> can be\n","bounded, and it is often small, as typically only one in ten thousand\n","users wants to remove their data at a time&#xA0;<a href=\"#ref-schelter2021hedgecut\">(Schelter, Grafberger, and Dunning\n","2021)</a>. The learning algorithm is redesigned such that most of the\n","splits, especially the high-level ones, are robust. For the non-robust\n","splits, all subtree variants are grown from all split candidates and\n","maintained until a removal request would revise that split. When that\n","happens, the split is switched to its variant with the higher Gini gain. As\n","a result, the unlearning process involves recalculating the Gini gains\n","and updating the splits if necessary.</p>\n","<p>One limitation of this approach is that if the set to be forgotten is\n","too large, there might be many non-robust splits. This would lead to\n","high storage costs for the subtree variants. However, it does give a\n","parameterized choice between unlearning and retraining. If there are\n","many removal requests, retraining might be the best option asymptotically.\n","Alternatively, one might limit the maximum number of removal requests to\n","be processed at a time. 
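The recompute-and-switch step can be sketched as follows. This is a toy illustration of the mechanism, not HedgeCut's implementation; the node data and the two candidate splits are made up:

```python
def gini(counts):
    # Gini impurity of a node given its per-class counts.
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

def gini_gain(left, right):
    # Impurity reduction of splitting the parent into (left, right).
    parent = [l + r for l, r in zip(left, right)]
    n = sum(parent)
    return gini(parent) - sum(left) / n * gini(left) - sum(right) / n * gini(right)

# Each item records its class and the side it falls on under each
# candidate split of the node (hypothetical toy data).
items = [(0, {"A": "L", "B": "L"}), (0, {"A": "L", "B": "R"}),
         (1, {"A": "R", "B": "L"}), (1, {"A": "R", "B": "R"}),
         (1, {"A": "R", "B": "R"})]

def counts(items, split):
    left, right = [0, 0], [0, 0]
    for cls, sides in items:
        (left if sides[split] == "L" else right)[cls] += 1
    return left, right

def best_split(items):
    return max(("A", "B"), key=lambda s: gini_gain(*counts(items, s)))

assert best_split(items) == "A"   # split kept while it remains the best
items.pop(1)                      # a removal request deletes one item
best_after = best_split(items)    # gains recomputed; switch if a variant wins
```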
Moreover, tree-based models have highly competitive performance for many predictive applications&#xA0;<a href=\"#ref-schelter2021hedgecut\">(Schelter, Grafberger, and Dunning 2021)</a>.</p>
<p><strong>Unlearning for Bayesian Models.</strong> Bayesian models are probabilistic models that approximate a posterior distribution&#xA0;<a href=\"#ref-fu2022knowledge fu2021bayesian jose2021unified nguyen2020variational\">(Fu, He, et al. 2022; Fu et al. 2021; Jose and Simeone 2021; Q. P. Nguyen, Low, and Jaillet 2020)</a>. Also known as Bayesian inference, this process is particularly useful when a loss function is not well-defined or does not even exist. Bayesian models cover a wide range of machine learning algorithms, such as Bayesian neural networks, probabilistic graphical models, generative models, topic modeling, and probabilistic matrix factorization&#xA0;<a href=\"#ref-zhang2020deep roth2018bayesian pearce2020uncertainty\">(H. Zhang et al. 2020; Roth and Pernkopf 2018; Pearce, Leibfried, and Brintrup 2020)</a>.</p>
<p>Unlearning for Bayesian models requires special treatment, as the training already involves optimizing the posterior distribution of the model&#x2019;s parameters. It also often involves optimizing the Kullback-Leibler (KL) divergence between a prior belief and the posterior distribution&#xA0;<a href=\"#ref-nguyen2020variational\">(Q. P. Nguyen, Low, and Jaillet 2020)</a>. Nguyen et al.&#xA0;<a href=\"#ref-nguyen2020variational\">(Q. P.
Nguyen, Low, and Jaillet 2020)</a> proposed the notion of <em>exact Bayesian unlearning</em>:
<span class=\"math display\"><em>P</em><em>r</em>(<em>w</em>|<em>D</em><sub><em>r</em></sub>)&#x2004;=&#x2004;<em>P</em><em>r</em>(<em>w</em>|<em>D</em>)<em>P</em><em>r</em>(<em>D</em><sub><em>f</em></sub>|<em>D</em><sub><em>r</em></sub>)/<em>P</em><em>r</em>(<em>D</em><sub><em>f</em></sub>|<em>w</em>)&#x2004;&#x221D;&#x2004;<em>P</em><em>r</em>(<em>w</em>|<em>D</em>)/<em>P</em><em>r</em>(<em>D</em><sub><em>f</em></sub>|<em>w</em>)</span>
where <span class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>|<em>D</em><sub><em>r</em></sub>)</span> is the distribution of a retrained model (as if it were trained only on <span class=\"math inline\"><em>D</em><sub><em>r</em></sub></span>). However, the posterior distribution <span class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>|<em>D</em><sub><em>r</em></sub>)</span> can only be sampled directly when the model parameters are discrete-valued (quantized) or the prior is conjugate&#xA0;<a href=\"#ref-nguyen2020variational\">(Q. P. Nguyen, Low, and Jaillet 2020)</a>. For non-conjugate priors, Nguyen et al.&#xA0;<a href=\"#ref-nguyen2020variational\">(Q. P. Nguyen, Low, and Jaillet 2020)</a> proved that <span class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>|<em>D</em><sub><em>r</em></sub>)</span> can be approximated by minimizing the KL divergence between <span class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>|<em>D</em>)</span> and <span class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>|<em>D</em><sub><em>r</em></sub>)</span>. Since <span class=\"math inline\"><em>P</em><em>r</em>(<em>w</em>|<em>D</em>)</span> is the original model&#x2019;s parameter distribution, this approximation prevents catastrophic unlearning.
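When the prior is conjugate, the relation above can be applied in closed form. A minimal sketch with a Beta-Bernoulli model (an illustrative choice of model, not taken from the paper): dividing the full posterior by the likelihood of the forgotten flips recovers exactly the posterior of retraining on the remaining data.

```python
def beta_posterior(alpha, beta, heads, tails):
    """Conjugate update: Beta(alpha, beta) prior + Bernoulli observations -> Beta posterior."""
    return alpha + heads, beta + tails

# Prior, and a dataset D of coin flips split into retained D_r and forgotten D_f.
alpha0, beta0 = 2.0, 2.0
D_r = dict(heads=30, tails=18)
D_f = dict(heads=5, tails=7)

# Train on all of D.
full = beta_posterior(alpha0, beta0,
                      D_r["heads"] + D_f["heads"],
                      D_r["tails"] + D_f["tails"])

# Unlearn D_f: dividing Pr(w|D) by Pr(D_f|w) simply subtracts D_f's counts.
unlearned = (full[0] - D_f["heads"], full[1] - D_f["tails"])

# Retrain from scratch on D_r only.
retrained = beta_posterior(alpha0, beta0, D_r["heads"], D_r["tails"])

assert unlearned == retrained  # exact Bayesian unlearning: the two posteriors coincide
```

For non-conjugate priors no such closed form exists, which is why the variational (KL-minimizing) approximation above is needed.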
Otherwise, the unlearned model could perform significantly worse than the retrained model in terms of accuracy.</p>
<p>A notion of certified Bayesian unlearning has also been studied, where the KL divergence between the unlearned model and the retrained model is bounded&#xA0;<a href=\"#ref-fu2022knowledge fu2021bayesian jose2021unified\">(Fu, He, et al. 2022; Fu et al. 2021; Jose and Simeone 2021)</a>:
<span class=\"math display\"><em>K</em><em>L</em>(<em>P</em><em>r</em>(<em>A</em>(<em>D</em><sub><em>r</em></sub>)),&#x1D53C;<sub><em>A</em>(<em>D</em>)</sub><em>P</em><em>r</em>(<em>U</em>(<em>D</em>,<em>D</em><sub><em>f</em></sub>,<em>A</em>(<em>D</em>))))&#x2004;&#x2264;&#x2004;<em>&#x3F5;</em></span>
Here, the result of the unlearning process is an expectation over the parameter distribution of the original model <span class=\"math inline\"><em>A</em>(<em>D</em>)&#x2004;&#x223C;&#x2004;<em>P</em><em>r</em>(<em>w</em>|<em>D</em>)</span>. This certification can be achieved for some energy functions when formulating the evidence lower bound (ELBO) in Bayesian models&#xA0;<a href=\"#ref-fu2022knowledge fu2021bayesian jose2021unified\">(Fu, He, et al. 2022; Fu et al. 2021; Jose and Simeone 2021)</a>.</p>
<p><strong>Unlearning for DNN-based Models.</strong> Deep neural networks are advanced models that automatically learn features from data. As a result, it is very difficult to pinpoint the exact model update for each data item&#xA0;<a href=\"#ref-golatkar2020forgetting golatkar2020eternal mehta2022deep he2021deepobliviate goyal2021revisiting\">(Golatkar, Achille, and Soatto 2020b, 2020a; Mehta et al. 2022; He et al. 2021; Goyal, Hassija, and Albuquerque 2021)</a>. Fortunately, deep neural networks consist of multiple layers.
For layers with convex activation functions, existing unlearning methods such as certified removal mechanisms can be applied&#xA0;<a href=\"#ref-GuoGHM20 neel2021descent sekhari2021remember cao2022machine\">(C. Guo et al. 2020; Neel, Roth, and Sharifi-Malvajerdi 2021; Sekhari et al. 2021; Z. Cao et al. 2022)</a>. For non-convex layers, Golatkar et al.&#xA0;<a href=\"#ref-golatkar2021mixed golatkar2020forgetting\">(Golatkar et al. 2021; Golatkar, Achille, and Soatto 2020b)</a> proposed a caching approach that trains the model on data that is known a priori to be permanent; the model is then fine-tuned on user data using convex optimization.</p>
<p>Sophisticated unlearning methods for DNNs rely primarily on influence functions&#xA0;<a href=\"#ref-koh2017understanding zhang2022machine\">(Koh et al. 2017; P.-F. Zhang et al. 2022)</a>. Here, Taylor expansions are used to approximate the impact of a data item on the parameters of black-box models&#xA0;<a href=\"#ref-zeng2021learning\">(Zeng et al. 2021)</a>. Some variants include DeltaGrad&#xA0;<a href=\"#ref-wu2020deltagrad\">(Y. Wu et al. 2020)</a>, which stores the historical updates for each data item, and Fisher-based unlearning&#xA0;<a href=\"#ref-golatkar2020eternal\">(Golatkar, Achille, and Soatto 2020a)</a>, which we discussed under <a href=\"#sec:model-agnostic\" data-reference-type=\"autoref\" data-reference=\"sec:model-agnostic\">[sec:model-agnostic]</a>.
However, influence functions in deep neural networks are not stable for a large forget set&#xA0;<a href=\"#ref-basu2021influence mahadevan2021certifiable mahadevan2022certifiable\">(Basu, Pope, and Feizi 2021; Mahadevan and Mathioudakis 2021, 2022)</a>.</p>
<p>More precisely, after the data to be forgotten has been deleted from the database, Fisher-based unlearning&#xA0;<a href=\"#ref-golatkar2020eternal\">(Golatkar, Achille, and Soatto 2020a)</a> works on the remaining training data with Newton&#x2019;s method, which uses second-order gradients. To mitigate potential information leaks, noise is injected into the model&#x2019;s parameters&#xA0;<a href=\"#ref-conggrapheditor\">(Cong and Mahdavi 2022a)</a>. As the Fisher-based method aims to approximate the model without the deleted data, there is no guarantee that all the influence of the deleted data has been removed. Although injecting noise can help mitigate information leaks, the model&#x2019;s performance may be affected by that noise&#xA0;<a href=\"#ref-conggrapheditor\">(Cong and Mahdavi 2022a)</a>.</p>
<p>Golatkar et al.&#xA0;<a href=\"#ref-golatkar2020eternal\">(Golatkar, Achille, and Soatto 2020a)</a> point out that the Hessian computation in certified removal mechanisms is too expensive for complex models like deep neural networks. Hence, they resorted to an approximation of the Hessian via the Levenberg-Marquardt semi-positive-definite approximation, which turns out to correspond to the Fisher Information Matrix&#xA0;<a href=\"#ref-martens2020new\">(Martens 2020)</a>. Although it does not provide a concrete theoretical guarantee, Fisher-based unlearning could lead to further information-theoretic approaches to machine unlearning&#xA0;<a href=\"#ref-guo2022efficient golatkar2020forgetting\">(T.
Guo et al. 2022; Golatkar, Achille, and Soatto 2020b)</a>.</p>

| **Paper Title** | **Year** | **Author** | **Venue** | **Model** | **Code** | **Type** |
| --------------- | :----: | ---- | :----: | :----: | :----: | :----: |
| [Towards Adversarial Evaluations for Inexact Machine Unlearning](https://arxiv.org/abs/2201.06640) | 2023 | Goel et al. | _arXiv_ | EU-k, CF-k | [[Code]](https://github.com/shash42/Evaluating-Inexact-Unlearning) |  |
| [On the Trade-Off between Actionable Explanations and the Right to be Forgotten](https://openreview.net/pdf?id=HWt4BBZjVW) | 2023 | Pawelczyk et al. | _arXiv_ | - | - |  |
| [Towards Unbounded Machine Unlearning](https://arxiv.org/pdf/2302.09880) | 2023 | Kurmanji et al. | _arXiv_ | SCRUB | [[Code]](https://github.com/Meghdad92/SCRUB) | Approximate Unlearning |
| [Netflix and Forget: Efficient and Exact Machine Unlearning from Bi-linear Recommendations](https://arxiv.org/abs/2302.06676) | 2023 | Xu et al. | _arXiv_ | Unlearn-ALS | - | Exact Unlearning |
| [To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods](https://arxiv.org/abs/2302.03350) | 2023 | Zhang et al. | _arXiv_ | - | [[Code]](https://github.com/cleverhans-lab/machine-unlearning) |  |
| [Sequential Informed Federated Unlearning: Efficient and Provable Client Unlearning in Federated Optimization](https://arxiv.org/abs/2211.11656) | 2022 | Fraboni et al. | _arXiv_ | SIFU | - |  |
| [Certified Data Removal in Sum-Product Networks](https://arxiv.org/abs/2210.01451) | 2022 | Becker and Liebig | _ICKG_ | UNLEARNSPN | [[Code]](https://github.com/ROYALBEFF/UnlearnSPN) | Certified Removal Mechanisms |
| [Learning with Recoverable Forgetting](https://arxiv.org/abs/2207.08224) | 2022 | Ye et al. | _ECCV_ | LIRF | - |  |
| [Continual Learning and Private Unlearning](https://arxiv.org/abs/2203.12817) | 2022 | Liu et al.
| _CoLLAs_ | CLPU | [[Code]](https://github.com/Cranial-XIX/Continual-Learning-Private-Unlearning) | |\n","| [Verifiable and Provably Secure Machine Unlearning](https://arxiv.org/abs/2210.09126) | 2022 | Eisenhofer et al. | _arXiv_ | - | [[Code]](https://github.com/cleverhans-lab/verifiable-unlearning) |  Certified Removal Mechanisms |\n","| [VeriFi: Towards Verifiable Federated Unlearning](https://arxiv.org/abs/2205.12709) | 2022 | Gao et al. | _arXiv_ | VERIFI | - | Certified Removal Mechanisms |\n","| [FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information](https://arxiv.org/abs/2210.10936) | 2022 | Cao et al. | _S&P_ | FedRecover | - | recovery method |\n","| [Fast Yet Effective Machine Unlearning](https://arxiv.org/abs/2111.08947) | 2022 | Tarun et al. | _arXiv_ | UNSIR | - |  |\n","| [Membership Inference via Backdooring](https://arxiv.org/abs/2206.04823) | 2022 | Hu et al.  | _IJCAI_ | MIB | [[Code]](https://github.com/HongshengHu/membership-inference-via-backdooring) | Membership Inferencing |\n","| [Forget Unlearning: Towards True Data-Deletion in Machine Learning](https://arxiv.org/abs/2210.08911) | 2022 | Chourasia et al. | _ICLR_ | - | - | noisy gradient descent |\n","| [Zero-Shot Machine Unlearning](https://arxiv.org/abs/2201.05629) | 2022 | Chundawat et al. | _arXiv_ | - | - |  |\n","| [Efficient Attribute Unlearning: Towards Selective Removal of Input Attributes from Feature Representations](https://arxiv.org/abs/2202.13295) | 2022 | Guo et al. | _arXiv_ | attribute unlearning | - |  |\n","| [Few-Shot Unlearning](https://download.huan-zhang.com/events/srml2022/accepted/yoon22fewshot.pdf) | 2022 | Yoon et al.   | _ICLR_ | - | - |  |\n","| [Federated Unlearning: How to Efficiently Erase a Client in FL?](https://arxiv.org/abs/2207.05521) | 2022 | Halimi et al. 
| _UpML Workshop_ | - | - | federated learning |\n","| [Machine Unlearning Method Based On Projection Residual](https://arxiv.org/abs/2209.15276) | 2022 | Cao et al. | _DSAA_ | - | - |  Projection Residual Method |\n","| [Hard to Forget: Poisoning Attacks on Certified Machine Unlearning](https://ojs.aaai.org/index.php/AAAI/article/view/20736) | 2022 | Marchant et al. | _AAAI_ | - | [[Code]](https://github.com/ngmarchant/attack-unlearning) | Certified Removal Mechanisms |\n","| [Athena: Probabilistic Verification of Machine Unlearning](https://web.archive.org/web/20220721061150id_/https://petsymposium.org/popets/2022/popets-2022-0072.pdf) | 2022 | Sommer et al. | _PoPETs_ | ATHENA | - | |\n","| [FP2-MIA: A Membership Inference Attack Free of Posterior Probability in Machine Unlearning](https://link.springer.com/chapter/10.1007/978-3-031-20917-8_12) | 2022 | Lu et al. | _ProvSec_ | FP2-MIA | - | inference attack |\n","| [Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning](https://arxiv.org/abs/2202.03460) | 2022 | Gao et al. | _PETS_ | - | - |  |\n","| [Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization](https://openreview.net/pdf?id=ue4gP8ZKiWb) | 2022 | Zhang et al.   | _NeurIPS_ | PCMU | - | Certified Removal Mechanisms |\n","| [The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining](https://arxiv.org/abs/2203.07320) | 2022 | Liu et al. | _INFOCOM_ | - | [[Code]](https://github.com/yiliucs/federated-unlearning) |  |\n","| [Backdoor Defense with Machine Unlearning](https://arxiv.org/abs/2201.09538) | 2022 | Liu et al. | _INFOCOM_ | BAERASER | - | Backdoor defense |\n","| [Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten](https://dl.acm.org/doi/abs/10.1145/3488932.3517406) | 2022 | Nguyen et al. 
| _ASIA CCS_ | MCU | - | MCMC Unlearning  |\n","| [Federated Unlearning for On-Device Recommendation](https://arxiv.org/abs/2210.10958) | 2022 | Yuan et al. | _arXiv_ | - | - |  |\n","| [Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher](https://arxiv.org/abs/2205.08096) | 2022 | Chundawat et al. | _arXiv_ | - | - | Knowledge Adaptation |\n","| [ Efficient Two-Stage Model Retraining for Machine Unlearning](https://openaccess.thecvf.com/content/CVPR2022W/HCIS/html/Kim_Efficient_Two-Stage_Model_Retraining_for_Machine_Unlearning_CVPRW_2022_paper.html) | 2022 | Kim and Woo | _CVPR Workshop_ | - | - |  |\n","| [Learn to Forget: Machine Unlearning Via Neuron Masking](https://ieeexplore.ieee.org/abstract/document/9844865?casa_token=_eowH3BTt1sAAAAA:X0uCpLxOwcFRNJHoo3AtA0ay4t075_cSptgTMznsjusnvgySq-rJe8GC285YhWG4Q0fUmP9Sodw0) | 2021 | Ma et al. | _IEEE_ | Forsaken | - | Mask Gradients |\n","| [Adaptive Machine Unlearning](https://proceedings.neurips.cc/paper/2021/hash/87f7ee4fdb57bdfd52179947211b7ebb-Abstract.html) | 2021 | Gupta et al. | _NeurIPS_ | - | [[Code]](https://github.com/ChrisWaites/adaptive-machine-unlearning) | Differential Privacy |\n","| [Descent-to-Delete: Gradient-Based Methods for Machine Unlearning](https://proceedings.mlr.press/v132/neel21a.html) | 2021 | Neel et al. | _ALT_ | - | - | Certified Removal Mechanisms |\n","| [Remember What You Want to Forget: Algorithms for Machine Unlearning](https://arxiv.org/abs/2103.03279) | 2021 | Sekhari et al. | _NeurIPS_ | - | - |  |\n","| [FedEraser: Enabling Efficient Client-Level Data Removal from Federated Learning Models](https://ieeexplore.ieee.org/abstract/document/9521274) | 2021 | Liu et al. | _IWQoS_ | FedEraser | - |  |\n","| [Federated Unlearning](https://arxiv.org/abs/2012.13891) | 2021 | Liu et al. 
| _IWQoS_ | FedEraser | [[Code]](https://www.dropbox.com/s/1lhx962axovbbom/FedEraser-Code.zip?dl=0) |  |
| [Machine Unlearning via Algorithmic Stability](https://proceedings.mlr.press/v134/ullah21a.html) | 2021 | Ullah et al. | _COLT_ | TV | - | Certified Removal Mechanisms |
| [EMA: Auditing Data Removal from Trained Models](https://link.springer.com/chapter/10.1007/978-3-030-87240-3_76) | 2021 | Huang et al. | _MICCAI_ | EMA | [[Code]](https://github.com/Hazelsuko07/EMA) | Certified Removal Mechanisms |
| [Knowledge-Adaptation Priors](https://proceedings.neurips.cc/paper/2021/hash/a4380923dd651c195b1631af7c829187-Abstract.html) | 2021 | Khan and Swaroop | _NeurIPS_ | K-prior | [[Code]](https://github.com/team-approx-bayes/kpriors) | Knowledge Adaptation |
| [PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models](https://dl.acm.org/doi/abs/10.1145/3318464.3380571) | 2020 | Wu et al. | _SIGMOD_ | PrIU | - | Knowledge Adaptation |
| [Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks](https://arxiv.org/abs/1911.04933) | 2020 | Golatkar et al. | _CVPR_ | - | - | Certified Removal Mechanisms |
| [Learn to Forget: User-Level Memorization Elimination in Federated Learning](https://www.researchgate.net/profile/Ximeng-Liu-5/publication/340134612_Learn_to_Forget_User-Level_Memorization_Elimination_in_Federated_Learning/links/5e849e64a6fdcca789e5f955/Learn-to-Forget-User-Level-Memorization-Elimination-in-Federated-Learning.pdf) | 2020 | Liu et al. | _arXiv_ | Forsaken | - |  |
| [Certified Data Removal from Machine Learning Models](https://proceedings.mlr.press/v119/guo20c.html) | 2020 | Guo et al. | _ICML_ | - | - | Certified Removal Mechanisms |
| [Class Clown: Data Redaction in Machine Unlearning at Enterprise Scale](https://arxiv.org/abs/2012.04699) | 2020 | Felps et al.
| _arXiv_ | - | - | Decremental Learning |
| [A Novel Online Incremental and Decremental Learning Algorithm Based on Variable Support Vector Machine](https://link.springer.com/article/10.1007/s10586-018-1772-4) | 2019 | Chen et al. | _Cluster Computing_ | - | - | Decremental Learning |
| [Making AI Forget You: Data Deletion in Machine Learning](https://papers.nips.cc/paper/2019/hash/cb79f8fa58b91d3af6c9c991f63962d3-Abstract.html) | 2019 | Ginart et al. | _NeurIPS_ | - | - | Decremental Learning |
| [Lifelong Anomaly Detection Through Unlearning](https://dl.acm.org/doi/abs/10.1145/3319535.3363226) | 2019 | Du et al. | _CCS_ | - | - |  |
| [Learning Not to Learn: Training Deep Neural Networks With Biased Data](https://openaccess.thecvf.com/content_CVPR_2019/html/Kim_Learning_Not_to_Learn_Training_Deep_Neural_Networks_With_Biased_CVPR_2019_paper.html) | 2019 | Kim et al. | _CVPR_ | - | - |  |
| [Efficient Repair of Polluted Machine Learning Systems via Causal Unlearning](https://dl.acm.org/citation.cfm?id=3196517) | 2018 | Cao et al. | _ASIACCS_ | KARMA | [[Code]](https://github.com/CausalUnlearning/KARMA) |  |
| [Understanding Black-box Predictions via Influence Functions](https://proceedings.mlr.press/v70/koh17a.html) | 2017 | Koh et al. | _ICML_ | - | [[Code]](https://github.com/kohpangwei/influence-release) | Certified Removal Mechanisms |
| [Towards Making Systems Forget with Machine Unlearning](https://ieeexplore.ieee.org/abstract/document/7163042) | 2015 | Cao and Yang | _S&P_ | - | - | Statistical Query Learning |
| [Incremental and decremental training for linear classification](https://dl.acm.org/doi/10.1145/2623330.2623661) | 2014 | Tsai et al.
| _KDD_ | - | [[Code]](https://www.csie.ntu.edu.tw/~cjlin/papers/ws/) | Decremental Learning |
| [Multiple Incremental Decremental Learning of Support Vector Machines](https://dl.acm.org/doi/10.5555/2984093.2984196) | 2009 | Karasuyama et al. | _NIPS_ | - | - | Decremental Learning |
| [Incremental and Decremental Learning for Linear Support Vector Machines](https://dl.acm.org/doi/10.5555/1776814.1776838) | 2007 | Romero et al. | _ICANN_ | - | - | Decremental Learning |
| [Decremental Learning Algorithms for Nonlinear Lagrangian and Least Squares Support Vector Machines](https://www.semanticscholar.org/paper/Decremental-Learning-Algorithms-for-Nonlinear-and-Duan-Li/312c677f0882d0dfd60bfd77346588f52aefd10f) | 2007 | Duan et al. | _OSB_ | - | - | Decremental Learning |
| [Multicategory Incremental Proximal Support Vector Classifiers](https://link.springer.com/chapter/10.1007/978-3-540-45224-9_54) | 2003 | Tveit et al. | _KES_ | - | - | Decremental Learning |
| [Incremental and Decremental Proximal Support Vector Classification using Decay Coefficients](https://link.springer.com/chapter/10.1007/978-3-540-45228-7_42) | 2003 | Tveit et al. | _DaWak_ | - | - | Decremental Learning |
| [Incremental and Decremental Support Vector Machine Learning](https://dl.acm.org/doi/10.5555/3008751.3008808) | 2000 | Cauwenberghs et al. | _NIPS_ | - | - | Decremental Learning |
----------"]},{"cell_type":"markdown","id":"588ab764","metadata":{"papermill":{"duration":0.008693,"end_time":"2023-07-12T05:38:47.237247","exception":false,"start_time":"2023-07-12T05:38:47.228554","status":"completed"},"tags":[]},"source":["<h2 id=\"data-driven-approaches\">4.3. Data-Driven Approaches</h2>

The approaches falling into this category use data partitioning, data augmentation, and data influence to speed up the retraining process. Methods of attack by data manipulation (e.g.
data poisoning) are also included for reference.

<div class=\"figure*\">
<figure>
<img src=\"https://raw.githubusercontent.com/tamlhp/awesome-machine-unlearning/main/figs/data-driven.png\" alt=\"https://arxiv.org/abs/2209.02299\" style=\"max-width: 60%;\"/>
</figure>
</div>

<p><strong>Data Partitioning (Efficient Retraining).</strong> The approaches falling into this category use data partitioning mechanisms to speed up the retraining process. Alternatively, they partially retrain the model with some bounds on accuracy. Bourtoule et al.&#xA0;<a href=\"#ref-bourtoule2021machine\">(Bourtoule et al. 2021)</a> proposed the well-known SISA framework (<a href=\"#fig:partition\" data-reference-type=\"autoref\" data-reference=\"fig:partition\">[fig:partition]</a>), which partitions the data into shards and slices. Each shard has a single model, and the final output is an aggregation of multiple models over these shards. For each slice of a shard, a model checkpoint is stored during training so that a new model can be retrained from an intermediate state&#xA0;<a href=\"#ref-bourtoule2021machine aldaghri2021coded\">(Bourtoule et al. 2021; Aldaghri, Mahdavifar, et al. 2021)</a>.</p>

<figure id=\"fig:partition\">
<img src=\"https://raw.githubusercontent.com/tamlhp/awesome-machine-unlearning/main/kaggle/partition.png\" alt=\"Efficient retraining for machine unlearning using data partition\" style=\"max-width: 80%;\" />
<figcaption aria-hidden=\"true\">Efficient retraining for machine unlearning using data partition</figcaption>
</figure>

<p><strong>Data Augmentation (Error-manipulation noise).</strong> Data augmentation is the process of enriching or adding more data to support a model&#x2019;s training&#xA0;<a href=\"#ref-yu2021does\">(D. Yu et al. 2021)</a>. Such mechanisms can be used to support machine unlearning as well.
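To make the shard/slice mechanics of SISA concrete, here is a minimal sketch. The toy majority-vote learner, the checkpointing-by-copy scheme, and all helper names are illustrative simplifications, not the authors' implementation:

```python
import copy
from collections import Counter

class MajorityModel:
    """Toy incremental learner: predicts the majority label it has seen."""
    def __init__(self):
        self.counts = Counter()
    def fit_increment(self, data):
        self.counts.update(label for _, label in data)
    def predict(self, x):
        return self.counts.most_common(1)[0][0]

class SISA:
    """Simplified SISA: one model per shard, a checkpoint after each slice."""
    def __init__(self, model_factory, shards):
        self.model_factory = model_factory
        self.shards = shards                # shards[s] = list of slices (lists of items)
        self.checkpoints = {}               # (shard, slice) -> saved model state
        self.models = [self._train_shard(s) for s in range(len(shards))]

    def _train_shard(self, s, start_slice=0):
        # Resume from the checkpoint taken just before start_slice, or start fresh.
        model = (copy.deepcopy(self.checkpoints[(s, start_slice - 1)])
                 if start_slice > 0 else self.model_factory())
        for i in range(start_slice, len(self.shards[s])):
            model.fit_increment(self.shards[s][i])
            self.checkpoints[(s, i)] = copy.deepcopy(model)
        return model

    def unlearn(self, s, i, item):
        """Delete `item` from slice i of shard s; retrain from the last clean checkpoint."""
        self.shards[s][i].remove(item)
        self.models[s] = self._train_shard(s, start_slice=i)

    def predict(self, x):
        votes = Counter(m.predict(x) for m in self.models)
        return votes.most_common(1)[0][0]

shards = [
    [[("a", 1), ("b", 1)], [("c", 1)]],   # shard 0, two slices
    [[("d", 0)], [("e", 0), ("f", 0)]],   # shard 1
    [[("g", 1)]],                         # shard 2
]
ensemble = SISA(MajorityModel, shards)
ensemble.unlearn(0, 1, ("c", 1))          # forget one item; only shard 0 is retrained
print(ensemble.predict("x"))              # -> 1
```

The key saving is that a deletion only retrains one shard, and only from the slice containing the deleted item onward.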
Huang et al.&#xA0;<a href=\"#ref-huang2021unlearnable\">(H. Huang et al. 2021)</a> proposed the idea of error-minimizing noise, which tricks a model into thinking that there is nothing to be learned from a given set of data (i.e., the loss does not change). However, it can only be used to protect a particular data item before the model is trained. A similar setting was also studied in the Fawkes system&#xA0;<a href=\"#ref-shan2020protecting\">(Shan et al. 2020)</a>, in which a targeted adversarial attack is used to ensure the model does not learn anything from a targeted data item.</p>
<p>Conversely, Tarun et al.&#xA0;<a href=\"#ref-tarun2021fast\">(Tarun et al. 2021)</a> proposed error-maximizing noise to impair the model on a target class of data (to be forgotten). However, this tactic does not work on specific data items, as it is easier to interfere with a model&#x2019;s predictions on a whole class than on a specific data item of that class&#xA0;<a href=\"#ref-tarun2021fast\">(Tarun et al. 2021)</a>.</p>
<p><strong>Data influence.</strong> This group of unlearning approaches studies how a change in the training data impacts a model&#x2019;s parameters&#xA0;<a href=\"#ref-wu2022puma conggrapheditor cao2022machine\">(G. Wu, Hashemi, and Srinivasa 2022; Cong and Mahdavi 2022a; Z. Cao et al. 2022)</a>, where the impact is computed using influence functions&#xA0;<a href=\"#ref-mahadevan2022certifiable chundawat2022zero\">(Mahadevan and Mathioudakis 2022; Chundawat et al. 2022b)</a>. However, influence functions depend on the current state of a learning algorithm&#xA0;<a href=\"#ref-wu2022puma\">(G. Wu, Hashemi, and Srinivasa 2022)</a>.
To mitigate this issue, several works store a training history of intermediate quantities (e.g., model parameters or gradients) generated by each step of model training&#xA0;<a href=\"#ref-graves2021amnesiac neel2021descent wu2020deltagrad wu2020priu\">(Graves, Nagisetty, and Ganesh 2021; Neel, Roth, and Sharifi-Malvajerdi 2021; Y. Wu et al. 2020; Y. Wu, Tannen, and Davidson 2020)</a>. The unlearning process then becomes one of subtracting these historical updates. However, the model&#x2019;s accuracy might degrade significantly due to catastrophic unlearning&#xA0;<a href=\"#ref-nguyen2020variational\">(Q. P. Nguyen, Low, and Jaillet 2020)</a>, since the order in which the training data is fed matters to the learning model. Moreover, the influence itself does not verify whether the data to be forgotten is still included in the unlearned model&#xA0;<a href=\"#ref-thudi2021necessity thudi2022unrolling\">(Thudi, Jia, et al. 2022; Thudi, Deza, et al. 2022)</a>.</p>
<p>Zeng et al.&#xA0;<a href=\"#ref-zeng2021learning\">(Zeng et al. 2021)</a> suggested a new method of modeling data influence by adding regularization terms to the learning algorithm. Although this method is model-agnostic, it requires intervening in the training process of the original model. Moreover, it is only applicable to convex learning problems and deep neural networks.</p>
<p>Peste et al.&#xA0;<a href=\"#ref-peste2021ssse\">(Peste, Alistarh, and Lampert 2021)</a> closed this gap by introducing a new Fisher-based unlearning method that approximates the Hessian matrix. This method works for both shallow and deep models, and for both convex and non-convex problems. The idea is to efficiently compute the matrix inversion of a Fisher Information Matrix using rank-one updates.
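Rank-one inverse updates of this kind are typically applications of the Sherman-Morrison identity, which refreshes an existing inverse in O(d&#xB2;) instead of recomputing it in O(d&#xB3;). A small numpy sketch (an illustration of the identity, not Peste et al.'s code; removing a sample's contribution corresponds to applying the update with a negated vector):

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, in O(d^2) time."""
    Au = A_inv @ u          # A^{-1} u
    vA = v @ A_inv          # v^T A^{-1}
    denom = 1.0 + v @ Au    # 1 + v^T A^{-1} u  (must be nonzero)
    return A_inv - np.outer(Au, vA) / denom

rng = np.random.default_rng(0)
d = 5
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # a well-conditioned matrix
u = 0.1 * rng.standard_normal(d)
v = 0.1 * rng.standard_normal(d)

fast = sherman_morrison_update(np.linalg.inv(A), u, v)
slow = np.linalg.inv(A + np.outer(u, v))            # full O(d^3) recomputation
assert np.allclose(fast, slow)
```

For a Fisher matrix built as a sum of per-sample outer products of gradients, deleting a sample is a downdate: pass `u = g` and `v = -g` for that sample's gradient `g`.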
However, as the whole process\n","is approximate, there is no concrete guarantee on the unlearned\n","model.</p>\n","\n","| **Paper Title** | **Year** | **Author** | **Venue** | **Model** | **Code** | **Type** |\n","| --------------- | :----: | ---- | :----: | :----: | :----: | :----: |\n","| [Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks](https://arxiv.org/abs/2212.10717) | 2022 | Di et al. | _NeurIPS_ | - | [[Code]](https://github.com/Jimmy-di/camouflage-poisoning) | Data Poisoning |\n","| [Forget Unlearning: Towards True Data Deletion in Machine Learning](https://arxiv.org/pdf/2210.08911.pdf) | 2022 | Chourasia et al. | _ICLR_ | - | - | Data Influence |\n","| [ARCANE: An Efficient Architecture for Exact Machine Unlearning](https://www.ijcai.org/proceedings/2022/0556.pdf) | 2022 | Yan et al.  | _IJCAI_ | ARCANE | - | Data Partition |\n","| [PUMA: Performance Unchanged Model Augmentation for Training Data Removal](https://ojs.aaai.org/index.php/AAAI/article/view/20846) | 2022 | Wu et al. | _AAAI_ | PUMA | - | Data Influence |\n","| [Certifiable Unlearning Pipelines for Logistic Regression: An Experimental Study](https://www.mdpi.com/2504-4990/4/3/28) | 2022 | Mahadevan and Mathioudakis | _MAKE_ | - | [[Code]](https://version.helsinki.fi/mahadeva/unlearning-experiments) | Data Influence |\n","| [Zero-Shot Machine Unlearning](https://arxiv.org/abs/2201.05629) | 2022 | Chundawat et al. | _arXiv_ | - | - | Data Influence |\n","| [GRAPHEDITOR: An Efficient Graph Representation Learning and Unlearning Approach](https://congweilin.github.io/CongWeilin.io/files/GraphEditor.pdf) | 2022 | Cong and Mahdavi | - | GRAPHEDITOR | [[Code]](https://anonymous.4open.science/r/GraphEditor-NeurIPS22-856E/README.md) | Data Influence |\n","| [Fast Model Update for IoT Traffic Anomaly Detection with Machine Unlearning](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9927728) | 2022 | Fan et al. 
| _IEEE IoT-J_ | ViFLa | - | Data Partition |
| [Learning to Refit for Convex Learning Problems](https://arxiv.org/abs/2111.12545) | 2021 | Zeng et al. | _arXiv_ | OPTLEARN | - | Data Influence |
| [Fast Yet Effective Machine Unlearning](https://arxiv.org/abs/2111.08947) | 2021 | Tarun et al. | _arXiv_ | - | - | Data Augmentation |
| [Learning with Selective Forgetting](https://www.ijcai.org/proceedings/2021/0137.pdf) | 2021 | Shibata et al. | _IJCAI_ | - | - | Data Augmentation |
| [SSSE: Efficiently Erasing Samples from Trained Machine Learning Models](https://openreview.net/forum?id=GRMKEx3kEo) | 2021 | Peste et al. | _NeurIPS_ | SSSE | - | Data Influence |
| [How Does Data Augmentation Affect Privacy in Machine Learning?](https://arxiv.org/abs/2007.10567) | 2021 | Yu et al. | _AAAI_ | - | [[Code]](https://github.com/dayu11/MI_with_DA) | Data Augmentation |
| [Coded Machine Unlearning](https://ieeexplore.ieee.org/document/9458237) | 2021 | Aldaghri et al. | _IEEE Access_ | - | - | Data Partitioning |
| [Machine Unlearning](https://ieeexplore.ieee.org/document/9519428) | 2021 | Bourtoule et al. | _S&P_ | SISA | [[Code]](https://github.com/cleverhans-lab/machine-unlearning) | Data Partitioning |
| [Amnesiac Machine Learning](https://ojs.aaai.org/index.php/AAAI/article/view/17371) | 2021 | Graves et al. | _AAAI_ | AmnesiacML | [[Code]](https://github.com/lmgraves/AmnesiacML) | Data Influence |
| [Unlearnable Examples: Making Personal Data Unexploitable](https://arxiv.org/abs/2101.04898) | 2021 | Huang et al.
| _ICLR_ | - | [[Code]](https://github.com/HanxunH/Unlearnable-Examples) | Data Augmentation |
| [Descent-to-Delete: Gradient-Based Methods for Machine Unlearning](https://proceedings.mlr.press/v132/neel21a.html) | 2021 | Neel et al. | _ALT_ | - | - | Data Influence |
| [Fawkes: Protecting Privacy against Unauthorized Deep Learning Models](https://dl.acm.org/doi/abs/10.5555/3489212.3489302) | 2020 | Shan et al. | _USENIX Sec. Sym._ | Fawkes | [[Code]](https://github.com/Shawn-Shan/fawkes) | Data Augmentation |
| [PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models](https://dl.acm.org/doi/abs/10.1145/3318464.3380571) | 2020 | Wu et al. | _SIGMOD_ | PrIU/PrIU-opt | - | Data Influence |
| [DeltaGrad: Rapid retraining of machine learning models](https://proceedings.mlr.press/v119/wu20b.html) | 2020 | Wu et al. | _ICML_ | DeltaGrad | [[Code]](https://github.com/thuwuyinjun/DeltaGrad) | Data Influence |

----------"]},{"cell_type":"markdown","id":"131d6842","metadata":{"papermill":{"duration":0.009368,"end_time":"2023-07-12T05:38:47.256196","exception":false,"start_time":"2023-07-12T05:38:47.246828","status":"completed"},"tags":[]},"source":["<h2 id=\"sec:metrics\">5. Evaluation Metrics</h2>

| Metrics | Formula/Description | Usage |
| ---- | ---- | ---- |
| Accuracy | Accuracy of the unlearned model on the forget set and the retained set | Evaluating the predictive performance of the unlearned model |
| Completeness | The overlap (e.g.
Jaccard distance) of output space between the retrained and the unlearned model | Evaluating the indistinguishability between model outputs |\n","| Unlearn Time | The time needed to process an unlearning request | Evaluating the unlearning efficiency |\n","| Relearn Time | The number of epochs required for the unlearned model to reach the accuracy of the source model | Evaluating the unlearning efficiency (relearning with some data samples) |\n","| Layer-wise Distance | The weight difference between the original model and the retrained model | Evaluating the indistinguishability between model parameters |\n","| Activation Distance | An average of the L2-distance between the unlearned and retrained models’ predicted probabilities on the forget set | Evaluating the indistinguishability between model outputs | \n","| JS-Divergence | Jensen-Shannon divergence between the predictions of the unlearned and retrained model | Evaluating the indistinguishability between model outputs |\n","| Membership Inference Attack | Recall (#detected items / #forget items) | Verifying the influence of forget data on the unlearned model |\n","| ZRF score | $\mathcal{ZRF} = 1 - \frac{1}{n_f}\sum\limits_{i=1}^{n_f} \mathcal{JS}(M(x_i), T_d(x_i))$ | The unlearned model should neither intentionally give wrong outputs $(\mathcal{ZRF} = 0)$ nor purely random outputs $(\mathcal{ZRF} = 1)$ on the forget items |\n","| Anamnesis Index (AIN) | $AIN = \frac{r_t (M_u, M_{orig}, \alpha)}{r_t (M_s, M_{orig}, \alpha)}$ | Zero-shot machine unlearning | \n","| Epistemic Uncertainty | if $i(w;D) > 0$, then $\mathrm{efficacy}(w;D) = \frac{1}{i(w;D)}$;<br />otherwise $\mathrm{efficacy}(w;D) = \infty$ | How much information the model exposes |\n","| Model Inversion Attack | Visualization | Qualitative verification and evaluation |\n","\n","----------\n","\n","<p>The most often used metrics for measuring unlearning\n","performance include accuracy, completeness, unlearn time, distance, and\n","forgetting scores. 
Their formulas and common usage are summarized in <a\n","href=\"#tab:metrics\" data-reference-type=\"autoref\"\n","data-reference=\"tab:metrics\">[tab:metrics]</a>. More detailed\n","descriptions are given below.</p>\n","\n","<p><strong>Accuracy.</strong> In machine unlearning, a model&#x2019;s accuracy\n","needs to be compared on three different datasets: (1) The set to be\n","forgotten. Since the expected behaviour of an unlearned model after\n","unlearning should mirror that of a retrained model, the accuracy on the\n","forgotten data should be similar to that of the retrained model. (2) The\n","retained set. The retained set&#x2019;s accuracy should be close to that of the\n","original model. (3) The test set. The unlearned model should still\n","perform well on a separate test dataset compared to the retrained\n","model.</p>\n","<p><strong>Completeness.</strong> The influence of the to-be-removed\n","samples on the unlearned model must be completely eliminated.\n","Completeness, hence, measures the degree to which an unlearned model is\n","compatible with a retrained model&#xA0;<a href=\"#ref-cao2015towards\">(Y. Cao and Yang 2015)</a>. If the\n","unlearned model gives similar predictions to a retrained model for all\n","samples, it is impractical to recover the forgotten data or its\n","lineage by feeding samples to the model or observing the model&#x2019;s\n","information. The final metric is often calculated as the overlap of\n","output space (e.g., the Jaccard distance) between the unlearned model\n","and the retrained model. However, computing this metric is often\n","computationally expensive.</p>\n","<p><strong>Unlearning time and Retraining time.</strong> Timeliness\n","quantifies the time saved when using unlearning instead of retraining\n","for a model update. The quicker the system restores privacy, security, and\n","usefulness, the more timely the unlearning process. 
In particular,\n","retraining executes the learning algorithm on the whole training\n","set, whereas unlearning re-executes the learning algorithm on only a\n","small number of summations; hence, unlearning is quicker\n","due to the reduced amount of data to process.</p>\n","<p><strong>Relearn time.</strong> Relearning time is an excellent proxy\n","for measuring the amount of unlearned data information left in the\n","model. If a model recovers its performance on unlearned data with just a\n","few steps of retraining, it is extremely probable that the model has\n","retained some knowledge of the unlearned data.</p>\n","<p><strong>The layer-wise distance.</strong> The layer-wise distance\n","between the original and unlearned models helps when trying to\n","understand the impact of the unlearning on each layer. The weight\n","difference should be comparable to that of a retrained model: a much\n","shorter distance indicates ineffective unlearning, whereas a much\n","longer distance may point to a Streisand effect and possible information\n","leaks.</p>\n","<p><strong>Activation Distance.</strong> The activation distance is the\n","distance between the final activations (predicted probabilities) of the\n","model with scrubbed weights and those of the retrained model on the\n","forget set. A shorter activation distance indicates superior\n","unlearning.</p>\n","<p><strong>JS-Divergence.</strong> When paired with the activation\n","distance, the JS-Divergence between the predictions of the unlearned and\n","retrained model provides a fuller picture of unlearning. Less\n","divergence results in better unlearning. 
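As a concrete illustration, both of these output-space metrics can be computed directly from the two models' predicted probabilities on the forget set. Below is a minimal NumPy sketch (hypothetical probability arrays; natural-logarithm KL assumed), not tied to any particular unlearning method:

```python
import numpy as np

def activation_distance(p_unlearned, p_retrained):
    # Average L2 distance between the two models' predicted
    # probability vectors on the forget set.
    return float(np.mean(np.linalg.norm(p_unlearned - p_retrained, axis=1)))

def js_divergence(p, q, eps=1e-12):
    # Mean Jensen-Shannon divergence: 0.5*KL(p||m) + 0.5*KL(q||m), m=(p+q)/2.
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)), axis=1)
    return float(np.mean(0.5 * kl(p, m) + 0.5 * kl(q, m)))

# Hypothetical predicted probabilities of the unlearned (p_u)
# and retrained (p_r) models on a two-sample forget set.
p_u = np.array([[0.9, 0.1], [0.2, 0.8]])
p_r = np.array([[0.8, 0.2], [0.3, 0.7]])
print(activation_distance(p_u, p_r))
print(js_divergence(p_u, p_r))
```

Identical outputs yield zero for both metrics; larger values indicate that the unlearned model is more easily distinguishable from the retrained one.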
The formula of JS-Divergence is\n","<span class=\"math inline\">$\mathcal{JS}(M(x), T_d(x)) = 0.5 \cdot \mathcal{KL}(M(x) \| m) + 0.5 \cdot \mathcal{KL}(T_d(x) \| m)$</span>,\n","where <span class=\"math inline\"><em>M</em></span> is the unlearned model,\n","<span class=\"math inline\"><em>T</em><sub><em>d</em></sub></span> is a\n","competent teacher, <span class=\"math inline\">$\mathcal{KL}$</span> is the\n","Kullback-Leibler divergence <a href=\"#ref-KLFormula\">(Kullback et al. 1951)</a>, and <span\n","class=\"math inline\">$m = \frac{M(x)+T_d(x)}{2}$</span>.</p>\n","<p><strong>Membership Inference.</strong> The membership inference\n","metric leverages a membership inference attack to determine whether or\n","not any information about the forgotten samples remains in the\n","model&#xA0;<a href=\"#ref-chen2021machine\">(M. Chen et\n","al. 2021b)</a>. The probability of a successful inference attack on the\n","forgotten data should be lower for the unlearned model than for the\n","original model.</p>\n","<p><strong>ZRF score.</strong> Zero Retrain Forgetting (ZRF) makes it\n","possible to evaluate unlearning approaches independent of\n","retraining&#xA0;<a href=\"#ref-chundawat2022can\">(Chundawat et al. 2022a)</a>. The\n","randomness of the model&#x2019;s predictions is measured by comparing\n","them to an incompetent teacher: ZRF compares the output distribution on\n","the set to be forgotten with the output of a randomly initialised model,\n","which plays the role of this incompetent teacher. 
The ZRF score ranges between\n","0 and 1; it will be near to 1 if the model&#x2019;s behaviour with the\n","forgotten samples is entirely random, and close to 0 if it exhibits a\n","certain pattern. The formula of the ZRF score is <span\n","class=\"math inline\">$\mathcal{ZRF} = 1 -\n","\frac{1}{n_f}\sum\limits_{i=1}^{n_f} \mathcal{JS}(M(x_i),\n","T_d(x_i))$</span>, where <span\n","class=\"math inline\"><em>x</em><sub><em>i</em></sub></span> is the <span\n","class=\"math inline\"><em>i</em></span>-th\n","sample from the set to be forgotten with a total number of samples <span\n","class=\"math inline\"><em>n</em><sub><em>f</em></sub></span>.</p>\n","<p><strong>Anamnesis Index (AIN).</strong> The better the unlearning,\n","the closer the AIN value is to 1. Instances where\n","information from the classes to be forgotten is still preserved in the\n","model correspond to AIN values well below 1. A score closer to 0 also\n","suggests that the unlearned model will rapidly relearn to generate\n","correct predictions. This may be due to the fact that the last layers\n","contain limited reversible modifications, which degrades the performance\n","of the model on the forgotten classes. If an AIN score is much greater\n","than 1, it may suggest that the approach causes parameter changes that\n","are so severe that the unlearning itself may be detected (Streisand\n","effect). This might be due to the fact that the model was pushed away\n","from the original point and, as a result, is unable to retrieve\n","previously learned knowledge about the forgotten class(es). The formula\n","for calculating an AIN value is <span class=\"math inline\">$AIN =\n","\frac{r_t (M_u, M_{orig}, \alpha)}{r_t (M_s, M_{orig}, \alpha)}$</span>,\n","where <span class=\"math inline\"><em>&#x3B1;</em>%</span> is a margin around\n","the initial precision used to determine relearn time. 
<span\n","class=\"math inline\">$r_t(M, M_{orig}, \alpha)$</span>\n","is the number of mini-batches (or steps) required by the model <span\n","class=\"math inline\"><em>M</em></span> to come within <span\n","class=\"math inline\"><em>&#x3B1;</em>%</span> of the precision of the\n","original model <span\n","class=\"math inline\"><em>M</em><sub><em>o</em><em>r</em><em>i</em><em>g</em></sub></span>\n","on the classes to be forgotten.\n","<span class=\"math inline\"><em>M</em><sub><em>u</em></sub></span> and\n","<span class=\"math inline\"><em>M</em><sub><em>s</em></sub></span>\n","respectively represent the unlearned model and a model trained from\n","scratch.</p>\n"]},{"cell_type":"markdown","id":"6adcd2c1","metadata":{"papermill":{"duration":0.008705,"end_time":"2023-07-12T05:38:47.274365","exception":false,"start_time":"2023-07-12T05:38:47.26566","status":"completed"},"tags":[]},"source":["<h1 id=\"unlearning-applications\">6. Unlearning Applications</h1>\n","<h2 id=\"unlearning-in-recommender-systems\">6.1. Unlearning in Recommender\n","Systems</h2>\n","<p>In the field of machine learning, recommender systems are used to\n","predict what a user might want to buy or watch. They often use\n","collaborative filtering to learn a user&#x2019;s preferences based on their\n","past behavior. However, a recommender system may be required to forget\n","private training points and their complete impact on the model in order to\n","protect user privacy or comply with explicit user removal requests.\n","Utility is another reason for unlearning requests. For example, the\n","accuracy of a recommendation can be degraded due to out-of-distribution\n","data or poisoning attacks&#xA0;<a href=\"#ref-marchant2022hard du2019lifelong\">(Marchant, Rubinstein, and\n","Alfeld 2022; Du, Chen, et al. 2019)</a>. 
In the latter case, data\n","that is detected as poisoned will need to be removed, while in the\n","former case, old data may need to be removed so that the system keeps up\n","with the new data distribution.</p>\n","<p>Unlearning techniques for machine learning in general cannot be used\n","directly on recommender systems&#xA0;<a href=\"#ref-chen2022recommendation wang2022efficiently\">(C. Chen et al.\n","2022; B. L. Wang and Schelter 2022)</a>. For example, collaborative\n","filtering recommendation exploits similarity information across\n","user-item interactions; thus, arbitrarily partitioning the training\n","sets could break such coupling information&#xA0;<a href=\"#ref-bourtoule2021machine\">(Bourtoule et al. 2021)</a>. Some\n","researchers have developed unlearning methods for graph data&#xA0;<a href=\"#ref-chen2021graph\">(M. Chen et al.\n","2021a)</a> and while recommendation data can be modelled as graphs,\n","their user-item interactions are not uniform&#xA0;<a href=\"#ref-chen2022recommendation\">(C. Chen et al. 2022)</a>.</p>\n","<p>To overcome these challenges, Chen et al.&#xA0;<a href=\"#ref-chen2022recommendation\">(C. Chen et al. 2022)</a>\n","proposed a partition-based retraining process, called smart retraining,\n","to unlearn the model from the removed user behavior data. The idea is to\n","develop a strategy to partition the data with regard to the resemblance\n","between users and items while maintaining the balance between different\n","partitions for retraining. Next, the outputs of the submodels, each of\n","which is associated with a disjoint partition, are combined\n","using an attention-based method.</p>\n","\n","<h2 id=\"unlearning-federated-learning\">6.2. Unlearning Federated\n","Learning</h2>\n","<p>Recently, federated learning has become popular in the field of\n","machine learning&#xA0;<a href=\"#ref-mcmahan2017communication\">(McMahan et al. 2017)</a>. 
One\n","typical federated learning scenario is building a machine learning model\n","from healthcare information. Due to privacy regulations, the medical\n","record data cannot leave the clients&#x2019; devices. Here, the clients could\n","be hospitals or personal computers and are assumed to have machine\n","learning environments. The clients do not transmit their actual data to\n","the server. Rather, there is a communication protocol between the\n","clients and the server that governs collaborative model training&#xA0;<a href=\"#ref-wu2022federated\">(C. Wu et al.\n","2022)</a>. In the literature, the communication protocol Federated\n","Averaging (FedAvg)&#xA0;<a href=\"#ref-mcmahan2017communication\">(McMahan et al. 2017)</a> is\n","typically used for model training. It consists of multiple rounds. In each\n","round, the current global model weights are transmitted to the clients.\n","Based on these weights, each client uses stochastic gradient descent to\n","adjust their local model. Then the local model&#x2019;s weights are forwarded\n","to the server. In the final phase of each round, the server aggregates\n","the received weights by (weighted) averaging to prepare the global model\n","for the next round&#xA0;<a href=\"#ref-halimi2022federated\">(Halimi et al. 2022)</a>.</p>\n","<p>Given such training protocols, machine unlearning cannot be extended\n","easily to the federated learning setting. This is because the global\n","weights are computed by aggregations rather than raw gradients. These\n","aggregated updates are especially mixed up when many clients participate&#xA0;<a href=\"#ref-gao2022verifi\">(X. Gao et al. 2022)</a>.\n","Moreover, these clients might have some overlapping data, making it\n","difficult to quantify the impact of each training item on the model\n","weights&#xA0;<a href=\"#ref-liu2020learn\">(Yang Liu et\n","al. 2020)</a>. 
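The FedAvg aggregation step described above can be sketched in a few lines. This is a minimal illustration with hypothetical weight arrays and client dataset sizes, not the reference implementation:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # One FedAvg aggregation round: a weighted average of the clients'
    # model weights, each client weighted by its local dataset size.
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    return [
        sum((n / total) * w[layer] for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

# Hypothetical round: two clients, a single weight "layer" each.
w_a = [np.array([1.0, 1.0])]   # client A, 1 local sample
w_b = [np.array([3.0, 3.0])]   # client B, 3 local samples
global_w = fedavg([w_a, w_b], client_sizes=[1, 3])
print(global_w[0])  # weighted toward the larger client: [2.5 2.5]
```

Because each client's contribution is folded into this weighted average, a single training item's influence on the global weights is hard to isolate afterwards, which is exactly the difficulty for federated unlearning noted above.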
Using classic unlearning methods by gradient\n","manipulation may even lead to severe accuracy degradation or new privacy\n","threats&#xA0;<a href=\"#ref-liu2021federaser\">(G. Liu et\n","al. 2021)</a>.</p>\n","<p>Additionally, current studies on federated unlearning tend to assume\n","that the data to be removed belongs wholly to one client&#xA0;<a href=\"#ref-liu2021federaser wang2022federated wu2022federated liu2020learn\">(G.\n","Liu et al. 2021; J. Wang, Guo, et al. 2022; C. Wu et al. 2022; Yang Liu\n","et al. 2020)</a>. With this assumption, the historical contributions\n","of particular clients to the global model&#x2019;s training can be logged and\n","erased easily. However, erasing historical parameter updates might still\n","damage the global model, and several strategies have been proposed to\n","overcome this issue. For example, Liu et al.&#xA0;<a href=\"#ref-liu2021federaser\">(G. Liu et al. 2021)</a> proposed\n","calibration training to separate the individual contributions of clients\n","as much as possible. This mechanism does not work well for deep neural\n","networks, but it does work well with shallow architectures such as a\n","2-layer CNN or a network with two fully-connected layers. In addition,\n","there is a trade-off between scalability and precision due to the cost\n","of storing historical information on the federated server. Wu et\n","al.&#xA0;<a href=\"#ref-wu2022federated\">(C. Wu et al.\n","2022)</a> put forward a knowledge distillation strategy that uses a\n","prime global model to train the unlearned model on the remaining data.\n","However, as the clients&#x2019; data is not accessible by the server, some\n","unlabeled (synthetic) data that follows the distribution of the whole\n","dataset needs to be sampled and extra rounds of information exchange are\n","needed between the clients and server. As a result, the whole process is\n","costly and approximate. 
Also, the approximation error might be further amplified when the data\n","is non-IID&#xA0;<a href=\"#ref-liu2022right\">(Yi Liu et\n","al. 2022)</a>. In another direction, Liu et al.&#xA0;<a href=\"#ref-liu2022right\">(Yi Liu et al. 2022)</a> proposed a smart\n","retraining method for federated unlearning without communication\n","protocols. The approach uses the L-BFGS algorithm&#xA0;<a href=\"#ref-berahas2016multi bollapragada2018progressive\">(Berahas,\n","Nocedal, et al. 2016; Bollapragada et al. 2018)</a> to efficiently\n","compute a Hessian approximation from historical parameter updates for\n","global model retraining. However, this method is only applicable to\n","small models (<span class=\"math inline\">&#x2264;</span> 10K parameters). Moreover,\n","it involves storing old model snapshots (including historical gradients\n","and parameters), which poses some privacy threats.</p>\n","\n","<h2 id=\"unlearning-in-graph-embedding\">6.3. Unlearning in Graph\n","Embedding</h2>\n","<p>So far, the data in machine unlearning settings has been assumed to be\n","independent. However, there are many cases where the data samples are\n","relational, such as is the case with graph data. Graph representation\n","learning is a well-established research direction in machine learning,\n","specifically in deep learning&#xA0;<a href=\"#ref-hamilton2020graph chen2020graph\">(Hamilton 2020; F. Chen et\n","al. 2020)</a>. However, applying machine unlearning to graph\n","representation learning is arguably more challenging. First, the data is\n","correlated, and it is non-trivial to partition the data, even uniformly.\n","Second, the unlearning requests can happen upon a node or an edge.\n","Third, the graph data itself might be non-uniform due to unbalanced\n","connected components in the graph. 
Therefore, existing graph partition\n","methods might lead to unbalanced data partitions, making the retraining\n","process non-uniform.</p>\n","<p>To mitigate these problems, Chen et al.&#xA0;<a href=\"#ref-chen2021graph\">(M. Chen et al. 2021a)</a> proposed new\n","graph partitioning strategies specifically for machine unlearning. The\n","general idea is based on the notion of assignment preference, which\n","represents the benefit of assigning a node to a shard (i.e., a data\n","partition). Such node-shard pairs are further fine-tuned with neighbor\n","counts, which track the number of neighbors of a node belonging to\n","the same target shard. The authors also proposed an aggregation method\n","to combine different partition strategies. Further, the retraining\n","process is based on message passing in graph neural networks, which\n","facilitates fast retraining.</p>\n","<p>Unlearning without retraining is also possible for graph embedding.\n","However, several challenges need to be overcome. The interdependency of\n","graph data, especially across different subgraphs, is non-trivial for\n","model training. Removing a node or an edge impacts not only its\n","immediate neighbors but also nodes multiple hops away. Cong et al.&#xA0;<a href=\"#ref-congprivacy conggrapheditor\">(Cong and\n","Mahdavi 2022b, 2022a)</a> proposed a one-shot unlearning solution\n","that only requires access to the data to be forgotten. The idea is\n","inspired by the architecture of a linear graph neural network (GNN), in\n","which non-linearities in a typical GNN are replaced by a single weight\n","matrix between consecutive convolutional layers. Despite its linear span\n","over all input node features, such linear GNNs have shown competitive\n","performance, e.g.&#xA0;SGC&#xA0;<a href=\"#ref-wu2019simplifying\">(F. Wu et al. 2019)</a> and\n","APPNP&#xA0;<a href=\"#ref-gasteiger2018combining\">(Gasteiger, Bojchevski, and\n","G&#xFC;nnemann 2019)</a>. 
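The neighbor-count intuition behind such balanced partitioning can be illustrated with a small greedy sketch. The helper below is hypothetical and for illustration only; Chen et al.'s actual method additionally learns assignment preferences and aggregates multiple partition strategies:

```python
from collections import defaultdict

def balanced_neighbor_partition(nodes, edges, num_shards, capacity):
    # Greedily assign each node to the shard that already holds most of
    # its neighbors (the "neighbor count" preference), subject to a
    # per-shard capacity that keeps the partitions balanced.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    assignment, load = {}, [0] * num_shards
    for node in nodes:
        candidates = [
            (sum(1 for nb in adj[node] if assignment.get(nb) == s), -load[s], s)
            for s in range(num_shards) if load[s] < capacity
        ]
        _, _, shard = max(candidates)  # most neighbors, then least loaded
        assignment[node] = shard
        load[shard] += 1
    return assignment

# Hypothetical 4-node graph with two components, split into 2 shards.
part = balanced_neighbor_partition([0, 1, 2, 3], [(0, 1), (2, 3)], 2, 2)
print(part)  # connected nodes end up in the same shard
```

Upon a removal request, only the shard containing the deleted node needs to be retrained, which is the efficiency argument behind partition-based graph unlearning.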
Using this property, Cong et al.&#xA0;<a href=\"#ref-congprivacy conggrapheditor\">(Cong and\n","Mahdavi 2022b, 2022a)</a> proposed an exact unlearning process at the\n","algorithmic level based on linear operations such as projection and\n","recombination.</p>\n","\n","<h2 id=\"unlearning-in-lifelong-learning\">6.4. Unlearning in Lifelong\n","Learning</h2>\n","<p>Unlearning is not always a bad thing for the accuracy of a machine\n","learning model. Machine unlearning has been researched as a\n","countermeasure against catastrophic forgetting in deep neural\n","networks&#xA0;<a href=\"#ref-du2019lifelong parne2021machine liu2022continual\">(Du, Chen,\n","et al. 2019; Parne et al. 2021; B. Liu, Liu, et al. 2022)</a>.\n","Catastrophic forgetting is a phenomenon where deep neural networks\n","perform badly after learning too many tasks&#xA0;<a href=\"#ref-kirkpatrick2017overcoming\">(Kirkpatrick et al. 2017)</a>.\n","One naive solution to this problem is training the model on the\n","historical data again. Clearly, this solution is impractical not only\n","due to computational cost but also because there is no guarantee that\n","the model will converge, nor that the forgetting\n","will not happen again&#xA0;<a href=\"#ref-parisi2019continual\">(Parisi et al. 2019)</a>. Du et\n","al.&#xA0;<a href=\"#ref-du2019lifelong\">(Du, Chen, et al.\n","2019)</a> suggested a solution based on unlearning to prevent\n","catastrophic forgetting. The core idea is to unlearn harmful samples\n","(e.g., false negatives/positives) and then update the model so that its\n","performance from before the forgetting effect is maintained.</p>\n","<p>Unlearning has also been used to handle exploding losses in machine\n","learning. 
Here, the term loss involves the computation of <span\n","class=\"math inline\">$-\log Pr(x)$</span>, and,\n","when <span class=\"math inline\"><em>P</em><em>r</em>(<em>x</em>)</span>\n","is approximately zero, the loss may be arbitrarily large. The\n","problem is more severe in anomaly detection, where normal samples can\n","have very small <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>x</em>)</span> (abnormal\n","samples have very large <span\n","class=\"math inline\"><em>P</em><em>r</em>(<em>x</em>)</span> and their\n","sum of probabilities is one). Du et al.&#xA0;<a href=\"#ref-du2019lifelong\">(Du, Chen, et al. 2019)</a> hence\n","proposed an unlearning method to mitigate this problem with an\n","unlearning loss that regularizes those extreme cases.</p>\n","<p>Unlearning has been studied for other lifelong settings as well&#xA0;<a href=\"#ref-parne2021machine\">(Parne et al.\n","2021)</a>. These settings use incremental models, such as decision\n","trees and naive Bayes, which allow the model to unlearn data samples\n","on-the-fly. Liu et al.&#xA0;<a href=\"#ref-liu2022continual\">(B. Liu, Liu, et al. 2022)</a>\n","considered requests to unlearn specific tasks in lifelong models. In\n","particular, there are three types of requests in lifelong learning: (i)\n","to learn a task permanently, (ii) to learn a task temporarily and forget\n","it later upon a privacy request, and (iii) to forget a task. Different\n","from traditional machine unlearning, unlearning in lifelong learning\n","needs to maintain knowledge transfer between tasks while preserving all\n","knowledge of the remaining tasks. Moreover, the setting is more\n","challenging as it depends on the order of tasks, since the tasks are learnt\n","online during the model lifetime. Additionally, the model cannot keep\n","all previous data (zero-glance unlearning), making the unlearning\n","process more challenging. 
Liu et al.&#xA0;<a href=\"#ref-liu2022continual\">(B. Liu, Liu, et al. 2022)</a> proposed\n","a solution inspired by SISA, the data partitioning mechanism for smart\n","retraining&#xA0;<a href=\"#ref-bourtoule2021machine\">(Bourtoule et al. 2021)</a>. It\n","creates an isolated temporary model for each task and merges the\n","isolated models into the main model.</p>\n","\n","\n","<h1 id=\"sec:discussion\">7. Discussion and Future Prospects</h1>\n","<p>In this section, we analyze the current and potential developments in\n","machine unlearning and summarize our findings. In addition, we identify\n","a number of unanswered research questions that could be addressed to\n","advance the foundation of machine unlearning.</p>\n","\n","<h2 id=\"summary-and-trends\">7.1. Summary and Trends</h2>\n","<p><strong>Influence functions are dominant methods.</strong>\n","Understanding the impact of a given data item on a model&#x2019;s parameters or\n","model performance is the key to machine unlearning&#xA0;<a href=\"#ref-koh2017understanding basu2021influence\">(Koh et al. 2017;\n","Basu, Pope, and Feizi 2021)</a>. Such insights will speed up the\n","unlearning process immensely by simply reversing the model updates\n","associated with the target data. Although doing so introduces some\n","approximation error, promising results have shown that this error can be\n","bounded&#xA0;<a href=\"#ref-mahadevan2021certifiable mahadevan2022certifiable chundawat2022zero\">(Mahadevan\n","and Mathioudakis 2021, 2022; Chundawat et al. 2022b)</a>.</p>\n","<p><strong>Reachability of model parameters.</strong> Existing works\n","define unlearning as obtaining a new model with an accuracy as good as\n","if the model had been retrained without the data to be forgotten&#xA0;<a href=\"#ref-golatkar2020forgetting golatkar2020eternal graves2021amnesiac thudi2021necessity\">(Golatkar,\n","Achille, and Soatto 2020b, 2020a; Graves, Nagisetty, and Ganesh 2021;\n","Thudi, Jia, et al. 2022)</a>. 
We argue that such a parameter-space\n","assumption should be taken into serious consideration. As model\n","parameters can be reached with or without some given data, is there\n","any case where the original and unlearned models share the same\n","parameters?&#xA0;<a href=\"#ref-thudi2021necessity\">(Thudi, Jia, et al. 2022)</a>\n","Although some studies use parameter distribution to bound the\n","problem&#xA0;<a href=\"#ref-GuoGHM20\">(C. Guo et al.\n","2020)</a>, there could still be false positive cases, where some\n","effects from the forgotten data still exist in the unlearned model.</p>\n","<p><strong>Unlearning verification (Data auditing) is needed.</strong>\n","Unlearning verification (or data auditing) is the process of determining\n","whether specific data have been eliminated from a model. To fully enable\n","regulations over the right to be forgotten, the unlearning effect should\n","be independently verified. There have been only a few works on\n","unlearning verification&#xA0;<a href=\"#ref-gao2022verifi huang2021mathsf\">(X. Gao et al. 2022; Y.\n","Huang, Li, et al. 2021)</a>. However, the definition of a successful\n","verification is still controversial as different unlearning solutions\n","use different evaluation metrics, especially as the cut-off threshold\n","of a verification metric still depends on the application domain&#xA0;<a href=\"#ref-thudi2021necessity\">(Thudi, Jia, et al.\n","2022)</a>.</p>\n","<p><strong>Federated unlearning is emerging.</strong> Federated learning\n","brings about a unique setting for machine unlearning research&#xA0;<a href=\"#ref-liu2021federaser wang2022federated wu2022federated liu2020learn\">(G.\n","Liu et al. 2021; J. Wang, Guo, et al. 2022; C. Wu et al. 2022; Yang Liu\n","et al. 2020)</a>. It has separate clients participating in the\n","federated training process. As a result, removing a client from the\n","federation could be done precisely using historical updates. 
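As an illustration of erasing a client's logged contributions, the naive mechanics can be sketched as follows. The update-log layout is hypothetical, and, as discussed in Section 6.2, plain subtraction of historical updates can damage the global model; the sketch only conveys the intuition:

```python
import numpy as np

def erase_client(global_weights, update_log, client_id):
    # Naively remove a client's influence by subtracting its logged,
    # server-side aggregated updates from the current global model.
    # In practice, calibration (FedEraser-style) or retraining would be
    # needed to repair the damage this can cause.
    weights = {name: w.copy() for name, w in global_weights.items()}
    for round_update in update_log[client_id]:
        for name, delta in round_update.items():
            weights[name] -= delta
    return weights

# Hypothetical global model with one layer, plus two logged rounds of
# updates attributed to client "c1".
global_w = {"layer0": np.array([1.0, 2.0])}
log = {"c1": [{"layer0": np.array([0.1, 0.1])},
              {"layer0": np.array([0.2, -0.1])}]}
print(erase_client(global_w, log, "c1")["layer0"])  # [1.0-0.3, 2.0-0.0]
```

The original global weights are left untouched, so the server can fall back to them if the subtracted model degrades too much.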
The\n","rationale behind this is that the user data on a client mostly helps to\n","make correct predictions about that user. This locality helps to avoid the\n","catastrophic unlearning phenomenon seen in traditional machine learning\n","settings. However, it should be noted that there are many cases in\n","federated learning where the data is non-IID or the removal request only\n","covers part of the client data.</p>\n","<p><strong>Model repair via unlearning.</strong> Machine learning models\n","can be poisoned by adversarial attacks&#xA0;<a href=\"#ref-wang2019neural liu2022backdoor\">(B. Wang et al. 2019; Yang\n","Liu et al. 2022)</a>. Intuitively, if the poisonous data is detected\n","and removed and then the model is retrained, the new model should be\n","poison-free. However, the retraining would be too expensive. This is\n","indeed similar to the unlearning setting. Compared to existing defence\n","methods, unlearning-based repair first determines the inner\n","problematic weights and then updates them through influence functions.</p>\n","<p>A similar application is to remove bias from the model due to some\n","biased feature in the data&#xA0;<a href=\"#ref-dinsdale2021deep dinsdale2020unlearning\">(Dinsdale,\n","Jenkinson, and Namburete 2021; Dinsdale, Jenkinson, et al. 2020)</a>.\n","Current studies on fairness and de-biasing learning mostly focus on\n","learning a fair and unbiased feature representation&#xA0;<a href=\"#ref-ramaswamy2021fair nam2020learning singh2022anatomizing wang2020towards\">(Ramaswamy,\n","Kim, and Russakovsky 2021; Nam et al. 2020; R. Singh et al. 2022; Z.\n","Wang et al. 2020)</a>, whereas machine unlearning, e.g.&#xA0;feature\n","unlearning&#xA0;<a href=\"#ref-guo2022efficient\">(T. Guo\n","et al. 
2022)</a>, would ensure the biased features are deleted\n","properly while the model&#x2019;s quality is still maintained.</p>\n","<p>In another setting, machine unlearning can be used to repair\n","overtrained deep neural networks by actively unlearning useless,\n","obsolete, or redundant data samples that could cause catastrophic\n","forgetting&#xA0;<a href=\"#ref-du2019lifelong golatkar2020eternal\">(Du, Chen, et al. 2019;\n","Golatkar, Achille, and Soatto 2020a)</a>. Moreover, machine\n","unlearning might be used to boost the model&#x2019;s accuracy as well&#xA0;<a\n","href=\"#fn2\" class=\"footnote-ref\" id=\"fnref2\"\n","role=\"doc-noteref\"><sup>2</sup></a>, e.g.&#xA0;as forgetting is similar to\n","compressing in information bottleneck theory&#xA0;<a href=\"#ref-shwartz2017opening tishby2000information tishby2015deep\">(Shwartz-Ziv\n","and Tishby 2017; Tishby et al. 2000; Tishby and Zaslavsky\n","2015)</a>&#xA0;<a href=\"#fn3\" class=\"footnote-ref\" id=\"fnref3\"\n","role=\"doc-noteref\"><sup>3</sup></a>.</p>\n","\n","<h2 id=\"open-research-questions\">7.2. Open Questions</h2>\n","<p>There are several open questions that future studies can\n","address. This section lists and discusses these fundamental topics in\n","machine unlearning.</p>\n","<p><strong>Unified Design Requirements.</strong> Among the current\n","unlearning approaches, there is no absolute winner that satisfies all\n","design requirements. Most unlearning algorithms focus on approximate\n","unlearning scenarios and data item removal (<a\n","href=\"#tab:unlearning_comparison\" data-reference-type=\"autoref\"\n","data-reference=\"tab:unlearning_comparison\">[tab:unlearning_comparison]</a>).\n","However, there are other types of practical unlearning scenarios that\n","need to be considered, such as zero-glance, zero-shot, and few-shot\n","unlearning. 
Likewise, there are other types of removal requests that must\n","be handled, e.g., feature removal, class removal, task removal, stream\n","removal, and so on. Moreover, satisfying all design requirements &#x2013;\n","completeness, timeliness, accuracy, etc. &#x2013; would make unlearning\n","solutions more applicable to industry-grade systems.</p>\n","<p><strong>Unified Benchmarking.</strong> Although there have been many\n","works on machine unlearning recently, not many of them have a common\n","setting for benchmarking comparisons. In particular, there is little\n","published source code (<a href=\"#tab:algorithms\"\n","data-reference-type=\"autoref\"\n","data-reference=\"tab:algorithms\">[tab:algorithms]</a>) and each work\n","targets different learning algorithms or different applications\n","(e.g.&#xA0;recommender systems, graph embedding). Schelter&#xA0;<a href=\"#ref-schelter2020amnesia\">(Schelter 2020)</a>\n","undertook an empirical study, but the benchmark was limited to\n","decremental learning methods and focused only on efficiency.</p>\n","<p><strong>Adversarial Machine Unlearning.</strong> More studies have\n","been undertaken on attacking ML systems so that we can gain a better\n","understanding of and better protect our systems&#xA0;<a href=\"#ref-veale2018algorithms wang2019neural ren2020adversarial\">(Veale,\n","Binns, and Edwards 2018; B. Wang et al. 2019; K. Ren et al.\n","2020)</a>. Adversarial machine unlearning is the study of attacks\n","on unlearning algorithms to better certify the unlearned models&#xA0;<a href=\"#ref-gao2022deletion marchant2022hard\">(J. Gao\n","et al. 2022; Marchant, Rubinstein, and Alfeld 2022)</a>. 
Unlike using\n","machine unlearning to mitigate adversarial attacks&#xA0;<a href=\"#ref-liu2022backdoor\">(Yang Liu et al.\n","2022)</a>, adversarial machine unlearning is far stricter, as it\n","concerns not only model accuracy but also privacy guarantees.\n","For example, an adversary might not have knowledge of\n","the learning algorithms, but it might have access to the unlearning\n","process.</p>\n","<p><strong>Interpretable Machine Unlearning.</strong> In the future,\n","explanations for machine unlearning can be used to increase confidence\n","in human-AI interactions and enable unlearning verification or removed\n","data auditing&#xA0;<a href=\"#ref-gao2022verifi huang2021mathsf\">(X. Gao et al. 2022; Y.\n","Huang, Li, et al. 2021)</a>. However, the inverted nature of machine\n","unlearning might prevent existing explanation methods from being applicable\n","at all. Devising techniques aimed at explaining the unlearning process\n","(e.g.&#xA0;using influence functions) is still an unsolved task&#xA0;<a href=\"#ref-koh2017understanding basu2021influence\">(Koh et al. 2017;\n","Basu, Pope, and Feizi 2021)</a>.</p>\n","<p><strong>Machine Unlearning in Evolving Data Streams.</strong>\n","Evolving data streams pose problems for machine learning models,\n","especially neural networks, due to shifts in the data distributions and\n","the model predictions&#xA0;<a href=\"#ref-haug2021learning\">(Haug and Kasneci 2021)</a>. Although\n","there are ways to overcome this limitation&#xA0;<a href=\"#ref-duda2020training\">(Duda et al. 2020)</a>, they rely on\n","the changes in the model parameters to detect concept drift&#xA0;<a href=\"#ref-haug2021learning\">(Haug and Kasneci\n","2021)</a>. However, such detection might not be reliable in\n","unlearning settings, where the changes in model parameters are\n","approximate. 
Consequently, it is expected that machine unlearning for\n","streaming removal requests may attract more attention in the next few\n","years. It is noteworthy that unlearning might be used to repair obsolete\n","models by forgetting old data that contradicts the detected concept\n","drift. However, this requires a contradiction analysis between old and\n","new data&#xA0;<a href=\"#ref-hu2021distilling\">(Hu et al.\n","2021)</a>.</p>\n","<p>A similar setting is the consideration of out-of-distribution (OOD)\n","data in the forget set. In such settings, unlearning is\n","imbalanced: some data might have no impact, while other data has great\n","influence on the model parameters. There are studies on learning\n","algorithms for OOD data in federated learning&#xA0;<a href=\"#ref-sattler2021fedaux\">(Sattler et al. 2021)</a>. Hence, it\n","may be worthwhile investigating novel unlearning algorithms tailored to\n","OOD data.</p>\n","<p><strong>Causality in Machine Unlearning.</strong> There are cases\n","where a large amount of data might need to be removed from a machine\n","learning system, even though the portion of data initially identified for\n","forgetting might be insignificant in comparison to all the data. For example, a data\n","pollution attack might affect millions of data items, but only a few of\n","them can be detected by human experts or SOTA detection methods&#xA0;<a href=\"#ref-cao2018efficient\">(Y. Cao et al.\n","2018)</a>. Causality analysis&#xA0;<a href=\"#ref-guo2020survey\">(R. Guo et al. 2020)</a> could become a\n","useful tool to automatically unlearn the polluted data in this setting\n","and guarantee the non-existence of the polluted information in the final\n","model.</p>\n","\n","<h1 id=\"sec:conclusion\">8. Conclusions</h1>\n","<p>This survey is the first to investigate machine unlearning techniques\n","in a systematic manner. 
In this survey, we addressed the primary\n","difficulties and research advancements in conceptualizing, planning, and\n","solving the problems of machine unlearning. In addition, we presented a\n","unified taxonomy that divides machine unlearning strategies into three\n","approaches: model-agnostic methods, model-intrinsic methods, and\n","data-driven methods. We hope that our taxonomy can help categorize\n","future studies, provide deeper insight into their methodologies, and\n","help address the difficulties in machine unlearning. We also expect this\n","survey to assist researchers in identifying the most suitable unlearning\n","strategies for different applications. The survey provides clear\n","summaries of and comparisons between various unlearning methodologies,\n","giving a comprehensive view of current work and of the state of\n","machine unlearning.</p>
<span>&#x201C;Influence\n","Functions in Deep Learning Are Fragile.&#x201D;</span> In <em>ICLR</em>.\n","</div>\n","<div id=\"ref-baumhauer2020machine\" class=\"csl-entry\" role=\"listitem\">\n","Baumhauer, Thomas, Pascal Sch&#xF6;ttle, and Matthias Zeppelzauer. 2020.\n","<span>&#x201C;Machine Unlearning: Linear Filtration for Logit-Based\n","Classifiers.&#x201D;</span> <em>arXiv Preprint arXiv:2002.02730</em>.\n","</div>\n","<div id=\"ref-becker2022epistemic\" class=\"csl-entry\" role=\"listitem\">\n","Becker, Alexander, and Thomas Liebig. 2022. <span>&#x201C;Evaluating Machine\n","Unlearning via Epistemic Uncertainty.&#x201D;</span>\n","</div>\n","<div id=\"ref-berahas2016multi\" class=\"csl-entry\" role=\"listitem\">\n","Berahas, Albert S, Jorge Nocedal, et al. 2016. <span>&#x201C;A Multi-Batch\n","l-BFGS Method for Machine Learning.&#x201D;</span> <em>NIPS</em> 29.\n","</div>\n","<div id=\"ref-bitansky2012extractable\" class=\"csl-entry\" role=\"listitem\">\n","Bitansky, Nir, Ran Canetti, Alessandro Chiesa, and Eran Tromer. 2012.\n","<span>&#x201C;From Extractable Collision Resistance to Succinct Non-Interactive\n","Arguments of Knowledge, and Back Again.&#x201D;</span> In <em>ITCS</em>,\n","326&#x2013;49.\n","</div>\n","<div id=\"ref-bollapragada2018progressive\" class=\"csl-entry\"\n","role=\"listitem\">\n","Bollapragada, Raghu, Jorge Nocedal, Dheevatsa Mudigere, Hao-Jun Shi, and\n","Ping Tak Peter Tang. 2018. <span>&#x201C;A Progressive Batching l-BFGS Method\n","for Machine Learning.&#x201D;</span> In <em>ICML</em>, 620&#x2013;29.\n","</div>\n","<div id=\"ref-bourtoule2021machine\" class=\"csl-entry\" role=\"listitem\">\n","Bourtoule, Lucas, Varun Chandrasekaran, Christopher A Choquette-Choo,\n","Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas\n","Papernot. 2021. 
<span>&#x201C;Machine Unlearning.&#x201D;</span> In <em>SP</em>,\n","141&#x2013;59.\n","</div>\n","<div id=\"ref-brophy2021machine\" class=\"csl-entry\" role=\"listitem\">\n","Brophy, Jonathan, and Daniel Lowd. 2021. <span>&#x201C;Machine Unlearning for\n","Random Forests.&#x201D;</span> In <em>ICML</em>, 1092&#x2013;1104.\n","</div>\n","<div id=\"ref-cao2015towards\" class=\"csl-entry\" role=\"listitem\">\n","Cao, Yinzhi, and Junfeng Yang. 2015. <span>&#x201C;Towards Making Systems\n","Forget with Machine Unlearning.&#x201D;</span> In <em>2015 IEEE Symposium on\n","Security and Privacy</em>, 463&#x2013;80.\n","</div>\n","<div id=\"ref-cao2018efficient\" class=\"csl-entry\" role=\"listitem\">\n","Cao, Yinzhi, Alexander Fangxiao Yu, Andrew Aday, Eric Stahl, Jon\n","Merwine, and Junfeng Yang. 2018. <span>&#x201C;Efficient Repair of Polluted\n","Machine Learning Systems via Causal Unlearning.&#x201D;</span> In\n","<em>ASIACCS</em>, 735&#x2013;47.\n","</div>\n","<div id=\"ref-cao2022machine\" class=\"csl-entry\" role=\"listitem\">\n","Cao, Zihao, Jianzong Wang, Shijing Si, Zhangcheng Huang, and Jing Xiao.\n","2022. <span>&#x201C;Machine Unlearning Method Based on Projection\n","Residual.&#x201D;</span> In <em>DSAA</em>, 1&#x2013;8.\n","</div>\n","<div id=\"ref-cauwenberghs2000incremental\" class=\"csl-entry\"\n","role=\"listitem\">\n","Cauwenberghs, Gert et al. 2000. <span>&#x201C;Incremental and Decremental\n","Support Vector Machine Learning.&#x201D;</span> <em>NIPS</em> 13.\n","</div>\n","<div id=\"ref-chang2022example\" class=\"csl-entry\" role=\"listitem\">\n","Chang, Yi, Zhao Ren, Thanh Tam Nguyen, Wolfgang Nejdl, and Bj&#xF6;rn W\n","Schuller. 2022. <span>&#x201C;Example-Based Explanations with Adversarial\n","Attacks for Respiratory Sound Analysis.&#x201D;</span> In <em>INTERSPEECH</em>.\n","</div>\n","<div id=\"ref-chaudhuri2011differentially\" class=\"csl-entry\"\n","role=\"listitem\">\n","Chaudhuri, Kamalika, Claire Monteleoni, and Anand D Sarwate. 
2011.\n","<span>&#x201C;Differentially Private Empirical Risk Minimization.&#x201D;</span>\n","<em>JMLR</em> 12 (3).\n","</div>\n","<div id=\"ref-chen2022recommendation\" class=\"csl-entry\" role=\"listitem\">\n","Chen, Chong, Fei Sun, Min Zhang, and Bolin Ding. 2022.\n","<span>&#x201C;Recommendation Unlearning.&#x201D;</span> In <em>WWW</em>, 2768&#x2013;77.\n","</div>\n","<div id=\"ref-chen2020graph\" class=\"csl-entry\" role=\"listitem\">\n","Chen, Fenxiao, Yun-Cheng Wang, Bin Wang, et al. 2020. <span>&#x201C;Graph\n","Representation Learning: A Survey.&#x201D;</span> <em>ATSIP</em> 9.\n","</div>\n","<div id=\"ref-chen2021machinegan\" class=\"csl-entry\" role=\"listitem\">\n","Chen, Kongyang, Yao Huang, et al. 2021. <span>&#x201C;Machine Unlearning via\n","GAN.&#x201D;</span> <em>arXiv Preprint arXiv:2111.11869</em>.\n","</div>\n","<div id=\"ref-chen2021graph\" class=\"csl-entry\" role=\"listitem\">\n","Chen, Min, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert,\n","and Yang Zhang. 2021a. <span>&#x201C;Graph Unlearning.&#x201D;</span> <em>arXiv\n","Preprint arXiv:2103.14991</em>.\n","</div>\n","<div id=\"ref-chen2021machine\" class=\"csl-entry\" role=\"listitem\">\n","Chen, Min, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert,\n","and Yang Zhang. 2021b. <span>&#x201C;When Machine Unlearning Jeopardizes Privacy.&#x201D;</span>\n","In <em>SIGSAC</em>, 896&#x2013;911.\n","</div>\n","<div id=\"ref-chen2019novel\" class=\"csl-entry\" role=\"listitem\">\n","Chen, Yuantao, Jie Xiong, Weihong Xu, and Jingwen Zuo. 2019. <span>&#x201C;A\n","Novel Online Incremental and Decremental Learning Algorithm Based on\n","Variable Support Vector Machine.&#x201D;</span> <em>Cluster Computing</em> 22\n","(3): 7435&#x2013;45.\n","</div>\n","<div id=\"ref-cheng2023gnndelete\" class=\"csl-entry\" role=\"listitem\">\n","Cheng, Jiali, George Dasoulas, Huan He, Chirag Agarwal, and Marinka Zitnik. 2023. 
<span>&#x201C;GNNDelete: A General Strategy for Unlearning in Graph Neural Networks.&#x201D;</span> <em>arXiv Preprint arXiv:2302.13406</em>.\n","</div>\n","<div id=\"ref-chien2022certified\" class=\"csl-entry\" role=\"listitem\">\n","Chien, Eli, Chao Pan, et al. 2022. <span>&#x201C;Certified Graph\n","Unlearning.&#x201D;</span> <em>arXiv Preprint arXiv:2206.09140</em>.\n","</div>\n","<div id=\"ref-chundawat2022can\" class=\"csl-entry\" role=\"listitem\">\n","Chundawat, Vikram S, Ayush K Tarun, Murari Mandal, and Mohan\n","Kankanhalli. 2022a. <span>&#x201C;Can Bad Teaching Induce Forgetting?\n","Unlearning in Deep Networks Using an Incompetent Teacher.&#x201D;</span>\n","<em>arXiv Preprint arXiv:2205.08096</em>.\n","</div>\n","<div id=\"ref-chundawat2022zero\" class=\"csl-entry\" role=\"listitem\">\n","Chundawat, Vikram S, Ayush K Tarun, Murari Mandal, and Mohan\n","Kankanhalli. 2022b. <span>&#x201C;Zero-Shot Machine Unlearning.&#x201D;</span> <em>arXiv\n","Preprint arXiv:2201.05629</em>.\n","</div>\n","<div id=\"ref-conggrapheditor\" class=\"csl-entry\" role=\"listitem\">\n","Cong, Weilin, and Mehrdad Mahdavi. 2022a. <span>&#x201C;GRAPHEDITOR: An\n","Efficient Graph Representation Learning and Unlearning Approach.&#x201D;</span>\n","<em><a href=\"https://congweilin.github.io/CongWeilin.io/\"\n","class=\"uri\">Https://Congweilin.github.io/CongWeilin.io/</a></em>.\n","</div>\n","<div id=\"ref-congprivacy\" class=\"csl-entry\" role=\"listitem\">\n","Cong, Weilin, and Mehrdad Mahdavi. 2022b. <span>&#x201C;Privacy Matters! Efficient Graph Representation\n","Unlearning with Data Removal Guarantee.&#x201D;</span> <em><a\n","href=\"https://congweilin.github.io/CongWeilin.io/\"\n","class=\"uri\">Https://Congweilin.github.io/CongWeilin.io/</a></em>.\n","</div>\n","<div id=\"ref-cong2023efficiently\" class=\"csl-entry\" role=\"listitem\">\n","Cong, Weilin, and Mehrdad Mahdavi. 
2023.\n","<span>&#x201C;Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection.&#x201D;</span> In\n","<em>AISTAT</em>, 6674&#x2013;6703.\n","</div>\n","<div id=\"ref-DaiDHSCW22\" class=\"csl-entry\" role=\"listitem\">\n","Dai, Damai, Li Dong, et al. 2022. <span>&#x201C;Knowledge Neurons in Pretrained\n","Transformers.&#x201D;</span> In <em>ACL</em>, 8493&#x2013;8502.\n","</div>\n","<div id=\"ref-dang2021right\" class=\"csl-entry\" role=\"listitem\">\n","Dang, Quang-Vinh. 2021. <span>&#x201C;Right to Be Forgotten in the Age of\n","Machine Learning.&#x201D;</span> In <em>ICADS</em>, 403&#x2013;11.\n","</div>\n","<div id=\"ref-deng2009imagenet\" class=\"csl-entry\" role=\"listitem\">\n","Deng, Jia, Wei Dong, Richard Socher, et al. 2009. <span>&#x201C;ImageNet: A\n","Large-Scale Hierarchical Image Database.&#x201D;</span> In <em>CVPR</em>,\n","248&#x2013;55.\n","</div>\n","<div id=\"ref-di2022hidden\" class=\"csl-entry\" role=\"listitem\">\n","Di, Jimmy Z, Jack Douglas, Jayadev Acharya, Gautam Kamath, and Ayush Sekhari. 2022.\n","<span>&#x201C;Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks.&#x201D;</span> In <em>NeurIPS 2022 ML Safety Workshop</em>.\n","</div>\n","<div id=\"ref-dinsdale2020unlearning\" class=\"csl-entry\" role=\"listitem\">\n","Dinsdale, Nicola K, Mark Jenkinson, et al. 2020. <span>&#x201C;Unlearning\n","Scanner Bias for MRI Harmonisation.&#x201D;</span> In <em>MICCAI</em>, 369&#x2013;78.\n","</div>\n","<div id=\"ref-dinsdale2021deep\" class=\"csl-entry\" role=\"listitem\">\n","Dinsdale, Nicola K, Mark Jenkinson, and Ana IL Namburete. 2021.\n","<span>&#x201C;Deep Learning-Based Unlearning of Dataset Bias for MRI\n","Harmonisation and Confound Removal.&#x201D;</span> <em>NeuroImage</em> 228:\n","117689.\n","</div>\n","<div id=\"ref-du2019lifelong\" class=\"csl-entry\" role=\"listitem\">\n","Du, Min, Zhi Chen, et al. 2019. 
<span>&#x201C;Lifelong Anomaly Detection\n","Through Unlearning.&#x201D;</span> In <em>SIGSAC</em>, 1283&#x2013;97.\n","</div>\n","<div id=\"ref-duan2007decremental\" class=\"csl-entry\" role=\"listitem\">\n","Duan, Hua, Hua Li, Guoping He, and Qingtian Zeng. 2007.\n","<span>&#x201C;Decremental Learning Algorithms for Nonlinear Lagrangian and\n","Least Squares Support Vector Machines.&#x201D;</span> In <em>OSB</em>, 358&#x2013;66.\n","</div>\n","<div id=\"ref-duda2020training\" class=\"csl-entry\" role=\"listitem\">\n","Duda, Piotr, Maciej Jaworski, Andrzej Cader, and Lipo Wang. 2020.\n","<span>&#x201C;On Training Deep Neural Networks Using a Streaming\n","Approach.&#x201D;</span> <em>JAISCR</em> 10.\n","</div>\n","<div id=\"ref-dwork2008differential\" class=\"csl-entry\" role=\"listitem\">\n","Dwork, Cynthia. 2008. <span>&#x201C;Differential Privacy: A Survey of\n","Results.&#x201D;</span> In <em>TAMC</em>, 1&#x2013;19.\n","</div>\n","<div id=\"ref-dwork2014algorithmic\" class=\"csl-entry\" role=\"listitem\">\n","Dwork, Cynthia, Aaron Roth, et al. 2014. <span>&#x201C;The Algorithmic\n","Foundations of Differential Privacy.&#x201D;</span> <em>Foundations and\n","Trends in Theoretical Computer Science</em> 9 (3&#x2013;4):\n","211&#x2013;407.\n","</div>\n","<div id=\"ref-eisenhofer2022verifiable\" class=\"csl-entry\"\n","role=\"listitem\">\n","Eisenhofer, Thorsten, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh,\n","Olga Ohrimenko, and Nicolas Papernot. 2022. <span>&#x201C;Verifiable and\n","Provably Secure Machine Unlearning.&#x201D;</span> <em>arXiv Preprint\n","arXiv:2210.09126</em>.\n","</div>\n","<div id=\"ref-felps2020class\" class=\"csl-entry\" role=\"listitem\">\n","Felps, Daniel L, Amelia D Schwickerath, Joyce D Williams, Trung N Vuong,\n","Alan Briggs, Matthew Hunt, Evan Sakmar, David D Saranchak, and Tyler\n","Shumaker. 2020. 
<span>&#x201C;Class Clown: Data Redaction in Machine Unlearning\n","at Enterprise Scale.&#x201D;</span> <em>arXiv Preprint arXiv:2012.04699</em>.\n","</div>\n","<div id=\"ref-feuerriegel2020fair\" class=\"csl-entry\" role=\"listitem\">\n","Feuerriegel, Stefan, Mateusz Dolata, and Gerhard Schwabe. 2020.\n","<span>&#x201C;Fair AI.&#x201D;</span> <em>Business &amp; Information Systems\n","Engineering</em> 62 (4): 379&#x2013;84.\n","</div>\n","<div id=\"ref-fraboni2022sequential\" class=\"csl-entry\" role=\"listitem\">\n","Fraboni, Yann, Richard Vidal, Laetitia Kameni, and Marco Lorenzi. 2022. <span>&#x201C;Sequential Informed Federated Unlearning: Efficient and Provable Client Unlearning in Federated Optimization.&#x201D;</span> <em>arXiv Preprint arXiv:2211.11656</em>.\n","</div>\n","<div id=\"ref-fredrikson2014privacy\" class=\"csl-entry\" role=\"listitem\">\n","Fredrikson, Matthew, Eric Lantz, Somesh Jha, Simon Lin, David Page, and\n","Thomas Ristenpart. 2014. <span>&#x201C;Privacy in Pharmacogenetics: An\n","End-to-End Case Study of Personalized Warfarin Dosing.&#x201D;</span> In <em>USENIX\n","Security</em>, 17&#x2013;32.\n","</div>\n","<div id=\"ref-fu2022knowledge\" class=\"csl-entry\" role=\"listitem\">\n","Fu, Shaopeng, Fengxiang He, et al. 2022. <span>&#x201C;Knowledge Removal in\n","Sampling-Based Bayesian Inference.&#x201D;</span> In <em>ICLR</em>.\n","</div>\n","<div id=\"ref-fu2021bayesian\" class=\"csl-entry\" role=\"listitem\">\n","Fu, Shaopeng, Fengxiang He, Yue Xu, and Dacheng Tao. 2021.\n","<span>&#x201C;Bayesian Inference Forgetting.&#x201D;</span> <em>arXiv Preprint\n","arXiv:2101.06417</em>.\n","</div>\n","<div id=\"ref-gao2022deletion\" class=\"csl-entry\" role=\"listitem\">\n","Gao, Ji, Sanjam Garg, Mohammad Mahmoody, and Prashant Nalini Vasudevan.\n","2022. 
<span>&#x201C;Deletion Inference, Reconstruction, and Compliance in\n","Machine (Un)Learning.&#x201D;</span> <em>Proc. Priv. Enhancing Technol.</em>\n","2022 (3): 415&#x2013;36.\n","</div>\n","<div id=\"ref-gao2022verifi\" class=\"csl-entry\" role=\"listitem\">\n","Gao, Xiangshan, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, Shouling\n","Ji, Peng Cheng, and Jiming Chen. 2022. <span>&#x201C;VeriFi: Towards Verifiable\n","Federated Unlearning.&#x201D;</span> <em>arXiv Preprint arXiv:2205.12709</em>.\n","</div>\n","<div id=\"ref-garg2020formalizing\" class=\"csl-entry\" role=\"listitem\">\n","Garg, Sanjam, Shafi Goldwasser, and Prashant Nalini Vasudevan. 2020.\n","<span>&#x201C;Formalizing Data Deletion in the Context of the Right to Be\n","Forgotten.&#x201D;</span> In <em>EUROCRYPT</em>, 373&#x2013;402.\n","</div>\n","<div id=\"ref-gasteiger2018combining\" class=\"csl-entry\" role=\"listitem\">\n","Gasteiger, Johannes, Aleksandar Bojchevski, and Stephan G&#xFC;nnemann. 2019.\n","<span>&#x201C;Combining Neural Networks with Personalized PageRank for\n","Classification on Graphs.&#x201D;</span> In <em>ICLR</em>.\n","</div>\n","<div id=\"ref-geurts2006extremely\" class=\"csl-entry\" role=\"listitem\">\n","Geurts, Pierre, Damien Ernst, et al. 2006. <span>&#x201C;Extremely Randomized\n","Trees.&#x201D;</span> <em>Machine Learning</em> 63 (1): 3&#x2013;42.\n","</div>\n","<div id=\"ref-ginart2019making\" class=\"csl-entry\" role=\"listitem\">\n","Ginart, Antonio, Melody Guan, Gregory Valiant, and James Y Zou. 2019.\n","<span>&#x201C;Making AI Forget You: Data Deletion in Machine Learning.&#x201D;</span>\n","<em>NIPS</em> 32.\n","</div>\n","<div id=\"ref-goel2022evaluating\" class=\"csl-entry\" role=\"listitem\">\n","Goel, Shashwat, Ameya Prabhu, and Ponnurangam Kumaraguru. 
2022.\n","<span>&#x201C;Evaluating Inexact Unlearning Requires Revisiting\n","Forgetting.&#x201D;</span> <em>arXiv Preprint arXiv:2201.06640</em>.\n","</div>\n","<div id=\"ref-goel2022towards\" class=\"csl-entry\" role=\"listitem\">\n","Goel, Shashwat, Ameya Prabhu, Amartya Sanyal, Ser-Nam Lim, Philip Torr, and Ponnurangam Kumaraguru. 2023. <span>&#x201C;Towards Adversarial Evaluations for Inexact Machine Unlearning.&#x201D;</span> <em>arXiv Preprint arXiv:2201.06640</em>.\n","</div>\n","<div id=\"ref-golatkar2021mixed\" class=\"csl-entry\" role=\"listitem\">\n","Golatkar, Aditya, Alessandro Achille, Avinash Ravichandran, Marzia\n","Polito, and Stefano Soatto. 2021. <span>&#x201C;Mixed-Privacy Forgetting in\n","Deep Networks.&#x201D;</span> In <em>CVPR</em>, 792&#x2013;801.\n","</div>\n","<div id=\"ref-golatkar2020eternal\" class=\"csl-entry\" role=\"listitem\">\n","Golatkar, Aditya, Alessandro Achille, and Stefano Soatto. 2020a.\n","<span>&#x201C;Eternal Sunshine of the Spotless Net: Selective Forgetting in\n","Deep Networks.&#x201D;</span> In <em>CVPR</em>, 9304&#x2013;12.\n","</div>\n","<div id=\"ref-golatkar2020forgetting\" class=\"csl-entry\" role=\"listitem\">\n","Golatkar, Aditya, Alessandro Achille, and Stefano Soatto. 2020b. <span>&#x201C;Forgetting Outside the Box: Scrubbing Deep Networks\n","of Information Accessible from Input-Output Observations.&#x201D;</span> In\n","<em>ECCV</em>, 383&#x2013;98.\n","</div>\n","<div id=\"ref-goyal2021revisiting\" class=\"csl-entry\" role=\"listitem\">\n","Goyal, Adit, Vikas Hassija, and Victor Hugo C de Albuquerque. 2021.\n","<span>&#x201C;Revisiting Machine Learning Training Process for Enhanced Data\n","Privacy.&#x201D;</span> In <em>IC3</em>, 247&#x2013;51.\n","</div>\n","<div id=\"ref-graves2021amnesiac\" class=\"csl-entry\" role=\"listitem\">\n","Graves, Laura, Vineel Nagisetty, and Vijay Ganesh. 2021. 
<span>&#x201C;Amnesiac\n","Machine Learning.&#x201D;</span> In <em>AAAI</em>, 35:11516&#x2013;24. 13.\n","</div>\n","<div id=\"ref-GuoGHM20\" class=\"csl-entry\" role=\"listitem\">\n","Guo, Chuan, Tom Goldstein, Awni Y. Hannun, and Laurens van der Maaten.\n","2020. <span>&#x201C;Certified Data Removal from Machine Learning\n","Models.&#x201D;</span> In <em>ICML</em>, 119:3832&#x2013;42.\n","</div>\n","<div id=\"ref-guo2020survey\" class=\"csl-entry\" role=\"listitem\">\n","Guo, Ruocheng, Lu Cheng, Jundong Li, P Richard Hahn, and Huan Liu. 2020.\n","<span>&#x201C;A Survey of Learning Causality with Data: Problems and\n","Methods.&#x201D;</span> <em>CSUR</em> 53 (4): 1&#x2013;37.\n","</div>\n","<div id=\"ref-guo2022efficient\" class=\"csl-entry\" role=\"listitem\">\n","Guo, Tao, Song Guo, Jiewei Zhang, Wenchao Xu, and Junxiao Wang. 2022.\n","<span>&#x201C;Efficient Attribute Unlearning: Towards Selective Removal of\n","Input Attributes from Feature Representations.&#x201D;</span> <em>arXiv\n","Preprint arXiv:2202.13295</em>.\n","</div>\n","<div id=\"ref-gupta2021adaptive\" class=\"csl-entry\" role=\"listitem\">\n","Gupta, Varun, Christopher Jung, Seth Neel, Aaron Roth, Saeed\n","Sharifi-Malvajerdi, and Chris Waites. 2021. <span>&#x201C;Adaptive Machine\n","Unlearning.&#x201D;</span> <em>NIPS</em> 34: 16319&#x2013;30.\n","</div>\n","<div id=\"ref-halimi2022federated\" class=\"csl-entry\" role=\"listitem\">\n","Halimi, Anisa, Swanand Kadhe, Ambrish Rawat, and Nathalie Baracaldo.\n","2022. <span>&#x201C;Federated Unlearning: How to Efficiently Erase a Client in\n","FL?&#x201D;</span> <em>arXiv Preprint arXiv:2207.05521</em>.\n","</div>\n","<div id=\"ref-hamilton2020graph\" class=\"csl-entry\" role=\"listitem\">\n","Hamilton, William L. 2020. 
<span>&#x201C;Graph Representation Learning.&#x201D;</span>\n","<em>Synthesis Lectures on Artificial Intelligence and Machine\n","Learning</em> 14 (3): 1&#x2013;159.\n","</div>\n","<div id=\"ref-haug2021learning\" class=\"csl-entry\" role=\"listitem\">\n","Haug, Johannes, and Gjergji Kasneci. 2021. <span>&#x201C;Learning Parameter\n","Distributions to Detect Concept Drift in Data Streams.&#x201D;</span> In\n","<em>ICPR</em>, 9452&#x2013;59.\n","</div>\n","<div id=\"ref-he2021deepobliviate\" class=\"csl-entry\" role=\"listitem\">\n","He, Yingzhe, Guozhu Meng, Kai Chen, Jinwen He, and Xingbo Hu. 2021.\n","<span>&#x201C;DeepObliviate: A Powerful Charm for Erasing Data Residual Memory\n","in Deep Neural Networks.&#x201D;</span> <em>arXiv Preprint\n","arXiv:2105.06209</em>.\n","</div>\n","<div id=\"ref-hu2021distilling\" class=\"csl-entry\" role=\"listitem\">\n","Hu, Xinting, Kaihua Tang, Chunyan Miao, Xian-Sheng Hua, and Hanwang\n","Zhang. 2021. <span>&#x201C;Distilling Causal Effect of Data in\n","Class-Incremental Learning.&#x201D;</span> In <em>CVPR</em>, 3957&#x2013;66.\n","</div>\n","<div id=\"ref-huang2021unlearnable\" class=\"csl-entry\" role=\"listitem\">\n","Huang, Hanxun, Xingjun Ma, Sarah Monazam Erfani, James Bailey, and Yisen\n","Wang. 2021. <span>&#x201C;Unlearnable Examples: Making Personal Data\n","Unexploitable.&#x201D;</span> In <em>ICLR</em>.\n","</div>\n","<div id=\"ref-huang2021mathsf\" class=\"csl-entry\" role=\"listitem\">\n","Huang, Yangsibo, Xiaoxiao Li, et al. 2021. <span>&#x201C;EMA: Auditing Data\n","Removal from Trained Models.&#x201D;</span> In <em>MICCAI</em>, 793&#x2013;803.\n","</div>\n","<div id=\"ref-hullermeier2021aleatoric\" class=\"csl-entry\"\n","role=\"listitem\">\n","H&#xFC;llermeier, Eyke, and Willem Waegeman. 2021. 
<span>&#x201C;Aleatoric and\n","Epistemic Uncertainty in Machine Learning: An Introduction to Concepts\n","and Methods.&#x201D;</span> <em>Machine Learning</em> 110 (3): 457&#x2013;506.\n","</div>\n","<div id=\"ref-izzo2021approximate\" class=\"csl-entry\" role=\"listitem\">\n","Izzo, Zachary, Mary Anne Smart, Kamalika Chaudhuri, and James Zou. 2021.\n","<span>&#x201C;Approximate Data Deletion from Machine Learning Models.&#x201D;</span>\n","In <em>AISTAT</em>, 2008&#x2013;16.\n","</div>\n","<div id=\"ref-jagielski2022measuring\" class=\"csl-entry\" role=\"listitem\">\n","Jagielski, Matthew, Om Thakkar, Florian Tram&#xE8;r, Daphne Ippolito,\n","Katherine Lee, Nicholas Carlini, Eric Wallace, et al. 2022.\n","<span>&#x201C;Measuring Forgetting of Memorized Training Examples.&#x201D;</span>\n","<em>arXiv Preprint arXiv:2207.00099</em>.\n","</div>\n","<div id=\"ref-jia2021proof\" class=\"csl-entry\" role=\"listitem\">\n","Jia, Hengrui, Mohammad Yaghini, Christopher A Choquette-Choo, Natalie\n","Dullerud, Anvith Thudi, Varun Chandrasekaran, and Nicolas Papernot.\n","2021. <span>&#x201C;Proof-of-Learning: Definitions and Practice.&#x201D;</span> In\n","<em>SP</em>, 1039&#x2013;56.\n","</div>\n","<div id=\"ref-jose2021unified\" class=\"csl-entry\" role=\"listitem\">\n","Jose, Sharu Theresa, and Osvaldo Simeone. 2021. <span>&#x201C;A Unified\n","PAC-Bayesian Framework for Machine Unlearning via Information Risk\n","Minimization.&#x201D;</span> In <em>MLSP</em>, 1&#x2013;6.\n","</div>\n","<div id=\"ref-karasuyama2009multiple\" class=\"csl-entry\" role=\"listitem\">\n","Karasuyama, Masayuki, and Ichiro Takeuchi. 2009. <span>&#x201C;Multiple\n","Incremental Decremental Learning of Support Vector Machines.&#x201D;</span>\n","<em>NIPS</em> 22.\n","</div>\n","<div id=\"ref-karasuyama2010multiple\" class=\"csl-entry\" role=\"listitem\">\n","Karasuyama, Masayuki, and Ichiro Takeuchi. 2010. 
<span>&#x201C;Multiple Incremental Decremental Learning of Support\n","Vector Machines.&#x201D;</span> <em>IEEE Transactions on Neural Networks</em>\n","21 (7): 1048&#x2013;59.\n","</div>\n","<div id=\"ref-kearns1998efficient\" class=\"csl-entry\" role=\"listitem\">\n","Kearns, Michael. 1998. <span>&#x201C;Efficient Noise-Tolerant Learning from\n","Statistical Queries.&#x201D;</span> <em>JACM</em> 45 (6): 983&#x2013;1006.\n","</div>\n","<div id=\"ref-khan2021knowledge\" class=\"csl-entry\" role=\"listitem\">\n","Khan, Mohammad Emtiyaz E et al. 2021. <span>&#x201C;Knowledge-Adaptation\n","Priors.&#x201D;</span> <em>NIPS</em> 34: 19757&#x2013;70.\n","</div>\n","<div id=\"ref-kirkpatrick2017overcoming\" class=\"csl-entry\"\n","role=\"listitem\">\n","Kirkpatrick, James, Razvan Pascanu, Neil Rabinowitz, Joel Veness,\n","Guillaume Desjardins, Andrei A Rusu, Kieran Milan, et al. 2017.\n","<span>&#x201C;Overcoming Catastrophic Forgetting in Neural Networks.&#x201D;</span>\n","<em>PNAS</em> 114 (13): 3521&#x2013;26.\n","</div>\n","<div id=\"ref-koh2017understanding\" class=\"csl-entry\" role=\"listitem\">\n","Koh, Pang Wei et al. 2017. <span>&#x201C;Understanding Black-Box Predictions\n","via Influence Functions.&#x201D;</span> In <em>ICML</em>, 1885&#x2013;94.\n","</div>\n","<div id=\"ref-KLFormula\" class=\"csl-entry\" role=\"listitem\">\n","Kullback, S. et al. 1951. <span>&#x201C;<span class=\"nocase\">On Information and\n","Sufficiency</span>.&#x201D;</span> <em>The Annals of Mathematical\n","Statistics</em> 22 (1): 79&#x2013;86.\n","</div>\n","<div id=\"ref-kurmanji2023towards\" class=\"csl-entry\" role=\"listitem\">\n","Kurmanji, Meghdad, Peter Triantafillou, and Eleni Triantafillou. 2023.\n","<span>&#x201C;Towards Unbounded Machine Unlearning.&#x201D;</span> <em>arXiv Preprint arXiv:2302.09880</em>.\n","</div>\n","<div id=\"ref-lei2019geometric\" class=\"csl-entry\" role=\"listitem\">\n","Lei, Na, Kehua Su, Li Cui, Shing-Tung Yau, and Xianfeng David Gu. 
2019.\n","<span>&#x201C;A Geometric View of Optimal Transportation and Generative\n","Model.&#x201D;</span> <em>Computer Aided Geometric Design</em> 68: 1&#x2013;21.\n","</div>\n","<div id=\"ref-li2020online\" class=\"csl-entry\" role=\"listitem\">\n","Li, Yuantong, Chi-Hua Wang, and Guang Cheng. 2021. <span>&#x201C;Online\n","Forgetting Process for Linear Regression Models.&#x201D;</span> In\n","<em>AISTAT</em>, 130:217&#x2013;25.\n","</div>\n","<div id=\"ref-lin2023erm\" class=\"csl-entry\" role=\"listitem\">\n","Lin, Shen, Xiaoyu Zhang, Chenyang Chen, Xiaofeng Chen, and Willy Susilo. 2023. <span>&#x201C;ERM-KTP: Knowledge-Level Machine Unlearning via Knowledge Transfer.&#x201D;</span> In\n","<em>CVPR</em>, 20147&#x2013;20155.\n","</div>\n","<div id=\"ref-liu2022continual\" class=\"csl-entry\" role=\"listitem\">\n","Liu, Bo, Qiang Liu, et al. 2022. <span>&#x201C;Continual Learning and Private\n","Unlearning.&#x201D;</span> <em>arXiv Preprint arXiv:2203.12817</em>.\n","</div>\n","<div id=\"ref-liu2020federated\" class=\"csl-entry\" role=\"listitem\">\n","Liu, Gaoyang, Xiaoqiang Ma, Yang Yang, Chen Wang, and Jiangchuan Liu.\n","2020. <span>&#x201C;Federated Unlearning.&#x201D;</span> <em>arXiv Preprint\n","arXiv:2012.13891</em>.\n","</div>\n","<div id=\"ref-liu2021federaser\" class=\"csl-entry\" role=\"listitem\">\n","Liu, Gaoyang, Xiaoqiang Ma, Yang Yang, Chen Wang, and Jiangchuan Liu. 2021. <span>&#x201C;FedEraser: Enabling Efficient Client-Level Data\n","Removal from Federated Learning Models.&#x201D;</span> In <em>IWQOS</em>, 1&#x2013;10.\n","</div>\n","<div id=\"ref-liu2020have\" class=\"csl-entry\" role=\"listitem\">\n","Liu, Xiao, and Sotirios A Tsaftaris. 2020. <span>&#x201C;Have You Forgotten? 
A\n","Method to Assess If Machine Learning Models Have Forgotten Data.&#x201D;</span>\n","In <em>MICCAI</em>, 95&#x2013;105.\n","</div>\n","<div id=\"ref-liu2022backdoor\" class=\"csl-entry\" role=\"listitem\">\n","Liu, Yang, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, and\n","Jianfeng Ma. 2022. <span>&#x201C;Backdoor Defense with Machine\n","Unlearning.&#x201D;</span> <em>arXiv Preprint arXiv:2201.09538</em>.\n","</div>\n","<div id=\"ref-liu2020learn\" class=\"csl-entry\" role=\"listitem\">\n","Liu, Yang, Zhuo Ma, Ximeng Liu, Jian Liu, Zhongyuan Jiang, Jianfeng Ma,\n","Philip Yu, and Kui Ren. 2020. <span>&#x201C;Learn to Forget: Machine Unlearning\n","via Neuron Masking.&#x201D;</span> <em>arXiv Preprint arXiv:2003.10933</em>.\n","</div>\n","<div id=\"ref-liu2021revfrf\" class=\"csl-entry\" role=\"listitem\">\n","Liu, Yang, Zhuo Ma, Yilong Yang, Ximeng Liu, Jianfeng Ma, and Kui Ren.\n","2021. <span>&#x201C;Revfrf: Enabling Cross-Domain Random Forest Training with\n","Revocable Federated Learning.&#x201D;</span> <em>TDSC</em>.\n","</div>\n","<div id=\"ref-liu2022right\" class=\"csl-entry\" role=\"listitem\">\n","Liu, Yi, Lei Xu, Xingliang Yuan, Cong Wang, and Bo Li. 2022. <span>&#x201C;The\n","Right to Be Forgotten in Federated Learning: An Efficient Realization\n","with Rapid Retraining.&#x201D;</span> In <em>INFOCOM</em>, 1749&#x2013;58.\n","</div>\n","<div id=\"ref-mahadevan2021certifiable\" class=\"csl-entry\"\n","role=\"listitem\">\n","Mahadevan, Ananth, and Michael Mathioudakis. 2021. <span>&#x201C;Certifiable\n","Machine Unlearning for Linear Models.&#x201D;</span> <em>arXiv Preprint\n","arXiv:2106.15093</em>.\n","</div>\n","<div id=\"ref-mahadevan2022certifiable\" class=\"csl-entry\"\n","role=\"listitem\">\n","Mahadevan, Ananth, and Michael Mathioudakis. 2022. 
<span>&#x201C;Certifiable Unlearning Pipelines for Logistic\n","Regression: An Experimental Study.&#x201D;</span> <em>Machine Learning and\n","Knowledge Extraction</em> 4 (3): 591&#x2013;620.\n","</div>\n","<div id=\"ref-magdziarczyk2019right\" class=\"csl-entry\" role=\"listitem\">\n","Mantelero, Alessandro. 2013. <span>&#x201C;The EU Proposal for a General Data\n","Protection Regulation and the Roots of the <span>&#x2018;Right to Be\n","Forgotten&#x2019;</span>.&#x201D;</span> <em>Computer Law &amp; Security Review</em>\n","29 (3): 229&#x2013;35.\n","</div>\n","<div id=\"ref-marchant2022hard\" class=\"csl-entry\" role=\"listitem\">\n","Marchant, Neil G, Benjamin IP Rubinstein, and Scott Alfeld. 2022.\n","<span>&#x201C;Hard to Forget: Poisoning Attacks on Certified Machine\n","Unlearning.&#x201D;</span> In <em>AAAI</em>, 36:7691&#x2013;7700. 7.\n","</div>\n","<div id=\"ref-martens2020new\" class=\"csl-entry\" role=\"listitem\">\n","Martens, James. 2020. <span>&#x201C;New Insights and Perspectives on the\n","Natural Gradient Method.&#x201D;</span> <em>JMLR</em> 21 (1): 5776&#x2013;5851.\n","</div>\n","<div id=\"ref-masi2018deep\" class=\"csl-entry\" role=\"listitem\">\n","Masi, Iacopo, Yue Wu, Tal Hassner, and Prem Natarajan. 2018. <span>&#x201C;Deep\n","Face Recognition: A Survey.&#x201D;</span> In <em>SIBGRAPI</em>, 471&#x2013;78.\n","</div>\n","<div id=\"ref-mcmahan2017communication\" class=\"csl-entry\"\n","role=\"listitem\">\n","McMahan, Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise\n","Aguera y Arcas. 2017. <span>&#x201C;Communication-Efficient Learning of Deep\n","Networks from Decentralized Data.&#x201D;</span> In <em>AISTATS</em>, 1273&#x2013;82.\n","</div>\n","<div id=\"ref-mehrabi2021survey\" class=\"csl-entry\" role=\"listitem\">\n","Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and\n","Aram Galstyan. 2021. 
<span>&#x201C;A Survey on Bias and Fairness in Machine\n","Learning.&#x201D;</span> <em>CSUR</em> 54 (6): 1&#x2013;35.\n","</div>\n","<div id=\"ref-mehta2022deep\" class=\"csl-entry\" role=\"listitem\">\n","Mehta, Ronak, Sourav Pal, Vikas Singh, and Sathya N Ravi. 2022.\n","<span>&#x201C;Deep Unlearning via Randomized Conditionally Independent\n","Hessians.&#x201D;</span> In <em>CVPR</em>, 10422&#x2013;31.\n","</div>\n","<div id=\"ref-micaelli2019zero\" class=\"csl-entry\" role=\"listitem\">\n","Micaelli, Paul et al. 2019. <span>&#x201C;Zero-Shot Knowledge Transfer via\n","Adversarial Belief Matching.&#x201D;</span> <em>NIPS</em> 32.\n","</div>\n","<div id=\"ref-nam2020learning\" class=\"csl-entry\" role=\"listitem\">\n","Nam, Junhyun, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin.\n","2020. <span>&#x201C;Learning from Failure: De-Biasing Classifier from Biased\n","Classifier.&#x201D;</span> <em>NIPS</em> 33: 20673&#x2013;84.\n","</div>\n","<div id=\"ref-neel2021descent\" class=\"csl-entry\" role=\"listitem\">\n","Neel, Seth, Aaron Roth, and Saeed Sharifi-Malvajerdi. 2021.\n","<span>&#x201C;Descent-to-Delete: Gradient-Based Methods for Machine\n","Unlearning.&#x201D;</span> In <em>Algorithmic Learning Theory</em>, 931&#x2013;62.\n","</div>\n","<div id=\"ref-nguyen2020variational\" class=\"csl-entry\" role=\"listitem\">\n","Nguyen, Quoc Phong, Bryan Kian Hsiang Low, and Patrick Jaillet. 2020.\n","<span>&#x201C;Variational Bayesian Unlearning.&#x201D;</span> <em>NIPS</em> 33:\n","16025&#x2013;36.\n","</div>\n","<div id=\"ref-nguyen2022markov\" class=\"csl-entry\" role=\"listitem\">\n","Nguyen, Quoc Phong, Ryutaro Oikawa, Dinil Mon Divakaran, Mun Choon Chan,\n","et al. 2022. <span>&#x201C;Markov Chain Monte Carlo-Based Machine Unlearning:\n","Unlearning What Needs to Be Forgotten.&#x201D;</span> In <em>ASIACCS</em>,\n","351&#x2013;63.\n","</div>\n","<div id=\"ref-nguyen2019debunking\" class=\"csl-entry\" role=\"listitem\">\n","Nguyen, Thanh Tam. 2019. 
<span>&#x201C;Debunking Misinformation on the Web:\n","Detection, Validation, and Visualisation.&#x201D;</span> PhD thesis, EPFL,\n","Switzerland.\n","</div>\n","<div id=\"ref-nguyen2022survey\" class=\"csl-entry\" role=\"listitem\">\n","Nguyen, Thanh Tam, Thanh Trung Huynh, Phi Le Nguyen, Alan Wee-Chung\n","Liew, Hongzhi Yin, and Quoc Viet Hung Nguyen. 2022. <span>&#x201C;A Survey of\n","Machine Unlearning.&#x201D;</span> <em>arXiv Preprint arXiv:2209.02299</em>.\n","</div>\n","<div id=\"ref-nguyen2021judo\" class=\"csl-entry\" role=\"listitem\">\n","Nguyen, Thanh Toan, Thanh Tam Nguyen, Thanh Thi Nguyen, Bay Vo, Jun Jo,\n","and Quoc Viet Hung Nguyen. 2021. <span>&#x201C;Judo: Just-in-Time Rumour\n","Detection in Streaming Social Platforms.&#x201D;</span> <em>Information\n","Sciences</em> 570: 70&#x2013;93.\n","</div>\n","<div id=\"ref-pardau2018california\" class=\"csl-entry\" role=\"listitem\">\n","Pardau, Stuart L. 2018. <span>&#x201C;The California Consumer Privacy Act:\n","Towards a European-Style Privacy Regime in the United States.&#x201D;</span>\n","<em>J. Tech. L. &amp; Pol&#x2019;y</em> 23: 68.\n","</div>\n","<div id=\"ref-parisi2019continual\" class=\"csl-entry\" role=\"listitem\">\n","Parisi, German I, Ronald Kemker, Jose L Part, Christopher Kanan, and\n","Stefan Wermter. 2019. <span>&#x201C;Continual Lifelong Learning with Neural\n","Networks: A Review.&#x201D;</span> <em>Neural Networks</em> 113: 54&#x2013;71.\n","</div>\n","<div id=\"ref-parne2021machine\" class=\"csl-entry\" role=\"listitem\">\n","Parne, Nishchal, Kyathi Puppaala, Nithish Bhupathi, and Ripon Patgiri.\n","2021. <span>&#x201C;An Investigation on Learning, Polluting, and Unlearning the\n","Spam Emails for Lifelong Learning.&#x201D;</span> <em>arXiv Preprint\n","arXiv:2111.14609</em>.\n","</div>\n","<div id=\"ref-pawelczyk2022trade\" class=\"csl-entry\" role=\"listitem\">\n","Pawelczyk, Martin, Tobias Leemann, Asia Biega, and Gjergji Kasneci. 2023. 
<span>&#x201C;On the Trade-Off Between Actionable Explanations and the Right to Be Forgotten.&#x201D;</span> In <em>ICLR</em>.\n","</div>\n","<div id=\"ref-pearce2020uncertainty\" class=\"csl-entry\" role=\"listitem\">\n","Pearce, Tim, Felix Leibfried, and Alexandra Brintrup. 2020.\n","<span>&#x201C;Uncertainty in Neural Networks: Approximately Bayesian\n","Ensembling.&#x201D;</span> In <em>AISTATS</em>, 234&#x2013;44.\n","</div>\n","<div id=\"ref-peste2021ssse\" class=\"csl-entry\" role=\"listitem\">\n","Peste, Alexandra, Dan Alistarh, and Christoph H Lampert. 2021.\n","<span>&#x201C;<span>SSSE</span>: Efficiently Erasing Samples from Trained\n","Machine Learning Models.&#x201D;</span> In <em>NeurIPS 2021 Workshop Privacy in\n","Machine Learning</em>.\n","</div>\n","<div id=\"ref-ramaswamy2021fair\" class=\"csl-entry\" role=\"listitem\">\n","Ramaswamy, Vikram V, Sunnie SY Kim, and Olga Russakovsky. 2021.\n","<span>&#x201C;Fair Attribute Classification Through Latent Space\n","De-Biasing.&#x201D;</span> In <em>CVPR</em>, 9301&#x2013;10.\n","</div>\n","<div id=\"ref-ren2020adversarial\" class=\"csl-entry\" role=\"listitem\">\n","Ren, Kui, Tianhang Zheng, Zhan Qin, and Xue Liu. 2020.\n","<span>&#x201C;Adversarial Attacks and Defenses in Deep Learning.&#x201D;</span>\n","<em>Engineering</em> 6 (3): 346&#x2013;60.\n","</div>\n","<div id=\"ref-ren2020generating\" class=\"csl-entry\" role=\"listitem\">\n","Ren, Zhao, Alice Baird, Jing Han, Zixing Zhang, and Bj&#xF6;rn Schuller.\n","2020. <span>&#x201C;Generating and Protecting Against Adversarial Attacks for\n","Deep Speech-Based Emotion Recognition Models.&#x201D;</span> In\n","<em>ICASSP</em>, 7184&#x2013;88.\n","</div>\n","<div id=\"ref-ren2020enhancing\" class=\"csl-entry\" role=\"listitem\">\n","Ren, Zhao, Jing Han, Nicholas Cummins, and Bj&#xF6;rn W Schuller. 
2020.\n","<span>&#x201C;Enhancing Transferability of Black-Box Adversarial Attacks via\n","Lifelong Learning for Speech Emotion Recognition Models.&#x201D;</span> In\n","<em>INTERSPEECH</em>, 496&#x2013;500.\n","</div>\n","<div id=\"ref-ren2022prototype\" class=\"csl-entry\" role=\"listitem\">\n","Ren, Zhao, Thanh Tam Nguyen, and Wolfgang Nejdl. 2022. <span>&#x201C;Prototype\n","Learning for Interpretable Respiratory Sound Analysis.&#x201D;</span> In\n","<em>ICASSP</em>, 9087&#x2013;91.\n","</div>\n","<div id=\"ref-romero2007incremental\" class=\"csl-entry\" role=\"listitem\">\n","Romero, Enrique, Ignacio Barrio, and Llu&#x131;&#x301;s Belanche. 2007.\n","<span>&#x201C;Incremental and Decremental Learning for Linear Support Vector\n","Machines.&#x201D;</span> In <em>ICANN</em>, 209&#x2013;18.\n","</div>\n","<div id=\"ref-roth2018bayesian\" class=\"csl-entry\" role=\"listitem\">\n","Roth, Wolfgang, and Franz Pernkopf. 2018. <span>&#x201C;Bayesian Neural\n","Networks with Weight Sharing Using Dirichlet Processes.&#x201D;</span>\n","<em>TPAMI</em> 42 (1): 246&#x2013;52.\n","</div>\n","<div id=\"ref-salem2020updates\" class=\"csl-entry\" role=\"listitem\">\n","Salem, Ahmed, Apratim Bhattacharya, Michael Backes, Mario Fritz, and\n","Yang Zhang. 2020. <span>&#x201C;<span\n","class=\"math inline\">{</span>Updates-Leak<span\n","class=\"math inline\">}</span>: Data Set Inference and Reconstruction\n","Attacks in Online Learning.&#x201D;</span> In <em>USENIX Security</em>,\n","1291&#x2013;1308.\n","</div>\n","<div id=\"ref-Salem0HBF019\" class=\"csl-entry\" role=\"listitem\">\n","Salem, Ahmed, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz,\n","and Michael Backes. 2019. 
<span>&#x201C;ML-Leaks: Model and Data Independent\n","Membership Inference Attacks and Defenses on Machine Learning\n","Models.&#x201D;</span> In <em>NDSS</em>.\n","</div>\n","<div id=\"ref-sari2020learning\" class=\"csl-entry\" role=\"listitem\">\n","Sari, WN, BS Samosir, N Sahara, L Agustina, and Y Anita. 2020.\n","<span>&#x201C;Learning Mathematics <span>&#x2018;Asyik&#x2019;</span> with YouTube Educative\n","Media.&#x201D;</span> In <em>Journal of Physics: Conference Series</em>,\n","1477:022012. 2.\n","</div>\n","<div id=\"ref-sattler2021fedaux\" class=\"csl-entry\" role=\"listitem\">\n","Sattler, Felix, Tim Korjakow, Roman Rischke, and Wojciech Samek. 2021.\n","<span>&#x201C;FedAUX: Leveraging Unlabeled Auxiliary Data in Federated\n","Learning.&#x201D;</span> <em>TNNLS</em>.\n","</div>\n","<div id=\"ref-schelter2020amnesia\" class=\"csl-entry\" role=\"listitem\">\n","Schelter, Sebastian. 2020. <span>&#x201C;<span>&#x2018;Amnesia&#x2019;</span> &#x2013; A Selection\n","of Machine Learning Models That Can Forget User Data Very Fast.&#x201D;</span>\n","In <em>CIDR</em>.\n","</div>\n","<div id=\"ref-schelter2021hedgecut\" class=\"csl-entry\" role=\"listitem\">\n","Schelter, Sebastian, Stefan Grafberger, and Ted Dunning. 2021.\n","<span>&#x201C;HedgeCut: Maintaining Randomised Trees for Low-Latency Machine\n","Unlearning.&#x201D;</span> In <em>SIGMOD</em>, 1545&#x2013;57.\n","</div>\n","<div id=\"ref-sekhari2021remember\" class=\"csl-entry\" role=\"listitem\">\n","Sekhari, Ayush, Jayadev Acharya, Gautam Kamath, and Ananda Theertha\n","Suresh. 2021. <span>&#x201C;Remember What You Want to Forget: Algorithms for\n","Machine Unlearning.&#x201D;</span> <em>NIPS</em> 34: 18075&#x2013;86.\n","</div>\n","<div id=\"ref-shan2020protecting\" class=\"csl-entry\" role=\"listitem\">\n","Shan, S, E Wenger, J Zhang, H Li, H Zheng, and BY Zhao. 
2020.\n","<span>&#x201C;Protecting Personal Privacy Against Una Uthorized Deep Learning\n","Models.&#x201D;</span> In <em>USENIX Security</em>, 1&#x2013;16.\n","</div>\n","<div id=\"ref-shibata2021learning\" class=\"csl-entry\" role=\"listitem\">\n","Shibata, Takashi, Go Irie, Daiki Ikami, and Yu Mitsuzumi. 2021.\n","<span>&#x201C;Learning with Selective Forgetting.&#x201D;</span> In <em>IJCAI</em>,\n","2:6. 4.\n","</div>\n","<div id=\"ref-shintre2019making\" class=\"csl-entry\" role=\"listitem\">\n","Shintre, Saurabh et al. 2019. <span>&#x201C;Making Machine Learning\n","Forget.&#x201D;</span> In <em>Annual Privacy Forum</em>, 72&#x2013;83.\n","</div>\n","<div id=\"ref-shokri2017membership\" class=\"csl-entry\" role=\"listitem\">\n","Shokri, Reza, Marco Stronati, Congzheng Song, and Vitaly Shmatikov.\n","2017. <span>&#x201C;Membership Inference Attacks Against Machine Learning\n","Models.&#x201D;</span> In <em>SP</em>, 3&#x2013;18.\n","</div>\n","<div id=\"ref-shwartz2017opening\" class=\"csl-entry\" role=\"listitem\">\n","Shwartz-Ziv, Ravid, and Naftali Tishby. 2017. <span>&#x201C;Opening the Black\n","Box of Deep Neural Networks via Information.&#x201D;</span> <em>arXiv Preprint\n","arXiv:1703.00810</em>.\n","</div>\n","<div id=\"ref-singh2017data\" class=\"csl-entry\" role=\"listitem\">\n","Singh, Abhijeet, and Abhineet Anand. 2017. <span>&#x201C;Data Leakage Detection\n","Using Cloud Computing.&#x201D;</span> <em>IJECS</em> 6 (4).\n","</div>\n","<div id=\"ref-singh2022anatomizing\" class=\"csl-entry\" role=\"listitem\">\n","Singh, Richa, Puspita Majumdar, Surbhi Mittal, and Mayank Vatsa. 2022.\n","<span>&#x201C;Anatomizing Bias in Facial Analysis.&#x201D;</span> In <em>AAAI</em>,\n","36:12351&#x2013;58. 11.\n","</div>\n","<div id=\"ref-sommer2020towards\" class=\"csl-entry\" role=\"listitem\">\n","Sommer, David Marco, Liwei Song, Sameer Wagh, and Prateek Mittal. 
2020.\n","<span>&#x201C;Towards Probabilistic Verification of Machine Unlearning.&#x201D;</span>\n","<em>arXiv Preprint arXiv:2003.04247</em>.\n","</div>\n","<div id=\"ref-sommer2022athena\" class=\"csl-entry\" role=\"listitem\">\n","Sommer, David Marco, Liwei Song, Sameer Wagh, and Prateek Mittal. 2022. <span>&#x201C;Athena: Probabilistic Verification of Machine\n","Unlearning.&#x201D;</span> <em>Proc. Priv. Enhancing Technol.</em> 2022 (3):\n","268&#x2013;90.\n","</div>\n","<div id=\"ref-tahiliani2021machine\" class=\"csl-entry\" role=\"listitem\">\n","Tahiliani, Aman, Vikas Hassija, Vinay Chamola, and Mohsen Guizani. 2021.\n","<span>&#x201C;Machine Unlearning: Its Need and Implementation\n","Strategies.&#x201D;</span> In <em>IC3</em>, 241&#x2013;46.\n","</div>\n","<div id=\"ref-nguyen2017retaining\" class=\"csl-entry\" role=\"listitem\">\n","Tam, Nguyen Thanh, Matthias Weidlich, Duong Chi Thang, Hongzhi Yin, and\n","Nguyen Quoc Viet Hung. 2017. <span>&#x201C;Retaining Data from Streams of\n","Social Platforms with Minimal Regret.&#x201D;</span> In <em>IJCAI</em>,\n","2850&#x2013;56.\n","</div>\n","<div id=\"ref-tan2023unfolded\" class=\"csl-entry\" role=\"listitem\">\n","Tan, Kim Yong and Yueming, Lyu and Ong, Yew-Soon and Tsang, Ivor. 2023. <span>&#x201C;Unfolded Self-Reconstruction LSH: Towards Machine Unlearning in Approximate Nearest Neighbour Search.&#x201D;</span> <em>arXiv Preprint arXiv:2304.02350</em>.\n","</div>\n","<div id=\"ref-tanha2020boosting\" class=\"csl-entry\" role=\"listitem\">\n","Tanha, Jafar, Yousef Abdi, Negin Samadi, Nazila Razzaghi, and Mohammad\n","Asadpour. 2020. <span>&#x201C;Boosting Methods for Multi-Class Imbalanced Data\n","Classification: An Experimental Review.&#x201D;</span> <em>Journal of Big\n","Data</em> 7 (1): 1&#x2013;47.\n","</div>\n","<div id=\"ref-tarun2021fast\" class=\"csl-entry\" role=\"listitem\">\n","Tarun, Ayush K, Vikram S Chundawat, Murari Mandal, and Mohan\n","Kankanhalli. 2021. 
<span>&#x201C;Fast yet Effective Machine Unlearning.&#x201D;</span>\n","<em>arXiv Preprint arXiv:2111.08947</em>.\n","</div>\n","<div id=\"ref-thudi2022unrolling\" class=\"csl-entry\" role=\"listitem\">\n","Thudi, Anvith, Gabriel Deza, Varun Chandrasekaran, and Nicolas Papernot.\n","2022. <span>&#x201C;Unrolling SGD: Understanding Factors Influencing Machine\n","Unlearning.&#x201D;</span> In <em>EuroS&amp;P</em>, 303&#x2013;19.\n","</div>\n","<div id=\"ref-thudi2021necessity\" class=\"csl-entry\" role=\"listitem\">\n","Thudi, Anvith, Hengrui Jia, Ilia Shumailov, and Nicolas Papernot. 2022.\n","<span>&#x201C;On the Necessity of Auditable Algorithmic Definitions for Machine\n","Unlearning.&#x201D;</span> In <em>USENIX Security</em>, 4007&#x2013;22.\n","</div>\n","<div id=\"ref-thudi2022bounding\" class=\"csl-entry\" role=\"listitem\">\n","Thudi, Anvith, Ilia Shumailov, Franziska Boenisch, and Nicolas Papernot.\n","2022. <span>&#x201C;Bounding Membership Inference.&#x201D;</span> <em>arXiv Preprint\n","arXiv:2202.12232</em>.\n","</div>\n","<div id=\"ref-tishby2000information\" class=\"csl-entry\" role=\"listitem\">\n","Tishby, Naftali et al. 2000. <span>&#x201C;The Information Bottleneck\n","Method.&#x201D;</span> <em>arXiv Preprint physics/0004057</em>.\n","</div>\n","<div id=\"ref-tishby2015deep\" class=\"csl-entry\" role=\"listitem\">\n","Tishby, Naftali, and Noga Zaslavsky. 2015. <span>&#x201C;Deep Learning and the\n","Information Bottleneck Principle.&#x201D;</span> In <em>ITW</em>, 1&#x2013;5.\n","</div>\n","<div id=\"ref-tsai2014incremental\" class=\"csl-entry\" role=\"listitem\">\n","Tsai, Cheng-Hao, Chieh-Yen Lin, and Chih-Jen Lin. 2014.\n","<span>&#x201C;Incremental and Decremental Training for Linear\n","Classification.&#x201D;</span> In <em>KDD</em>, 343&#x2013;52.\n","</div>\n","<div id=\"ref-tveit2003multicategory\" class=\"csl-entry\" role=\"listitem\">\n","Tveit, Amund et al. 2003. 
<span>&#x201C;Multicategory Incremental Proximal\n","Support Vector Classifiers.&#x201D;</span> In <em>KES</em>, 386&#x2013;92.\n","</div>\n","<div id=\"ref-tveit2003incremental\" class=\"csl-entry\" role=\"listitem\">\n","Tveit, Amund, Magnus Lie Hetland, and H&#xE5;vard Engum. 2003.\n","<span>&#x201C;Incremental and Decremental Proximal Support Vector\n","Classification Using Decay Coefficients.&#x201D;</span> In <em>DaWaK</em>,\n","422&#x2013;29.\n","</div>\n","<div id=\"ref-ullah2021machine\" class=\"csl-entry\" role=\"listitem\">\n","Ullah, Enayat, Tung Mai, Anup Rao, Ryan A Rossi, and Raman Arora. 2021.\n","<span>&#x201C;Machine Unlearning via Algorithmic Stability.&#x201D;</span> In\n","<em>Conference on Learning Theory</em>, 4126&#x2013;42.\n","</div>\n","<div id=\"ref-veale2018algorithms\" class=\"csl-entry\" role=\"listitem\">\n","Veale, Michael, Reuben Binns, and Lilian Edwards. 2018.\n","<span>&#x201C;Algorithms That Remember: Model Inversion Attacks and Data\n","Protection Law.&#x201D;</span> <em>Philos. Trans. R. Soc. A</em> 376 (2133):\n","20180083.\n","</div>\n","<div id=\"ref-verdu2014total\" class=\"csl-entry\" role=\"listitem\">\n","Verd&#xFA;, Sergio. 2014. <span>&#x201C;Total Variation Distance and the\n","Distribution of Relative Information.&#x201D;</span> In <em>ITA</em>, 1&#x2013;3.\n","</div>\n","<div id=\"ref-villaronga2018humans\" class=\"csl-entry\" role=\"listitem\">\n","Villaronga, Eduard Fosch, Peter Kieseberg, and Tiffany Li. 2018.\n","<span>&#x201C;Humans Forget, Machines Remember: Artificial Intelligence and the\n","Right to Be Forgotten.&#x201D;</span> <em>Computer Law &amp; Security\n","Review</em> 34 (2): 304&#x2013;13.\n","</div>\n","<div id=\"ref-voigt2017eu\" class=\"csl-entry\" role=\"listitem\">\n","Voigt, Paul, and Axel Von dem Bussche. 2017. 
<span>&#x201C;The EU General Data\n","Protection Regulation (GDPR).&#x201D;</span> <em>A Practical Guide, 1st Ed.,\n","Cham: Springer International Publishing</em> 10 (3152676): 10&#x2013;5555.\n","</div>\n","<div id=\"ref-wang2022efficiently\" class=\"csl-entry\" role=\"listitem\">\n","Wang, Benjamin Longxiang, and Sebastian Schelter. 2022.\n","<span>&#x201C;Efficiently Maintaining Next Basket Recommendations Under\n","Additions and Deletions of Baskets and Items.&#x201D;</span> <em>arXiv Preprint\n","arXiv:2201.13313</em>.\n","</div>\n","<div id=\"ref-wang2019neural\" class=\"csl-entry\" role=\"listitem\">\n","Wang, Bolun, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath,\n","Haitao Zheng, and Ben Y Zhao. 2019. <span>&#x201C;Neural Cleanse: Identifying\n","and Mitigating Backdoor Attacks in Neural Networks.&#x201D;</span> In\n","<em>SP</em>, 707&#x2013;23.\n","</div>\n","<div id=\"ref-wang2023inductive\" class=\"csl-entry\" role=\"listitem\">\n","Wang, Cheng-Long, Mengdi Huai, and Di Wang. 2023.\n","<span>&#x201C;Inductive Graph Unlearning.&#x201D;</span> In <em>USENIX Security</em>.\n","</div>\n","<div id=\"ref-wang2022federated\" class=\"csl-entry\" role=\"listitem\">\n","Wang, Junxiao, Song Guo, et al. 2022. <span>&#x201C;Federated Unlearning via\n","Class-Discriminative Pruning.&#x201D;</span> In <em>WWW</em>, 622&#x2013;32.\n","</div>\n","<div id=\"ref-wang2009learning\" class=\"csl-entry\" role=\"listitem\">\n","Wang, Rui, Yong Fuga Li, XiaoFeng Wang, Haixu Tang, and Xiaoyong Zhou.\n","2009. <span>&#x201C;Learning Your Identity and Disease from Research Papers:\n","Information Leaks in Genome Wide Association Study.&#x201D;</span> In\n","<em>CCS</em>, 534&#x2013;44.\n","</div>\n","<div id=\"ref-wang2020towards\" class=\"csl-entry\" role=\"listitem\">\n","Wang, Zeyu, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem\n","Nair, Kenji Hata, and Olga Russakovsky. 2020. 
<span>&#x201C;Towards Fairness in\n","Visual Recognition: Effective Strategies for Bias Mitigation.&#x201D;</span> In\n","<em>CVPR</em>, 8919&#x2013;28.\n","</div>\n","<div id=\"ref-warnecke2021machine\" class=\"csl-entry\" role=\"listitem\">\n","Warnecke, Alexander, Lukas Pirch, Christian Wressnegger, and Konrad\n","Rieck. 2021. <span>&#x201C;Machine Unlearning of Features and Labels.&#x201D;</span>\n","<em>arXiv Preprint arXiv:2108.11577</em>.\n","</div>\n","<div id=\"ref-wu2022federated\" class=\"csl-entry\" role=\"listitem\">\n","Wu, Chen et al. 2022. <span>&#x201C;Federated Unlearning with Knowledge\n","Distillation.&#x201D;</span> <em>arXiv Preprint arXiv:2201.09441</em>.\n","</div>\n","<div id=\"ref-wu2019simplifying\" class=\"csl-entry\" role=\"listitem\">\n","Wu, Felix, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and\n","Kilian Weinberger. 2019. <span>&#x201C;Simplifying Graph Convolutional\n","Networks.&#x201D;</span> In <em>ICML</em>, 6861&#x2013;71.\n","</div>\n","<div id=\"ref-wu2022puma\" class=\"csl-entry\" role=\"listitem\">\n","Wu, Ga, Masoud Hashemi, and Christopher Srinivasa. 2022. <span>&#x201C;PUMA:\n","Performance Unchanged Model Augmentation for Training Data\n","Removal.&#x201D;</span> In <em>AAAI</em>.\n","</div>\n","<div id=\"ref-wu2020deltagrad\" class=\"csl-entry\" role=\"listitem\">\n","Wu, Yinjun et al. 2020. <span>&#x201C;DeltaGrad: Rapid Retraining of Machine\n","Learning Models.&#x201D;</span> In <em>ICML</em>, 10355&#x2013;66.\n","</div>\n","<div id=\"ref-wu2020priu\" class=\"csl-entry\" role=\"listitem\">\n","Wu, Yinjun, Val Tannen, and Susan B Davidson. 2020. <span>&#x201C;PrIU: A\n","Provenance-Based Approach for Incrementally Updating Regression\n","Models.&#x201D;</span> In <em>SIGMOD</em>, 447&#x2013;62.\n","</div>\n","<div id=\"ref-xu2023netflix\" class=\"csl-entry\" role=\"listitem\">\n","Xu, Mimee, Jiankai Sun, Xin Yang, Kevin Yao, and Chong Wang. 
2023.\n","<span>&#x201C;Netflix and Forget: Efficient and Exact Machine Unlearning from Bi-linear Recommendations.&#x201D;</span> <em>arXiv Preprint arXiv:2302.06676</em>.\n","</div>\n","<div id=\"ref-yamashita2023one\" class=\"csl-entry\" role=\"listitem\">\n","Yamashita, Tomoya and Yamada, Masanori and Shibata, Takashi. 2023. <span>&#x201C;One-Shot Machine Unlearning with Mnemonic Code.&#x201D;</span> <em>arXiv Preprint arXiv:2306.05670</em>.\n","</div>\n","<div id=\"ref-yoon2022few\" class=\"csl-entry\" role=\"listitem\">\n","Yoon, Youngsik, Jinhwan Nam, Hyojeong Yun, Dongwoo Kim, and Jungseul Ok.\n","2022. <span>&#x201C;Few-Shot Unlearning by Model Inversion.&#x201D;</span> <em>arXiv\n","Preprint arXiv:2205.15567</em>.\n","</div>\n","<div id=\"ref-yu2021does\" class=\"csl-entry\" role=\"listitem\">\n","Yu, Da, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. 2021.\n","<span>&#x201C;How Does Data Augmentation Affect Privacy in Machine\n","Learning?&#x201D;</span> In <em>AAAI</em>, 35:10746&#x2013;53. 12.\n","</div>\n","<div id=\"ref-yu2015lsun\" class=\"csl-entry\" role=\"listitem\">\n","Yu, Fisher, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and\n","Jianxiong Xiao. 2015. <span>&#x201C;Lsun: Construction of a Large-Scale Image\n","Dataset Using Deep Learning with Humans in the Loop.&#x201D;</span> <em>arXiv\n","Preprint arXiv:1506.03365</em>.\n","</div>\n","<div id=\"ref-zanella2020analyzing\" class=\"csl-entry\" role=\"listitem\">\n","Zanella-B&#xE9;guelin, Santiago, Lukas Wutschitz, Shruti Tople, Victor R&#xFC;hle,\n","Andrew Paverd, Olga Ohrimenko, et al. 2020. <span>&#x201C;Analyzing Information\n","Leakage of Updates to Natural Language Models.&#x201D;</span> In\n","<em>SIGSAC</em>, 363&#x2013;75.\n","</div>\n","<div id=\"ref-zeng2021learning\" class=\"csl-entry\" role=\"listitem\">\n","Zeng, Yingyan, Tianhao Wang, Si Chen, Hoang Anh Just, Ran Jin, and Ruoxi\n","Jia. 2021. 
<span>&#x201C;Learning to Refit for Convex Learning\n","Problems.&#x201D;</span> <em>arXiv Preprint arXiv:2111.12545</em>.\n","</div>\n","<div id=\"ref-zhang2020deep\" class=\"csl-entry\" role=\"listitem\">\n","Zhang, Hao, Bo Chen, Yulai Cong, Dandan Guo, Hongwei Liu, and Mingyuan\n","Zhou. 2020. <span>&#x201C;Deep Autoencoding Topic Model with Scalable Hybrid\n","Bayesian Inference.&#x201D;</span> <em>TPAMI</em> 43 (12): 4306&#x2013;22.\n","</div>\n","<div id=\"ref-zhang2023forgotten\" class=\"csl-entry\" role=\"listitem\">\n","Zhang, Dawen, Shidong Pan, Thong Hoang, Zhenchang Xing, Mark Staples, Xiwei Xu, Lina Yao, Qinghua Lu, and Liming Zhu. 2023. <span>&#x201C;To Be Forgotten or to Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods.&#x201D;</span> <em>arXiv Preprint arXiv:2302.03350</em>.\n","</div>\n","<div id=\"ref-zhang2022machine\" class=\"csl-entry\" role=\"listitem\">\n","Zhang, Peng-Fei, Guangdong Bai, Zi Huang, and Xin-Shun Xu. 2022.\n","<span>&#x201C;Machine Unlearning for Image Retrieval: A Generative Scrubbing\n","Approach.&#x201D;</span> In <em>MM</em>, 237&#x2013;45.\n","</div>\n","<div id=\"ref-zou2018ai\" class=\"csl-entry\" role=\"listitem\">\n","Zou, James, and Londa Schiebinger. 2018. 
<span>&#x201C;AI Can Be Sexist and\n","Racist &#x2013; It&#x2019;s Time to Make It Fair.&#x201D;</span> <em>Nature</em> 559: 324&#x2013;26.\n","</div>\n","</div>"]},{"cell_type":"markdown","id":"a7abc6b1","metadata":{"papermill":{"duration":0.008811,"end_time":"2023-07-12T05:38:47.310068","exception":false,"start_time":"2023-07-12T05:38:47.301257","status":"completed"},"tags":[]},"source":["<aside id=\"footnotes\" class=\"footnotes footnotes-end-of-document\"\n","role=\"doc-endnotes\">\n","<hr />\n","<ol>\n","<li id=\"fn1\"><p><a\n","href=\"https://github.com/tamlhp/awesome-machine-unlearning\"\n","class=\"uri\">https://github.com/tamlhp/awesome-machine-unlearning</a><a\n","href=\"#fnref1\" class=\"footnote-back\" role=\"doc-backlink\">&#x21A9;&#xFE0E;</a></p></li>\n","<li id=\"fn2\"><p><a\n","href=\"https://insights.daffodilsw.com/blog/machine-unlearning-what-it-is-all-about\"\n","class=\"uri\">https://insights.daffodilsw.com/blog/machine-unlearning-what-it-is-all-about</a><a\n","href=\"#fnref2\" class=\"footnote-back\" role=\"doc-backlink\">&#x21A9;&#xFE0E;</a></p></li>\n","<li id=\"fn3\"><p><a\n","href=\"https://github.com/ZIYU-DEEP/Awesome-Information-Bottleneck\"\n","class=\"uri\">https://github.com/ZIYU-DEEP/Awesome-Information-Bottleneck</a><a\n","href=\"#fnref3\" class=\"footnote-back\" role=\"doc-backlink\">&#x21A9;&#xFE0E;</a></p></li>\n","</ol>\n","</aside>"]}],"metadata":{"kernelspec":{"display_name":"Python 
3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.10.10"},"papermill":{"default_parameters":{},"duration":18.246676,"end_time":"2023-07-12T05:38:48.35308","environment_variables":{},"exception":null,"input_path":"__notebook__.ipynb","output_path":"__notebook__.ipynb","parameters":{},"start_time":"2023-07-12T05:38:30.106404","version":"2.4.0"}},"nbformat":4,"nbformat_minor":5}