{"podcast_details": {"podcast_title": "The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)", "episode_title": "Explainable AI for Biology and Medicine with Su-In Lee - #642", "episode_image": "https://megaphone.imgix.net/podcasts/35230150-ee98-11eb-ad1a-b38cbabcd053/image/TWIML_AI_Podcast_Official_Cover_Art_1400px.png?ixlib=rails-4.3.1&max-w=3000&max-h=3000&fit=crop&auto=format,compress", "podcast_transcript": "(0.169,0.289) Unknown : you\n\n(8.34,9.201) SPEAKER_01 : All right, everyone.\n\n(9.201,12.583) SPEAKER_01 : Welcome to another episode of the TWIML AI Podcast.\n\n(12.583,14.965) SPEAKER_01 : I am your host, Sam Charrington.\n\n(14.965,16.927) SPEAKER_01 : And today I'm joined by Su-In Lee.\n\n(16.927,24.753) SPEAKER_01 : Su-In is a professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington.\n\n(24.753,30.497) SPEAKER_01 : Before we get going, be sure to take a moment to hit that subscribe button wherever you're listening to today's show.\n\n(30.497,32.739) SPEAKER_01 : Su-In, welcome to the podcast.\n\n(32.739,33.92) SPEAKER_02 : Thank you for the introduction.\n\n(34.79,38.132) SPEAKER_01 : I'm looking forward to digging into our talk.\n\n(38.132,46.357) SPEAKER_01 : You are an invited speaker at the 2023 ICML workshop on computational biology.\n\n(46.357,54.683) SPEAKER_01 : And we'll be talking about your talk there, which is really centered around your research into explainable AI, an important topic.\n\n(54.683,61.447) SPEAKER_01 : But before we jump into that, I'd love to have you share a little bit about your background and how you came to work in the field.\n\n(61.993,63.053) SPEAKER_02 : Thank you so much.\n\n(63.053,71.236) SPEAKER_02 : So my lab is currently working on a broad spectrum of problems, for example, developing explainable AI techniques.\n\n(71.236,73.076) SPEAKER_02 : So that's core machine learning.\n\n(73.076,79.558) SPEAKER_02 : And then we also 
work on identifying causes and treatments of challenging diseases such as cancer and Alzheimer's disease.\n\n(79.558,81.259) SPEAKER_02 : So that's computational biology.\n\n(81.259,87.801) SPEAKER_02 : And then also we develop clinical diagnosis or auditing frameworks for clinical AI.\n\n(88.641,91.243) SPEAKER_02 : And then you asked about how I got into this field.\n\n(91.243,96.706) SPEAKER_02 : So I was trained as a machine learning researcher when I was a PhD student.\n\n(96.706,101.309) SPEAKER_02 : I was working on the problem of dealing with high dimensional data.\n\n(101.309,104.592) SPEAKER_02 : And then at that time, when I was a PhD student at Stanford,\n\n(105.472,111.895) SPEAKER_02 : In the field of computational biology, something really exciting happened: something called microarray data.\n\n(111.895,116.176) SPEAKER_02 : So it's gene expression data that measures expression levels of 20,000 genes.\n\n(116.176,128.401) SPEAKER_02 : And I suddenly thought that if machine learning researchers develop a powerful and effective method to identify causes of diseases such as cancer and then therapeutic targets,\n\n(129.101,131.443) SPEAKER_02 : for those diseases.\n\n(131.443,137.229) SPEAKER_02 : Then as a machine learning researcher, I can contribute hugely to the science and also medicine.\n\n(137.229,140.432) SPEAKER_02 : And I just fell in love with this field.\n\n(140.432,147.76) SPEAKER_02 : So that's how I got into the research at the intersection of machine learning and computational biology.\n\n(148.3,167.945) SPEAKER_02 : After I got a job at the University of Washington that has a very strong medical school, and then I had wonderful colleagues, amazing people who had medical data, electronic health records, and then introduced me to this field of EHR data analysis in various\n\n(168.605,174.249) SPEAKER_02 : clinical departments, anesthesiology and dermatology, and then emergency medicine.\n\n(174.249,186.656) 
SPEAKER_02 : And then I just got really interested into the possibility, the potential that AI researchers or machine learning researchers myself and my students can contribute to medicine.\n\n(186.656,190.799) SPEAKER_02 : That's how I got into this field of largely three fields.\n\n(190.799,193.401) SPEAKER_02 : So one is machine learning and AI.\n\n(193.401,197.023) SPEAKER_02 : And the second is computational biology and then clinical medicine.\n\n(197.509,206.042) SPEAKER_01 : You probably thought that you had to deal with messy data when you were in clinical biology and computational biology until you saw some of that EHR data.\n\n(206.042,207.725) SPEAKER_01 : That data can be very messy.\n\n(208.147,208.827) SPEAKER_02 : It is.\n\n(208.827,211.709) SPEAKER_02 : The goals of the fields are slightly different to each other.\n\n(211.709,217.693) SPEAKER_02 : But in the future, I strongly believe that those two fields will merge, biology and medicine.\n\n(217.693,226.238) SPEAKER_02 : So in a clinical side, researchers are already generating the biological molecular biology data from patients.\n\n(226.238,234.963) SPEAKER_02 : So for example, for cancer patients, you can think about measuring the gene expression levels or genetic data from those cancer patients.\n\n(234.963,237.465) SPEAKER_02 : And then what you want is the treatment.\n\n(237.985,248.617) SPEAKER_02 : You want the AI or machine learning models to tell you which treatment, which drug, anti-cancer drugs are going to work the best for that particular patient.\n\n(248.617,255.385) SPEAKER_02 : For that, you definitely need the biological knowledge and then actual mechanistic understanding of cancer.\n\n(256.993,265.478) SPEAKER_01 : And what says to you that the fields will merge as opposed to kind of collaborate closely?\n\n(265.478,274.624) SPEAKER_01 : Clearly they need to collaborate closely, but when I think of merge, and maybe I'm taking this too far, I'm thinking of like single models that 
operate in both domains.\n\n(275.769,276.87) SPEAKER_02 : Yeah, I know what you're saying.\n\n(276.87,294.687) SPEAKER_02 : So I tell my students or other young people that to actually move the field forward, to advance this field of biology, medicine, or biomedical sciences, you really need to become a bilingual researcher, or even trilingual these days.\n\n(294.687,298.411) SPEAKER_02 : You know, computer science plus biology plus medicine.\n\n(298.411,298.771) SPEAKER_02 : When you\n\n(299.832,307.753) SPEAKER_02 : are you have one brain that really thinks like, you know, machine learning researchers and biologists and then clinical experts.\n\n(307.753,318.675) SPEAKER_02 : It's, you know, usually that really helps to come up with creative approach and that can really move the field to benefit patients.\n\n(318.675,328.277) SPEAKER_02 : And then at the end, the ultimate goal of a biology and molecular biology is to understand life better so that you can advance the health.\n\n(328.677,329.937) SPEAKER_02 : of humans, right?\n\n(329.937,342.821) SPEAKER_02 : So I think collaborations definitely help, but at the end, we really need to think about how to produce these young researchers so that they really think like experts in this area.\n\n(342.821,348.943) SPEAKER_02 : These things already happened earlier in computational biology than clinical medicine.\n\n(348.943,352.504) SPEAKER_02 : And when I was doing the PhD, it was usually based on collaborations.\n\n(352.844,367.067) SPEAKER_02 : people who were trained primarily as a machine learning researcher and people who were trained as molecular biologists who hold pipettes and they work in the wet labs and then they form a collaboration and then write papers.\n\n(367.828,375.554) SPEAKER_02 : But then later, you know, we see a lot of departments that's named, you know, computational biology or, you know, biomedical science departments.\n\n(375.554,381.419) SPEAKER_02 : So it's a really healthy move for, you 
know, this kind of interdisciplinary fields.\n\n(381.419,382.5) SPEAKER_02 : It makes a total difference.\n\n(383.182,383.822) SPEAKER_01 : Yeah.\n\n(383.822,392.708) SPEAKER_01 : Your research and again, your presentation at the conference are focused on explainable AI, XAI.\n\n(392.708,401.092) SPEAKER_01 : Tell us a little bit about some of the things that you think are most important about explainability as applied to these fields.\n\n(401.092,408.477) SPEAKER_01 : I think we get that machine learning and models in general can be opaque and make important high stakes decisions.\n\n(408.477,410.278) SPEAKER_01 : You need some degree of explainability.\n\n(411.08,415.236) SPEAKER_01 : What's unique about your take in applying explainability in your field?\n\n(415.757,416.197) SPEAKER_02 : Right.\n\n(416.197,416.938) SPEAKER_02 : OK, thank you.\n\n(416.938,418.279) SPEAKER_02 : That's an excellent question.\n\n(418.279,427.346) SPEAKER_02 : So the core part of explainable AI, at least this theoretical framework, it basically means feature attributions.\n\n(427.346,429.848) SPEAKER_02 : So imagine you have a black box model.\n\n(429.848,439.615) SPEAKER_02 : You have a set of input, a vector x. And then you have an output y. 
And then when you have a prediction, you want to find a way to attribute to features.\n\n(439.615,443.158) SPEAKER_02 : You want to know which features contributed the most.\n\n(443.738,446.56) SPEAKER_02 : And then, you know, there are mathematical frameworks.\n\n(446.56,450.884) SPEAKER_02 : Our particular approach that's called the SHAP framework, it is based on game theory.\n\n(450.884,455.007) SPEAKER_02 : So you want to find a way to understand which features are important.\n\n(455.007,459.17) SPEAKER_02 : So that's the core of the technical side of explainable AI.\n\n(459.59,470.355) SPEAKER_02 : And then on the other hand, if you just apply this explainable AI technique, you know, off the shelf explainable AI algorithm to biology, mostly it's useless.\n\n(470.355,472.276) SPEAKER_02 : It's not very useful.\n\n(472.276,475.137) SPEAKER_02 : It's not useful in terms of biological insights.\n\n(475.797,481.478) SPEAKER_02 : What you really want to understand is how these features collaborate with each other.\n\n(481.478,484.439) SPEAKER_02 : Imagine that you have a set of genes as a feature.\n\n(484.439,490.3) SPEAKER_02 : So you have 20,000 genes, 20,000 expression levels are the input of the black box model.\n\n(490.3,496.041) SPEAKER_02 : And then your prediction is which cancer drug is going to work the best for each patient.\n\n(496.041,503.943) SPEAKER_02 : And then individual genes' contributions and then gene importance scores by themselves, they are not going to be really useful.\n\n(503.943,505.363) SPEAKER_02 : It will be only useful when\n\n(506.063,519.146) SPEAKER_02 : some explainable AI model, explainable AI algorithm can tell you which pathway, how genes collaborate with each other and then how genetic factors play a role into that.\n\n(519.146,525.068) SPEAKER_02 : And then also how that leads to the good prognosis of the cancer patient and also\n\n(525.768,529.77) SPEAKER_02 : sensitivity, the good responsiveness to that 
drug.\n\n(529.77,532.632) SPEAKER_02 : So there is something missing there.\n\n(532.632,548.38) SPEAKER_02 : And then the uniqueness of my research is that we want to develop this explainable AI method for biology and then also clinical medicine such that it can make real meaningful contribution to these fields.\n\n(548.38,552.762) SPEAKER_02 : Another example in the medicine side is that imagine that you have a deep\n\n(553.382,557.605) SPEAKER_02 : model, deep neural network, that's going to take you a dermatology image.\n\n(557.605,562.208) SPEAKER_02 : So say that you find something unusual in your skin and then you take a picture.\n\n(562.208,563.89) SPEAKER_02 : That's your dermatological image.\n\n(563.89,568.393) SPEAKER_02 : And then let's say that you want to know that has features of melanoma or not.\n\n(568.913,572.756) SPEAKER_02 : So the prediction results itself is not going to be really useful.\n\n(572.756,588.428) SPEAKER_02 : And then even the current explainable AI methods that's going to tell you which pixels, which parts of the images led to the prediction of melanoma or not, those are not going to be very useful to understand how this black box model really works.\n\n(589.255,600.879) SPEAKER_02 : When you try, for example, that you modify the image and then generate a counterfactual, small changes to the image such that it changes the prediction.\n\n(600.879,604.66) SPEAKER_02 : Let's say that that changes the prediction from melanoma to normal.\n\n(604.66,612.502) SPEAKER_02 : Only then you can understand how this model works, what the reasoning process of this black box machine learning model is like.\n\n(612.942,618.283) SPEAKER_02 : So those examples, I'm going to show many examples like that.\n\n(618.283,638.788) SPEAKER_02 : Basically, the message there is going to be that the current state-of-the-art explainable AI that tells you theoretically supported importance values for the features are not going to be enough to make meaningful 
contributions to both biological science and then also clinical medicine as well.\n\n(639.467,659.61) SPEAKER_01 : It sounds like you're calling out a broad deficiency in the approach and kind of saying that as opposed to this feature level explainability, we need more system level or process level explainability that is more grounded in the use cases or the application than what we have available today.\n\n(660.383,661.284) SPEAKER_02 : Exactly.\n\n(661.284,663.566) SPEAKER_02 : The question is how to do that.\n\n(663.566,667.13) SPEAKER_02 : For that, we need a new explainable AI method.\n\n(667.13,676.56) SPEAKER_02 : In the first part of the talk, I'm going to show many examples of what explainable AI, almost as is, can do.\n\n(676.56,679.062) SPEAKER_02 : Those are the papers that we published\n\n(679.743,685.785) SPEAKER_02 : a couple of years ago, so that it addresses new scientific questions.\n\n(685.785,691.567) SPEAKER_02 : Even explainable AI or feature attribution methods as is can be useful.\n\n(691.567,695.988) SPEAKER_02 : So I'm going to show many examples like that in both biology and medicine.\n\n(695.988,706.172) SPEAKER_02 : But in the second part of the talk, I'm going to show how explainable AI can even open new research directions specifically for biology and health care.\n\n(707.012,719.374) SPEAKER_02 : So those examples I showed you, the systems level insights or this counterfactual image generation that can facilitate collaboration with humans, in this case, a clinical expert.\n\n(719.374,726.756) SPEAKER_02 : So in the second part of the talk, I'm going to show how this explainable AI can open new research directions.\n\n(726.756,731.597) SPEAKER_02 : And then part of the second part will be I'm going to have a deep dive into our recent paper,\n\n(732.177,739.645) SPEAKER_02 : to highlight how Explainable AI can help cancer medicine design, cancer therapy design.\n\n(739.645,747.474) SPEAKER_02 : So basically, how to choose two 
chemotherapy drugs that's going to have a synergy for a particular patient.\n\n(747.474,751.178) SPEAKER_02 : So that's the paper that was recently published in Nature Biomedical Engineering.\n\n(751.701,767.357) SPEAKER_01 : Before we dig into that paper, the most recent paper, can you talk us through in a little bit more detail some of the examples of the foundational machine learning research and how they contribute to the problems you're trying to solve?\n\n(768.274,768.914) SPEAKER_02 : Okay.\n\n(768.914,773.84) SPEAKER_02 : So some of the foundational AI methods we developed, I'm going to talk about.\n\n(773.84,776.883) SPEAKER_02 : It can be summarized into three parts.\n\n(776.883,783.35) SPEAKER_02 : So one is, you know, principled understanding of current explainable AI methods.\n\n(783.35,785.993) SPEAKER_02 : So specifically feature attribution methods.\n\n(786.233,809.664) SPEAKER_02 : So, for example, in one work, we showed that our feature attribution method, that's SHAP, it was published in NeurIPS in 2017, we showed that it unifies a large portion of the explainable AI literature and 25 methods following the exact same principle, and all explaining by removing features.\n\n(810.104,821.856) SPEAKER_02 : So it turned out that 25 methods, feature attribution methods that are widely used in the field and machine learning applications, they all go by the same principle.\n\n(821.856,828.003) SPEAKER_02 : You want to assess the importance of each feature by removing them or removing subsets of them.\n\n(828.363,832.564) SPEAKER_02 : So that helps us understand what goes on.\n\n(832.564,839.567) SPEAKER_02 : For example, when they fail, you want to understand what goes on and also improve and then develop new explainable AI methods.\n\n(839.567,844.168) SPEAKER_02 : So I'm going to introduce a couple of unifying frameworks.\n\n(844.168,850.33) SPEAKER_02 : So this is about how to understand the principled understanding of feature attribution 
methods.\n\n(851.17,861.837) SPEAKER_02 : Also, on a computational side, we have explored many avenues to make this SHAP computation even feasible and faster.\n\n(861.837,867.041) SPEAKER_02 : So, SHAP stands for SHapley Additive... I suddenly forgot.\n\n(867.041,867.921) SPEAKER_02 : I can't forget this.\n\n(867.921,868.622) SPEAKER_01 : Explanations.\n\n(869.521,871.905) SPEAKER_02 : Yes, SHapley Additive exPlanations.\n\n(871.905,878.615) SPEAKER_01 : It's kind of weird because they chose the third letter of the word.\n\n(878.615,882.821) SPEAKER_02 : That's the first author, my student, Scott's choice.\n\n(882.821,883.942) SPEAKER_02 : I love the name, by the way.\n\n(884.503,892.331) SPEAKER_02 : Computing SHAP values is theoretically very well supported, but then computation-wise, it's not really easy to compute.\n\n(892.331,894.413) SPEAKER_02 : It involves exponential computation.\n\n(894.413,899.998) SPEAKER_02 : So we need to develop approximation methods such that we can compute them in a feasible manner.\n\n(899.998,904.262) SPEAKER_02 : So we developed many fast statistical estimation approaches\n\n(905.003,912.575) SPEAKER_02 : And then you want to make sure that there is a convergence and all the desirable theoretical properties are already there.\n\n(912.575,918.083) SPEAKER_02 : And then also, we developed approaches for specific model types.\n\n(918.083,920.326) SPEAKER_02 : For example, ensemble tree models.\n\n(920.947,922.629) SPEAKER_02 : And then also deep neural networks.\n\n(922.629,925.573) SPEAKER_02 : So we have DeepSHAP and then TreeSHAP.\n\n(925.573,929.879) SPEAKER_02 : And then more recently, we also have a vision transformer Shapley.\n\n(929.879,935.407) SPEAKER_02 : So that's a way to compute the Shapley values for transformers, vision transformers.\n\n(936.235,938.637) SPEAKER_02 : And then there is another one that's called FastSHAP.\n\n(938.637,948.302) SPEAKER_02 : So the one way to make the SHAP computation more 
feasible is to focus on specific particular aspects of models.\n\n(948.302,954.066) SPEAKER_02 : So for example, tree ensembles or deep neural network, they have some particular model types.\n\n(954.066,964.933) SPEAKER_02 : There is a way to make this computation a little faster, basically make... So model specific versions of SHAP implementation.\n\n(965.473,966.393) SPEAKER_02 : Yes, yes.\n\n(966.393,966.673) SPEAKER_02 : Yeah.\n\n(966.673,969.014) SPEAKER_02 : So that's another line of research.\n\n(969.014,975.796) SPEAKER_02 : And then more recently, we also started to understand the robustness of the SHAP values.\n\n(975.796,977.856) SPEAKER_02 : So adversarial attack.\n\n(977.856,988.899) SPEAKER_02 : A few years ago, in the field of machine learning, researchers have tried to understand how robust the machine learning model itself, the prediction results are toward\n\n(989.419,990.9) SPEAKER_02 : adversarial attacks.\n\n(990.9,996.544) SPEAKER_02 : And then now we are looking into this issue in terms of the model explanations.\n\n(996.544,999.826) SPEAKER_02 : So how feature attributions are robust.\n\n(999.826,1007.151) SPEAKER_02 : So in our most recent paper, we basically showed the removal-based approaches, including SHAP.\n\n(1007.151,1014.536) SPEAKER_02 : Like earlier I said, many of the feature attribution methods turned out to have the same principle, which is explaining by removal.\n\n(1014.536,1016.197) SPEAKER_02 : So that line of, you know,\n\n(1017.037,1021.439) SPEAKER_02 : methods is more robust to this kind of adversarial attacks.\n\n(1021.439,1031.642) SPEAKER_02 : So, and then, you know, multimodality, you know, those other kinds of issues, we are actively doing this research in terms of, you know, foundational AI algorithms also.\n\n(1032.362,1041.952) SPEAKER_01 : And SHAP, as you've mentioned, is broadly used, both the original algorithm as well as the related algorithms as you described.\n\n(1041.952,1047.417) SPEAKER_01 : 
But it's also one of the first explainability approaches to be popularized\n\n(1048.238,1051.501) SPEAKER_01 : Where does it sit in terms of relevance?\n\n(1051.501,1070.518) SPEAKER_01 : Are there different kind of wholly different approaches that have overtaken it in popularity or applicability based on kind of today's models and applications or is SHAP still kind of a core approach to the way explainability is looked at in practice?\n\n(1071.373,1073.514) SPEAKER_02 : It's more on the latter side.\n\n(1073.514,1080.577) SPEAKER_02 : We believe that this removal-based approach and in this cooperative game theory, we believe in that.\n\n(1080.577,1084.539) SPEAKER_02 : And then also, it has the desirable properties, first of all.\n\n(1084.539,1094.403) SPEAKER_02 : And then we, in our many experiments, we still see that removal-based approaches are more robust, as I said, to those adversarial attacks.\n\n(1094.403,1098.545) SPEAKER_02 : And then also, in terms of various evaluation criteria, we still think that those\n\n(1099.045,1107.935) SPEAKER_02 : methods are more robust than the other class, which we characterized as a propagation-based approach or gradient-based approaches.\n\n(1107.935,1111.259) SPEAKER_02 : So we would prefer just removal-based approaches.\n\n(1111.259,1115.744) SPEAKER_02 : But on the other hand, those approaches are computationally very intensive.\n\n(1115.744,1115.864) SPEAKER_02 : So\n\n(1116.244,1129.678) SPEAKER_02 : Well, the way SHAP works is basically that you try all subsets of features and then you add a feature of interest and then see the model, check the model output and you average across all subsets of features.\n\n(1129.678,1132.781) SPEAKER_02 : So as you can imagine, it's computationally very intensive.\n\n(1132.781,1136.184) SPEAKER_02 : So when we now think about foundational models or\n\n(1136.905,1142.228) SPEAKER_02 : large language models, these really large models with a ton, a lot of 
parameters.\n\n(1142.228,1148.531) SPEAKER_02 : And then deep neural network and, you know, gradient computation is perhaps easier than trying all sorts of features, right?\n\n(1148.531,1154.394) SPEAKER_02 : So practically, it's not, you know, as easy as the other class in terms of\n\n(1155.094,1160.497) SPEAKER_02 : the computation, but we still want to make this computational more feasible.\n\n(1160.497,1175.385) SPEAKER_02 : We want to develop various clever approaches to reduce the computation and then still maintain the desirable theoretical properties that this removal-based approach or SHAP in particular has.\n\n(1175.385,1175.625) SPEAKER_01 : Got it.\n\n(1176.299,1191.191) SPEAKER_01 : And so that is an example of kind of the foundational research that your lab does that contributes not only to your work on the biological science side or computational biology side, but broadly to the field.\n\n(1191.191,1199.177) SPEAKER_01 : And then your more recent paper is an example of the kind of contributions you're making on the medicine side.\n\n(1199.177,1201.399) SPEAKER_01 : Can you talk a little bit about that cancer paper?\n\n(1201.943,1202.964) SPEAKER_02 : Yeah, sure.\n\n(1202.964,1204.566) SPEAKER_02 : It is about AML.\n\n(1204.566,1208.03) SPEAKER_02 : So we chose AML as an example application.\n\n(1208.03,1216.079) SPEAKER_02 : So it's acute myeloid leukemia, it's aggressive blood cancer, and it's relatively common for older people.\n\n(1216.079,1223.587) SPEAKER_02 : So to give you a bit of a background in general, the cutting edge in the treatment of cancers, such as AML,\n\n(1223.867,1226.968) SPEAKER_02 : has increasingly become combination therapy.\n\n(1226.968,1238.312) SPEAKER_02 : So the rationale here is that by choosing drugs that target complementary biological pathways, we can achieve greater anti-cancer efficacy.\n\n(1238.312,1242.734) SPEAKER_02 : So basically, you choose two or three chemotherapy drugs,\n\n(1243.394,1251.941) SPEAKER_02 : 
and then use them together so that when there is a synergy, usually there is a very good anti-cancer efficacy.\n\n(1251.941,1256.906) SPEAKER_02 : But the issue is that choosing optimal combinations of drugs is a really hard problem.\n\n(1256.906,1265.954) SPEAKER_02 : So there are about hundreds of individual FDA approved anti-cancer drugs, which means that there will be tens of thousands of possible combinations.\n\n(1266.594,1279.4) SPEAKER_02 : That's when you consider pairwise combinations, and there could be even more if you consider non-FDA approved experimental drugs in development, or consider a combination of more than two drugs.\n\n(1279.4,1291.426) SPEAKER_02 : And then the different patients, even patients who have the same type of cancer may respond differently to the exact same drugs because of this individual, the particular genomic characteristics.\n\n(1292.166,1295.449) SPEAKER_02 : You can then formulate this problem as a machine learning problem.\n\n(1295.449,1309.6) SPEAKER_02 : So you take this AML patient's gene expression levels, so you get the blood of the patient and then purify the cells so you have only cancer cells, and then say you measure expression levels of 20,000 genes.\n\n(1309.6,1312.662) SPEAKER_02 : So mathematically, this is a 20,000-dimensional vector.\n\n(1313.563,1323.967) SPEAKER_02 : And then also, let's say you consider a pair of drugs, drugs A and B, and then you use various information about these drugs.\n\n(1323.967,1328.028) SPEAKER_02 : For example, structure of these drugs or their biological targets.\n\n(1328.028,1331.889) SPEAKER_02 : There are many data sets that can tell you that information.\n\n(1331.889,1339.072) SPEAKER_02 : And then you take those as a machine learning input, and then you want to predict the synergy between the drugs A and B.\n\n(1339.852,1347.817) SPEAKER_02 : So in this kind of a problem, and as I said, there will be tens of thousands of pairwise combinations of those drugs.\n\n(1347.817,1355.803) SPEAKER_02 
: And so in this kind of situation, not only the prediction, but also explanations will be extremely important.\n\n(1355.803,1362.908) SPEAKER_02 : So say you want to be able to say that drug A and B is going to work well, are going to have a synergy together because\n\n(1363.568,1371.355) SPEAKER_02 : this patient X has gene expression levels of A, B, and C high.\n\n(1371.355,1377.7) SPEAKER_02 : Or you say expression levels of a certain biological pathway, those genes are highly expressed.\n\n(1377.7,1380.442) SPEAKER_02 : So you need a set of explanation to do that.\n\n(1380.442,1383.185) SPEAKER_02 : And then more importantly, if you think about\n\n(1383.765,1384.807) SPEAKER_02 : all pairs of drugs.\n\n(1384.807,1394.458) SPEAKER_02 : If there is an underlying principle in terms of when two drugs are likely to have a synergy, then it's going to be even more useful.\n\n(1394.458,1398.804) SPEAKER_02 : So what we did in this paper was that we got the explanations.\n\n(1398.804,1400.806) SPEAKER_02 : We computed the SHAP values for\n\n(1401.647,1421.131) SPEAKER_02 : many combinations of drugs from the machine learning model, and then we analyzed that, and then we identified the unifying principle in terms of when, in what case, any pair of drugs A and B have a synergy, and then we identified the pathway.\n\n(1421.131,1424.732) SPEAKER_02 : It is called stemness pathway.\n\n(1424.732,1431.313) SPEAKER_02 : It is also called, trying to find in that part of the slide, this hematopoietic stem cell-like signature.\n\n(1432.053,1436.134) SPEAKER_02 : Cancers are sometimes more differentiated or less differentiated.\n\n(1436.134,1439.976) SPEAKER_02 : If you had a family member who had cancer, you probably understand this term.\n\n(1439.976,1446.498) SPEAKER_02 : Usually, less differentiated cancers have worse prognosis than more differentiated cancers.\n\n(1447.338,1467.014) SPEAKER_02 : we identified this pathway that's really relevant to this stem-ness mechanism 
and then found the underlying principle, which basically says that it's good to have two drugs, one drug targeting less differentiated, the other one targeting more differentiated cancer, will likely work the best.\n\n(1467.014,1468.956) SPEAKER_02 : So in this project, not only\n\n(1469.616,1480.822) SPEAKER_02 : our algorithm can tell oncologists or biological scientists which genes are important, which feature attributions, which features are important for drug synergy.\n\n(1480.822,1495.829) SPEAKER_02 : But also, by analyzing many model explanations from many patients, we can have an understanding of these underlying principles in terms of what makes a successful drug combination therapy.\n\n(1496.289,1498.27) SPEAKER_02 : Cancer therapy design, I would say.\n\n(1498.27,1506.873) SPEAKER_02 : This is an example where we can see how explainable AI can be effective in cancer therapy design.\n\n(1506.873,1519.058) SPEAKER_01 : Is AML unique in having a well-understood pathway or is that a bottleneck for the application of this technique to the broader set of cancers?\n\n(1519.58,1523.662) SPEAKER_02 : Oh, so AML is just one example.\n\n(1523.662,1527.564) SPEAKER_02 : I mean, this kind of principle can be applied to many data sets.\n\n(1527.564,1533.387) SPEAKER_02 : You know, computational biologists often need to work on the problem where the data are available.\n\n(1533.387,1545.133) SPEAKER_02 : So, you know, as you can imagine, blood cancers, those tissues are relatively easy to, it's relatively easier to obtain, you know, blood tissues compared to other kinds of tissues.\n\n(1546.013,1553.155) SPEAKER_02 : There are many available data sets and then also the measurement of the drug synergy from many samples.\n\n(1553.155,1558.096) SPEAKER_02 : So we happen to choose this cancer type because of the data availability.\n\n(1558.096,1563.618) SPEAKER_02 : But this approach can be broadly applicable to other types of cancer.\n\n(1564.027,1566.389) SPEAKER_02 : So 
this is one of the... Yeah, go ahead.\n\n(1566.389,1581.904) SPEAKER_01 : I'm maybe trying to get a broader question, which is the explainability method is kind of explaining over a set of known features and pathways and processes and things like that.\n\n(1581.904,1586.208) SPEAKER_01 : And my sense is that for many of the\n\n(1586.949,1593.234) SPEAKER_01 : potential applications, the pathways are still a subject of research themselves.\n\n(1593.234,1598.599) SPEAKER_01 : Meaning, you know, maybe there's some aspect of pathway that's known, but there are others.\n\n(1598.599,1603.143) SPEAKER_01 : There are, you know, or some diseases for which there aren't pathways.\n\n(1603.143,1604.264) SPEAKER_01 : And I guess I'm wondering\n\n(1604.884,1611.63) SPEAKER_01 : the way you think about applying techniques like this in a... A, is that actually the case or am I all wrong there?\n\n(1611.63,1619.017) SPEAKER_01 : But otherwise, how will you apply techniques like this in rapidly evolving fields that are very complex, meaning... 
That's an excellent question.\n\n(1619.017,1628.746) SPEAKER_01 : Maybe you're giving an explanation and the explanation is based on the pathway as you understand it, but there's so many other things going on in the system that you really have not accounted for.\n\n(1629.184,1630.124) SPEAKER_02 : Yeah, exactly.\n\n(1630.124,1630.885) SPEAKER_02 : Right.\n\n(1630.885,1634.166) SPEAKER_02 : So first of all, Pathway is not unique to disease.\n\n(1634.166,1640.409) SPEAKER_02 : So when we say, you know, Pathway databases, it basically tells you the members of the genes in each pathway.\n\n(1640.409,1640.869) SPEAKER_02 : That's it.\n\n(1640.869,1643.67) SPEAKER_02 : I mean, it's like, you know, many, many sets of genes.\n\n(1643.67,1646.292) SPEAKER_02 : We also sometimes call it gene sets.\n\n(1646.292,1647.732) SPEAKER_02 : It doesn't depend on the disease.\n\n(1647.732,1656.016) SPEAKER_02 : And then the way we view is that it's not like all genes need to be activated for the pathway needs to be activated.\n\n(1656.596,1658.818) SPEAKER_02 : It would be only a subset of genes.\n\n(1658.818,1664.501) SPEAKER_02 : We would expect only a subset of genes to be highly expressed to say, you know, that pathway is activated.\n\n(1664.501,1674.768) SPEAKER_02 : And then it's really extremely important for a computational biologist when we develop, you know, a method like this to get biological insights from large scale data sets.\n\n(1674.768,1682.953) SPEAKER_02 : When we develop such a method, we need to make sure that it does not fully depend on any sort of prior knowledge.\n\n(1683.713,1686.095) SPEAKER_02 : And then the algorithm needs to be flexible.\n\n(1686.095,1688.277) SPEAKER_02 : So that's of key importance.\n\n(1688.277,1694.563) SPEAKER_02 : So in this particular example, we didn't use a pathway actually from the beginning.\n\n(1694.563,1698.967) SPEAKER_02 : When the model training happens, we used genes as individual features.\n\n(1698.967,1706.935) 
SPEAKER_02 : And then we analyzed the feature attributions and then did the statistical test to see which pathways seem to be more activated.\n\n(1707.395,1709.057) SPEAKER_02 : You made a really good point.\n\n(1709.057,1713.683) SPEAKER_02 : In all computational biology methods, it's really important not to make it too rigid.\n\n(1713.683,1717.287) SPEAKER_02 : For the existing knowledge, it needs to be flexible.\n\n(1717.287,1721.372) SPEAKER_01 : And so how do you evaluate your results in this particular paper?\n\n(1722.069,1732.257) SPEAKER_02 : Oh, so say that you have a feature attribution for all genes, for a certain patient, and then for a certain combination of drugs.\n\n(1732.257,1736.961) SPEAKER_02 : And then say you will have a lot of feature attributions then, right?\n\n(1736.961,1741.785) SPEAKER_02 : Combining all patients and all pairs of drugs you considered.\n\n(1741.785,1744.727) SPEAKER_02 : And then we perform the statistical test.\n\n(1744.727,1747.51) SPEAKER_02 : So for example, it's a simple, you know, Fisher's exact\n\n(1747.97,1761.441) SPEAKER_02 : test kind of statistical test where you see whether there is significantly, you know, large value of attribution values for certain set of genes defined by certain pathway.\n\n(1761.441,1768.787) SPEAKER_02 : And then you do, you know, multiple hypothesis testing and then see whether that, you know, significance is indeed is relevant.\n\n(1768.787,1775.173) SPEAKER_02 : So the pathway-based analysis was done in a post-hoc manner after model training and then\n\n(1775.713,1778.474) SPEAKER_02 : obtaining all, you know, model explanations.\n\n(1778.474,1790.477) SPEAKER_02 : So another challenge we ran into in that project was that was really not addressed properly by this foundational AI field was feature correlation.\n\n(1790.477,1795.779) SPEAKER_02 : So in many biomedical data sets, you will see lots of features that are correlated with each other.\n\n(1795.779,1797.739) SPEAKER_02 : Many 
genes are correlated.\n\n(1797.739,1800.16) SPEAKER_02 : It's a really modular gene expression, you know,\n\n(1800.68,1807.266) SPEAKER_02 : levels are very modular, so you easily see a subset of genes that are very highly correlated with each other.\n\n(1807.266,1818.036) SPEAKER_02 : So in that kind of case, SHAP values are not going to be extremely accurate because imagine that there are two genes that are perfectly correlated with each other.\n\n(1818.036,1818.356) SPEAKER_02 : Then\n\n(1818.977,1823.36) SPEAKER_02 : There will be infinite ways to attribute to these two genes.\n\n(1823.36,1831.444) SPEAKER_02 : So in that paper, in that Nature Biomedical Engineering paper, we addressed it by considering ensemble model.\n\n(1831.444,1835.407) SPEAKER_02 : So we ran many ensemble of model explanations.\n\n(1835.407,1837.308) SPEAKER_02 : So we ran the model.\n\n(1837.308,1839.589) SPEAKER_02 : In this case, it was not your deep neural network.\n\n(1839.589,1841.03) SPEAKER_02 : It was tree ensembles.\n\n(1841.03,1842.891) SPEAKER_02 : And then we averaged.\n\n(1842.891,1846.493) SPEAKER_02 : We averaged the feature attributions that are from many models.\n\n(1846.773,1853.262) SPEAKER_02 : And then we showed that it gives you more robust feature attributions when the features are correlated with each other.\n\n(1853.262,1855.545) SPEAKER_01 : Awesome.\n\n(1855.545,1860.472) SPEAKER_01 : So talk a little bit about where you see the future of your research going.\n\n(1861.683,1864.565) SPEAKER_02 : That's a really important question.\n\n(1864.565,1878.114) SPEAKER_02 : In all three ways, first of all, in the foundational AI method, as I briefly mentioned, this robustness issues and also multi-modal data.\n\n(1878.114,1883.337) SPEAKER_02 : Let's say that you have a set of features and each feature belongs to different category.\n\n(1883.337,1884.978) SPEAKER_02 : They are in different modality and then\n\n(1885.618,1891.021) SPEAKER_02 : how to attribute to these 
features that are in different modalities.\n\n(1891.021,1893.603) SPEAKER_02 : So that's an open problem.\n\n(1893.603,1900.287) SPEAKER_02 : So it was actually motivated by biomedical problem, but it's widely applicable to other applications.\n\n(1902.194,1908.516) SPEAKER_02 : And then also these emerging models of LLMs or other foundational models.\n\n(1908.516,1916.757) SPEAKER_02 : And in this kind of really large models, how to actually compute the feature attributions properly.\n\n(1916.757,1926.98) SPEAKER_02 : And then also, we are really interested in sample-based importance to say that you transpose the matrix transpose of your feature matrix.\n\n(1926.98,1931.861) SPEAKER_02 : So I've been talking about these feature attributions a lot, but you can also apply Shapley values\n\n(1932.501,1937.404) SPEAKER_02 : to gain insights into which samples are important for your model training.\n\n(1937.404,1948.73) SPEAKER_02 : So that can help us understand how foundational models in various fields or large language models rely on which training samples.\n\n(1948.73,1959.015) SPEAKER_02 : So that can be really important for model auditing perspective, first of all, and then to gain insight in terms of which samples were important.\n\n(1959.495,1963.117) SPEAKER_02 : for these large models to behave a certain way, right?\n\n(1963.117,1968.659) SPEAKER_02 : So sample-based explanation is also one of the things that we are mainly working on.\n\n(1968.659,1972.2) SPEAKER_02 : In the biomedical side, there are many projects.\n\n(1972.2,1978.403) SPEAKER_02 : So, you know, single cell data science is one of the big themes in my lab now.\n\n(1978.943,1986.667) SPEAKER_02 : So you obtain gene expression levels or other kinds of molecular level information at a single cell level.\n\n(1986.667,1989.788) SPEAKER_02 : So the advantage is that you will have a ton of samples.\n\n(1989.788,1999.672) SPEAKER_02 : So one experiment is going to give you many samples, which is 
really appropriate for large scale models these days based on deep neural networks.\n\n(1999.672,2005.675) SPEAKER_02 : So for example, the researchers started looking into foundational model for a single cell data set.\n\n(2005.955,2015.926) SPEAKER_02 : So in this kind of, you know, data sets that have still, you know, high dimensional and then researchers are now obtaining multi-omic data.\n\n(2015.926,2021.252) SPEAKER_02 : So not only gene expressions, you can also obtain other kinds of, you know, genomic information.\n\n(2021.252,2021.892) SPEAKER_02 : So that's going to\n\n(2022.533,2026.655) SPEAKER_02 : increase the dimensionality also, and then larger sample sizes.\n\n(2026.655,2032.979) SPEAKER_02 : How to learn the biologically interpretable representation space?\n\n(2032.979,2037.381) SPEAKER_02 : That's one of the big questions in the research in my lab.\n\n(2039.142,2045.229) SPEAKER_02 : All feature attribution methods at the end in the downstream prediction test, you attribute to features.\n\n(2045.229,2050.074) SPEAKER_02 : And then the assumption is that each feature is an interpretable unit.\n\n(2050.074,2055.059) SPEAKER_02 : In biology, as I mentioned earlier, it's not the case in biology, right?\n\n(2055.059,2061.966) SPEAKER_02 : So the functional unit in biology is much more interpretable than in any individual genes.\n\n(2062.226,2073.893) SPEAKER_02 : So how to learn the features that have more broadly representation, feature representation space that's biologically more interpretable.\n\n(2073.893,2079.296) SPEAKER_02 : And then also how to make foundational models learned based on single cell data sets.\n\n(2079.296,2088.101) SPEAKER_02 : So researchers started publishing those papers that are about applying this foundational model approach to single cell data sets.\n\n(2088.101,2091.243) SPEAKER_02 : And then how to make it biologically interpretable\n\n(2091.743,2104.866) SPEAKER_02 : so that you can gain scientific insights from the 
model results and then also audit those models to make sure that users can actually safely use them for scientific discoveries.\n\n(2104.866,2110.188) SPEAKER_02 : So, attribution methods for this kind of modern machine learning models\n\n(2111.168,2113.47) SPEAKER_02 : so that you can gain biological insights.\n\n(2113.47,2115.151) SPEAKER_02 : So that's another theme.\n\n(2115.151,2119.113) SPEAKER_02 : In a clinical side, we are really interested in this model auditing.\n\n(2119.113,2125.677) SPEAKER_02 : In our most recent paper that's in review, we are focusing on dermatology example.\n\n(2125.677,2126.958) SPEAKER_02 : So dermatological image\n\n(2127.778,2135.867) SPEAKER_02 : is inputted into deep neural network, and then you want to know whether the prediction result is melanoma or not.\n\n(2135.867,2147.059) SPEAKER_02 : There are many algorithms out there, some published in very high-profile medical journals, and also some available through the cell phone apps.\n\n(2147.059,2148.441) SPEAKER_02 : So there are many algorithms.\n\n(2148.741,2160.633) SPEAKER_02 : And then we recently tested them, we just separate the held out tested samples, and then got the result that's a little concerning in terms of usage.\n\n(2160.633,2165.478) SPEAKER_02 : And then our analysis showed that explainable AI was extremely helpful.\n\n(2165.978,2172.644) SPEAKER_02 : So for example, in the skin image, which part of the image led to that kind of a prediction?\n\n(2172.644,2177.948) SPEAKER_02 : Or as I said, using this counterfactual image generation.\n\n(2177.948,2184.173) SPEAKER_02 : So you make small changes to the input dermatology image such that it changes.\n\n(2184.173,2189.718) SPEAKER_02 : It crosses the decision boundary of the classifier and then see what features were changes.\n\n(2189.718,2194.682) SPEAKER_02 : So that way you can see the reasoning process of this classifier.\n\n(2194.922,2196.583) SPEAKER_02 : the clinical AI model.\n\n(2196.583,2206.411) 
SPEAKER_02 : So for that, there needs to be some technological development there because the feature attributions themselves are not going to be enough.\n\n(2206.411,2212.996) SPEAKER_02 : It shows only very small part of the inner workings of the machine learning model.\n\n(2212.996,2222.804) SPEAKER_02 : So developing methods for auditing clinical AI models, that's the research we are currently performing in the clinical area.\n\n(2223.264,2226.75) SPEAKER_02 : So all three areas, we are doing exciting research.\n\n(2226.75,2229.696) SPEAKER_01 : Well, Suin, it sounds like you've got a lot of work ahead of you.\n\n(2229.696,2230.096) SPEAKER_02 : Yes.\n\n(2230.096,2230.557) SPEAKER_02 : Yeah.\n\n(2230.557,2231.238) SPEAKER_02 : Very busy.\n\n(2231.238,2232.02) SPEAKER_01 : I bet.\n\n(2232.02,2233.863) SPEAKER_01 : Thanks so much for joining us.\n\n(2233.863,2234.504) SPEAKER_02 : Thank you.\n\n(2234.504,2235.726) SPEAKER_02 : Thank you for inviting me.\n\n(2235.726,2235.987) SPEAKER_01 : Thank you.\n\n(2239.223,2242.184) SPEAKER_00 : All right, everyone, that's our show for today.\n\n(2242.184,2248.167) SPEAKER_00 : To learn more about today's guest or the topics mentioned in this interview, visit TwiMLAI.com.\n\n(2248.167,2255.951) SPEAKER_00 : Of course, if you like what you hear on the podcast, please subscribe, rate, and review the show on your favorite podcatcher.\n\n(2255.951,2258.652) SPEAKER_00 : Thanks so much for listening and catch you next time."}, "podcast_summary": "In this podcast, Sam Charrington interviews Suin Lee, a professor at the University of Washington, about their research in explainable AI. Suin discusses their work in computational biology and clinical medicine, focusing on the development of explainable AI techniques and their application in identifying causes and treatments for diseases like cancer. 
They also talk about the importance of collaboration between machine learning researchers, biologists, and clinical experts in advancing the field. Suin shares examples of their research, including the use of feature attribution methods like SHAP to understand the importance of genes in cancer drug synergy. They also discuss the challenges of applying explainability techniques in rapidly evolving fields and the future directions of their research, such as sample-based explanations and model auditing in clinical AI.", "podcast_guest": {"podcast_guest": "Suin Lee", "podcast_guest_org": "Paul G. Allen School of Computer Science and Engineering at the University of Washington", "podcast_guest_title": "Professor", "wikipedia_summary": "Not able to Find info on Wikipedia"}, "podcast_highlights": "- Suin Lee is a professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington.\n- Suin's research focuses on explainable AI and its applications in computational biology and clinical medicine.\n- The podcast discusses the importance of explainability in AI and its relevance in understanding complex diseases like cancer.\n- Suin shares her background and how she got into the field of computational biology and medicine.\n- The conversation highlights the need for collaboration between machine learning researchers, biologists, and clinical experts.\n- Suin's research involves developing explainable AI techniques for identifying causes and treatments of diseases, such as cancer and Alzheimer's.\n- The podcast also explores the challenges of applying explainability methods to rapidly evolving fields and complex systems.\n- Suin discusses her recent paper on using explainable AI to design cancer therapy, specifically focusing on acute myeloid leukemia (AML).\n- The paper identifies the stemness pathway as a key factor in predicting drug synergy for AML patients.\n- The future of Suin's research includes addressing robustness issues, 
multi-modal data, and sample-based importance in foundational AI methods.\n- In the clinical field, Suin is working on auditing clinical AI models and developing methods for biologically interpretable", "key_moments_and_key_topics": "1) Key Topics of the podcast transcript:\n- Explainable AI in computational biology and clinical medicine\n- The importance of feature attributions in AI models\n- The challenges of applying explainability techniques to rapidly evolving fields\n- The future of research in foundational AI methods and biomedical applications\n- The need for model auditing in clinical AI\n\n2) Key Highlights of the podcast transcript:\n- (46.357,54.683) The importance of explainable AI in computational biology and clinical medicine\n- (81.259,87.801) The background and motivation of the guest's research in the intersection of machine learning, computational biology, and clinical medicine\n- (148.3,167.945) The guest's work on explainable AI and feature attribution methods, including the SHAP framework\n- (383.182,392.708) The guest's research on explainability as applied to cancer therapy design\n- (1968.659,1972.2) The future directions of research, including the challenges of feature correlation and the need for sample-based explanations", "gpt_podcast_transcript": "(0.169,0.289) Unknown : you\n\n(8.34,9.201) Sam Charrington : All right, everyone.\n\n(9.201,12.583) Sam Charrington : Welcome to another episode of the TwiML AI Podcast.\n\n(12.583,14.965) Sam Charrington : I am your host, Sam Charrington.\n\n(14.965,16.927) Sam Charrington : And today I'm joined by Suin Lee.\n\n(16.927,24.753) Sam Charrington : Suin is a professor at the Paul G. 
Allen School of Computer Science and Engineering at the University of Washington.\n\n(24.753,30.497) Sam Charrington : Before we get going, be sure to take a moment to hit that subscribe button wherever you're listening to today's show.\n\n(30.497,32.739) Sam Charrington : Suin, welcome to the podcast.\n\n(32.739,33.92) Suin Lee : Thank you for the introduction.\n\n(34.79,38.132) Sam Charrington : I'm looking forward to digging into our talk.\n\n(38.132,46.357) Sam Charrington : You are an invited speaker at the 2023 ICML workshop on computational biology.\n\n(46.357,54.683) Sam Charrington : And we'll be talking about your talk there, which is really centered around your research into explainable AI, an important topic.\n\n(54.683,61.447) Sam Charrington : But before we jump into that, I'd love to have you share a little bit about your background and how you came to work in the field.\n\n(61.993,63.053) Suin Lee : Thank you so much.\n\n(63.053,71.236) Suin Lee : So my lab is currently working on a broad spectrum of a problem, for example, developing explainable AI techniques.\n\n(71.236,73.076) Suin Lee : So that's a core machine learning.\n\n(73.076,79.558) Suin Lee : And then we also work on identifying cause and treatment of challenging diseases such as cancer and Alzheimer's disease.\n\n(79.558,81.259) Suin Lee : So that's computational biology.\n\n(81.259,87.801) Suin Lee : And then also we develop clinical diagnosis or auditing frameworks for clinical AI.\n\n(88.641,91.243) Suin Lee : And then you asked about how I got into this field.\n\n(91.243,96.706) Suin Lee : So I was trained as a machine learning researcher when I was a PhD student.\n\n(96.706,101.309) Suin Lee : I was working on the problem of dealing with high dimensional data.\n\n(101.309,104.592) Suin Lee : And then at that time, when I was a PhD student at Stanford,\n\n(105.472,111.895) Suin Lee : In the field of computational biology, there was something really exciting happened, something 
called a microarray data.\n\n(111.895,116.176) Suin Lee : So it's a gene expression data that measures expression levels of 20,000 genes.\n\n(116.176,128.401) Suin Lee : And I suddenly thought that if machine learning researchers develop a powerful and effective method to identify cause of diseases such as cancer and then therapeutic targets,\n\n(129.101,131.443) Suin Lee : for those diseases.\n\n(131.443,137.229) Suin Lee : Then as a machine learning researcher, I can contribute hugely to the science and also medicine.\n\n(137.229,140.432) Suin Lee : And I just fell in love with this field.\n\n(140.432,147.76) Suin Lee : So that's how I got into the research at the intersection of computational machine learning and computational biology.\n\n(148.3,167.945) Suin Lee : After I got a job at the University of Washington that has a very strong medical school, and then I had wonderful colleagues, amazing people who had medical data, electronic health records, and then introduced me to this field of EHR data analysis in various\n\n(168.605,174.249) Suin Lee : clinical departments, anesthesiology and dermatology, and then emergency medicine.\n\n(174.249,186.656) Suin Lee : And then I just got really interested into the possibility, the potential that AI researchers or machine learning researchers myself and my students can contribute to medicine.\n\n(186.656,190.799) Suin Lee : That's how I got into this field of largely three fields.\n\n(190.799,193.401) Suin Lee : So one is machine learning and AI.\n\n(193.401,197.023) Suin Lee : And the second is computational biology and then clinical medicine.\n\n(197.509,206.042) Sam Charrington : You probably thought that you had to deal with messy data when you were in clinical biology and computational biology until you saw some of that EHR data.\n\n(206.042,207.725) Sam Charrington : That data can be very messy.\n\n(208.147,208.827) Suin Lee : It is.\n\n(208.827,211.709) Suin Lee : The goals of the fields are slightly different 
to each other.\n\n(211.709,217.693) Suin Lee : But in the future, I strongly believe that those two fields will merge, biology and medicine.\n\n(217.693,226.238) Suin Lee : So in a clinical side, researchers are already generating the biological molecular biology data from patients.\n\n(226.238,234.963) Suin Lee : So for example, for cancer patients, you can think about measuring the gene expression levels or genetic data from those cancer patients.\n\n(234.963,237.465) Suin Lee : And then what you want is the treatment.\n\n(237.985,248.617) Suin Lee : You want the AI or machine learning models to tell you which treatment, which drug, anti-cancer drugs are going to work the best for that particular patient.\n\n(248.617,255.385) Suin Lee : For that, you definitely need the biological knowledge and then actual mechanistic understanding of cancer.\n\n(256.993,265.478) Sam Charrington : And what says to you that the fields will merge as opposed to kind of collaborate closely?\n\n(265.478,274.624) Sam Charrington : Clearly they need to collaborate closely, but when I think of merge, and maybe I'm taking this too far, I'm thinking of like single models that operate in both domains.\n\n(275.769,276.87) Suin Lee : Yeah, I know what you're saying.\n\n(276.87,294.687) Suin Lee : So I tell my students or other young people that to actually move the field forward, to advance this field of biology, medicine, or biomedical sciences, you really need to become a bilingual researcher, or even trilingual these days.\n\n(294.687,298.411) Suin Lee : You know, computer science plus biology plus medicine.\n\n(298.411,298.771) Suin Lee : When you\n\n(299.832,307.753) Suin Lee : are you have one brain that really thinks like, you know, machine learning researchers and biologists and then clinical experts.\n\n(307.753,318.675) Suin Lee : It's, you know, usually that really helps to come up with creative approach and that can really move the field to benefit patients.\n\n(318.675,328.277) 
Suin Lee : And then at the end, the ultimate goal of a biology and molecular biology is to understand life better so that you can advance the health.\n\n(328.677,329.937) Suin Lee : of humans, right?\n\n(329.937,342.821) Suin Lee : So I think collaborations definitely help, but at the end, we really need to think about how to produce these young researchers so that they really think like experts in this area.\n\n(342.821,348.943) Suin Lee : These things already happened earlier in computational biology than clinical medicine.\n\n(348.943,352.504) Suin Lee : And when I was doing the PhD, it was usually based on collaborations.\n\n(352.844,367.067) Suin Lee : people who were trained primarily as a machine learning researcher and people who were trained as molecular biologists who hold pipettes and they work in the wet labs and then they form a collaboration and then write papers.\n\n(367.828,375.554) Suin Lee : But then later, you know, we see a lot of departments that's named, you know, computational biology or, you know, biomedical science departments.\n\n(375.554,381.419) Suin Lee : So it's a really healthy move for, you know, this kind of interdisciplinary fields.\n\n(381.419,382.5) Suin Lee : It makes total difference.\n\n(383.182,383.822) Sam Charrington : Yeah.\n\n(383.822,392.708) Sam Charrington : Your research and again, your presentation at the conference are focused on explainable AI, XAI.\n\n(392.708,401.092) Sam Charrington : Tell us a little bit about some of the things that you think are most important about explainability as applied to these fields.\n\n(401.092,408.477) Sam Charrington : I think we get that machine learning and models in general can be opaque and make important high stakes decisions.\n\n(408.477,410.278) Sam Charrington : You need some degree of explainability.\n\n(411.08,415.236) Sam Charrington : What's unique about your take in applying applicability in your field?\n\n(415.757,416.197) Suin Lee : Right.\n\n(416.197,416.938) Suin 
Lee : OK, thank you.\n\n(416.938,418.279) Suin Lee : That's an excellent question.\n\n(418.279,427.346) Suin Lee : So the core part of explainable AI, at least this theoretical framework, it basically means feature attributions.\n\n(427.346,429.848) Suin Lee : So imagine you have a black box model.\n\n(429.848,439.615) Suin Lee : You have a set of input, a vector x. And then you have an output y. And then when you have a prediction, you want to find a way to attribute two features.\n\n(439.615,443.158) Suin Lee : You want to know which features contributed the most.\n\n(443.738,446.56) Suin Lee : And then, you know, there are mathematical frameworks.\n\n(446.56,450.884) Suin Lee : Our particular approach that's called the SHAP framework, it is based on game theory.\n\n(450.884,455.007) Suin Lee : So you want to find a way to understand which features are important.\n\n(455.007,459.17) Suin Lee : So that's the core of the technical side of explainable AI.\n\n(459.59,470.355) Suin Lee : And then on the other hand, if you just apply this explainable AI technique, you know, off the shelf explainable AI algorithm to biology, mostly it's useless.\n\n(470.355,472.276) Suin Lee : It's not very useful.\n\n(472.276,475.137) Suin Lee : It's not useful in terms of biological insights.\n\n(475.797,481.478) Suin Lee : What you really want to understand is how these features collaborate with each other.\n\n(481.478,484.439) Suin Lee : Imagine that you have a set of genes as a feature.\n\n(484.439,490.3) Suin Lee : So you have 20,000 genes, 20,000 expression levels are the input of the black box model.\n\n(490.3,496.041) Suin Lee : And then your prediction is which cancer drug is going to work the best for each patient.\n\n(496.041,503.943) Suin Lee : And then individual genes contributions and then gene importance scores by themselves, they are not going to be really useful.\n\n(503.943,505.363) Suin Lee : It will be only useful when\n\n(506.063,519.146) Suin Lee : some 
explainable AI model, explainable AI algorithm can tell you which pathway, how genes collaborate with each other and then how genetic factors play a role into that.\n\n(519.146,525.068) Suin Lee : And then also how that leads to the good prognosis of the cancer patient and also\n\n(525.768,529.77) Suin Lee : sensitivity, the good responsiveness to that drug.\n\n(529.77,532.632) Suin Lee : So there is something missing there.\n\n(532.632,548.38) Suin Lee : And then the uniqueness of my research is that we want to develop this explainable AI method for biology and then also clinical medicine such that it can make real meaningful contribution to these fields.\n\n(548.38,552.762) Suin Lee : Another example in the medicine side is that imagine that you have a deep\n\n(553.382,557.605) Suin Lee : model, deep neural network, that's going to take you a dermatology image.\n\n(557.605,562.208) Suin Lee : So say that you find something unusual in your skin and then you take a picture.\n\n(562.208,563.89) Suin Lee : That's your dermatological image.\n\n(563.89,568.393) Suin Lee : And then let's say that you want to know that has features of melanoma or not.\n\n(568.913,572.756) Suin Lee : So the prediction results itself is not going to be really useful.\n\n(572.756,588.428) Suin Lee : And then even the current explainable AI methods that's going to tell you which pixels, which parts of the images led to the prediction of melanoma or not, those are not going to be very useful to understand how this black box model really works.\n\n(589.255,600.879) Suin Lee : When you try, for example, that you modify the image and then generate a counterfactual, small changes to the image such that it changes the prediction.\n\n(600.879,604.66) Suin Lee : Let's say that that changes the prediction from melanoma to normal.\n\n(604.66,612.502) Suin Lee : Only then you can understand how this model works, what the reasoning process of this black box machine learning model is 
like.\n\n(612.942,618.283) Suin Lee : So those examples, I'm going to show many examples like that.\n\n(618.283,638.788) Suin Lee : Basically, the message there is going to be that the current state-of-the-art explainable AI that tells you theoretically supported importance values for the features are not going to be enough to make meaningful contributions to both biological science and then also clinical medicine as well.\n\n(639.467,659.61) Sam Charrington : It sounds like you're calling out a broad deficiency in the approach and kind of saying that as opposed to this feature level explainability, we need more system level or process level explainability that is more grounded in the use cases or the application than what we have available today.\n\n(660.383,661.284) Suin Lee : Exactly.\n\n(661.284,663.566) Suin Lee : The question is how to do that.\n\n(663.566,667.13) Suin Lee : For that, we need a new explainable AI method.\n\n(667.13,676.56) Suin Lee : In the first part of the talk, I'm going to show many examples of what explainable AI, almost as is, can do.\n\n(676.56,679.062) Suin Lee : Those are the papers that we published\n\n(679.743,685.785) Suin Lee : a couple of years ago, so that it addresses new scientific questions.\n\n(685.785,691.567) Suin Lee : Even explainable AI or feature attribution methods as is can be useful.\n\n(691.567,695.988) Suin Lee : So I'm going to show many examples like that in both biology and medicine.\n\n(695.988,706.172) Suin Lee : But in the second part of the talk, I'm going to show how explainable AI can even open new research directions specifically for biology and health care.\n\n(707.012,719.374) Suin Lee : So those examples I showed you, the systems level insights or this counterfactual image generation that can facilitate collaboration with humans, in this case, a clinical expert.\n\n(719.374,726.756) Suin Lee : So in the second part of the talk, I'm going to show how this explainably I can open new research 
directions.\n\n(726.756,731.597) Suin Lee : And then part of the second part will be I'm going to have a deep dive into our recent paper,\n\n(732.177,739.645) Suin Lee : to highlight how Explainable AI can help cancer medicine design, cancer therapy design.\n\n(739.645,747.474) Suin Lee : So basically, how to choose two chemotherapy drugs that's going to have a synergy for a particular patient.\n\n(747.474,751.178) Suin Lee : So that's the paper that was recently published in Nature Biomedical Engineering.\n\n(751.701,767.357) Sam Charrington : Before we dig into that paper, the most recent paper, can you talk us through in a little bit more detail some of the examples of the foundational machine learning research and how they contribute to the problems you're trying to solve?\n\n(768.274,768.914) Suin Lee : Okay.\n\n(768.914,773.84) Suin Lee : So some of the foundational AI methods we developed, I'm going to talk about.\n\n(773.84,776.883) Suin Lee : It can be summarized into three parts.\n\n(776.883,783.35) Suin Lee : So one is, you know, principled understanding of current explainable AI methods.\n\n(783.35,785.993) Suin Lee : So specifically feature attribution methods.\n\n(786.233,809.664) Suin Lee : So, for example, in one work, we showed that our feature attribution method, that's SHAP, it was published in NeurIPS in 2017, we showed that it unifies a large portion of the explainable AI literature and 25 methods following the exact same principle, and all explaining by removing features.\n\n(810.104,821.856) Suin Lee : So it turned out that 25 methods, feature attribution methods that are widely used in the field and machine learning applications, they all go by the same principle.\n\n(821.856,828.003) Suin Lee : You want to assess the importance of each feature by removing them or removing subsets of them.\n\n(828.363,832.564) Suin Lee : So that helps us understand what goes on.\n\n(832.564,839.567) Suin Lee : For example, when they fail, you want to 
understand what goes on and also improve and then develop new explainable AI methods.\n\n(839.567,844.168) Suin Lee : So I'm going to introduce a couple of unifying frameworks.\n\n(844.168,850.33) Suin Lee : So this is about how to understand the principled understanding of feature attribution methods.\n\n(851.17,861.837) Suin Lee : Also, on a computational side, we have explored many avenues to make this SHAP computation even feasible and faster.\n\n(861.837,867.041) Suin Lee : So, SHAP stands for Shapley Additive... I suddenly forgot.\n\n(867.041,867.921) Suin Lee : I can't forget this.\n\n(867.921,868.622) Sam Charrington : Explanations.\n\n(869.521,871.905) Suin Lee : Yes, Shapley additive explanations.\n\n(871.905,878.615) Sam Charrington : It's kind of weird because they chose the third letter of the word.\n\n(878.615,882.821) Suin Lee : That's the first author, my student, Scott's choice.\n\n(882.821,883.942) Suin Lee : I love the name, by the way.\n\n(884.503,892.331) Suin Lee : Computing SHAP values is theoretically very well supported, but then computation-wise, it's not really easy to compute.\n\n(892.331,894.413) Suin Lee : It involves exponential computation.\n\n(894.413,899.998) Suin Lee : So we need to develop approximation methods such that we can compute them in a feasible manner.\n\n(899.998,904.262) Suin Lee : So we developed many fast statistical estimation approaches\n\n(905.003,912.575) Suin Lee : And then you want to make sure that there is a convergence and all the desirable theoretical properties are already there.\n\n(912.575,918.083) Suin Lee : And then also, we developed approaches for specific model types.\n\n(918.083,920.326) Suin Lee : For example, ensemble tree models.\n\n(920.947,922.629) Suin Lee : And then also deep neural networks.\n\n(922.629,925.573) Suin Lee : So we have DeepSHAP and then TreeSHAP.\n\n(925.573,929.879) Suin Lee : And then more recently, we also have a vision transformer Shapley.\n\n(929.879,935.407) 
Suin Lee : So that's a way to compute the Shapley values for transformers, vision transformers.\n\n(936.235,938.637) Suin Lee : And then there is another one that's called FastSHAP.\n\n(938.637,948.302) Suin Lee : So the one way to make the SHAP computation more feasible is to focus on specific particular aspects of models.\n\n(948.302,954.066) Suin Lee : So for example, tree ensembles or deep neural network, they have some particular model types.\n\n(954.066,964.933) Suin Lee : There is a way to make this computation a little faster, basically make... So model-specific versions of SHAP implementation.\n\n(965.473,966.393) Suin Lee : Yes, yes.\n\n(966.393,966.673) Suin Lee : Yeah.\n\n(966.673,969.014) Suin Lee : So that's another line of research.\n\n(969.014,975.796) Suin Lee : And then more recently, we also started to understand the robustness of the SHAP value.\n\n(975.796,977.856) Suin Lee : So adversarial attack.\n\n(977.856,988.899) Suin Lee : A few years ago, in the field of machine learning, researchers have tried to understand how robust the machine learning model itself, the prediction results are toward\n\n(989.419,990.9) Suin Lee : adversarial attacks.\n\n(990.9,996.544) Suin Lee : And then now we are looking into this issue in terms of the model explanations.\n\n(996.544,999.826) Suin Lee : So how feature attributions are robust.\n\n(999.826,1007.151) Suin Lee : So in our most recent paper, we basically showed the removal-based approaches, including SHAP.\n\n(1007.151,1014.536) Suin Lee : Like earlier I said, many of the feature attribution methods turned out to have the same principle, which is explaining by removal.\n\n(1014.536,1016.197) Suin Lee : So those line of, you know,\n\n(1017.037,1021.439) Suin Lee : methods is more robust to this kind of adversarial attacks.\n\n(1021.439,1031.642) Suin Lee : So, and then, you know, multimodality, you know, those other kinds of issues, we are actively doing this research in terms of, you 
know, foundational AI algorithms also.\n\n(1032.362,1041.952) Sam Charrington : And SHAP, as you've mentioned, is broadly used, both the original algorithm as well as the related algorithms as you described.\n\n(1041.952,1047.417) Sam Charrington : But it's also one of the first explainability approaches to be popularized\n\n(1048.238,1051.501) Sam Charrington : Where does it sit in terms of relevance?\n\n(1051.501,1070.518) Sam Charrington : Are there different kind of wholly different approaches that have overtaken it in popularity or applicability based on kind of today's models and applications or is SHAP still kind of a core approach to the way explainability is looked at in practice?\n\n(1071.373,1073.514) Suin Lee : It's more on the latter side.\n\n(1073.514,1080.577) Suin Lee : We believe that this removal-based approach and in this cooperative game theory, we believe in that.\n\n(1080.577,1084.539) Suin Lee : And then also, it has the desirable properties, first of all.\n\n(1084.539,1094.403) Suin Lee : And then we, in our many experiments, we still see that removal-based approaches are more robust, as I said, to those adversarial attacks.\n\n(1094.403,1098.545) Suin Lee : And then also, in terms of various evaluation criteria, we still think that those\n\n(1099.045,1107.935) Suin Lee : methods are more robust than the other class, which we characterized as a propagation-based approach or gradient-based approaches.\n\n(1107.935,1111.259) Suin Lee : So we would prefer just removal-based approaches.\n\n(1111.259,1115.744) Suin Lee : But on the other hand, those approaches are very computationally very intensive.\n\n(1115.744,1115.864) Suin Lee : So\n\n(1116.244,1129.678) Suin Lee : Well, the way SHAP works is basically that you try all subsets of features and then you add a feature of interest and then see the model, check the model output and you average across all subsets of features.\n\n(1129.678,1132.781) Suin Lee : So as you can imagine, it's computationally 
very intensive.\n\n(1132.781,1136.184) Suin Lee : So when we now think about foundational models or\n\n(1136.905,1142.228) Suin Lee : large language models, these really large models with a ton, a lot of parameters.\n\n(1142.228,1148.531) Suin Lee : And then deep neural network and, you know, gradient computation is perhaps easier than trying all sorts of features, right?\n\n(1148.531,1154.394) Suin Lee : So practically, it's not, you know, as easy as the other class in terms of\n\n(1155.094,1160.497) Suin Lee : the computation, but we still want to make this computation more feasible.\n\n(1160.497,1175.385) Suin Lee : We want to develop various clever approaches to reduce the computation and then still maintain the desirable theoretical properties that this removal-based approach or SHAP in particular has.\n\n(1175.385,1175.625) Sam Charrington : Got it.\n\n(1176.299,1191.191) Sam Charrington : And so that is an example of kind of the foundational research that your lab does that contributes not only to your work on the biological science side or computational biology side, but broadly to the field.\n\n(1191.191,1199.177) Sam Charrington : And then your more recent paper is an example of the kind of contributions you're making on the medicine side.\n\n(1199.177,1201.399) Sam Charrington : Can you talk a little bit about that cancer paper?\n\n(1201.943,1202.964) Suin Lee : Yeah, sure.\n\n(1202.964,1204.566) Suin Lee : It is about AML.\n\n(1204.566,1208.03) Suin Lee : So we chose AML as an example application.\n\n(1208.03,1216.079) Suin Lee : So it's acute myeloid leukemia, it's aggressive blood cancer, and it's relatively common for older people.\n\n(1216.079,1223.587) Suin Lee : So to give you a bit of a background in general, the cutting edge in the treatment of cancers, such as AML,\n\n(1223.867,1226.968) Suin Lee : has increasingly become combination therapy.\n\n(1226.968,1238.312) Suin Lee : So the rationale here is that by choosing drugs that target 
complementary biological pathways, we can achieve greater anti-cancer efficacy.\n\n(1238.312,1242.734) Suin Lee : So basically, you choose two or three chemotherapy drugs,\n\n(1243.394,1251.941) Suin Lee : and then use them together so that when there is a synergy, usually there is a very good anti-cancer efficacy.\n\n(1251.941,1256.906) Suin Lee : But the issue is that choosing optimal combinations of drugs is a really hard problem.\n\n(1256.906,1265.954) Suin Lee : So there are about hundreds of individual FDA approved anti-cancer drugs, which means that there will be tens of thousands of possible combinations.\n\n(1266.594,1279.4) Suin Lee : That's when you consider pairwise combinations, and there could be even more if you consider non-FDA approved experimental drugs in development, or consider a combination of more than two drugs.\n\n(1279.4,1291.426) Suin Lee : And then the different patients, even patients who have the same type of cancer may respond differently to the exact same drugs because of this individual, the particular genomic characteristics.\n\n(1292.166,1295.449) Suin Lee : We then formulate this problem as a machine learning problem.\n\n(1295.449,1309.6) Suin Lee : So you take this AML patient's gene expression levels, so you get the blood of the patient and then purify the cells so you have only cancer cells, and then say you measure expression levels of 20,000 genes.\n\n(1309.6,1312.662) Suin Lee : So mathematically, this is 20,000 dimensional vector.\n\n(1313.563,1323.967) Suin Lee : And then also, let's say you consider a pair of drugs, drugs A and B, and then you use various information about this drug.\n\n(1323.967,1328.028) Suin Lee : For example, structure of these drugs or their biological targets.\n\n(1328.028,1331.889) Suin Lee : There are many data sets that can tell you that information.\n\n(1331.889,1339.072) Suin Lee : And then you take those as a machine learning input, and then you want to predict the synergy between the drugs A and 
B.\n\n(1339.852,1347.817) Suin Lee : So in this kind of a problem, and as I said, there will be tens of thousands of pairwise combinations of those drugs.\n\n(1347.817,1355.803) Suin Lee : And so in this kind of situation, not only the prediction, but also explanations will be extremely important.\n\n(1355.803,1362.908) Suin Lee : So say you want to be able to say that drug A and B is going to work well, are going to have a synergy together because\n\n(1363.568,1371.355) Suin Lee : this patient X has gene expression levels of A, B, and C high.\n\n(1371.355,1377.7) Suin Lee : Or you say expression levels of a certain biological pathway, those genes are highly expressed.\n\n(1377.7,1380.442) Suin Lee : So you need a set of explanations to do that.\n\n(1380.442,1383.185) Suin Lee : And then more importantly, if you think about\n\n(1383.765,1384.807) Suin Lee : all pairs of drugs.\n\n(1384.807,1394.458) Suin Lee : If there is an underlying principle in terms of when two drugs are likely to have a synergy, then it's going to be even more useful.\n\n(1394.458,1398.804) Suin Lee : So what we did in this paper was that we got the explanations.\n\n(1398.804,1400.806) Suin Lee : We computed the SHAP values for\n\n(1401.647,1421.131) Suin Lee : many combinations of drugs from the machine learning model, and then we analyzed that, and then we identified the unifying principle in terms of when, in what case, any pair of drugs A and B have a synergy, and then we identified the pathway.\n\n(1421.131,1424.732) Suin Lee : It is called stemness pathway.\n\n(1424.732,1431.313) Suin Lee : It is also called, trying to find in that part of the slide, this hematopoietic stem cell-like signature.\n\n(1432.053,1436.134) Suin Lee : Cancers are sometimes more differentiated or less differentiated.\n\n(1436.134,1439.976) Suin Lee : If you had a family member who had cancer, you probably understand this term.\n\n(1439.976,1446.498) Suin Lee : Usually, less differentiated cancers have worse 
prognosis than more differentiated cancers.\n\n(1447.338,1467.014) Suin Lee : we identified this pathway that's really relevant to this stem-ness mechanism and then found the underlying principle, which basically says that it's good to have two drugs, one drug targeting less differentiated, the other one targeting more differentiated cancer, likely work the best.\n\n(1467.014,1468.956) Suin Lee : So in this project, not only\n\n(1469.616,1480.822) Suin Lee : our algorithm can tell oncologists or biological scientists which genes are important, which feature attributions, which features are important for drug synergy.\n\n(1480.822,1495.829) Suin Lee : But also, by analyzing many model explanations from many patients, we can have an understanding of these underlying principles in terms of what makes a successful drug combination therapy.\n\n(1496.289,1498.27) Suin Lee : Cancer therapy design, I would say.\n\n(1498.27,1506.873) Suin Lee : This is an example where we can see how explainable AI can be effective in cancer therapy design.\n\n(1506.873,1519.058) Sam Charrington : Is AML unique in having a well-understood pathway or is that a bottleneck for the application of this technique to the broader set of cancers?\n\n(1519.58,1523.662) Suin Lee : Oh, so AML is just one example.\n\n(1523.662,1527.564) Suin Lee : I mean, this kind of principle can be applied to many data sets.\n\n(1527.564,1533.387) Suin Lee : You know, computational biologists often need to work on the problem where the data are available.\n\n(1533.387,1545.133) Suin Lee : So, you know, as you can imagine, blood cancers, those tissues are relatively easy to, it's relatively easier to obtain, you know, blood tissues compared to other kinds of tissues.\n\n(1546.013,1553.155) Suin Lee : There are many available data sets and then also the measurement of the drug synergy from many samples.\n\n(1553.155,1558.096) Suin Lee : So we happen to choose this cancer type because of the data 
availability.\n\n(1558.096,1563.618) Suin Lee : But this approach can be broadly applicable to other types of cancer.\n\n(1564.027,1566.389) Suin Lee : So this is one of the... Yeah, go ahead.\n\n(1566.389,1581.904) Sam Charrington : I'm maybe trying to get at a broader question, which is the explainability method is kind of explaining over a set of known features and pathways and processes and things like that.\n\n(1581.904,1586.208) Sam Charrington : And my sense is that for many of the\n\n(1586.949,1593.234) Sam Charrington : potential applications, the pathways are still a subject of research themselves.\n\n(1593.234,1598.599) Sam Charrington : Meaning, you know, maybe there's some aspect of pathway that's known, but there are others.\n\n(1598.599,1603.143) Sam Charrington : There are, you know, or some diseases for which there aren't pathways.\n\n(1603.143,1604.264) Sam Charrington : And I guess I'm wondering\n\n(1604.884,1611.63) Sam Charrington : the way you think about applying techniques like this in a... A, is that actually the case or am I all wrong there?\n\n(1611.63,1619.017) Sam Charrington : But otherwise, how will you apply techniques like this in rapidly evolving fields that are very complex, meaning... 
That's an excellent question.\n\n(1619.017,1628.746) Sam Charrington : Maybe you're giving an explanation and the explanation is based on the pathway as you understand it, but there's so many other things going on in the system that you really have not accounted for.\n\n(1629.184,1630.124) Suin Lee : Yeah, exactly.\n\n(1630.124,1630.885) Suin Lee : Right.\n\n(1630.885,1634.166) Suin Lee : So first of all, a pathway is not unique to a disease.\n\n(1634.166,1640.409) Suin Lee : So when we say, you know, pathway databases, it basically tells you the members of the genes in each pathway.\n\n(1640.409,1640.869) Suin Lee : That's it.\n\n(1640.869,1643.67) Suin Lee : I mean, it's like, you know, many, many sets of genes.\n\n(1643.67,1646.292) Suin Lee : We also sometimes call it gene sets.\n\n(1646.292,1647.732) Suin Lee : It doesn't depend on the disease.\n\n(1647.732,1656.016) Suin Lee : And then the way we view it is that it's not like all genes need to be activated for the pathway to be activated.\n\n(1656.596,1658.818) Suin Lee : It would be only a subset of genes.\n\n(1658.818,1664.501) Suin Lee : We would expect only a subset of genes to be highly expressed to say, you know, that pathway is activated.\n\n(1664.501,1674.768) Suin Lee : And then it's really extremely important for a computational biologist when we develop, you know, a method like this to get biological insights from large scale data sets.\n\n(1674.768,1682.953) Suin Lee : When we develop such a method, we need to make sure that it does not fully depend on any sort of prior knowledge.\n\n(1683.713,1686.095) Suin Lee : And then the algorithm needs to be flexible.\n\n(1686.095,1688.277) Suin Lee : So that's of key importance.\n\n(1688.277,1694.563) Suin Lee : So in this particular example, we didn't use a pathway actually from the beginning.\n\n(1694.563,1698.967) Suin Lee : When the model training happens, we used genes as individual features.\n\n(1698.967,1706.935) Suin Lee : And then we analyzed the 
feature attributions and then did the statistical test to see which pathways seem to be more activated.\n\n(1707.395,1709.057) Suin Lee : You made a really good point.\n\n(1709.057,1713.683) Suin Lee : In all computational biology methods, it's really important not to make it too rigid.\n\n(1713.683,1717.287) Suin Lee : For the existing knowledge, it needs to be flexible.\n\n(1717.287,1721.372) Sam Charrington : And so how do you evaluate your results in this particular paper?\n\n(1722.069,1732.257) Suin Lee : Oh, so say that you have a feature attribution for all genes, for a certain patient, and then for a certain combination of drugs.\n\n(1732.257,1736.961) Suin Lee : And then say you will have a lot of feature attributions then, right?\n\n(1736.961,1741.785) Suin Lee : Combining all patients and all pairs of drugs you considered.\n\n(1741.785,1744.727) Suin Lee : And then we perform the statistical test.\n\n(1744.727,1747.51) Suin Lee : So for example, it's a simple, you know, Fisher's exact\n\n(1747.97,1761.441) Suin Lee : test kind of statistical test where you see whether there is significantly, you know, large value of attribution values for certain set of genes defined by certain pathway.\n\n(1761.441,1768.787) Suin Lee : And then you do, you know, multiple hypothesis testing and then see whether that, you know, significance is indeed relevant.\n\n(1768.787,1775.173) Suin Lee : So the pathway-based analysis was done in a post-hoc manner after model training and then\n\n(1775.713,1778.474) Suin Lee : obtaining all, you know, model explanations.\n\n(1778.474,1790.477) Suin Lee : So another challenge we ran into in that project was one that was really not addressed properly by this foundational AI field: feature correlation.\n\n(1790.477,1795.779) Suin Lee : So in many biomedical data sets, you will see lots of features that are correlated with each other.\n\n(1795.779,1797.739) Suin Lee : Many genes are correlated.\n\n(1797.739,1800.16) Suin Lee : It's a 
really modular gene expression, you know,\n\n(1800.68,1807.266) Suin Lee : levels are very modular, so you easily see a subset of genes that are very highly correlated with each other.\n\n(1807.266,1818.036) Suin Lee : So in that kind of case, SHAP values are not going to be extremely accurate because imagine that there are two genes that are perfectly correlated with each other.\n\n(1818.036,1818.356) Suin Lee : Then\n\n(1818.977,1823.36) Suin Lee : There will be infinite ways to attribute to these two genes.\n\n(1823.36,1831.444) Suin Lee : So in that paper, in that Nature Biomedical Engineering paper, we addressed it by considering ensemble model.\n\n(1831.444,1835.407) Suin Lee : So we ran many ensemble of model explanations.\n\n(1835.407,1837.308) Suin Lee : So we ran the model.\n\n(1837.308,1839.589) Suin Lee : In this case, it was not a deep neural network.\n\n(1839.589,1841.03) Suin Lee : It was tree ensembles.\n\n(1841.03,1842.891) Suin Lee : And then we averaged.\n\n(1842.891,1846.493) Suin Lee : We averaged the feature attributions that are from many models.\n\n(1846.773,1853.262) Suin Lee : And then we showed that it gives you more robust feature attributions when the features are correlated with each other.\n\n(1853.262,1855.545) Sam Charrington : Awesome.\n\n(1855.545,1860.472) Sam Charrington : So talk a little bit about where you see the future of your research going.\n\n(1861.683,1864.565) Suin Lee : That's a really important question.\n\n(1864.565,1878.114) Suin Lee : In all three ways, first of all, in the foundational AI method, as I briefly mentioned, this robustness issues and also multi-modal data.\n\n(1878.114,1883.337) Suin Lee : Let's say that you have a set of features and each feature belongs to different category.\n\n(1883.337,1884.978) Suin Lee : They are in different modality and then\n\n(1885.618,1891.021) Suin Lee : how to attribute to these features that are in different modalities.\n\n(1891.021,1893.603) Suin Lee : So that's 
an open problem.\n\n(1893.603,1900.287) Suin Lee : So it was actually motivated by biomedical problem, but it's widely applicable to other applications.\n\n(1902.194,1908.516) Suin Lee : And then also these emerging models of LLMs or other foundational models.\n\n(1908.516,1916.757) Suin Lee : And in this kind of really large models, how to actually compute the feature attributions properly.\n\n(1916.757,1926.98) Suin Lee : And then also, we are really interested in sample-based importance, where you, say, take the transpose of your feature matrix.\n\n(1926.98,1931.861) Suin Lee : So I've been talking about these feature attributions a lot, but you can also apply Shapley values\n\n(1932.501,1937.404) Suin Lee : to gain insights into which samples are important for your model training.\n\n(1937.404,1948.73) Suin Lee : So that can help us understand how foundational models in various fields or large language models rely on which training samples.\n\n(1948.73,1959.015) Suin Lee : So that can be really important for model auditing perspective, first of all, and then to gain insight in terms of which samples were important.\n\n(1959.495,1963.117) Suin Lee : for these large models to behave a certain way, right?\n\n(1963.117,1968.659) Suin Lee : So sample-based explanation is also one of the things that we are mainly working on.\n\n(1968.659,1972.2) Suin Lee : In the biomedical side, there are many projects.\n\n(1972.2,1978.403) Suin Lee : So, you know, single cell data science is one of the big themes in my lab now.\n\n(1978.943,1986.667) Suin Lee : So you obtain gene expression levels or other kinds of molecular level information at a single cell level.\n\n(1986.667,1989.788) Suin Lee : So the advantage is that you will have a ton of samples.\n\n(1989.788,1999.672) Suin Lee : So one experiment is going to give you many samples, which is really appropriate for large scale models these days based on deep neural networks.\n\n(1999.672,2005.675) Suin Lee : So 
for example, the researchers started looking into foundational model for a single cell data set.\n\n(2005.955,2015.926) Suin Lee : So in this kind of, you know, data sets that have still, you know, high dimensional and then researchers are now obtaining multi-omic data.\n\n(2015.926,2021.252) Suin Lee : So not only gene expressions, you can also obtain other kinds of, you know, genomic information.\n\n(2021.252,2021.892) Suin Lee : So that's going to\n\n(2022.533,2026.655) Suin Lee : increase the dimensionality also, and then larger sample sizes.\n\n(2026.655,2032.979) Suin Lee : How to learn the biologically interpretable representation space?\n\n(2032.979,2037.381) Suin Lee : That's one of the big questions in the research in my lab.\n\n(2039.142,2045.229) Suin Lee : All feature attribution methods at the end in the downstream prediction task, you attribute to features.\n\n(2045.229,2050.074) Suin Lee : And then the assumption is that each feature is an interpretable unit.\n\n(2050.074,2055.059) Suin Lee : In biology, as I mentioned earlier, it's not the case in biology, right?\n\n(2055.059,2061.966) Suin Lee : So the functional unit in biology is much more interpretable than any individual gene.\n\n(2062.226,2073.893) Suin Lee : So how to learn, more broadly, a feature representation space that's biologically more interpretable.\n\n(2073.893,2079.296) Suin Lee : And then also how to make foundational models learned based on single cell data sets.\n\n(2079.296,2088.101) Suin Lee : So researchers started publishing those papers that are about applying this foundational model approach to single cell data sets.\n\n(2088.101,2091.243) Suin Lee : And then how to make it biologically interpretable\n\n(2091.743,2104.866) Suin Lee : so that you can gain scientific insights from the model results and then also audit those models to make sure that users can actually safely use them for scientific discoveries.\n\n(2104.866,2110.188) 
Suin Lee : So, attribution methods for this kind of modern machine learning models\n\n(2111.168,2113.47) Suin Lee : so that you can gain biological insights.\n\n(2113.47,2115.151) Suin Lee : So that's another theme.\n\n(2115.151,2119.113) Suin Lee : On the clinical side, we are really interested in this model auditing.\n\n(2119.113,2125.677) Suin Lee : In our most recent paper that's in review, we are focusing on dermatology example.\n\n(2125.677,2126.958) Suin Lee : So dermatological image\n\n(2127.778,2135.867) Suin Lee : is inputted into deep neural network, and then you want to know whether the prediction result is melanoma or not.\n\n(2135.867,2147.059) Suin Lee : There are many algorithms out there, some published in very high-profile medical journals, and also some available through the cell phone apps.\n\n(2147.059,2148.441) Suin Lee : So there are many algorithms.\n\n(2148.741,2160.633) Suin Lee : And then we recently tested them, we used separate held-out test samples, and then got the result that's a little concerning in terms of usage.\n\n(2160.633,2165.478) Suin Lee : And then our analysis showed that explainable AI was extremely helpful.\n\n(2165.978,2172.644) Suin Lee : So for example, in the skin image, which part of the image led to that kind of a prediction?\n\n(2172.644,2177.948) Suin Lee : Or as I said, using this counterfactual image generation.\n\n(2177.948,2184.173) Suin Lee : So you make small changes to the input dermatology image such that it changes.\n\n(2184.173,2189.718) Suin Lee : It crosses the decision boundary of the classifier and then see what features were changed.\n\n(2189.718,2194.682) Suin Lee : So that way you can see the reasoning process of this classifier.\n\n(2194.922,2196.583) Suin Lee : the clinical AI model.\n\n(2196.583,2206.411) Suin Lee : So for that, there needs to be some technological development there because the feature attributions themselves are not going to be enough.\n\n(2206.411,2212.996) Suin Lee : It 
shows only very small part of the inner workings of the machine learning model.\n\n(2212.996,2222.804) Suin Lee : So developing methods for auditing clinical AI models, that's the research we are currently performing in the clinical area.\n\n(2223.264,2226.75) Suin Lee : So all three areas, we are doing exciting research.\n\n(2226.75,2229.696) Sam Charrington : Well, Suin, it sounds like you've got a lot of work ahead of you.\n\n(2229.696,2230.096) Suin Lee : Yes.\n\n(2230.096,2230.557) Suin Lee : Yeah.\n\n(2230.557,2231.238) Suin Lee : Very busy.\n\n(2231.238,2232.02) Sam Charrington : I bet.\n\n(2232.02,2233.863) Sam Charrington : Thanks so much for joining us.\n\n(2233.863,2234.504) Suin Lee : Thank you.\n\n(2234.504,2235.726) Suin Lee : Thank you for inviting me.\n\n(2235.726,2235.987) Sam Charrington : Thank you.\n\n(2239.223,2242.184) Unknown : All right, everyone, that's our show for today.\n\n(2242.184,2248.167) Unknown : To learn more about today's guest or the topics mentioned in this interview, visit TwiMLAI.com.\n\n(2248.167,2255.951) Unknown : Of course, if you like what you hear on the podcast, please subscribe, rate, and review the show on your favorite podcatcher.\n\n(2255.951,2258.652) Unknown : Thanks so much for listening and catch you next time."}