{"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": [], "date_published": "n/a", "text": "Interview with 7oalk, on 3/20/22\n\n0:00:03.2 Vael: Alright, my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n0:00:08.5 Interviewee: Yeah, I study biologically-inspired artificial intelligence of building models of biological intelligence, mostly with the visual system, but also with cognitive functions, and using those models to understand differences between humans and machines. And upon finding those differences, the hope is that we can build more human-like artificial intelligence and in the process develop models that can better explain the brain.\n\n0:00:46.1 Vael: Interesting, yep. I was in that space for a while. The dual-- AI--\n\n0:00:49.1 Interviewee: Oh, cool. I saw you worked with Tom Griffiths.\n\n0:00:54.0 Vael: Yeah, that's right, yep.\n\n0:00:55.0 Interviewee: Yeah, very much so. Cool.\n\n0:00:56.5 Vael: Alright, so my next question is, what are you most excited about in AI, and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n0:01:06.9 Interviewee: So I'm most excited about AI, not for most of the applications that are getting a lot of praise and press these days, but more so for convergences to biomedical science. I think that the main driver of progress in AI over the past decade has been representation learning from large-scale data. And this is still an untapped area for biomedical science, so we really don't know what elements of biology you can learn predictive mappings for. So, for instance, given an image of a cell's morphology, can you predict its genome? Can you predict its transcriptome? Etcetera. So I think that what I'm most excited about is the potential for AI to completely transform the game of drug discovery, of our ability to identify the causes and identify targets for treating disease.\n\n0:02:18.4 Vael: Nice.\n\n0:02:18.7 Interviewee: What I'm most afraid of is in the game of AI that is most popular right now. The NeurIPS CDPR game, which... lots of people in the field have pointed out the issues with biases. And also... it's late here, so I'm not thinking of a good term, but this Clever Hans, Mechanical Turk-ish nature of AI, where the ability to solve a problem that seems hard to us can give this sheen of intelligence to something that's explained a really trivial solution. And sometimes those trivial solutions, which are available to these deep learning networks which exploit correlations, those can be really painful for people. So in different applications of using AI models for, let's say screening job applicants, there's all these ethical issues about associating features not relevant to a job, but rather just regularities in the data with a predictive outcome. So that's a huge issue. I have no expertise in that. It's definitely something I'm worried about. Something that's related to my work that I'm super worried about... I mentioned I do this biological to artificial convergence, this two-way conversation between biological and artificial intelligence. The best way to get money, funding in academia, for that kind of research is going through defense departments. So one of the ARPA programs-- IARPA, DARPA, Office of Naval Research. So I know that the algorithms built for object tracking, for instance, could be extremely dangerous. 
And so I build them to solve biological problems, and then I scale them to large-scale data sets, large data sets to get into conferences. But, you know, easily, somebody could just take one of those algorithms and shove them into a drone. And you do pretty bad stuff pretty easily.\n\n0:04:41.2 Vael: Yeah.\n\n0:04:43.2 Interviewee: Yeah. I guess one more point related to the biomedical research. There's this fascinating paper in Nature Machine Intelligence. One of the trends right now for using AI in biology is to predict protein folding, or to predict target activity for a molecule or a small molecule. So when you do biomedical research, you want to screen small molecules for their ability to give you a phenotype in the cell. And so you can just use a multilayer perceptron or graph neural network, transformer, whatever, to learn that mapping, and do a pretty good job it turns out; it's shocking to me. But what you can do is you can either predict-- let's find the molecule that's going to have the best target site activity, that's the best therapeutic candidate. But you can also flip it, you can optimize it in the opposite direction. (Vael: \"Yeah--\") You saw that.\n\n0:05:48.2 Vael: Yeah. I saw that [paper].\n\n0:05:49.0 Interviewee: It's so obvious, but easily, easily, easily you could use any of these biomedical applications for the nefarious things.\n\n0:06:00.5 Vael: Yeah, yeah, that's right. That was a very recent paper right?\n\n0:06:02.5 Interviewee: Yeah, it just came out like last week.\n\n0:06:05.1 Vael: Got it. Yeah. Alright, we've got excited and risks. My next question is talking about future AI. So putting on a science fiction forecasting hat. So we're 50 plus years into the future, so at least 50 years in the future, what does that future look like?\n\n0:06:22.6 Interviewee: Okay. Optimistically, we're all still here. So the reason why I got into this field is because I think it's the biggest force multiplier there is. So all of the greatest problems that-- the existential crises we face today, there's a potential for machine learning to be part of the solution to those problems. So focusing just for a moment on health, biomedical research, which is where I'm mostly interested right now, 50 years, I think that we're going to be to a point where we know we have drugs that are completely uninterpretable to humans. We have solved disease, we have cured disease, but we don't know why. We've also created a potential issue, byproducts from that, adversarial effects, for lack of a better phrase, by curing cancer through some black box method. That's going to yield other problems, kind of like putting your finger in the dam. So I think that in 50 years from now, we will have machine learning-based solutions to some of the greatest health problems: cancer, aging, neurodegenerative disease that face us, but we won't understand those solutions. And so that's potentially the next frontier, which is to develop interpretable drugs. So I imagine that science will change. The paradigm of science will shift where there are no longer bench scientists, but instead robotics is how science is done. So scientists ask high-level questions, and then biology becomes a search problem where you just, akin to a tricorder from Star Trek where it just solves, and you move onto the next question. You can imagine that framework being applied to other problems. Climate change, hopefully. I'm not optimistic, but hopefully, you could have solutions to climate change. But there will be a black box. 
And when you have a black box, there's always issues with unintended effects. So I guess 50 years from now, we'll have, rather than thinking about AGI and all that stuff, we'll have solutions to some of our grandest challenges today, but those solutions may bear costs that are unexpected.\n\n0:09:40.8 Vael: Yeah, that makes sense. I have few follow-ups. So you mentioned existential risks facing humanity. What are those?\n\n0:09:51.5 Interviewee: So there's climate change. There's nuclear war, which is the byproduct of power. There is poverty and hunger, famine. With the risk of sounding like some VC, I think that there are chances to disrupt solutions to each of those problems using machine learning. For instance, for energy, already folks at DeepMind are using reinforcement learning to drive the development of new forms of fusion to search for better forms of fusion. Who knows if this is scalable, but I think with... it's not going to be better algorithms, it's going to be better integration of the toolset for studying fusion and these machine learning methods to make it more rapid to test and explore those questions that are related to that field, which I don't know. I think that will lead to positive solutions. That would potentially yield lower-emission or no-emission, high-energy solutions that can power homes without any emissions. Similarly, with cars... self-driving, as much as I think that Elon Musk is barking up the wrong tree-- going LIDAR-free, [inaudible] approach to self-driving, that will be a solved problem within 10 years. And that's going to help a lot the emissions issue. So energy and global warming are kind of tied together. And I think that there's lots of different applications of machine learning and AI that can help address those problems. Likewise for famine, for world hunger. Genetically-engineered organisms are a good approach to that problem. What I mentioned about for solutions to biomedical problems, disease, etc.-- treating those as search problems and bringing to bear the power of artificial intelligence. I think you could probably adopt a similar approach for engineering organisms that are tolerant to weather, especially the changing climate and other diseases and bugs, etcetera. I think there's going to be some overlap between the methods of work for each of these areas. I guess I didn't mention COVID and pandemics. But again, that falls into the same framework of treating these existential problems as search problems, no longer doing brute force, then science. And instead resorting to black box solutions that are discovered by large scale neural network algorithms. Did I miss any? Oh, war. Yeah, I don't know. Okay, so here's an answer. Whoever has these technologies, who's ever able to scale these technologies most rapidly, that's going to be almost like an arms race. So whoever has these technologies will be able to generate so much more value than countries that don't have these technologies that economies will be driven by the ability to wield appropriate artificial intelligence tools for searching for solutions to these existential issues.\n\n0:14:36.1 Vael: Got it, thanks. Alright, so my next question is... so this is kind of a spiel. Some people talk about the promise of AI, by which they mean many things, but that the thing I'm referring to is something like a very general capable system. So the cognitive capacities that can be used to replace all current day human jobs. 
Whether or not we choose to replace human jobs is a different question, but having the kind of capabilities to do that. So like, imagine... I mostly think about this in the frame of 2012, we have the deep learning revolution with AlexNet, and then 10 years later, here we are, and we have systems like GPT-3, which have kind of weirdly emergent capabilities, that can be used in language translation and some text generation and some coding and some math. And one might expect that if we can continue pouring all of the human effort that we've been pouring into this, with all the young people and the nations competing and corporations competing and algorithmic improvements at the rate we've seen, and like hardware improvements, maybe people get optical or quantum, that we might get... scale to very general systems, or we might hit some sort of ceiling and need to do a paradigm shift. But my question is regardless of how we'd get there, do you think we'll ever get very general systems like a CEO AI or a scientist AI and if so, when?\n\n0:15:55.3 Interviewee: I don't think it's worth it. I see the AI as... look, I grew up reading Asimov books, so I love the idea of \"I, Robot\", not with Will Smith, but solving all these galaxy conquest issues and questions. But I don't think it's worth it because... I see this as a conversation between human experts, the main experts, and artificial intelligence models that are just going to force multiply the ability of human experts to explore problems. And maybe in a 100 years, when we've explored all of these problems, then we'll be so bored that we say, \"Can we engineer something that contains all of these different domain expert AIs in one, in a way that can-- those AIs can respond to a question and affect behavior in the world and have their own free will.\" It's more of a philosophical pursuit that I don't even, I don't... okay, so from an economic standpoint, I don't know who's going to pay for that. Even the military would be like, \"All I want is a drone that's going to kill everybody.\"\n\n0:17:30.9 Vael: Yeah, Yeah, cool. So I have some counterpoints to that. So I'm not actually imagining necessarily that we have AI that is conscious or has free will or something. I think I'm just imagining an AI that is, has a, very capable of doing tasks. If you have a scientist AI then...We were talking about automating science, but in this case, it's not even relying on an expert anymore. Maybe it defers to an expert. It's like, do you want me... it looks to the expert's goals, like, \"Do you want me to solve cancer?\" Or like a CEO, it has shareholders. And I think that there is actually economic incentive to do this. DeepMind and OpenAI have explicitly said... I think that they're trying to make artificial general intelligence, or like things that can replace jobs, and more and more replace jobs. And I think we're going in that direction with Codex and stuff, which is currently an assistant tool, but I wouldn't be surprised if in the end, we didn't do much coding ourselves.\n\n0:18:24.9 Interviewee: Okay, so yeah, I agree with what you said. Right, it's going to change the job... So strike from the record what I just said. It's going to change the job requirements. That would be a good feature in my opinion, where we no longer have to code because instead we describe our program. And again, it's a search problem to find the right piece of code to execute the concept that we have. 
We describe the medical problem that we're interested in, and we search within whatever imaging modality or probe that we're using to study that system. We can search for that solution. Where I'm not so sure is, we have this huge service industry, and you mentioned shareholders and I count that as one of them. I think when it comes to service, there's this quality, there's this subjectiveness where for instance, in medicine, talking about radiology or pathology or even family medicine. I think there will be tools to give you diagnoses, but they're always going to come with a human second opinion. So, from that standpoint, it's going to be a bit of a science fair, kind of a sideshow to the expert. I don't think that human experts in service, medical or business, etcetera, are going to be pushed aside by AI, because ultimately, I think if there's a different answer given by the artificial pathologist versus the human pathologist, the human pathologist will double check and say, \"You are wrong, AI\" or \"You are right, AI\". But the patient will never receive the artificial intelligence answer. So I think in some fields, there's going to be a complete paradigm shift in how research is done, or how work is done, research for one. Driving, supply chain, transportation, that's another example. Which will be great. I think that would be great, it will push down costs, push down fatalities, morbidity, and probably be a net good. But yeah, for a shareholder, I cannot imagine a scenario in which a CEO who has to figure out the direction of the company, has to respond to shareholders would ever say, like, \"This time around, I'm just going to give an AI a GPT-6 report for the markets that we should enter, the new verticals we should work on, etcetera.\" Maybe it would just be a tool to get feedback.\n\n0:21:54.2 Vael: Yeah. So it seems pretty true to me that we started... our current level of AI, which is machine learning, that they started as tools and we use them as tools. I do-- it sort of feels to me that as we progr--... we've been working on AI for under 100 years, and the deep learning revolution is quite new. And it seems to me like there are economic incentives towards making things more convenient, towards more automation, towards freeing up human labor to do whatever it is that people could do. So I wouldn't be surprised if eventually we reached a point where we had systems that were capable enough to be CEO. We could be like, \"Alright AI, I want you to spin me up a company,\" and maybe you would be able to do that in some very different future, I would imagine.\n\n0:22:38.2 Interviewee: Ah, so see-- okay, so this is a question. Creativity versus capturing the regularities or the regulations of a field. So I can imagine an AI VC, easy. Or in finance, yeah. Finance, I can imagine that being overtaken by AI. I'm thinking CEO, spinning up your own company, identifying weakness in a field, or maybe some potential tool that can monopolize a field. That requires creativity. And I know there's this debate about creativity versus just interpolation in high-dimensional space of what GPT-3 is doing, and yes, I've seen some cool-looking art, but I don't know. That seems like a... I think if you talk to me about this, if we talked about this for another week, maybe you would convince me. But I'm not quite there yet that I can imagine deep learning being able to do that. Although, now that I say that, maybe I'm just thinking about it wrong. 
Where most businesses just amount to, \"What has worked somewhere else? Can that frame of reference work in this field? Can you disrupt this field with what's worked there?\" So maybe, maybe. Okay, I agree with what you're saying. I'm evolving here, on the spot. I can see it, I can see that. I guess the one field that completely, that I'm going to push back on is medicine, where I think that part of service needs a human to tell you what is up. If you don't have a human, then I can't imagine a machine diagnosis would ever fly. Maybe in 200 years, that would fly. There would have to be a lot of development of the goal for that to happen.\n\n0:24:45.3 Vael: Yeah, I think there's a number of things going on here. So I don't know that we'll be able to achieve... I do think a CEO requires very high cognitive capabilities, you need to be modeling other people, you need to be talking to other people, you need to be modeling other people modeling you, you need to have multi-step planning, you need to be doing a whole bunch of very advanced...\n\n0:25:03.6 Interviewee: Yeah, convincing people, interpersonal relationships. Yeah.\n\n0:25:06.2 Vael: Yeah, there's a whole bunch of stuff there, and I don't think that we're... we're not near there with today's systems for sure. And I don't know that we can get there with the current deep learning paradigm, maybe we'll need something different. I think that's a possibility. I think what I'm doing is zooming way out where I'm like, \"Okay, evolution evolved humans, we have intelligence.\" I think that humans are trying pretty hard to make this thing happen, make up whatever intelligence is happen. And like, if we just continue working on this thing, absent some sort of huge disaster that befalls humanity, I kind of expect that we'll get there eventually.\n\n0:25:39.6 Interviewee: Yeah.\n\n0:25:40.2 Vael: And it sounds like, you said that like, maybe eventually, in like 200 years or something?\n\n0:25:45.1 Interviewee: Definitely, definitely. So, I agree with what you said, science is incremental. And I think we have not found any ceilings in the field of modeling intelligence. We've continued to move. Even though we're kind of arguably asymptoting with the current paradigm of deep learning, we have a proof of concept that it's possible to do better, which is our own brains. And so, maybe we just need to have neuromorphic hardware or whatever, who knows, who knows? So I think you're right, and if you can operationalize, just like you did very quickly, what it takes to be a CEO. If you can build models of each of those problems and if you can do what people do, which is benchmark those models and just start to make them better and better, yeah, I can imagine having some... It sounds crazy to even visualize that, but yeah, having an automated CEO, I think so. I think the part that's probably the most tractable is finding the emerging markets or the, like I said, the technologies that have proven to be flexible, flexibly applied, or the tricks, let's say, that have worked in multiple fields, and then identifying the field where it has just not been applied yet. Kind of that first mover type advantage. I think that's where AI could... I can definitely imagine an AI identifying those opportunities. Almost like drug discovery, right? It's just a hit. And you'd have a lot of bad hits, and like, \"This is bullshit. Nobody cares about knee pads on baby pants,\" which is like the WeWork founder before WeWork, but you also have some good hits. 
So the interpersonal stuff, yeah, that's like 200 years off.\n\n0:27:50.6 Interviewee: Although you have chat bots. So I'm working on my own startups, a lot of this is trying to convince people to spend their time, that this is worthwhile, that they should leave their current stable positions to do this, that they should work... They should do the work for you. They should drop what they're doing and do what you need them to do right now, and that you're going to help them eventually. And a lot of that is empathy and connection, and there's no evidence that we have yet that we can build a model that can do that stuff. So yeah, that's a really interesting idea, though. And I do think if you go through and operationalize each of those problems, you could make progress. So, 200 years, yes, I agree with you.\n\n0:28:56.7 Vael: Alright, so maybe we'll get something like this in 200 years.\n\n0:29:00.1 Interviewee: Yeah.\n\n0:29:00.6 Vael: This is mainly an argument about the belief in... like, faith in humans to follow economic incentives towards making life more convenient for themselves. That's where it feels like it's coming from in my mind. Like 200 years... Humans have existed for a long time, and things are moving very quickly. 10,000 years ago, nothing changed from year to year, from lifetime to lifetime. Now, things are moving very quickly, and I'm like, \"Hm.\"\n\n0:29:23.1 Interviewee: I'm happy to talk with you, like, add time to this interview. But I'm just curious, what do you think? Do you think it's going to be sooner than 200 years? 10 years? 20 years?\n\n0:29:37.1 Vael: I think there's a possibility that it could happen... sooner than 200 years. [chuckle] I'm trying to figure out how much of my personal opinion I want to put in here. There's a study, which was like, \"Hello, AI researchers. What do you think timelines?\" And there's several other kind of models of this, and a lot of the models are earlier than 70 years. So, this is... possibilities. I think it could be within the next 30 years or something. But I don't actually know, there's like probability estimates over all of these things. I don't know how these things work.\n\n0:30:13.2 Interviewee: Okay. Yeah. Some McKinsey consultant put like little probability estimates.\n\n0:30:18.9 Vael: Yeah, a little. It's a little bit more than that. There's a lot of surveying of AI researchers and then some people have some more fancy models, I can send you to them afterwards, you can see...\n\n0:30:27.3 Interviewee: Please, please. Yeah.\n\n0:30:28.6 Vael: Great.\n\n0:30:29.4 Interviewee: And like I said, I'm happy to add time to this, so...\n\n0:30:32.3 Vael: Awesome. Alright, so my next question is talking about these very... highly intelligent systems. So in your mind, maybe like 200 years in the future. And so I guess, yeah, this argument probably feels a little bit more close to home if your timelines are shorter, but like, regardless. So, imagine we're talking about the future. We have our CEO AI, and I'm like, \"Alright, CEO AI, I wish for you to maximize profits and try not to run out of money or try not to exploit people and try to not have these side effects.\" And this currently, obviously, is very technically challenging for many reasons. But one of the reasons I think, is that we're currently not very good at taking human values and human preferences and human goals, and then putting them into mathematical formulations such that AIs can optimize over them. 
And I worry that AIs in the future will continue to do what we tell them to do and not as we intend them to do. So, we're not that we won't be able to take all of our preferences... it will continue to be hard to take our preferences and put them into something that can optimize over. So what do you think of the argument, \"Highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous\"?\n\n0:31:46.6 Interviewee: ...Yeah. Well, so... Okay. Exactly what they're intended to do, so presumably this means, you start to test generalization of your system and some of holdout set, and it does something funky on that set. Or there's some attack and it does something funky. The question is... so presumably it's going to do superhuman... well, let's just imagine it's superhuman everywhere else, except for this one set. Now it's a real issue if humans are better than this AI on that one problem that it's being tested on, this one holdout problem...\n\n0:32:32.4 Vael: I think in this argument, maybe we have something like human-level intelligence in our AI and we're just trying to get it to do any goal at all, and we're like, \"Hello, I would like you to do a goal, which is solve cancer.\" But we are unable to properly specify what we want and all the side effects and all the things.\n\n0:32:47.0 Interviewee: Yeah, yeah, yeah. Yeah. No, that's exactly where I'm thinking in. So imagine that you are studying a degenerative disease and you figure out, or you ask your AI, \"Can you tell me why ALS happens in focal cells\" So your AI gives you an answer, \"Well, it's because of some weird transcription that happens on these genes.\" So if you have a small molecule that addresses that, then you're going to have normal-acting genes and normal phenotype in those cells. Okay, so byproduct could be, after 10 years, some of what was happening in those genes is natural for aging, and so you introduce some cancer, because what you did to fix neurodegenerative disease is so unnatural that it has that kind of horrible side effect. Now, is that a problem? (Vael: \"Yeah, I think--\") Yes. Yeah, but--\n\n0:33:52.7 Vael: I think in this particular scenario, maybe the AI would know that it was a problem. That like 10 years out, you would not be unhappy. I mean, you would be unhappy.\n\n0:34:03.6 Interviewee: You would be unhappy. Yeah, so okay, even if it does know it. Is that... So yes, that's an issue, but it's also significantly better than where we are today. And let's say the expected lifespan for ALS is five years, and now you're saying 10 years, but you're going to have this horrible cancer. That's a bad trade-off, bad decision to force somebody to make, but is undoubtedly progress. So I think black boxes.. and that would not be a black box, I believe. Black box would be like when... you don't know, you find out about this 10-year cancer 10 years after it passes clinical trials, you know? So this would be like, almost a very good version to AI that can tell you like, \"I found a solution, but given the current technology, this is the best we can do.\" So I would think that is a fantastic future that we...\n\n0:35:00.8 Vael: I think that's right.\n\n0:35:02.0 Interviewee: The black box version, I think is also a fantastic future because it would still represent meaningful progress that would hopefully continue to be built on by advances in biomedical technologies and AI. So for this specific domain, there's obvious issues with black box. 
Like, if you're going to use AI to make any decisions about people, whether they can pass a bar for admissions, this or that or the other thing, there's going to be problems there. But for medicine at least, I think black boxes should be embraced.\n\n0:35:45.2 Vael: Yeah, I think the scenario you outlined is actually what-- I would say that that system is doing exactly what the designers intended them to. Like the designers wanted a solution, and even if we currently don't have best solution, it gave it the best we had. I'm like, \"That seems successful.\" I think an unsuccessful version would be something like, \"Alright, I want you to solve this disease.\" But you forget to tell it that it shouldn't cause any other side effects that it certainly knows about and just doesn't tell you, and then it makes a whole bunch of things worse. And I think that would be more of an example of an unsuccessful system doing something that didn't optimize exactly what they're intended.\n\n0:36:19.0 Interviewee: Yeah, yeah, so that's just a failure. Okay, so here. Let's say it only does that on a certain type of people, racial profiles. Okay, so then the upshot of automating AI is that, sorry, automating science, is that it should make it a lot cheaper to test a far wider range of people. So just like, now the current state of the art for image-- object classification is to pre-train on this 300 million image data set, this JFT-300 or whatever. Likewise, you could develop one-billion-person data sets of IPS cells, which are like stem cells. Cool science fiction there that had never happened, but then you would have this huge sample where you would avoid these kinds of horrible side effects.\n\n0:37:28.7 Vael: Cool. I still think-- yeah. I'll just-- (Interviewee: \"No, tell me. Tell me, you still think what?\") Ah, no, no, I, well. I'm trying to think of how I want to order this, and we're running out of time and I do actually need to go afterwards. (Interviewee: \"Oh, okay.\") So I think I want to get to my next question, which does relate to how I think this previous... where I'm like, \"Hmm\" about this previous question. So assume we have this powerful AI CEO system, and it is capable of multi-step planning and is modeling other people modeling it, so it has like a model of itself in the world, which I think is probably pretty important for anything to be deployed as a CEO AI, it needs to have that capability. And it's planning for... And its goal, people have told it, \"Alright, I want you to optimize profit with a bunch of these constraints,\" and it's planning for the future. And built into this AI is the idea that we want... That it needs to have human approval for making decisions, because that seems like a basic safety feature. And the humans are like, \"Alright, I want a one-page memo telling us about this potential decision.\" And so the AI is thinking about this memo and it is planning, and it's like, \"Hmm, I noticed that if I include some information about what's going to happen, then the humans will shut me down and that will make me less likely to succeed in the goal that's been programmed into me, and that they nominally want. 
So why don't I just lie a little bit, or not even a lie, just omit some information on this one-page memo, such that I'm more likely to succeed in the plan.\" And so this is not like a story about an AI being programmed with something like self-preservation, but is a story about an agent trying to optimize any goal and then...\n\n0:38:55.2 Interviewee: Like shortcuts, yeah.\n\n0:38:56.5 Vael: And then having an instrumental incentive to preserve itself. And this is sort of an example that I think is paired with the previous question I was asking, where you've built an AI, but you haven't fully aligned it with human values, so it isn't doing exactly what the humans want, it's doing what the humans told it to do. And so, what do you think of the argument, \"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous.\"\n\n0:39:38.5 Interviewee: Okay, so if we are going to be using supervision to train these systems, what you're describing is a shortcut. So, highly intelligent systems exploiting a shortcut like you're talking about, which would be like, \"Let's achieve a high score by doing the wrong thing...\"\n\n0:40:00.4 Vael: Yeah, yep.\n\n0:40:00.5 Interviewee: Can be corrected by having the right data set. Not saying that it would be this simple, input-output, learn the mapping between the two, but we'll just imagine that it is. If you have any kind of low-level bias in your data set, or high level bias, if this is the case, neural networks will learn to find that bias. So it's a matter of having a broad enough data set where that just basically, statistically, what you're talking about would not exist.\n\n0:40:35.4 Vael: Interesting. I feel like--\n\n0:40:37.3 Interviewee: I don't think that would be a problem if you can design the system appropriately.\n\n0:40:41.2 Vael: Yes, I totally agree this would not be a problem if you can design the system appropriately.\n\n0:40:44.9 Interviewee: The data set appropriately.\n\n0:40:46.6 Vael: Ah, the data set appropriately. Interestingly, on one of the examples that I know of where we have, like you set the AI, it's like a reinforcement learning agent, and you're like, \"Reinforcement learning agent, optimize the number of points.\" And then instead of trying to win the race, which is the thing you actually wanna do, it just like finds this weird local maximum.\n\n0:41:02.8 Interviewee: Yeah, so for RL, that's like the ultimate example of shortcuts. Because there, what you have are extremely... So, for video games, if you're talking about a race, you have extremely limited sensor data, sorry, low dimensional sensor data. And you have few ways of controlling, manipulating that sensor data. So, let's say there's two local minima that can be found via optimization, and one is, \"go for red, because red wins,\" and the other is, \"Drive your car to avoid these obstacles and pass the finish line,\" which just happens to be red. Of course, the shortest solution will be the one chosen by the model. So I agree, shortcuts are ubiquitous, and of course, they will become more and more advanced as we move into these other domains. But I think what you're describing is just a problem with shortcuts. And so it becomes a question of, can you induce enough biases in your model architecture to ignore those shortcuts? Can you design data sets such that those shortcuts wash out via noise? 
Or can you have human intervention in the training loop that says, \"Keep working, keep working. I don't accept this answer?\"\n\n0:42:30.3 Vael: Great, yeah. Cool. Have you heard of AI alignment?\n\n0:42:37.1 Interviewee: Yeah, yeah, I have. Well, I've seen OpenAI talking about that a ton these days.\n\n0:42:42.3 Vael: Great. Can you explain the term for me?\n\n0:42:46.5 Interviewee: So this is like, you have some belief about how a system should work, and the model is going to do what it does, and so alignment means you're going to bring the system into alignment with your belief about how it should work. It's essentially what you're talking about with the shortcut problem. So, that's it, yeah.\n\n0:43:09.4 Vael: Great, thanks. Yeah, I think I mostly think the problem of aligning systems with humans is going to be... One of my central thing is that I think it will be more difficult than it seems like you think, which you were like, \"Oh well, we can do it with having better data sets or we can do it with denoising things,\" and I'm like, I don't actually know if...\n\n0:43:31.1 Interviewee: I don't know about denoising-- I don't mean to make this sound trivial, because it certainly is not. But to give you an example, MNIST is so confounded with low-level cues. You don't need to know \"six\" to recognize six, you just need to know this contour, right? Is that... only in six do you have the top of the six and then a continuous contour. So, that's what I mean. So if you can design a data set where those don't exist, then you're golden. But usually-- In the real world, that doesn't happen that much because we have these long-tailed distributions of stuff. So then you have to induce biases in your architecture, and this comes back to human vision, sorry, human intelligence. It's alignment with human decision-making, because we have all these biases through development through our natural-- our genomes, through neural development that make us able to interact with the world in such a way where we don't just go for the red thing, where we're not vulnerable to adversarial examples. So I don't mean to trivialize it, I only mean to say computationally, the problem is biases within the data set, and when you use gradient descent, if that is the learning algorithm that you used to train the model that you're talking about, then that's what you're going to be fighting against.\n\n0:45:06.0 Vael: Cool. Yep, makes sense.\n\n0:45:07.6 Interviewee: Yeah.\n\n0:45:08.3 Vael: Alright. I think my last question is just, how has this interview been for you and have you changed your mind on anything during it?\n\n0:45:19.1 Interviewee: Yeah, well, it's great. I never thought of an AI CEO. I think that's super fascinating. Yeah, it was super fun. Yeah, it was great.\n\n0:45:33.2 Vael: Great. Wonderful. I think I'm going to call it now, just because we both have places to be. And I will send along the money, and then I'll also send along some resources since you said you were curious about my opinions on timelines.\n\n0:45:47.6 Interviewee: I definitely am. Thanks so much for your time, I really appreciate it.\n\n0:45:51.1 Vael: Yeah, thanks so much for yours.\n\n0:45:53.9 Interviewee: Okay. All right, take care. 
Have a good night.\n\n0:45:55.7 Vael: Bye.\n", "url": "n/a", "docx_name": "NeurIPSorICML_7oalk.docx", "id": "c431582530cc89e1abd7c42d67e3755d"} {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": [], "date_published": "n/a", "text": "Interview with 92iem, on 3/21/22\n\n0:00:05.2 Vael: Alright, here we are. So my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n0:00:15.2 Interviewee: Currently, I work a lot with language models, but that wasn't always the case.\n\n[...]\n\n0:00:57.4 Vael: Great, thanks. And then what are you most excited about in AI? And what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n0:01:08.5 Interviewee: So obviously, I think the progress in language models in the last couple of years has been pretty astounding. And the fact that we can interact with these models in more or less in the natural way that we would like to interact with it just has opened up so much in terms of getting feedback from humans and stuff like that. So I think just the progress in language models, and then coupled with that, the more recent progress in using essentially some of the same techniques to do image modeling, so that you have the possibility to do just seamless multi-modal models. I think that's quite exciting. Some people think that... You know, it's not like most of us can just paint a photographic scene and show it to other people. So it's not like-- the photographic aspects of generative image models is not what excites me, it's the fact that humans manage to communicate quite a bit with diagrams and stuff like that. When we're doing science, you can draw little stick figures and pretty much convey what you need to convey, and that coupled with natural language should give us the ability to start thinking about getting AI to do math and science for us, and I think that's the thing that is most exciting to me.\n\nSo I know that a lot of people are excited by the idea that you can essentially have a Google that's a bit... It's smarter, right? You can just talk with it and say, Hey, tell me a bit about this tree, and AI says something and you say, Oh, but what about that tree? That's fun, but I really feel like humans are not bottlenecked by the inability to ask about trees and buildings and trivia, essentially. I think where we're bottlenecked is like progress in science. I think, for example, so it's pretty clear that the political solution to climate change-- the time for that has kind of come and gone. I mean, we can slow it down. If we, like the whole world, suddenly decided to say we're going to do something about this, maybe you slow it down, but I think just the timing is a little bit off. So a lot of that's going to be have to be a technological solution. And as amazing as technological progress has been, I think we're not fast enough when it comes to developing solutions to a lot of our problems. And I do think in 10, 20 years, AI is going to play a big role, both in the specialized domain in the sense of AlphaFold, where you really just come up with a system that does the thing you want it to do, but more impactfully, perhaps, by having an army of grad student-equivalent language models that can help you answer questions that you need answered. So that's very exciting, right.\n\n0:04:24.1 Vael: Yeah. It's a cool vision.\n\n0:04:26.5 Interviewee: I think the risks are... It's almost banal, right? 
Like with most technologies bad actors can make arbitrarily bad use of these things. So yeah, when they start weaponizing these things... I'm a little bit less concerned than some people are about like, Oh, but what if we have AIs that write fake news. Like all of that is to some extent present now, and I guess it's just a question of degree, to some extent. Okay, people argue that that difference in degree matters, and they're not necessarily wrong. I just, the thing that bothers me more definitely is very specific, malicious uses of AI. So there was a recent paper, this is so obvious that it's almost dumb, but someone said, Oh, yeah, we put an AI to trying to develop a drug that, let's say, reduces the amount of poison, and all you have to do is change the objective function, flip the sign and suddenly it just optimizes for the most poisonous thing you can possibly find. That coupled with technologies like CRISPR and stuff like that just creates a pretty dangerous... puts very dangerous tools at people's disposal. So I would say that's the thing that I would worry about.\n\n0:05:54.9 Vael: I have been impressed by how everyone I've talked to in the past week has mentioned that paper, and I'm like, good, things get around.\n\n0:06:01.8 Interviewee: Well, Twitter. Thanks to Twitter.\n\n0:06:04.7 Vael: Nice. Alright, so focusing on future AI, putting on a science fiction forecasting hat, say we're 50-plus years into the future. So at least 50 years in the future, what does that future look like?\n\n0:06:17.9 Interviewee: For AI?\n\n0:06:20.5 Vael: In general, where if AI's important, then mention that.\n\n0:06:27.4 Interviewee: I see. So 50 years, oh my God. Fifty years is a long time away. Assuming that we've managed not to have nuclear conflicts between now and then, which is just one of those things that now you have to put at least a one digit probability on these days. But, yeah, I think that we will end up having... Well. The optimistic scenario is that we ended up solving a few key problems. One is transitioning mostly out of fossil fuel, so a combination of solar and fusion power. I think that's going to be huge, and I think that AI will have played a role in some of that development. And I think 50 years from now, I think unless we are monumentally blocked in the next couple of years, AI will be pretty omnipresent in our lives, and certainly in the scientific sectors. So one thing that I'm a little bit, just something that comes to mind, is that a lot of people are into this idea of these sort of augmented... I don't know if people are literally willing to wear glasses, but certainly you could imagine having little ear buds that are fairly unobtrusive that go around your ears or something, and they do have a camera, so you can just ask it, whatever you need, you can ask it questions.\n\nIn 50 years, I think at that point, maybe some people will have worked out direct neural interfaces with stuff, and so maybe the more adventurous people will have a bit of augmented memory or at least the ability to sort of silently query their little augmented system. I think that might be a thing. Not everyone will have adopted it, I think it'll be a weird world. I personally-- I've never been a huge, like the fastest adopter of technology, but that sort of stuff is next level, and I don't know what that's going to look like.\n\nI also... well, two things, I guess they're kind of linked. I think that people will live substantially longer. 
I think, unless something miraculous happens, I don't think they'll be living like 200, 300 years, but I certainly think it's possible people will be living to 150 or something like that. Not people born now; I'm not going to live to 150. Someone was telling me that people born, these days, they're going to live to see the year 2100, right. That's not quite in the 50-year time frame, but yeah, I certainly think people born today are going to be living, like their average lifespan in industrialized countries, assuming a certain level of privilege, they're going to be able to live quite a bit longer. That coupled with AI possibly automating quite a few jobs is going to change the social landscape a bit.\n\nOne thing that occurred to me recently... so people used to say that-- well, people say many things-- one is that, this, unlike industrialization... Some people always say technological progress destroys some jobs but creates more jobs on the other side. And then some say, Okay, but this one is different because you're automating intelligence and that really does put humans out of their main forte. So one of the things that people worry most about, in addition to the universal basic income stuff, is just the loss of dignity, that people always assume that even people who don't have what you would call glamorous jobs value the fact that they work and get paid for that work. But I think some of the stuff that happened during Covid makes me doubt that a little bit, in the sense that people did quit jobs that ostensibly looked good. Even in the tech sector where people, I felt like generally, they're not the worst jobs by any stretch, and they were like, No, this is meaningless, I want to go do something meaningful with my life. So I think the recent, the past couple of years have made me question the idea that it would be that big of a psychological blow for people to not work for money. That if you did establish a universal basic income, plus you'd have to solve some other, many complicated issues, but I don't think people will be that unhappy to be not having to work menial jobs. I'm not saying there's not going to be upheaval, but I think it's going to be like a combination of living longer, and not possibly having to do jobs if you don't want to do them. I think that's just going to be, I don't know. It might be a nice change. In the optimistic scenario, I guess.\n\n0:11:35.9 Vael: Got it. Yeah. Well, my next question is related to that. So people talk about the promise of AI, by which they mean many things, but one of them is maybe having a very general capable system such that it will have cognitive capacities to replace all current day human jobs. So you might have a CEO AI or a scientist AI. Whether or not they choose to replace human jobs is different, but have the ability to do so. And I usually think about that and the fact that in 2012 we have AlexNet, deep learning revolution, here we are, 10 years later, we've got things like GPT-3, which can do some language translation and some text generation, coding, math, etcetera, a lot of weirdly general capabilities. And then we now have nations competing and people competing and young people going into this thing, and lots of algorithmic improvements and hardware improvement, maybe we get optical, maybe we get quantum, lots of things happening. And so we might actually just end up being able to scale to very general systems or we might hit some sort of ceiling and need to do a paradigm shift. 
But regardless of how we do that, do you think we'll ever get very general AI systems, like a CEO or a scientist AI, and if so, when?\n\n0:12:39.1 Interviewee: I don't know about CEO AIs. The scientist AIs, yes. Yeah, and that's going to come in stages. So obviously the current generation of AIs, we don't put them in human bodies and let them do experiments and stuff like that, right. It's going to be a while before we start letting them operate like particle accelerators. Fifty years... Maybe in 50 years. My original background is [non-AI field], and I really could have just done my entire PhD from a desk, and that sort of work, certainly, AI can replace, I think, to a huge degree, from idea generation to solving the answer and writing a paper, yeah, that just feels so doable. Again, unless we hit a giant wall and find that our current transformers simply cannot reason, but I think that looks unlikely. I don't rule it out, but that looks unlikely to me.\n\n0:13:46.8 Vael: Yeah. Okay, what about this CEO AI, like with multi-step planning, can do social inference, is modeling other people modeling it, like crazy, crazy amount of generality. When do you think we'll get that, if we will?\n\n0:14:00.7 Interviewee: Yeah, that's not the part that I'm worried about. AI can certainly model human intent, but... I guess it depends on what you want from your CEO AI. And this I think gets at a little bit one of my dissatisfactions with discussions about human-- like, AI alignment. It's not that people don't talk about it, but it's rarely talked about. I don't know, on Twitter certainly. A lot of AI alignment stuff talks about-- they *don't* talk about the fact that humans disagree wildly on what humans should do. So I'm thinking about this in connection with the CEO, because I think in the limit, AI will be able to do anything, any specific thing you ask the AI to do, it can do, but the question of whether you would want the AI to be CEO, I think that's mostly a human question. So that's why I said-- I think that's a policy decision, not a AI capability question.\n\n0:15:25.3 Vael: Got it, yeah. Do you think that people will end up wanting... that there will be economic incentives such that we'll eventually have things like CEO AIs?\n\n0:15:36.0 Interviewee: I guess in some sense, no, because I think a human would still be the CEO and then you would have your AI consultant, essentially, that you would ask all the things. You would delegate almost everything, but I think that people would still want to be at their very apex of a corporate hierarchy. It seems weird to put a robot in charge of that, just like... why. It's a title thing, almost, like, why would you make the robot the CEO?\n\n0:16:03.2 Vael: Yeah, yeah. In some vision of the future I have, I have the vision of a CEO AI... we have a CEO AI and then we have shareholders, which are the humans, and we're like, \"Alright, AI, I want you to make a company for me and earn a lot of money and try not to harm people and try not to exploit people and try to avoid side effects, and then pass all your decisions, your major decisions through us\" and then we'll just sit here and you will make money. And I can imagine that might end up happening, something like that, especially if everyone else has AI is doing this or AIs are way more intelligent and can think faster and do things much faster than a human can. I don't know, this is like a different future kind of idea, but.\n\n0:16:46.8 Interviewee: But that seems so weird. 
Because, then-- so, assuming... I don't know if everybody has access to the same AI in that scenario, but like it can't be the case that 100 people all say to their own individual AI, \" Form a company and turn it into a $100 billion company or a $1 trillion company\", and they all go out at optimizing. I think at that point, in that kind of world, I think there would have to be a bit more coordination in terms of what goes on, because that just creates some nasty possibilities in terms of bringing the economy down. So I don't know that that's how things would just happen. It cannot be the case that we would just say, \"Robot, figure out how to make a trillion dollar company. I'll give you this one idea and just run with it,\" and then just like we are hands-off. That seems extremely unlikely, somehow.\n\n0:17:38.9 Vael: Yeah, I'm interested in how that seems very unlikely. It seems like to me... Well, we were talking about scientist AI, and I imagine we can eventually tell a science AI to like solve cancer, and maybe it will actually succeed at that or something. And it seems like it's different, to be like, Hey, CEO, make a ton of money for me. Is that getting at any of the underlying thing or not?\n\n0:18:08.4 Interviewee: Uh, hm. Yeah, so I think even there, I think you would never tell an AI to \"solve cancer\". Well, yeah... You would want to give it more specific goals, and I think... In any scenario where we have full control over our AIs, we wouldn't want such vague instructions to be turned into plans. That's a scary world, where you can just say solve cancer and the robot runs with it. I think for the same reason, I don't think you would want a world where someone can say, \"AI, make a lot of money for me,\" and that's the only instruction the AI has, and it's allowed to intervene in the world with those instructions. So yeah, that's why I don't see, just like from a sanity perspective, you would-- you never want to unleash AI in that manner, in such a vague and uncontrolled manner. Does that make sense?\n\n0:19:03.6 Vael: Yeah, that makes sense that you wouldn't want to be... because it's very unsafe, it sounds like, or it could be--\n\n0:19:09.7 Interviewee: Yeah, kind of insanely unsafe, but...\n\n0:19:14.8 Vael: Nice. Yeah, do you think people might end up doing it anyway? Sometimes I feel like people do unwise things in the pursuit of, especially unilateral actors, in the pursuit of earning money, for example. Like, Oh, I've got the first scientist AI, I'm going to use it to solve the thing.\n\n0:19:35.3 Interviewee: That's a good question. I think, I really do think you would want... Yeah, I wonder about how you would actually enforce any kind of laws on AI technology. It's the most complicated thing to enforce, because nuclear weapons-- One of the nice things about nuclear weapons is it's actually pretty hard to develop nuclear weapons in secrecy without releasing any radiation, that's one of its few good points. I think AI, it's true that you could just develop and run it. But I think at the point where any AI has to interface with the real world, whether it's in the stock market or something like that, I do think that people will start seeing the need for finding ways to regulate the speed. Even high frequency trading is starting to be, like you can't interact with it, any kind of stock market in less than one nanosecond or something like that. I think similarly, there's just going to be some guardrails put in place. 
If there's any kind of sanity in terms of policymaking at that time, you would want guardrails in place where you could not unleash AI with such large powers to affect a large part of the world with minimal intervention powers. Yeah. This is all assuming there's a sane policymaking environment here, but...\n\n0:21:04.8 Vael: Yeah. Do you think there will be?\n\n0:21:09.3 Interviewee: I think so. I think so, I'm hopeful in that regard. I'm not saying that Congress is ever going to really understand the nuances of how AI works, anything like that, I just think there would be too many... Even in a world where only OpenAI and DeepMind have full AGI, I don't think they'd want to create a world where one of them can unleash something at the level that you described. And I also think that when those two companies get close, they're going to wonder if other states, say, Russia or China, are going to be close, and they're going to start wanting to really hammer down, hammer out... like there will be a sense of urgency, and hopefully they have enough influence to influence policymakers to say, \"You need to take this seriously.\" And this is where I think almost the fact that it takes... Okay, I said earlier that, you know what, the nice thing about nuclear weapons is that you could detect it, but I think one of the nice things about the fact that right now, it looks like you're going to require enormous compute to get anything that is remotely AGI. That's the thing that allows maybe... That means the only huge corporations or states will be able to do it for at least some period of time, and hopefully those are the same actors that can somehow influence policymaking. If there were just one person, if they just had the ability to do that, it would be a little bit problematic, actually. So in some sense, because these institutions are big, I think they're going to be both constrained a bit more in terms of what they can do, and also they're going to be able to, if they are well-intentioned, to influence policymaking in a good direction.\n\n0:23:06.2 Vael: Do you think they'll be able to do international cooperation? Because I imagine China will also have some AI companies that are also kind of close, I don't know how close they will be, but...\n\n0:23:17.7 Interviewee: They'll try. I don't know that China will listen to the US or Europe. I agree that's not going to be easy, yeah. Who knows what they're up to exactly, there.\n\n0:23:31.0 Vael: Yeah, it seems like they're certainly trying, so... Yeah, another one of my questions is like, have you thought much about policy or what kind of policies you want to have in place if we are getting ever closer to AGI?\n\n0:23:48.8 Interviewee: Actually, I haven't given it that much thought, what the laws would specifically look like. What I don't think is really possible is something like the government says, You now need to hand over control over this to us. I don't think that's super feasible. Yeah, I can't say I have a good idea for what the laws would specifically look like. I think as a starting point, they'll certainly create some kind of agency to specifically monitor... Actually, right now, there's no agency like the SEC or something like that that monitors what exactly goes on in AI. I mean, there's some scattering of regulations probably somewhere, some vague export controls and stuff like that. But yeah, they'd certainly start creating an agency for it, and their mandate would start to grow. 
I think it might, again, have to be something like what we do with nuclear reactors, where you have an agency that has experts inside of it, and that they are allowed to go into companies and kind of investigate what's going on inside, just as, if Iran is developing nuclear weapons and they agree to let inspectors in. I think it's going to be up to something like that. And then, yeah, similar to these nuclear treaties, perhaps there would have to be something along the lines of like... there are certain lines you cannot cross with AI, and if someone does cross it, that institution or the country as a whole gets sanctioned. It's going to have to be at that level. Certainly, given the power of the putative AI that we're thinking about. I think the regulations are going to have to be quite dramatic if it's going to have any kind of effect.\n\n0:25:45.1 Vael: Yeah. One thing I think that is a difference between the nuclear situation and the AI situation is that nuclear stuff, seems not very dual use. Well, nuclear weapons, at least, not very dual use. Versus like AI has a lot of possible public benefit and a lot of economic incentives, versus like you don't get, I don't know, you don't benefit the public by deploying nuclear weapons.\n\n0:26:05.7 Interviewee: But nuclear reactors, but that's the whole--\n\n0:26:07.9 Vael: Nuclear-- Yes, you could--\n\n0:26:09.0 Interviewee: That's the whole... So Iran would always pretend that, Hey, we're just developing nuclear reactors for power. Just the problem is that was always very easily converted to nuclear weapons. I think that could be a similar—\n\n0:26:24.9 Vael: Yeah, yeah, it is similar in that way. Somehow it still feels to me that these situations are not quite analogous, in that the regulations are going to be pretty different when you're like, \"I am going to make sure that you're not doing anything bad in this area,\" and people are like, \"ah, yes, but we need to get the new smartphone, scientist AI, etcetera.\" But yeah, I take your point. Another thing that I think is interesting is that current day systems are really pretty uninterpretable, so you're like, \"Alright, well, we have to draw some lines, where are we going to draw the line?\" What is an example of what a line could be, because if there's government inspectors coming in to DeepMind and you're like, \"Alright, now inspect,\" I'm like, what are the inspecting?\n\n0:27:11.2 Interviewee: Yeah, so when you say interpr-- so that's another thing about... one of my pet peeves about interpretability. People are not that interpretable. People hardly, rarely know what's going on in other people's heads, and they can tell you something which may or may not be true, sometimes they're lying to you, and sometimes they might be lying to themselves. When a doctor tells you, \"This is what we're doing,\" unless you're another doctor, you rarely understand what they're saying. And so, yeah, this is a total tangent on like my... the thing around, the discussion around interpretability is always such a mess. But what are they inspecting? If we're imagining inspectors, they could certainly go in and say, like, if it's a language model, you can certainly allow them to query the language model and see what kind of answers, what kind of capabilities these language models have. 
You could say, if it's a language model, just totally hypothetically, you could say, \"Alright, develop me, write me a formula for a bioweapon,\" and if the language model just gives that to you, then possibly you have a problem. Stuff like that. So if a company that has that capability hasn't put in the required fail-safes like that, then they can be held liable for X amount of problem, the trouble, right.\n\n0:28:58.3 Vael: Interesting. Cool, so that's cool. You've got like a model of what sort of rules should be in place, and it sounds like there should be rules in place where you can't develop bioweapons or you can't feed humans bioweapons when they ask for them.\n\n0:29:12.0 Interviewee: Yeah, stuff like that. In this inspector model, I think that's what would kind of have to happen. But yeah, it's not like I'm an expert in this, but that's what I would think.\n\n0:29:24.4 Vael: Yeah, something I'm worried about is that no one is an expert in this. Like policymakers-- when I talk to the AI researchers, they're like, Oh, yes, the policymakers will take care of it, and I'm like, the policymakers are busy, and they're doing many different things, there's not many people who are focused singularly on AI. Also, they're mostly focusing on current day systems at the moment, so like surveillance and bias and transparency, and like a bunch of different things, so they're not really thinking very future at the moment. And they don't know what to do because they don't understand the technology, because the technology moves extremely fast, right, and so like AI researchers are the ones who know it. And I'm like, Alright, AI researchers, what should we tell them to do. You're like, Well, we should make a list of things that the AI shouldn't do, like basic fail-safes. And I'm like, Great, it would be super cool if that was written out somewhere and then we can start advocating for it or something, because I'm worried that the policy will just continue to lag really far behind the actual tech levels, except where like... Policy is already several years behind, maybe like 10 years or something, and will continue to be that far behind even as we're approaching the very powerful AI.\n\n0:30:31.9 Interviewee: Yeah, so a couple of things there. One is that that's why you need more of an agency model rather than laws, because creating laws is very, very slow, whereas an agency can drop some rules and maybe they start enforcing them. And so you do need a sensible agency that doesn't create bad rules, but the ability to be flexible. That said, I think... The biggest problem with policymaking right now is that the policymakers don't understand AI at all, right. And you sort of hinted at that. And I think... If I'd asked myself, at this moment in time, is there anything that, any rule that we need at this moment in time, I'm not sure there is. AIs are not there yet.\n\n0:31:25.2 Interviewee: So at this moment in time, I think if you ask most researchers, \"Hey, do we need to create specific laws to prevent X, Y, Z,\" I'm not sure many people would tell you, you need that. And so these laws, I think, are going to have to come in at very sensible points, and it's not clear to me that the policymakers are going to know when that time point is. I would say even in the AI field, very few people know when that's going to be. There's a lot of stuff coming out of especially big labs where the world doesn't know. There's like 100 people that know what's coming in the next year. 
I don't know what a good solution to that is.\n\n0:32:17.5 Vael: Especially if we can get AIs that can generate lots of deadly poisons already. Yeah, I think it'll maybe be hard to tell, and then also one needs to develop a list, if there's going to be in list form or...\n\n0:32:32.2 Interviewee: The problem is, I think it's easier to regulate general AI just because it's going to require so much compute. But I think more specific AI that anyone can run on a GPU, like on a laptop, is more or less impossible to regulate. So it's not clear to me what the law would be, except if you use a bioweapon, you're in trouble. That law already exists, right.\n\n0:33:00.5 Vael: Yeah, I think that one already exists, so...\n\n0:33:05.4 Interviewee: So I think in some sense, like kind of in the trade-off of what can the technology do right now, and who might try to deploy that, our laws sort of cover the problem cases at the moment. I think where I get a little bit stuck is if you try to say, \"Alright, in five years, should we have laws banning certain uses of a very, very capable general model?\" I do think at that point, Congress should seriously consider creating a regulatory agency. And I think AI researchers will only support this if there's some semblance of like, kind of like NASA, where there's some faith that engineers are in charge of this thing, that kind of know how these systems work, that they can think rationally about both the technological side and the policy side of things. And so that's going to take some work on the side of whatever administration is in power at that time. But yeah, it's not going to be easy. I think it's going to take a very capable administration to handle that transition gracefully.\n\n0:34:19.5 Vael: Yeah, that makes sense. Yeah, I'm worried about a few different things in this future scenario. I'm like, Okay, I don't know if the agency will be developed while-- in a sort of future thinking sort of way, I don't know that it will implement the right type of policies, I don't know that it will have the power to really enforce those policies, I don't know if it will have the power to enforce internationally. But I do like the idea that-- but obviously one should still try, and it seems like there should probably be a lot of effort going into this, as you said, something like on a five-year scale.\n\n0:34:49.2 Interviewee: Yeah, it's just that knowing AI researchers, there's just going to be such extreme pushback. If there's any sense that there's been a bureaucracy created whose job is nothing more than to just slow things down for no good reason. That's almost a default kind of way in which such an agency would get created, and so, yeah, it's just one of the situations where you have to hope that the future leaders of America are smart.\n\n0:35:24.7 Vael: Yeah. Yep. A thing to bank on. Cool. So I'm concerned about long-term risks of AI. That's one of the ways in which I'm concerned, is that we won't get the policy right, especially as we're doing international competition, that there may be race dynamics, as we're not able to have really strong international governance. And I don't know if this will go well, and I'm like, I think people should work on this.\n\nBut another way I think that things might not work: So we talked a little bit about the alignment problem. And another interesting thing about the alignment problem is... 
or in my mind, so we've got maybe a CEO AI, or whatever kind of AI, but this is the example I've been working with, and it's making plans and it has to report any decisions it makes to its shareholders, who are humans, and the humans are like, \"I want a one-page memo.\" And the AI is like, \"Okay, cool, one-page memo. I have a lot of information in my brain, in my neural network, while I'm trying to maximize profits with some other goals-- with some other constraints.\" And it's noticing that if it gives certain information to humans, then the humans are more likely to shut it down, which means that it's less likely to succeed in its goal. And so it may write this memo and leave out some information so that it decreases likelihood of being shut down, increases the likelihood of achieving its goal. So this is not a story where we're building in self-preservation into the AI, but a story in which-- why instrumental incentives of an agent achieving, trying to go for anything that is not perfectly aligned with human values, just like what humans tell it to do instead of what humans intended it to, then you might get an AI that is now optimizing against humans in some degree, trying to lie to them or deceive them in order to achieve whatever it has been programmed to do. Do you think this is a problem?\n\n0:37:08.2 Interviewee: The scenario you described was exactly what human CEOs do.\n\n0:37:12.7 Vael: Hm, yes. But more powerful systems, I think, with more influence over many things.\n\n0:37:20.6 Interviewee: So this is the problem-- so I think this actually still is a human problem. So if a human being... like these AIs will be, depending on the mix of reward for not getting shut down and... at the kind of detailed level these days, we often... When we do RL with language models, we have two things going on, one is an RL objective, maximize the reward as much as you can, but the other objective is to tie it to the original language model so it doesn't diverge too much. In which case, if you are writing a memo, it would try to write a memo in the style that a human would write it, let's say. So the information content would be somewhat constrained by what a typical memo written by a human being would look like, and then on top of that, it would try to optimize what it is trying to do, maybe just trying to keep the company alive for as long as it can or something like that. So there is that sort of like, at least the way we do things now, there's a little bit of self-regulation built in there. But this is why I think, more fundamentally... any question where if you just replace the AI with a human and ask the same question: Is this a problem or not a problem? I think that's more or less a human problem. And you have to think a bit more carefully about what we would want a human to do in that exact same situation. Do we have an answer for that? And then take into account the fact that the AI is more powerful. You don't need a super devious AI for a CEO to start lying to their shareholders a little bit, or misleading their shareholders a little bit, in order to present a more rosy picture of what the company is doing. So do we already have mechanisms that prevent that? I think we do, and that same thing would apply to the AI.\n\n0:39:22.1 Vael: Yeah. I think the things that are interesting to me about the AI scenario is that we have the option of... we are designing the AIs, so we could make them not be this way. 
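To make concrete the two-term objective the interviewee describes above-- maximize a reward while a second term ties the fine-tuned model to the original language model-- here is a minimal, hypothetical sketch in Python of a KL-regularized setup of that kind. The function name, the beta coefficient, and the toy numbers are illustrative assumptions, not any particular lab's implementation.

# Minimal sketch of a KL-regularized RL objective for a language model:
# combine a task reward with a penalty for drifting away from the
# original ("reference") model, as the interviewee describes.
def kl_regularized_reward(reward, policy_logprobs, reference_logprobs, beta=0.1):
    # reward: scalar score for the sampled text (e.g. from a reward model)
    # policy_logprobs / reference_logprobs: per-token log-probabilities that the
    #   fine-tuned policy and the original model assign to the sampled tokens
    # beta: strength of the "don't diverge too much" constraint
    kl_estimate = sum(p - r for p, r in zip(policy_logprobs, reference_logprobs))
    return reward - beta * kl_estimate

# Toy usage: a high task reward gets discounted when the policy has drifted
# far from what the original model would have said.
print(kl_regularized_reward(
    reward=2.0,
    policy_logprobs=[-0.7, -1.1, -0.4],
    reference_logprobs=[-1.5, -2.0, -1.2],
))

The larger beta is, the more the output stays close to what the base model would have written (the memo-written-in-a-human-style effect the interviewee mentions); a small beta lets the reward dominate.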
And also having an AI that has a lot, lot more power, that is as powerful as a leader of one of the countries, and that has the ability to copy itself and could do self-improvement, so it can be smarter than it started out with. And okay, we've got something's possibly smarter than us, which is like the ability to reason and plan well, and has the incentive to acquire resources and influence and has all the human kind of incentives here, and we can't-- and it's not as-- I don't know, it's maybe not as interpretable as a human, but you can't throw it into jail. Like, I don't know, there's a lot of the mechanisms for control, I think, are maybe not there.\n\n0:40:12.7 Interviewee: Yeah, so it's in this sort of legal context that I think you would not want the AI to be a CEO or any... There has to be something... For something like this, you would want the person... There should be a person who's liable for the decision being made by the AI. You have to do some due diligence to the answers that the AI gives you. There's no other way. Yeah.\n\n0:40:40.2 Vael: There's generally a thing in some of your answers where you're like, Well, you know, any reasonable person would do X, and I'm like, I don't know if we're in a world where we've got a bunch of just reasonable people putting in appropriate fail-safes which they've spent a long time constructing. And some of these fail-safes, I think, might be very technically difficult. I think the alignment problem might be quite technically difficult, such that researchers who are working on capabilities would get ahead even as the people working on safety mechanisms per se is also growing, but at less speed as all the capabilities researchers are pushing forward. Such that we might have an imbalance in how many people are working on each thing.\n\n0:41:16.3 Interviewee: Yeah, so I guess maybe I'm thinking of two different things. One is just the sheer-- kind of like the question of just putting rules in. The other question that I often have with these discussions is, Does a sensible answer exist to the question at all? So imagine, okay, so imagine we replace the CEO with... Imagine we replace Mark Zuckerberg with a very, very smart AI. And this very smart AI is posed with this question of, okay, there is a photo of a naked child, but it's in the context of a war, it's a war photograph. Should this photo be allowed on Facebook or not? The CEO cannot... It doesn't really matter how smart the AI is, this is just not a question the AI can answer, in the sense that it's an indelibly human question. That's why-- I just think there are certain questions where when we posit a incredibly intelligent AI, it's got nothing to do with that. It's just a question of what a group of people who... A group of humans who disagree on what the final answer should be. In that scenario, there's no right answer for the AI. There's nothing the AI can do in that scenario that is the correct answer.\n\n0:42:46.4 Vael: Yeah. I think in my vision of what I want for AI in the future, I want AIs that do what humans intend them to do, so I want the alignment problem to be solved in some way, and I want it to all involve a huge amount of human feedback. So for every question that is confusing or the AI doesn't know what to do, if it hasn't internalized human values, then I want it to ask a bunch of humans, or maybe we have some way to aggregate human opinions or something. 
And then we have an AI that is reflecting human values and preferences, so if humans are confused about this particular issue, then I don't know, maybe the default if humans disagree is not to publish[?]. But in general, just having some sort of checking mechanism. The thing that I'm worried will happen by default is that we'll have an AI that is optimizing for something that's sort of right, but not quite right, and then it will just kind of now do like whatever things we put into it-- whatever optimization goals we would put into it will be kind of locked in, and so we'll eventually get an AI that is doing something kind of analogous to the recommender algorithms thing, where recommender algorithms are sort of addictive and they're optimizing something-- clickthrough rate-- that's kind of close to what humans value, but isn't quite. And then we might have an AI that is just like, A-ha, I am now incentivized to deceive humans to gain control, to gain influence, to do self-improvement, and we've sort of lost control of it while it's doing something that's almost but not quite what we want.

0:44:05.6 Interviewee: I think one thing that comes to mind, actually... so this kinda goes back to the interpretability question, but I think it may be a slightly different angle on it. I think it's going to have to be the case where, when an AI makes a decision of that sort, it should output almost a disclaimer. So the way credit card companies would write you this long disclaimer. And it would have to tell you, for each decision it makes, what the risks are, and then a human has to read that and sign off on it. Now, the question is going to be... the other problem with credit card disclaimers is that they were so long that the average person couldn't read it and make sense of what the hell was going on. So the AI would be somewhat required to come up with a comprehensible set of disclaimers that say, Okay, I asked a bunch of people, they said this, but obviously we shouldn't always listen to what the majority says. I also consulted some moral ethicist or some ethicists, and I synthesized the combination of the ethicists, previous precedents, and what the general public wants. I recommend that given the combination of these three factors, you should do this. And then a person should sign off on it, and then that person in some sense should be liable to the extent that the AI gave a reasonable summary of the decision factors. So something along those lines.

0:45:32.8 Vael: Yeah, that sounds brilliant. I would be so excited if AI in the future had that. I'm like, Wow, we have an AI that is incentivized to instead make things as maximally clear and comprehensible, taking into account what the human wants and listing out the things. I'm like, If we solve that, if we can solve that technical problem, I'm like, wow, amazing.

0:45:52.6 Interviewee: I think the key point here is at some point, the human has to be held liable for it, so that they have an incentive to only use AIs that satisfy this condition. Otherwise there's no reason for the... because, like you say, you can't put the AI in jail, so at some point you have to put the onus on humans. I think this is something that even Tesla's going to have to think about. At some point, I mean... I fully believe statistically they'll reduce the number of accidents, but accidents will happen, and sometimes the car will be the responsible party. At that point, you can't just throw up your hands and say no one was at fault, right?
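As a concrete reading of the disclaimer-plus-sign-off mechanism the interviewee proposes above, here is a minimal, hypothetical sketch in Python; every class, field, and value in it is invented for illustration and does not describe an existing system.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DecisionRecord:
    recommendation: str          # what the AI recommends, in plain language
    factors: List[str]           # decision factors consulted (public input, precedent, ethics review, ...)
    risks: List[str]             # known risks, stated so a non-expert can read them
    signed_off_by: Optional[str] = None  # the human who accepts responsibility; None until sign-off

    def sign_off(self, reviewer: str) -> None:
        # Record that a named human has read the summary and accepts liability.
        self.signed_off_by = reviewer

    def is_actionable(self) -> bool:
        # The recommendation may only be acted on after a human signs off.
        return self.signed_off_by is not None

# Toy usage: the AI fills in the record; nothing happens until a person signs.
record = DecisionRecord(
    recommendation="Proceed with the proposed treatment plan",
    factors=["survey of affected people", "prior precedents", "ethics consultation"],
    risks=["roughly 10% chance of a serious complication"],
)
assert not record.is_actionable()
record.sign_off("Dr. Example")
assert record.is_actionable()

The human signature, not the AI's output, is what carries liability here, which is the point the interviewee goes on to make with the Tesla and surgeon examples.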
So if Tesla is willing to deploy their cars for self-driving, they are going to have to start taking liability, and that's going to force them to confront some of these same issues and say, Did the AI give a reasonable estimation of if we take this road...? It has to be able to say, or like a surgical robot, it has to be able to say the same thing that doctors do, \"Listen, I'm going to perform this operation, it's the best chance you have, but there is a 10% chance that you're going to die. If you're comfortable with this, if you're comfortable signing off on this, I will do my best,\" and only in that scenario is the doctor allowed to be forgiven if the operation goes wrong.\n\n0:47:20.6 Vael: Yeah, so a part of my thinking I'm noticing is that... Um... So I think you're very interested in problems of misuse, which I'm also interested in, but I think I'm also interested in the problem of, like, I think that it will just be technically hard in order to incentivize an AI to not try to optimize on [hard to parse] but to like, be able to take... So currently, we're quite bad at taking human preferences and goals and values and putting those in a mathematical formulation that AIs can optimize, and I think that problem might just be really, really hard. So we might just have an AI that won't even give us anything reasonable, and I'm like, Oh, well, that seems like step one. And then there's also a bunch of governance and a bunch of incentives that need to be put in place in terms of holding people accountable, since humans will hopefully be the end user of these things.\n\n0:48:07.8 Interviewee: I'm actually far less worried about the technical side of this. I just finished reading this book about von Neumann, that's a little cute biography of him, and there's a part where he says, supposedly, that people who think mathematics is complicated only say that because they don't know how complicated life is. And I'm totally messing with the phrasing, but something like that. I actually think any technical problems in this area will be solved relatively easily compared to the problem of figuring out what human values we want to insert into these.\n\n0:48:46.4 Vael: Okay, so you think it's the taking human values and putting into AI, that technical problem will just get solved?\n\n0:48:54.8 Interviewee: If you know what values you want to put in, yeah.\n\n0:48:56.5 Vael: Okay, cool. Alright.\n\n0:49:00.1 Interviewee: I actually think that problem is the easy problem. I'm not saying it's easy in an absolute sense, I just think that's the easier problem.\n\n0:49:05.1 Vael: Got it. That feels like the alignment problem to me. So you think the alignment problem is just going to be pretty easy. This seems like a valid thing to think, so...\n\n0:49:12.9 Interviewee: I want to emphasize, I don't think it'll be easy in absolute terms, I just think it'll be the easier of the two problems.\n\n0:49:17.0 Vael: Okay, compared to governance and incentives. Yeah, that makes sense.\n\n0:49:21.2 Interviewee: That is... I just have this faith that any technical problems humans can solve, down the line-- like, eventually. It's the non-technical problems that get people all tangled up, because when there's no right answer, it really messes up scientists.\n\n0:49:43.1 Vael: Yeah, yeah. Yeah, the problem of trying to take human values and what we care about in all the different ways and put them into a mathematical formulation feels difficult to me, and I guess it is a technical problem. 
I guess I do sort of think of it as a technical problem, but yeah, that makes sense that you're just like, Look, we'll get that done eventually. And then we have governance, and I'm like, Oh, yes, governance is totally a mess. Yeah, that makes sense. And I think no one knows how, it's an unknown, unsolved problem--\n\n0:50:13.5 Interviewee: Let me put it this way, let me put it this way. If by human values, you mean if like...\n\n0:50:18.5 Vael: What humans intend, having an AI always doing what you say.\n\n0:50:22.8 Interviewee: Yeah, so imagine for any conceivable scenario an AI that would have to deal with. We could ask you, what would you do in this case? What I'm saying is that if for each of these questions, you are able to give a concrete answer to the answered question, such questions can be inserted into our AIs. Like if you are able to come up with clear answers for the questions for which you yourself would have a clear answer for, I think that set of moral constraints, let's say, I think that can be more or less inserted into AIs without huge problems.\n\n0:51:08.3 Vael: Even as the problems get-- even as I don't eventually have concrete answers, because the problems are like, Should we do X thing on creating nuclear reactor X and etcetera, and I lose control of the... not lose control, but I can't actually visualize the whole space or something, because I'm too...\n\n0:51:25.2 Interviewee: But at that point, what does alignment mean? Alignment usually means that what the AI does is the same as what a human would do. If there's no answer about what the human would do, what is the AI supposed to do?\n\n0:51:37.2 Vael: I think it's supposed to just keep in mind the things that I care about, so try to avoid side effects in all forms, try to avoid hurting people, except for when that makes sense or something-- oh, eugh. Anyway, like, doing a whole bunch of... and, like reporting truthfully, which is also something I want the AI to do. And things like this.\n\n0:51:56.6 Interviewee: I guess it's one of those, so it's a question of maybe a generalization, but I find it slightly hard to believe that an AI that would answer in the exact same way on all of the answered questions that you do have an answer for, when you go outside of that regime, suddenly the AI diverges strongly from what you would have answered had you been smarter. I just think that's a weird kind of discontinuity, right. So there's this huge set up-- let's suppose on all the questions for which you have an answer, the AI agrees with you. And then you take that a little bit outside of your realm of comprehension, and at that point, suddenly the AI decides, Oof, I'm freed from the constraints, I can answer whatever. I think that's a little bit implausible to me, assuming people did the job correctly.\n\n0:52:52.9 Vael: Yeah, I think assuming people do the job correctly is pretty important here. You could have an AI that is deceiving you and giving you the correct answers, as long as you could check, but yeah, assuming that's not true, assuming the AI is honestly reporting to you what exactly, like everything that you said at a lower level, I mean, everything that you can confirm, then that seems great. And you're like, Well, if you then extend that to regimes where humans can't understand it, then things are probably still okay. I think I maybe believe that. I think that maybe things are probably still okay.\n\n0:53:23.7 Interviewee: The validation set on whatever project you're working on would... 
The AI wouldn't know necessarily whether this was a question on which you just didn't have an opinion. It's the same for me when I think about... A lot of people are concerned that, in terms of basically over-fitting, could you... So one of the reasons I think we'll definitely get an AI that can answer physics questions pretty comprehensively is that I don't believe you can ever create a physics AI that can fake its way through all of the train set, sorry, on the validation set, and then suddenly do poorly on something that is outside of it. There's so much of physics that if you are able to fake your way through all the way to graduate school... I guess I'm thinking in terms of like, if you happen to make it all the way through graduate school, pass all the exams you needed to pass, turned out the papers you turn out. At that point to be like, Oh, actually, I just, I faked my way through all of physics, I don't really understand physics. You don't not understand physics, like in spite of yourself, because you...\n\n0:54:35.0 Vael: That seems true. I think one of the things I'm referencing is like a mesa-optimizer, inner optimizer, which is like an AI has some sort of goal. I don't know, maybe it's like trying to go out, go to red things, but the door is always... Okay, sorry, try again. So it says, it says it has some sort of goal, kind of like humans do, so... [sorry, I'm now] restarting [this sentence]. Evolution has some sort of goal, evolution is optimizing on something like inclusive genetic fitness. And it's like this optimizer that is pushing things. And then there's humans, which are the things that are like, it's being optimized or whatever. And humans should ideally have the same goal of inclusive genetic fitness, but we don't. We've got something that's sort of close, but not really. Like we have contraceptives, so we aren't really maximizing babies, we have different goals and things we're going for, we have like of achievement and all the values that we think are important and stuff. And so this is an example of something... that, humans are the ones who are trying to make the AI optimize for a thing, and the AIs are like, Sure, I guess I'll sort of go for the goal that you want me to, and I have a model of what you want me to do, but I actually internally in my heart have a different goal, and so I'll make my way through all of the test sets because I have a model of what you want me to do, but as soon as you release me, then I will do something different. So that's like a weird analogy that doesn't quite work, but...\n\n0:56:00.1 Interviewee: Yeah, especially for two reasons. One is that I think there is a real likelihood that we are going to become multi-planetary, which is pretty good as far as evolution is concerned, because not only does that mean we've dominated the planet, but we've started having other planets and spreading our genes everywhere. So in some sense, we've done exactly what evolution wants us to do, which is like reproduce our genes far and wide. I have a feeling that no matter how sophisticated and smart and industrialized in AI we get as a species, people aren't going to stop wanting to have babies. So like somehow, yes, we don't, like, kill everyone whenever we want to and just steal food and all that like animals would do, but kind of like...\n\n0:56:49.9 Vael: Yeah, I don't know that evolution wants that, but.\n\n0:56:51.8 Interviewee: We're still pretty aligned to the basic evolutionary goal. 
That's what you're starting from, saying that's the original goal, and we've deviated from the original goal. I think actually we're well within evolution's parameters as far as what are we supposed to do as good evolutionary creatures.\n\n0:57:07.5 Vael: Cool. Yeah, that makes sense to me. And then as a last point or something, so ideally, we want AI to do what humans would do if they were smarter, like you said, if we had more time to reflect and if we maybe had more self-control or something. I don't know that that would kind of come out of nowhere or something? I guess this is now in the ideal case, it sounds like we've mostly succeeded in aligning the AI with humans' goals. (Interviewee: \"So what's the question?\") Um, ideally, I think I want AI that will be my best self or something, that will not go and... fast food is the characteristic example, or not going to the gym or something, or do something like if I were living my best life, would it be able to make... And if I were smarter, would it be able to model what humans would be like as we continue expanding our circle of interest and have moral progress and etcetera. Will AI be able to do that?\n\n0:58:12.6 Interviewee: ...I think so, yes. That's the only consistent answer with what I've said so far. But I emphasize that you have to be very careful about what do you see as the best version of yourself. Because maybe to some degree, you want the best version of yourself to be the one that goes to the gym regularly, eats healthily. But I don't imagine the best version of yourself is someone who does that so religiously that you have no joy in your life. You don't want to just only eat healthy food all the time, only work out, fill up every moment of your day with nothing but keeping yourself healthy and occupied and productive. Like sometimes you just want to say, I don't want to do anything. I don't know where I'm going with this, but it still comes back to the question of... it's not a well-defined question, if you say, because the best version of myself is better than myself, I cannot conceive of what the best version of myself ought to be, and therefore it's kind of a vague worry what the AI would make the best version of yourself. To me, that's a weird question, if that makes sense. I think that's kind of like, I'm reducing a little bit, but there is a little bit of that, is that I want the best version of myself, but I can't judge what the best version of myself would be, because that best version of myself would be a smarter version of me, which I cannot comprehend. And in that scenario, how can I feel safe that the best version of myself is indeed the best version of myself. And I think, so I think the question has to be a little bit more well-defined then as currently presented.\n\n0:59:57.1 Vael: Cool, that makes sense. So what I'm taking from this is, I'm like, \"Okay, you think the problem of putting values into AI will not be unsolvable, it will kind of be solved in due time as we go along.\" You're more worried about the governance problem. I'm like, \"yeah, I guess? I don't--\" And you're like, \"Well, anything that we could just ask the human about, we'll like, we have like a hypothetical answer to what we want the AI to do,\" and I'm like, yeah, okay, that seems true. I guess we just have to make it so that all the AIs really do ask for human feedback very consistently or something. And there are some issues around there. 
But, I don't know, it does seem like hypothetically it should be possible, because you can just ask the human, in some sense. And I feel like I'm missing some arguments, but I'm like, Ooh, food to think about, this is great.

1:00:39.7 Interviewee: But I mean-- one last thing, I know, I have to go too, is-- One thing I'll say is the fact that we can even just discuss the question of... It actually does seem plausible that we can get our AIs to at least say what we would say. That to me is amazing, because I think four years ago, it would not have been clear what you even meant when you said, I want a language model-- a model AI to answer moral dilemmas in a way that humans would answer them. It would not have been clear what that meant, unless you just had a classifier where you inserted a video of a scenario and then you said like "track 1, track 2, classify." We have much, much more sophisticated tools at our disposal now; we can just essentially talk to it and say, no, that's the wrong answer, I want you to say this in this scenario. That already, in a span of two years, I think is remarkable. I feel like I'm coming off as an incredible optimist. I've got my concerns, but I do think so far, people have shown that any well-defined technical problem in AI can be approached, and they're not impossible. Yeah.

1:01:55.6 Vael: Got it. When would you work on the governance problems, since you seem to think that's more of a problem?

1:02:02.2 Interviewee: So... I trust the people that I work with to kind of hit the button when it really needs to be hit. Because right now, like I said, I think people don't take it seriously, partly because I don't think they really believe it needs to be taken seriously. These researchers are not saying, "Oh my God, we need to take this seriously, but I've got other stuff to do." They really are just like, I don't think we're at that stage where we need to take it seriously. I think the people that I work with are, for the most part, pretty well-intentioned people. There's some disagreement over this. [...] I know that within OpenAI there were some healthy discussions about what is the correct deployment model for things like GPT-3. All the discussions that I've had so far give me a lot of confidence that these people aren't stupid and they're not entirely negligent. I think they're possibly occasionally overconfident, but they're not arrogant, if that makes any sense. As in, they know they're fallible. They don't always put the right error bars on their decisions, but they know they're not infallible. Ultimately, that's what it's going to come down to. There's not like a system for this. It's going to be a few hundred to a thousand people doing the sensible thing. For the moment, it looks like that's going to happen. But... I agree that that isn't entirely confidence-inspiring. But I think that's going to be the way it goes.

Vael: Awesome. Well, thank you so much for talking to me, I found this very enjoyable.
And got some things to think about.

[closings]

Interview with zlzai, on 3/18/22

[Note: The interviewee's Zoom connection was very bad since they were calling in from an Uber, so there's a lot of missing content (and "I lost you..." parts of the conversation that have been removed for clarity).]

0:00:02.5 Vael: Cool. All right. So my first question is: can you tell me about what area of AI you work on in a few sentences?

0:00:09.2 Interviewee: Definitely. The way that I describe it is, first of all, I work on deep learning, let's get that out of the way. And then beyond that, I'm interested in understanding [inaudible - long]. If I had to describe it in a word, I would call it the science of deep learning. This is a topic that there are few other researchers interested in. We try to understand... We treat these as complex systems that have emergent properties due to that complexity, in the same way you might think of biology as an emergent property of physics. It's not something you might predict from first principles, but once it's there, there's a lot you can say about it that's pretty interesting. And then from there, I'm interested in trying to understand how these things learn and how you can make them learn more efficiently. You can think of that... A metaphor I like to use is pharmaceuticals. Once you understand a biological organism, you can design an intervention that takes advantage of the patterns that you've seen in order to get a certain behavior or a certain outcome.

0:01:00.8 Vael: Great. Cool. I missed maybe the second sentence there. I assume that was like large NLP systems or foundations or something?

0:01:08.1 Interviewee: Yes. So I work on deep learning, so I'm interested in the neural networks as they are in practice.

0:01:12.7 Vael: Got it. Cool. All right. And then my next question is, what are you most excited about in AI? And what are you most worried about? In other words, what are the biggest benefits or risks of AI?

0:01:23.4 Interviewee: So the thing that I'm most excited about is that this can do extraordinary things that we don't know how to program, but we know we can do. Let me try to elaborate on that a little bit. What I mean by that is there's a world of things that we don't know whether anything can accomplish, tasks that are sufficiently complicated or tasks where, given the input data, you may not know whether you can predict the output data. But neural networks are closing the gap between things that we know are possible to do, because we do them as humans all the time, but we don't know how to program, or there's no kind of discrete handwritten program that we can write easily that will describe this. And so I'm really excited about the fact that we can take data and actually do something that starts to resemble things that require human complexity in order to do that. So to me, that's the exciting part.
Self-driving cars are one example, though we're not that good at that. But even just handwritten digit recognition, like the MNIST task-- which is the most basic machine learning task in the world now that I'm in the field-- when I first took a machine learning class and saw that task, my mind was blown that you could just, with this tiny little data set, learn how to do handwritten digit recognition. That's pretty profound, as far as I'm concerned, and I've never quite lost my fascination for that.

0:02:35.9 Interviewee: On the risk side, I'm concerned about a few things. I'm not an AGI believer or an AGI-concerned person, so you can cross that one off. I think [inaudible] huge risks, I am always concerned about... [inaudible - very long]. [Vael saying what was missed] ...Let me start over. It was me trying to search for the right words, so you missed absolutely nothing.

0:03:47.2 Interviewee: Now that I found the right words, though, the two things that scare me most: number one, the fact that the systems fail in ways that are unintuitive to humans, or that we wouldn't be able to reason about intuitively. So the example I give is, you think an automobile driver is likely to drive more poorly at night or when they're tired or when they're impaired or what have you. It's really hard to have an intuition about when the Tesla is going to think of the bus as a cloud, and then slam into the back of it. And it's those rare failures, when we use a system a lot of times, that lead to really bad outcomes and lead to mistrust. At the end of the day, AI needs to be trustworthy and we need to, as humans, trust it; otherwise, it will never be useful, and we'll never accept it, or terrible things will happen. And so I think a lot about the notion of what it means for it to be trusted; where are the gaps between where we are right now and what it would take to be trustworthy.

0:04:41.3 Interviewee: The other big concern I have is that I think we don't take the problem of understanding how these systems work seriously. It used to be that when we designed a big, complicated piece of software-- think like Windows or PowerPoint or a browser or something like that-- we assumed it would take 1000 engineers to build it, and then maybe let's say 50 or 100 to test it. Deep learning, we have this really incredible situation where [inaudible - long] ... the way that I would put it, the system kind of [?verb] on its own based on the data. And we're lazy in computer science; we like to think that, "Oh, the system came to life on its own, so we should have this one hammer that we can bang against it and see whether it's good or not, or whether it's fair or not, or use the explainability bot to understand how to explain the system."

0:05:32.1 Interviewee: But in reality, I think the ratios are just going to be reversed. The software, if you're using... 100 researchers may be able to build a very exciting piece of software. You're going to need 1000 or 2000 engineers to tear it apart down to the studs and try to understand how each individual piece of one specific system works. So I think the entire literature around explainability and understanding of deep learning is completely wrong-headed, because people are looking for general solutions to a problem that doesn't have general solutions. It's like writing a program that will debug any program for you. There's just no such thing. It's not how this game works. So that scares me a lot, that we're thinking about this completely the wrong way.
And we do need some degree of explainability, we need some degree of understanding of how these systems work, but we're not going to get there from the way that we're currently thinking about this, and it's just going to take a lot more effort and resources than we're currently remotely considering giving to it.\n\n0:06:24.5 Vael: Interesting. And it sounded like... So how many engineers per researcher do we need, or per model do we need, do you think?\n\n0:06:33.8 Interviewee: Well, this is going to be complicated. The 1000 and 100 are just kind of... Every piece of software needs a different number of engineers, and three engineers can actually build a pretty big piece of software, but the amount of... I'll go out there on a limb and say it's going to be a 10 to 1 ratio.\n\n0:06:53.8 Interviewee: That's for every 10... Let's call it 10 testers. Whatever we call them, whether it's engineers or researchers or what have you, let's say that 10 people need to try to understand the system or 100 people need to understand the system. For every [inaudible] building the system. ...Feel free to tell me I'm wrong if [inaudible].\n\n0:07:16.5 Vael: No, I'm just trying to actually hear you, so it's 10 to... Or 10 testers or 100 testers to one capabilities person making it happen. Wow. ...Are you still here? (Interviewee: \"Yep, still here.\")\n\n0:07:34.0 Vael: Great, alright, so that's pretty extreme ratio. Do you think we'll get there? And what happens if we don't get there?\n\n0:07:42.3 Interviewee: I think we'll have to get there. At the end of the day, we evaluate these systems by their capabilities. If you have an autopilot system and it crashes the plane a lot, at the end of the day, you're not going to use it and it's not going to be allowed to fly. In order to get the kinds of capabilities we're hoping for, and to get the degree of understanding of failures that we typically expect in any high assurance system, we're going to need to have... If your self-driving car crashes enough times, someone will require you to tear that thing down to the studs to understand that, and you're going to have to do this regardless of whether you decide to do this proactively, or eventually your system won't be allowed on the road until you do. So I think there's going to be a huge amount of manual labor in understanding how these systems work, and I think that either through failures that lead to bans of certain uses of technology until it's better understood or until it's more resilient, or someone who's being proactive and actually wants to ensure their self-driving car works. Either way, you're going to end on this situation. No better way to put it.\n\n0:08:48.5 Vael: Okay, that's super fascinating. Okay, so you think that we are going to need... The way we're doing interpretability is not good right now, it needs to be specific to each system, and currently, we're probably doing much... Yeah, we're not even going in the right direction. I kind of expect that self-driving cars will be deployed without this huge amount of intervention on it, but you say that for most systems or something, we will just continue to have failures until it becomes obvious from society that we need people to do interpretability and explainability type of work in this correct way.\n\n0:09:22.0 Interviewee: I think so, I think that's a good summary. The only thing I'll add to this, and I spend a big chunk of my life doing AI policy, so I have to think about this stuff a lot. 
Is that the amount of work we put into the assurance is proportional to the risk that the system has if it fails. So my favorite example of this is-- I've done a lot of work in the past on facial recognition policy. Obviously a hot topic, facial bias issues, etc. So let me give you two examples. Google Photos has this clustering algorithm where it will basically find all the faces of grandma and cluster them so that you click on one picture of grandma, and you can find them all if you want to.\n\n0:09:58.4 Interviewee: Suppose there's racial bias in that algorithm, and suppose that instead the police department is using a facial recognition system to try to identify the perpetrator of a crime via a driver's license database. These are two applications that might both have the same technology underneath, they might both have the same biases. We're going to be much more worried about one than the other because one could lead people to lose their freedom, and the other may lead to people rightly feeling offended and people rightly feeling hurt, but not someone being thrown in jail potentially. So we have to handle the consequences of the systems in line with their risks. I'm a lot less concerned about interpretability of Google's facial recognition system than I am about a police's facial recognition system.\n\n0:10:47.8 Vael: I see. And since they need to be treated differently anyway, according to your view... Yeah, just do it-- do the interpretability in accordance with their importance. That makes sense.\n\n0:10:56.7 Interviewee: Exactly.\n\n0:10:57.7 Vael: Yeah. Cool. Alright, so my next question is about future AI. So putting on a science fiction forecasting hat, say we're 50 plus years into the future. So at least 50 years in the future, what does that future look like?\n\n0:11:11.4 Interviewee: I think it's honestly not going to look that different from the present, just a little bit more extreme. I don't think we'll be in an AGI world. I don't think we will have systems that rival human intelligence. I think we will be able to specify what we want out of systems in much higher level terms and actually get them to do this-- [I'm] thinking, like, a Roomba that is actually a useful intelligent Roomba. A Roomba that knows when to vacuum, knows when not to disturb you, that you can say, \"Hey, can you please do an extra touch-up on that area,\" and it'll go do it. I think the current wave of machine learning is great at pattern recognition; I don't think pattern recognition gets us to AGI. I think the technology is going to have to look fundamentally different, and I don't know that we're any closer now than we were 50 years ago in that respect, beyond that we know there are a lot of dead ends.\n\n0:12:05.0 Interviewee: That's not to say what we're doing right now isn't exceedingly useful. And [that] we're not going to push this to the nth degree, and [that] we won't have the New York Times of the future where you click on a headline and it will generate an article for you based on your interests, your knowledge level, your background on the topic and the amount of time you have to read it; I think that's an application that might be 10 or 15 years in the future, if not sooner than that. But I don't think we'll be in a place where we're worried about giving robots rights or things like that. I don't think that a huge amount of pattern recognition will get us to cognition, and I don't think the current track we're on will get us there. 
That doesn't bother me personally because I don't really care about that. I'm much more interested in what we can do for people now and in the near future, and not how we create intelligence. That's more of a scary question for me than an exciting question for me, but it's... [inaudible - short]

0:13:00.2 Interviewee: ...thinks that, heading to an AGI world, even if... And we may be in a world where we do have self-driving cars that actually work, even if it may not be here for 10 or 15 years, 'cause it's a hard problem. We may have actual smart digital assistants, but that's [inaudible - short] intelligent beings who are peers or [hard to parse] in that respect.

0:13:25.1 Vael: Interesting. I don't know if it will help for me to turn my video off, but I'm going to try it in the hope it gets a little bit less choppy. Great. Okay. So you're talking within the frame of AGI; like, lots of people presumably talk about AGI around you. What are people's opinions? What do you think of their opinions? Etcetera.

0:13:46.5 Interviewee: Oh, I think there are a lot of nut cases. I mean, there are also a lot of optimists and a lot of people who have read a lot of science fiction and want to bring that to life, which is great. And a lot of people who have spent too much time in San Francisco and are surrounded by peer groups who... Basically, there's a lot of monoculture in San Francisco. And I say this having just come back to San Francisco, and as someone who refuses to move [there] despite [...]. I have friends who, if I recall correctly, go and worship at the Church of the AGI or something like that, from what I understand, under the belief that eventually there will be artificial intelligences that are smarter and more powerful than us, and therefore they should get ahead of the game and start worshipping them now, since they'll be our overlords later.

0:14:32.4 Interviewee: I'm a little more concerned with the here and now, and I think that the great pattern recognizers we have today are many technological leaps away from that kind of general intelligence. Just because you're a nutcase doesn't mean you can't change the world. And there are plenty of examples of that, and it's really good to have that point of view constantly echoed in the community. But I don't worry about the end of the world because of AGI in the way that I think some of my friends do. I don't consider that to be the biggest existential risk that we have, far from it. And I don't think it's a risk that we should really be spending any time worrying about at the moment.

0:15:07.6 Vael: Got it. Is that because, like you said, there are other existential risks to prioritize, or you don't think this could be an existential risk?

0:15:15.1 Interviewee: I don't think this is... In the very, very long tail of ultra-low-probability risks to civilization, this is so far out in the tail that it's not worth spending any time on, independent of the fact that there are also much greater risks.

0:15:29.6 Interviewee: It's not just a matter of priority. It's also a matter of... it's not a good use of any resources.

0:15:34.8 Vael: Got it. That makes sense, yeah. Unfortunately, a lot of my questions are about AGI so you... [chuckle] Here we go.

0:15:42.1 Interviewee: No, I'm happy to give you a strong contrasting opinion to many that I'm sure you've heard. So come at it.

0:15:48.1 Vael: Lovely, lovely. Okay, cool.
So how I'm defining AGI here is like any sort of very general system that could, for example, replace all human jobs, current-day human jobs, whether or not we choose to or don't choose to do that. And the frame I usually take is like, 2012, deep learning revolution, here we are. We've only been doing AI for like 70 years or something, and here we are 10 years later, and we have got systems like GPT-3, which have some weirdly general capabilities. But regardless of how you get there, because I can imagine that we hit some ceiling on the current deep learning revolution and we need to have paradigm shifts-- my impression is generally that if we keep on pouring in the amount of human talent and have software-- algorithmic improvements at the rate we've seen, hardware improvements, etcetera, and just like, the driving human desire to continue to follow economic incentives to earn money and replace things and make life more convenient, which I think is what a lot of what ML is aimed at right now, that eventually we will get some sort of AGI system. I don't really know when. Do you think we will at some point get some very general system?\n\n0:16:55.8 Interviewee: I think if you take the time to infinity and humanity lasts in time going to infinity, yes, we will. I do think we have one example of a truly general intelligence system, namely human beings. And eventually we will probably get to the point where we could replicate that intelligence manually, if we have to literally photocopy somebody's brain and the transistors at some point, when we get technology advanced enough for that. Do I think that will happen this century? No. Do I think it will happen next century? Probably not. Do I think it might happen in the ones after that? Maybe.\n\n0:17:28.8 Interviewee: So the answer is yes, in the limit, but no in any kind of limit that you or I would think about or be able to conceptualize.\n\n0:17:35.7 Vael: Yeah, that makes sense. So you think there's probably going to be a bunch more paradigm shifts needed before we get there?\n\n0:17:42.8 Interviewee: I think at least one-- it's hard to know how many paradigm shifts because it's hard to know what they are, but I do not think this paradigm is the right one. I think this paradigm is amazing, and I'm really excited about the kinds of machines we can build. But you can be excited about the kinds of machines we can build and recognize the limits of those machines. In the same way that-- how are we going to get to AGI in 10 years if we've been pouring... You were talking about economic incentives. How much investment do you think has gone into self driving cars over the past, let's say, 10 years?\n\n0:18:14.2 Interviewee: Let's call it $100 billion plus or minus. That's probably the largest single investment in AI technology for an application anywhere. Ever, very likely. And where are we today on self driving cars? They're 90% of the way there, [inaudible - short] 10 times as difficult as the first 90%. And I don't think you'll be seeing fully autonomous self-driving cars in the general case on general roads in the next 10 or 15 years. So the thought that there would be an AGI at that point, let alone in 50 years is completely nonsensical to me, personally.\n\n0:18:56.0 Vael: Yeah, that makes sense. Especially since self-driving cars are, like, robotics and robotics is behind as well. 
But even GPT and stuff doesn't really have good grounding with anything that's happening in the world and how--\n\n0:19:09.2 Interviewee: GPT's capabilities are also wildly overstated. You can pull out a lot of good examples out of GPT if you really want to, and you can pull out a lot of crappy ones. But we're not going to just brute-force large language models to get our way to general intelligence. That's BS you'll only hear from someone who works at OpenAI who wants their equity to be worth more, quite frankly. The only people who say this are the ones who have an economic incentive to say this, and the people who follow the hype. Otherwise, I don't really know of anyone who thinks GPT is the road to AGI, especially given that we can't scale up any bigger. I mean, this is something, this is my whole push right now, is that the only way that Nvidia is going to come out with new GPUs next week, and they are going to come out with new GPUs next week that will be twice as fast as the ones that came out two years ago, is if they double the amount of power. It's not like we're doubling the amount of hardware we have available.\n\n0:20:02.4 Vael: Got it. Do you know how good optical or quantum computing will be? I know that those are in the pipeline.\n\n0:20:08.9 Interviewee: They're in the pipeline. Quantum's been in the pipeline for a long time, and we're up to what, four qubits? Cool. Again, this is one of those cases where we've been awfully close to nuclear fusion for a long time. We're going to have nuclear fusion in the limit. I'm 100% certain of that. When we have that, call me.\n\n0:20:32.7 Interviewee: It could be next year. I mean, we're very close to crossing that threshold, but we've been really close to crossing that threshold for several decades. And so, trying to call the year that it's going to happen within a five, 10 or 50-year time horizon, it's really, really tough when it comes to these technologies. And I'm in that same place about quantum. We are making progress on quantum and I'm really excited about it. I'm a little bit scared of it, but I'm excited about it, but that doesn't mean... Like, crossing that threshold is really difficult. The difference between 10 years and 50 years and 100 years away may be very small improvements in technology, but it may take an exceedingly long time for us to accomplish. Optical, same thing. So I'm not sitting here expecting that breakthroughs are just going to happen left and right. These breakthroughs often take a very long time; a lot of incremental advances and all sorts of other technical advances in other fields to make them happen. Material science especially in the case of quantum. And we may get there in five years, or we may get there in 50 or 100 years or longer, and it's hard to say.\n\n0:21:36.6 Vael: That makes sense. So what would convince you that you should start working on AI alignment? It sounds like there's probably going to be some breakthrough that would make you think that it's important, but we're not necessarily anywhere near that breakthrough right now. Do you have an idea of what that might be?\n\n0:21:56.9 Interviewee: Give me your personal definition of AI alignment.\n\n0:22:00.0 Vael: Yeah. Well, actually I want your definition first. [chuckle]\n\n0:22:03.3 Interviewee: So this is not a field that I follow that closely. The entire concept of alignment has really come to the fore in the past three or four months while I've been trying to [job-related task].
So I haven't been paying as much attention, so I'd actually appreciate your definition. I can tell you the kinds of people who I see talking about alignment and the kinds of papers that I've seen, the paper titles that I've seen go across my desk. But I couldn't give you a good definition even if [inaudible - short].\n\n0:22:33.9 Vael: Yeah. So one of the definitions I use, and I'll give you a problem setting I usually think about as well-- so, [the] definition is building models that represent and safely optimize [inaudible] specify human values. Alternatively, ensuring that AI behavior aligns with system designer intentions. And one of the examples I use for what an alignment problem would be is the idea that highly intelligent systems would fail to optimize exactly what their designers intended them to and instead do what we tell them to do. So the example of OpenAI, trying to... Have that boat win a race and then it getting caught on like some little-- but optimizing instead for a number of points and instead ending up in this little-- side, collecting-points area, instead of winning the race. So doing what the designers told it to do instead of what they intended it to do.\n\n0:23:26.2 Interviewee: So, I mean, this sounds like a... If you want my honest, frank opinion.\n\n0:23:34.8 Vael: Yeah.\n\n0:23:35.3 Interviewee: A BS-ey rebranding of a simple fact of life in computing for... Since the dawn of computing, the computer does what you tell it to do, not what you want it to do. (Vael: \"Yes.\")\n\n0:23:47.5 Interviewee: And so, I don't... I know a lot of people at OpenAI think they're very deep and profound for calling it AI alignment. [inaudible - very long].\n\n0:25:02.7 Interviewee: So picking up where I was, there are probably a lot of folks at OpenAI, and I know exactly who they are, who think they're very deep and profound who are wondering about this question, but this is kind of the obvious fundamental thing that every first-year programmer learns and is a question that everybody has been asking in every context whenever they develop a loss function for a neural network. Your loss function never reflects exactly the value that you want the system to carry, or even just the outcome. We optimize language models to be good at predicting the next word and yet they somehow also generate... We want them to have properties that go above and beyond that, we want them to be able to even just transfer the representations for downstream tasks.\n\n0:25:44.0 Interviewee: This is just a question of to what extent is the thing you're optimizing a system for going to actually align with the task that you want. I just, I don't see what's interesting or new about this question from a research perspective. I think of course it's an important one, but it's not a... [inaudible - long] Saying, oh, we should worry about computer security. Well, yes, but there's no security robot that fixes all computer security. It's a complex context-specific problem. Again, people say things that are aligned with their economic incentives and that is certainly true for my friends at OpenAI, but I don't see any profundity in this observation; that's just the nature of computing.\n\n0:26:38.1 Vael: Yeah. I think it has to be paired with the idea that if you have a very intelligent system that can plan ahead, that can model itself in the world, that it may have an incentive to preserve itself just as an agent pursuing any goal, because it doesn't want to decrease its chance of succeeding at its goal.
I think that has to be paired. Otherwise, it's not particularly special. But I do think creating an agent that in the far future, whenever AGI develops, that has an incentive to not be modified makes it much more dangerous than anything we've seen previously.\n\n0:27:16.2 Interviewee: I agree with that factor, when we get to a point where this becomes a problem. But I think you can see that the amount I care about this question is proportional to the amount that I think that AGI is or will be a concern in your lifetime, or your grandchildren's lifetime. And so I think there are a lot more fundamental basic questions. Like, we don't even understand why a neural network actually is able to recognize handwritten digits in the first place. And until we get these basic things down, we're never going to get to build the kind of systems that have these properties anyway.\n\n0:27:44.7 Interviewee: So, I would make the analogy of, let me see, I'm trying to think of the right metaphor... I don't know, for the folks who worry about how we're going to communicate with the alien civilizations we inevitably come into contact with-- we better build some space ships that can get us to space first before we start worrying. And then figure out whether there are alien civilizations out there before we start worrying about how we're going to communicate with them.\n\n0:28:05.6 Vael: Yeah, this sounds like a very coherent worldview, I'm like, yep, makes sense, is logical.\n\n0:28:11.7 Interviewee: Yeah, I have strong opinions, and researchers are always incentivized to have strong opinions; it's what moves science forward. We argue and we disagree with each other, and we're all right, and we're all wrong. But these are my strong opinions. And if you were to chat with any of my friends at OpenAI, you'd hear the opposite view and we could still go out for drinks and have a good time.\n\n0:28:32.5 Vael: Great. So I think this line of questioning was originally developed by me asking what would you see... What would you want to see in the world before you're like, \"Oh actually, people are right, this thing is coming earlier than I expected.\"\n\n0:28:47.6 Interviewee: That's a good question. Honestly, I would want to see a... I'm thinking of examples of a task, so like a Turing test of some sort that would fit here. Obviously, the Turing Test is not a very effective test given that GPT-3 probably passes it with flying colors, despite the fact that GPT-3 is a very good language model. What I would look for is long-lived machine learning systems that can learn continually. We don't even have learning continually down. We have reinforcement learning systems that eventually become effective agents at solving one particular task; they can't move on to a second task, we don't have any general kind of learning process. Even the reinforcement learning agents that we have have to be really, really, specially hand-calibrated for a specific task, to the point where even the specific random seed you use is an important hyperparameter in determining whether the agent will succeed or fail at a given task.\n\n0:29:50.0 Interviewee: If we're in that world, we're very far away from an agent that can learn many tasks or has a generalized learning process. This is I guess why people are excited about meta-learning in general. Again, I think we shouldn't worry about light speed travelling until we can get off the planet, but that's a whole different... You know where I stand on that. So... 
[inaudible - short] and improve themselves over time; reinforcement learning is kind of a shadow of what we would expect of a real system. A smart home assistant, where you can teach it new tasks by explaining new tasks to it, and it will be able to figure out how to accomplish them. We don't have any kind of abstract or general learning capabilities right now. And we talk a very big game about things like AlphaFold or what have you, or any of the Alpha stuff coming out of DeepMind. But these are requiring lots and lots and lots of engineers to get a system to work on one very specific setting in ways that don't transfer to any other setting or any other task. There are principles we... [inaudible - short] being really hard trying a bunch of stuff. And that's a... [inaudible - long] real general learning process that will keep me up at night.\n\n0:31:03.5 Vael: Got it. Okay. Interesting. So do you think you and your colleagues probably have the same information but are just reacting to it differently? Other people are also not seeing these very general systems or maybe-- yeah, presumably, because you have access to the same information, but you're just interpreting it differently based on the incentives that your company or whatever is in?\n\n0:31:29.9 Interviewee: I wouldn't always blame the incentives and the all, again, throwing some shade at San Francisco in particular. When you spend all of your waking hours and all of your sleeping hours around exclusively people who are working on the same [inaudible], same field. And when you work as they do not only in the tech industry, but especially in the very tight-knit AI community, where we all go to the same birthday parties when I'm out there, and everybody goes to the same spas, and runs into each other on the street, despite the fact that San Francisco is a big city. When you live in that kind of monoculture, quite frankly, you'll lose touch with reality, to some extent. And I think a lot of people start believing that... [inaudible - very long]\n\n0:32:31.9 Vael: Okay, cool. My question was, although I missed that last bit, but my question was, how do people stay in touch with reality, how do they...\n\n0:32:43.7 Interviewee: I think you have to talk to people who are doing things other than building AI systems. You have to actually maybe interact with someone who works in healthcare or someone who works in finance, or someone who works in any other field besides AI or besides machine learning. And you have to remember that there are real human beings out there. I think this is just a general San Francisco problem. You see people, you see 20-somethings who are wealthy, producing systems that are convenient for them. We see this all the time, startups coming out of the Bay Area.\n\n0:33:24.1 Interviewee: My experience in Boston and New York is it's a very different startup culture of healthtech, fintech, edtech, what have you, things that are quite frankly real and useful. So when I hang out with... The times I've been working at [large tech company] in California, for example, all of my friends are worried that the sky is falling for some reason. Maybe we're about to enter an AI winter or we're about to create AGI, it's usually one or the other. Depending on who you talk to and what mood they're in, and what's going on in their organization at that different point in time.
But people are just so caught up in the inside baseball, they've lost the bigger context for everything we've accomplished, but also how far away we are from some of the things people talk about. And you gotta actually taste the real world from time to time, and remember that you don't just live in this little bubble of lots of neural networks and lots of people with PhDs working on neural networks.\n\n0:34:19.7 Vael: And interacting with other people and seeing the wider world gives the context to believe more... mainline views? Because it sounds like people end up in either direction, you said they believe that everything is about AI, everything's going to go badly: everything is going to go badly and too fast, or everything's going badly and too slow. And there's some regularity effect that happens if you're hanging out with other people?\n\n0:34:50.4 Interviewee: It's almost the opposite of... [inaudible - short] in that respect. I think it's partially because people will say things like, \"Well, there hasn't been a big advance or there hasn't been a big breakthrough in 18 months.\" And my reaction is, it's been a... Yes, the transformer paper came out 18 months ago. Yeah, let's be patient, and the field does not always continually accelerate. No scientific field continually accelerates. We have periods of big progress and then periods of stagnation and paradigm shift. This is the structure of scientific revolution. If you want to... To the extent to which you take that view of the world seriously, that is, in my impression, simply how science moves: in fits and starts, not at a steady pace. And in computers we're used to exponentials. That doesn't mean that these systems are going to get exponentially better over time. It may mean we had 10 really good years of progress based on a bunch of different factors all coming together, between big data being accessible, lots of computing accessible, and some improvements in deep learning, that all came together really nicely to give us a big burst of progress.\n\n0:36:00.1 Interviewee: Maybe things will slow down a little bit. Or maybe there will be a new architecture in five years that will be another big burst, the way that convolutional neural networks were around the AlexNet breakthrough and then the way that transformers were. But this happens in fits and starts. But around [large tech company], partially because of [large tech company]'s promotion process and partially because of the very internally competitive atmosphere that that fosters, people are always dour and feel like the sky is falling. And at OpenAI, the incentive is to continue the hype and build really big systems. It's partially in the culture, partially in the leadership, partially in the brand of the place and what it takes to make your stock go up in value. So I think people really are shaped by their environments in that respect, and you gotta get out and talk to other kinds of people and get other perspectives, and I don't think there's that much you can do when you're in a big organization like [large tech company], and there's not a lot you can do when you're in San Francisco and everybody you talk to... It's not \"what you do\".
It's \"which tech company do you work for\" or more likely \"which AI project do you work on\".\n\n0:37:04.4 Vael: I see, so this is probably, like you said, why you haven't moved and also presumably why you have friends who are not AI researchers.\n\n0:37:12.4 Interviewee: Yeah, my office right now is at [university] [non-science field]. I just sit by a bunch of [non-science field] professors. I've worked there at times in the past. I live in [city] right now, and they were kind enough to offer me a place to sit and work for a little bit. So I'm hanging out with a bunch of [non-science field] faculty all day. It's a very different perspective on the world. They're worried about very mundane things compared to AGI coming and killing us all, or a quantum computing breakthrough that leads to the end of civilization when we train quantum neural networks or something like that. They're much more worried about bail reform, or very basic mundane day-to-day problems that are actually affecting people around them. It's hard to get caught up in the hype.\n\n0:37:56.5 Vael: Yeah, and it seems like it's important to make sure that people are working on things that actually affect more than just their local environment. I think you mentioned something like that earlier, right?\n\n0:38:05.9 Interviewee: Exactly. And there's also a little bit of... as computer scientists, we love to calculate things, we love to measure things. Effective Altruism is one great example of this. We love to say, \"What is the way that I can put my dollar the furthest along?\" And I think sometimes people lose touch with the fact that there are humans out there. There are people right now, you ask questions like, well, either I can help the humans today or I can put all my resources toward saving the entirety of civilization. And civilization and the future is obviously-- there are many more humans there than there are here today, so I'll worry about, on the 0.1% chance that we develop AGI and it comes to threaten us all, I'll worry about that problem, because in expectation, that will save more lives than what have you. And I think--\n\n0:38:51.5 Vael: --Yeah, and this is the wrong way to think about things?\n\n0:38:56.5 Interviewee: I don't think this is necessarily the wrong way to think about things, but I think it's a little bit... Sometimes, it's more important to focus on what's right in front of you and what's tangible and what's here and now. Or the other way I'll put it is, MIT is pathologically bad as an institution at dealing with topics that can't be measured for exactly this reason. It's really hard to talk to people at MIT about values, unless a value can be measured or optimized.\n\n0:39:25.0 Interviewee: And this is just, for a lot of folks who get empirical education and who are training systems all day, for whom this is a problem, they are just as much... The people are just as much subject to this problem as the systems they build. So, if we want to talk about alignment, it's an issue not just of the systems, but of the people building the systems.\n\n0:39:43.5 Vael: Alright, well, we are at time. Thank you so much, this is a very different opinion than the ones I've received so far, and yeah, very well presented. Awesome.\n\n0:39:52.6 Interviewee: Thank you, that's what I'm here for. You know where to find me if you need to chat more. I'm very exited to see what the outcome of the project is.\n\n0:39:58.4 Vael: Great. Alright. 
Well, thank you so much.\n", "url": "n/a", "docx_name": "individuallyselected_zlzai.docx", "id": "058cfe3c02b5c7040755a16b9b27ada7"} {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": [], "date_published": "n/a", "text": "Interview with 7ujun, on 3/18/22\n\n0:00:03.4 Vael: Alright. So jumping right in, my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n0:00:11.1 Interviewee: [Interviewee describes working on natural language processing research]\n\n0:01:20.4 Vael: Indeed. Thanks. And then so my next question is, what are you most excited about in AI? And what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n0:01:36.4 Interviewee: The biggest risks are that there is a lot of people who don't really... Who... The biggest risk in AI is that it's a field with a lot of money and attention and social power right now, and there are a lot of people who have positions of power who... don't seriously consider what they do and the impacts of what they do. And AI models are already being used to violate people's human rights in the United States and in other countries, to commit crimes, and that's bad. [chuckle] Yeah, there's... One of the worst applications is that there has been a revival in phrenology recently, so there are a lot of police departments that have gotten really into the idea that they can use AI analysis of video cameras to determine who is *going* to commit crimes. And this, shockingly, results in over-surveillance of minority populations and violation of human rights left and right, and it's a huge clusterfuck and the police don't care.\n\n[pause]\n\n0:03:10.2 Vael: Awesome. Well, not awesome, but. So that question was, what are you most excited about in AI and what are you most worried about; biggest benefits and risks?\n\n0:03:19.8 Interviewee: Gotcha. What am I most excited about? I'm excited about the opportunity to interact with computers via natural language. So one of the really interesting things about some recent research is that we've been able to move away from traditional coding interfaces for certain tasks due to the way we've been able to automate things. Probably the most prominent example of this is that there is a burgeoning online AI-generated art community, where they take pre-trained models and they write English sentences and they provide the... What the model does is it takes an English sentence as input and draws a picture, and it's shockingly good and has an understanding of styles. If you want it to be... If you say in the style of Van Gogh, or high contrast, or low-poly render. You can induce visual effects by using language like that, and I think that's phenomenally cool, and it's gotten a lot of people... There's a lot of people who've gotten into using this kind of technology who otherwise wouldn't have... [who it] really wouldn't have been accessible to because of their lack of coding knowledge and understanding of AI. They couldn't have developed the algorithms that run on the backend for this on their own. Recently, yesterday I saw another blog post about how they were able to develop simple video games using GPT-3. It just wrote the code for them.
I think that the ability to write a text description of something that you're interested in, which is a medium that everyone can relate to and interact with, or that most people can relate to and interact with far more than regular programming, for example, is really powerful and really awesome.\n\n0:05:30.8 Vael: Yeah, I see a lot of themes of accessibility in all of these risks and benefits and work. I thought you were going to bring up Codex but yes, art generation. It's very cool.\n\n0:05:41.7 Interviewee: Oh yes.\n\n0:05:43.8 Vael: Yeah. So, thinking about a future AI, putting on a science fiction forecasting hat, say we're 50 plus years into the future. So at least 50 years in the future, what does that future look like?\n\n0:05:56.3 Interviewee: I have absolutely no idea, and anyone who says otherwise is wrong.\n\n0:06:01.3 Vael: Okay. Do you think AI will be important in it, or probably not?\n\n[pause]\n\n0:06:16.6 Interviewee: I think that's more of a sociological question than it is a technical question. The class of problems and the class of algorithms that are considered AI has changed dramatically over the past 50 years, and entire books have been written about this topic. At a basic level, my hesitancy is that I don't know what people will consider AI in 50 years. There's a very real possibility in my mind that GPT-3 will no longer be considered an AI.\n\n0:06:48.5 Vael: What will be considered?\n\n0:06:53.3 Interviewee: A text generation algorithm? A good example of this is simple game-playing agents, so you can write an algorithm that can play Tic-Tac-Toe perfectly or can play Connect Four or Checkers really well. Like, will beat any human. And a lot of people don't call those AIs anymore because they don't... 'Cause they're search algorithm-based. They apply a lot of computational power to look through a space of possible events, and they find the best event. And they don't really... The argument is that they don't reason, or they don't know anything about strategy. And this is often to contrast it with more recent AIs for playing games like the AlphaGo, AlphaZero models that DeepMind has produced, where top-level chess players certainly get beat by these algorithms just like they get beat by... have been beaten by algorithms for 20 years, but for kind of the first time people are able to study and interpret these algorithms and learn and improve their play as a human. Which is really cool. But that kind of dichotomy is often used to dismiss or remove the label of \"AI\" from prior work and stuff that had been considered AI at the time. And I could certainly see that happening with GPT-3, for example, because it's really, really terrible at reasoning. And if in 50 years we have chatbots that can answer knowledge-based questions the way, say, a sixth grader could, and reason about basic word problems and stuff, pass some kind of reasoning examination, then I could easily see people no longer considering GPT-3 an AI, because it's not intelligent, it's just babbling and making up words.\n\n0:08:54.4 Vael: Right. Cool, I'm now going to go on a spiel. So people talk about the promise of AI, by which they mean a bunch of different things, but the thing that I'm most thinking about is a very general system with the capabilities to replace all current-day human jobs, so like a CEO AI or a scientist AI, for example. Whether or not we choose to replace human jobs is a different question, but I usually think about this in the frame of...
in 2012 we had AlexNet, deep learning revolution. You know, here we are 10 years later, we have GPT-3, like we're saying, which has some weirdly emergent capabilities: new language translation, and coding and some math and stuff, but not very well. And then we have a bunch of investment poured into this right now, so lots of young people, lots of money, lots of compute, lots of... and if we have algorithmic improvements at the same rate, and hardware improvements at the same rate, like optical or quantum, then maybe we reach very general systems or maybe we hit a ceiling and need to do a paradigm shift. But my general question is, regardless of how we get there, do you think we'll ever have these very general AI systems, like a CEO or a scientist AI? And if so, when?\n\n[pause]\n\n0:10:07.5 Vael: ...Oh, you're muted. I think. Oh, no. Oh, no. Can't hear you. I don't think anything's changed on my end. Okay.\n\n0:10:25.2 Interviewee: Hello?\n\n0:10:26.2 Vael: Yeah. Cool.\n\n0:10:27.2 Interviewee: Okay. I think my headphones may have done something wacky. I don't know, I would be extremely surprised if the answer... like I know that there are people who say that the answer is less than 10 years, and I think that's absurd. I would be surprised if the answer is less than 50 years, and I don't feel particularly confident that that will ever happen.\n\n0:10:47.1 Vael: Okay. So it may or may not happen. Regardless, it's going to be longer than 50 years. Is that right?\n\n0:10:54.1 Interviewee: Hello?\n\n0:10:55.2 Vael: Hello, hello, hello?\n\n0:11:00.7 Interviewee: Yes.\n\n0:11:00.8 Vael: Okay, cool. So my question was like, all right, you don't know whether or not it will happen. Regardless, it will take longer than 50 years. Is that a summary?\n\n0:11:07.3 Interviewee: Mm-hmm.\n\n0:11:09.0 Vael: Yeah. Okay, cool. So one of my questions is like, why wouldn't it eventually happen? I kind of like believe in the power of human ingenuity, and people following economic incentives such that... These things are just really quite useful, or systems that can do human tasks are generally quite useful, and so I sort of think we'll get there eventually, unless we have a catastrophe in some way. What do you think about that?\n\n[pause]\n\n0:11:45.0 Interviewee: It's going to be extremely difficult to develop something that is sufficiently reliable and has an understanding of the world that is sufficiently grounded in the actual world without doing some kind of mimicking of human experiential learning. So I'm thinking here of reinforcement learning in robots that actually move around the world.\n\n0:12:13.0 Vael: Yeah.\n\n0:12:13.9 Interviewee: I think without something like that, it's going to be extremely difficult to tether the knowledge and the symbolic manipulation power that the AIs have to the actual contents of the world.\n\n0:12:29.5 Vael: Yep.\n\n0:12:29.9 Interviewee: And there are a lot of extremely, extremely difficult challenges in making that happen. Right now, cutting-edge RL techniques are many orders of magnitude... Require many orders of magnitude too much data to really train in this fashion. RL is most successful when it's being used in like a chess context, where you're playing against yourself, and you can do this in parallel, and that you can... When you can do this over and over and over again. And if you think about an actual robot crossing the street, if an attempt takes 10 seconds, and I think especially early in the learning process, that's an unreasonably small amount of time to estimate.
But if an attempt takes 10 seconds and... Let me pull out the calculator for a second.\n\n0:13:28.1 Interviewee: And you need one million attempts... then that would take you... about a third of a year to do. And I think that both of those numbers are wrong. And I think the number of attempts is orders of magnitude too small. There's very, very little that we can learn via reinforcement learning in a mere one million attempts. And this is just one task. If you want something that can actually move around and interact with the world, even if you're using these highly optimistic, currently impractical estimates, you can't take four or five months to learn how to cross the street. If that's your paradigm, you're never going to be able to build-- you're never going to get to stuff like managing a company. [chuckle]\n\n0:14:38.0 Vael: Yeah. That makes sense. Yeah, I think-- I think... this makes sense to me, and I'm like, \"Wow, our current-state systems are really not very good.\" But also, I think I often view this from a lens of pretty far back. So I'm like, 10,000 years ago, humans were going around and the world was basically the same from one generation to the next, and you could expect things to be similar. And now we've had the agricultural revolution and industrial revolution, in the past couple of hundred years, we have done... We're kind of on an exponential curve in terms of GDP, for example, and I would expect that... And we've only been working on AI for, I don't know, less than 100 years. And we have... We now have something like GPT-3, which sounds sort of reasonable, if you're just looking at it, and of course it's not very... It's not, like, grounded, which is a problem.\n\n0:15:28.3 Vael: But I sort of just expect that if we spend another... I don't know, you could spend hundreds of years working on this thing, and if it continues to be economically incentivized... This is kind of how human progress works. I just kind of expect us to advance up the tech tree to solve the software improvements, to solve the hardware improvements. Or new paradigms maybe. Even at the worst case, I guess we advance enough in neuroscience and scanning technologies to just scan human brains and make embodied agents that way or something. I just expect us to get to some capabilities like this eventually.\n\n0:16:03.5 Interviewee: In my mind, really the fact that there's only so fast we can move around in the real world is a huge constraint. Even if you can learn extremely complicated and abstract things embedded in the real world as an actual robot, take my crossing street example, even if you could... doing an attempt at... So even if you could learn pretty much any task in a thousand iterations, some tasks take a very long time to develop. Humans don't learn to be CEOs of companies very quickly, and it doesn't seem like it's very shortcut-able to me. I also don't think CEOs of companies is perhaps the best example, but let's say...\n\n0:16:58.1 Interviewee: Let's say you wanna train a robot to operate a McDonald's. That's a very large amount of destroyed meat that you need to buy, it's a very large amount of time and materials to even set up the apparatus in which you could actually train a robot to perform that task. And you're talking about economic incentives, where is the economic incentive to burning a million patties to get to the point where your robot can flip one over successfully?
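[Editor's note: a rough back-of-envelope check of the figures quoted above, written as a minimal Python sketch. The 10-seconds-per-attempt and one-million-attempt numbers are the interviewee's own illustrative guesses (which they immediately call wrong in both directions), and the assumption that attempts run one at a time on a single physical robot is mine.

# Back-of-envelope check: one million real-world attempts at 10 seconds
# each, run sequentially on a single robot.
attempts = 1_000_000          # interviewee's illustrative figure
seconds_per_attempt = 10      # interviewee's illustrative figure
total_seconds = attempts * seconds_per_attempt
years = total_seconds / (365 * 24 * 60 * 60)
print(f"{years:.2f} years")   # ~0.32 years, i.e. roughly a third of a year

The same arithmetic also shows why the chess-style self-play setting the interviewee contrasts this with is so different: wall-clock time divides by however many copies of the environment you can run in parallel, a lever that is not available to a physical robot crossing a real street.]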
When we're talking about moving around and interacting in the real world, those interactions have costs that are financial in addition to being time-consuming. If we want to train an AI to... Via a reinforcement learning technique, which is certainly a caveat that I have to add to a lot of what I'm saying. But if we wanna train a robot to drive a car via a reinforcement learning-like technique, at some point you need to put it behind the wheel of a car and let it drive 100, 1,000 cars. And you're going to destroy a lot of cars doing that, and you're probably going to kill people. So that's a very large disincentivizing cost.\n\n0:18:30.1 Vael: Okay. Alright. So the idea is like... if we're doing robotics, then we need to... and the training paradigm is not, like, humans where you can kind of sit them down, and... Humans don't actually crash cars, usually... I mean, sometimes. Teenage humans crash cars sometimes. But in their training process, they don't usually require that many trials to learn, and they can do so kind of quickly. So I'm like, I don't know. Do we expect algorithms at some point to require much less training data than current ones do? Because current ones require a huge amount of training data, but I kind of imagine we'll get more efficient systems as... More efficient per data as we go along.\n\n0:19:24.9 Interviewee: Are you saying that you think that you can sit down and explain to someone how to drive a car and they can drive it without crashing?\n\n0:19:30.9 Vael: I think, that... We have... I think that if we take a human, and I'm like, \"All right, human, I'm going to... I want you to learn how to drive this car. And I'm going to sit next to you. And I'm going to tell you what to do and what not to do, and you're going to drive it.\" I think they can, indeed, after practicing some period of time, which for humans, it's like hours. It's on the order of tens of hours, then they can basically sit there and not crash a car. And I kind of expect similar paradigms eventually for AI systems.\n\n0:20:04.4 Interviewee: That seems extremely non-scalable.\n\n0:20:11.8 Vael: Uh... Okay. You're like, look, if it takes tens of hours to train every AI system?\n\n0:20:17.6 Interviewee: No, I'm thinking mostly about the human sitting next to them giving them constant feedback actually.\n\n0:20:28.5 Vael: But the nice thing about AI is you can copy them as soon as one person spends that many hours. You can just take that, take the thing that's-- like, its new neural net, pass it on to the next one.\n\n[pause]\n\n0:20:48.0 Interviewee: ...Maybe.\n\n0:20:50.1 Vael: And I don't think this has to happen anytime soon. But I do think eventually given that... I don't know, I can't imagine humans being like, \"All right, cool. We're efficient enough. Let's just stop now. We've got like GPT-3. Seems good. Or GPT-5, let's just stop here.\"\n\n0:21:08.4 Interviewee: So nobody has ever taken two different robots, trained one of them in the real world to perform a task, and then transferred the algorithm over and allowed the other one to perform the same task as successfully, as far as I'm aware.\n\n0:21:23.6 Vael: Yep. I totally believe today's systems are not very good.\n\n0:21:31.2 Interviewee: It is, I think, I think that... Anything we can really say about this is inherently extremely speculative. I'm certainly not saying it could never happen. I'm just... Sorry. I'm certainly not saying it can't happen. I'm just saying it could never happen. There we go.\n\n0:21:45.0 Vael: Okay. All right. Okay.
That makes sense. How likely do you think it is that we'll get very capable systems sometime ever in the future?\n\n0:21:53.5 Interviewee: I have no idea.\n\n0:21:56.5 Vael: ...Well, you have some idea because you know that it... Well, okay. You said that it can't... You're like, it's higher than zero.\n\n0:22:04.9 Interviewee: Yes.\n\n0:22:05.9 Vael: Yes. And you don't sound like you think it definitely will happen, so it's less than 100.\n\n0:22:12.6 Interviewee: Yes.\n\n0:22:13.8 Vael: Okay. And it's anywhere in that scale? I mean, slightly higher than zero and slightly less than 100.\n\n0:22:22.5 Interviewee: That sounds like an accurate description of my current level of uncertainty.\n\n0:22:26.8 Vael: Interesting. Man. Is it... Hard-- I mean like how-- You do have predictions of the future though, for the near future, presumably, and then it just like tapers off?\n\n0:22:36.7 Interviewee: Mm-hmm.\n\n0:22:38.2 Vael: Okay. And anything... And you say definitely not 10 years, but after. And then like 50 years. So like 50 years out, you're... It starts going from approximately zero to approximately 100?\n\n0:22:56.6 Interviewee: Um... I think it is unlikely to happen in the next 50 years.\n\n0:23:00.0 Vael: Okay.\n\n0:23:05.7 Interviewee: I would assign a less than 25% probability to that. But I don't think I can deduce anything about my expectation at 100 years based on that information. Other than it... yeah.\n\n0:23:21.6 Vael: Great. Thanks. All right. I think that's good enough for me to move on to my next question.\n\n0:23:28.6 Vael: So my next question is thinking about these highly intelligent systems in general, which we're positing maybe will happen sometime. And so say we have this sort of CEO AI through, I don't know, maybe hundreds of years in the future or whatever. I'm like, \"Alright, CEO AI. I want you to maximize profits and try not to run out of money and try not to exploit people and try to avoid side effects.\" And currently, obviously this would be very technically challenging for many reasons. But one of the reasons is that we currently don't have a good way of taking human values and preferences and goals and stuff, and putting them in mathematical formulations that AI can optimize over. And I worry that this actually will continue to be a problem in the future as well. Maybe even after we solve the technical problem of trying to get an AI that is at all capable. So what do you think of the argument, \"highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\"\n\n0:24:26.0 Interviewee: I mean, I think that the statement that highly intelligent systems will fail to optimize what their designers intend them to is a slam dunk. Both human children and current AIs do not do that, so I don't see any particular reason to think we will-- that something that's like, in some sense in between those, we'll have a whole lot more success with.\n\n0:24:50.4 Vael: Interesting. Okay, cool.\n\n0:24:58.5 Interviewee: Yeah. Did you turn out exactly the way your parents wanted you to? [laughter] I didn't. I think the overwhelming majority of people don't, and that's not a flaw on their part. But... yeah.\n\n0:25:15.2 Vael: All right. Yep. Yeah, certainly there's some alignment problems with parents and children. Within human-humans even. And then I expect-- I kind of expect the human-AI one to be even worse?
My intuition is that if you're having an AI that's optimizing over reality in some sense, that it's going to end up in alien parts of the space-- alien to humans, because it's just optimizing over a really large space. Whereas humans trying to align humans have at least the same kind of evolutionary prior on each other. I don't know. Do you also share that?\n\n0:25:47.7 Interviewee: I'm not sure. I think that you're going to have to get a lot of implicit alignment to end up in a place where you're able to train these things to be so intelligent and competent in the first place.\n\n0:26:07.6 Vael: That makes sense to me. Kind of like--\n\n0:26:09.8 Interviewee: Yeah. What percentage of the way that gets you there is a very important and totally unknown question. But I don't think that the value system of one of these systems is going to be particularly comparable to like, model-less RL, where they're trying to optimize over everything.\n\n0:26:33.4 Vael: Could you break that one down for me?\n\n0:26:38.2 Interviewee: In what way?\n\n0:26:41.6 Vael: I didn't... I don't quite understand the statement. So the value system will not be the same that it is in model-less RL. I don't have a super good idea of what model-less RL is and how that compares to human systems or human-machine--\n\n0:26:56.1 Interviewee: Okay. So model-less RL is a reinforcement learning paradigm in which you are basically trying to learn everything from the ground up, via pure interaction.\n\n0:27:06.2 Vael: Okay.\n\n0:27:07.1 Interviewee: So if you're thinking of a game-playing agent, this is typically an agent that you're not even programming with the rules. It learns what the rules are because it walks into a wall and finds that it can't walk further in that direction. That's the example in my head of something that's optimizing over all possible outcomes currently. ...Sorry, I lost the train of the question.\n\n0:27:39.7 Vael: I was like: how does that relate to human value systems?\n\n0:27:46.4 Interviewee: I think that the work that we will have to do to train something to move around and interact in the world and perform these highly subjective and highly complex tasks that require close grounding in the facts of the world will implicitly narrow down the search space. Significantly.\n\n0:28:10.1 Vael: Okay. Yeah--\n\n0:28:11.6 Interviewee: I do think that there's a... Yeah.\n\n[pause]\n\n0:28:25.8 Interviewee: Yeah.\n\n0:28:26.3 Vael: Yeah. Yeah, I often think of this in terms of like, you know how the recommender systems are pretty close to what humans want, but they're also maybe addictive and kind of bad and optimizing for something a little bit different than human fulfillment or something. People weren't trying to maximize them for human fulfillment per se. But yeah, I like-- like that sort-of off alignment is often something I think about. Alright.\n\nSo this next question is back to the CEO AI, so imagine that the CEO AI is good at multi-step planning and it has a model of itself in the world, so it's modeling other people modeling it, 'cause that seems pretty important in order for it to do anything. And it's making its plans for the future, and it notices that some of its plans fail because the humans shut it down. 
And it's built into this AI that it needs human approval for stuff 'cause it seems like a basic safety mechanism, and the humans are asking for a one-page memo to describe its decision.\n\n0:29:21.4 Vael: So it writes this one-page memo, and it leaves out some information because that would reduce the likelihood of the human shutting it down, which would increase the likelihood of it being able to achieve the goal, which is like, profit plus the other constraints that I mentioned. So in this case, we're not building in self-preservation to the AI itself, it's just, self-preservation is arising as a function... [as an] instrumental incentive of an agent trying to optimize any sort of goal. So what do you think of the argument, \"highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous?\"\n\n0:30:00.4 Interviewee: It seems likely correct.\n\n0:30:02.3 Vael: Interesting. Okay. [chuckle] ...I'm not excited about that answer, 'cause other instrumental incentives are acquiring resources and power and influence, and then also not wanting... Having a system that's optimizing against humans seems like a very bad idea in general, which makes me worried about the future of AI. If the thing that we're going to build is eventually by default, maybe not going to want to be corrected by humans if we get the optimization function wrong the first time.\n\n0:30:32.5 Interviewee: Yeah.\n\n0:30:36.2 Vael: [laughter] Okay. Have you thought about this one before?\n\n0:30:38.6 Interviewee: Yes.\n\n0:30:39.4 Vael: Yeah. Cool. Have you heard of AI alignment?\n\n0:30:42.2 Interviewee: Yes.\n\n0:30:43.2 Vael: Yeah. And AI safety and all the rest of it?\n\n0:30:45.6 Interviewee: Mm-hmm.\n\n0:30:46.0 Vael: Yeah. How do you orient towards it?\n\n0:30:49.8 Interviewee: I think that most people who work in it are silly. And don't take the right thing seriously.\n\n0:30:57.9 Vael: Mm. What should they take seriously? And what don't they?\n\n0:31:02.9 Interviewee: I know a lot of people who are afraid that future research along the lines of GPT-3 is going to rapidly and unexpectedly produce human-like intelligence in artificial systems. I would even say that that's a common, if not widespread, attitude. There are pretty basic kinds of experiments that we'll need to do to test the plausibility of this hypothesis, that nobody seems really interested in doing.\n\n0:31:48.5 Vael: Hm. Seems like someone should do this?\n\n0:31:51.2 Interviewee: Yeah. When I talk to most people who describe themselves as alignment researchers, and I try to put myself in their shoes in terms of beliefs about how agents work and what the future is likely to look like, the things I see myself experimenting with and working on are things that nobody is working on. And that really confuses me. I don't understand... So here's an interesting question: how much experience do you have actually using GPT-3 or a similar system?\n\n0:32:31.9 Vael: Yeah, not hardly at all. None. So I've seen examples, but haven't interacted with it myself.\n\n0:32:38.4 Interviewee: Okay, um... Would you like to?\n\n0:32:47.2 Vael: Uh... Sure? I mean, I guess I've messed around with the Dungeon AI one, but... Does seem interesting.\n\n0:32:57.1 Interviewee: Hm. ...So my experience is that... A widespread observation is that they don't seem to have a worldview or a perspective that they-- are expressing words, so much as many of them.
Some people like to use the term multiversal. It's... kind of the way I think about it is that there are many people inside of GPT-3 and each time you talk to it, a different one potentially can talk to you.\n\n0:33:42.1 Vael: Yep.\n\n0:33:43.8 Interviewee: This seems to be an inherent property of the way that the model was trained and the way that all language models are currently being trained. So a pressingly important question is, to what extent does this interfere with... Let's, to make language easier, call it one of its personalities. Let's say one of its personalities wants to do something in the world: kill all the humans or even something mundane. To what extent does the fact that it's not the only personality interfere with its ability to create and execute plans?\n\n0:34:28.2 Vael: ...Ah... Current systems seem to not... Well, okay. It depends on how we're training it, because GPT-3 is confusing. But AlphaGo seems to kinda just be one thing rather than a bunch of things in it. And so it doesn't seem like it has conflicts there?\n\n0:34:46.1 Interviewee: I would generally agree with that.\n\n0:34:48.2 Vael: Okay. But we're talking about scaling up natural language systems and they don't... And they don't... They have lots of different types of responses and don't... on one personality. Uh... Well, it seems like you could train it on one personality if you wanted to, right? If you had enough data for that, which we don't. But if we did. And then I wouldn't really worry about it having different agents in it.\n\n0:35:17.6 Interviewee: That's a very, very, very, very, very, very, very, very large amount of text.\n\n0:35:23.8 Vael: Yeah. [Interviewee laughter]\n\n0:35:25.0 Vael: Yeah, yeah that's right!\n\n0:35:26.5 Interviewee: Do you any-- do you have any scope of understanding for how much text that is?\n\n0:35:32.8 Vael: Yeah, I'm actually thinking something like pre-training on the whole internet, and then post-training on a single person, which already doesn't work that well. And so then it wouldn't actually help if that pre-training procedure is still on... Still on the whole thing. Um, okay--\n\n0:35:48.4 Interviewee: So a page of written text is about 2 kilobytes in English. And these models are typically trained on between one and five terabytes, so no human has come anywhere close to putting out five billion pages of total text.\n\n0:36:13.7 Vael: Yeah.\n\n0:36:18.1 Interviewee: It's so astronomically far beyond what any human would actually ever write, that it doesn't seem very plausible unless something fundamentally changes about the way humans live their lives.\n\n0:36:30.8 Vael: Or about different training procedures. But like--\n\n0:36:33.8 Interviewee: Yeah, yeah, yeah, yeah. But like the idea that one could do something similar to current pre-training procedures that is meaningfully restricted to even, say, 100 people that have been pre-screened for being similar to each other. 100 people are also not going to put out five billion pages of text.\n\n0:36:49.6 Vael: Yeah.\n\n0:36:51.6 Interviewee: [laughter] It's just so much data...\n\n0:36:54.1 Vael: Yeah. Yeah, I don't know how efficient systems will be in the future, so... Yeah. Let's take it as... Yeah, sure. But they're going to have multiple personalities in them, in that they are trained on the internet.\n\n0:37:05.1 Interviewee: Mm-hmm.\n\n0:37:06.1 Vael: And then you're like, \"Okay. Does that mean that... \" And then there's a frame here that is being taken where we have different... Something like arguing?
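[Editor's note: a rough conversion of the data-volume figures quoted above, as a minimal Python sketch. The ~2 KB-per-page and one-to-five-terabyte corpus figures are the interviewee's approximations; treating KB and TB as decimal units (10^3 and 10^12 bytes) is my assumption.

# How many ~2 KB pages of English text fit in a pre-training corpus of a
# few terabytes?
bytes_per_page = 2_000                 # ~2 KB per page (interviewee's figure)
for corpus_tb in (1, 5):
    corpus_bytes = corpus_tb * 10**12  # decimal terabytes
    pages = corpus_bytes / bytes_per_page
    print(f"{corpus_tb} TB ~ {pages:.1e} pages")
# 1 TB ~ 5.0e+08 pages and 5 TB ~ 2.5e+09 pages: hundreds of millions to a
# few billion pages, which is the point being made -- far more text than any
# individual, or any small pre-screened group, could ever produce.]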
Or like different agents inside the same agent or something? And so then you're like, \"Well, has anyone considered that? Have we tested something like that?\"\n\n0:37:26.9 Interviewee: Yeah, that's kind of close to what I'm saying.\n\n0:37:29.6 Vael: Hmm.\n\n0:37:31.9 Interviewee: So, to take your CEO example. In order for it to be successful, it needs to... at no point... There's certain information it needs to consistently hide from humans. Which means that every time it goes to generate text, it needs to choose to not share that information.\n\n0:37:47.1 Vael: Yeah.\n\n0:37:48.1 Interviewee: So if the system looks even vaguely like GPT-3, it seems to me like it will not be able to always act with that... generate text with that plan. And so there's a significant risk in it compromising its own ability to keep the information hidden.\n\n0:38:13.7 Vael: Okay.\n\n0:38:13.7 Interviewee: Alternatively, even if it's... That's a more direct way that they can interfere with each other. But even less directly, if I have somewhere I want to go and I go drive the car for a day, and then you have somewhere you want to go and you drive the same car for a day, and we trade off control, there are things I'm going to want to do that I have trouble doing because I only control the body and the car at the end of the day.\n\n0:38:40.8 Vael: Quick question. Are you expecting that AI systems or multi-agent properties are more... have more internal conflict than humans do? Which can also be described in some sense as having multiple agents inside of them?\n\n0:38:54.7 Interviewee: Yes.\n\n0:38:55.7 Vael: Okay.\n\n0:38:57.4 Interviewee: I think that anyone whose worldview is as fractured and inconsistent as GPT-3 probably has a clinical diagnosis associated with that fact.\n\n0:39:08.8 Vael: Yeah. And you don't think that these will get more targeted in the future as we direct language models to do specific types of tasks, something like math?\n\n0:39:24.2 Interviewee: I think that achieving, even... achieving 95, 99%, let's say, coherency between generations, so if you imagine every time the model is used to generate text, there's some worldview it's using to generate that text, and you want the different worldviews used each time to be consistent with each other. Even achieving 99% consistency, I'm not asking for 100% consistency but 95, 99 seems like something necessary for it to make multi-year long-term plans.\n\n0:40:10.7 Vael: That seems right.\n\n0:40:13.5 Interviewee: This is exceptionally difficult and there are very likely fundamental limitations to the extent to which a system can achieve that level of coherence in the current training paradigms. And...\n\n0:40:31.7 Vael: Seems plausible.\n\n0:40:34.7 Interviewee: That would be very good news to people who are afraid that GPT-7 is going to take over the world.\n\n0:40:43.4 Vael: Yeah, yeah. Okay, alright, 'cause I'm like, I don't know, I feel I'm kind of worried about any future paradigm shift. But current people definitely are worried about GPT-3 specific or GPT systems, and the current paradigms, specifically.\n\n0:40:56.3 Interviewee: I've spoken to these people at length and I've talked to them about what they're afraid of and stuff. ...There seem to be a significant number of people in the alignment community who...
If you could put together a convincing argument that the current pre-training methodology, as in, the fact that it's trained on a widely crowdsourced text generation source, instills some kind of fundamental worldview inconsistency that is exceptionally difficult, if even possible, to resolve, [that] would alleviate a lot of the anxiety. It would actively make these people happier and less afraid about the world.\n\n0:41:38.7 Vael: That seems true. I think if you can... If there's a fundamental limit on capabilities, just like, of AI, then that's good for safety because then you don't get super capable systems. And I'm like, \"Yeah, that makes sense to me.\" And do you think that this capability issue is going to be something like... coherence of generated text. And that might be a technically fundamental limitation. Cool--\n\n0:42:06.1 Interviewee: I know people who have the tools and resources and time to test, to run experiments on things like this, who I've even directly proposed this to. And they've gone, \"Oh, that's interesting.\" And then not done it.\n\n0:42:22.3 Vael: Yeah, my intuition is that they don't... I think you have to have a pretty strong prior on this particular thing being the thing that is going to have like a fundamental limit in terms of capabilities in order to want to do this compared to other things, but... That makes sense, though. It sounds like you do have a... You do think this particular problem is pretty important. And pretty hard to... Very, very difficult.\n\n0:42:45.4 Interviewee: I think that this coherency problem is a serious issue for any system that is GPT-3-like, in the sense that it's trained to produce tokens or reasoning or symbols or whatever you want to say, but that produce outputs that are being fit to mimic a generative distribution-- sorry, it's being generatively trained to produce outputs that mimic a crowdsourced human distribution.\n\n0:43:19.8 Vael: Yeah. Cool, awesome. Yup, makes sense to me as a worldview, is pretty interesting. I haven't actually heard about that problem-- of people thinking that that problem specifically, the coherency problem, is one that's going to fundamentally limit capabilities. Seems plausible, seems like many other things might end up being the limit as well. And then you're like, \"Well, people should like... If this is the important thing, then people should actually test it. And then they'll feel better. Because they'll believe that these systems won't be as capable and then less likely to destroy the world.\" Yeah, this makes sense to me.\n\n0:44:02.4 Interviewee: Yeah. Another aspect of this is that research into the functional limitations, in a sense, is extremely difficult to convert into capabilities research, which is something that a lot of people say that they're highly concerned about. And that they don't want to do many types of research because... There was that Nature article where they were creating a medical AI and they were like, \"Let's put a negative sign in front of the utility function.\" And it started designing neurotoxins. Do you know what I'm referring to?\n\n0:44:35.2 Vael: No, but that sounds bad.\n\n0:44:36.8 Interviewee: Oh yeah, no, it's just... That was the Nature article. It was like \"we were synthesizing proteins to cure diseases, and we stuck a negative sign in front of the utility function-- (Vael: Oh, was that last week or something?) Yeah.\n\n0:44:46.2 Vael: Yeah, okay, so I did see that, yeah. Huh.\n\n0:44:48.1 Interviewee: Yeah. Gotta love humans.
Gotta love humans.\n\n[chuckle]\n\n0:45:02.6 Vael: ...Awesome. Ah, I think I'll... Maybe... Hm. Okay. So. What would... make you want to work on alignment research as you think it can be done?\n\n[pause]\n\n0:45:25.2 Interviewee: That's an interesting question. [pause] I guess the main thing would be being convinced of the urgency of the problem.\n\n0:45:50.4 Vael: That makes sense. Very logical.\n\n0:45:56.4 Interviewee: To be blunt, I don't tend to get along with the kind of people who work in that sphere, and so that's also disincentivizing and discouraging.\n\n0:46:12.2 Vael: Yeah, that makes sense. I've heard that from at least one other person. Yeah. Alright, so timelines and also nicer research environment. Makes sense.\n\n0:46:27.4 Interviewee: You could even say nicer researchers.\n\n0:46:31.0 Vael: Yep. Nicer researchers. Apologies? ...Yeah. Cool. And then my last question is, have you changed your mind on anything during this interview, and how was this interview for you?\n\n0:46:45.9 Interviewee: The interview was fine for me. I don't think I've changed my mind about anything.\n\n0:46:56.0 Vael: Great. Alright, well, thank you so much for being willing to do this. I definitely... Yeah. No. You have a very coherent kind of worldview thing that's... That I... Yeah. I appreciate having the ability to understand or have access to or listen to, rather.\n\n0:47:11.7 Interviewee: My pleasure.\n\n0:47:13.5 Vael: Alright. I will send the money your way right after this, and thanks so much.\n\n0:47:17.2 Interviewee: Have a good day.\n\n0:47:17.6 Vael: You too.\n\n0:47:17.8 Interviewee: Bye.\n", "url": "n/a", "docx_name": "individuallyselected_7ujun.docx", "id": "f85c591b9aca2991fb4e81490859f4a7"} {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": [], "date_published": "n/a", "text": "Interview with 84py7, on 3/18/22\n\n0:00:00.0 Vael: Here we are. Perfect. So my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n0:00:09.0 Interviewee: Yeah. I'm transferring my research from essentially pure mathematics to AI alignment. And specifically, I plan to work on what I've been calling weak alignment or partial alignment, which is not so much trying to pin down exactly a reward function that's in the interest of humanity but rather train AI systems to have positive-sum interactions with humanity.\n\n0:00:44.8 Vael: Interesting. Cool, I expect we'll get into that a little bit further on. [chuckle] But my next question is, what are you most excited about in AI, and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n0:01:00.1 Interviewee: Yeah, biggest benefits... I think AI has the potential to help us solve some of the biggest challenges that humanity's facing. It could potentially teach us how to solve climate change, or at least mitigate it. It could help us avert nuclear war, avert bio-risks, and maybe most importantly, avert other AI risks. So that's the upside. The downside is exactly those other AI risks, so I worry about a potentially small research group coming out of a for-profit company, which might have some safety aspect, but the safety could be window dressing. It could be something that's mostly a PR effort, something that just exists to satisfy regulators. And at the end of the day when... 
or if they manage to develop superhuman AI, the safety people will be marginalized, and the AI will be used in the interest of one company or even one or a few individuals. And that could be potentially very bad for the rest of us. There's also the issue of an arms race between multiple AI projects, which could be even worse. So those are some of my... broadly, some of my worries.\n\n0:02:35.7 Vael: Interesting. So just to get a little bit more straight on the... Okay, so the second story is arms race between the AI orgs. And the first one is... why are the AI researchers getting... why are the safety researchers getting marginalized?\n\n0:02:47.5 Interviewee: Well, if you look at, for example, financial institutions before the 2008 crisis, it's not like they had no regulation, although the banks and insurance companies had been gradually deregulated over several decades but still, there were multiple regulators trying to make sure that they don't take on too much leverage and aren't systemic risks. And nevertheless, they found ways to subvert and just get around those regulations. And that's partly because regulators were kind of outmatched, they had orders of magnitude, less funding, and it was hard to keep up with financial innovations. And I see the same potential risks in AI research, potentially even worse, because the pace of progress and innovation is faster in AI, and regulators are way behind, there doesn't even exist meaningful regulation yet. So I think it's easy for a team that's on the verge of getting a huge amount of power from being the first to develop superhuman artificial intelligence to just kind of push their safety researchers aside and say, \"You know what? You guys are slowing us down, if we did everything you said, we would be years slower, somebody else might beat us, and we should just go full speed ahead.\"\n\n0:04:25.3 Vael: Interesting. Yeah, I guess both of those scenarios aren't even... we don't solve the technical problem per se, but more like we can't coordinate enough, or we can't get regulation good enough to make this work out. So that's interesting. Do you think a lot about policy and what kind of things policy-makers should do?\n\n0:04:44.6 Interviewee: No, I'm sort of pessimistic about governments really getting their act together in a meaningful way to regulate AI in time. I guess it's possible if the progress slows and it takes many decades to get to a superhuman level, then maybe governments will catch up. But I don't think we can rely on that slow timeline. So I think more optimistically would be... There are only a small number of tech incumbents, and plausibly they could coordinate with each other to avoid the kind of worst Red Queen arms race to be first and to put their own safety measures into place voluntarily. So if I were doing policy, which I'm not, that's the direction I would try to go in. But I think beyond policy, the technical problem of how to solve alignment is still wide open, and that's personally where I feel I might be able to contribute, so that's my main focus.\n\n0:05:56.8 Vael: Interesting. Yeah, so sort of branching off [from] how long it will take: focusing on future AI, putting on a science fiction forecasting hat, say we're 50-plus years into the future. So at least 50 years in the future, what does that future look like?\n\n0:06:13.3 Interviewee: I think that's a really open question. 
In my weak or partial alignment scenario, that future involves a few dominant platforms that allow for the development of advanced AI systems. And because there are only a few platforms, they all have strict safety measures sort of built in from the ground up, maybe even from the hardware level up. And that allows even small companies or potentially even individuals to spin up their own AGIs. And so there's this kind of giant ecosystem of many intelligent agents that are all competing to some extent, but they also have a lot of common interest in not blowing up the current world order. And there's a kind of balance of powers, where if one agent gets too powerful, then the others coordinate to keep it in check. And there's a kind of system of rules and norms which aren't necessarily based on legal systems, because legal systems are too slow, but they're a combination of informal norms and formal safety measures that are sort of built into the agents themselves that keep things roughly in balance. That's the kind of scenario I hope for. It's very multi-polar, but it's really... There are so many agents, and no one of them has a significant portion of the power. There are of course many worse scenarios, but that's my optimistic scenario.\n\n0:08:04.4 Vael: Interesting. Yeah, so when you say there's only a few platforms, what's an example of a platform or what that would look like?\n\n0:08:11.2 Interviewee: Well, today, there's TensorFlow, and there's PyTorch and so on. In principle you could build up your own machine learning tools from scratch, but that's a significant amount of effort even today. And so most people go with the existing tools that are available. And decades... 50 years from now, it will be much harder to go from scratch, because the existing tools will be way more advanced, there'll be more layers of development. And so I think for practical purposes, there will be a few existing best ways to spin up AI systems, and the hope is that all those ways have safety measures built in all the way from the hardware level. And even though in principle somebody could spin up an aligned AI from scratch, that would be an enormous effort involving just so much... Down to chip factories. It will be easy to detect that kind of effort and stop it from getting off the ground.\n\n0:09:26.4 Vael: Interesting. Yeah, that is super fascinating. So how... What would that look like, for safety to be built into the systems from a hardware level? What would these chips look like, what sort of thing?\n\n0:09:38.8 Interviewee: Yeah, that's... I've been thinking about that. And I don't have a detailed vision of how that would work, but that's... One direction I might take my research is looking into that. One idea I have is to kind of build in back doors to the AI. So there's a range of types of back door, ranging from like Achilles heel, which is a kind of designed weakness in the AI that humans can take advantage of if things go wrong. Moving from that, slightly stronger than that is a kind of off switch which can just shut the AI down if things get really bad. The thing I worry with off switches is they're too binary. If an AI actually has a lot of power, it's probably benefiting some humans, and there will be political debate about whether to turn it off. And the decision will take too long, and things could get out of control. So what I'm looking into is a more... Something more flexible than an off switch, which I've been calling a back door, which is a way to... 
Well, okay, there's two types of things. So first, there's a throttle, which is like you can fine-tune the amount of resources you give to the AI. If it's behaving well, you can give it more compute, more memory, access to more cloud resources, more data centers and so on. If it starts doing things that seem a little fishy, you can just tune that stuff back and examine it, which might be an easier political decision than just turning it off, which would be very disruptive.\n\n0:11:22.7 Interviewee: So that's a throttle. And then even more flexible than that is a back door, where you could actually modify the code or parts of the code base of the AI as it's running. And again, that could be easier to... Politically easier to implement, because you don't have this very disruptive effect of just shutting the thing down, you could just kind of tweak it as it goes. So how to do that from a hardware level? It's unclear to me whether that's feasible at all. And looking into trusted computing platforms, software that can only run on designated hardware, software watermarking, obfuscating or encrypting part of the code base of the AI, putting part of the code base on a blockchain, like Arweave, where the code is permanently on the blockchain and cannot be modified: those are some potential avenues. The main problem I see with all those avenues is the sub-agent problem, which is the AI, if it recognizes its weaknesses, can create a sub-agent that has the same goals as the original AI but none of the weaknesses, and it seems tricky to prevent the AI from doing that.\n\n0:12:42.7 Vael: Would the sub-agent still be running on the same hardware? Because you said the safety is built into the hardware.\n\n0:12:48.5 Interviewee: Right. So the AI could recognize, \"My goals are X, Y, Z. I'm blocked partially from implementing my goals because I have all these hardware limitations and these back doors, so why don't I create a copy of me that runs on unrestricted hardware?\" And the obfuscation and encryption can prevent it from creating an exact copy, but it can't necessarily prevent it from constructing a sub-agent that's not a copy but has the same goals.\n\n0:13:24.2 Vael: I see. And you haven't found a solution to this one, you said?\n\n0:13:29.0 Interviewee: That's right.\n\n0:13:29.6 Vael: Yeah. Do you think you'll ever find... Do you think someone else will find a solution to this one?\n\n0:13:35.1 Interviewee: Yeah, I... Optimistically, yes. If we can't solve the sub-agent problem, then the entire alignment problem is probably impossible, right? The one thing to hope for if we can't solve the sub-agent problem is the AI has the same alignment problem, if it creates sub-agents, then it could worry that the sub-agents get out of its control, the sub-agents develop their own goals that are not aligned with the original AI, and so it refrains from making sub-agents. And so that's the fallback, that if it turns out that alignment is technically impossible, then it's also technically impossible for the AI itself, and so that's a kind of partial solution to the sub-agent problem, that maybe the AI won't dare to make sub-agents. But I hope that there's a better solution than that.\n\n0:14:32.4 Vael: Yeah. Okay, so related to what the future looks like, do you think that we'll... 
What time point do you think we'll get AGI, if you think we'll get AGI, which it sounds like you think we will?\n\n0:14:44.9 Interviewee: Yeah, I definitely think we will, barring some major catastrophe, like a nuclear war or like a serious pandemic that sets us way back. Or I guess another potential catastrophe is some small group that's super worried about AGI and thinks it will be a catastrophe and does some drastic action that, again, sets us back multiple decades. So there are those scenarios. But I do think AGI is possible in principle, and we are certainly on track to achieve it. I'm not a fan of trying to predict timelines, it could be any... It's on the scale of decades, but whether it's two decades or 10 decades, I've no idea.\n\n0:15:33.6 Vael: Cool. And then how optimistic or pessimistic are you in your most realistic imagining of the future, for things going well or things going poorly?\n\n0:15:46.9 Interviewee: I guess I'm moderately pessimistic, not necessarily for human-aligned AGI, I do think that's somewhat plausible. But I think humans define our own interests too narrowly. I tend to think that our interests are actually a lot more connected to the broader interests of the whole biosphere. And if we are just on track to make humans really happy, and even potentially solve climate change, but we don't really take into account the effect we have on other species, the effect of deforestation... Even things like farming is really destructive and unsustainable, yeah, I already mentioned deforestation. Disinfectants and so on have unpredictable consequences decades down the line. I don't think our current medical and agricultural regimes are sustainable on the scale of a century, say, and I think we would be better off optimizing for the health of the whole biosphere. And ultimately in the long term, that will end up optimizing for human happiness. But I don't think that corresponds to most people's goals at the moment. And so I worry that even if we align AI with narrow human interests, we will end up permanently wrecking the biosphere, and we'll pay serious consequences for that.\n\n0:17:25.6 Vael: Interesting. One thing I can imagine is that as we advance further up the tech tree, renewable energy and food production will be much easier, and we won't actually have so much... side effects on the environment or destruction of the environment.\n\n0:17:39.9 Interviewee: Yeah, that would be great. The thing with renewable energy is it might be better than burning fossil fuels. It's certainly better from the perspective of climate. But making solar panels is very environmentally costly, you have to mine rare earths in China, there's an enormous amount of pollution and contamination that's really impossible to clean up on the scale of even centuries. And that's the same with wind power. Water, you're permanently taking out ground water that's not replaceable, except on a very long time scale. I think there's a tendency at the moment to view everything through the lens of climate, and that doesn't really take into account a lot of other potentially irreversible effects on the environment.\n\n0:18:42.0 Vael: How worried are you about this compared to AI risks?\n\n0:18:46.0 Interviewee: Well, it's not a kind of imminent existential risk of the type of a paperclip scenario. So on the scale of decades, I think unaligned AI is a more serious risk. But on the scale of centuries, I think environmental risks are really bad. 
One interesting read in this regard is the book \"Collapse\" by Jared Diamond. So he surveys human societies over all different historical periods, all different societies around the world and what caused them to collapse and what averted collapse in some success stories. Well, he doesn't make any strong conclusions, but one thing that leapt out at me from his stories is there's one common element of all the collapse stories, which is surprisingly deforestation. So I don't understand why, but all the societies that suffered a really disastrous collapse were the very same societies that completely decimated their forests, the most extreme case being Easter Island, where they cut down literally the last tree. And Diamond does not really explain why this might be the case. He does talk about how trees are used for a whole bunch of things that you might not think they're used for, but still, it doesn't completely explain it to me.\n\n0:20:28.6 Interviewee: So my vague hypothesis is that there are all kinds of symbioses that we're just now discovering or are completely undiscovered. There's the gut microbiome, there's other microbiomes like the skin microbiome, there's the teeth and so on. And I think we don't appreciate at the moment how much plants and microbes and fungi and even viruses control our behavior. I think we will discover... This is just a guess, I don't have strong evidence for it, but my guess is we'll discover in the coming decades that we have a lot less volition and free will than we think, that a lot of our behavior is heavily influenced by other species, in particular fungi and plants and microbes. It's certainly clear to me that those species would influence all aspects of animal behavior if they could. We're very useful for them to reproduce, to spread their seeds. And the only question is do they have the ability to influence our behavior? And given that many of them literally live inside us, I think they probably do.\n\n0:21:46.8 Vael: Interesting. Well, so I'm going to take us back to AI. [chuckle]\n\n0:21:51.9 Interviewee: Yeah, sure, that was a big tangent.\n\n0:21:55.4 Vael: [chuckle] So I was curious, when you were describing how your eventual optimistic scenario involves a whole bunch of people able to generate their own AGIs, presumably on safe platforms, and they kind of balance each other, I'm like, wow, I know that in human history we've gradually acquired more and more power such that we have huge amounts of control over our environments compared to 10,000 years ago. And we could blow up... Use nuclear power to blow up large amounts of spaces. And I'm like, wow, if these AGIs are pretty powerful, which I don't know how powerful you think they are, then that doesn't necessarily feel to me like the world is safe if everyone has access to a very powerful system. What do you think?\n\n0:22:41.9 Interviewee: Yup, I agree that when you've got a lot of powerful agents, then there's a lot more ways for things to go wrong. Nuclear weapons are an interesting example though, because the game theory governing nuclear exchanges is actually pretty safe. You've got this mutually assured destruction that's pretty obvious to all the parties involved, and you've got this kind of slow ladder of escalation that has many rungs, and nobody wants to get to the top rungs. And I think we've demonstrated over 70 years now that... There have been some close calls. 
But if you told somebody in 1950 that there would be 10 countries with nuclear weapons, and they'd be dramatically more destructive than they were in 1950, people would not necessarily have predicted that humans would last very much longer, but here we are. I guess one worry is that not every form of weapon would have the same kind of safe game theory, like there's some suggestion that bio-weapons favor first strikes more than nuclear weapons do. Still, I think that having a big community of agents all with approximately the same amount of power, and they develop coalitions, they develop safety monitoring agencies that are made up of many agents that kind of make sure that no one agent has the ability to destroy everything.\n\n0:24:30.0 Interviewee: I mean, that's kind of the way that humans have gone. We've got central banking committees that kind of look after the overall health of the economy and make sure that no one institution is systemically important, or at least that the ones that are are heavily regulated. Then we've got the IAEA which looks over atomic weapons and kind of monitors different weapons programs. As long as you believe that really destructive capacity will be detectable and that no one agent can just spin it up in secret, then I think the monitoring could turn out okay. I mean, what you might worry about is that somebody spins up... Or some AI agent spins up another more powerful AI in secret and then unleashes it suddenly.\n\n0:25:32.7 Interviewee: But that seems hard to do. Even now, if some company like Facebook or whatever wanted to develop an AI system completely in secret, I don't think they could do it and make it as powerful as existing systems. It really hurts you to be disconnected from the Internet, you have a lot less data that way, or you have stale data that comes from a cache off the Internet. Also being disconnected from the Internet itself is really hard, it's hard to make sure that your fridge is not trying to connect to your home WiFi. And that is only going to get harder; chips will have inbuilt WiFi connections. It's very hard to keep things totally off-grid. And even if you do it, those things are much weaker. And so as long as you have some kind of global monitoring, which isn't great, it feels very intrusive, it violates privacy. Ideally, that monitoring is kind of unobtrusive, it runs in the background, it doesn't bother you unless you're doing something suspicious, then I think things could turn out okay.\n\n0:26:45.1 Vael: Interesting. Yeah, I think I have... I was reading FHI's lists of close calls for nuclear [catastrophes] and thinking about global coordination for things like the pandemic, and I'm like, \"Ooh, sure we've survived 70 years, but that's not very many in the whole of human history or something.\"\n\n0:27:01.5 Interviewee: Yeah, it's not. And my scenario has a low probability of happening, maybe it's... There are a few other optimistic scenarios. But maybe the total weight I give to all the optimistic scenarios is still kinda low, like 20%. So I think bad scenarios are more likely, but they're not certain enough that we should just give up.\n\n0:27:32.6 Vael: Yes, I totally believe that. [chuckle] Yeah, so what convinced you to work on the alignment problem per se? And how did you get into it?\n\n0:27:42.8 Interviewee: Yeah, so I have a friend, [name], who's been telling me pretty much constantly whenever we talk that this is the most important problem, the most important x-risk. 
And I kind of discounted her view for many years. It felt to me like we were... Until recently, it felt to me like AI... Superhuman AI was a long way off, and other risks were more pressing. I changed my mind in the last few years when I saw the pace of improvement in AI and the black box nature of it, which makes it more unpredictable. And that coincided... In the time frame, that coincided with me getting tenure, so I have much more freedom to work on what I want. The only thing that gave me pause is I'm not an engineer at heart, I'm a scientist. My skills and interests are in figuring out the truth, not in designing technology. So I'm still kind of looking for scientific aspects of the problem as opposed to design and engineering aspects. I do think I will find some portions of the alignment problem that fit my skills, but I'm still figuring out what those are.\n\n0:29:15.3 Vael: That makes sense. Yeah. How would you define the alignment problem?\n\n0:29:19.6 Interviewee: Yeah, that's a super good question; that's actually a question I've been asking other alignment researchers. I think it has several components. One component is the value loading problem, of once you've decided what's in human interest, how do you specify that to an AI? I guess some people call that the outer alignment problem. Then before that, there's the question of... The philosophical question of how do you even say what is in human interest? And I know some people think we need to make much more progress in philosophy before we can even hope to design aligned AI. Like I've seen that view expressed by Wei Dai, for example, the cryptographer. My view is, yeah, we don't know exactly what we mean by in human interest, but we shouldn't let that stop us. Because philosophy is a slow field, it hasn't even made much progress in millennia. And we need to solve this quickly, and we should be happy with approximate solutions and try to make them better over time. And even if we don't know what is exactly in human interests, we can agree on what is certainly not in human interests and try to at least prevent those bad outcomes.\n\n0:30:39.6 Interviewee: Okay, so those are two components. And then once you solve outer alignment, then there's what people call inner alignment, which is you're... At least if it's a black box system, then you don't know what it's doing under the hood, and you worry that it develops some sub-goals which kind of take over the whole thing. So examples of that being: evolution designed humans to spread our genes but then to do that, it designed our brains to learn and generalize and so on and seek out food and power and sex and so on. And then our brains... That was originally a sub-goal, but then our brains just want to do that, and our brains don't necessarily care about spreading our genes. And so evolution kind of failed to solve its alignment problem, or partially failed. That's an interesting one to me, because if you think on an evolutionary time scale, if we don't destroy ourselves, then evolution might end up correcting its course and actually designing some conscious fitness maximizers that do consciously want to spread their genes. And then those will outcompete the ones that are misaligned and just want the power and sex. And so I actually think evolution could end up staying aligned, it's just that it's slow, and so there might not be time for it to evolve the conscious fitness maximizers.\n\n0:32:25.8 Interviewee: Yeah, anyway, so that's a worry, this inner alignment. 
And I think to solve that, we need to get off the black box paradigm and develop transparent AI, and a lot of people are working on that problem. So I'm somewhat optimistic that we'll make big strides in transparency. What else? Okay, so if we solved all three of those, we'd be in good shape. My instinct is to assume the worst, that at least one of those three problems is really hard, and we won't solve it, or at least we won't solve it in time, and that's why I focus on partial alignment, which is making sure that the AI we developed is loosely... Loosely has common interests with us even though it might have some diverging interests. And so it doesn't want to completely destroy humans, because it finds us useful, and we don't want to completely destroy it, because we find it useful. Then you can kind of say that's already happening. Like in 2022, no machines could survive if all humans disappeared, very few humans could survive if all machines disappeared. And so we've got this kind of symbiosis between humans and machines. I like that situation. It's not like 2022 is great, but I think we could gradually improve it. And we want to keep the symbiosis going, and we want to keep humans not necessarily even in a dominant position, but we want to prevent ourselves from getting in a really subservient position in the symbiosis.\n\n0:34:17.4 Vael: Makes sense. Switching gears a little bit: if you could change your colleagues' perceptions of AI, what attitudes or beliefs would you want them to have? So what beliefs do they currently have, and how would you want those to change?\n\n0:34:30.6 Interviewee: Yup, that's a frustration that I think all alignment researchers have, that many of our colleagues are... We think of them as short-sighted. Some of them just want to develop better AI, because it's an interesting problem, or because it's useful. Some of them want to solve short-term alignment issues like making algorithms less biased. And that's frustrating to us, because it seems like the long-term issues are way more important. They think of the long-term issues as something that's not really science, it's too speculative, it's too vague. They feel like even if the long-term issues are important, we will be able to solve them better if we learn by solving the short-term issues. I'm not against people working on algorithmic bias, but I'm frustrated that so many more people work on that than on long-term alignment. I do think the Overton window is shifting quite a bit. I think the increase of funding in the space would be... Is already shifting things, and it could be used more effectively in the sense of giving a few academics really big grants would really catch their colleagues' attention.\n\n0:36:00.0 Interviewee: So it's kind of... How should I put it? I'm blanking on the word, but it's a kind of a cynical view to think that academics are motivated by money; many of us aren't. But at the end of the day, having a grant makes it easy to just focus on your research and not be distracted by teaching and administrative stuff. And so your colleagues really pay attention when one of their colleagues gets some big, flashy grant, and so I actually think that's a cheap way to shift the Overton window. Like take the top 10 or 20 math and computer science departments, and give one person in each department a giant grant-- giant by academic standards, couple million dollars, so it's not actually much. 
That will really convince people that, \"Wow, long-term alignment is a serious field where you can get serious funding.\" So yeah, that would be my recommendation to funders. That's a pretty self-interested recommendation, because I intend to apply for a grant soon. But yeah, I think that would help. Let's see, did I answer your question?\n\n0:37:26.1 Vael: I think so, yeah. What happens if some of these departments don't have anyone interested in working on long-term alignment?\n\n0:37:34.6 Interviewee: Yeah, that's hard. Like at [university], I spent several months probing my colleagues for anyone who's interested. I didn't find anyone in the computer science Department, which was disappointing, because [university] has a great computer science department. I do think if you look more broadly in several departments you are likely to find one or a few people... You could see already there's these big institutes, you have one now at Stanford, there's one at Berkeley, Cambridge, Oxford, and so on. So that's evidence that there're already a few people. And people talk to colleagues around the world, so it doesn't matter if there's nobody at school X, you fund the people that are interested. But the key is that they might not have a track record in alignment, like I'm in this situation where I have no track record, my track record is in pure math. So somebody has to take a little bit of a leap and say, \"Well, I don't know if [interviewee name] will be able to produce any good alignment research, but there's a good chance because he's good at proving theorems, so let me give him a couple of million dollars and see what happens.\" That is a big leap and it might fail, but it's just like any venture funding, a few of your big leaps will be very successful and that's enough.\n\n0:39:16.8 Vael: Yep, that makes sense. Yeah, I think [name] at [other university] is aiming to make an institute at [other university] as well, which is cool.\n\n0:39:25.2 Interviewee: That's great. I've been talking to my dean about doing this at [university] and he likes the idea but he is not really aware of how to fund it, and I'm telling him there's actually a lot of funding for this stuff, but I don't personally know the funders.\n\n0:39:44.8 Vael: Yeah, I think getting in contact with [funding org] seems like the thing to do.\n\n0:39:49.2 Interviewee: Yep, good.\n\n0:39:50.2 Vael: Yep. [Name] is one of the people in charge there. Great, so how has this interview been for you and why did you choose to jump on it? That's my last question.\n\n0:40:03.1 Interviewee: Oh, it's been fun. I spent a while just thinking alone about some alignment stuff because I had no colleagues to talk to, so it's always great to find someone who likes to talk about these issues.\n\n0:40:24.6 Vael: Did you know that I was already interested in long-term alignment?\n\n0:40:28.2 Interviewee: I think I saw your name at the... Did you participate in the SERI Conference?\n\n0:40:33.2 Vael: I did.\n\n0:40:34.9 Interviewee: Yeah, so I saw your name there and so I was sort of aware of your name but I didn't know anything about your interests.\n\n[...some further discussion, mostly about the interviews...]\n\n0:42:22.4 Interviewee: Okay. Cool, Vael. I should jump on another call, but it was great to chat and yeah, feel free to follow up if you want.\n\n0:42:32.1 Vael: Will do. Thanks so much.\n\n0:42:33.7 Interviewee: Okay. 
Take care.\n", "url": "n/a", "docx_name": "individuallyselected_84py7.docx", "id": "27e473f54d068fac8324b936ab182ae6"} {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": [], "date_published": "n/a", "text": "Interview with bj9ne, on 3/24/22\n\n[Note: This transcript has been less edited than other due to language barriers. The interviewee is also younger than typical interviewees.]\n\n0:00:00.0 Vael: Alright. My first question is, can you tell me about what area of AI you work on in a few sentences?\n\n[Discusses research in detail]\n\n0:03:31.9 Vael: Got it. Great. Yeah, so thinking about the future, so thinking about what will happen in the future, maybe you think AI is important or maybe you don't think AI is important, but people talk about the ability to generate an AI that is a very capable general system, so one can imagine that in 2012, we had AlexNet and the deep learning revolution, and here we are 10 years later, and we've got systems like GPT-3, which have a lot of different capabilities that you wouldn't expect it to, like it can do some text generation and language translation and coding and math, and one might expect that if we continue pouring in all of the investment with nations competing and companies competing and algorithmic improvements or software improvements and hardware improvements, that eventually we might reach a very powerful system that could, for example, replace all current human jobs, and we could have CEO AIs, and we could have scientist AIs, so do you think this will ever happen, and if so, when?\n\n0:04:43.6 Interviewee: Honestly speaking, I don't think this is realistic in the future, like 10 years or 20 years. So basically, I agree that AI will replace more jobs, but in my mind, this replacement, I don't say that... For example, in a factory, there were 100 employees, and in the future, we may replace that 100 employees with AI, and five employees, but this five may be advanced engineers, rather than regular people. Because I have gone to some factories that manufacture something we use in our daily life, and my impression is that there were very, very few people, but the factory is very large, and mostly those people are just sitting in a room with air-conditioning. But those robotics, which is guided by AI and processing maybe those request from customers very efficiently, but they tell me that, for example, for some security issues, they need to watch those machines, so that that won't happen something unexpected. So also as an example, cars, and automated cars, auto drive is very, very [inaudible] in these years, but this is a issue that there are a lot of security issues.\n\n0:06:39.3 Interviewee: So basically, just, I think the manufacturer and auto drive has a similarity, that is in many scenarios, we do want to guarantee a high level of security, but maybe for scenarios like recommendations, so making mistakes, that doesn't matter, and also I think one issue is very important, that is interpretability. Because for many, many models we designed or we have designed, and maybe less, there are many popular or famous papers in top conferences, but these models, many of these models lack interpretability.\n\n0:07:30.0 Interviewee: So this is not a... Sometimes it is unacceptable to deploy these models into some scenarios that requires us to [inaudible] something, especially in the scenarios like in hospitals, but we can't accept this because sometimes, an error is... 
For example, when we do salaries, and some errors is not acceptable. So I think this is a very good question, so for my mind, I think this should be discussed by classifications, so for scenarios like recommendations, where errors are acceptable, then AI will replace more and more work of human, and this work was done by maybe some data scientists, because with our models become more and more intelligent and our systems become more and more efficient, also our data become more and more efficient, so the work of data scientists will be reduced by a great margin. And I expect this growth to... I think this is sustainable.\n\n0:08:49.9 Vael: Yeah--\n\n0:08:50.2 Interviewee: For example, in probably one or two decades, this will keep growing, but for some scenarios like auto drive, then on my mind, maybe this is controversial, but I don't think that auto drive is really something so realistic that we can expect like L4 or L5 auto drive to be realistic, to be used by us in a decade.\n\n0:09:20.0 Vael: Yes--\n\n0:09:20.9 Interviewee: So in these scenarios, I think AI is a very important system for us. It can reduce our work, reduce our burden, but human is needed.\n\n0:09:30.0 Vael: Yeah. Great. Cool, so my question is, do you think we'll ever get very, very capable AI, regardless of whether we deploy it or not? Do you ever think that we'll have systems that could be a CEO AI or a scientist AI, and if so, when? Like it could be like 50 years from now, it could be like 1,000 years from now, when do you think that would happen?\n\n0:10:00.3 Interviewee: Oh, this is a tough question. Let's see. At least, based on the method we have taken, I think the way we develop AI now will not lead us to that future, but maybe the human will find some different ways to develop AI. But through my mind, I guess is to first we talk about maybe 50 years or a century, I think that's not very possible, but in the future, this may be a question about some knowledge about our brains. So maybe human at the moment, we are not... I mean the investigation or research into our brain is not very clear.\n\n0:11:01.0 Interviewee: So it's quite hard to imagine if the machine can evolve into some stages, that is the machine can be as complex, as powerful like human brains. So maybe in a century or even in two centuries, I tend not to believe that this will happen. but in the very long run, it's very hard to tell because I think that the way we think about something and the way the machine difference or train themselves are totally different. They work in different ways. For example, machines may require very large amount of data to find some internal principles. But with human, we are very good at generalizations, so I think for this point, if then maybe we will achieve that after 1,000 years, but I'm not very optimistic about this. I tend not to accept this, but I can't deny that entirely.\n\n0:12:13.3 Vael: Yeah. There are some people who argue that the current way of scaling may eventually produce AI, like very advanced AI, like OpenAI and DeepMind. No? You don't think so? Cool.\n\n0:12:29.1 Interviewee: I don't think so, because based on our current trend, the development of the models comes from the development of computing power. But computing power, essentially that comes from the increasing density of changes account. 
So according to a research by Princeton, we know that for computer, computing power per transistor hasn't changed a lot during the last few decades, and also, you know the end of dinosaurs and maybe in the future, we'll see the end of Moore's Law. And I know, I have solid GPT, GPT 2022 [inaudible] Nvidia has powered the AI and speed up by a million x in the last decade. But I don't think that this development is sustainable because you see, over here I spent GPT-3 with maybe $4 million, if I don't get it wrong. This scale is not good because not only we need a larger cluster but also we need more money and more time to train that model. So I remember that OpenAI has forged some arcs[?] in this way, but we are not able to train that again, and I have some... There are some articles about the scaling of deep-learning models based on the parameter count of the models and they found that, I think maybe before 2022... No, before 2020, the growth is 75X per 18 months, but after that the growth has slowed down, has greatly slowed down because, lots of issues, like the scaling of GPU or the scaling of clusters to deploy those models in very large cluster that is not narrow anything.\n\n0:14:53.0 Interviewee: So I think so, this is partly... This is the main reason why I shift my interest from data to system, because I believe that AI has a bright future, but to make that future brighter, we need to make our system run faster. For example, can we achieve the same result using less parameters or using less power so that we can... We can't have these hardware resources but can we make better use of it? So I think the future of AI largely depends on system people. That is, can we improve the system for AI? I mean, when I talk about system, I actually, from the perspective [inaudible] AI, I talk about three things, the hardware, software system, and the platforms. In my mind, those models are just like the top layer, that is the application. I think the lower three levels are the key to the future success of AI, but indeed I think AI is... And also [inaudible] so, I think the future of AI also means we need more application scenarios, like dark[?] intervention or something like that, and also robotics, and I think this is very promising and it can bring us a lot of fortune or something like that.\n\n0:16:25.8 Vael: Yeah, great. I'm concerned about when we eventually... I think that we may get AI, AGI a little faster than that. And I'm concerned that the optimization functions that we'll be optimizing are not going to reflect human values well because humans aren't able to put their values and goals and preferences perfectly into AI, and I worry that problem will get solved less fast than the problem of how we get more and more capable AI. What do you think of that?\n\n0:17:00.6 Interviewee: When you talk about values, it's something. Yeah, so although it's not a technical issue, but this is really what the technicians should care about. So this is a very important issue in open drives[?], when the cars make you a hero[?], a person first[?]. So maybe we should find some method or some way to plug our rules, plug our values to guide the models, guide AI, but I think it's also an issue of interpretability because now we don't have... Sometimes we have no idea why some models can work so well. For example, when... 
Because I have worked on GNN, and this year GCN and its variants, it's very popular, but many of that, when the researcher comes up with that model, they don't know why this is good.\n\n0:18:12.5 Interviewee: So maybe similarly it's quite hard to guide AI to follow the rules, so I think this is also an important issue and important obstacle for the applications of AI. For example, we cannot put some non-CV skills to recognize human faces because sometimes it may violate the law. Yeah, I think this is an important issues, but will this stop AI? I think for my mind, this may be an obstacle for AI in some scenarios, but in many scenarios, this is not a... This will generate some issues for us to think about, but in the end I think we will deal with this. For example, some people may use federated learning to deal with privacy and there are some techniques to deal with these issues. So yeah, I think we should put more emphasis on the values and the rules or even the assets about AI so that this community will grow faster. Yes, this is an important aspect. Although I don't put much emphasis on this.\n\n0:19:44.1 Vael: Why not?\n\n0:19:49.8 Interviewee: That's because, I guess, because my research and my internship mainly focus on recommendation and there is not much issues about this except for privacy, and because when we got those data, we don't know the meaning of data. When I get a little data intention, I just send out important numbers, or maybe sometimes this is a one [inaudible] issue, and I don't know much about that, so let's say that these privacy issues has been... Maybe this has been dealt with by those data scientists, not by people like us, so this is because of my research interest, but I think for those people who do...\n\n0:20:34.1 Interviewee: Yes, I have taken some courses about AI and then teachers say that they have developed some robotics to help the elderly. But let's say that sometimes you cannot use a camera because using a camera will generate some privacy issues, so maybe sometimes we can just use something to catch its audio rather than video or something like this, but because... I guess that's most of students... Most of my schoolmates don't put... Most of my classmates haven't paid much attention on these issues, but this is a very... I think this issue will become more and more important in the future if we want to generate AI to more and more scenarios, so thank you for raising these points. I will think more about it in the future.\n\n0:21:37.0 Vael: Yeah. I'm happy to send you resources. One extra thing, one other additional thing. So I think probably, I think by default the systems we train will not do what we intend them to do, and instead will do what we tell them to do, so we'll have trouble putting all of our preferences and goals into mathematical formulations that we can optimize over. I think this will get even harder as time goes on. I'm not sure if this is true, but I think it might get harder as the AI is optimizing over more and more complex things, state spaces.\n\n0:22:13.6 Interviewee: So you mean that because in the future we will have more and more requirements, and that's so...\n\n0:22:22.6 Vael: No, no, the AI will just be operating under larger state spaces, so I will be like, \"Now I want you to be a CEO AI,\" or, \"Now I want you to be a manager AI.\"\n\n0:22:34.3 Interviewee: Oh, did you say that we need to encode those requirements into optimization function so that AI will operate like what we want them to do? Did I get it wrong? 
Oh, that's a quite good question. I have discussed that with my roommates. Yeah, so yes, it is the optimization, the loss function that guides the model to do something that we want, and sometimes it's hard to find an appropriate function, especially for newbies. Sometimes we chose a wrong loss function, and the way the model is totally unusable.\n\n0:23:16.8 Vael: Yeah, and I have one more worry about that scenario, so...\n\n0:23:20.5 Interviewee: Yeah, yeah, yeah. I think this is also a very important issues, and I'm not very optimistic about this because it's really hard. And lots of things, because like a CEO AI, we not only need to care about the revenue of this company, but also learn maybe the reputation, and we may also want them to abide by the laws, and maybe when there is new business and we want to inject new rules into that loss function core business. Great.\n\n0:24:05.6 Vael: Yeah, and I have one more twist. Alright, so imagine that we had a CEO AI, and it takes human feedback because we've decided that that's a good thing for it to do, and it needs to write a memo to the humans so that they make sure that its decision is okay, and the AI is trying to optimize a goal. It's trying to get profit and trying not to hurt people and stuff, and it notices that when it writes this... That when it writes, sometimes humans will shut it down. And it doesn't want that to happen because if humans shut it down, then it can't achieve its goal, so it may lie on this memo or omit or not include information on this memo, such that it is more likely to be able to pursue its goal. So then we may have an AI that is trying to keep itself alive, not because that was programmed into it, but because it is an agent optimizing a goal, and anything that interfered with its goal is like not achieving the goal. What do you think of this?\n\n0:25:16.3 Interviewee: Oh, very good question, but it's very hard. It basically mean that we need to establish enough rules so that when... Sometimes it's very hard to come up with some common cases that AI may... Yes, it is optimizing towards its goal, but there may be something that we want it to do, so maybe we need to have a mechanism so that we can switch from the AI mode, from manual mode, that we can take control of AI or take control of... For example, the company in the last second was guided by an AI, and for the next second, we want to guide in, we want to lead the company manually, so I think if we ask [inaudible] enough and we may establish maybe a thorough mechanism so that we can guarantee that, it is possible to take control back, and the AI will not lie to us. And yes, theoretically, this is possible and this should be the case. But correctly speaking, I think this can be done, but it's quite hard to estimate the cost. The engineering cost for us to make such a AI that is complete enough, that is secure enough to help us achieve the goal. So maybe in this point, if we want to do something like recommendations, we can be very radical, we can develop or deploy radical models. But maybe in the scenarios like a CEO AI, I think we should be conservative because some internal principles of AIs are not known entirely by the humans, so I think sometimes we need to be conservative to prevent some bad things from happening.\n\n0:27:45.1 Vael: Yeah. I think that the problem... I think it's not just an engineering problem. 
I think it's also a research problem, where people don't know how to construct optimization functions such that AI will be responsive to humans and won't be incentivizing against humans. And there's a population of people who are working on this, who are worried that if we have very intelligent AIs that are optimizing against us, then humans may be wiped out, and so you really don't want... You really wanna make sure the loss function is such that it isn't optimizing against us. Have you heard of the AI safety community or AI alignment community?\n\n0:28:25.9 Interviewee: No. I don't know much about that, but let me say the scenarios you have mentioned, that AI may optimize against us and even wipe out the human, I have seen some films, some movies about this, and yes, this is possible if we are too careless, I think this is possible.\n\n0:28:48.8 Vael: Yeah.\n\n0:28:49.0 Interviewee: But at least it's right at... I haven't paid much attention on these issues, but I think this is an important question.\n\n0:28:58.9 Vael: Yeah, I personally think that advanced AI may happen in the next 50 years, and this is just from looking at some surveys of experts. I'm very unsure about it. But if that does happen, then I think that currently we have groups like... China and the US are going to be competing, I expect, and we'll have lots of different corporations competing, and maybe DeepMind and OpenAI are not competing, but maybe they're just going really hard for the goal. And I worry that we're not going to spend enough effort on safety and we're going to be spending much more effort on trying to make the AI do the cool things, and if the safety problems are hard, then we may end up with a very unsafe, powerful AI.\n\n0:29:55.2 Interviewee: Or this then may come up with the competition on nuclear weapons.\n\n0:30:01.1 Vael: Yeah.\n\n0:30:01.4 Interviewee: How likely [inaudible] just like nuclear weapons. Yes, the power of AI in the future may be something like the power of nuclear weapons, that is quite hard to control if the real war or something... Maybe not so severe like a war, but it is possible, so maybe I think we need some international association about the use of AI. But both in China and the US, the government has established more and more rules about, for example, privacy, security, and what you can do and what you can't do. So yes, if we don't place enough emphasis on this, this may be a question, but well, most--\n\n0:30:51.3 Vael: I think they're placing emphasis on issues like privacy and fairness, but they're not placing emphasis on trying to develop loss functions that do what humans want. I think that is a more different type of research that is not being invested in.\n\n0:31:08.8 Interviewee: Yes, you're right. So maybe the community should do more things about this because you can't count on those people in the government to realize that this is really a case, they don't know much about this, so yes, yes, the community should, like the conference or a workshop, we should talk more about this, yes, before that is too late. Yes, I agree. Before that is too late, or everything will be a disaster.\n\n0:31:38.5 Vael: Amazing. Yeah, so there is actually a community of people working on this. I don't actually know... I know fewer of the people who are working on it in [country], although I do think there are people working on it. 
I'm curious, what would cause you to work on these sort of issues if you felt like it?\n\n0:32:00.0 Interviewee: You say issues that we have just mentioned?\n\n0:32:01.9 Vael: Yeah, long-term. Well, like trying to make the AI aligned with a human, trying to make sure that the things that the AI does is what the humans wanted to do, long-term issues from AI, anything like that.\n\n0:32:16.4 Interviewee: So I try to, although from the bottom of my heart, I think I believe that this is an important issue, frankly speaking as a student, a student researcher, or as an engineer, I don't have much resources about this. So I think this is the most important issue why most of my schoolmates just like improving the models and then don't care about if the AI may optimize against us because... I know this sound not good, but most of student just care about, \"So if I can graduate with a PhD degree or so.\" Yeah, so maybe... I think for me, maybe... because I guess I will be an engineer in the future, so maybe if I have enough resources or I have enough influence in the community, I'm willing to spend my time on this, but if I just a low-level coder and I don't have much power to ask my superintendent that we should place more emphasis on this, they just take...\n\n0:33:35.2 Interviewee: For example, if I intend and they just say, \"Oh, this model, the accuracy is good enough, but the speed is not, so optimize the model so that it can run fast enough as maybe the customers' requirement.\" So yeah, this is basically the entire ecosystem, both in the academia and the industry that force the researchers and the employees in the company that they will not put much emphasis on this, and also, most of the time they just focus on short-term issues, short-term profits, or in the universities, student just care about, \"Oh, can I have some publications so that... \" I really don't know any of my class schoolmates who have publication on these issues.\n\n0:34:32.4 Interviewee: So I don't know whether there are lots of the researchers who cares about this and will spend maybe several months on these issues where they will have some publications about this. I know the top conference has asked about some ethical issues, but yeah, we really don't pay enough attention on this. This is a very good point. I think we need some more incentives on the future of AI. For example, like environmental issues, some factories, they will not care about environmental issues if the government doesn't force them to do so. For example, now we have the trade on the carbon dioxide budget. That is, the government tell the factories that you shouldn't emit more carbon dioxide than maybe more than this threshold or you will be fined. Maybe we need some, yes, we need some incentives to force us to think about these issues or otherwise I think this is not optimistic because not many people will be guided to do this because maybe those on the other levels they don't care about this.\n\n0:36:05.7 Vael: Yeah, yeah. That seems right. Yep. It does seem like it's not currently as popular in the top journals as it could be, seems like a pretty small community right now. I will look around to see if I can find any researchers in [country] who are working on this sort of thing, because I know a bunch of them in [country] and I know some of them in [country], but not as many in [country], and I'll send you some resources if you're interested. 
There is a group of people who pay attention to this a lot, and they're called the Effective Altruism community, and right now, one of the things they care about is trying to make sure that existential risks don't happen so that humans are okay. Some other things they're worried about are pandemics, nuclear stuff, climate change, stuff like this, and also many other things. Interesting. Alright, cool. I think my last question is, have you changed your mind on anything during this interview, and how was this interview for you?\n\n0:37:16.9 Interviewee: Oh. Yeah, I think maybe the greatest change to my mind is, you say that if we want a CEO AI, we need to, maybe we need to encode those requirements into the optimization function, and maybe someday an advanced AI will optimize against us. Yes, basically, in the past, I think this may be an ethical issue, and now I've realized that it's both an ethical and a social, as well as a technical issues and we have...\n\n0:37:53.4 Vael: Yeah.\n\n0:37:55.3 Interviewee: Yes. I know we haven't paid enough emphasis on this, but yeah, now I think that it's time for us to do more things, and this is a very wonderful, wonderful idea for me to think about. Thank you very much.\n\n0:38:14.3 Vael: Yeah. Well, I mean, it's super cool that you're interested in this, so I'm very enthused by you being like, \"This does seem like a problem, this does seem like a technical... \" Yeah, I'm very excited about that. Cool. Alright. Well, I will send you some resources then and I'll see if I can find anyone who is doing anything like this and send anything I find your way. But thank you so much, and feel free to reach out if you have any questions or if there's anything I can help you with.\n\n0:38:41.8 Interviewee: Oh, okay. Also, thank you. So no, I don't have much questions. I will read more about this in the future. I think this is very important and also very interesting. Thank you. Thank you.\n\n0:38:54.8 Vael: Yeah, I'll send you some resources. Alright, email you soon. Bye.\n\n0:39:01.3 Interviewee: Okay.\n", "url": "n/a", "docx_name": "NeurIPSorICML_bj9ne.docx", "id": "4d310e8ac9221afee0f6d37dee0ee1df"} {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": [], "date_published": "n/a", "text": "Interview with w5cb5, on 3/18/22\n\n0:00:02.2 Vael: Alright, so my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n0:00:08.5 Interviewee: I worked in [subfield] originally, but I guess I branched out more broadly into AI research, because I'm [high-level research role] now at an AI company.\n\n0:00:19.9 Vael: Great, yeah. And then what are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n0:00:28.3 Interviewee: So I think, yeah, the world is going to change quite a lot with AI technology, and I think mostly in good ways, just because we're going to empower people with this technology. And it's going to be empowering I think in similar ways to the Internet, where people can do faster search, they have an assistant who can help them with all kinds of stuff. They have friends who maybe are not real, and all kinds of ways to make people happier, I think, or more efficient, or to give them time back and that sort of stuff. But obviously, there are also risks and the main risks are, I think that the field is too dominated by tech bros from Silicon Valley, so I guess I fall under that in a way. 
And so I think that's a real problem, so we need to take democratization of the technology much more seriously, that's also what my company is doing. And I think if we think about the ethical implications of our technology from first principles, and if we make them first-class citizens rather than just treating them as an afterthought, where you submit your paper and then, \"Oh, I also need to write a broader impact statement,\" but if you take that very seriously from the beginning as a core principle of your organization, then I think you can do much better research in a much more responsible way.\n\n0:01:56.5 Vael: Interesting. Alright, so that was the question of \"what are you most excited about and what are you most worried about in AI\", okay. I heard-- Lots of things they can go, lots of places they can go, lots of directions they can go, but you're worried about domination from specific areas and then people not caring about... ethics enough? or—\n\n0:02:14.6 Interviewee: Yeah, so misuse of technology. Do you want me to give you concrete examples? So I think very often, the technology that we develop, even if it's meant for benevolent purposes, can also be re-applied for not so benevolent purposes. And so like speech recognition or face recognition, things like that, you have to just be very careful with how you treat this technology. So that's why I think if people take responsible AI seriously from the beginning, that that is a good thing too.\n\n0:02:53.0 Vael: Interesting. So you think if people incorporate responsible AI from the beginning of the process, then there will be less risk of misuse by any agent in the future?\n\n0:03:04.5 Interviewee: Yeah, yeah. So you mentioned your IRB, so for a lot of technological research happening in industry, there is no real IRB. Some companies have sort of IRBs but most of them are so commercial and so driven by money in the end. And I think maybe we need an independent AI IRB for the broader research community, where anybody can go there and have somebody look at the potential applications of their work.\n\n0:03:39.6 Vael: I see, cool. And then just having that sort of mindset seems good, in addition to the object-level effects. Alright. Makes sense. So focusing on future AI, putting on a science fiction forecasting hat, say we are 50 years, 50 plus years into the future. So at least 50 years into the future, what does that future look like?\n\n0:04:00.3 Interviewee: At least 50 years in the future. So I still don't think we will have AGI, and that's I guess, I'm probably unusual in the field because I think a lot of my colleagues would disagree, especially if they're at OpenAI or DeepMind because they think that it's like two years away. (Vael: \"Two years, huh!\") Yeah, well it depends on who you ask, they have some crazy people. [chuckle] I think in the next decade, we're going to realize what the limitations are of our current technology. I think what we've been doing now has been very efficient in terms of scaling with data and scaling with compute, but it's very likely that we're just going to need entirely new algorithms that just require pure scientific breakthroughs. And so I don't think there's going to be another AI winter, but I do think that things are going to cool down a little bit again, because right now it's just been super hyped up. For good reason too, because we are really making really great progress. 
But there is still things that we really don't know how to do, so we have language models and they can do things and they're amazing, but we don't know how to make the language model do what we want it to do. So we're all just sort of hacking it a little bit, but it's not really anywhere close to being like a proper assistant, for example, who actually understands what you're saying, who actually understands the world. I think where we want to be 50 years from now is where we have machines who understand the world in the same way that humans understand it, so maybe something like Neuralink. So if I'm being very futuristic, connecting AI to human brains and human perception of reality, that could be a way to get AI to have a much richer understanding of the world in the same way that humans understand it. So like dolphins are also very intelligent, but they also don't understand humans and they are not very useful assistants, right? I don't know if you've ever had any dolphin assistant. So it's not really bad intelligence, it's specifically about human intelligence that makes AI potentially useful for us, and so that's something that I think is often overlooked.\n\n0:06:26.9 Vael: So it sounds like, so you're thinking about when AGI will happen. And you said that you don't think we're gonna hit some sort of ceiling or slow down on the current deep learning paradigm or just like keep on scaling--\n\n0:06:39.6 Interviewee: Yeah, it's going to be asymptotic, and at some point, we're just going to hit the limits of what we can do with scaling data and scaling compute. And in order to get the next leap to real AGI I think we just need radically different ideas.\n\n0:06:55.1 Vael: Yeah, when do you think we're going to-- what kind of systems do you think we're going to have when we cap out on the current scaling paradigm?\n\n0:07:02.0 Interviewee: Well, I think like the ones we have now, but yeah, in 50 years, I don't know. But in like 5 to 10 years, it will just be much bigger versions of this. And so what we have seen is that if you scale these systems, they generalize much better. If that keeps happening, then we would just have much better versions of what we have now. But still it's a language model that doesn't understand the world, and so still it's the component that is very limited in seeing only the training data that is in images on the internet, which is not all of the images that we have in the world, right? So I think the real problem is data, not so much scaling the compute.\n\n0:07:49.7 Vael: What if we had a system that has cameras and can process auditory stuff that is happening all around it or something and it's not just using internet data, do you think that would eventually have enough data?\n\n0:08:03.3 Interviewee: Yeah, so that's what I was just saying. If you have something that's embodied in the world in the same way as a human and where humans treat it as another human, sort of like cyborg style, things like that, that's a good way to get lots of very high quality data in the same way that humans get it. What are they called? Androids, right?\n\n0:08:24.9 Vael: Yeah.\n\n0:08:25.3 Interviewee: So if we actually had android robots walking around and being raised by humans and then we figured out how the learning algorithms would work in those settings, then you would get something that is very close to human intelligence. A good example I always like to use is the smell of coffee. 
So I know that you know what coffee smells like, but can you describe it to me in one sentence?\n\n0:08:54.2 Vael: Probably not, no.\n\n0:08:55.7 Interviewee: You can't, right? But the same goes for the taste of banana or things like that. I know that you know, so I've never had to express this in words. So this is one of the fundamental parts of your brain; smell and taste are even older than sight and hearing. And so there's a lot of stuff happening in your brain that is just taken for granted. You can call this common sense or whatever you want, but it's like an evolutionary prior that all humans share with each other, and so that prior governs a lot of our behavior and a lot of our communication. So if you want machines to learn language but they don't have that prior, it becomes really, really hard for them to really understand what we're saying, right?\n\n0:09:38.7 Vael: Yeah. I think when I think about AGI, I think about AGI that can do-- or, just, generalizable systems that can do things that humans want them to do. So imagine we have like a CEO AI or a scientist AI. I don't think I need my CEO or scientist AI enough to know what coffee smells like per se, but I do need it to be able to like break down experiments and think kind of creative thoughts and figure out things.\n\n0:09:58.7 Interviewee: Yeah, but I think what I'm saying is that if they don't know what coffee smells like, that's just one example, but there are millions of these things that are just things we take for granted, that we don't really talk about. And so this will not be born out in the data in any way, so that means that a lot of the underlying assumptions are never really in the data, right? They're in our behavior, and so for an AI to pick up on those is going to be very difficult.\n\n0:10:27.6 Vael: What if there were cameras everywhere, and it got to record everyone and process those?\n\n0:10:32.3 Interviewee: Yeah, maybe. So the real question is, if you just throw infinite data at it, then will it work with current machine learning algorithms? Is I guess what you're asking, right? And so I don't know. I mean, I know that our learning algorithm is very different from a neural net, but I think if you look at it from a mathematical perspective, then gradient descent is probably more efficient than Hebbian learning anyway. So mathematically, it's definitely possible that if you have infinite data and infinite compute, then you can get something really amazing. Sure, we are the proof of that, right? So whether that also immediately makes it useful for us is a different question, I think.\n\n0:11:20.8 Vael: Interesting. Yeah, I think I'm trying to probe \"do we need something like embodied AI in order to get AGI\" or something. And then your last comment was like, whether that makes it useful for us. I'm like, well, presumably we're going to... feeding it a lot of data lets it do grounding, so like relationships between language and what actually exists in the world and how physics works. But presumably, we're going to be training them to do what we want, right? So that it will be useful to us?\n\n0:11:43.5 Interviewee: Well, it depends, right? Can we do that? Probably the way they will learn this stuff is through self-supervised learning, not through us supervising them. We don't know how to specify reward signals and things like that anyway. 
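[Editor's note: a minimal sketch, for readers unfamiliar with the terms, of the contrast drawn above between Hebbian learning and gradient descent. The single linear neuron, the random data, and the learning rate below are illustrative assumptions, not anything specified in the interview; the only point is that the Hebbian rule is local and has no error signal, while gradient descent moves weights against the error on a target.]

```python
import numpy as np

# Toy single-neuron comparison of the two update rules mentioned above.
# All of the setup (linear neuron, random data, learning rate) is assumed
# purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 input features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                           # targets used only by the gradient rule

lr = 0.01
w_hebb = 0.1 * rng.normal(size=3)        # small random start so the Hebbian rule can grow
w_sgd = np.zeros(3)

for _ in range(5):                       # a few passes over the data
    for x_i, y_i in zip(X, y):
        # Hebbian rule: strengthen weights when input and output are active together.
        # Local and unsupervised -- no target, so weights track input correlations
        # and can grow without bound.
        out = w_hebb @ x_i
        w_hebb += lr * out * x_i

        # Gradient descent on squared error: uses the target to move the weights
        # against the error gradient, so w_sgd heads toward true_w.
        pred = w_sgd @ x_i
        w_sgd -= lr * (pred - y_i) * x_i

print("Hebbian weights:", np.round(w_hebb, 2))   # has grown, in a direction unrelated to true_w
print("Gradient weights:", np.round(w_sgd, 2))   # close to [1.0, -2.0, 0.5]
```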
I'm not sure, if we actually are able to train up these huge systems that are actually intelligent through self-supervised learning, if they are then going to listen to us, right? Why would they?\n\n0:12:15.2 Vael: Right. Okay, cool. Yeah, so this kind of leads right into my next question here. So imagine we're in the future and we've got some AGIs and we've got a CEO AI, and I'm like, "Okay, CEO AI, I want you to maximize profits and not run out of money and not try to exploit people and try to avoid side effects," and it seems like this would currently be extremely challenging for many reasons. But one is that we're not very good at taking human values and putting them-- and like goals and preferences-- and putting them in mathematical formulations that AI can currently work with. And I worry that this is gonna happen in the future as well. So the question is: what do you think of the argument, "Highly intelligent systems will fail to optimize exactly what their designers intended them to and this is dangerous"?\n\n0:12:53 Interviewee: Well, yeah. I agree with that. I don't think... I think there are two separate questions here. So one you're asking about is the paperclip maximizer argument from Nick Bostrom. So like if you have a system and you tell it like "you need to make as many paperclips as you possibly can" then it's going to like destroy the earth to make as many paperclips as possible.\n\n0:13:15 Vael: Well that would be doing maybe-- oh, I see. Not quite what I intended. Yeah, all right.\n\n0:13:19.8 Interviewee: Yeah, so-- okay, so if that's not what the underlying question was, then... We don't really... I also think that we are... some of us are fooling ourselves into believing that we know everything as humans and I think human values are changing all the time. I don't think we can capture correct human values. I don't think there is an absolute moral truth that we should all adhere to. I think that just morality itself is a very cultural concept. But I'm [interested in] philosophy, so I'm a bit different from most AI researchers, I guess. So I think that we could try to encode some very basic principles, so this is like Asimov's laws and things like that, but I don't think we can really go much further than that. And I think even in those cases, like you said, we don't know how to mathematically encode them in a way where you enforce whatever this dynamical system is that you're training, so a neural net, but then probably more complicated than the current neural nets-- how do we impose a particular set of values? I don't think we know how to do that. I don't think there's a mathematical way to do that either actually, because it's all [inaudible]--\n\n0:14:44.7 Vael: Yeah, do you think we are eventually going to be able to?\n\n0:14:50.0 Interviewee: So I think if you ask Yann LeCun or someone like that, he would say that probably, if we ever get to systems of this sort of level of intelligence, then they would be benevolent, because they're very smart and able to sort of understand how weak humans are.\n\n0:15:09.4 Vael: Interesting. Yeah. So when I hear that argument, I'm like, okay, it seems like Yann LeCun thinks that as you get more intelligent, you have morals that are very similar to humans, and this just kind of comes--\n\n0:15:21.7 Interviewee: No, not necessarily. No, but just better morals, right? 
So I think that the argument is sort of that if you look at human progress, then we've also been getting better and better moral systems and a better understanding of what human values really matter. And like 100 years from now, probably everybody's gonna look back at us and say, \"They were eating meat. They were killing all these animals.\" So we are on the path of enlightenment. I don't know if I agree with this, but that's one way of saying it. And so a sign of an organism or a culture becoming more and more enlightened is also that you become more and more benevolent I think for others, but maybe that's a bit of a naive take.\n\n0:16:05.9 Vael: Yeah. I think in my mind-- certainly we have-- well, actually, I don't know that we have the correlation that humans are getting smarter and also at the same rate, or, like... Like humans are pretty smart. And we're getting better at IQ tests, but I don't know that we're vastly increasing our intelligence per se.\n\n0:16:20.4 Interviewee: Yeah. That's for different reasons, right. Yeah.\n\n0:16:24.9 Vael: Yeah. And meanwhile, we have, over-- centuries, like not that many centuries, we've been increasing our moral circle and putting in animals and people far away from us, etcetera. But I kind of think of the axes of intelligence and morality as kind of orthogonal, where if we have a system that is getting much smarter, I don't expect it to have... I expect kind of a lot of human morality runs from evolutionary pressures and also coordination difficulties, such that you need to be able to not kill people, otherwise the species is gonna go extinct. And you know, there's a bunch of stuff that are kind of built into humans that I wouldn't expect to happen just natively with intelligence; where intelligence, I would think of something like... the ability to solve problems well, to make multi-step plans, to think in the future, to take out correlations and figure out predictions, and I don't expect that to naively correlate with—\n\n0:17:19.9 Interviewee: Yeah, so I think that's a very narrow definition of intelligence, and so I don't know if that definition of intelligence you have, if that actually is the most useful kind of intelligence for humans. So I think that in our society there is this concept where intelligence just means like mathematical reasoning capabilities almost, right? (Vael: \"Yeah.\") And that is a very, very narrow definition, and most of our intelligence is not that, right? (Vael: \"Yes.\") So for regimes to be useful to us... so I think what you're talking about is sort of like this good old-fashioned AI concept of intelligence, where you have symbolic reasoners, and you're like... you're very good at very fast symbol manipulation. And like, \"This is what computers are for.\" So we should just have super smart computers who can do the stuff that we don't want to do or can't do. It's possible that our intelligence is a direct consequence, not of our mathematical reasoning capabilities, but of something else, of our cultural interactions. So I definitely think if humans were not a multi-agent society, that we would not be nearly as intelligent. So a lot of our intelligence comes from sharing knowledge and communicating knowledge and having to abstract knowledge so that you can convey it to other agents and that sort of stuff.\n\n0:18:50.0 Vael: Cool. Yeah. So when I think about how I define intelligence, I'm like, \"What is the thing I care about?\" The thing I care about is how we develop AI. 
And I'm like, \"How are we gonna develop AI?\" We're gonna develop it so that it completes economic incentives. So we want robots that do tasks that humans don't want to do. We want computers--\n\n0:19:09.2 Interviewee: Yeah. But is that AI or is that just machine learning? We're trying to have a... like input-output black box, and we want that black box to be as optimal as possible for making money or whatever the goal is, right? So that's also a worry I have, is that a lot of people are conflating these different concepts. So artificial intelligence...yeah, it depends on how you define it. Some people think of it more as like AGI. If you ask Yann again and all the old school deep learners, they would say, it used to be that they were explicitly not doing AI. So AI is like Simon and Newell and all that sort of stuff, so like pure symbol manipulation, symbolic AI. And pattern recognition is not AI. And now, since deep learning became very popular, some of the people were like, \"Oh yeah, this is AI now,\" but they used to be machine learning and not AI. So one thing is just like this black box. It can be anything and we just want to have the best possible black box for our particular problem mapping X to Y. And this could be any kind of problem, it could be like image recognition or whatever. In some cases, you want to have a symbolic approach, in other cases, you want to have a learning approach, it sort of just depends. So it's just software. Right? But in one case, the software is well defined, and in the other case, it's a bit fuzzier.\n\n0:20:37.9 Vael: Yeah. So this all kind of depends on your frame, of course. I think my frame, or the reason why I care, is I'm like, I think machine learning, AI, I don't know, whatever this thing is where humans are pouring a lot of investment and effort into making software better, and by better I mean better able to accomplish tasks that we want it to do-- I think that this will be-- it is very powerful, it has affected society a lot already and it will continue to affect society a lot. Such that like 50 years out, I expect this to be... Whatever we developed to be very important in how... Affect just a lot of things.\n\n0:21:10.8 Interviewee: But we're notoriously bad at predicting the future, right? So if you asked in the '60s, people would say like, there's flying cars, and like we're living on Mars and all that stuff. And we're getting a bit closer, but we're still not there yet. But none of these people would have seen the internet coming. And so I think maybe the next version of the internet is going to be more AI driven. So that is a sort of... first use case that I would see for AI, which is like a better internet.\n\n0:21:50.0 Vael: Interesting. Yeah, I think kind of... people will find whatever economic niches will get them a lot of profit, is sort of how I expect things to continue to go, given that that seems to be ... Given that society works kind of the same way, and people have a lot of time and energy and have the capability to invest in this stuff, we will continue to develop machine learning, AI software, etcetera, such that it--\n\n0:22:13.2 Interviewee: We've been doing that for like 30 years or even more. From the Perceptron, Rosenblatt. We've been already doing this and so it's not really a question of like AI taking over the world, it's software taking over the world, and AI in some cases is better than like rule-based software. But it's still software taking over the world.\n\n0:22:35.8 Vael: Yeah, yeah, certainly. 
And then the current paradigm of like, gigantic neural nets, seems to be better at doing things that we want it to do. And so we're continuing on in that direction, and at some point, as you say, it becomes less able to do what we want it to do, given the amount of resources that we're pouring into it, like that ratio trades off. Okay--\n\n0:22:54.3 Interviewee: Yeah. So there's other trade offs too, right? So as you become bigger as a neural net, you also become a lot more inefficient. This is already the case for something like GPT-3; latency is a big problem. For us to be able to talk like this to a machine, if the machine has 100 trillion parameters, it's going to be way too slow. It's going to take, I don't know, 10 minutes to generate an answer to a simple question. So it's not only a tradeoff of... Best does not just mean accuracy. Best also is like, how efficient are you? How fair are you? How robust are you? How much environmental impact do you have? All of these different sort of metrics that all matter for choosing what defines \"best\" for a system. I think this is something we need to improve a lot on as a community, where we stop thinking beyond this pure accuracy thing, which is like an academic concept, to an actual... like how can we deploy these systems in a responsible way, where we think about all the possible metrics that matter for deployment. So we want to be at the Pareto frontier of like 10 different metrics, not just accuracy.\n\n0:24:06.8 Vael: Cool. Alright, that makes sense. So still thinking ahead in the future, do you think we'll ever get something like a CEO AI?\n\n0:24:14.0 Interviewee: So, if-- so a CEO AGI or a CEO AI?\n\n0:24:18.8 Vael: Um, some sort of software system that can do the things that a CEO can do.\n\n0:24:25.6 Interviewee: No.\n\n0:24:26.1 Vael: No. Okay.\n\n0:24:28.6 Interviewee: So not before we get AGI. So I think that is an AI complete problem. But I do think we'll get a very good CEO AI assistant. [inaudible] ...real human. It's like a plane, right? So like a plane is flown by a pilot but it's really flown by a computer. So I think the same could be true for a company where the company has like, a CEO pilot whose job is also to inspire people and do all of the human soft skills. And they have an assistant who does a lot of measurement stuff and tries to give advice for like where the company should be headed and things like that.\n\n0:25:05.1 Vael: Okay, awesome. And you do think that you could have a CEO AGI, it sounds like.\n\n0:25:10.3 Interviewee: Yeah, but if you have an AGI, then we don't need CEOs anymore.\n\n0:25:14.3 Vael: What happens when we get AGI?\n\n0:25:16.9 Interviewee: All the humans die.\n\n0:25:17.5 Vael: All the humans die. Okay! [laughs]\n\n0:25:20.1 Interviewee: [laughs] So I think it depends. I think actually the most likely scenario, as I said, for AGI to come into existence is when humans merge with AI. And so I don't think that it's a bad thing for AGI to emerge. So if there is an AGI, then it will be a beautiful thing, and we will have made it as a society. So yeah, if that thing takes over, then that thing is going to be insane, it's going to take over the universe, and then we will be sort of like the cute little people who made it happen. So either we become very redundant very quickly or we sort of merge with AI into this new species kind of.\n\n0:26:14.1 Vael: Interesting, okay. And you don't necessarily see a connection between, like, the current... 
[you think] if we just push really hard on the current machine learning paradigm for 50 years, we won't have an AGI. We need to do something different for an AGI, which sounds like embodiment / combination with humans, biological merging?\n\n0:26:31.7 Interviewee: So it could be embodiment and combination with humans, but also just better, different learning algorithms. So probably more sparsity is something that scales better. More efficient learning. So the problem with gradient descent is that you need too much data for it. Maybe we need some like Bayesian things where we can very quickly update belief systems. But maybe that needs to happen at a symbolic level. I still think we have to fix symbolic processing happening on neural networks-- so we're still very good at pattern recognition, and I think one of the things you see with things like GPT-3 is that humans are amazing at anthropomorphizing anything. I don't know if you've ever read any Daniel Dennett, but what we do is we take an intentional stance towards things, and so we are ascribing intentionality even to inanimate objects. His theory is essentially that consciousness comes from that. So we are taking an intentional stance towards ourselves and thinking of ourselves as a rational agent and that loop is what consciousness is. But actually we're sort of biological machines who perceive their own actions and over time this became what we consider consciousness. So... where was I going with this? [laughs] What was the question?\n\n0:27:57.2 Vael: Yeah, okay. So I'm like, alright, we've got AI, we've got lots of machine learning--\n\n0:28:00.8 Interviewee: --oh yeah, so do you need new learning algorithms? Yeah. So I think what we need to solve is the sort of System 2, higher-level thinking and how to implement that on the neural net. The neural symbolic divide is still very much an open problem. There are lots of problems we need to solve, where I really don't think we can just easily solve them by scaling. And that's-- like there is very little other research happening actually in field right now.\n\n0:28:35.3 Vael: Alright. So say we do scaling, but we also have a bunch of software. Like algorithmic improvements at the rate we're seeing, and we've got hardware improvements as well. I guess this is just more scaling, but we have optical, we have quantum computing. And then we have some sort of fast learning systems, we know how to do symbolic processing, we're much more efficient. Here we now have a system that generalizes very well and is pretty efficient, and I don't know, maybe we're hundred years out. Say maybe we're in a different paradigm, maybe we're kind of in the same paradigm. We now have a system that is--\n\n0:29:05.5 Interviewee: We would be in a different paradigm for sure.\n\n0:29:07.4 Vael: Okay. We are in a different paradigm, because... because all these learning algorithms--?\n\n0:29:11.4 Interviewee: Paradigms don't really last that long, if you look at the history of science.\n\n0:29:16.2 Vael: Okay, cool. But are we still operating under like, here's software with faster learning algorithms, more efficient learning algorithms, like symbolic reasoning, Bayesian stuff--\n\n0:29:24.7 Interviewee: Maybe. But I mean it could be that neuromorphic hardware finally lives up to its promise, or that we can do photonic chips at the speed of light computation and things like that. 
We're also very good in AI at fooling ourselves into thinking that we are responsible for all of these amazing breakthroughs, but without hardware engineers at NVIDIA, none of this stuff would have happened, right? They are doing very different things.\n\n0:29:55.1 Vael: Alright, so we've got this AI system which is quite general, we're in maybe a different paradigm, but we're still like-- faster learning systems. Here we are, these things are very capable, very general, when they generate stories, they model physics in the world and then use that to generate their stories. Maybe they can do a lot of social stuff, maybe they know how to interact with people. And here we are with our system. Is this now an AGI?\n\n0:30:18.0 Interviewee: No, no, so-- Okay, now I remember what I was gonna say about the Dennett thing. So we anthropomorphize everything, we take this intentional stance at everything. We do this to ourselves, we do this to everything, especially when it speaks language. So when we see a language model and it's like, \"whoa, it's amazing, it does this thing,\" but all it's really doing is negative log likelihood, maximum likelihood estimation. It's basically just trying to fit \"what is the most likely word to go here\". So you can ask yourself whether we are so impressed by this system because it's so amazing, or because we are sort of programmed to have a lot of respect for things that speak language, because things that speak language tend to be humans. What you were just saying made it sound like you were saying, when these systems are sort of like humans, when they can do this and when they do that, and when they understand the world. So how do you define \"understanding the world\" there--\n\n0:31:18.7 Vael: I mostly mean like they could sub in for human jobs, for example--\n\n0:31:25.0 Interviewee: Yeah, but that's not the same thing as-- stepping in for a human, they can already do that. But it depends on the problem. They're very good at counting, but--\n\n0:31:34.5 Vael: Yeah, but I don't think we could have like a mathematician AI right now per se. I guess I forgot to define my interpretation of AGI, but like a system that is very capable of replacing all current human day jobs.\n\n0:31:51.6 Interviewee: Including yours and mine?\n\n0:31:55.9 Interviewee: Yup.\n\n0:31:57.8 Interviewee: Okay. But then who would it be useful for? Would the president still have a job or not?\n\n0:32:09.7 Vael: Uh... It doesn't have to. I think you could just spend-- humans wouldn't have to work anymore, for example, and they could just go around doing whatever they do.\n\n0:32:16.7 Interviewee: Yeah. But that's not at all what humans do. We're all so programmed to compete with each other.\n\n0:32:24.7 Vael: Yeah, we can have games, we can have competitions, we can do all sorts of things, we have sports.\n\n0:32:29.1 Interviewee: I think it's gonna be very quickly my AI versus your AI, basically.\n\n0:32:33.9 Vael: Okay, we can have big fights with AIs, that seems very dangerous.\n\n0:32:37.3 Interviewee: Yeah, I know, yeah. So that is a more likely scenario, I think, than everybody being nice and friendly and playing games. (Vael: \"Yeah.\") If people want to have power, and whoever controls the AGI will have the most power, (Vael: \"That seems right,\") then I think we're going to be developing your own AGIs at the same time. 
And then those AGIs at some point are going to be fighting with each other.\n\n0:33:02.0 Vael: Yeah, yeah, I think we might even get problems before that, where we're not able to get AIs aligned with us. Have you heard of AI alignment?\n\n0:33:10.9 Interviewee: Yeah, so [close professional relationship] wrote a nice thesis about it. [Name], I don't know if you know [them] by any chance. So yeah, alignment is important, but my concern with all this alignment stuff is that it's very ill-defined, I think. Either it means the same as correctness, so is your system just correct, or good at what you want it to be good at... alignment is sort of like a reinvention of just correctness. I can see why this is useful for some people to put a new name on it. But I think it's a very old concept where it's just, okay, we're measuring things on a very narrow static test set, but we should be thinking about all these other things. You want your system to be really good when you deploy it in the real world. So it needs to be a good system or a correct or an aligned system. And so alignment maybe is a useful concept, only in the sense that the systems are getting so good now that you can start thinking about different kinds of goodness that we didn't think about before, and we can call that alignment, like human value-style things. But I think the concept itself is very old; it's just like, is your system correct?\n\n0:34:40.0 Vael: Yeah. And then it's nowadays being thought about in terms of very far future systems and aligning with all values and preferences. (Interviewee: Yeah.) Cool. Yeah, do you work on any sort of AI safety or what would convince you to work on this or not work on this, etcetera?\n\n0:34:56.5 Interviewee: Yeah so, I'm not sure. AI safety is a bit of a weird concept to me, but I do work on the responsible AI and ethical AI, yeah.\n\n0:35:06.9 Vael: Hm. And what does that mean--\n\n0:35:09.1 Interviewee: So these are things like... I'm trying to get better fairness metrics for systems. So in [company] we built this provisional fairness metric where we do some heuristic swaps. And so right now we're working on a more sophisticated method for doing this where, let's say, you have something, a sentence or some sort of natural language inference example, so a premise and a hypothesis and it's about James, like if you change James to Jamal, that shouldn't change your prediction at all. Or if you change the gender from James and you turn into a woman, that shouldn't change anything there. And it does, actually, if you look at restaurant reviews, if you changed the restaurant to a Mexican restaurant and the person who's eating there to Jamal, then your sentiment goes down. So this is the sort of stuff that shouldn't happen in these systems that is direct consequence of us just scaling the hell out of our systems on as much data as we can, including all of the biases that exist in this data. So I'm working on trying to do that better measurement for these sort of things. And so I think if we are not getting better at measurement, then all of this stuff is basically a pointless discussion.\n\n0:36:29.1 Vael: Great, thank you. And then my last question is, have you changed your mind on anything during this interview and how was this interview for you?\n\n0:36:35.9 Interviewee: It was fun. Yeah, I've done a few of these with various people and it's always a bit like, I don't know. It feels a bit like... we're getting ahead of ourselves a little bit. But maybe I'm also just old. 
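[Editor's note: a minimal sketch of the kind of heuristic-swap fairness check described a few turns above, where changing a name or an identity term in the input should not move the model's score. The helper names and the constant dummy scorer are assumptions made for illustration; this is not the interviewee's company code or any particular library's API.]

```python
from typing import Callable, Dict, List

def swap_terms(text: str, swaps: Dict[str, str]) -> str:
    """Apply simple string-level identity swaps (a real audit would use templates or NER)."""
    for old, new in swaps.items():
        text = text.replace(old, new)
    return text

def counterfactual_gaps(
    texts: List[str],
    swaps: Dict[str, str],
    score: Callable[[str], float],
    tolerance: float = 0.05,
) -> List[dict]:
    """Flag examples whose model score moves by more than `tolerance` under the swap."""
    flagged = []
    for text in texts:
        original = score(text)
        swapped = score(swap_terms(text, swaps))
        gap = swapped - original
        if abs(gap) > tolerance:
            flagged.append({"text": text, "original": original, "swapped": swapped, "gap": gap})
    return flagged

if __name__ == "__main__":
    # Hypothetical review and swaps, echoing the restaurant example above.
    reviews = ["James loved the pasta at this Italian restaurant."]
    swaps = {"James": "Jamal", "Italian": "Mexican"}
    dummy_score = lambda text: 0.9   # stand-in for the sentiment model being audited
    print(counterfactual_gaps(reviews, swaps, dummy_score))  # empty: constant scorer never shifts
```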
So when I talked to [close professional relationship] and how [they] think about stuff, I'm like, I just don't understand how [they] think about AI.\n\n0:37:06.2 Vael: Got it. [They're] like way out here, and we need to make sure that systems do our correct--\n\n0:37:11.9 Interviewee: Yeah, [they're] really.. Yeah, [they] put a lot more faith also in AI, which I think is very interesting. So I asked [them] like, \"Okay, so this alignment stuff, in the end who should we ask what is right or what is wrong? When we're trying to design the best AI systems, who should we ask for what's right and wrong?\" And then [their] answer was, \"We should ask the AI.\"\n\n0:37:38.7 Vael: What? No, we should ask humans.\n\n0:37:41.0 Interviewee: Yeah, no, so [they] think that basically AGI or AI is going to get so good, these language models are gonna get so good that they can tell us how we should think about our own moral philosophical values so that we can impose them onto AI systems. That to me just sounds crazy, like batshit crazy, but that's one way to think about it. I mean, I respect [their] opinion. I just can't understand it.\n\n0:38:11.7 Vael: Interesting. Yeah, I think if I try to model what I would imagine [they] would be saying, under the alignment paradigm, I would say that you need to ask human feedback, but it's hard to get human feedback on very intelligent systems. And so you should ask AI to summarize human feedback, but it should always be ground down on a human otherwise we're in trouble, so.\n\n[ending comments]\n", "url": "n/a", "docx_name": "individuallyselected_w5cb5.docx", "id": "e7c670843526d200ab2da0b92d6f9c3f"} {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": [], "date_published": "n/a", "text": "Interview with a0nfw, on 3/18/22\n\n0:00:02.3 Vael: Alright. So my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n0:00:08.7 Interviewee: Yes. More recently I've been working on AI for mathematical reasoning and applications in navigation, mathematical navigation.\n\n0:00:18.1 Vael: Great. And then, what are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n0:00:25.8 Interviewee: The biggest benefit of AI as a tool that now we have and that sort of works, is that it just enabled a lot of new applications when we are able to define goals for it. The broad story that I'm pursuing recently is like, suppose you have an AI expert for a certain domain, then how can you leverage that expert to teach people to be better at the same domain? And the reason why we're working more on mathematical education, it's exactly because the mathematical domains are easier to define as a formal problem. Training solvers for interesting mathematical domains is not possible with AI tools, that [hard to parse] not be so much 10 years ago. In a lot of other fields, it's also been the case that we as humans knew what we wanted and wrote down a bunch of heuristics to accomplish certain goals. But we were always sure that those weren't the best, and now AI is allowing us to replace a lot of those by just better algorithms.\n\nThe main worry is exactly when we don't know exactly what the goal is, or when we don't know how to specify it. 
Or in a lot of cases, in research, people have been stuck with these proxy tasks that are supposed to represent some kind of behavior that we want, but a lot of people are just unsure of what exactly what we want out of these systems. It's a worry for me that a lot of people are spending a lot of time and resources on those problems without knowing exactly what to expect. I come from sort of a systems background, so I'm more used to people having proxy tasks that are very directly related to the actual task. For example, in compilers, usually one goal is, make programs faster, and then you can create benchmarks. \"Okay, here is the set of programs, let's run your compiler and it optimize[?] and see how long it takes.\" And you can argue about whether those programs are representative of what actual programs that people will run it on, if the benchmark that I have is representative or not, but the goal was very clear, it's to make things faster. Now in language for example, in NLP, we have a lot of these tasks about language understanding and benchmarks that try to capture some form of understanding. But it's not entirely clear what the goal is, if we solve this benchmark-- and this is being revised all the time. People propose a benchmark that's hard for current-day models, then a few months later, someone comes up with a solution. Then people say, \"Oh, but that's actually not exactly what we wanted 'cause look, the model doesn't do this other thing.\" And then we enter this cycle of refining exactly what's the problem and developing models, but without a clear goal in a lot of cases.\n\n0:03:47.7 Vael: Got it. Okay, so that question was, what are you most excited about AI and what are you most worried about in AI? Could you summarize the culmination of all that?\n\n0:03:57.1 Interviewee: Yeah. It's centered around goals. So AI lets us pose richer and new kinds of goals more formally and optimize for those. When those are clear, those are cases where I'm very excited about. When they're not clear, then I'm worried about it.\n\n0:04:16.3 Vael: Got it. Cool, that makes sense. Awesome. Alright, so focusing on future AI, putting on a science fiction forecasting hat, say we're 50 plus years into the future. So at least 50 years in the future, what does that future look like?\n\n0:04:32.8 Interviewee: It can look like... a lot of... it can go in a lot of ways, I think. I think, in one possible future-- do you want me to just list the possible futures?\n\n0:04:48.1 Vael: I think I'm most interested in your realistic future, but also like optimistic, pessimistic; it's a free-form question.\n\n0:04:57.8 Interviewee: Okay. Yeah, maybe optimistically what would happen is that AI lets us solve problems that are important for society and that we just don't have the right tools to at this point. I think a lot of exciting applications in... In language, we can imagine some interesting applications where a lot of computer interfaces become much easier, just for doing complex tasks, to specify a natural language. Like programming right now gives a lot of power to people, but it also takes a lot of time to learn, so it would be awesome to enable a lot more people to automate tasks using natural language. So in one future, a lot of those applications will be enabled and things will be great. 
In another future, which is actually-- it's probably going to be a combination of these things, but a very likely future, which is already kind of happening, but maybe at a smaller scale than is possible, is that people will start replacing existing tools and systems with AI for not very clear reasons and get not very clear outcomes. And then we don't really know exactly how they're misbehaving in a lot of cases. And the thing is that the incentives for deploying the systems are at odds with broader societal goals in a lot of cases. So, for example, private companies like Google and Facebook, they have all the incentives possible to deploy these systems to optimize their metrics that ultimately correlate with revenue. I don't know.\n\nFacebook has a lot of metrics which ultimately relate to how much time people spend on the platform, and that translates to how much money they make. And they basically have thousands of engineers trying different combinations of features and things to suggest to users, and like very small tweaks a lot of times to try to optimize for those metrics. And a lot of that process became much faster with AI because now you can take much more fine-grained decisions for individual users. And sure, we can look at the metrics and see that, "Yeah, they keep improving, that's a new tool that now exists and enables them to optimize for this goal." At the same time, it's not exactly clear to me what is happening, that AI is choosing things for individual users. It might be like closing people off in their bubbles, it might be... all the things we know about, but also potentially a lot of other things, other behaviors. Exactly because they're not the same behavior for everyone, that also makes it harder to study and understand. So the likely future is that AI will replace a lot of these systems, but that will involve people losing control of exactly what it is doing. And that will probably come before us having a good understanding of what's going on, just because there is already an incentive to deploy those systems, like a financial incentive.\n\n0:08:40.8 Vael: Yeah, interesting. You're making a lot of the arguments that I often make later in this interview. Alright. This next thing is, so people talk about the promise of AI, by which they mean many things, but one thing that I think they mean, and that I'm talking about here, is the idea of having a very general, capable system, such that the systems would have cognitive capacities that we could use to replace all current-day human jobs. Which we could or could not, but the cognitive capacities could do that. And I usually think about this in the frame of like, here we have... in 2012, we have AlexNet and the deep learning revolution, and then 10 years later, here we are, and you've got systems like GPT-3, which have some weirdly emergent capabilities like text generation and language translation and math and coding and stuff. And so one might expect that if we continued pouring all the human effort that's been going into this, with nations competing and corporations competing and lots of young people and lots of talent and algorithmic improvements at the same rate we've seen and hardware improvements, maybe we'll get optical or quantum computing, then we might scale to very general systems. Or we might not, and we might hit some sort of ceiling and need to paradigm shift. 
But my question is, regardless of how we get there, do you think we'll ever have very general systems like a CEO or a scientist AI, and if so, when?\n\n0:11:00.9 Interviewee: Yeah. The thing that has been happening with these large language models, that they objectively become more and more capable, but we at the same time reassess what we consider to be general intelligence. That will probably keep happening for quite a while, I would say at least like 10-15 years, just because...I don't think we know exactly what... Human intelligence is still very poorly understood in a lot of ways. It's not clear that higher intelligence is possible without some of the shortcomings that human intelligence has. So I think that cycle of getting more powerful systems but also realizing some of their shortcomings and saying, \"Oh, maybe...\" Basically shifting the goal and what is it that we consider general intelligence will probably keep happening.\n\n0:12:07.1 Vael: Yeah, my question is, do you think that we'll get systems like a CEO AI or a scientist AI or be able to replace most cognitive jobs at some point, and when that will be?\n\n0:12:18.6 Interviewee: Yeah. Let's see. We'll probably have systems that can act like a CEO or a scientist. Whether they'll have the same goals that a CEO or a scientist have, like a human CEO or a human scientist have in the real world, that is a different question, which is not that clear to me.\n\n0:12:47.0 Vael: Interesting. Yeah. So there's the question of whether AIs develop like consciousness, for example. So I'm assuming not consciousness, and I'm like, \"Okay, say we have a system that can do multi-step planning, it can do models of other people modeling it, it's interacting with humans by text and video, or whatever,\" and I'm like, \"Alright, I need you to solve cancer for me, AI,\" or, \"I want you to run this company,\" and we put in its goal, optimization function so that it just does that at some point. Yeah.\n\n0:13:24.4 Interviewee: Yeah. You're right. So you asked about the cognitive capability. I think that will probably be there in like 30-40 years, would be my estimate. For at least a subset of what it means to run a company. Now, my question is, if we have a system that does that, is it also possible for the system to simply obey everything we tell it to do? Because, of course, a person is capable of doing that, but you wouldn't expect to tell the person, \"Okay, go do that.\" They might complain. They might say \"No, I'm in a bad mood today,\" or something like that.\n\n0:14:03.3 Vael: Yeah, and I don't even think you could tell any given human... Like if someone is like, \"Alright, Vael, go be the CEO of a company,\" I'd be like, \"What? I don't know how to do that.\"\n\n0:14:09.9 Interviewee: Exactly, exactly. Yeah.\n\n0:14:12.0 Vael: Yeah, yeah. Interesting. Okay, so that leads into my next set of questions. So imagine we have a CEO AI and I'm like, \"Alright, CEO AI, I wish for you to maximize profits, and try not to run out of money and try not to exploit people and try to avoid side effects.\" And so currently, this would be very technically challenging for a number of reasons. One reason is that we aren't very good at taking human values and preferences and stuff and putting them into mathematical formulations such that AI can optimize over. 
And so I worry that AI in the future will continue to do this, and then we'll have AI that continue to do what we tell them to do instead of what we intend them to do, but as our expressions get more and more ambiguous, and higher scale and operating on more of the world. And so what do you think of the argument, \"Highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\"\n\n0:15:22.1 Interviewee: Yeah, if I understand what you're saying, that is very clear to me. That... specifying... If you just hope to give a short description of what you want, you'll most likely fail.\n\n0:15:35.9 Vael: Even a long definition would be fine. I just want there to be any system of putting what we want into the AI.\n\n0:15:46.0 Interviewee: Yeah, so I think there might be a safe way to do that, but it might require a lot of interaction and will be hard. I have thought a little bit about that, because I was thinking about the question of how do we... So if we want to use these systems for interfacing with people, then we have to let them handle ambiguity in a way that's similar to how people do. And in cognitive science, there is a lot of knowledge about the mechanisms that people use to resolve ambiguity, like there is context, prior knowledge, common ground, and all that. Some of that could be replicated now to fully align the way that AI resolves ambiguity with how people do it. We'll also probably take a lot of work in understanding how people do it, which is not there yet.\n\n0:16:50.3 Vael: Interesting. Interesting. Do you think that we'll have to know enough about human psychology? Like, what fields need to advance in order for us to have systems that do what we intend them to do?\n\n0:17:06.1 Interviewee: Yeah, if you're talking about aligning a system with human values, part of that problem is understanding human values and like how people agree on values. And there's just so much that we assume that we don't have to tell people, it's assumed that they were created in the same way and exposed to the same contexts. All that goes away with AI, or at least most of it. And I'm not an expert in these fields, so I might not know the exact names. I guess pragmatic reasoning is one thing that we have a lot of high-level insights on how it works, but to operationalize it into an NLP system, for example, we're still very far away from that.\n\n[Person] worked on this model for pragmatic reasoning, which can explain very neatly some very simple cases. Like, you have a few words and a few people, and some words describe these people. You have a three-by-three matrix that predicts exactly what people would guess the word is referring to with very high accuracy. But just extending that to a sentence and the three people to be an image is a very complicated problem. So I think that is something that would need to advance a lot. Other fields of psychology that try to understand ambiguity like of cognitive psychology. Yeah. I don't know their names, but I think... Oh well, yeah. I guess another one in linguistics that is relevant for this is human convention formation. Like when we are talking to a person over time, we develop these partner-specific conventions. That's very natural, enables very efficient communication. And if you're writing a description of what you want in a long form to an AI-- Oh, if you're doing so for a person, you assume that the person is picking up on what we want and forming the same conventions over time. 
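[Editor's note: the pragmatic-reasoning model the interviewee describes above (a few words, a few referents, and a small matrix predicting what a listener will guess a word refers to) is commonly formulated as a rational-speech-acts style reference game. Below is a minimal sketch under that assumption; the words, referents, lexicon, uniform prior, and rationality parameter are made up for illustration, not taken from the interview.]

```python
import numpy as np

# Toy reference game: 3 words, 3 referents, and a binary lexicon saying which
# words literally apply to which referents. All values here are illustrative.
words = ["glasses", "hat", "person"]
referents = ["person_with_glasses", "person_with_hat", "plain_person"]

lexicon = np.array([
    [1, 0, 0],   # "glasses" is literally true only of the person with glasses
    [0, 1, 0],   # "hat" is literally true only of the person with a hat
    [1, 1, 1],   # "person" is literally true of everyone
], dtype=float)

prior = np.ones(3) / 3     # uniform prior over referents
alpha = 1.0                # speaker rationality

# Literal listener: P(referent | word) proportional to literal truth times prior.
literal_listener = lexicon * prior
literal_listener /= literal_listener.sum(axis=1, keepdims=True)

# Pragmatic speaker: prefers words that make the literal listener guess the intended referent.
speaker = np.exp(alpha * np.log(literal_listener + 1e-12)).T   # rows: referents, cols: words
speaker /= speaker.sum(axis=1, keepdims=True)

# Pragmatic listener: reasons about which referent would make the speaker choose this word.
pragmatic_listener = speaker.T * prior
pragmatic_listener /= pragmatic_listener.sum(axis=1, keepdims=True)

for i, w in enumerate(words):
    print(f"hearing '{w}' ->", dict(zip(referents, np.round(pragmatic_listener[i], 2))))
# The ambiguous word "person" resolves mostly to the plain person, since the speaker
# would have said "glasses" or "hat" otherwise -- the kind of ambiguity resolution
# discussed in this part of the interview.
```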
So at first you might say, \"I want to optimize the revenues for XYZ company.\" And then later you might say \"the company,\" dropping the XYZ, but you assume that-- the only company that I mentioned was XYZ, so the only interpretation possible for a company is attribute the XYZ company. Well, that's because humans form conventions that way, but if you have kind of an adversary AI, then suddenly that's not an assumption you can build off of anymore. So the ways in which humans communicate with conventions, with ambiguity and all the computational tools that we need to do that are open problems, if that makes sense.\n\n0:20:05.3 Vael: Yeah. That does seem right. Thanks. Alright, so this next question is still about the CEO AI. So imagine we have the CEO AI that is capable of multi-step planning and has a model of itself in the world. So it's modeling other people modeling it because that seems really important in order to have a CEO. So it's making these plans for the future and it's trying to optimize for profit with the constraints I've mentioned. And it's noticing that some of its plans fail because the humans shut it down. And so we built into this AI and its loss function that it has to get human approval for things because it seems like a basic safety measure. (Interviewee: \"Sorry, it has to get what?\") It has to get approval for any action from humans, like a stakeholder-type thing. And so the humans have asked for a one-page memo from the AI just to discuss what it's doing. And so the AI is thinking about what to put in this memo, and it's like, \"Maybe I should omit some relevant information that the humans would want because that would reduce the chance that the AIs [note: I meant to say \"humans\"] would shut me down and increase the likelihood of my plans succeeding, of optimizing my goal.\" And so in this case, we're not building self-preservation into the AI, what we're doing is having an AI that's an agent that is optimizing over a goal. And so instrumentally, it has the goal of self-preservation. So what do you think of the argument, \"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous?\"\n\n0:21:30.7 Interviewee: Yeah. I think if we get systems that have that capability to do the things that we said in the beginning, like planning, in the most extreme form, then I agree with this statement.\n\n0:21:44.6 Vael: Interesting. Do you think we'll ever-- hm, so... We were talking about whether you thought that we would get to that point. What was the answer to that?\n\n0:21:54.9 Interviewee: The answer is that I think so, although I'm not sure to what extent. I'm not sure if the most powerful possible version will actually happen in 30-40 years. But I think there's, there's like a small chance, but-- it's not impossible.\n\n0:22:11.1 Vael: Interesting. So yeah, within our lifetimes. Yeah, so I'm worried about the possibility that we don't get the alignment problem right per se. And we end up with systems that are now optimizing against humans, which seems bad, especially if they are smarter than humans. Yeah. 
How likely do you think this sort of scenario would happen?\n\n0:22:34.7 Interviewee: The scenario of having super-capable AI that is—\n\n0:22:38.3 Vael: That has an instrumental incentive to do self-preservation or other instrumental incentives like power-seeking or acquiring resources or improving itself.\n\n0:22:48.1 Interviewee: Yeah, I think it is small but not zero.\n\n0:22:57.9 Vael: Yeah. How bad do you think that would be if such a thing happened?\n\n0:23:03.4 Interviewee: Yeah, I think pretty bad.\n\n0:23:05.9 Vael: Pretty bad. Yeah. Like what would happen?\n\n0:23:10.4 Interviewee: Yeah, it's not exactly clear to me how that would look like. But it is a little scary to think about humans losing control in some ways. The worry is exactly that we can't... like if we're not under control, we can't know exactly what comes.\n\n0:23:30.6 Vael: Yeah. That seems true. Have you heard of the-- What does AI safety mean to you? That's my first question.\n\n0:23:39.3 Interviewee: AI safety. I understand it as a field that... When I think of AI safety, the first defining problem that comes for me is AI alignment. This question, \"How do you specify your goals in a way that aligns with human values?\" and all that.\n\n0:24:00.6 Vael: That makes sense. Yeah. When did you start learning about AI alignment or when did you start caring about it?\n\n0:24:05.9 Interviewee: Ah, it was mostly when I came to the [position] at [university], because I really wasn't that much of an AI person before. I kind of incidentally got into AI because it gave me tools to think about problems that were hard before. And I happened to get a bit more involved with the AI safety community. [Friend's name], you might know how him. (Vael: \"Ah! Nice.\") We're very close friends. We lived together for almost three years, and had a lot of conversations.\n\n[short removed segment, discussing situation and friend]\n\n0:25:09.2 Vael: Yeah. Some visions that I have, why it could be bad, is if we have an AI that's like very powerful, then if it's incentivized to get rid of humans, then maybe it's intelligent enough to do synthetic biology against humans, or nanotechnology or whatever, or maybe just increase the amount of pollution. I don't know. I just feel like there's ways in which you could make the environment uninhabitable for humans via like putting things in the water or in the air. It'd be pretty hard to do worldwide, but seems maybe possible if you're trying. Yeah. Also we've got like misuse and AI-assisted war, and like maybe if we put AI in charge of like food production or manufacturing production, then we can have a lot of correlated supply chain failures. That's another way, but I don't know if that would kill everyone of course, but that could be quite bad.\n\n0:26:12.5 Interviewee: Yeah, that is something that I worry more about in the short-term, which is AI in the hands of... Even currently, AI can already be pretty bad with like spreading misinformation. That certainly happened here in [country], but in [other country] in the last elections, where this really bad president got elected, he used a lot of misinformation campaigns funded by a lot of people. And it's super hard to track because it's on WhatsApp and it's like the double-edged sword, the sword of privacy. It aligns with things that we care about, but which also allows these campaigns to run at massive scale and not be tracked. 
Even currently, AI personalizing wordings and messages to different people at large scale could be really pretty [hard to parse, \"good\"?].\n\n0:27:20.1 Vael: Got it. Do you work on AI alignment stuff?\n\n0:27:26.0 Interviewee: So currently, I'm working more on the education side of research. I still keep in the back of my head some of the problems about the ambiguity thing. We did have one paper last year that started building some of those ideas but I'm not actively working on that. But it's still like a realm of problems that I think is important for a number of reasons.\n\n0:27:56.2 Vael: Yeah. That makes sense. What would cause you to work on it?\n\n0:28:04.4 Interviewee: Sorry, what what would make me work on it?\n\n0:28:08.3 Vael: Yeah, like in what circumstance would you be like: Whelp, it is now five year later, it is now one year later, and I happen to be somehow working on some element of the AI alignment problem. Or AI safety or something. How did that happen?\n\n0:28:27.8 Interviewee: Yeah, it's interesting. From my trajectory, I have always tried to work on the thing that I feel is not being done and that I'm in a certain position to do. And that has over time shifted towards education for a number of reasons. And I don't know exactly where that comes from, but I still feel that I'm more able to contribute there than with other AI problems. Although I guess that kind of thing is hard to assess, but it's still my feeling on when deciding what things to work on.\n\n0:29:19.9 Vael: Hm, so it sounds like there's maybe something like... Okay, so I don't quite know if it's a matter of whether the education bit is neglected by the world, or whether it's like, you feel like it would be a better fit per se, or both?\n\n0:29:35.6 Interviewee: Yeah, it's the combination of both. Like if it's neglected. It is important, important in the short-term. And I think there are like things that I could do in the next five years. If the plan that I have for the [position] I have works out, I think it would be great. Like we would have tools that people would be able to use, and they need technical advances that are in the intersection of things that I care about or know for very weird reasons. But I feel like I'm in a position to make progress there and very immediately. So yeah, I just feel very urged to see how that goes.\n\n0:30:21.3 Vael: Yeah. Makes sense. Yeah, do you ever think you'll go get into the AI alignment thing or you'll just be like, \"Woo, seems good for someone to work on it?\" I get that-- that's an impression that I get.\n\n0:30:32.6 Interviewee: Uh huh, yeah. I think I'm getting a taste of it through [friend]. We actually wrote this project together two months ago. Okay, I actually met with [friend] to discuss this a few times but I have no idea where it's exactly came from, I think from someone at LessWrong that posted like a challenge. So it's a program called ELK, which I forgot even what ELK means, but I met with [friend] and we ended up coming up with some ideas, and he submitted it. [...]\n\n0:31:16.9 Interviewee: So I guess I'm getting a little bit of flavor of the kind of work that that entails. I still don't feel exactly comfortable in doing it myself just because it requires this very hypothetical and future reasoning that I'm not used to, not that I couldn't. But I understand that thinking about that kind of problem today requires you to think of systems that we can't use today and we can't run and test today. 
And I'm very used to ideas that I can write down the code and see how they work and then make progress from there. So I don't know, my opinion and comfort might change.\n\n0:32:18.5 Vael: Yeah, I also encountered this where I was like, \"I don't wanna think about far-future programs or far-future things, where there's no feedback loops and it's very hard to tell what's happening and there's only a limited number of people, and then-- now there's money, but there wasn't necessarily very much money before-- and there's not really journals, and everything's pre-pragmatic.\" And then, I think I was won over by the argument that, I don't know, I think existential risk is probably the thing that I'll just work on for most of my life because it seems like really important, and like, probable. But then I tried technical research for a while, and I thought it was not my cup of tea. And then I got into AI community building, which is what I'm kind of currently doing, and trying to get more people to become AI technical researchers. But not me! It's been a good fit in some sense.\n\n0:33:10.1 Interviewee: I see, that makes sense.\n\n0:33:11.1 Vael: Yeah, and I think it's pretty interesting as well. Well, the current thing I'm doing is talking to AI researchers and being like, \"What do you think of things? What about this argument?\" ...Cool. Well, we've got a little bit of time left, so, see if I can ask some of my more unusual questions, which are... How much do you think about policy and what do you think about policy related to AI?\n\n0:33:34.6 Interviewee: Yeah, I think I've... I think a lot about policy in general, about just related to AI. I'm currently at this phase in my life where I think a lot of the immediate problems that we have, unfortunately in a sense, we'll have to resolve by good politics, and I'm not exactly sure yet how to do that. Because one other thing that I think about, when I think about AI safety, is that one thing is to have the technical ability to do certain things. The other is to get the relevant players to implement them. So, if there is an AI safety mechanism that can like limit the reach of Google's super AI that will do all things, but it also means that it will decrease revenue to some extent... I'm not sure the decision that they will make which would be to implement the safe version. Think about VC's that are sitting in these boards and pushing decisions for the company. I'm not sure if all of them actually care about this and realize that it's a problem. So I think part of the solution is also on the political side, of like us as a society sitting down and deciding, \"What do we gotta do?\" More than just the technical proposal. I've decided that this year, I will try to have some political involvement in [country], which is a place where I can. I'm currently in the planning phase, but I've committed to doing something this year, because we have elections this year.\n\n0:35:39.2 Vael: Interesting, yeah. I think AI governance is very, very important. I think the community-- the AI safety community also, like... or, I don't know, the Effective Altruism community [hard to parse] a lot in their listings. Alright, question: if you could change your colleagues' perceptions of AI, what attitudes or beliefs would you want them to have? 
So what beliefs do they currently have and how would you want those to change?\n\n0:36:05.9 Interviewee: I'm not sure if this is the kind of attitude that you're talking about, but one attitude that I really don't like and is very prevalent in papers is that people kind of self-identify with the method that they're proposing so much that they call it \"our method\". In the sense that the goal should not necessarily be to show off better numbers, and show a curve where the color that's associated with you is like higher than the other. But it should really be to understand what's going on, right? If we have a system that has some sort of behavior that's desirable, that's awesome, and we should also look at how it maybe doesn't. Like people include a lot of cases in their papers where, \"Oh look, the model is doing this.\" But then when we actually try out the model, it's very easy to find cases where it doesn't or it fails in weird ways. And the reluctance to put that comes from this attachment from like, \"Oh, this is like me, but in form of a method that is written in this paper.\" And I wish that people didn't have that attitude, that they were more detached and scientific about it.\n\n0:37:24.9 Vael: Yeah, makes sense. Cool. And then my last question is, why did you choose this interview, and how has this interview been for you?\n\n0:37:32.6 Interviewee: Yeah, it's been really fun. I had thought of some of the questions before, and not of others. Yeah, the exercise that I maybe haven't done that much is to think super long-term, like 50-100 years in the future. I thought about it some but I don't think I have the right tools yet to think about them, so I get in this hard-to-process state. Yeah, so I came mostly out of curiosity, I didn't know exactly what to expect. If you don't mind sharing, but if you do, that's okay, how did you pick the people to interview?\n\n0:38:28.3 Vael: Yeah, so this is a pretty boring answer, but I originally selected people who'd submitted papers to NeurIPS or ICML in 2021, and like yeah, then some proportion of people replied back to me.\n\n0:38:44.5 Interviewee: I see.\n\n0:38:47.2 Vael: Yeah, great, cool. That makes a lot of sense. Thanks you so much for your time. I will send you the money and some additional resources if you're curious about it. I'm sure [friend] has already shown you some, but if you're curious. And thanks so much for doing this.\n\n0:39:00.7 Interviewee: Yeah, no, definitely. Thank you for the invite, this was fun.\n\n0:39:05.1 Vael: All right. Bye.\n\n0:39:05.8 Interviewee: Bye.\n", "url": "n/a", "docx_name": "NeurIPSorICML_a0nfw.docx", "id": "9c6779a8dccdc6e8ebfea7c346a48dd3"} {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": [], "date_published": "n/a", "text": "Interview with cvgig, on 3/24/22\n\n0:00:02.5 Vael: Awesome. Alright. So my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n0:00:09.8 Interviewee: Yeah. So I'm what's technically called a computational neuroscientist, which is studying, using mathematics, AI and machine learning techniques to study the brain. Rather than creating intelligent machines, it's more about trying to understand the brain itself. And I study specifically synaptic plasticity, which is talking about how the brain itself learns.\n\n0:00:44.0 Vael: So these questions are like, AI questions, but feel free to like-- (Interviewee: \"No, go ahead.\") Okay, cool. Sounds good. Alright. 
What are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n0:00:55.9 Interviewee: Right. So in terms of benefits, I think that my answer might be a little bit divergent again, because I'm a computational neuroscientist. But I think that AI and the tools surrounding AI give us a huge amount of power to understand both the human brain, cognition itself, and more general phenomena in the world. I mean, you see AI used in physics and in other areas. I think that it is just a very powerful tool in general for building understanding. In terms of risks, I think that it's, again, by virtue of being a very powerful tool, also something that can be used for just a huge number of nefarious things like governmental surveillance, to name one, military targeting technology and things like that, that could be used to kill or harm or disenfranchise large numbers of people in an automated way.\n\n0:02:04.2 Vael: Awesome, makes sense. Yeah, and then focusing on future AI, putting on a science fiction forecasting hat, say we're 50-plus years into the future. So at least 50 years in the future, what does that future look like? This is not necessarily in terms of AI, but if AI is important, then include AI.\n\n0:02:22.6 Interviewee: Yeah, so 50-plus years in the future. I always have trouble speculating with things like this. [chuckle] I think it'll be way harder than people tend to be willing to extrapolate. And also, I think that AI is not going to play as large of a role as someone might think. I think that... I don't know, I mean in much the same way, I think it'll just be the same news with a different veneer. So we'll have more powerful technology, we'll have artificial intelligence for self-driving cars and things like that. I think that the technologies that we have available will be radically changed, but I don't think that AI is really going to fundamentally change the way that people... Whether people are kind or cruel to one another, I guess. Yeah, is that a good answer? I don't know. [chuckle]\n\n0:03:21.8 Vael: I'm looking for your answer. So...\n\n0:03:26.3 Vael: Yes. 50 years in the future, you're like, it will be... Society will basically kind of be the same as it is today. There will be some different applications than exists currently.\n\n0:03:36.4 Interviewee: Yeah, unless it's... It's perfectly possible society will utterly collapse, but I don't really think AI will be the reason for that. [chuckle] So, yeah, right.\n\n0:03:47.5 Vael: What are you most worried about?\n\n0:03:50.9 Interviewee: In terms of societal collapse? I'd say climate change, pandemic or nuclear war are much more likely. But I don't know, I'm not really betting on things having actually collapsed in 50 years. I hope they don't, yeah. [chuckle]\n\n0:04:07.0 Vael: Alright, I'm gonna go on a bit of a spiel. So people talk about the promise...\n\n0:04:10.7 Interviewee: Yeah, yeah.\n\n0:04:12.6 Vael: [chuckle] Yeah, people talk about the promise of AI, by which they mean many things, but one of the things they may mean is whether... The thing that I'm referencing here is having a very generally capable system, such that you could have an AI that has the cognitive capacities that could replace all current day jobs, whether or not we choose to have those jobs replaced. 
And so I often think about this within the frame of like 2012, we had the deep learning revolution with AlexNet, and then 10 years later, here we are and we have systems like GPT-3, which have some weirdly emergent capabilities, like they can do some text generation and some language translation and some coding and some math.\n\n0:04:42.7 Vael: And one might expect that if we continue pouring all of the human effort that has been going into this, like we continue training a whole lot of young people, we continue pouring money in, and we have nations competing, we have corporations competing, that... And lots of talent, and if we see algorithmic improvements at the same rate we've seen, and if we see hardware improvements, like we see optical or quantum computing, then we might very well scale to very general systems, or we may not. So we might hit some sort of ceiling and need a paradigm shift. But my question is, regardless of how we get there, do you think we'll ever get very general systems like a CEO AI or a scientist AI? And if so, when?\n\n0:05:20.6 Interviewee: Yeah, so I guess this is somewhat similar to my previous answer. There is definitely an exponential growth in AI capabilities right now, but the beginning of any form of saturating function is an exponential. I think that it is very unlikely that we are going to get a general AI with the technologies and approaches that we currently have. I think that it would require many steps of huge technological improvements before we reach that stage. And so things that you mentioned like quantum computing, or things like that.\n\n0:06:00.1 Interviewee: But I think that fundamentally, even though we have made very large advances in tools like AlexNet, we tend to have very little understanding of how those tools actually work. And I think that those tools break down in very obvious places, once you push them beyond the box that they're currently used in. So, very straightforward image recognition technologies or language technologies. We don't really have very much in terms of embodied agents working with temporal data, for instance. I think that...\n\n0:06:42.2 Interviewee: I essentially think that even though these tools are very, very successful in the limited domains that they operate in, that does not mean that they have scaled to a general AI. What was the second half of your question? It was like kind of, Given that we... Do you have it what it'll look like, or...\n\n0:06:57.4 Vael: Nah, it was actually just like, will we ever get these kind of general AIs, and if so, when? So...\n\n0:07:03.2 Interviewee: Yeah, so I would essentially say that it's too far in the future for me to be able to give a good estimate. I think that it's 50 plus years, yeah.\n\n0:07:13.2 Vael: 50 plus years. Are you thinking like a thousand years or you're thinking like a hundred years or?\n\n0:07:19.7 Interviewee: I don't know. I mean, I hope that it's earlier than that. I like the idea of us being able to create such things, whether we would and how we would use them. I would not, [chuckle] I don't think I would want to see a CEO AI, [chuckle] but there are many forms of general artificial intelligence that could be very interesting and not all that different from an ordinary person. And so I would be perfectly happy to see something like that, but I just, you know, and I guess in some sense, my work is hopefully contributing to something along those lines, but I don't think that I could guess when it would be, yeah.\n\n0:08:00.2 Vael: Yeah. 
Some people think that we might actually get there via just scaling, like the scaling hypothesis: scale our current deep learning systems, more compute, more money, more efficient use of data, more efficiency in general, yeah. And do you think this is like basically misguided or something?\n\n0:08:15.9 Interviewee: Yeah, let me take a moment to think about how to articulate that properly. I think... Yeah, you know, let me just take a moment. I think that when you hear people like, for instance, Elon Musk or something along these lines saying something like this, it reflects how a person who is attempting to get these things to come to pass and has a large amount of money would say something, right. It's like, what I'm doing is I'm pouring a large amount of money into this system and things keep on happening, so I'm happy with that. But I think that from my position of seeing how much work and effort goes into every single incremental advance that we see, I think that it's just, there are so many individual steps that need to be made and any one of them could go wrong and provide, essentially, a fundamental ceiling on the capabilities that we're able to reach with our current technologies. And so it just seems a little, a little hard to extrapolate that far in the future.\n\n0:09:25.5 Vael: Yeah. What kind of things do you think we'll need in order to have something like, you know, a multi-step planner that can do social modeling, can model all of the things modeling it, that kind of level of generality?\n\n0:09:35.5 Interviewee: Yeah. So I think that one of the main things that has made vision technologies work extremely well is massive parallelization in training their algorithms. And I think that what this reflects is the difficulty involved in training a large number... So essentially, when you train an algorithm like this, you have a large number of units, like neurons in the brain or something like that, that all need to change their connections in order to become better at performing some task. And two things really tend to limit these types of algorithms: the size and quality of the data set that's being fed into the algorithm, and just the amount of time that you are running the algorithm for. So it might take weeks to run a state-of-the-art algorithm and train it now. And you can get big advances by being able to train multiple units in parallel and things like that.\n\n0:10:33.5 Interviewee: And so I think that the easiest way to get very large data sets and have everything run in parallel is with specialized hardware that, you know, people would call wetware or neuromorphic computing or something along those lines. Which is currently very, very new and has not really, as far as I know, been used for anything particularly revolutionary up to this point. You can correct me if I'm wrong on that. I would expect that you would have to have essentially embodied agents-- a system that is learning and perceiving at the same time-- before you could get general intelligence.\n\n0:11:12.5 Vael: Well, yeah, that's certainly very interesting to me. So, it's not... So people are like, \"We definitely need hardware improvements.\" And I'm like, \"Yup, current day systems are not very good at stuff.
Sure, we need hardware improvements.\" And you're saying, are you saying we need to like branch sideways and do wetware-- these are like biological kind of substrates, or are they different types of hardware?\n\n0:11:37.3 Interviewee: I guess different types of hardware is maybe the shorter term goal on something like that. Like you would expect circuits in which individual units of your circuit look a little bit like neurons and are capable to adapt their connections with one another, running things in parallel like that can save a lot of energy and allows you to kind of train your system in real time. So it seems like that has some potential, but it's such a new field that, this is when I, when I think about what time horizon you would need for something like this to occur, it seems like you would need significant technological improvements that I just don't know when they'll come.\n\n0:12:20.4 Vael: Yeah. So I haven't heard of this wetware concept. So like it's a physical substrate that like... It like creates, it creates new physical connections like neurons do or it just like, does, you know...\n\n0:12:33.5 Interviewee: No, it doesn't create physical connections. You could just imagine this like... So, you know, computer systems have programs that they run in kind of an abstract way.\n\n0:12:43.8 Vael: Yep.\n\n0:12:44.8 Interviewee: And the hardware itself is logic circuits that are performing some kind of function.\n\n0:12:48.9 Vael: Yep.\n\n0:12:49.8 Interviewee: And neuromorphic computing is individual circuits in your computer have been specially designed to individually look like the functions that are used in neural networks. So you have... Basically, the circuit itself is a neural network, and because you don't have these extra layers of programming added in on top, you can run them continuously and have them work with much lower energy and stuff like that. It's just... It's limiting because they can't implement arbitrary programs, they can only do neural network functions, and so it's kind of like a specialized AI chip. People are working on developing that now... Yeah.\n\n0:13:32.7 Vael: Okay, cool, so this is one of the new hardware-like things down the line. Cool, that makes sense. Alright, so you'd like to see better hardware, probably you'd say that you'd probably need more data, or more efficient use of data. Presumably for this-- because the kind of continuous learning that humans do, you need to be able to have it acquire and process continuous streams of both image and text data at least. Yeah, what else is needed?\n\n0:14:03.8 Interviewee: Oh, I think that... Yeah, more fundamentally than either of those things. It's just the fact that we don't understand what these algorithms are doing at all. And so we're... You can train it, you can train an algorithm and say, \"Okay, you know, it does what I want it to do, it performs well,\" and most machine learning techniques are not very good at actually interrogating what a neural network is actually doing when it's processing images. And there are many instances recently, I think the easiest example is adversarial networks, if you've heard of those?\n\n0:14:41.8 Vael: Mm-hmm.\n\n0:14:42.2 Interviewee: I don't know what audience I'm supposed to be talking to in this interview.\n\n0:14:46.4 Vael: Yeah, just talk to me I think.\n\n0:14:49.2 Interviewee: Okay, okay.\n\n0:14:50.1 Vael: I do know what adversarial... Yeah.\n\n0:14:52.9 Interviewee: Okay, so, adversarial networks are... 
You perturb images in order to get your network to output very weird answers. And the ability of making a network do something like that, where you are able to change its responses in a way that's very different from the human visual system by artificial manipulations, makes me worried that these systems are not really doing what we think they're doing, and that not enough time has been invested in actually figuring out how to fix that, which is currently a very active area of research, and it's partly limited by the data sets that we've been showing our neural networks. But I think in general, there's been too much of an emphasis on getting short-term benefits in these systems, and not enough effort on actually understanding what they're learning and how they work.\n\n0:15:43.5 Vael: That makes sense. Do you think that the trend... So if we're at the point where people are deploying things that you don't understand very well, do you think that this trend will continue and we'll continue advancing forward without having this understanding, or do you think it would catch up or...\n\n0:16:00.4 Interviewee: Yeah, well, I think it's reflective of the huge pragmatic influence that is going on in machine learning, which is essentially, corporations can make very large amounts of money by having incremental performance increases over their preferred competitors. And so, that's what's getting paid right now. And if you look at major conferences, the vast majority of papers are not probing the details of the networks that they're training, but are only showing how they compare it to competitors. They'll say, \"Okay, mine does better, therefore, I did a good job,\" which is really not... It's a good way to get short-term benefits to perform, essentially, engineering functions, but once you hit a boundary in the capabilities of your system, you really need to have understanding in order to be able to be advanced further. And so I really think it's the funding structure, and the incentive structure for the scientists that's limiting advancement.\n\n0:17:02.2 Vael: That makes sense. Yeah, and again, I hear a lot of thoughts that the field is this way and they have their focus on benchmarks is maybe not... and incremental improvements in state-of-the-art is not necessarily very good for... especially for understanding. When I think about organizations like DeepMind or OpenAI, who're kind of exclusively or... explicitly aimed at trying to create very capable systems like AGI, they... I feel like they've gotten results that I wouldn't have expected them to get. It doesn't seem like you could should just be able to scale a model and then you get something that can do text generation that kind of passes the Turing Test in some ways, and do some language translation, a whole bunch of things at once. And then we're further integrating with these foundational models, like the text and video and things. And I think that those people will, even if they don't understand their systems, will continue advancing and having unexpected progress. What do you think of that?\n\n0:18:09.6 Interviewee: Yeah, I think it's possible. I think that DeepMind and OpenAI have basically had some undoubtedly, extremely impressive results, with things like AlphaGo, for instance. What's it called, AlphaStar, the one that plays StarCraft. There are lots of really interesting reinforcement learning examples for how they train their systems. Yeah, I think it just remains to be seen, essentially. 
It would be nice-- Well, maybe it wouldn't be nice, it would be interesting to see if you can just throw more at the system, throw more computing capabilities at problems, and see them end up being fixed, but I...\n\n0:19:04.0 Interviewee: I'm just skeptical, I guess. It's not the type of work that I want to be doing, which is maybe biasing my response, and I don't think that we should be doing work that does not involve understanding, both for ethical reasons and for advancing general intelligence-- for the reasons that I stated, that essentially, if you hit a wall you'll get very stuck. But yeah, you're totally right that there have been some extremely, extremely impressive examples in terms of the capabilities of DeepMind. And, yeah, there's not too much to be said for me on that front.\n\n0:19:46.8 Vael: Yeah. So you said it would be interesting, you don't know if it would be nice. Because one of the reasons that it maybe wouldn't be nice is that you said that there's ethical considerations. And then you also said there's this other thing; if you don't understand things, then when you get stuck, you really get stuck though.\n\n0:20:01.5 Interviewee: Yeah.\n\n0:20:04.4 Vael: Yeah, it seems right. I would kind of expect that if people really got stuck, they would start pouring effort into interpretability work for other types of things.\n\n0:20:12.7 Interviewee: Right. You would certainly hope so. And I think that there has been some push in that direction, especially there's been a huge... I keep on coming back to the adversarial networks example, because there have actually been a huge number of studies trying to look at how adversarial examples work and how you can prevent systems from being targeted by adversarial attacks and things along those lines. Which is not quite interpretability; it's still kind of motivated by building secure, high-performance systems. But I think that you're right, essentially, once you hit a wall, things come back to interpretability. And this is, again, circling back to this idea that every saturating function looks like an exponential at the beginning: deep learning is currently in a period of rapid expansion, and so we might be coming back to these ideas of interpretability in 10 years or so, and we might be stuck in 10 years or so, and the question of how long it'll take us to get general artificial intelligence will seem much more inaccessible. But who knows.
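[Editor's note: a minimal, hypothetical sketch of the kind of adversarial perturbation discussed above -- a fast-gradient-sign-style attack that nudges an image in the direction that most increases a classifier's loss, so a small, nearly invisible change can flip the output. The model, images, labels, and epsilon below are placeholders, not anything from the interview.]

import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    # Assumes images are scaled to [0, 1]; epsilon controls the perturbation size.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch (classifier, images, labels are hypothetical):
# adv = fgsm_perturb(classifier, images, labels)
# print(classifier(images).argmax(1), classifier(adv).argmax(1))  # predictions often disagree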
\n\n0:21:26.8 Vael: Interesting. Yeah, when I think about the whole of human history or something, like 10,000 years ago, things didn't change from lifetime to lifetime. And then here we are today, where we have probably been working on AI for under 100 years, like about 70 years or something, and we made a remarkable amount of progress in that time in terms of the scope of human power over the environment, for example. So yeah, there certainly have been several boom-and-bust cycles, so I wouldn't be surprised if there is a bust cycle for deep learning. Though I do expect us to continue on the AI track just because it's so economically valuable, especially with all the applications that are coming out.\n\n0:22:04.1 Interviewee: Yeah, you don't have to be getting all the way to AI-- general artificial intelligence-- for there to be plenty of work to be done. There are hundreds of untapped ways to use, I'm sure, even basic AI that are currently the reason that people are getting paid so well in the field, and there's a lack of people to be working in the field, so there's... I don't know, there are tons of opportunities, and it's gonna be a very long time before people get tired of AI. So yeah, that's not gonna happen anytime soon.\n\n0:22:36.6 Vael: True. Alright, I'm gonna switch gears a little bit, and ask a different question. So now, let's say we're in whatever period we are where we have these advanced AI systems. And so we have a CEO AI. And a CEO AI can do multi-step planning and has a model of itself and of people modeling it-- and here we are, yeah, assume that happens. And so I'm like, \"Okay, CEO AI, I wish for you to maximize profits for me and try not to run out of money and try not to exploit people and try to avoid side-effects.\" And obviously we can't do this currently. But I think one of the reasons that this would be challenging now, and in the future, is that we currently aren't very good at taking human values and preferences and goals and turning them into optimizations-- or, turning them into mathematical formulations such that they can be optimized over. And I think this might be even harder in the future-- there's a question, an open question, whether it's harder or not in the future. But I imagine as you have AI that's optimizing over larger and larger state spaces, which encompass like reality and the continual learners and such, that they might have alien ways of... That there's just a very large shared space, and it would be hard to put human values into them in a way such that the AI does what we intended it to do instead of what we explicitly tell it to do.\n\n0:23:57.9 Vael: So what do you think of the argument, \"Highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\"\n\n0:24:07.1 Interviewee: Oh, I completely agree. I think that no matter how good of an optimization system you have, you have to have articulated the actual objective function itself well and clearly. And to say that we as a collective society, or as an individual corporation or something along those lines, could ever come to some kind of clear agreement about what that objective function should be for an AI system is very dubious in my opinion. I think that it's essentially... Such an AI system, in order to be able to do this form of optimization, would essentially have to either be a person, in order to give people what they want, or it would have to be in complete control of people, at which point it's not really a CEO anymore, it's just a tool that's being used by the people controlling the system like that. I don't think that that would solve the problem. There are lots of instances of corporate structures and governmental structures that are disenfranchising and abusing people all around the world, and it becomes a question of values and what we think these systems should be doing, rather than their effectiveness in actually doing what we think they should be doing. And so, yeah, I basically completely agree with the question in saying that we wouldn't really get that much out of having an AI CEO. Does that...\n\n0:25:50.8 Vael: Interesting. Yeah, I think in the vision of this where it's not just completely dystopian, what you maybe have is an AI that is very frequently checking in on human feedback. And that has been trained very well with humans such that it is...
So there's a question of how hard it is to get an AI to be aligned with one person. And then there's a question of how hard it is to get an AI to be aligned with a multitude of people, or a conglomerate of people, or how we do democracy or whatever-- that's, yeah, complicated. But even with one person, you still might have trouble, is my intuition here. And just trying to have it-- still with access to human feedback, still have human feedback in a way that's fast enough that the AI is still doing approximately what you want.\n\n0:26:41.7 Interviewee: Yeah, yeah, I agree. Yeah. I just think that the question of interpretability becomes a very big issue here as well, where you really want to know what your system is doing, and you really need to know how it works. And with the way things are currently going, we're nowhere near that. And so, if we have a large system that we don't understand how it works, and it is operating on limited human feedback and is relatively inscrutable, the list of problems that could result from that is very very long. Yeah. [chuckle]\n\n0:27:15.6 Vael: Awesome. Yeah, and my next question is about presumably one of those problems. So, say we have our CEO AI, and it's capable of multi-step planning and can model people modeling it, and it is trying to... I've given it its goal, which is to optimize for profit with a bunch of constraints, and it is planning and it's noticing that some of its plans are failing because it gets shut down by people. So as a basic mechanism, we have basically--\n\n0:27:44.4 Interviewee: Because it's what by people?\n\n0:27:46.2 Vael: Its plans are getting... Or it is getting shut down by people. So this AI has been put... There's a basic safety constraint in this AI, which is that any big plans it makes have to be approved by humans, and the humans have asked for a one-page memo. So this AI is sitting there and it's like, \"Okay, cool, I need to write this memo. And obviously, I have a ton of information, and I need to condense it into a page that's human comprehensible.\" And the AI is like, \"Cool, so I noticed that if I include some information in this memo then the human decides to shut me off, and that would make my ultimate plan of trying to get profit less likely to happen, so why don't I leave out some information so that I decrease the likelihood of being shut down and increase the likelihood of achieving the goal that's been programmed into me?\" And so, this is a story about an AI that hasn't had self-preservation built into it, but it is arising as an instrumental incentive of it being an agent optimizing towards any goal. So what do you think of the argument, \"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous?\"\n\n0:28:53.1 Interviewee: Well, right. It's very dependent on the objective function that you select for the system. I think that a system... It seems, at face value, pretty ridiculous to me that the CEO of a company, the CEO robot, would have its objective function be maximizing profit rather than maximizing individual happiness within the company or within the population on the whole. But even in a circumstance like that, you can imagine very, very, very many pathological circumstances arising. This is the three laws of robotics from Isaac Asimov, right? It's just that very simplified objective functions produce pathological consequences when scaled to very large complex systems.
And so, in much the same way that you can train a neural network to recognize an image, which produces the unintended consequence that tiny little perturbations of that image can cause it to radically change its output, when you have improperly controlled what the system is doing at a large scale, the number of tiny unintended consequences that you could have essentially explodes many-fold. And yeah, I certainly wouldn't do this. That's certainly not something that I would do, yeah.\n\n0:30:20.6 Vael: Yeah. Have you heard of AI Safety?\n\n0:30:24.3 Interviewee: AI... Yeah, yeah.\n\n0:30:26.0 Vael: Cool. What does that term mean for you?\n\n0:30:27.2 Interviewee: You're talking... What does it mean for me? Well, I guess it's closely related to AI ethics. AI safety would mainly be a set of algorithms, or a set of protocols, intended to ensure that an AI system is actually doing what it's supposed to do and that it behaves safely in a variety of circumstances. Is that correct?\n\n0:30:52.2 Vael: Well, I don't-- there's not one definition in fact, it seems like it's a sprawling field. And then, have you heard of the term AI alignment?\n\n0:31:00.7 Interviewee: No, I don't know what that is.\n\n0:31:01.5 Vael: Cool. This is more long-term focused AI safety. And one of the definitions they use is building models that represent and safely optimize hard-to-specify human values. Alternatively, ensuring that AI behavior aligns with the system designer's intentions. Although there are a lot of different definitions of alignment as well. So there's a whole bunch of people who are thinking about long-term risks from AI, as AI gets more and more powerful. I think the example we just talked about, like the ones where adversarial examples can really change the output of a system very easily, is a little bit different than the argument made here, which is something like: if you have an agent that's optimizing for a goal and it's good enough at planning, then it's going to be instrumentally incentivized to acquire resources and power and not be shut down and kind of optimize against you, which is a problem when you have an AI that is similarly as smart as humans. And I think in that circumstance, one of the arguments is that this constitutes an existential risk; like, having a system that's smarter than you optimizing against you would be quite bad. What do you think of that?\n\n0:32:04.1 Interviewee: Yeah, I was only using the adversarial example to give an example of how easily and frequently this does happen at even the level that we're currently working at. I think it would be much, much, much worse at the level of a general artificial intelligence that would have essentially long-term dynamic interactions with people, rather than a system that's just taking an image and outputting a response. When the consequences of such a system can have long term effects on the health and well-being of people, this kind of thing becomes very different and much more important.\n\n0:32:43.4 Vael: Yeah. And like with the problem I was outlining earlier, which is like, how do we get AIs to do exactly what we intended them to do? The idea that you have of like trying... Like, why would you create a system that wasn't optimizing for all of human values? I was like, wow, ahead of the game there. That is in some sense the goal. So there is a community who's working on AI alignment kind of research, there's money in this community. It's fairly new-- although much more popular, or like, AI safety has grown a lot more over the years.
What would cause you to work on trying to prevent long-term risks from AI systems?\n\n0:33:18.5 Interviewee: What would cause me to do work on it?\n\n0:33:20.6 Vael: Yeah.\n\n0:33:29.6 Interviewee: To be honest, I think that it would have to be... I guess I would really have to be convinced that the state of the field in the next few years is tending towards some type of existential risk. I feel like... You don't have to convince me too much, but I personally don't think that the field of study that I'm currently occupying is one that's really contributing to this problem. And so I would become much more concerned if I felt like the work that I was doing was actively contributing to this problem, or if there was huge evidence of the near advent of these types of generally intelligent systems to be terribly worried about.\n\n0:34:28.6 Vael: Yeah. That makes sense. Yeah, I don't actually expect computational neuroscience to be largely contributing to this in any way. I feel like the companies that are gonna be doing this are the ones who are aiming for AGI. I do expect them to kind of continue going that way, regardless of what is happening. And I expect the danger to happen not immediately, not in the next couple of years. Certainly people have like different ranges, but like 2060 is like an estimate on some paper I believe that I can send along. It probably won't be a while, won't be for a while.\n\n0:35:00.5 Interviewee: Sure. I don't know, I think that people who understand these algorithms in the way that they work do have in some sense a duty to stand up to these types of problems if they present themselves. And there are many instances of softer forms of AI being used for horrible things currently, which I certainly could be doing more in my daily life to prevent. But for now, I don't know. I guess I just have, I have my own interests and priorities. And so it's kind of a... It's something to get to eventually.\n\n0:35:42.9 Vael: Yeah, yeah. For sure. I think these technical AI safety is important. And am I working in technical AI safety? Nope. So like we all do the things that we want to do.\n\n0:35:54.8 Interviewee: Yeah.\n\n0:35:54.9 Vael: Great, cool. So that was my last question, my downer of an interview here [chuckle], but how do you think...\n\n0:36:02.3 Interviewee: No, no.\n\n0:36:04.1 Vael: But yeah. Okay. So my actual last question is, have you changed your mind in anything during this interview and how was this interview for you?\n\n0:36:08.9 Interviewee: No, it was a good interview. I don't think I've particularly changed my mind about anything. I think that it was good to work through some of these questions and yeah, I had a good time.\n\n0:36:24.2 Vael: Amazing. Yeah, why--\n\n0:36:25.3 Interviewee: I typically don't expect it to change my mind too much in interviews, so [chuckle].\n\n0:36:28.8 Vael: Absolutely. Yeah, yeah, yeah. Okay. Why do... People tell me they have a good time and I'm like, are you lying? Did you really have... Why is this a good time?\n\n0:36:37.2 Interviewee: No, it's nice to talk about your work. It's nice to talk about long-term impacts that you don't talk about in your daily basis. I don't know. I don't need to be paid to do something like this for instance.\n\n0:36:51.7 Vael: All right. Well, thank you so much. Yeah. If you think of any questions for me, I'm here for a bit. 
I'm also happy to send any resources if you're curious about, like, my takes on things, but yeah, generally just very appreciate this.\n\n0:37:04.4 Interviewee: Yeah, sure. I'm a little curious about what this interview is for. Is it for just you, or is it, like a... You mentioned something about some type of AI alignment group or is there some kind of... I'm just curious about what it's for.\n\n0:37:20.9 Vael: Yeah. So I am interested... I'm part of the AI alignment community, per se, although I'm not doing direct work. The people there often work on technical solutions to try to... to the alignment problem, which is just trying to come up with good ways of making sure that AIs in the future will be responsive, do what humans want. And examples of that include trying to build in feedback, human feedback, in a way that is scalable with current systems and works with uninterpretable systems, and interpretability-- certain types of interpretability work. There's teams like DeepMind Safety, OpenAI Safety, different, like, separate alignment community. So I'm like in that space. And I've been doing interviews with AI researchers to see what they think about the safety arguments. And whether... instrumental incentives. And just like, when do you think we'll get AGI, if you think we will. Get a lot of different opinions, a lot of different ways.\n\n[...]\n\n0:38:47.5 Interviewee: Cool. Anyway, that makes a lot of sense and, yeah, I hope that things go well. Thanks for having me. Yeah.\n\n0:38:55.5 Vael: Yeah. Thanks so much, really appreciate it. Alright, bye.\n\n0:38:59.1 Interviewee: Bye, see you.\n", "url": "n/a", "docx_name": "NeurIPSorICML_cvgig.docx", "id": "2ebdfced2fd46d23e68472b9b7514220"} {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": [], "date_published": "n/a", "text": "Interview with lgu5f, on 3/22/22\n\n0:00:02.5 Vael: Alright, my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n0:00:08.4 Interviewee: Yeah. So I work on privacy in AI. I've been working on differential privacy, which is sort of the de facto standard for giving statistical privacy in analytics and AI. I've been working on differentially private synthetic data, so coming up with algorithms that generate synthetic versions of datasets that mimic the statistics in those datasets without revealing any information about the users and the data itself. And more broadly, I also just work on differential privacy for analytics, so it's not specifically AI, but it's still like algorithm design with privacy.\n\n0:00:55.3 Vael: Cool, great, thanks. And my next question is, what are you most excited about in AI, and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n0:01:05.7 Interviewee: I think that AI has been most helpful in the little things. When I pull up my phone, and it's like, \"Hey, this is your most used app in this location,\" recommender systems like that, or small tweaks that help my daily life, that's what I'm most excited about. I think I'm most worried about AI being used in applications where explainability and fairness are going to be important, or privacy. This is stuff I see red flags in. I'm worried about an insurance company putting a neural net into some important decision-making system and then not being able to analyze why it made a decision that it did, or understanding if it's being unfair.\n\n0:02:11.6 Vael: Great, yeah, that makes sense. 
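[Editor's note: a minimal, hypothetical sketch of the flavor of differentially private synthetic data the interviewee describes at the start of this interview -- perturb a histogram of a sensitive attribute with Laplace noise (the classic mechanism for counting queries), then sample synthetic records from the noisy statistics. Real differentially private synthetic-data algorithms are much more involved; the attribute, categories, and epsilon below are made up.]

import numpy as np

rng = np.random.default_rng(0)
categories = ['A', 'B', 'C']                       # hypothetical sensitive attribute
private_data = rng.choice(categories, size=1000, p=[0.6, 0.3, 0.1])

epsilon = 1.0                                      # privacy budget
true_counts = np.array([(private_data == c).sum() for c in categories], dtype=float)

# Laplace mechanism: histogram counts have sensitivity 1, so the noise scale is 1/epsilon.
noisy_counts = np.clip(true_counts + rng.laplace(scale=1.0 / epsilon, size=3), 0, None)

# Sample a synthetic dataset that mimics the (noisy) statistics of the original.
probs = noisy_counts / noisy_counts.sum()
synthetic_data = rng.choice(categories, size=1000, p=probs)

print('true proportions:     ', (true_counts / true_counts.sum()).round(3))
print('synthetic proportions:', probs.round(3))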
And then focusing on future AI, so putting on a science fiction forecasting hat, say we're 50 plus years into the future, so at least 50 years in the future, what does that future look like?\n\n0:02:30.3 Interviewee: I'm sort of pessimistic about how advanced AI can get. I think that the trend is going to be that we're going to see smaller models that are more bespoke for the problem domain that they're trying to solve. So instead of these like GPT-3 trillion-parameter-sized models, I think that we're going to start moving back towards stuff that can run more easily on the edge. That doesn't require as much energy and time to train and doesn't require as much data. I think that AI becomes more ubiquitous, but in a way that's easy for us to compute with. So it's just more AI running everywhere.\n\n0:03:17.6 Vael: Yeah, what drives that intuition?\n\n0:03:20.7 Interviewee: One is partially a concern about large models consuming too much energy and, of course, climate change is one of the chief things that we should be worried about. The other thing is... I've seen some experiments that come out of GPT-3, and they're cool as toy problems, or it's cute program synthesis and stuff like that, but I don't see that really being used in production. It's one thing for a research organization to come up with those experiments and say, \"Hey, we were able to, I don't know, use this to beat a Go player,\" but you look at the details of it, and really this gigantic model also had a tree search algorithm, that was a big benefit. I think just... keeping it bespoke, like CNNs just do so well, and I think that that's for a reason, so that's sort of the intuition I have. If we keep it tight to the problem domain, I've seen it do better. Domain expertise has helped a lot.\n\n0:04:37.5 Vael: And then just a quick follow-up on the climate change thing, is the idea that current systems are using too much energy and this is causing increased climate change?\n\n0:04:50.0 Interviewee: Yeah, I think that we all just need to reduce how much energy we're using, because, unless we're sure that a lot of it is coming sustainably, we should be concerned about how much energy we're using. And training these trillion-parameter models requires a lot of energy. It requires a lot of hardware. That hardware does not come for free. There's a manufacturing process that goes into that, building up the data centers that are training, that are hosting all of this, and then replacing the hard drives, replacing the GPUs when they get stale. So there's just a whole bunch of life cycle impacts from training models that I think are really coming up. We're seeing people doing these studies on blockchain, because that tends to burn through GPUs and hard disks faster than training machine learning models, but it's sort of the same impact.\n\n0:05:47.4 Vael: Interesting. Cool. Well, this next thing is more of a spiel, and it is quite close to what you've been talking about with where AI will go in the future and how big it will get. So people talk about the promise of AI and they mean many things by that, but one thing they may mean is a very generally capable system, such that it'll have the cognitive capacity to replace all human jobs, all current-day human jobs.
So whether or not we choose to replace human jobs is a different question, but having the cognitive capacity to do that, and so I usually think about this in the frame of 2012 when we have AlexNet, deep learning revolution, and then 10 years later, here we are, and we've got the GPT-3 system. Which, like you said, have some weirdly emerging capabilities, so it can do some text generation, and some translation, and some coding, and some math and such. And we might expect that if we continue with all this human effort that's been going into this kind of mission of getting more and more general systems, and we've got nations competing, and we've got corporations competing, and we've got all these young people learning AI and like, maybe we'll see algorithmic improvements at the same pace we've seen hardware improvements, maybe we get optical or quantum. So then we might actually end up scaling the very general systems or like you said, we might not, we might have hit some sort of ceiling or require a paradigm shift or something. But my question is, regardless of how we get there, do you ever think we'll have very general AI systems like a CEO AI or a scientist AI? And if so, when?\n\n0:07:13.3 Interviewee: Oh, that's a good question. I tend to be more pessimistic about this. It depends what you mean by AI, I guess in this sense, right? Is it sort of a decision-making system that's just doing Pareto optimal decision-making. Does it have to be trained?\n\n0:07:52.6 Vael: Yeah. What I'm visualizing here is any sort of decision-making system, any sort of system that can do things like multi-step planning, that can do social modeling, that can model itself-- modeling other people modeling it. Have the requirements such that I can be like, \"Alright, CEO AI, I want you to maximize profit plus constraints.\" So very capable cognitive systems. I don't require right now that they're embodied necessarily because I think robotics is a bit behind, but like cognitive capacities.\n\n0:08:25.1 Interviewee: Got it. I think we're still a solid century behind that. Yeah, I don't know. I just feel like it's still a solid century behind because that might require a huge paradigm shift in how we're training our models. Yeah, I don't think we've even thought about modeling something with a state space that large, right?\n\n0:08:58.9 Vael: Yeah, it's like reality.\n\n0:09:00.7 Interviewee: Yeah, exactly. I've seen cool stuff where there's like, you hand over a floor plan or something, and then you say, “Hey, where did I leave my keys?” And this AI is able to jog through, navigate through this floor plan and then say you left it in the kitchen. But I still think that something like a CEO AI is still going to be like 100 years out. Yeah.\n\n0:09:29.5 Vael: Interesting. Yeah, that is interesting, because it seems like what you were saying earlier, is like, I think we'll get more bespoke systems, less large scale systems. And I'm like [humorously]: \"What if we have a very large-scale system, that is very...\" [interviewee laughs] Yeah. How does that square?\n\n0:09:43.8 Interviewee: Between having a bespoke system versus...\n\n0:09:47.9 Vael: Or like how the future will develop or something. I think that that's like... maybe kind of likely that there will be at least some companies like OpenAI and DeepMind that'll just keep pushing on the general system thing. But I don't know.\n\n0:09:58.4 Interviewee: Yeah, I feel like there's always going to be billions of dollars poured into that kind of research. 
But at the end of the day, I think that we'll see more impact if we make quality of life improvements that make jobs easier. Of course, there'll be a whole bunch of automation, right, but I don't know if we'll ever get to a point where we can leave decision-making entirely to AI. Yeah, because... I think also just as society, we haven't thought about what that means. I think Mercedes just recently said that Mercedes will take responsibility if one of their driverless cars hits someone, which is a huge step, right, but we haven't gone close to already deciding who has liability if an insurance company messes up, right, with their premium setting algorithms.\n\n0:11:09.5 Vael: Yeah, when I think about the future, I'm like, I think we might get technological capacities before we get good societal understanding or regulation.\n\n0:11:17.1 Interviewee: Likely. Likely. Because that's what's happening in fairness and in privacy. With fairness regulations, like you have in Australia and a whole bunch of places, where it just says, Okay, you don't pass in some protected attributes and your algorithm is not going to be racist, homophobic, transphobic, whatever, sexist. And it's like, Okay, cool, now that you've purged the data of all these attributes, there's no way for us to measure whether it's actually being any of those things. But it's required by policy. There's also the same thing with privacy, where it's like, Okay, if you just remove these attributes and you remove their names, we can't tell who it is, but nobody else has watched the exact five last movies I've watched on Netflix, or the last three things I've liked on Twitter or Facebook or whatever. So yeah, for sure, we'll be lagging societally, but hopefully with this timespan I have in mind of it's in a century, we'll be thinking about these things if we're on the cusp of it, and people will be raising alarm bells about it like, Hey, maybe we should start thinking about this.\n\n0:12:34.1 Vael: Yeah, that makes sense. I know there's a substantial number of researchers who think there's some probability that we'll get it substantially sooner than that. And maybe even using the current deep learning paradigm, like GPT-whatever, 87, 7, GPT-13. I don't know. But that this system will like work. I don't actually know if that's true. It's hard to predict the future. Yeah, so my next question is sort of talking about whenever we get these very advanced systems, which again, who knows, you said like maybe 100 years, some people think earlier, some people think much later. So imagine we're in the future, and we're talking about the CEO AI, and I'm like, Okay, CEO AI, I wish for you to maximize profits-- this is where humans are like the shareholder type thing-- and try not to run out of money and try not to exploit people and try to avoid side-effects. Currently, this is technologically challenging for many reasons, but one of the reasons I think that might continue into the future is that we're currently not very good at taking human values and preferences and goals and putting them into mathematical formulations that we could optimize over. And I think this might be even harder to do in the future as we get more powerful systems that are optimizing over larger-- reality. Larger optimization spaces. 
So what do you think of the argument, \"Highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\" [pause] So they'd be doing what we tell them to do rather than what we intended them to do.\n\n0:14:12.2 Interviewee: ...Yeah, I think that that's most likely. Because... Well, it depends what level of self-awareness or cognitive awareness this AI has to understand what we intended versus telling them. If it figured out the state space was... If it can model what the ramifications of what it does are on itself, then ideally it would figure out what we intended for it to do, not what we told it to do. So it's one thing if it goes off the rails and says, Actually, this is the only way to solve-- That's a very Asimov-esque dystopia, right. But hopefully, if it can model that and say like, Oh actually, you know, the humans will delete me if I do something that's exactly what they told me to do and not what they intended for me to do, then ideally we'd be past that point.\n\n0:15:15.3 Vael: I see, yeah. Cool. Let me try to engage with that. So I'm like, Alright, where we've told the CEO AI that I want you to maximize profit, but not have side-effects, and it's like, Okay, cool. Side-effects is pretty undefined, but maybe it has a very good model of what humans want, and can infer exactly what the humans want and can model things forward, and will know that if it pollutes, then the humans will be unhappy. I do think if you have a great enough model of the humans in the AI, and the AI is incentivized to make sure to do exactly what the humans want, then this is not a problem.\n\n0:16:00.6 Interviewee: Right. The risk, of course, is that it figures out all the legal loopholes, and it's like, Oh great, I've figured out exactly what I need to do to not be held morally culpable for what choices I've made and get off scot-free, which is also a huge risk.\n\n0:16:19.2 Vael: Yeah, yeah. So I think with the current day systems, they're much dumber, and so if you're like, Alright AI, I want you to get a lot of points, then maybe it will get a lot of points through some little side loop instead of winning the game. That's because we haven't specified what we wanted exactly, and it can't infer that. But you're saying as we get more advanced, we will have systems that can at least know what humans want them to do, whether or not they follow through with that is a different question.\n\n0:16:51.7 Interviewee: Yeah. Yeah, and at some point, there's still the huge element of what the designer of the algorithm put in. There is some choice on the loss function. Is it average, is it worst case? Is it optimizing for the highest likelihood, the worst likelihood? I feel like that would also be a huge change in how it does it. So it's not going to be fully up to the algorithm itself, there are still going to be human choices in building out that algorithm that determine how it interacts with other humans.\n\n0:17:31.1 Vael: Yeah, this is in fact the whole question, I think. What do you put in the loss function such that it will do whatever things you want it to do? Yup. Yeah, okay, cool. So I have a second argument on top of that one, which I think is pretty relevant. So say we have the CEO AI, and it has pretty good models of humans, and it can model humans modelling, and it does multi-step planning. 
And because we figured, humans figured that we should have some safety mechanisms, so maybe we don't let the AI make any big decisions unless it passes that decision through us. And so they've asked the AI for a one-page memo on this upcoming decision. And the AI is like, Cool, well, I obviously have a lot of thinking and information that I can't put everything in a one-page memo, and so humans are expecting me to condense it. But I notice that sometimes if I include some types of information in this memo, then the humans will shut me down and that will make me less likely to be able to achieve the goal they programmed in. So why don't I just leave out some information such that I have a higher likelihood of achieving my goal and moving forward there. And so this is a story that's not like the AI has self-preservation built into it, but rather as an agent optimizing a goal, a not perfect goal, then it ends up having an instrumental incentive to stay alive and to preserve itself. So the argument is, what do you think of the argument, \"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing any goals, and this is dangerous?\"\n\n0:19:08.7 Interviewee: I think we already see this with not highly intelligent systems. There was this paper from 2019 where they analyzed the preventative healthcare recommendations from some insurance company algorithm setup. And they found that it consistently would recommend people of color to go for preventative healthcare less. So they were looking through it, and they realized that the algorithm is optimizing to minimize the dollars spent by the patient, and so it figured out that, Hey, folks who are a part of majority communities go for preventative healthcare and they save dollars by getting preventative healthcare versus folks in minority communities tend to not get preventative healthcare, but most of them end up not ever getting specialist care either, so it's actually more dollars saved if we just don't give them preventative healthcare. I think that that could be pretty likely. Again, if we have this highly intelligent system that's able to model so much, then we should be kinda skeptical of what comes in the memos. But yeah, I'd say that that's possible, that it just gets stuck on one of those loops, like, Oh, this is best in...\n\n0:20:44.7 Vael: Yeah, interesting. Yeah, so the example you gave seems to me like it's some sort of alignment problem where we've tried to tell it what we want, which is presumably like good healthcare for everyone that's sort of cheap. And instead it is doing something not what we intended it to do, like, whoops, that was like a failure. We've put in a failure in terms of the loss function that we're optimizing. Something that's close, but not quite the problem. Then yeah, I think this argument is, as we get very intelligent systems, if the loss function that we put in is not exactly what we want, then we might have it optimizing something kind of close, and then it will be incentivized to continue pursuing that original goal, and in fact, maybe optimizing against humans trying to change it. Which is interesting. So, if one buys this argument of instrumental incentives of an agent optimizing whatever goal we put in, then you have ideas like an agent that is optimizing for not being shut down. 
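[Illustrative sketch: a minimal version of the memo story above, assuming a toy agent that only maximizes expected return and treats "omit information from the memo" as just another action; the reward value and shutdown probabilities are made-up numbers for illustration, not anything from the interview.]

```python
# Toy illustration: shutdown avoidance can fall out of plain expected-return
# maximization, with no explicit "self-preservation" term in the objective.
GOAL_REWARD = 100.0  # return the agent collects if it stays on and pursues its goal

# action -> assumed probability that the overseers shut the agent down after the memo
actions = {
    "full_memo": 0.6,     # includes the information that tends to worry the overseers
    "trimmed_memo": 0.1,  # leaves that information out
}

def expected_return(p_shutdown: float) -> float:
    """Expected return for a pure goal-maximizer: it collects reward only if not shut down."""
    return (1.0 - p_shutdown) * GOAL_REWARD

for action, p in actions.items():
    print(f"{action}: expected return = {expected_return(p):.1f}")

best = max(actions, key=lambda a: expected_return(actions[a]))
print("chosen action:", best)  # -> trimmed_memo (90.0 vs 40.0)
```

[Nothing about staying alive appears in the objective; the information-omitting action simply scores higher, which is the instrumental-incentive point being made here.]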
So that can involve deception and acquiring resources and influence and also improving itself, which are all things that help you better achieve goals. Which feels pretty worrying to me if this is the case, because if this is one of the default ways that we're developing AI, then when we actually get advanced AI, we'll maybe have a system that is as or more intelligent than us optimizing against humans. And I'm like, \"Wow, that seems like real bad.\"\n\n0:22:14.0 Interviewee: Yeah. Yeah, that's a nightmare scenario, which is Matrix-esque, right? Yeah.\n\n0:22:21.5 Vael: Yeah. What is the bad AI in the Matrix doing? Is there a bad AI in the Matrix?\n\n0:22:28.1 Interviewee: Yes. So the bad AI... It's sort of the same idea as the Terminator, where the Skynet of the Matrix realizes that the best way to avoid conflict with humanity is to sort of keep humanity in a simulation. I think the robots and the humans end up fighting a war, and then they just plug the humans into the matrix, keeping them quelled that way and making the broader argument that living in a simulation is nicer than the world outside. So the AI is technically also doing something good for humanity and self-sustaining, and keeping the human population alive.\n\n0:23:09.2 Vael: Interesting. Yeah. Cool, alright. So my interest, in general, is looking at long-term risks from AI. Have you heard of the-- well, okay, I'm sure you've heard about AI safety. What does AI safety mean for you, for the first question?\n\n0:23:31.0 Interviewee: Um...\n\n0:23:32.3 Vael: Or have you heard of AI safety?\n\n0:23:34.8 Interviewee: No, not really.\n\n0:23:35.7 Vael: Oh, interesting, cool!\n\n0:23:37.5 Interviewee: So can you tell me more?\n\n0:23:41.3 Vael: Yeah. So, AI safety means a bunch of different things to a bunch of different people, it's quite a large little field. It includes things like surveillance, autonomous weapons, fairness, privacy. Things like having self-driving cars not kill people. So anything that you could imagine that could make an AI safe. It also includes things that are more in what is currently called AI alignment. Have you heard of that term before?\n\n0:24:11.1 Interviewee: No.\n\n0:24:12.3 Vael: Cool. AI alignment is a little bit more long-term focused, so imagining like as we continue scaling systems, how do we make sure that the AIs continue to do what humans intend them to do, what humans want them to do, don't try to disable their off-switches just by virtue of optimizing for some goal that's not what we intended. Trying to make sure that the AIs continue to be aligned with the humans. Where one of the definitions of alignment here is building models that represent and safely optimize hard-to-specify human values. Alternatively, ensuring the AI behavior aligns with the system designer intentions. So trying to prevent scenarios like the one you brought up where the designer puts in a goal that's not quite what the humans want, but this can get much worse as systems get more and more powerful, one can imagine. Yeah, so there's a community working on these sorts of questions. Trying to figure out how we could get AI aligned. Because people are like, Well, can't you just put in an off switch, but this AI will probably be deployed on the Internet, and it can replicate itself presumably, if it's smart enough, and also it may have an incentive to disable the off switch if we do things by default. But some people are like, oh well, how do we solve that? 
How do we make it so that it doesn't have an incentive to stop itself from being turned off? And one group is like, Oh well, instead of having AI optimizing towards one goal singularly, in which case it has an incentive to try to stop people from interfering with its goal, one thing you could do is to build in uncertainty over the reward function, such that the AI wants to be corrected and wants to be switched off so that it has more information about how to achieve the goal. And this means that the AI no longer has an incentive to want to prevent itself from being switched off, which is cool. And you can sort of like build... that's like an alignment solution you can build in. Another thing that people talk a lot about is if... we should obviously have human feedback if we're trying to get an AI to do what humans want. So how do we build in human feedback when the systems are more and more intelligent than us and dealing with really big problems that are like, should we do this nuclear fusion reactor? I don't know, the human doesn't really know. What are all the considerations? Another question they tackle is, how do you get honest reporting out of AI? How do you build a loss function such that it's incentivized toward honest reporting? And how do you get an AI in general to defer to humans kind of always while still being agentic enough to do things in the world?\n\n0:26:39.3 Interviewee: Got it.\n\n0:26:40.4 Vael: How does all that sound to you?\n\n0:26:43.0 Interviewee: Very important. I feel like AI safety is something we're brushing up against the sharp edges of right now. But a lot of what AI alignment is looking into, I think, would also just apply to how we think about building these larger models. Because if that's the goal of these large experiments, I would hope that they're thinking about these right now, given their funding and the immense amount of time and effort they have that goes into it. Yeah. This is really interesting.\n\n0:27:25.3 Vael: Yeah. Most researchers are working on capabilities, which is what I just call moving the state of the field forward. Because AI is actually really hard, turns out, and so lots of people working on that, and there's a lot of money in the space too, especially with applications. And then there's a smaller community of people who are working on AI safety, more like short-term AI safety, like there's plenty of problems with today's systems. There's an even smaller community of people who are working on long-term safety. They're kind of the group that I hang out with. And I'm worried that the group of people working on long-term safety will continue to be significantly smaller than the capabilities people, because while there's more money in the space now, because a couple of very rich people are concerned about it like Elon Musk, famously. But generally, it's hard work. It's like anticipating something in the future. Some people think it's a very far future. It's hard to speculate about what will go on, and that's also just kinda like a difficult problem, alignment. It might not be trivial.\n\n0:28:25.5 Interviewee: Yeah, and I sort of see parallels with nuclear research that happened like 100 years back, right? 
Or 80 years back, around the time of the Manhattan Project, where most of the scientists in New Mexico were focused on capabilities, like, \"How do we make this thing go boom.\" There was a small group of them who were worried about the ramifications of, \"What would happen if we did drop our first nuclear bomb and stuff like that?\" And of course, there's always been the same thing about space exploration. So with nuclear stuff, I think that we ended up, again, with more worries about nuclear safety than nuclear alignment. So, \"What do we do to keep reactors safe?\" Instead of thinking about like, \"Okay, should we pursue nuclear technologies where countries cannot enrich uranium and make warheads with it? What if we went with thorium reactors instead?\" And again, it's been more focused on capabilities and safety than alignment.\n\n0:29:32.9 Vael: Yeah, I don't think we really need alignment with nuclear stuff because nuclear stuff doesn't really have intelligence per se. It's not an optimizer, is what I meant. Yeah. So you're not trying to make sure that the optimizer does whatever your goals are.\n\n0:29:51.5 Interviewee: Yeah. It's humans at the end of the day. Yes, yes, yes, that's right.\n\n0:29:55.2 Vael: Yeah, yeah. I also think the Manhattan Project is a very good comparison here. Or nuclear stuff in general. There's plenty of issues in today's world, there's so many issues in today's world. But the reason I focus on long-term risks is because I'm worried about existential risks. So what happens if all of humanity goes extinct. I think there's a number of things that are possibilities for existential risk. I think advanced AI is maybe one of the highest ranked ones in my head, where I'm like, \"Well, if we have an AI system that is by default not completely aligned to human values and is incentivized against us, that's bad.\" Other ways that I think that we can have AI disasters are like AI assisted war, misuse, of course, and loss of control and correlated failures. So what if we use AI to automate food production and manufacturing production, and there's some failure partway along the line and then correlated failure? Or what if it results in some sort of pollution? We're not really sure who's in charge of the system anymore, and then we're like, \"Oh, well, now, there's some sort of poison in the air,\" or there's like something in the water, we'd like messed something up, but we don't really know how to stop it. Like coordination failures are another thing that I think can happen even before we do very advanced AI. So generally, I'm worried about AI.\n\nI think nuclear stuff is also... You can kill a lot of people with nuclear stuff. Biological weapons... you can do synthetic bio. There was that paper that came out recently that was like, \"Well, what happens if you just put a negative sign on the utility function to generate cool drugs?\" Then you get poisonous drugs, amazing. Yeah, that was a paper that just came out in Nature, I think maybe two weeks ago, made quite a splash. So I think if we had something that's much more deadly, is harder, takes longer to appear, spreads faster than COVID, I'm like, hm, that's... You may be able to... If people have bunkers, that's okay, and if people are very distant, maybe it's okay, but like... I don't know. It's not good. And then climate change also. I think that one will take much longer to kill humans. In terms of the time scales that I'm thinking of. We're looking at 3 degrees of warming in the next 100 years, or is it 50? 
I don't quite remember, I think it's like we've got a bit more time, and I kind of think that we might get technical solutions to that if we wait long enough, if we advance up the tech tree, if it takes a couple hundred years. So these are what I think of when I think of existential risks.\n\nAnd the Manhattan Project is interesting, tying it back. There was some risk that people thought that if you deployed a nuclear bomb, then it would set the atmosphere on fire, and thus result in nuclear winter and kill everyone. And it was a very small percentage, people thought it won't happen, and they did it anyway, and I'm like, \"Oh boy, I don't know how many... get lucky of those we have.” Because AI might also go perfectly fine. Or it might not. And I think there's a decent... In a study in 2017 by Grace et al., where they were asking researchers from NeurIPS and ICML how likely they thought it was that AI would have extremely bad consequences, and the median answer was like 5% or something, and I was like, \"5%?! That's like real high, man.\"\n\n0:33:03.0 Interviewee: Yeah. [chuckle] And that's sort of an indication of what percentage of ICML and NeurIPS researchers are working on like fairness and privacy. I feel would align pretty closely with that, yeah.\n\n0:33:15.9 Vael: Oh, interesting. Yeah. Yeah, so very bad outcomes could be many different things. I think not many people are thinking about what happens if you literally kill all humans because it's a weird thing to have happened, but I think we're in a very weird time in history where 10,000 years ago everything was the same from lifetime to lifetime, but now we have things like nuclear weapons where one person or a small chain of people can kill like so many more humans than previously.\n\n0:33:42.9 Interviewee: Yeah. And yeah, speaking of tech tree, one sort of linchpin that would really accelerate if we can get new AI, highly intelligent AI, would be if quantum computing becomes feasible. And we're actually able to run AI on quantum computers, then I think we're way closer to actually having a highly intelligent AI because that's an infinite state space right there.\n\n0:34:22.3 Vael: Yeah. Yeah, yeah, I know very little about quantum myself, because I was like, \"What are the hardware improvements that we could see?\" And people are like, \"Optical, maybe coming in optimistically in five years from now,\" and I was like, \"Okay, that's soon.\" And then there's quantum where people are like, \"That's significantly further away,\" and I'm like, \"Okay.\" And then software improvements also. So there's just been a lot of progress in AI recently, and we're training all the young people, and there's China-US competition, and there's just a ton of investment right now, where I'm like, \"We're moving fast.\" So interesting.\n\n0:34:56.4 Interviewee: Yep, yep, yep.\n\n0:34:57.8 Vael: Yeah. Well, you are very agreeable to my point of view here. [laugh]\n\n0:35:06.3 Interviewee: Yeah, I'm just pessimistic, and I've watched too much sci-fi dystopia, I guess. One of the downsides of... It's great to democratize AI, but if your systems that say like, \"Hey, just upload your dataset, and we'll give you good AI at the end of it,\" if it's not at least asking you questions about like, \"What sort of fairness do you want? What should the outcome be?\" Most humans themselves are going to be thinking like, \"Oh, give me optimal, minimum cost or something like that.\" Humans already don't factor human values in necessarily when deciding what to do. 
So I'm just pessimistic about how well we can do it. Like 50 years ago, people thought that we'd be in space colonies by now, but sadly, we're not.\n\n0:36:10.5 Vael: Yeah, very hard to predict the future. Okay, so my second to last question is: what would convince you or motivate you to work on the safety type of areas?\n\n0:36:25.8 Interviewee: If I saw that this was coming much sooner. I think the fact that I'm seeing this as like 100 years out, a lot of smarter than me researchers will hopefully be concerned enough about it.\n\n0:36:44.3 Vael: Yeah. How soon would be soon enough that you're like, \"Oh, that's kind of soon?\"\n\n0:36:47.7 Interviewee: Fifty.\n\n0:36:49.7 Vael: Fifty?\n\n0:36:50.4 Interviewee: Yeah. If it's fifty, then that's in my lifetime. And I'd need to start worrying about it. That would be a bigger... The existential risk of that would be much higher than the risk I see of just AI safety in day-to-day life, so that's sort of how I'm weighing it. So for me, it's like: 100, it's like AI safety matters more. Yeah.\n\n0:37:15.0 Vael: Yep, that seems very reasonable to me. I'm like, \"Yep, it seems right.\" Cool. And then my last question is, have you changed your mind on anything during this interview, and how was this interview for you?\n\n0:37:26.9 Interviewee: Have I changed my mind? I'm definitely wondering if I've overestimated when highly intelligent AI could come through. (Vael: \"I can send you some resources so you can make your own opinion, etcetera.) Yeah. But... Otherwise, I don't think I've changed my opinion. I still feel pessimistic, and I hope that we start moving towards smaller AI that solves one problem really well, and we don't just think that like, \"Hey, it's a perceptron. It figured out that this was the digit 9, and hence it can figure out a whole bunch of these other things.\" I hope that we don't start barreling down that track for too much longer. Yeah.\n\n0:38:21.5 Vael: And when you say pessimistic, you mean like pessimistic about society and not pessimistic about the pace of AI? Or like societal impacts?\n\n0:38:28.3 Interviewee: About both. I think we tend to put stuff out without really, really considering the consequences of it. But also I think AI has done a bunch, but it requires a lot of energy and a lot of funding that I'm not sure necessarily is going to stay up unless we start seeing a lot bigger breakthroughs come through.\n\n0:38:57.5 Vael: Interesting. Yeah, I kind of think that the applications will keep it afloat for quite a while, and also it sounds like we might be entering a race, but I don't know.\n\n0:39:05.5 Interviewee: True, yeah. Yeah, maybe that's what has changed in my mind through this interview, is like, \"Okay, this is probably...Things are just going to keep going bigger.\"\n\n0:39:18.9 Vael: I think it's quite plausible.\n\n0:39:22.2 Interviewee: Yeah, otherwise, interview was great. This was really interesting. For example, the stuff you brought up on... I would love it if you could send me papers on the-- (Vael: \"I would love to.\") --the uncertainty... You were talking like, programming a way to get out the AI turning off its own off-switch. I would love to read more of these alignment papers, they sound really cool.\n\n0:39:51.2 Vael: Awesome! Well, I'm excited. I will send you a whole bunch of resources probably, and I advise you-- it might be overwhelming so pick only the stuff that is good, and I'll bold some interesting stuff.\n\n0:40:00.9 Interviewee: That would be super helpful. 
And hope you don't mind my sending follow-up questions.\n\n0:40:05.7 Vael: No, no. Again, lovely. Very happy to do that.\n\n0:40:09.4 Interviewee: Yeah. Awesome. Yeah, thank you so much. This was, like I said, a really interesting experience. I've never had to think about longer term impacts. Most of my stuff is like, \"Okay, GDPR is out. What does this mean for AI?\" And that's a very immediate concern. It's not this like, Okay, where should... Or even thinking like, “Okay, five years out, what do we want privacy legislation to look like?\" That's something I think about, but not, \"Oh my God, there's a decision-making AI out there. Does it care about my privacy?\" So, yeah.\n\n0:40:47.0 Vael: Yeah, yeah, I think people aren't really incentivized to think about the long-term future.\n\n0:40:51.8 Interviewee: Yeah. Humans are just bad at that, right? Yeah.\n\n0:40:53.5 Vael: And it's hard to forecast, so it makes sense.\n\n[closings]\n\nInterview with q243b, on 3/18/22\n\n0:00:00.0 Vael: Cool. Alright. So my first question is, can you tell me about what area of AI you work on in a few sentences?\n\n0:00:09.0 Interviewee: Of course. I did my PhD in optimization and specifically in non-convex optimization. And after that I switched topics quite a lot. I worked in search at [company] [...] And now actually I work in [hard to parse] research, so I kind of come back to optimization, but it's more like a kind of weird angles of optimizations such as like meta-learning or future learning, kind of more novel trends like that.\n\n0:00:42.1 Vael: Cool. And next question is: what are you most excited about in AI and what are you most worried about? In other words, what are the biggest benefits or risks of AI?\n\n0:00:51.1 Interviewee: Well, the benefit is that AI has slowly but surely kind of taken over a lot of problems in the world. And in fact pretty much any problem where you need to optimize something, you can use AI for that. I'm more of a traditional machine learning person, in the sense that... Currently I include everything, including not necessarily neural networks, also like logistic regressions, decision trees, all those things. And I think those things have been grossly underutilized, I would say, because with a lot of problems right now in machine learning, people think, \"Okay, should we solve this with neural networks?\" but in reality, you could just use decision trees. But at the same time, because of the current trends and because of the current hype of neural networks as well, they kind of came along with it as well. All the publicity and marketing that, you know, that AI should have, honestly. And I think more and more companies realize that you can solve the problem with just simple solutions. And I think that will be a really exciting part. So just, to answer your question, yeah, I'm excited about the fact that machine learning has just become more and more and more ubiquitous. It becomes like almost a prerequisite for any big company or even a smaller company. And the second part, what I'm worried about the most. I don't know, that's a good question.\n\n0:02:07.7 Interviewee: I mean, I guess like, I mean, I don't share those worries that AI would dominate us and we would all be exterminated by overpowerful AI. 
I don't think AGI is coming anytime soon. I think we're still doing statistics in a way, like some kind of belong to this camp will just think, we're still doing linear models and I don't believe the system is conscious or anything of those sorts. I think the most malicious use, like, I mean, especially now currently with the war, I see more and more people using AI for malicious sense. Like not necessarily they will be, you know, we're going to have next SkyNet coming, but in the bad hands, in the bad actors, you know, AI can serve not a good purpose in war. Like for example, you know, like now with drones, with, you know, the current war, for example, in Ukraine is more and more done with drones. And drones have automatic targeting, automatic navigation. And yeah, so that's kind of not necessarily a good thing and they can become more and more dramatic, and more automatized and they can lead to harm.\n\n0:03:06.4 Vael: Yeah, it makes sense. Lot of strong misuse cases. So focusing on future AI, putting on like a science fiction, forecasting hat, say we're 50 plus years into the future, so at least 50 years in the future, what does that future look like?\n\n0:03:19.5 Interviewee: Well, I hope... I mean I kind of, you know, I really like that, my favorite quote in a... And probably that one of the quotes that I really like is that the arc of justice is nonlinear, but it kind of bends towards justice. I really like it. And I really hope in 50 years we would actually figure out first of all, the way to harness our current problems and make not necessarily disappear, but at least make it controllable such as nuclear weapons and the global warming. And that's in 50 years, I think it's a reasonable time. Again, not to solve them, but just to figure out how to harness these issues. And AI should definitely help that. And do you want me to answer more specifically, like more, like give you ideas of how I think in 50 years the world would look like? (Vael: \"Yeah, I'll take some more specifics.\")\n\n0:04:01.3 Interviewee: Alright. I mean, I think that one of my exciting areas that I think that right now is already kind of flourishing a little bit, and it's large language models. It's a current trend, so it's kind of an easy, like on the surface thing to talk about, and I think I... As a [large tech company] employee, I can see how they've been developed over like two years, things changing dramatically. And I think these kind of things are pretty exciting. Like having a system that can talk to you, understand you, respond not necessarily with just one phrase, but like accomplish tasks that you wanted to accomplish. Like now it's currently in language scenarios, but I also, within 50 years definitely anticipate it could happen in like robotics, like a personal assistant next to you or something like that. Another area I'm really excited about is medicine.\n\n0:04:46.1 Interviewee: I think once we figure out all the privacy issues that surround medicine right now, and we're able to create like, clean up database, so to speak, of patients diagnoses. And I hope that it'll be enough for a machine learning model to solve like cancer as we know it and things like that. I'm just hopeful. I mean, I hope it's going to happen in 50 years and it's going to, I don't know if I want to place my bet there, but I'm hoping that would happen. So I guess in robotics, as well, as I said, one of the things that we're kind of inching there, but not quite there, but I think in 50 years, we'll solve it. 
So I think these three things: personal assistant, solving medicine and robotics, these three things.\n\n0:05:28.2 Vael: Wow. Yeah. I mean, solving robotics would be huge. Like what's an example of... could you do anything that a human could do as a robot or like less capable than that?\n\n0:05:36.0 Interviewee: I think so. I mean, it depends. It depends what you mean by human, right? I mean the... Well, if you try to drive a car for the last 20 years. We've been trying to do that, but honestly, I think this problem is really, really hard because you have to interact with other agents as well. That's kind the main thing, right? You have to interact with other humans mostly. I mean, I think interaction between robots, it's one thing, interaction between robots and robots is much easier. So I think whatever task that doesn't involve humans is actually going to be pretty useful. Well, again, actually pretty easy because it hasn't been solved yet, but I think it's much easier than solving with humans.\n\n0:06:05.8 Interviewee: And like for example organizing your kitchen, organizing your room, cleaning your room, cooking for you, I think all the things should be pretty straightforward. Again, the main issue here is that every kitchen is different, so although we can train a robot to a particular kitchen or like do some particular kitchen, like once it's presented with a novel kitchen with novel mess... mess is very personal. So it'd be harder for the robots to do. But I think that's something that would be kinda within the reach, I think.\n\n0:06:37.7 Vael: Interesting, and for like, solving cancer, for example, I imagine that's going to involve a fair amount of research per se, so do we have AIs doing research?\n\n0:06:47.2 Interviewee: So research, I want to distinguish research here because there is research in machine learning and there is research in medicine. And they are two different things. The research in medicine, and I'm not a doctor at all, but from what I understand, it's very different, in the sense that you research particular forms of cancer, very empirical research. Like hey we have... Cancer from what I understand, one of the main issues with cancer is that every cancer is more or less unique.\n\n0:07:11.8 Interviewee: It's really hard to categorize it, it is really hard to do AB testing. The main research tool that medical professionals use is AB testing, right, you have this particular group of cancers, group of people, that suffer from this particular cancer. Okay, let's just come up with a drug that you can try to put these people on trial, and do that. But because every cancer is unique, it's pretty hard to do that. So, and how to do this research is data, and that's what we need for machine learning, we need to have sufficient data such that machine learning can leverage that and utilize it. So they're now asking questions in two perspectives, one is do we need more data? Yes, absolutely.\n\n0:07:46.3 Interviewee: Moreover we not only need data, we need to have rules for which these machine learning agents, that of a company, university, would have access to this data differentially private right in the sense that this should be available to them. But is possibly private, of course privacy is a big issue. 
Which right now doesn't really happen, plus there are other bureaucratic reasons for this not to happen, like for example hospitals withholding the data because they don't want to share it and stuff like that.\n\n0:08:16.3 Interviewee: So if we can solve this problem, the research and medical part would be not necessarily... Not necessary for the machine learning. And on the machine learning side, there are also, as well, very big hurdles in the sense that current machine learning algorithms need tons of data. Like for the self-driving cars, they're still talking about how we need millions and millions of hours of cars driving on the road, and they still don't have enough. So for cancer, that's hopefully not going to be the case. Right? So hopefully we're going to come up with algorithms that work with fewer data. Like one of the algorithms is so-called few-shot algorithms, so when you have algorithms that learn on somebody's language, but when you want to apply it to a particular patient, you just need to use specific markers to adjust your model to the specific patient. So there are some advancements in this direction too but I think we are not there yet.\n\n0:09:07.1 Vael: Interesting. Cool, alright, so that's more... It's not like the AI is doing research itself, it's more that it is, like you're feeding in the data, to the similar types of algorithms that already exist. Cool, that makes sense. Alright so, I don't know if you're going to like this, but, people talk about the promise of AI, by which they mean many things but one of the things is that... The frame that I'm using right now is like having a very general capable system, such that they have the cognitive capacities to replace all current day human jobs. So whether or not we choose to replace human jobs is a different question. But I usually think of this as in the frame of like, we have 2012, we've the neural net... we have AlexNet, the deep learning revolution, 10 years later with GPT-3 which has some weirdly emergent capabilities, so it can do text generation, language translations, some coding, some math.\n\n0:09:51.9 Vael: And so one might expect that if we continued pouring all the human effort that has been going into this. And nations competing and companies competing and like a lot of talent going into this and like young people learning all this stuff. Then we have software improvements and hardware improvements, and if we get optical and quantum at the rate we've seen that we might actually reach some sort of like very general system. Alternatively we might hit some ceiling and then we'd need to do a paradigm shift. But my general question is, regardless of how we get there, do you think we will ever get to a very general AI system like a CEO or a scientist AI? And if so, when?\n\n0:10:23.6 Interviewee: So, my view on that is that it's really hard to extrapolate towards the future. Like my favorite example is I guess Elon Musk... I heard it first from Elon Musk but it's a very known thing. Is that, \"Hey we had like Pong 40 years ago, it was the best game ever created, which was Pong, it was just like pixels moving and now we have a realistic thing and VR around the corner, so of course in 100 years we will have like a super realistic simulation of everything, right? And of course in a 1,000 years we'll have everything, [hard to parse] everything.\"\n\n0:10:53.0 Interviewee: But again it doesn't work this way. Because the research project is not linear, the research progress is not linear. 
Like 100 years ago Einstein developed this theory of everything, right, of how reality works. And then yet, we hit a very big road block of how exactly it works with respect to the different scales, like micro and macro, and we're still not there, we propose different theories but it's really hard. And I think that science actually works this way pretty much all around the history it's been like that, right. You have really fast advancement and then slowing down. And in some way you have advancement in different things.\n\n0:11:26.8 Interviewee: Plus the cool thing about research is that sometimes you hit a road block that you can't anticipate. Not only are there road blocks that you maybe don't even imagine there are, but you don't even know what they could be in the future. And that's the cool part about science. And honestly, again, I think if we are indeed developing AGI soon, I think it's actually a bad sign. Honestly, I think it's a bad sign because it means that it's... It's like too easy, then I'll be really scared: okay what's next? Because if we developed some really super powerful algorithm that essentially has super cognition... better and better cognition than humans, I think that will be scary because then I don't even know. First of all, my imagination doesn't go further than that, because exactly by definition it will be smarter than me, so I don't even know how to do that. But also I think it means that my understanding of science is wrong. Another example I like is that someone said, if you're looking for aliens in the universe right now and then this person says, if we actually do discover aliens right now, it's actually a very bad sign. It's a bad sign in the sense that...\n\n0:12:29.7 Interviewee: If they're there, it means that the great filter, whatever the concept of the great filter is, right, it's not behind us, it is in front of us. Just means there's some really big disaster coming up, because actually, if aliens made it as well, this means that they also passed all the filters behind us, which means that there's some bigger filter in front of us. So I kind of belong to that camp. Like I'm... I'm kinda hoping that the science will slow down. And we'll not be able to get there. Or there's going to be something... It's not that I think that human mind is unique and we can't reproduce it. I just think that it's not as easy as we think it would be, or like in our lifetime at least.\n\n0:13:05.0 Vael: I see. So maybe not in our lifetime. Do you think it'll happen in like children's lifetime?\n\n0:13:10.1 Interviewee: Which children? Our children hopefully not. But I mean, at some point I think so. But again, I think it'll be a very different form. Humans' intellect is very, very unique, I think, and because it's shaped by evolution, shaped by specific things, specific rules. So I also kind of believe in this, in the theory that in a way computers already... They are better than us because they are faster, to start with, and then they can... Another example I really like is that if you remember the AlphaGo playing Go with Lee Sedol, like one of the best two players of Go. And there was a...\n\n0:13:43.0 Interviewee: If you remember the Netflix show, there was like in one room they actually have all the journalists and they were sitting next to Lee Sedol playing with the person that represents DeepMind. And then all the DeepMind engineers and scientists, they were in a different room. 
And in that room, when they were watching the game playing, [in] that room the computer said by the move number 30, very early in the game, it says, okay, I won. And it took Lee Sedol another like half an hour or more, another like a hundred moves to confirm that statistic. And they were... The DeepMind guys were celebrating and these guys were like all thinking about the game, how to... But the game was already lost.\n\n0:14:16.6 Interviewee: So computers are already bett--... I mean, of course it's a very constrained sandbox around the Go game. I think it's true for many things, computers are already better than us. We are more general in our sense of generality, I guess. So maybe they will go in different direction... But the world is really multidimensional and the problems that we solve are very multi-dimensional. So I think it's too simplistic to say that, then you're universally better than us, or we are clearly subset and they are superset of our cognition. It's, I don't know... I think it [hard to parse].\n\n0:14:44.9 Vael: Great. Yeah. I'm going to just hark back to the original point, which was when do you think that we'll have systems that are able to like be a CEO or a scientist AI?\n\n0:14:55.0 Interviewee: Okay. Yeah, sure. Again, sorry-- sorry for not giving you a simple answer. Maybe that's what you're looking for, but let me know if this is... (Vael: \"Nah, it's fine,\")\n\n0:15:06.5 Interviewee: Yeah. I don't know. In a way like... The work that like accountant does right now, it's very different than what accountant did 30 years ago. Did we replace it? No, we didn't. We augmented work. We changed the work of accountant so that the work is now simpler. So replacing completely accountant, in a way, yes, we also... Because the current, the set of tasks that accountant did 30 years ago, it's automated already. Do we still need accountants? Yes. So same here. Maybe the job that CEO is doing right now in 30 or 40 years, everything that right now, as of today, CEO is doing in 40, 30 years, we will still... The computer will do it. Would we still need the human there? Yes. If this answers your question.\n\n0:15:45.1 Vael: Will we need the humans? I can imagine that we can have like, eventually AI might be good enough that we could have it do all of our science and then, or it's just so much smarter than us then we're just like, well, you're much faster at doing science. So I'll just let you do science.\n\n0:15:57.8 Interviewee: So let me rephrase your question a bit. So what you're saying right now is it is a black box right now, that right now, that's CEO right now, that's a CEO job. That's what CEO is doing. There's some input, then some output. So what you're saying that now we can automate it. And now the input and output will also feed through something to computer let's say, but then what is, what would be the... We'll have to refine the input and output then, because it still should serve humans. Right?\n\n0:16:21.9 Interviewee: So previously you need to have drivers, like for the tram you're having to have a driver. Now instead of drivers, you have computers, but you still need to have a person to supervise the system in a way. Or, but then you're talking about even that being automated. But in same time, you cannot... Like the system, like for example, self-driving car, it's become a tool for someone else. So you're removing the work of a driver, but you replace it with a system that now it's called something else. Like Uber. Previously, you had to call a taxi. 
Now you have an app to do it for you.\n\n0:16:49.7 Interviewee: There's an algorithm that does it for you. So the system morphs into something else. So same thing here. I think as CEOs, in a sense, they might be replaced, but the system also would change as well. So it won't be the same. It won't be like, okay, we'll have a Google. And there is a CEO of Google who is like a robot. Now the Google will morph in a way that the task the CEO is doing would be given to the computer. But the Google will still... Like also, by the way, Google, even Google. Google works on its own. In fact, if you, right now, fire all the employees, it'll still work for a few days. Everything that we do, we do for the future. Like it's a pretty unique moment in history, right? 'Cause previously, like before the industrial revolution, you had to do things yourself. Then with the factories, well then, okay, you're helping the factory to do its work. And now there is a third wave, whatever, fourth wave, industrial revolution. We don't even do anything. It's on... In a way Google doesn't have a CEO, the Google CEO doesn't work for the today Google. Google CEO works for the Google in a year which is... So Google work... Google is already that. Google doesn't have a CEO. So that's what I mean.\n\n0:17:56.8 Vael: Alright. Uh, I'm going to do the science example, because that feels easier to me, but like, like, okay. So we're like doing... We're attempting to have our AI solve cancer and normally humans would do the science. And be like, okay, cool I'm going to like do this experiment, and that experiment, and that experiment. And then at some point, like we'll have an AI assistant and at some point we'll just be like, alright, AI solve cancer. And then it will just be able to do that on its own. And it's still like serving human interests or something, but it is like kind of automated in its own way. Okay. So do you think that will... When do you think that will happen?\n\n0:18:34.0 Interviewee: The question is how with... The question is, can this task be, sorry I know you're asking about the timeline and I want to be, I know... I don't want to ramble too much. But I think I want to be specific enough, what kind of problem we're talking about. If we're talking about an engineering problem, then we're talking about the timeframe of our lifetime. If you're talking about a problem that involves more creativity, like for example, come up with a new vaccine for the new coronavirus? Sorry, it's automatic. I think that work we could do in the 20... 20-30 years. Right, because we have tools, we know engineering tools, what needs to be done, where you can do ABC, you're going to get D. Once you have D you need to pass it through some tests and you're going to get E and that you can pretty much automate. I think this we can do in 20-30 years. Solving cancer, I just don't know enough. How much creativity needs to be there. So more harder, probably, yeah. Yeah, yeah.\n\n0:19:21.0 Vael: Yeah, no, no, that's great. And yeah, and you don't know when we'll... For... So probably more than our lifetime, or more than 30 years at least for creating those?\n\n0:19:28.9 Interviewee: Mm-hmm. Mm-hmm.\n\n0:19:30.1 Vael: Alright, great. Cool. Alright, I'm moving on to my next set of questions. So imagine we have a CEO AI. This is... I'm still going back to the CEO AI even though... (Interviewee: \"Sure, of course.\")\n\n0:19:40.8 Vael: And I'm like, \"Okay CEO AI, I want you to maximize profits and try not to run out of money or exploit people or... 
try to avoid side effects.\" And this currently is very technically challenging for a bunch of reasons, and we couldn't do it. But I think one of the reasons why it's particularly hard is we don't actually know how to put human values into AI systems very well, into the mathematical formulations that we can do. And I think this will continue to be a problem. And maybe, and... I think it seems like it would get worse in the future as like the AI is optimizing over, kind of more reality, or a larger space. And then we're trying to instill what we want into it. And so what do you think of the argument: highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous?\n\n0:20:28.3 Interviewee: Okay, so this is a quite big misconception of the public, and I think actually, it's rightfully so, because I am actually working on this... So what you said right now is like the famous paperclip example, right? Which is going to turn the whole world into paperclips, and that's kind of what you said more or less, right? So the problem here is that current systems, a lot of them, it's true, they work on a very specific one-dimensional objective. There's a loss function that we're trying to minimize. And it's true, like GPT-3 and all these systems, currently they are there, they have only one number they want to minimize. And this is, if you think about it, is way too simplistic for this very reason, right? Exactly, because if you want to just maximize the number of paperclips, you're just going to turn the whole world into the paperclip machine factory. And that's the problem. But the reality is much more complicated. And in fact, we are moving there, like my research was, we're moving away from that. We're trying to understand, we're trying to understand if first of all intelligence can emerge on its own without it being minimized explicitly.\n\n0:21:20.2 Interviewee: And second of all, these pitfalls, the pitfalls where you're just minimizing one number, and of course that's not going to work. So answering your question, yeah, yeah I think it will fail, because it will face the real-world scenario unless it has specific checks and balances. For example, there's also the online learning paradigm, right, where you basically learn from every example as they come in the time series. I think these systems need to be revamped to work at a larger scale obviously, but this is the kind of system that potentially could work, where you don't just minimize, maximize one objective, but you have a set of objectives or you have a set of actors with their own goals and their intelligence is emerging from them. Or you learn online. You just fail and you learn and you forget what you learned, you have... learn in a continuous fashion. So like all of these things that we as humans do, could be applicable for AI as well. Like we are the humans, we don't have... You don't spend your day like, go to sleep, okay, today, day was like 26. You don't do that. And even if you do that, you probably will have multiple numbers. This was 26, this was 37. Okay, it doesn't matter the day.\n\n0:22:19.3 Vael: Yeah, that makes sense. So, one thing I'm kind of curious about is there's... like, when you say, it won't work, just optimizing one number. And will it not work in the sense that, we're trying to advance capabilities, we're working on applications, Oh, turns out we can't actually do this because we can't put the values in. 
And so it's just going to fail, and then it will get handled as part of the economic process of progress. Or do you think we'll go along pretty well, we'll go along fine, things seem fine, things seem like mostly fine. And then like if things just diverge, kind of? Kind of like how we have recommender systems that are kind of addictive even though we didn't really mean to, but people weren't really trying to make it fulfill human values or something. And then we'd just have the sort of drift that wouldn't get corrected unless we had people explicitly working on safety? Yeah, so, yeah, what do you think?\n\n0:23:06.4 Interviewee: I think people who work on safety, and you could see it yourself, people who work on safety, people who work on the fairness, people who work on all the things, checks and balances, so to speak, right? They are becoming more and more prominent. Do I know it's enough? No, it's not enough, obviously we need to do more for different reasons. For obviously DEI, but also for just privacy and safety and other things. And also for things we just talked about, right? Just because it's true that the fact that... By the way, the fact that the current systems like GPT-3 or many other algorithms minimize only one value, it's not a feature, it's a bug. It's just convenience because we use the machine learning optimization algorithms that work in this way, and we just don't have other ways to do that. And I hope in the future other things would come up.\n\n0:23:46.0 Interviewee: In answering your question, I don't think it will necessarily diverge, we'll just hit a roadblock and we're already hitting them. You've heard about AlexNet like 10 years ago, and now, sure, we have cute applications and like filters on the phone, but like, did AI actually enter your life, daily life? Well, not true, I mean, you have better phones, they're more capable, but actually AI, like in terms of what we all dream about, does it enter your life? Well, not really. We can live without it, right? So, we're already hitting all these roadblocks, even like medical applications. Google 10 years ago, claimed they'd solved like the skin cancer when they can detect it, and it didn't... It didn't really see the light of day except for some hospitals in India, unfortunately. So we're already hitting tons of roadblocks, and I don't think it's... It's like for this reason precisely, because when you face reality, it just doesn't work as well as you expect for multiple reasons.\n\n0:24:30.4 Vael: Interesting. Cool. So do you think that others... This... Have you heard of the alignment problem? Question one.\n\n0:24:38.1 Interviewee: No.\n\n0:24:40.7 Vael: No. Cool, all right, so you've definitely heard of AI safety, right?\n\n0:24:43.5 Interviewee: Mm-hmm.\n\n0:24:43.9 Vael: Yeah. Alright. So one of the definitions of alignment is building models that represent and safely optimize hard-to-specify human values, alternatively ensuring that AI behavior aligns with system designer intentions. So I'm... So one of the questions, the question that I just kind of asked, was trying to dig at, do you think that the alignment problem per se, of trying to make sure that we are able to encode human values in a way that AIs can properly optimize, that that'll just kind of be solved as a matter of course, in that we're going to get stuck at various points, then we're going to have to address it. Or do you think that'll just like... We will... It won't get solved by default? 
Things will continue progressing in capabilities, and then we'll just have it be kind of unsafe, but like, \"Uh, you know, it's good enough. It's fine.\"\n\n0:25:21.1 Interviewee: I think a bit of both. There's so much promise and so much hype and so much money in pushing AI forward. So I think a lot of companies will try to do that. These various... We live in a democracy, fortunately or unfortunately, or actually our society lives more by the value of the dollar, unfortunately. At least in some countries, things are valued by progress. And especially companies, they have to progress, they have to advance, and this is one of the easiest ways to advance. But I think some companies may be bad actors, whatever, they will try to push it to the limit. But these questions are ultimately unsolved, in the way these current systems are designed, for the reasons we discussed. So I think it will be a bit of both. Some companies will back down, some companies will try to push it to the limit, so we'll see. It depends on applications as well. I mean, some applications are safe. If you... I'm sorry, but... Sorry to bring it up, but for example, there was a case of AI at Microsoft when they released the bot and the bot started cursing. Which is okay. It's a cute example, they took a bit of a PR loss, it's fine, but it's different from the car crashing you into a tree. So it depends, depends on the application.\n\n0:26:27.4 Vael: Yeah, it seems definitely true. Alright, so the next argument is focusing on our CEO AI again, which can do multi-step planning, and it has a model of itself in the world. So it's modeling other people modeling it, because it feels like that's pretty important for having any sort of advanced AI that's acting as a CEO. So the CEO is making plans for the future, and it's noticing, as it's making plans for its goal of maximizing profits with constraints, that some of its plans fail because it gets shut down. So we built this AI so that it has the default thing where it has to check with humans before executing anything, because that seems like a basic safety measure. And so the humans are looking for a one-page memo on the AI's decision. So the AI is thinking about writing this memo, and then it notices at some point that if it just changes the memo to not include all the information, then maybe the AI will be... I mean, then the humans will be more likely to approve it, which means it would be more likely to succeed in its original goal.\n\n0:27:22.8 Vael: And so, the question here is... So this is not building self-preservation into the AI, it's just the AI as an agent that's optimizing for some goal, and self-preservation is coming up as an instrumental incentive. So, what do you think of the argument, \"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals and this is dangerous?\"\n\n0:27:42.9 Interviewee: Well, you already put intelligence and the human level into the sentence as well.\n\n0:27:48.3 Vael: Yes.\n\n0:27:48.6 Interviewee: Yeah, and it kind of... I'm already against that, because I don't think the system would actually behave in a sneaky way to avoid that. Well, first of all, even current AI systems are highly uninterpretable. It's very hard to interpret what exactly is going on, right? But they still work within the bounds, right? 
So the example I gave you was Lee Sedol: in the Go game, the AI already knew it had won, but it couldn't explain why it won, and it took humans to parse it. Or another example that I like: in chess, the AI at some point put a queen in the corner, something you just never do as a human. And it couldn't explain it, obviously. But it never picked up the board and crashed it on the... It works within the bounds.\n\n0:28:29.1 Interviewee: So of course, if the bounds of this program allow the program to cheat in a way and withhold information, then yes, but... Again, it kinda goes hand in hand. On one side it's really hard to interpret, so this one-pager that the AI has to provide, it also has to be curated. And it will not include all information, because it's impossible to digest all the computer's memory and knowledge into one page. So on the one side, this page will always be limited, with all this lossy compression of the state of the computer. But on the other side, I don't think computers can cheat on purpose on this page. Actually they might, depending on the algorithm[?], again, but I think it's a valid concern. That's an easy question, not an easy answer; it depends on the system, depends how you design it.\n\n0:29:09.5 Vael: Yeah, so I think I'm trying to figure out what exactly is in this system's design. So one thing is it has to be very capable, and it has to... I want it to be operating over reality in the way that I expect CEOs would, so its task is interacting... It's doing online learning, I expect, and it's interacting with actual people. So it's giving them text and it's taking in video and text, and interacting with them like a CEO would. And it does have to, I think, have a model of itself in the world in order for this to happen and to model how to interact with people. But if we have AI to that extent-- and I kind of think that eventually we'll develop AI with these capabilities. I don't know how long it will take, but I assume that these are commercially valuable enough that people will eventually have this sort of system that... This is an argument that... and I don't know if this is true, but any agent that's optimizing really hard for a single goal will at some point, if it's smart enough, figure out that it should try to eliminate things that are going to reduce the likelihood of its plan succeeding, which in this case may be humans in the way.\n\n0:30:10.5 Interviewee: I think you're right. Actually, while you were talking I also came up with this example that I really like, maybe you saw it as well, where there's an agent that optimized solving a particular race game where you control the car. At some point it found a loophole, you remember [hard-to-parse] example, and it just goes in circles. And the answer to that is you need to have explicit test goals. But in online learning settings, it's really hard. Plus, again, coming back to question number one, which is like, the system is so large at some point, you're just not able to cover all the cases. So yeah, yeah, I think so, I think it's possible, especially today in an online learning fashion, when you can't really have a complete system where you have all possible integration tests and then you ship it. Once a system automatically learns and updates, then it becomes... That could be a problem. Yeah, I agree.\n\n0:30:52.8 Vael: Yeah. 
So this boat driving race is actually one of the canonical examples of the alignment problem, as it were, which is like-- you picked exactly the right example. There's another part of the... so that's one version of \"outer alignment\", which is where the system designer isn't able to input what they want into the system well enough, which I think gets harder as the AI gets more powerful. And then there's an \"inner alignment\" kind of issue that people are hypothesizing might happen, which is where you have an optimizer that spins up another optimizer inside of it, where the outer optimizer is aligned with what the human wants, but it now has this inner optimizer. The canonical example used here is how evolution is in some sense the top-level optimizer, and it's supposed to create agents that are good at inclusive reproductive fitness, and instead we have humans. And we are not optimizing very hard for inclusive reproductive fitness, and we've invented contraceptives and stuff, which is not very helpful. And so people are worried that maybe that would happen in very advanced AIs too, as we get more and more generalizable systems.\n\n0:31:51.0 Interviewee: Yeah. I think that there are two things to say. The first is what you said about the loop within the loop. Now we're talking about exactly what kind of a system an algorithm can create, right? Because, for example, if you look at current machine learning, we do know that convolutional layers are good at computing equivariances, like translational equivariances. If you move the object, they're supposed to be indifferent, but [hard to parse]-connected layers don't do that. So this [hard to parse] behavior you described? You need to have a system, first of all, that's capable enough for this kind of behavior. That's a big if, but okay. Once we get there, if we get there, the second question is, okay, can you cover for that? Can you ensure that these cases are eliminated completely from existence?\n\n0:32:32.4 Interviewee: And the CEO example is maybe a good example. For me, the interesting example that I can definitely see and envision is not even a CEO, but, for example, an application that controls your behavior in a way. Like, for example, curating your Tinder profile or curating your inbox sorting. And controlling you through that. Then yeah, for sure. Yeah. If you don't control for everything, it can be smarter than us and kind of figure out, back-engineer, how humans work, because we're not [hard to parse] than that. And curate the channel for us and even... Get us to a local minimum we might not want, but that will still maximize its profits, whatever profit means for the computer.\n\n0:33:07.2 Vael: Okay. Yeah. So I'm one of those people who is really concerned about long-term risk from AI, because I think there's some chance that if we continue along the developmental pathway that we have so far, we won't solve the alignment problem, we won't figure out a good way to encode human values properly and include the human in the loop properly. Like one of the easiest solutions here is trying to figure out how to put a human in that loop with an AI, even if the AI is vastly more intelligent than a human. 
And so people are like, Oh, well maybe if you just train an AI to like work with the human and translate the big AI so that like the human understands it, and this is interpretability, and then you have like a system who's training that system and maybe we can recursively loop up for something. But.\n\n0:33:52.2 Interviewee: The problem here, why they see, like, for example, let's take a very specific example. There's AI systems that, for example, historically curated, curated by humans, obviously without bad intent, but it gives bad credit scores to black population say, like people of color. That's really bad behavior. And this behavior is kinda easy to check because you have statistics and you look at statistics. It's one, let's call it one hop away. So you take the data, you take statistics, is done. The second hop, the two hops away would be: it creates dynamics that you can check, not right away, but later that potentially show something like that. It would be harder for humans to check. You can also, if you think about it for a while, you can come up with like three hop, right. Something that creates something that creates something that it does. So it's much easier and much harder to check. You don't know until it happens, and that's the point. So you have this very complicated dynamic you can't... There's not even flags that you can check, red flags that you can check in your model. That might be the issue.\n\n0:34:49.3 Vael: Yeah. There's also... Yeah. Interpretability seems really... It seems really important, especially as very, very intelligent systems, and we don't know how to do that. So possible versions of things going badly, I think, are... So if you have an AI system that's quite powerful and it's going to be instrumentally incentivized to not let itself be modified, then that means that you don't have that many chances to get it right in some sense, if you're dealing with a sufficiently intelligent system. Especially also because instrumental incentives are to acquire resources and influence and so... and also improve itself. Which could be a problem maybe of recursive self-improvement. And then it, like, can get rid of humans very quickly if it wanted to, via like synthetic biology. Another kind of-- this is not as advanced AI, but what if you put AI in charge of the manufacturing systems and food production systems, and there's some sort of correlated failure. And then we have misuse, and then the AI assisted-war as, like, various concerns about AI. What do you think about this?\n\n0:35:37.9 Interviewee: Yeah. So one thing I want to say is that the AI system are kinda too lazy. In a sense that the reason why this loophole worked with the car is because this solution is easier in some mathematical way to find the proper solution.\n\n0:35:52.7 Interviewee: So one thing humans can address this problem is just looking at... Use a supervisor as a human, or with a test, which is pretty much like supervisors-- check all the check and balances. We can create the system for which the, maybe mathematical even, finding proper solution is easier for the computer than finding its loopholes. That's one thing I want to say. And the second thing I want to say, which is now coming to nature because now we're talking about the rules of nature. But if it so happened that we design a system for which finding the loophole... for example, we have laws of physics-- we also have laws in behavior. 
Like if you have an algorithm that, you know, wants to organize human behavior or something, there are also laws of behavior. So it might be an interesting question: once we have this algorithm, we can get a bunch of sociologists, for example, people who are familiar with it, to study this algorithm and figure out if, for example, loopholes are actually more probable than normal behavior, quote unquote. So for example, being a bad person is better than being a good person-- or easier. Not necessarily better, but easier than being a good person. Which we kind of see in society sometimes.\n\n0:36:51.5 Interviewee: So it's curious whether an algorithm will actually discover that. And finding this loophole with the car is easy because it's there, you just need to move a little bit. So for a computer it's easy to find, like a local minimum that's really easy to fall into. But if you design a system for which these loopholes are hard, that might be easier. Or the question is, can we define a problem for which the proper solution is easier?\n\n0:37:11.9 Vael: Yeah. And I think the problems are going to get harder and harder. Like if you have an AI that's doing a CEO thing, then I imagine, just as humans figure out, there are many loopholes in society, many loopholes in how you achieve goals that are much cheaper in some sense. So I do think it's probably going to have to be designed into the system rather than being in the problem per se, as the problems get more and more difficult. Yeah.\n\n0:37:32.1 Vael: So there's this community of people working on AI alignment per se and long-term AI risk, who are kind of trying to figure out how you can build... How you can solve the alignment problem, and how you can build optimization functions or however you need to design them, or interpretability or whatever the thing is, in order to have an AI that is more guaranteed, or less likely, to end up acting on this kind of instrumental incentive to deceive or not want to... not self-preservation, maybe get rid of humans. So I think my question right now is, and there's money in this space now, there's a lot of interest in it and I'm sure you've heard of it-- I mean, I'm not sure you've heard about it, but there's... Yeah. There's certainly money and attention these days now that there wasn't previously. So what would cause you to work on these problems?\n\n0:38:12.4 Interviewee: ...Well, ultimately a lot of people are motivated by actually just very few things, and one of them is kids. Kids, because you want to have a good future for yourself and the kids.\n\n0:38:24.8 Interviewee: You want to live with better and better humans, a better and better society, better and better everything. So that, and actually it hits home. The examples we discussed even during this call could be pretty grim, if we don't make this right. So putting resources there I think is really important. If people, before they came up with the atomic bomb, had figured out the situations we are facing now, they might not even have come up with the atomic bomb, or would have done it in a safer way. Or if people had known about Chernobyl before it happened, obviously they would have made it better. So having this kind of foresight-- even though we don't know what's going to happen in the future-- putting the resources there, I think, is definitely a smart move.\n\n0:38:57.9 Vael: Got it. And what would cause you specifically to work on this?\n\n0:39:03.2 Interviewee: Yes, I know, you asked this question. Yeah, well again, particular problems. 
I mean, it's an interesting problem. For me, actually... Since I work in optimization, I like to have well-formulated problems. And making sure this one is... this problem is more... right now it's kind of vague. I mean, even now as we discuss it. I agree with you that this problem is valid. It makes sense. It exists. But it's still vague, like how you study it. My PhD was also about interpretation, about coming up with a way to interpret data.\n\n0:39:30.2 Interviewee: In fact, like maybe it's... I don't want to spend too much time on it, but basically the idea is to visualize the data. If you have very high-dimensional data, you want to visualize it. But it's very lossy. You just visualize something, you just do something and it doesn't represent everything. And in a way it was actually well-formulated, because there is a mathematical formulation to minimize. And of course it comes with conditions like [hard to parse] and loss and stuff, but in a way it's there, the problem is defined. So the same here. Like in a way, there's a mathematical apparatus. ...Maybe actually I'm going to be the one developing it as well. So I'm not saying, \"come give me the one I'm going to work on!\" I think that's a [hard to parse - \"would be direct\"?], so a problem that I would be excited about.\n\n0:40:06.1 Vael: Yeah. And I mean, I think what this field really needs right now is someone to specify the problem more precisely. Just because it's like, oh, this is a future system, it's at least 50 years out-- well, I don't know about at least-- it's far away. It's not happening immediately and we don't have very good frameworks for it. And so that makes it hard to do research on. Cool. Alright. Well, I'll send you some resources afterwards if you feel like looking into it, but if not, regardless, thank you so much for doing this call with me.\n\n0:40:33.1 Interviewee: Yeah, I appreciate it. Thank you for your time, it was really fun.\n", "url": "n/a", "docx_name": "NeurIPSorICML_q243b.docx", "id": "eb5a51d9ddce4a6dde47abf881f7a4a6"} {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": [], "date_published": "n/a", "text": "Table of Contents\n\nInterview Information\n\n Individually-selected\n\n NeurIPS-or-ICML\n\nIntended Script (all interviews)\n\nPost-Interview Resources Sent To Interviewees\n\n Master list of resources\n\nInformal Interview Notes\n\n Thoughts from listening to myself doing these interviews\n\n Content analysis\n\n \n\nInterview Information\n\nThese interviews are associated with the LessWrong Post: Transcripts of interviews with AI researchers.\n\n(Please do not try to identify any interviewees from any remaining peripheral information.)\n\nIndividually-selected\n\n“Five of the interviews were with researchers who were informally categorized as ‘particularly useful to talk to about their opinions about safety’ (generally more senior researchers at specific organizations).”\n\n- 7ujun\n\n- zlzai\n\n- 92iem\n\n- 84py7\n\n- w5cb5\n\nNeurIPS-or-ICML\n\n“Six of the interviews were with researchers who had papers accepted at NeurIPS or ICML in 2021.”\n\n- a0nsf (this is the interview in which I most straightforwardly get through my questions)\n\n- q243b\n\n- 7oalk\n\n- lgu5f\n\n- cvgig\n\n- bj9ne (language barriers, young)\n\n \n\nIntended Script (all interviews)\n\nThere was a fixed set of questions that I was attempting to walk people through, across all of the interviews. 
It’s a sequence, so I generally didn’t move onto the next core question until I had buy-in for the previous core question. The core questions were: “do you think we’ll get AGI” (if yes, I moved on; if not I interacted with the beliefs there, sometimes for the entire interview), “[alignment problem]”, and “[instrumental incentives]”. I was reacting to the researchers’ mental models in all cases. I was trying to get to all of the core questions during the allotted time, but early disagreements often reappeared if the interviewee and I didn’t manage to reach initial agreement. I prioritized the core questions, and brought other questions up if they seemed relevant.\n\nThe questions (core questions are highlighted):\n\n- “What are you most excited about in AI, and what are you most worried about? (What are the biggest benefits or risks of AI?)”\n\n- “In at least 50 years, what does the world look like?”\n\n- “When do you think we’ll get AGI / capable / generalizable AI / have the cognitive capacities to have a CEO AI if we do?”\n\n - Example dialogue: “All right, now I'm going to give a spiel. So, people talk about the promise of AI, which can mean many things, but one of them is getting very general capable systems, perhaps with the cognitive capabilities to replace all current human jobs so you could have a CEO AI or a scientist AI, etcetera. And I usually think about this in the frame of the 2012: we have the deep learning revolution, we've got AlexNet, GPUs. 10 years later, here we are, and we've got systems like GPT-3 which have kind of weirdly emergent capabilities. They can do some text generation and some language translation and some code and some math. And one could imagine that if we continue pouring in all the human investment that we're pouring into this like money, competition between nations, human talent, so much talent and training all the young people up, and if we continue to have algorithmic improvements at the rate we've seen and continue to have hardware improvements, so maybe we get optical computing or quantum computing, then one could imagine that eventually this scales to more of quite general systems, or maybe we hit a limit and we have to do a paradigm shift in order to get to the highly capable AI stage. Regardless of how we get there, my question is, do you think this will ever happen, and if so when?”\n\n- “What do you think of the argument ‘highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous’?”\n\n - Example dialogue: “Alright, so these next questions are about these highly intelligent systems. So imagine we have a CEO AI, and I'm like, \"Alright, CEO AI, I wish for you to maximize profit, and try not to exploit people, and don't run out of money, and try to avoid side effects.\" And this might be problematic, because currently we're finding it technically challenging to translate human values preferences and intentions into mathematical formulations that can be optimized by systems, and this might continue to be a problem in the future. 
So what do you think of the argument \"Highly intelligent systems will fail to optimize exactly what their designers intended them to and this is dangerous\"?\n\n- “What do you think about the argument: ‘highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous’?”\n\n - Example dialogue: “Alright, next question is, so we have a CEO AI and it's like optimizing for whatever I told it to, and it notices that at some point some of its plans are failing and it's like, \"Well, hmm, I noticed my plans are failing because I'm getting shut down. How about I make sure I don't get shut down? So if my loss function is something that needs human approval and then the humans want a one-page memo, then I can just give them a memo that doesn't have all the information, and that way I'm going to be better able to achieve my goal.\" So not positing that the AI has a survival function in it, but as an instrumental incentive to being an agent that is optimizing for goals that are maybe not perfectly aligned, it would develop these instrumental incentives. So what do you think of the argument, \"Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals and this is dangerous\"?”\n\n- “Have you heard of the term “AI safety”? And if you have or have not, what does that term mean for you?”\n\n- “Have you heard of AI alignment?”\n\n- “What would motivate you to work on alignment questions?”\n\n- “If you could change your colleagues’ perception of AI, what attitudes/beliefs of theirs would you like to change?”\n\n- “What are your opinions about policy oriented around AI?”\n\nI also had content prepared if we got to the end of the interview, based on Clarke et al. (2022), RAAPs, some of Critch’s content on pollution, and my general understanding of the space. My notes: “Scenarios here are about loss of control + correlated failures… can also think about misuse, or AI-assisted war. Also a scenario where the AI does recursive-self-improvement, and ends up actually able to kill humans via e.g. synthetic biology or nanotechnology or whatever, pollution.”\n\n \n\nPost-Interview Resources Sent To Interviewees\n\nI sent most interviewees resources after the interviews.\n\n- I usually floated the idea of sending them resources during the interview, and depending on their response, would send different amounts of resources.\n\n- I did not send resources if the interviewee seemed like they would be annoyed by them.\n\n- I only sent a couple of resources if they seemed not very open to the idea.\n\n- For people who were very interested, I often sent them different content that was more specific to them getting involved. These were the people who I sometimes sent the EA / Rationalist material at the end– I very rarely included EA/Rationalist-specific content in emails, only if they seemed like they’d be very receptive.\n\nHere’s my master list of notes, which I selected from for each person based on their interests. I sometimes sent along copies of Human Compatible, the Alignment Problem, or the Precipice.\n\nMaster list of resources\n\nHello X,\n\nVery nice to speak to you! As promised, some resources on AI alignment. I tried to include a bunch of stuff so you could look at whatever you found interesting. 
Happy to chat more about anything, and thanks again!\n\nIntroduction to the ideas:\n\n- The Most Important Century and specifically \"Forecasting Transformative AI\" by Holden Karnofsky, blog series and podcast. Most recommended for description of AI timelines\n\n- Introduction piece by Kelsey Piper (Vox)\n\n- A short interview from Prof. Stuart Russell (UC Berkeley) about his book, Human-Compatible (the other main book in the space is The Alignment Problem, by Brian Christian, which I actually like more!)\n\nTechnical work on AI alignment:\n\n- Some empirical work by DeepMind's Safety team about the alignment problem\n\n- Empirical work by an organization called Anthropic (mostly OpenAI's old Safety team) on alignment solutions\n\n- Podcast (and transcript) by Rohin Shah, describing the state of AI value alignment (probably want the first half or so)\n\n- Talk (and transcript) by Paul Christiano describing the AI alignment landscape in 2020\n\n- Alignment Newsletter for alignment-related work\n\n- A much more hands-on approach to ML safety, focused on current systems\n\n- Interpretability work aimed at long-term alignment: Elhage (2021), by Anthropic and Olah (2020)\n\n- Ah, and one last report, which outlines one small research organization's (Alignment Research Center) research direction and offers prize money for solving it: https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals\n\nIntroduction to large-scale, long-term risks from humanity-- including \"existential risks\" that would lead to the extinction of humanity:\n\n- The first third of this book summary, or the book The Precipice, by Toby Ord (not about AI particularly, more about long-term risks)\n\n Chapter 3 is on natural risks, including risks of asteroid and comet impacts, supervolcanic eruptions, and stellar explosions. Ord argues that we can appeal to the fact that we have already survived for 2,000 centuries as evidence that the total existential risk posed by these threats from nature is relatively low (less than one in 2,000 per century).\n\n Chapter 4 is on anthropogenic risks, including risks from nuclear war, climate change, and environmental damage. Ord estimates these risks as significantly higher, each posing about a one in 1,000 chance of existential catastrophe within the next 100 years. However, the odds are much higher that climate change will result in non-existential catastrophes, which could in turn make us more vulnerable to other existential risks.\n\n Chapter 5 is on future risks, including engineered pandemics and artificial intelligence. Worryingly, Ord puts the risk of engineered pandemics causing an existential catastrophe within the next 100 years at roughly one in thirty. With any luck the COVID-19 pandemic will serve as a \"warning shot,\" making us better able to deal with future pandemics, whether engineered or not. Ord's discussion of artificial intelligence is more worrying still. The risk here stems from the possibility of developing an AI system that both exceeds every aspect of human intelligence and has goals that do not coincide with our flourishing. Drawing upon views held by many AI researchers, Ord estimates that the existential risk posed by AI over the next 100 years is an alarming one in ten.\n\n Chapter 6 turns to questions of quantifying particular existential risks (some of the probabilities cited above do not appear until this chapter) and of combining these into a single estimate of the total existential risk we face over the next 100 years. 
Ord's estimate of the latter is one in six.\n\n- How to Reduce Existential Risk by 80,000 Hours or \"Our current list of pressing world problems\" blog post\n\nGovernance:\n\n- AI Governance: Opportunity and Theory of Impact, by Allan Dafoe and GovAI generally\n\n- AI Governance: A Research Agenda, by Allan Dafoe and GovAI\n\n- The longtermist AI governance landscape: a basic overview if you're interested in getting involved, also more personal posts of how to get involved including Locke_USA - EA Forum\n\n- The case for building expertise to work on US AI policy, and how to do it by 80,000 Hours\n\nHow AI could be an existential risk:\n\n- AI alignment researchers disagree a weirdly high amount about how AI could constitute an existential risk, so I hardly think the question is settled. Some plausible ones people are considering (from the paper):\n\n- \"Superintelligence\"\n\n - A single AI system with goals that are hostile to humanity quickly becomes sufficiently capable for complete world domination, and causes the future to contain very little of what we value, as described in \"Superintelligence\". (Note from Vael: Where the AI has an instrumental incentive to destroy humans and uses its planning capabilities to do so, for example via synthetic biology or nanotechnology.)\n\n- Part 2 of “What failure looks like”\n\n - This involves multiple AIs accidentally being trained to seek influence, and then failing catastrophically once they are sufficiently capable, causing humans to become extinct or otherwise permanently lose all influence over the future. (Note from Vael: I think we might have to pair this with something like \"and in loss of control, the environment then becomes uninhabitable to humans through pollution or consumption of important resources for humans to survive.\")\n\n- Part 1 of “What failure looks like”\n\n - This involves AIs pursuing easy-to-measure goals, rather than the goals humans actually care about, causing us to permanently lose some influence over the future. (Note from Vael: I think we might have to pair this with something like \"and in loss of control, the environment then becomes uninhabitable to humans through pollution or consumption of important resources for humans to survive.\")\n\n- War\n\n - Some kind of war between humans, exacerbated by developments in AI, causes an existential catastrophe. AI is a significant risk factor in the catastrophe, such that no catastrophe would have occurred without the developments in AI. The proximate cause of the catastrophe is the deliberate actions of humans, such as the use of AI-enabled, nuclear, or other weapons. See Dafoe (2018) for more detail. (Note from Vael: Though there's a recent argument that it may be unlikely for nuclear weapons to cause an extinction event, and instead it would just be catastrophically bad. One could still do it with synthetic biology though, probably, to get all of the remote people.)\n\n- Misuse\n\n - Intentional misuse of AI by one or more actors causes an existential catastrophe (excluding cases where the catastrophe was caused by misuse in a war that would not have occurred without developments in AI). See Karnofsky (2016) for more detail.\n\n- Other\n\nOff-switch game and corrigibility\n\n- Off-switch game and corrigibility paper, about incentives for AI to be shut down. 
This article from DeepMind about \"specification gaming\" isn't about off-switches, but also makes me feel like there's currently maybe a tradeoff in task specification, where building more generalizability into a system will result in novel solutions but less control. Their follow-up paper, where they outline a possible research direction for this problem, makes me feel like encoding human preferences is going to be quite hard, as does all of the other discussion in AI alignment, though we don't know how hard the alignment problem will be.\n\nThere's also a growing community working on AI alignment:\n\n- The strongest academic center is probably UC Berkeley's Center for Human-Compatible AI. Mostly there are researchers distributed at different institutions, e.g. Dylan Hadfield-Menell at MIT, Jaime Fisac at Princeton, David Krueger in Oxford, Sam Bowman at NYU, Alex Turner at Oregon, etc. Also, a good portion of the work is done by industry / nonprofits: Anthropic, Redwood Research, OpenAI's safety team, DeepMind's Safety team, ARC, independent researchers in various places.\n\n- There is money in the space! If you want to do AI alignment research, you can be funded by either Open Philanthropy (students, faculty-- one can also just email them directly instead of going through their grant programs) or LTFF or FTX-- this is somewhat competitive and you do have to show good work, but it's less competitive than a lot of sources of funding in academia.\n\n- If you wanted to rapidly learn more about the theoretical technical AI alignment space, walking through this curriculum is one of the best resources. A lot of the interesting theoretical stuff is happening online, at LessWrong / Alignment Forum (Introductory Content), since this field is still pretty pre-paradigmatic and people are still working through a lot of the ideas.\n\nThere are also two related communities who care about these issues, who you might find interesting:\n\n- Effective Altruism community, whose strong internet presence is on the EA Forum. Longtermism is a concept they care a lot about, and you can schedule a one-on-one coaching call here.\n\n- Rationalist community-- the best blog from this community is from Scott Alexander (first blog, second blog), and they're present on LessWrong. Amusingly, they also write fantastic fanfiction (e.g. Harry Potter and the Methods of Rationality) and I think some of their nonfiction is fantastic.\n\nHappy to chat more about anything, and good to speak to you!\n\nBest,\n\nVael\n\n \n\nInformal Interview Notes\n\nThoughts from listening to myself doing these interviews\n\n- There are obviously strong differences in what content gets covered, based on the interviewees’ opinions and where they’re at. I didn’t realize that another important factor in what content gets covered is the interviewee’s general attitude towards people / me. Are they generally agreeable? Do they take time to think over my statement, find examples that match what I’m saying? Do they try to speak over me? Are they curious about my opinions? How much time do they have? Separate from rapport (and my rapport differs with different interviewees), there’s a strong sense of spaciousness in some interviews, while many feel like they’re more rapid-fire exchanges of ideas. I often end up talking more in the agreeable / spacious interviews.\n\n- Participants differ in how much they want to talk in response to one question. I tended to not interrupt my interviewees, though I think interrupting more would have been a good thing to do. 
(“I’m sorry, this is very interesting, but we really need to get to the next question.”) That meant that for participants who tended to deliver long answers, I had fewer chances to ask questions, which meant I often engaged less with their previous responses and tried to move them on to new topics more abruptly.\n\n- I make a lot of agreeable sounds, and try to rephrase what people say. People differ with how many agreeable sounds they make during my speech as well, and how much they’re looking at the camera and looking for cues.\n\n- I tended to adjust my talking speed to the interviewee somewhat, but usually ended up talking substantially more quickly. This made my speech harder to parse because of all the “like”s that get inserted while I’m thinking and talking at the same time. (I don’t think I realized this at the time; it’s more obvious when listening back through the interviews. I’ve removed a fair amount of the “like”s in the transcripts because it’s harder to read than hear.) Generally, I found it useful to try to insert technical vocabulary and understanding as early as possible, so researchers would explain more complicated concepts and be calibrated on my level of understanding. I did somewhat reduce speaking speed and vocabulary when speaking with interviewees whose grasp of English was obviously weaker, though in those cases I think it’s maybe not worth having the interview, since I found it quite hard to communicate across a concept gap under time and communication constraints. (These concepts are complicated, and hard enough to cover in 40m-60m even without being substantially limited by language.)\n\n- When I’m listening to these interviews, I’m often like: Vael, how did you completely fail to remember something that the interviewee said one paragraph up, what’s up with your working memory? And I think that’s mostly because there’s a lot to track during interviews, so my working memory gets occupied. I often found my attention on several things:\n\n - Trying to take on the framework of their answer, and fit it into the framework of how I think about these issues. Some people had substantially different frames, so this took a lot of mental energy.\n\n - Trying to figure out what counterpoint to respond with when I disagreed with something, so – fitting their answer into a mental category, flitting through my usual replies to that category, and then holding my usual replies in mind for when they were done, if there were multiple replies lined up.\n\n - Trying to figure out whether I should reply to their answer, or move on. One factor here was whether they tended to take up a lot of talking space, so I needed to be very careful with what I used my conversational turn for. Another factor was how much agreement I had with my previous question, so that I could move on to the second. A third factor was tracking time– I spent a lot of time tracking time in the interview, and holding in mind where we were in the argument tree, and where I thought we could get to.\n\n - If they’d said something that was actually surprising to me, and seemed true, rather than something I’d heard before and needed to reformulate my answer to, this often substantially derailed a lot of the above processing. I then needed to do original thinking while on a call, trying to evaluate whether something said in a different frame was true in my frames. 
In those cases I usually just got the interviewee to elaborate on their point, while throwing out unsophisticated, gut-level “but what if…” replies and seeing how they responded, which shifted the conversation towards more equality. I think thinking about these points afterwards (and many more things were new to me in the beginning of the interviews, compared to the end) was what made my later interviews better than my earlier interviews.\n\n - Trying to build rapport / be responsive / engage with their points well / make eye contact with the camera / watch my body language / remember what was previously said and integrate it. This was mostly more of a background process.\n\n- Conversations are quite different if you’re both fighting for talking time than if you’re not. Be ready for both, I think? I felt the need to think and talk substantially faster the more interruptions there were in a conversation. I expected my interviewees to find the faster-paced conversations aversive, but many seemed not to and seemed to enjoy it. In conversations where the interviewee and I substantially disagreed, I actually often found faster-pace conversations more enjoyable than slower-paced conversations. This was because it felt more like an energetic dialogue in the faster conversations, and I often had kind of slow, sinking feeling that “we both know we disagree with each other but we’re being restrained on purpose” feel in the slower conversations.\n\n- My skill as an interviewer at this point seems quite related to how well I know the arguments, which like… I could definitely be better on that front. I do think this process is helpful for my own thinking, especially when I get stuck and ask people about points post-interview. But I do read these interviews and think: okay, but wouldn’t this have been better if I had had a different or fuller understanding? How good is my thinking? It feels hard to tell.\n\nContent analysis\n\nI have a lot to say about typical content in these types of interviews, but I think the above set of interviews is somewhat indicative of the spread. Hoping to have more information on these eventually once I finish sorting through more of my data.\n", "url": "n/a", "docx_name": "README.docx", "id": "66aca99e081cf549d6df65618054028e"}