diff --git "a/ai_tech_tu_delft.jsonl" "b/ai_tech_tu_delft.jsonl" new file mode 100644--- /dev/null +++ "b/ai_tech_tu_delft.jsonl" @@ -0,0 +1,43 @@ +{"id": "f1b42dca5b95bb1c4e866db1ca1d6c64", "title": "Sven Nyholm: Responsibility Gaps, Value Alignment, and Meaningful Human Control over AI", "url": "https://www.youtube.com/watch?v=cMAYhiMJ4k0", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "good\nso the recording is on so today's\nspeaker is\nsven newhom he's an assistant professor\nof\nphilosophical ethics at the utah\nuniversity\nhis main research focuses on ethics of\ntechnology ethical theory and history of\nethics\nso he has written on many things i think\nthe first time i\nsaw him uh give a talk it was on kant's\nuh universal law i was in\nironia that was seven years ago but he's\nnow mainly writing on many different\nthings self-driving cars humanoids\nrobots a dominus weapon system\nuh if you want to read more about his\nideas\ni can really recommend this roman and\nlittlefield's book\nhumans and robots uh\nso it's a really great book um today's\ntalk is titled responsibility cap's\nvalue alignment and meaningful human\ncontrol over ai\nand i would like to give the floor to\nyou as twin\nwell thanks a lot harman now and\nuh yeah i mean i am going to talk a\nlittle bit about meaningful human\ncontrol but mostly about the first\ntwo topics because i uh i mean i have a\npapers of people that\nare interested in reading a written\nversion of this uh\nplease feel free to get in touch with me\nand i would very much appreciate any\nfeedback people might have so\njust send me an email can you now see my\nslides yes\ngreat thanks okay so responsibility gaps\nvalue alignment and meaningful human\ncontrol over ai\nuh we're gonna start with lisa doll\nuh who as you will probably know i lost\nuh\nagainst the computer program alphago uh\nthey played five games and he actually\nwon\none of them but the other four games\nwere won by the\ncomputer program that had been training\nagainst itself\nand so although there was a human who\nwas sort of moving around the pieces on\nthe board\nthe guy on the left there i mean he\ndidn't understand the strategies\nof alphago and so he was a little bit\nlike the person in\njohn searle's uh chinese room experiment\ni mean so\nif you know about that experiment a\nthought experiment\nuh john searles imagine someone who gets\ninstructions about how to put together\nmessages in chinese and without knowing\nchinese the person can do that but they\nwouldn't know the meaning\nof those messages and this person who is\nkind of playing go against\nthe world champion it's a little bit\nlike that i mean he's\nmoving the piece around on the\nsuggestions from the computer program\ndoesn't know why exactly doesn't\nunderstand the strategy\nbut then the computer program won the\ngame listed all incidentally retired\nafter this\nhe didn't want to play go and morph they\nfelt meaningless\nanother case that you may have heard of\nuh quite different exactly two years\nlater\nso lizadol was beaten in uh by alphago\nin 2016 in march\nin march of 2018 uh the first time\nuh it happened that a person was hit and\nkilled by a self-driving car\nso there's a pretty gruesome video of\nthis and so the safety driver\ndid not react in time uh nor did the ai\nsystem in the car\nuh it classified the person first i\nbelieve it's a road sign then as a bike\nbecause she was walking with a bike then\nas a person then i think it's switched\nback and forth between classifying her\nas a 
bike and a person that just an\nappropriate action was not taken\nthe safety driver did not react in time\nand the woman\nelaine herzberg was hit and killed and\ndied on the way to the hospital\nuh some people worry about not having\ncontrol not understanding\nai systems uh some people even say that\nif we start using ai systems\nto create what they call killer robots\nautonomous ai driven\nweapons systems this might be another\ncase where we lose control\nwe don't understand what's exactly is\nhappening in these systems\nand so we cannot be held\nappropriately responsible but people\nshould be held responsible for\nthings that are both good and bad\noutcomes\nso we might have uh problems with what i\nwill call\nwell it's not just me it's a commonly\nused term responsibility gaps\nbut what i will do uh is to\ntry to identify sort of broad four broad\nclasses of responsibility gaps uh\nthis is the the fourth one i think i\nhaven't seen people talk about before\nand i would relate those types of\nresponsibility gaps to the issue of\nvalue alignment and then if there's time\ni will\ntalk a little bit about meaningful human\ncontrol but it's all going to be about\nmeaningful human control\nindirectly and we can discuss that more\nduring the q a\nso don't worry if you wanted to talk\nabout meaningful human control\nthat's going to be lurking in the\nbackground throughout the talk\nokay so i should first say what is meant\nby responsibility gaps more generally\nuh and it doesn't have to be anything to\ndo with ai it can be\nuh with large groups of people are doing\nthings together\nuh they can be good or bad effects but\nuh it could\ncan happen that maybe it's the culture\nof the\norganization uh maybe it is some other\ngroup level effect that means that it's\nvery hard to find some individual person\nor maybe even group of persons who can\nbe held responsible but there is some\noutcome\nthat you think it would be appropriate\nto hold somebody responsible for\nand so there's a responsibility gap and\nso related to ai so if there's an\nai system bringing about an effect and\nit seems that it's a good and bad effect\nyou want to hold someone responsible\nbut you can't find anyone who it's\nappropriate to hold responsible\nthen we have a responsibility gap\nokay so in order to get to my four\nkinds i'm gonna use two general\ndistinctions\nfrom a more generally fl a general\nphilosophy of responsibility\nthe first distinction is between what i\nwill call\nnegative and positive responsibility\nso negative responsibility i mean\nsomething bad has happened or someone\nhas\ndone something bad and we want to blame\nthem or punish them\nnegative uh or something good\nhas happened and we want to praise them\nor reward them\nthat's another way of holding someone\nresponsible giving them\ncredit to credit where credit is due and\nthat is what i mean by\npositive responsibility another\ndistinction is between\nbackward-looking and forward-looking\nresponsibility\nso uh after something has happened uh so\nsomeone has\nbeen injured or some good outcome has\nsomeone has been saved let's say\nwe want might want to blame or or praise\nsomeone for what has\nhas happened but we might also think\nthat\ncertain people have responsibilities to\ntake precautions to avoid\ncertain good outcomes or perhaps to\npromote certain good outcomes\nso looking at at the future uh who is\nresponsible for making sure that things\ngo certain ways\nrather than others that's what's meant\nby forward-looking responsibility\nso if you 
have these two distinctions\nand you're\nthinking about responsibility gaps you\ncan create a sort of classification\nmatrix that would look something like\nthis so\nyou could have responsibility gaps that\nare\nbackward looking and negative uh\nforward-looking and negative\nor backward-looking and positive or\nforward-looking\nand positive so those are the four\ncancer responsibility gaps i want to\ntalk about\nand let me start by just kind of mapping\nthis on to some of the existing\nliterature\ni mean as i said i have a paper version\nof this and i go into some more detail\nabout how to kind of\nmap the territory here but here are just\nsome examples\ni mentioned killer robots autonomous\nweapon systems earlier\nand there's a famous article by robert\nsparrow that maybe a lot of you have\nread\nwhen he argues that there might be\nresponsibility gaps related\nto them and that discussion is about\nbackward looking negative responsibility\ngaps after something bad has happened\nit's unclear who can be uh blamed\nnow and this is pretty much all the\ndiscussion about responsibility gaps\nis almost all about backward looking\nnegative responsibility gaps so john\ndiana her for example has a paper about\nwhat he calls retribution gaps who\nshould be punished if there's something\nbad happens\nand we can't find a appropriate person\nuh in a recent forthcoming paper by\nuh one of your one of your colleagues\nphilippo santorini the\nceo together with uh julio mccachey\nthey talk about what they call\nculpability gaps\nmoral accountability gaps public\nofficial counter accountability gaps\nthose are all backward looking negative\nresponsibility gaps\nwhat about positives forward-looking\nresponse sorry\nwhat about backward-looking positive\nresponsibility gaps well\nuh john dana her and i have written a\npaper\nabout uh the automation of work and it\ncould be that if more and more\nwork tasks are\nmade outsourced\nto ai systems and robots\nthen maybe the room for human\nachievement in the workplace is becoming\nbecoming smaller and smaller and so\nthere might be good outcomes\nuh so there might be an ai system that\nidentifies something as a cancer or\nsomething like that because that's\nwell that's not a nice outcome but it's\nnice that there's a diagnosis\nmaybe the ai system can even recommend\nthe treatment uh\nso that something good may have happened\nuh because the disease has been found in\na treatment has been recommended\nbut perhaps no human can sort of claim\ncredit because it was all done by\nmachine learning and uh in a way that's\nopaque to\npeople involved there might be kind of a\ngap in achievement\nuh what about forward-looking\nnegative responsibility gaps well in the\nbook that\nhermann very kindly recommended earlier\nhumans and robots\ni talk about what i call obligation gaps\nand so\nlet's say that self-driving cars are\ndriving and it's it should\nnot hit anyone uh but it cannot be held\nresponsible if it were to hit someone\nand so after the fact there might be a\nresponsibility gap\nuh of a backward looking kind but if\nthat's the case there would seemingly\nalso be a forward-looking responsibility\ngap because\nit it's not gonna be true that if it\nhits someone\nit can be blamed and perhaps the person\nriding in the car can also not be blamed\nso there's a worry that there's a gap in\nterms of who exactly\nis obligated to make sure that no one is\nhit\ni already mentioned felipo and julio and\nthey have another\ngap that they call active responsibility\ngaps and i 
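The two-by-two matrix he describes, reconstructed here from the spoken description (the slide itself is not in the transcript; the entries are the examples he maps onto each cell):

                        Negative (blame, punishment)         Positive (praise, credit)
    Backward-looking    culpability / retribution gaps       achievement gaps
                        (Sparrow; Danaher; Santoni de Sio    (Danaher & Nyholm)
                        and Mecacci)
    Forward-looking     obligation gaps (Nyholm); active     the 'empty box': who should bring
                        responsibility gaps (Santoni de      about good future outcomes, e.g.
                        Sio and Mecacci)                     AI value alignment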
Now, the box that I am most fascinated by, and that I want to talk most about in this talk, is the last one. As far as I know, people in the literature have not discussed this type of responsibility gap: who should make sure that something good happens in the future? Who is responsible for that?

Before I get to that, let me talk about two asymmetries that I think there might be in terms of how willing people are to fill these gaps by taking responsibility, by stepping forward and saying 'okay, I will be responsible'. First of all, I think people are quite willing to take responsibility for good things that have happened in the past, but quite unwilling to take responsibility for bad things that have happened in the past. That is one asymmetry we have to deal with: people might want to take responsibility for good things that have happened even when that is not justified, because they didn't actually do anything very impressive; whereas if people have done something bad and deserve to be punished or blamed, they are usually pretty unwilling to be held responsible. The other asymmetry concerns forward-looking versus backward-looking positive responsibility. People, as I said, are quite willing and eager to take responsibility for good things that have happened in the past; however, when it comes to making sure that things go well in the future, people tend to be much less willing to take on responsibilities. Both of these asymmetries, I think, have to do with the costs involved in taking responsibility for something. If you take responsibility for something good that has already happened, there are just things to be gained: you get praise, you might get rewards, and so on. But all the other responsibility gaps involve taking on certain costs. Even taking on the responsibility of making sure that good things happen in the future can, for example, involve an opportunity cost, because you have to work to make sure the good outcome is achieved, thereby possibly missing out on other opportunities, for example to promote your personal interests. So it is pretty hard to fill these responsibility gaps by just expecting people to step forward and take responsibility.

Now, what are the other reasons why responsibility gaps arise? It is quite common to say that AI systems and robots can become agents of some sort, and if these systems, these agents, are acting in an autonomous, independent way, and humans are not able to control or predict what they are going to do, then that might be one reason to think there are responsibility gaps. I mentioned Robert Sparrow; he gives that kind of argument when it comes to military robots. Another reason is the problem of many hands. To make one AI system (or some other thing) work, it can be required to have lots of different people involved, performing small tasks and parts, and everyone might have a little bit of responsibility, but it might be unclear who bears the most responsibility. The responsibility might be watered down and spread out too much over too many people, and therefore there might be a gap.

And lastly, and this is what I want to talk most about: not only do people shy away from taking responsibility for making sure that future good outcomes are achieved (this has to do with that empty box, down in the corner, that I want to fill with the new kind of responsibility gap), it is also the case that common-sense morality does not postulate positive responsibilities to create a good future that are as strong as the responsibilities it postulates to avoid harming people. This is discussed, for example, by Julian Savulescu (at the bottom there) and Ingmar Persson (at the top) in their book Unfit for the Future, where they worry about how common-sense morality might undermine our abilities to meet the challenges of the modern world. They argue that we have developed, throughout human evolution, certain attitudes, and one of them is that we feel, as I am calling it now, intuitively responsible much more for the harm that we cause than for any benefits that we fail to cause; and also that we feel we have moral duties and obligations not to harm, but not necessarily, in the same way, to benefit. In common-sense morality, people are permitted to promote their own interests, and not necessarily responsible for, or required, to promote the overall good.

Now, what about ethical theories? Is it the same there? What about, for example, utilitarianism, which says that we should promote the overall good? Well, actually, even utilitarians tend to shy away from postulating this sort of strong positive responsibility. Both Bentham and Mill, for example, say in their utilitarian theories that the overall good is best promoted if everyone gets to promote their own interests, so long as they don't harm other people. And a lot of contemporary consequentialists say that many of the values we have are to do with ourselves and our loved ones, so maybe the overall good is best promoted if everyone can promote their own personal interests, or the interests of their children and friends and family. So even utilitarian theories don't always postulate a strong responsibility to promote the overall future good. What about Kantian ethics? Well, Kantian ethics also has a duty to promote the good, but there too, Kant describes this as what he calls a wide, imperfect duty: you should have this as one of your aims, but you are not positively responsible for spending all your time promoting other people's good.

Now, I think there is an argument we can make for why this creates a problem for AI value alignment. I think this audience will all know what I am talking about when I say value alignment, but let me just remind you: this is the outcome where we create AI systems that promote human values, that fit with human preferences, and that further our interests. When people talk about value alignment, they often talk about trying to create more and more advanced AI systems that will hopefully promote human values and the good. So if it is true that common-sense morality and many moral theories involve strong responsibilities to avoid harm, but not necessarily strong responsibilities to actively strive for positive outcomes, and AI alignment is the creation of good future outcomes,
there might be a kind of responsibility gap of the forward-looking positive kind here. Again: if common-sense morality and many moral theories don't say that we have responsibilities to do good that are as strong as our responsibilities to avoid harm, and AI alignment is a way of doing good, then there might be a kind of responsibility gap in terms of who exactly it is that should be promoting the good of AI alignment.

Okay. So one way in which we can maybe fill in this missing box is to say that the AI alignment issue, especially future-directed AI alignment of systems that we don't yet have but hope someone will invent, might be one case where we have an instance of what I am calling a forward-looking positive responsibility gap: it would be good if we achieved this goal, but it is unclear who exactly has a positive responsibility to strive towards fulfilling it.

Okay, so what about the literature on how to fill responsibility gaps; could we somehow use that? I myself, in one paper from 2018 and also in the book that Herman mentioned, have argued that we can view humans and AI systems and other technologies as forming kinds of teams, where maybe the technology part of the team is not responsible, but the human parts of those teams are responsible. And who are these human-technology teams? Well, we can ask questions like: who has the ability to stop the technology and turn it on and off? Who has an understanding of how it works? Who is able to update, or request updates to, the technology? Who is able to monitor the technology? And so on and so forth. If we ask these kinds of questions and a lot of the answers point in the direction of a certain individual or team of individuals, then we can say that those are the ones who are collaborating with the technologies and who could be held responsible.

I already mentioned Filippo and Giulio; they have what they call a track-and-trace theory. I sometimes like to call their theory the package-delivery theory, because it sounds like you order something online and you are waiting for your package to arrive. The tracking condition is that the technology should track human values; it sounds like value alignment, it should align with human interests and goals. And the tracing... oh, I see that there is a typo here: it should be 'track and trace', not 'track and track'; that is what I meant. The tracing requirement is that there should be people who understand how things work, and also the moral significance of using the technology in question. So there is really an overlap between what I had suggested and what they are suggesting, because I had also asked who is able to understand, at least on a macro level, how the technologies in question work, and whose interests are being promoted by the technology. That is very similar to the track-and-trace conditions that they have, so I think of our respective theories as both being variations on the same theme, basically.

So those are attempts to fill responsibility gaps. But they wouldn't necessarily fill the forward-looking responsibility gap that I have been talking about, since that has to do with value alignment, and what Filippo and Giulio, for example, call tracking just seems to be value alignment; so there is a question of who should bring about that tracking, or value alignment.

Now, what about rethinking these responsibility gaps? Well, perhaps you could say, for example: okay, there are no strong responsibilities to promote good outcomes, but if you don't promote good outcomes, then bad outcomes might be brought about. So you can maybe twist things around and say that the failure to promote the good outcomes would bring about a lot of risks, and we do have a strong moral obligation to avoid creating harm; promoting the good outcome, the value alignment, could then be a means to the end of avoiding bad outcomes or certain risks.

Now, I think these are all interesting, but, as I say in the paper where I discuss this in some more detail, these are not so much practical solutions as theoretical idealizations: things that we can in theory do, but that in practice are actually quite hard. If you apply these ideas to cases like the Uber car hitting and killing that pedestrian, Elaine Herzberg, or AlphaGo winning the game: how exactly do these ideas apply to those cases? Who is responsible, who can get credit for winning a game, or be blamed for a self-driving car hitting and killing a person? There might still be a many-hands problem involved; there might be worries to do with, again, who should make sure that there is value alignment in the first place. And even if you are able, in theory, to reconceptualize certain promotions of good outcomes as avoidance of bad outcomes and risk management, that still doesn't necessarily point to a particular person or set of persons. So I think a lot of the suggestions in the literature about how to fill responsibility gaps are interesting, but they are really a kind of theoretical idealization: they point out relevant features of responsibility, but they don't necessarily give us a kind of checklist for how, in every case, we can easily fill responsibility gaps. And again, I especially worry about what I call forward-looking positive responsibility gaps related to AI value alignment.

Okay. So, I have talked about some different cases. There are bad outcomes, such as people being hit by self-driving cars, and there is what you might think of as a good outcome: an AI system winning a game. Of course, in that case it is not clear that it actually was a good outcome, because, as I said, the human world champion became so disillusioned that he quit playing Go; so I am not even sure that that was a particularly good outcome, or that there was any of what you might call value alignment in that particular case. I want to say that we shouldn't only talk about backward-looking negative responsibility gaps, which has been the normal thing to discuss in this literature; we should also look at forward-looking negative responsibility gaps, and at backward- and forward-looking positive responsibility gaps. Who exactly, if anyone, deserves credit if something good is done by an AI system? And, more importantly, who has a moral responsibility to make sure that we create good AI systems that align with human values, and not bad ones that clash with them? Since common sense and moral theory postulate much stronger responsibilities to avoid bad outcomes than to promote good ones,
we might have an interesting and confusing forward-looking positive responsibility gap here.

All right, so, thanks a lot again. If people want to read a written version of this, please feel free to get in touch with me; I would very much like to hear feedback on that too. But for now, I am very much looking forward to discussing this version of the material with you. Thanks.

Thanks, Sven, that was really interesting. It is really nice how you connected value alignment with the problem of responsibility gaps; I think that is a really nice, novel contribution as well. The floor is now open for questions. I see one question in the chat, by Folkert; if you are still here, please feel free to ask your question... I think he left, so I can read out the question, Sven, and maybe you can answer, and then we can all benefit. The question is: 'When you first mentioned negative and positive responsibility, I expected it to mean responsibility that something bad does not occur, or that something good does occur. How do you think that this distinction might help in thinking about responsibility gaps? It might be more useful than thinking about the result of taking responsibility, which seems to be the meaning you attach to negative and positive.'

Yes, so that sounds as if what I was calling forward-looking responsibility is what they were referring to. It is true that some philosophers, like Bernard Williams for example, have used the expression 'negative responsibility' to mean responsibility for avoiding the creation of certain bad outcomes. But a lot of people more recently have talked about forward-looking responsibilities to refer to that, and they have mostly then talked about forward-looking responsibility gaps of the negative kind. I see this as just an issue of terminology; I was certainly talking about the thing the person in question wanted me to talk about when I talked about negative forward-looking responsibility gaps.

Good, yes, that is how I understood it as well, thanks. I think the next question is from Maximilian Kiener.

Yes, thanks very much, thanks, Sven. I really enjoyed the talk, I think it was great. I would like to ask you about the definition and the significance of the responsibility gap, to understand your account a bit better. I think it is kind of assumed in the literature that the mere absence of responsibility is not enough to constitute a responsibility gap: we are not responsible for the movement of the planets, but there is no responsibility gap there. Some people try to describe what else we need, and I wanted to ask what you think is required: the absence of responsibility, and... what else? What is it that creates responsibility gaps? That is the question about the definition. The second part of the question is about the significance of those gaps. It strikes me that hardly anyone makes explicit why responsibility gaps, in particular the backward-looking negative ones, are problematic. You see the just war theorists, who say that responsibility is in some way a condition for just war; but in other contexts it is kind of puzzling why we should worry so much about it. Couldn't you just say: being responsible for bad things is a burden, and if we can move to an insurance scheme and get rid of the responsibility stuff, wouldn't that be a good thing? So why should we worry so much about it?

Okay, great, yes, great questions. Let me start with the first. Yes, I agree with you, of course, that the absence of someone who is, or could be, held responsible is not enough; there has to be something more, and I think there are two main things that could make it seem as if there is a responsibility gap. The first is that it seems that it would be right that someone should be held responsible, because the occurrence in question is unlike the movement of the planets, or a gust of wind, or something like that: if we have technologies, or groups of people interacting in some way, there seems to be some sort of agency there, and whatever happened didn't happen totally accidentally. So it may just seem right to us, intuitively speaking, that someone should be held responsible. The second is that it may also seem good that someone should be held responsible; it would be better. People in general sometimes find it extra bad if something happens and no one can be blamed. Now, some people don't like this idea, especially when it comes to punishment: they think that we may have a kind of desire to punish, so that we feel someone should be punished and that it would be good if someone were punished, but that it would actually be better to try to rid ourselves of these kinds of desires. Steven Kraaijeveld, for example, has a paper in which he tries to debunk the idea of responsibility gaps, because he argues that the worry is often driven by a kind of desire for punishment that we should try to rid ourselves of. But it can also be on the good side: something good happens and we think it would be nice if someone could be praised or rewarded, but we might not be able to find anyone who deserves it. So in general: it seems right, or it seems good, or maybe we just want someone to be responsible, but we can't find anyone who could justifiably be held responsible, at least to the right degree.

And then the question was whether we should go the Kraaijeveld route, so to speak, and accept that people are not responsible. I think that is sometimes just psychologically hard, because people have a strong disposition to want to find people to hold responsible. It can also be a way of achieving more control, and meaningful human control is one of the topics of this particular series: the sense that things happen in the world beyond anyone's control, with no one responsible... I personally think, and I discuss this in some other work in progress, that people actually value control; they want things to happen in a responsible way, such that someone can be praised or blamed, and it seems bad to a lot of people when things are out of human control. But yes, with individual cases I think we can always discuss whether it would actually be better not to try to hold people responsible. That is a bigger discussion.

Yes, I think you are right about that. If I can just say one more sentence: I agree with what you say, but I think these answers also
show that we are still unclear about the value of moral responsibility in its different senses. What is the value of people having moral responsibility for certain things? There are some, I think, very fundamental discussions about this in philosophy, but in AI ethics it seems to me that we still don't really understand it. But thanks very much, Sven.

Great; no, I agree, it is a bigger discussion, and as I said, I think it has something to do with the kind of value that people implicitly put on the idea of control. That can also sometimes go too far, and sometimes we maybe should give up some control and try to be less controlling; but I still think that people have a desire for control over certain parts of their lives, and responsibility has to do with that.

Okay, thanks. So next in line is Ilsa.

Yes, I basically wrote my question in the chat, saying that if you look in a more fine-grained way at backward-looking and forward-looking responsibility, you can see that it consists of several elements. So I was wondering if you also take this into consideration in your classification.

Yes, absolutely. That question can be taken in different ways. For example, there is a paper by Daniel Tigard where he discusses what Gary Watson and David Shoemaker call the different faces of responsibility: they talk about attributability, answerability, and accountability, for example. And Tigard says: well, you can also, I guess, formulate accountability, answerability, and attributability gaps. So that is one thing, in terms of what exactly we want: do we want to find someone to hold to account, do we want someone to provide answers, do we want someone of whom we can say that that was the person who did it? Another thing would be what exactly the criteria are: do they have to do with control, with agency, with resources, etc. So yes, within the broader classes that I talked about you can have many different things, and a lot of what people have discussed would still fall under the class of what I am calling backward-looking negative responsibility gaps, but some of it falls into the other classes. And again, I want to say that the whole idea of value alignment seems to fall into this otherwise seemingly unoccupied class of forward-looking positive responsibility gaps. But yes, this is a higher-order discussion of broad classes, and I agree with you, Ilsa, that within the classes there is a lot of further discussion, and you can sub-classify within the broad classes that I presented.

Okay, thank you.

Okay, and the next question is from Andrea, on the connection with free will.

Hi, Sven, thank you so much for this great presentation. My question is about free will. As we all know, free will is traditionally considered a crucial element for moral agency and moral responsibility, but recently I did an empirical study, and I was surprised to see that the large majority of the people who responded to the survey actually did not consider free will relevant for moral agency in the context of artificial moral agency. Could you give your opinion on this?

Yes, that is an interesting question. I myself think that agency is actually a very complex concept, with many different parts to it. An agent is able to perform actions, to plan things, to talk with other agents, to be held morally responsible, maybe has free will or doesn't, is able to think about sequences of actions and make plans for the future, to reflect on the past, and so on. I think these are all examples of things that we associate with what philosophers call agency. And because it is such a multi-dimensional concept, I think that means there could be different kinds of agents. I do think a lot of people think that human agents have what we call free will (of course, people mean different things by that expression). But when we think about other agents, let's say an insect, a housefly or a wasp or a bee: does it have free will? Well, maybe of some other sort, but it does seem to perform actions of some sort. In the same way, some AI system or self-driving car can go either left or right when it comes to a fork in the road, and it is very hard not to think in terms of 'it decided to go left'. Does it have free will? Probably not, but a lot of the things that we associate with agency could still be true: it is reacting to the environment in a seemingly intelligent way that seems to be in the service of a certain goal, getting to the destination, because if it had gone right, it would have been a detour and would have taken longer. So there seems to be some sort of agency there, but free will does not seem to be there.

Of course, free will is also one of these concepts that not only can be interpreted in different ways; even within a certain interpretation it is probably going to have lots of different aspects to it, and I don't know why some of those aspects couldn't be realized in an advanced AI system, even if something like being conscious of what you are doing, which I think we associate with free will, is something you probably wouldn't want to attribute to an AI system or robot. But then again, consciousness too has aspects that you may be able to replicate in a technology. So I think it is not simply that something either has agency, or consciousness, or free will, or not; it is usually better to say that these things have different parts to them, and technologies may be able to tick some of the boxes, not all of them, and have something like a kind of agency, a kind of quasi free will, a kind of beginning of consciousness. It is not really an on-off matter, although when we are asked whether the AI system could have consciousness, we are probably going to say no; does it have free will, we are probably going to say no. But I think when we sit down and think about it, we should be more nuanced and think: okay, it partly fulfills some of the conditions. That is the take I have on this.

Thank you so much.

Great. Arkady?

Yes, thank you, Sven, that was a great talk, fascinating. I like this angle on responsibility gaps in terms of positive effects. But I wonder if you see any potential, let's call them side effects, of emphasizing the need to address positive responsibility gaps. The specific context that comes to mind is that it might be tempting for, let's call them AI optimists, to appeal to this kind of reasoning when advocating for their technological, AI-based solutions to the world's problems. Do you see any parallels there, and if so, how can we proactively address that?

Yes, okay, thanks for that, that is a great question. In a way, I don't like my own conclusions, because I am sort of saying that it is hard to find someone to hold responsible for creating good outcomes involving AI, and the reason for that is that both ordinary ways of thinking, people's attitudes, and a lot of ethical theory focus much more on negative responsibilities. But we desperately need to make sure that people create good rather than bad AI systems, and we are aware of the risks that are involved. That is why I said, towards the end, that maybe we can try to rebrand some of the value alignment as AI safety and security: it is then really a matter of avoiding bad consequences rather than positively bringing about value alignment, and value alignment is treated more as a kind of means rather than an end. On the other hand, that is also not satisfying, because the reason you would want advanced AI systems is not as a means to something else, but as a goal. Think of AI systems that could detect cancer, the example that I think I mentioned, which Scott Robbins, one of your former colleagues at Delft, also discusses in a really interesting paper from 2019 in Minds and Machines. We want these good outcomes, so it isn't just a means of avoiding bad outcomes involving AI; if that were the only point, why would we want the systems in the first place? So yes, I don't really like my conclusions, but I just think these are implications of the way that philosophers usually talk about the difference between positive and negative responsibility, and also of the way that people in general feel that they shouldn't be held responsible for bringing about good outcomes for other people; that they should be able to focus on themselves, maybe their children, their friends and family, so long as they don't harm other people. This is what is sometimes called Mill's harm principle: you can do whatever you want as long as you don't harm other people. It might seem that if people are creating the systems, they should be striving towards the good; but if there is no responsibility there... I don't really have anything good to say other than that I don't like my own conclusions, but they just seem to be implications of the way philosophers have discussed some of these topics; if you put them together, you get what I was talking about, I think.

Great, thanks. Okay, for now the last question on my list is Art.

Yes, thanks for your talk, Sven. I am writing my master's thesis about the responsibility gap in a forward-looking way, so it really provides some useful insights for my thesis. I was wondering: I also read about the problem of many hands in Van de Poel, and he provides two solutions to alleviate the gap. The first one was virtue- and duty-based, which sounds more like the value-aligned solution that you also provided. But he also mentioned that we could use a procedural approach to distribute more responsibility, for example
with the procedures of John Rawls. So I was thinking: is that also a possible way to deal with positive forward-looking moral responsibility?

Yes... I am familiar with the virtue idea that Van de Poel discusses; I am not very familiar with how to apply the Rawlsian idea to this particular case, but that does seem promising. Something maybe a little bit related that I have been toying with myself is the following. As I said, if you are positively held responsible after something good has happened, then it is a benefit to you to be held responsible; and some people even think that negative responsibility can be a kind of benefit, because it shows that you are an important player. There is a kind of interesting case here: the Norwegian mass murderer Anders Breivik was first deemed to be insane and not responsible for his mass murder, but he actually wanted to be deemed sane and to be held responsible, because he wanted to be seen as a true fighter for his cause, and if he was deemed insane, he couldn't be viewed in that way. So he preferred being sent to prison and held responsible to being sent into care and regarded as insane. So even with negative responsibility, people might see it as a kind of positive thing. But in general, I think we can ask: okay, who is it that would benefit, for example, from AI technologies? Maybe, because they benefit, they should also be more responsible than people who benefit less. We can use a kind of fairness reasoning: the more you benefit from some technology or some practice, the more responsibility you should maybe take on for any outcomes, good or bad, of that practice or that technology. I think that would probably be in line, to some extent, with Rawlsian reasoning, and you could probably provide a kind of Rawlsian account of why that would be fair. So yes, I see potential there, although, while I have started thinking about this a little bit myself, I haven't looked specifically at the Rawlsian way of looking at it.

Yes, okay, thanks. My question was not really specifically about the Rawlsian way, but more about the fact that you can set out some procedures beforehand, think of scenarios of what can happen, and then distribute the responsibility beforehand. That sounded like a different way of approaching it than one based on duties.

Right, yes, indeed. And then you can ask: okay, if someone wants to do something in some domain, or develop or introduce some technology, potentially benefiting themselves through any income they might earn, etc., then perhaps the procedure should be that they also have to take on responsibility for anything bad or good that might happen. They could get the rewards for the value that they create, but they should then maybe also internalize any of the costs or possible problems. That might be fair in a sense. So yes, I think that is an interesting way of thinking about things.

Okay, thanks. Okay, the next question: Luciano.

Yes, thank you very much; it was really interesting and challenging to think about these different angles on responsibility gaps, and I really like the way you connected this to value alignment, for example the tracking condition being alignment; I see a lot of parallels, very nice. My question goes into this: when creating more and more AI systems, we also have what Mark Coeckelbergh was talking about, the problem of many things. We have the problem of many hands, the human side; but we also have, on the other side, different layers of AI agents that might interact with each other and make these kinds of collaborative decisions. So how do you see the problem of many things? On which of the boxes would it have an impact, and how serious do you think it is?

I think it is a very serious problem. I really do worry a lot about the many-hands-and-many-things problem here, especially with AI systems using a lot of data, different companies producing different parts of the system, different companies providing the data, and so on. There are just so many hands and things, and so much data, that it is almost impossible not to worry about a lot of responsibility gaps. So yes, I think this is going to create problems in all of my boxes, basically, because it is quite different from some more old-school technology where there is maybe one producer of the technology and then one user, and it is all kind of isolated. With AI it is so spread out: so many hands and so many things, as Mark Coeckelbergh was putting it. So yes, I am worried, and I think all boxes are implicated. And I completely agree with you; I think in my own earlier work I was more optimistic about the possibility of solving responsibility gap problems, but I maybe hadn't thought enough about this problem of many hands and things, and about how many different players and data sets and all sorts of things are implicated.

Great. If I may follow up on this; Herman, do I have time? Okay. When concluding your talk, you mentioned that these approaches, including your own and the one from Santoni de Sio and others, are of course not a checkbox exercise: it is not as if, when I tick all these boxes, there is no responsibility gap. But do you think that we can ever get there, to such a list of checkboxes? Or do you see this as context-dependent? Is it a goal to strive towards, in your vision?

Yes, in a way I think it is a goal, insofar as people really like it if they can distribute responsibility to people, because we feel more comfortable when someone is responsible for something. What I had done in my previous writings about this was, in a way, to provide a kind of checklist: if you tick most of these boxes, then you are probably responsible. The problem, though (and this was something I briefly mentioned, and then someone, Roos de Jong, wrote a whole paper kind of calling me out on it), is that you can have different people ticking the boxes on the list. Maybe someone understands the system; another person is the one whose interests are actually tracked; yet another person is the one monitoring the system in real time; another person or set of persons has some other relations. So you get the problem of many hands coming in again, from another angle,
even if you have a checklist, because different people may then tick the boxes. One solution would be to say that what people are responsible for are much smaller things than the overall outcome: maybe someone is responsible for one aspect of the outcome, someone else for another. But that is also a problem, or at least it is not satisfying, because people in general tend to like it when responsibility really is localized, so that there is one agent, or maybe one group of agents, who is most clearly responsible, rather than having responsibility be watered down and spread out across many people. If everyone is responsible, no one is responsible; I think that is the kind of attitude we have. So it would maybe be desirable, but it is hard in practice.

Yes, indeed, I completely agree; this diffusion of responsibility through all the layers is very complex. So thanks again, Sven, really interesting.

Great. I don't think there are any more questions, so now I can finally ask my own question. When you described responsibility gaps, you did that in epistemic terms: we don't know who is responsible, in a case where it seems appropriate that someone is responsible. Is that the main kind of worry that we have, or should have? Because you can also think of responsibility gaps in metaphysical terms, so that there should be someone responsible but no one is responsible; or in more moral terms, so that not the right person, or not the right collective, is responsible, and the distribution should change. Do you see those as three different and relevant problems, or do you think one is much more interesting than the others? What is your take on that?

Okay, that is a great point. In the written version I talk about what I call real and apparent responsibility gaps. I guess 'apparent' would be the epistemic kind: we think someone should be responsible, we don't know who, and so on; whereas the 'real' kind would be that there actually is no one who could justifiably or rightly be held responsible. It could of course also happen that we do hold someone responsible, but they don't really deserve it, so to speak. Especially, as I said, people are going to be quite willing to step forward and take credit for good outcomes, whether produced by an AI system or by a person; we are always eager to take credit for good outcomes. So it can happen that we think someone deserves praise, but really, from a kind of metaphysical point of view, they don't. So yes, I would agree with you; this would be yet another way of adding more complexity (I was trying to keep things simple, but you are right). The language I have been using in my written work was in terms of apparent and real responsibility gaps; perhaps it would be clearer to talk in terms of epistemic or metaphysical gaps, and so on. So I agree with you that this is another issue that we should be thinking about, absolutely.

Okay, great. Yes, so you introduced four distinctions, and everyone has their own distinctions that they like. But the thing that you called real and apparent, at least as I interpret it, seems to be a different thing. Sometimes there seems to be a responsibility gap, but in fact it was just an accident, for example, and if we look closer, in fact no one is, and no one should be, responsible. So a self-driving car causes an incident, but when we look at it, it is an accident in a moral sense; human beings can also be involved in accidents, and then also no one is responsible. Is that the kind of thing you are talking about, or is this again a different distinction?

Well, I actually think that is a really nice further distinction. I think you could subdivide the apparent responsibility gaps into ones where, as you say, it actually was an accident, in the sense that we know no one should be held responsible, and ones where we think someone should be held responsible but we don't know who. I can't remember exactly how you just drew the distinction, but as you were speaking, it seemed to me that you could take both of those and say that they are different ways in which there can be an apparent but not real, or actual, responsibility gap. There is no end to the possibilities in terms of how to subdivide; I actually think there is value in making a kind of taxonomy of all the different ways in which responsibility gaps can occur, because that helps to show how problematic this is, and how many different reasons for, and types of, responsibility gaps there can be. But yes, what you were describing just now seemed to me to be two kinds of reasons why there can be apparent responsibility gaps.

Okay, thanks. So, it is two o'clock, and our official time is up. People who still have time can remain in this session; if you have to go, Sven, that is also fine. Let me first ask: do you have some time for some more questions, if there are any? I know you are busy.

Well, first of all, before people leave, let me just say thanks a lot to those who were here, and for your very nice questions and comments. This is new material and I am very eager to discuss it with people, so I very much appreciate the opportunity to talk with you, and I would certainly be willing to hang around a bit longer if people have more things they want to discuss with me. I would be eager to hear any further comments or questions, and you can always email comments and questions to me as well.

It was really enjoyable listening to you, thanks. Great, thanks everyone; thanks, Sven, and a virtual, or an actual, applause. Thanks, thanks so much. So, I have another question myself, but if there are other people who have a question, please shout out... I will stop the recording now.", "date_published": "2021-04-28T19:20:35Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "b3567e2a6d2cf014315f99455c902f6c", "title": "Designing AI for Wellbeing (Derek Lomas)", "url": "https://www.youtube.com/watch?v=XaPVlGdj4Yk", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "I think we all know the recording is on; it has started recording, I think. Just as general information: Derek wanted to record the session for people who can't attend, so if there is someone who doesn't want to be in the recording, just switch off your camera.
So, as you might know, Derek is from my department; we are actually from the same place, the Human Centered Design department (that is the new name), in the Faculty of Industrial Design Engineering, doing research and teaching on AI for wellbeing. Is that correct? Right. He is going to show us some projects and research on how to design AI that is meaningful for people, and also trigger some reflections on controversial aspects of it. With that, I'll leave the stage to you, Derek, so you can say more.

Great, thank you. I will start my screen sharing; can you see it? Yeah? Okay, great. Thank you so much for the invite and the intro. This is a topic I care a lot about. When I came to Delft, my interests were around data-driven system design and figuring out what we want to be optimizing in those systems. I was a little reluctant to attach to the label of AI, just because the artificiality seems to exclude humans; but no, it's a big tent, and I'll be talking about that. So in this presentation I'll give a brief intro and then share a bit of my design and research journey. I'll share, not quite a design framework, but 'towards a design framework' for AI for wellbeing, and I am definitely open to comments and feedback as I develop it. And there are a bunch of design examples that will hopefully be useful for discussion, and we'll see where that leads us.

We are right at the beginning of all this. It has been going on since, what, the late 40s, if one counts cybernetics; and if the world doesn't collapse (which it seems like it is doing), in another 50 years we are going to have some pretty powerful systems out there. There is really a concern about how this is going to play out: whether the power of these systems is actually going to enhance our humanity or degrade it. You might be familiar with some of the work that has been going on in ethical AI; to summarize it real quick, a great recent paper by Floridi looked at some 47 different principles of ethical AI and distilled them down to five: basically, it should be good, and it shouldn't be bad; it should support human control; it should account for bias; and it should be explainable. But even within this, it doesn't really get at what AI should be optimizing for, and that is something I think is really critical. It is partially in the 'be good' part, but it is hard to operationalize.

So, we need to have AI systems that don't dehumanize us, that are good for our wellbeing. And one of the big issues is that intelligent systems by their nature optimize numerical metrics, but some of those metrics, like GDP or click-through rates, are more accessible than others, like human wellbeing, which is harder to measure, especially in real time, and for which it is hard to get a lot of data. Don Norman sent this out yesterday, so I thought it was relevant (this is in the context of the dumpster fire that is the United States of America at the moment). He is saying that we need to work together to build a long-term future; there are major issues around the world in terms of hunger, poverty, and racial prejudice; and then he says we need to get rid of GDP as a measure of success and replace it with measures of wellness and satisfaction. So that's great, Don; I really appreciate that. That sort of sets things up here. Don was my postdoc advisor, and he wrote the book The Design of Everyday Things. This notion of shifting from GDP as a measure of success is interesting and challenging.

So the main design challenge, as I see it, is to design these wellbeing metrics, these 'metrics for good', in a sense that makes them accessible to AI systems; and to do that, we need to translate our humanistic, felt values into numbers, and that is a tricky and fraught task. This is something that is economically important. Mark Zuckerberg, a couple of years ago, was making some changes to the newsfeed that were going to reduce revenues, and, wanting to prepare investors for that, said that 'we feel a responsibility to make sure our services aren't just fun to use, but also good for people's well-being.' So I think we can all agree that we can leave this to Mark Zuckerberg and he will figure it all out. That's a joke, because Facebook right now, one can very reasonably argue, is a serious threat to the stability of society and has not figured this out. But it is important, and even from the self-interested perspective of a business: you can make some short-term revenue gains, but if you are really doing something that is bad for people and bad for society, that is a long-term risk.

Just to do some level setting, here are some examples of AI (this is usually best approached from examples): the Facebook feed, Amazon recommendations, the Netflix queue, Google search, any online ad that you see, the fraud-detection algorithms that run any time you make a purchase, facial recognition, voice recognition. The autopilot, I think, is a great example of AI, because it was invented in 1914, so that's a good example. And then there is a piece of AI art that sold recently; the AI art market is still fairly small, but we'll see where that goes.

Oftentimes people try not to go too far into definitions of intelligence, but I really like Peter Norvig's definition (he is a research director at Google): 'the ability to select an action that is expected to maximize a performance measure.' A little bit arcane, but the basic idea is that to act intelligently, you need to measure outcomes; everything, in that sense, is quantifiable. Even from a human-intelligence perspective, Robert Sternberg talks about 'successful intelligence': essentially, stupid is as stupid does, and smart is as smart does. If you are doing something that is adding to the success of your system, then that is a smart move, and having a measure of your success is critical for this. Those measures of success, and the optimization of those measures, are really at the heart of intelligence.
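Norvig's definition is compact enough to sketch in a few lines of code. The following toy sketch is an editorial illustration, not anything from the talk: the action set, the outcome model, and the weighting parameter are all made up. It shows an agent that is 'intelligent' only relative to an explicit performance measure, which is exactly why the choice of that measure (engagement versus wellbeing here) is the design problem the talk is about:

```python
import random

# Toy action set for a hypothetical feed-ranking agent.
actions = ["show_article_A", "show_article_B", "show_nothing"]

def sample_outcome(action):
    """Stands in for the world: returns noisy (engagement, wellbeing) effects."""
    effects = {
        "show_article_A": (0.9, 0.2),
        "show_article_B": (0.5, 0.6),
        "show_nothing":   (0.1, 0.5),
    }
    engagement, wellbeing = effects[action]
    return engagement + random.gauss(0, 0.05), wellbeing + random.gauss(0, 0.05)

def performance(engagement, wellbeing, w):
    """The performance measure the agent optimizes.
    w = 0 is a pure click-through metric; w = 1 is a pure wellbeing metric."""
    return (1 - w) * engagement + w * wellbeing

def select_action(w, n_samples=1000):
    """Norvig's definition: select the action expected to maximize the measure."""
    def expected(action):
        total = sum(performance(*sample_outcome(action), w) for _ in range(n_samples))
        return total / n_samples
    return max(actions, key=expected)

print(select_action(w=0.0))  # engagement optimizer -> "show_article_A"
print(select_action(w=1.0))  # wellbeing optimizer  -> "show_article_B"
```

The same selection machinery produces opposite behavior depending on the metric; nothing in the 'intelligence' itself decides which metric is the right one.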
things and this\nnotion of shifting from GDP as a measure\nof successes is interesting and\nchallenging so the the main design\nchallenge as I see it is to design these\nwell-being metrics these metrics for\ngood in a sense that they are accessible\nto AI systems and to do that we need to\ntranslate our humanistic felt values\ninto numbers and that is a tricky and\nfraught task so this is something that\nis economically important so Mark\nZuckerberg a couple years ago was making\nsome changes to the newsfeed that we're\ngoing to reduce revenues and wanted to\nprepare investors for that saying that\nwe feel a responsibility to make sure\nservices aren't just fun to use but also\ngood for people's well-being and so I\nthink we can all agree that we can leave\nto Mark Zuckerberg and he will figure\nthis all out that's a joke because\nFacebook right now is a serious threat\nto the stability of society I think one\ncan very reasonably argue is not figure\nthis out\nbut it is important and even from a\nself-interested perspective of a\nbusiness you can make some short-term\nrevenue gains but if you're really doing\nsomething that's bad for people bad for\nsociety that is a long-term risk so just\nto do some level setting examples of a I\nusually this is best approach from\nexamples so we've got the fame here\nwe've got the Facebook feed Amazon\nrecommendations Netflix queue Google\nsearch any online ad that you see\nanytime you make a purchase the sort of\nfraud detection algorithms that are\ntaking place facial recognition voice\ndetects autopilot I think is a great\nexample of AI because it was invented in\n1914 so um that's that's a good example\noh and then this is uh this is a piece\nthat's sold recently the AI art market\nis still fairly small but you know we'll\nsee where that goes so oftentimes people\ntry not to go too far into the\ndefinitions of intelligence but I really\nlike Peter Norvig research director at\nGoogle's definition so he says the\nability to select an action that is\nexpected to maximize a performance\nmeasure so a little bit arcane but the\nbasic idea is that to act intelligent\nyou need to measure outcomes you know\neverything in that sense is quantifiable\nbut even from a human intelligence\nperspective Robert Sternberg also talks\nabout success intelligence that\nessentially you know stupid is as stupid\ndoes and smart is as smart does if\nyou're doing something that's adding to\nthe success of your system then that's\nthat's a smart move and you know being\nable to have a measure of your successes\nis critical for this and so those\nmeasures of success and that\noptimization of those measures that's\nreally at the heart of intelligence now\nI like to take things back you know to\ncybernetics I think cybernetics is a\nmuch more coherent design perspective\nthan artificial intelligence\nyou know conceptually theoretically and\nyou know this is from Norbert wieners\n1948 perspective on perception action\nfeedback loops this is applicable not\njust to digital or artificial systems\nwhich is why I like it so much it it's a\ngeneral theory of of governance in\nsystems including biological systems\nit's extendable to business systems I\nlike the notion of a continuous\nimprovement loop I think it's a very\nhelpful framework for for designers that\nyou want to assess and adapt so you're\nlooking for ways of measuring outcomes\nand then modifying your designs in\nresponse to those measures or maybe more\nhumanely you want to identify areas of\nneed and then you want to do 
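To make that assess-and-adapt loop concrete, here is a minimal Python sketch of a cybernetic improvement loop. The outcome function and the update rule are invented placeholders, not anything from the talk:

import random  # not needed here, but typical loops add exploration noise

# Minimal sketch of a cybernetic assess-adapt loop.
# measure_outcome is a stand-in for any real outcome measure
# (engagement, comfort, well-being); the quadratic form is invented.

def measure_outcome(design):
    return -(design - 0.7) ** 2  # pretend the best design value is 0.7

def assess_and_adapt(design, step=0.05, iters=50):
    score = measure_outcome(design)
    for _ in range(iters):
        for candidate in (design + step, design - step):  # try small changes
            if measure_outcome(candidate) > score:        # keep only improvements
                design, score = candidate, measure_outcome(candidate)
    return design

print(assess_and_adapt(0.2))  # walks toward the (pretend) optimum near 0.7

The point of the sketch is only the loop shape: measure an outcome, modify the design, keep what improves the measure.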
This means that even the design of a chair can be set up as a cybernetic system, if you are gathering feedback on the outcomes and making modifications in response. It's very generalizable, but also quite specific, and so I like it. We wrote a paper about this recently for TMCE, 'Designing Smart Systems: Reframing Artificial Intelligence for Human-Centered Designers'.

Just to tell you a little about where I'm coming from: when I started my PhD at Carnegie Mellon, in the Human-Computer Interaction Institute, I had just gotten into game design for learning, looking at the potential of creating software for low-cost computers that could have an impact in developing countries, and at how to scale digital education by making it engaging and effective. The engagement part was critical. I was really excited about the science of fun; I wanted to be a funologist, and I am a funologist, combining that with learning science and AI. My notion of funology at the time involved imagining all these sensors, EEG, posture sensors, all these different ways of measuring fun, because that is what we were trying to optimize. We had games like this Battleship Numberline game, where you're trying to blow up targets on a number line: you're given a fraction and you have to find a hidden submarine. We made all kinds of games for mathematics and released them on app stores and online, some forty-five different games on different platforms, and the big question around all of them was whether they were working, and how we could measure whether they were fun.

I came to learn about A/B testing in online products. Some estimates are that over 10,000 A/B tests a day are run by the big tech companies: they take different design variations, randomly assign them to users, and see which design has the best effect on the outcomes. Those evaluation criteria, the outcomes, can be anything from revenue to click-through rates, whatever is available. I thought it was a little sad that so much technology and effort was put into improving online advertisements instead of improving educational outcomes. And what I found was that, in my desire to measure fun, it didn't take all those sensors. All it took was a measure of how long people were voluntarily playing. This measure of engagement was really what we wanted in the first place with these educational games: we wanted students voluntarily engaging with and playing them, and it turned out to be a great measure of motivation and a great measure of fun.

Mihaly Csikszentmihalyi is famous for his flow theory: the idea that when your abilities and the challenge in your environment are balanced, you can achieve flow. The implication is that things shouldn't be too easy, or you get bored, and not too hard, or you get anxious; when there's just enough challenge, you enjoy it. So we had the hypothesis that in our games a moderate level of difficulty would produce maximum motivation in the kids playing. We had a few thousand players every day, so if we randomly assigned people to different levels of difficulty, then somewhere in the middle we would find that optimal difficulty with maximal motivation, maximum fun. This is the Battleship Numberline game: you either type in a fraction or click where you think a fraction is; this is how it looked back then, and this is what it looks like now. We're running these experiments again; we're collaborating on an open-source A/B testing platform for educational software called UpGrade, funded by the Bill & Melinda Gates Foundation and Schmidt Futures. That's a current project I'm working on.

Back then, to create the different variations of game difficulty, we varied different design factors: if we made the target bigger, it was easier to hit; if we made the time limit longer, there was a better chance of answering successfully; and certain items were easier than others. We ran a super-experiment with some 13,000 different variations, a 2 by 9 by 8 by 6 by 4 by 4 factorial, and about 70,000 players, to test this hypothesis. According to the theory, at moderate difficulty we should see maximum motivation. What we found, when we created a model of the difficulty contributed by all these factors, was that pretty much the harder we made the game, the less time people played, and the easier we made it, the more time they played. That was a bit of a shock. We ran a number of follow-up experiments and found that novelty worked really well: when we varied the amount of novelty in the game, that did produce the inverted-U-shaped curve. So we said: it's not 'not too hard, not too easy'; it's 'not too hard, not too boring'. People like things to be easy if they're new; they like to succeed.

All of this illustrates how we can use experimentation at scale both to directly improve the software and to improve the theory underlying it. We use learning theory to design these games, bring them to millions of users, and then run experiments that either have a direct applied outcome, like a normal A/B test, or generate new theory, like this 'not too hard, not too boring' idea. And this generalizes beyond education: we can use theory to inform designs, and then, at scale, run experiments that create generalizable theories about the effects of designs. We had about five thousand players, subjects, a day and could run thousands of these experiments every year, but it is difficult to set them all up, so we started thinking about an AI-assisted science of motivation. That's a good point to pause: if you know '21 Lessons for the 21st Century', 'Sapiens', or 'Homo Deus', Yuval Noah Harari has been talking a lot about the risk of hacking human beings. When theory and practice understand how we operate better than we do ourselves, when some other actor can predict how we're going to act, they are able to manipulate us, against our interests. In our case, we were trying to do this in the service of education.
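As a rough sketch of how this kind of factorial experiment can be set up and analyzed; the factor names, levels, and playtime data here are invented stand-ins for the real telemetry:

import itertools
import random
import statistics

# Invented design factors and levels, standing in for the real ones.
factors = {
    'target_size': [1, 2, 3],
    'time_limit_s': [5, 10, 15, 20],
    'item_bank': ['easy', 'mixed', 'hard'],
}
conditions = list(itertools.product(*factors.values()))

def assign(player_id):
    # Deterministic random assignment of a player to one condition.
    return random.Random(player_id).choice(conditions)

# Fake telemetry: (condition, minutes_played). Real data would come from logs.
log = [(assign(pid), random.uniform(1, 20)) for pid in range(10_000)]

# Crude per-factor analysis: mean playtime at each level of target_size.
for size in factors['target_size']:
    times = [minutes for cond, minutes in log if cond[0] == size]
    print(f'target_size={size}: mean playtime {statistics.mean(times):.1f} min')

In the real study the playtime column would come from game logs, and the analysis would model all factors jointly rather than one at a time.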
But that is probably not the intention behind many of the experiments taking place out there, so this is a pretty serious risk. This was our effort to embed automated scientific experimentation at scale, in a paper called 'Interface Design Optimization as a Multi-Armed Bandit Problem'. The idea was that we have this feedback loop where the online game is used by thousands of players, and we use a simple reinforcement learning algorithm, this multi-armed bandit approach, to search the design space, figure out which design improvements are most effective, and automatically increase game engagement. This actually worked, and it worked pretty well. But then we started getting phone calls saying there was a bug in the game, and when we checked what the algorithm had done, it had turned the game into something with no real educational value: all it did was dramatically increase the size of the targets, having decided that was what generated the most engagement. The problem for us was that yes, we were trying to increase engagement, but we were also trying to improve learning outcomes, and that wasn't incorporated in our optimization metric. So this showed that it is really quite easy for AI to optimize for the wrong thing. It also showed that it was a little silly to try to create a closed-loop system that excluded people, the designers. We did have a dashboard, so technically there was a human in the loop insofar as we could look at the numbers, but there wasn't a human in the loop monitoring the experience of what was being optimized. Making sure there is a bridge between that felt experience and the quantitative optimization is really important. And finally, there really is a need for continuous alignment between the objects of optimization and the underlying values behind the system.
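Here is a minimal epsilon-greedy sketch of the multi-armed bandit idea from that paper. This illustrates the general technique, not the paper's exact algorithm: each arm is a design variant, and the reward is an engagement measure such as minutes played.

import random

class EpsilonGreedyBandit:
    # Minimal epsilon-greedy bandit over design variants (illustrative only).

    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms   # running mean reward per variant

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))    # explore
        return max(range(len(self.counts)), key=self.values.__getitem__)  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental running mean of observed reward for this variant.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

The cautionary tale above is visible right in the reward signal: if the reward is engagement alone, a bandit like this will happily converge on a variant, such as enormous targets, that has no learning value; in practice the reward needs to combine engagement with a learning-outcome measure.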
This brings me to the design framework I'm working on around AI for well-being. Delft has really been a center... yes, please, sorry, we can address one question that was in the chat first, if it's still relevant; I thought this was a nice moment, as we move into the framework, if someone has questions.

When you were talking, I was noting that you were making the jump from education: fun equals engagement, motivation equals playing time, and I was worried you were going to optimize playing time and run into problems; but that was precisely the point you were making. So no, no further question, you already answered it.

Yeah, I kind of set myself up there. You were thinking about exactly the right things. Feel free to throw in more questions as we go, and whenever I come to a pause I can address them.

There is someone else who would like to ask a question. Yes, thanks, Derek. I'd like to ask about one of your introductory slides, where you gave one definition of intelligence. That definition seemed to really focus, and you came back to this point several times, on the idea of maximizing a quantifiable measure of success. I'd like to discuss this a bit more and hear your thoughts in more detail, because it seems to me that there are many situations where we are not able to put a quantifiable measure on such a thing as well-being. And if we talk very narrowly about intelligence as focusing on this kind of aspect, we may be missing many other dimensions of what well-being means to different people and societies in a given context. Maybe you will be addressing this in the remaining parts of your presentation, but I'd like to hear your thoughts. Thank you.

Yeah, I'd say that is really the story of the presentation, so I think this would be a really good question to return to at the end, because it is central to the challenge. Sure, sounds good.

Yes, Derek, about the example you ended with, where the game optimized something you really did not want to optimize: I think that is a typical example of what they call AI reward hacking, right? You have a reward function, you optimize the heck out of it, and you get something that was not the intention. So I was trying to understand your last slide. What are you saying: should we keep optimizing the reward functions and try to get closer and closer to what we really want, or are you saying this is an impossible task? You said at the beginning it is not only a tricky but a fraught task, and that we should never get the human out of the loop. To make it a little black and white: where are you?

I would break apart the options you gave there. I think it is really important that we try, and that we don't give up the effort to quantify some of our most deeply held values, even though it is a serious risk. That's part of why I think it's important to do this work in an academic context: I can imagine there being proofs that this is a bad idea, but I don't think we'll get there until we try. What I'm more concerned about is that if we abandon the effort to measure what we treasure, these systems for optimization are so powerful that they will be used on values we don't care about as much. An example is test scores versus well-being in education. We measure test scores; we don't measure well-being. Well-being is an input to education, but it is also an output of education, and because we don't measure it, it is almost invisible to large institutions. Large institutions are unable to take institutional action to improve well-being without measures and awareness. At least, that's the argument I'd make.

It's an interesting point, if I may, because this is not necessarily tied to artificial intelligence, machine learning, or any recent advance of technology: this has been a problem for society for a much longer time. We quantify stuff, we know we are limited by it, like your GDP example, but hey, there's nothing else, we live with it, and we have a certain resilience in how we interpret that number. You would hope.

On reducing deeply held values to just numbers: I definitely sympathize, and I have had a lot of really nice arguments with people about whether we should measure these things we value at all, and what the alternatives are. Again, my perspective is that it would be dangerous not to try, and I think the solution is not just having humans in the loop but having this continuous-alignment methodology, really moving away from autonomous systems. I think autonomous systems are an illusion; I'd be interested in counterexamples, but I think they are such a profound illusion that they create a conceptual barrier to the proper involvement of people, simply because it is interesting that things can be done without people, even when the involvement of people would make the system work better. That's why I say we're not going for artificial intelligence here; we're going for smart systems. If the involvement of people and algorithms makes the system work better, that is always preferable to a purely autonomous system.

Okay, thank you very much, Derek. Time for the next question, a complex one, or at least a long one.

Indeed, yes. Thanks a lot for the talk so far. My question is about your connection to flow, from Csikszentmihalyi. My understanding is that flow is a dynamic, emergent property, and it seemed your hypothesis was based on the assumption that humans would stay static; perhaps I misunderstood. Your machine-learning example actually put some dynamics in, making the game more difficult as people progressed. So I was wondering whether there would be a different underlying hypothesis for changes in task difficulty and all these elements of your game, something more related to the dynamics.

Yeah. I didn't include a lot of this background, but part of what I was responding to was a study Csikszentmihalyi did quite recently that tested this hypothesis about moderate difficulty and enjoyment with chess games and showed a very clear inverted-U-shaped curve. My conclusion out of all of this is that difficulty is not the same as challenge. Difficulty, defined as the probability of failure, is not actually what he means by challenge: going back and looking at what he has written in past work, challenge has a lot more to do with novelty, suspense, and even choice than with difficulty. That's one piece. I also don't really buy the balancing approach as a design method. My current leading hypothesis around flow, which is a beautiful concept, really, and I share your reluctance to simplify it to just difficulty, is flow as whole-mindedness: when something is the only thing occupying your attention and behavior, when everything is harmoniously coherent, that is the underlying nature of the flow state, and Csikszentmihalyi actually talks about that extensively. Part of the value of his challenge-and-ability model is that it is more measurable, and at the very end of the presentation, which I probably won't have time to get to, I talk about one approach for measuring that deep engagement with EEG and some work I'm doing around that. But entrainment is what I would position as the core theory of flow.

I have a suggestion, from personal experience: you should measure the amount of time between when you need to go to the bathroom and when you actually go to the bathroom.
Thank you; what is that? Excellent, I like that! Okay, on that note I'll keep going, but keep throwing in questions.

So here's this basic design framework. Delft has been a great place for this: Pieter Desmet and Anna Pohlmeyer have been promoting and developing the theory of positive design, which combines design for virtue, design for pleasure, and design for personal significance as a designerly approach to well-being. They primarily use the PERMA model of well-being, but they're pretty open to the various measures and models. In addition to positive emotions, engagement, relationships, meaning, and accomplishment, there are obvious things like physical health, which includes factors like sleep, nutrition, and exercise, and mental health; there are many factors that affect well-being. One interesting thing about well-being is how unitary a construct it becomes in terms of subjective well-being: when you feel good, that really is the core of it. Of course there are things you can do that make you feel good momentarily but not in the longer term, and that is the whole challenge of human life, in a way, but it is incredible how much gets integrated into this singular notion of feeling good.

So again, this idea of cybernetic loops and smart systems: the algorithm for AI for well-being is that we need to assess well-being needs and do something in response. It's not that complicated, and it doesn't necessarily involve machine learning at all, but it does involve the assessment of well-being, and you'll see in a number of subsequent examples that this is a place where design, and human-centered design, have a real role to play. As an aside, I'm doing some work now with a colleague on the role of human-centered design in AI system production; I think there are a lot of roles for human-centered designers in AI beyond the development of measures, but that is a topic for another talk.

So, towards a framework. First, human intelligence needs to be welcome in these AI systems, or smart systems, for well-being. It cannot be a gadget-oriented approach; it needs to involve human decision-making and human awareness, and I'll give examples of how I think that contributes to the effectiveness of these systems. An inhuman AI future just doesn't sound right. Another part of the framework is the idea that smart systems are subsystems: we are always designing subsystems, never making a truly autonomous system; it is always part of something else, and therefore we need to think about those interfaces. In general we want to focus on improving measures, but we should look at diverse measures of well-being. And a key idea, maybe the one this talk articulates best, is how to combine metrics with a designerly vision. Paul Hekkert wrote the book on vision in product design, and it is largely his notion of vision I'm referring to here. What I see is a productive tension between the qualitative and the quantitative, the felt experience and the measurable. These are not choices; they are two complementary approaches to the world.
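To make the quantitative side of that tension concrete, here is a minimal sketch of scoring a PERMA-style self-report. The item names and the 1-to-7 scale are assumptions for illustration, not the actual instrument used in positive design research:

# Hypothetical PERMA-style scoring; items and scale are invented.
PERMA_ITEMS = {
    'positive_emotion': ['felt_joyful', 'felt_content'],
    'engagement': ['lost_track_of_time'],
    'relationships': ['felt_supported'],
    'meaning': ['life_feels_worthwhile'],
    'accomplishment': ['achieved_my_goals'],
}

def perma_scores(responses):
    # Mean 1-7 rating per PERMA dimension.
    return {dim: sum(responses[item] for item in items) / len(items)
            for dim, items in PERMA_ITEMS.items()}

example = {'felt_joyful': 5, 'felt_content': 6, 'lost_track_of_time': 4,
           'felt_supported': 6, 'life_feels_worthwhile': 5,
           'achieved_my_goals': 3}
print(perma_scores(example))

The operationalized scores are the metric half of the pairing; the felt, metaphorical vision is the half no scoring function can carry.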
This goes back to some of the earliest philosophy, the Pythagorean idea of numerical harmonies in the cosmos: the idea that there is a tension between these two approaches, and that the tension is productive. When I teach design students I often have them develop measurable goals, SMART goals, right, goals that are very clear and measurable, but supplement them with a vision that is felt, that is metaphorical, that gives the sense of feeling you're going for, not just the defined, operationalized goal. When you have both together, they work with each other. This is an approach I think is useful in any kind of design process, and it is really critical for an AI system, which will by definition have defined, operationalized goals. Having strong visions of the future we want is critical. Now I'm going to give some examples, but I can take a moment if there are any questions.

Jenna wanted to ask a question. Yes, it might be a small question. I'm wondering which of the following two questions, or another one, you are trying to answer: are you looking for a good metric for good AI, or are you trying to find out how we should use such a metric? I have a feeling those are different questions, hearing your story, and I'm not sure which one you're most on.

Well, I think it's the former, but the former implies the latter. Take Facebook: they want to improve user well-being with their newsfeed. How do they measure that? I don't know whether what they're doing is working. How can they approach that problem in a tractable way, and when they have found something, how should they go about responding to it? I think those are parts of the same effort.

I'm not convinced yet, but I'll wait until you've given more examples to throw more questions at you. Great, and please do; that's a good point of clarity I'd like to address. Any other questions before I go on to these examples?

So, most recently the notion of well-being has come up quite strongly in the context of the Covid-19 pandemic, and we've produced and released a new system called My Wellness Check. It's an open science project to understand human well-being at scale and over time. The pandemic has had a lot of effects on our economy, on individual and social health, and on mental health, and in trying to understand how the lockdowns and other actions are affecting people's well-being, we wanted to figure out a better way of measuring it: how can we use human-centered design to understand what people are going through and what their needs are, how can we responsibly assess that, and from there, think about what we can do about it? Eventually this is trying to produce the complete cybernetic loop, but the emphasis for now is on using design to improve the assessment of well-being over time. This is mywellnesscheck.org, a website, and I encourage you to sign up. You'll receive messages via email or SMS asking you to fill in a short assessment of well-being, and the idea is: could we come up with a sort of weather report for how well-being is affected and changing over time?
These are just some example screens: we ask people about their energy level, we have them pick emojis representing feelings they've had recently, really trying to be innovative around the types of measures and assessment while still including standardized, validated measures, all while keeping things as short as possible. After the first month we had just over a thousand total responses. One thing that was quite interesting: one of the most common measures of cognitive well-being is life satisfaction, and you can see this bimodal distribution popping up, where there is a group of people who are really struggling, who are dissatisfied. You can also see some of the recent behaviors: people are having a hard time exercising and sleeping, not so hard a time eating. These are some of the other questions, and then there is the qualitative data that has come out, which has been by far the most interesting part. We've gathered a lot of it, and we can see, for instance, how people with financial challenges and low well-being are affected differently from those without financial challenges who are also struggling. Here are some quotes; the one in the middle is from a person who is doing well, who feels they've been doing better since the lockdowns. These are representative quotes.

The project continues. Just yesterday we had to redesign pretty much all the messaging because of the protests in the States: some of the emphasis on Covid-19 alone had started to sound a little tone-deaf, so we needed to adapt the messaging and the questions to capture how people are feeling without leaning too much on the political situation. But when there are riots in dozens of cities across the States, people aren't doing well; it's a pretty clear sign. And again, this comes back to the orientation of replacing GDP. If we want to better assess well-being, to improve well-being and address people's needs in a more systematic way, as opposed to just growing the economy, there are real technical challenges, but it is not just a technology issue. It's a policy issue and a philosophy issue, and these are things we can't help but engage with, as designers and as human beings. It won't always be at this societal level: even in the context of a company setting up new metrics for optimization, the question of what the goals are and how the metrics represent those goals is something I'm trying to prepare designers to be able to dialogue about. I think they need some basic data science skills, and some basic rhetorical skills, to engage these political and philosophical questions.
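Coming back to the life-satisfaction data: one way to check for the kind of bimodality Derek describes is to compare one- and two-component Gaussian mixture fits. A sketch with scikit-learn, on made-up scores standing in for the survey responses:

import numpy as np
from sklearn.mixture import GaussianMixture

# Made-up 0-10 life-satisfaction scores standing in for survey data.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(3, 1, 400), rng.normal(7, 1, 600)])
scores = scores.clip(0, 10).reshape(-1, 1)

# A lower BIC for k=2 than k=1 suggests a bimodal (two-group) distribution.
for k in (1, 2):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(scores)
    print(k, round(gmm.bic(scores), 1))

On real responses the same comparison would flag whether a struggling subgroup is separating out from the rest, which is exactly what the bimodal histogram suggested.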
Now I'm going to go through a set of design examples from students. The first is an IoT project: a nightlight that responds to a child's mood, as represented by different buttons, while tracking the button pushes over time and saving them in an app. The idea was to help families talk about emotions, keep track of difficult emotions, and support social-emotional learning. It's a cool project, and notice that there is no optimization in this system: the system provides measurement, but any optimization happens on the human side.

In contrast, this Good Vibes project is a smart blanket to help insomniacs fall asleep faster. It uses vibrotactile motors embedded in a weighted blanket, so you get a kind of body scan going up and down your body; it just feels nice, you sort of zone out. The intention is to drive it from physiological signals, so you have a closed loop here. We can involve people, but you don't really want to be controlling it on your phone while you're trying to fall asleep, and when you do fall asleep it should probably turn off. So this is a much more appropriate place for an automated system: here, in contrast to the nightlight, the algorithmic optimization should be in the system, not relying on the people.

Another system in this vein uses the Muse EEG, a four-channel EEG, to measure the individual peak alpha frequency of a person's brain waves. In the visual cortex the alpha frequency is the dominant frequency, and individuals have different alpha peak frequencies, ranging from about 8 to 12 Hz; it varies between individuals and over time. The hypothesis, which has not been tested, is that by flickering lights at those frequencies, or at offsets of those frequencies, we will be able to disrupt the rumination loops associated with depression and burnout, those repetitive strings of negative thoughts.
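For that flicker project, a minimal sketch of estimating an individual's peak alpha frequency from the EEG power spectrum, using SciPy on a synthetic one-channel signal; the sampling rate and the signal itself are assumptions:

import numpy as np
from scipy.signal import welch

fs = 256                      # Hz; assumed consumer-EEG sampling rate
t = np.arange(0, 60, 1 / fs)  # 60 s of data
rng = np.random.default_rng(0)
# Synthetic occipital channel: 10 Hz alpha plus noise, standing in for real data.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)   # ~0.25 Hz resolution
alpha = (freqs >= 8) & (freqs <= 12)             # alpha band
peak_alpha = freqs[alpha][np.argmax(psd[alpha])]
print(f'individual peak alpha frequency: {peak_alpha:.2f} Hz')

The estimated peak would then set the flicker frequency, or an offset from it, for a given individual.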
The next example tries to combine artificial intelligence and human intelligence. It uses adaptive learning algorithms to keep track of which math facts a student has mastered and which ones they struggle with, and provides those to parents: the parent poses the question to their child, while the algorithm determines the next question to ask. This leverages the parent's ability to intuit their child's emotions and support their motivation. It's an AI-human teamwork approach, another example of involving humans in AI and smart systems.

Census was the original implementation of My Wellness Check, focused on a healthcare setting. My father had cancer this past year; he passed away, and in the year he was on chemo I was a little bummed that a medical system that was extremely expensive and super high-tech didn't really seem to be very interested in the other aspects of his well-being as a patient, even the aspects that would affect outcomes, like getting exercise, eating, sleeping. They just weren't tracking that sort of thing, or other aspects of well-being like: are you doing things for fun? These are inputs to medical treatment, but they are also outputs; the point of the medicine is that you can have well-being, and that is somehow divorced from the system today. So this is an approach for making it easier for doctors and hospitals to prescribe remote wellness checks.

NeuroUX is a company started a couple of years ago with a psychiatrist at UC San Diego. We produce mobile cognitive assessment tasks that are used by psychiatric researchers to assess working memory and executive control, as well as ecological momentary assessments of what people are doing: different aspects of their behavior, attitudes, and so on. The basic idea is: how do we get more data into psychiatry so that treatments can be better researched and supported?

This next one is a graduation project with Gerd Kortuem and Jacky Bourgeois, with Song Shan Liu. The idea was to embed sensors in a wheelchair that could identify behaviors associated with well-being, like posture and different exercises, and then motivate those behaviors. It was a really nice project because it had a very clear approach to the data collection and to the alignment of measures with the underlying goals, and it worked pretty well.

Cinapal is a graduation project done with Paul Hekkert and Matthias Heydrich; apologies if I mispronounce his last name. This was in response to the challenges observed with Netflix and other modern entertainment systems, which are more or less trying to hack us into consuming, into spending as much of our attention there as possible. He asked: what would an AI streaming service look like if it were designed to contribute to individual well-being? There is this whole notion of how the system can better understand a person's intentions so that they can be supported: everything from how many episodes of Breaking Bad you actually want to watch, decided ahead of time, to what kinds of feelings you want from your media consumption, using a data collection and discovery process to inform the streaming service. A really beautiful project.

And this one is a rare graduation project that launched on the day of graduation, so it is available for sale today: Envision Glasses, with the TU Delft startup Envision. It's an application of Google's smart glasses for the visually impaired, and the key insight was how to use human involvement when the AI computer vision breaks down, which of course it inevitably does: the interface lets a person very easily call a friend, a volunteer, or a paid worker on several different platforms for the blind. It works, and it was a really well-done project. Here is the launch video.

[Video plays: 'Independence: the ability to live your life without being helped or influenced by others. It can also mean the ability to discover a new recipe... to finish an assignment just before the deadline... to share a laugh with a colleague... to get some fresh air and roam the streets without worry... to catch that train during rush hour... to sort and read my own letters... to cook the favorite meal my family can't get enough of... to push my physical limits, to move, to jump, to function, to feel alive... to be me...
Introducing Envision Glasses, the new AI-powered smart glasses by Envision, empowering the blind and visually impaired to be more independent. Available for pre-order now.']

I'm so sorry, I think we're out of time, so if you could wrap up. Yes, that's perfect, I'm right at the end here. I'll just say that there are a lot of limitations in using metrics. Goodhart's law is a big one to be aware of: when a measure becomes a target, it ceases to be a good measure. And here are some ongoing research questions: how do we design AI for well-being in general, which metrics should be optimized, how do we translate our values into metrics, and what can go wrong? There are some really nice opportunities for using AI to assess well-being: everything from adaptive assessments, as in our My Wellness Check or in chatbots, to sentiment detection in writing, speech, posture, and facial expressions. And even though biosensing has been very unfruitful for assessing well-being so far, I do think there are some strong theoretical opportunities that I've been exploring.

I'll close with this one; it's more future-forward. It's about linking AI and experience, again the quantified and the qualified. We've been using convolutional neural networks to predict the qualities of musical experiences, specifically enjoyment and familiarity, from high-density EEG data, and the hypothesis is that neural entrainment can serve as a metric for engagement and enjoyment. This is what I was referring to with whole-mindedness as a theory of flow: when you are fully coupled to your environment, there are resonance processes that may well be observable. This is an active area in the neurosciences now, and a hard problem, but one we've been pursuing in collaboration with a group at IIT in India.

In conclusion: I have a very big interest in the idea of harmony as a general theory of well-being. It's a very old theory, from Confucius and Lao Tzu, Plato and Pythagoras: the notion of harmony in the self, in our relationships, in society, and in nature. It also shows up in lay definitions of happiness: recently, researchers interviewed some 3,000 different people and found that inner harmony was a major component of how everyday people define happiness. And since harmony is often defined as diversity in unity, there are pre-existing measures of diversity and integration in natural ecosystems, economic markets, and social networks, and I think this frame of harmony, which is a quantitative theory, opens up some new measurement opportunities. So thanks a ton for listening; I really, really appreciate the opportunity to share.

Yeah, thank you. Do we still have time for questions, or do we have to wrap up and close? Well, we still have 16 people out there, so if any of you have questions, we have time; we can keep going for five minutes or so. I have a question eventually, but I want to give the stage to others first.

So I do have a question. For me it was a very inspiring talk, with the many examples you gave, and I'd like to come back to a point you made at the beginning, about autonomy. In the end, what is your answer, for yourself: to what extent would you go for autonomy, and to what
extent would you say, no, let's keep these basically as tools?

Tools; I come down very strongly on the tools side. I am very skeptical of autonomous systems. I think it is much better to design interfaces between systems than to delude ourselves with pure autonomy, because pure autonomy is very rarely the goal. Do I see ourselves as autonomous beings? Well, in a certain way yes, and in a certain way no. I think our individual personas are more illusory than we often admit, but at the same time our desire for freedom is very deeply ingrained, and indeed necessary for us to thrive. So I think there is an important philosophical relationship between autonomy and interdependence that a lot of people have talked about in the past: when you have differentiated people who are individual and autonomous, that creates opportunities for interdependence, because of the diversity of individuals.

Okay, thank you. Luciano, maybe you had a question?

Yes, I do have a question, regarding the first example you gave. First of all, thank you for the presentation; it was very interesting and very inspiring. Regarding the My Wellness Check platform: you mentioned that people have space to put in qualitative data, and since quite a few thousand people have already responded, I'm wondering how this scales up. How can you go through it manually; how do you process this information?

Yeah, it's a huge issue, and something where we've started collaborating on different text-processing approaches. Something like sentiment analysis is not so interesting here, because people are already self-reporting their sentiment; but because of that, it enables discovery. The basic approach we're using now is creating an interface, even though it's really just Google Sheets and things like that, an interface for people to explore the experiences respondents are having, using the quantitative metrics for organization. The quantitative metrics make it much easier and more informative to explore those experiences, and then the goal is really a sort of storytelling. But it takes quite a bit of love, of human engagement, to make use of it.

Yes, I can imagine. Okay, thank you very much. Does someone else have questions? Or I can ask something, since it's also related to My Wellness Check. I was curious: you introduced it as a service, but for now, as far as I understand, you are collecting data. What will the service be when it's finished, and is it somehow similar to existing AI mental-health applications?

So the service has a few different stakeholders, and initially the primary stakeholder we were imagining was institutions: institutions and organizations that are no longer able to check in on their people in person, and that want to make sure everyone is doing okay, anonymously. This is everything from schools to hospitals and those sorts of places, so in that sense it's a service for those organizations to be responsive to the well-being of their people. But all the data we have now is from people just signing up, and what we're building out is some feedback loops where, first of
all, we allow people to self-assess on particular topics, so they can take validated assessments on anxiety or loneliness or things like that, and we then provide appropriate existing mental-health resources. Another aspect is that we've been gathering from participants their own tips and recommendations for supporting their well-being, and sharing those back out with people in the interface. So we're trying a kind of crowdsourcing-plus-AI approach towards community well-being.

Okay, thank you. Other questions? Or maybe we can wrap it up here, since we are a bit out of time. So thank you very much again, Derek; it was really nice, and I hope to see you around. Yeah, thanks again for the invite, and I appreciated the questions. Thank you.
[Music]", "date_published": "2020-06-19T12:14:29Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "a9209fd7b81af626f41dbba80a4eac3d", "title": "Modeling human-AV interactions for safety and acceptance of automated vehicles (Gustav Markkula)", "url": "https://www.youtube.com/watch?v=nRCbKFK2b2A", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Let me talk and share my screen. It's a great pleasure to be here, and thanks for the invitation. Can you just confirm that you see my PowerPoint slides here? Yes? Great. So, yes: modeling human-AV interactions for safety and acceptance of automated vehicles. I've prepared for about 30 minutes, and I've put in a couple of places during the talk where you might want to ask questions; you can stop me at any time, basically, but I've also made a couple of breaks midway for questions.

I'll start with this picture, and what it shows, for the purposes of this presentation, is traffic with a lot of people: people in cars, people on foot, and various kinds of vehicles. In relation to automated vehicles, it is becoming increasingly clear that one of the main reasons, or maybe the main reason, why automated vehicles are not getting deployed on such a big scale, or so rapidly, as we thought a few years ago, is that there are people in traffic. Sorry, was there something in the sound? No? Perfect; just stop me if needed. Yes, so these pesky humans in traffic are quite hard to understand and predict, and that is a big challenge for automated vehicles.

So one kind of question we might ask, a very general question, is: how can we make AVs, automated vehicles, that can successfully coexist with humans? And what I'll argue in this talk is that one important thing, not a sufficient component but a necessary component, is to develop high-fidelity models of human road user behavior. I'd argue we need that as one of the things that must be in place to make AVs successfully coexist with humans; hopefully you'll understand what I mean by the end of the talk. And then, more specifically, what kinds of models do we need? There I'll argue that we need a combination of models that are data-driven, machine-learned models, and models built on clearer theoretical grounds: mechanistic models based on cognitive neuroscience, psychology, and so on. That is something like what I mentioned
I have worked on with a bunch of different people, both at Leeds and at other places, and one part of the talk will cover a bit of what we've done and a bit of what we think comes next. Those bits I'll generally pass through quite quickly; I just want to give you a flavor and an overview, and if there's something you're interested in, you can talk to me afterwards, check the references at the end of the talk, or ask a question.

On the topic of questions: before I go straight into it, I'll try to connect a little with the AITech project. For those calling in from places other than Delft: the people hosting this talk are part of a project called AITech, working on this concept of meaningful human control over automation. So I'll put a couple of questions out there at the beginning, either as discussion questions or maybe as questions from me to you, in the form of two scenarios. On the one hand, think of a road user who is effectively interacting with an automated vehicle: the human road user understands how the automated vehicle behaves, what it does and why, and is therefore able to affect the behavior of the AV with their own behavior, just as we do in traffic all the time, right? As humans, we affect the behavior of other humans with our behavior, sometimes quite intentionally; to some extent we can call that humans controlling other humans in traffic, in some sense of the word 'control'. So, the scenario with the human who transparently affects the behavior of the AV: is that meaningful human control of automation, to some degree? Another question of the same nature: think of an engineer developing automated vehicles, who studies and adjusts how an AV will interact with humans by using computer simulations with powerful high-fidelity models across a wide range of scenarios, and who tweaks the behavior of the AV until its interaction with humans seems as good as it can be. Is that meaningful human control of automation? I'll just leave those with you for now, and launch straight into the pragmatics of it.

The reason people are holding back on large-scale AV deployment, to some extent, is that there are risks associated with it, and I'd say there are two main risks. One is what we can call human frustration. You might have read news reports on Waymo cars (I think they've been the most extensively reported on) being disliked by people sharing the roads with them, for many reasons, but one important reason seems to be that they are not quite getting the subtleties of local interactions in traffic. They are not behaving quite like humans would like them to behave: they get stuck turning across traffic, that kind of thing. That causes frustration, which leads to poor acceptance of automated vehicles. And obviously another thing that leads to poor acceptance is human injury; there have been a number of high-profile cases in the news where people have been harmed by vehicle automation. So crashes cause human injury, but not only crashes are important here: near-crashes are too, because near-crashes can also cause human frustration, especially if they
are new kinds of near-crashes that humans typically wouldn't cause. So those are the risks; that is what's at stake. And I'm arguing that part of minimizing those risks is having high-fidelity models of human behavior. Why do I say that? Is it because I think we need to make AVs drive like humans? Maybe we do, to some extent, but that's not my main motivation. This is just an example of some Leeds research that I wasn't so involved in myself, where we looked at how humans, for example, set extra safety margins when passing a parked car, and then looked at the extent to which humans wanted AVs to do the same. On some coarse level, for that kind of thing, we need models of how humans do things, but I wouldn't necessarily go as far as to say we need high-fidelity models that are very detailed and exact.

The other thing that most AV engineers probably think about when we talk about models of human behavior is real-time, on-board AV predictions of human behavior. There is a lot of literature on this; this is just one example. Basically, given some sensor data about surrounding road users, with some history, you apply an algorithm or method to predict what these people are going to do in the future. That is really important for an AV's planning algorithms, and obviously these methods rely on models to some extent. The models are maybe not very advanced, and my interpretation is that a big part of the limiting factor is still just dealing with the sensor data and the uncertainty in it. Of course, if we had very good behavioral models of humans, they would be useful for this kind of thing, but it's not the prime concern at the moment.

One area where it is a prime concern already, and perhaps not sufficiently appreciated, is models as agents in virtual environments for simulated AV testing. You may or may not have heard people talking about the importance of simulation-based testing: basically, running lots and lots of simulated miles or kilometers of driving. The image here is from a Waymo report on their platform, where, for example, they can replay logged data. The surrounding road users come from logged scenarios, collected while the ego vehicle, driven either by a human or by the automated driving system, gathered the data; then, whenever you update the algorithms, the software on the ego vehicle, you can re-run all of that logged data to check that you haven't introduced something that causes strange or dangerous situations. That is really important. But of course, as soon as your update changes the behavior of the ego vehicle even slightly, there is a problem: the surrounding logged road users are just being replayed open-loop, so there is no interaction. If you change your behavior as the AV, you don't know how the others would have responded. That is the point where you start needing models, and people have been talking about this; but unless there is a lot happening under industry hoods that isn't being published at all, it seems to me that, just as three years ago, we still don't know how to model how humans interact in traffic. It is a really complicated problem. So in one sense it is fair to ask: these simulations are important, and people are saying that's how we're going to know that automated vehicles are safe, by running billions of miles of simulated testing; but if there aren't really any humans in that simulated testing, is that going to cut it?
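A toy illustration of that open-loop problem: a replayed pedestrian follows the log no matter what the updated AV does, while even a trivial reactive model closes the loop. All numbers and the 5 m yield rule here are invented:

# Toy 1-D example: an ego car approaches a pedestrian crossing.

DT = 0.1          # s
WALK_SPEED = 1.4  # m/s

log = [i * WALK_SPEED * DT for i in range(40)]  # logged pedestrian positions

def replayed_pedestrian(step):
    return log[step]                  # open loop: ignores the ego entirely

def reactive_pedestrian(pos, ego_gap):
    # Closed loop (toy rule): wait while the ego car is within 5 m.
    return pos if ego_gap < 5.0 else pos + WALK_SPEED * DT

ego_gap, pos_react = 12.0, 0.0
for step in range(40):
    ego_gap -= 0.3                    # the *updated* AV now closes in faster
    pos_replay = replayed_pedestrian(step)
    pos_react = reactive_pedestrian(pos_react, ego_gap)

print(f'replayed: {pos_replay:.2f} m, reactive: {pos_react:.2f} m')
# The replayed agent keeps walking into the conflict; the reactive one yields.

The hard part, of course, is replacing the toy yield rule with a model of how humans actually interact, which is exactly the open problem being described.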
It's not good enough. So, I mentioned machine-learned agent models at the start, and I think these are super important; there is a lot of work on this as well. This is one example, where researchers extracted road-user trajectories from camera data and then used an imitation-learning machine-learning approach to create agents that behave as much as possible like the observed humans did, in this and other traffic situations. These methods are beginning to achieve realistic-looking routine traffic, which is great and really important, but I think there are a couple of challenges in relation to, specifically, the two main risks I mentioned at the start.

When it comes to the behavior of humans in near-crashes and crashes, it is important to realize that these machine-learned models need a lot of data from real traffic, and in any real traffic dataset, even one with lots and lots of data, near-crashes and crashes are going to be very, very rare. So it is going to be very hard to know to what extent these models generalize well to near-crash situations, which are arguably very important for simulated testing and verification. The other issue is with respect to human behavior in local interactions: how do we know that these models are capturing the important subtleties of behavior? When you squint a bit, the simulations look like the real thing, but at the detailed level, how can we know that the important bits are right?

What I'd suggest, and stress, is that part of the answer is to complement these black-box machine-learned models with white-box neurocognitive models, models based on cognitive science and the like, which then have the potential to give us insight into how the mechanisms generalize, addressing this issue of generalization to near-crash behavior, which is something I and others have done a lot of research on. It also gets us into a scientific frame of working, where we can take a model as a hypothesis, run a controlled experiment testing it, parameterize it, and so on, which ultimately leads to something like understanding: you start to understand what the important subtleties of local interactions are, for example. And I don't think we can solve the problem of getting the subtleties of interactions right without trying to understand them to some extent; the black-box work is really important, but we also need some understanding.

Okay, so that was my main first part. I'll pause for a moment here and ask if there are any thoughts or questions so far; if not, I'll continue straight on. If you have a question, please raise your hand, as you would in real life, or post it in the chat, and we'll allow a few seconds. If there are no other questions, go ahead and continue; there is another slot at the end as well. Okay, cheers.

Yes, so let me say a bit about what I and others have done to build models and modeling frameworks for these kinds of things. One of the first things I did
Okay, so that was my first little part, so I thought I'd pause for a moment and ask if there are any thoughts or questions so far; if not, I'll continue straight on.\nIf you have a question, please raise your hand as you would in real life, or just post it in the chat, and we'll allow for it in a few seconds. If there are no questions, keep going with your talk and we can catch up; there's another slot at the end as well.\nYeah, okay, cheers. Okay, yes, so let me say a bit about what I and others have done to make models and modelling frameworks for these kinds of things. One of the first things I did in driver modelling research was to try to get a framework that could cover both routine driving and near-crash driving. Because I was interested in how the brain does stuff, how humans actually do things, I tried to steal as many good ideas as I could from psychology. Like perceptual heuristics: the idea that there are specific, salient pieces of information in our environment that we make use of as part of heuristics when we control our behaviour. And decision-making mechanisms, especially in terms of evidence accumulation, which you may or may not have heard of as a model of how the brain makes decisions: it adds together bits of evidence until a threshold is reached, where a decision on what to do is made; there is both behavioural and neurophysiological support for that kind of model. Another idea I've made use of is motor primitives: even behaviours that look continuous and sustained over time can be broken down into minimal blocks of control.\nPutting those bits together in one framework explains routine driving and near-crash driving in the same language. Conventionally, routine driving is often described as closed-loop, short-delay, well-adjusted or even optimal control, whereas near-crash driving is described as open-loop behaviour, without concern for how something plays out after you've done it, with long, random delays before you do anything, and with under- and over-reactions (maybe you brake too little, you steer too much), so anything but optimal. In practice, these obviously have to be the same thing; there's no sharp edge between them, it has to be a greyscale of some sort. Very briefly put, you can tie them together by positing that there are intermittent motor-primitive adjustments that are decided on by accumulation of evidence of various kinds, and that perceptual heuristics determine how much control to apply. Those heuristics might be well adapted to routine driving, but when you push them into a near-crash context that they were not learned for, they might be severely suboptimal. Sensory predictions also have a place in this framework. Out of this you get models that can be used, for example, as a sort of cognitive crash-test dummy; that's something my collaborators and I have been publishing on.\nSome examples of what we've applied it to: explaining routine and near-crash braking. We've shown that, among a bunch of different models we compared, this kind of accumulation model was the best one at explaining brake response time distributions in normal, car-following traffic; and applying the same framework in even more detail, we could also explain the brake control itself to some extent. We also looked at naturalistic data from the SHRP 2 dataset, which you might have heard of, a large US dataset with actual crashes and near-crashes in its logged data. We can find model parameterizations that are sensible and that help explain this near-crash braking as well, and explain additional parts of it that were difficult to understand without a model; that's in a preprint that's out now, at the link there.
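As a concrete illustration of the accumulate-to-threshold idea described above, here is a toy simulation of brake response times; the threshold, gain and noise values are invented, not fitted parameters from the published models:

```python
# Toy accumulate-to-threshold model of brake response timing.
import numpy as np

def brake_response_times(looming, dt=0.01, threshold=1.0, gain=1.0,
                         noise_sd=0.3, n_trials=500,
                         rng=np.random.default_rng(1)):
    """Accumulate noisy evidence (gain * looming) to a threshold;
    the crossing time is one simulated brake response time."""
    rts = np.full(n_trials, np.nan)
    for trial in range(n_trials):
        acc = 0.0
        for i, loom in enumerate(looming):
            acc = max(acc + (gain * loom + rng.normal(0.0, noise_sd)) * dt, 0.0)
            if acc >= threshold:
                rts[trial] = i * dt
                break
    return rts

t = np.arange(0.0, 5.0, 0.01)
looming = np.clip(0.4 * t, 0.0, None)   # toy ramp after the lead car brakes
rts = brake_response_times(looming)
print(np.nanmean(rts), np.nanstd(rts))  # a whole RT distribution, not one value
```

The point of such a model is exactly what the talk emphasizes: it predicts whole response-time distributions, not just a single mean reaction time.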
Another type of extension is to pick up on the sensory-prediction aspects I mentioned, connect the framework with predictive-processing types of models, and apply it to automation failures. If you're driving a car that is controlling your speed and the car in front brakes, then normally, if neither you nor the system did anything, you'd see a visual looming of this obstacle rising, like this black line. But since you know you're in an automated vehicle, you're expecting the looming to rise more slowly, as the system takes care of the situation. If the system fails silently and doesn't actually brake for you the way it's supposed to, what typically happens is that drivers do not respond as quickly to the lead vehicle deceleration as they would have if they had been in control themselves. What we suggested is that the delays you see can be explained by assuming that what drivers respond to in this automated mode is the prediction error between what they were expecting to see, for a working automation, and what they actually saw. If we fit the model to the manual braking of drivers, then without any further parameter changes we can apply this prediction-error idea and get pretty good predictions of the delayed responses when people experience silent automation failures.\nAnother benefit of building models that link to cognitive science and psychology concepts that have been studied at a more basic component level is that you can also connect them to neurophysiology. We have started doing some work on these driving-related tasks to see whether, as others have done for more basic tasks, we can see traces of these evidence-accumulation processes happening in the brain, and it turns out we can, to some extent. It also turns out to be more complex than one might have hoped at the start, but we do see the same kinds of signatures as in more typical lab-based tasks. This is just some EEG data; I won't go through the details.
Okay, so that was past and current work focusing on one vehicle at a time, but now we want to generalize this to interactions. The first thing we did was in a project which is now coming to an end; it has its final event tomorrow, on Friday. I don't know if you can still register; if you're interested, ask me afterwards, or google the interACT project, on interactions between automated vehicles and humans. We did some modelling work in there on, for example, a pedestrian crossing scenario and a driver turning scenario; we spent most effort on the pedestrian crossing scenario. In VR experiments we had people crossing in front of oncoming vehicles that behaved in different ways; every little panel you see here is a different scenario, a different way the vehicle was approaching the pedestrian, who decided to cross either before or after the car. You get these rather complicated probability distributions of crossing time, which are often bimodal: either you pass before the car early, or there's a gap in the middle where it's too scary and you pass afterwards. There's also this decelerating-to-a-stop waiting behaviour that you might recognize from real traffic, or at least I tend to do it quite often: you see a car that, in hindsight, you understand was stopping for you all along, but you end up waiting until it has come almost to a complete stop before you pass. We saw that in these experiments as well.\nWe then started playing around with how to account for these kinds of behaviour with models. We had some rather complex ideas of stringing together a bunch of different evidence-accumulation units making perceptual decisions of the nature of "I can make it across before the car" or "the car is stopping for me", which would then excite or inhibit action decisions like "I am crossing now". We also tried some simpler versions with just a single evidence-accumulation unit but with the same kinds of inputs, and what we found, quite nicely, was that all these models, even the simplest ones, actually work quite well to capture the overall gist of these very complex probability distributions. In current work that we haven't yet published, we're getting even better fits and nicer models out of this, so that's good.
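A toy version of the "single evidence-accumulation unit" crossing model might look as follows; the drift values, the time-to-arrival profile and all parameters are my own invented stand-ins, but the mechanism (evidence suppressed while the car is close, released once it has passed or stopped) is what produces the bimodal crossing-time distributions described above:

```python
# Toy single-accumulator pedestrian crossing model.
import numpy as np

rng = np.random.default_rng(2)

def crossing_time(dt=0.05, threshold=1.0, noise_sd=0.5, car_arrival=4.0):
    """Accumulate evidence for 'I am crossing now'; the drift depends on
    the (toy) time-to-arrival of the oncoming car."""
    acc, t = 0.0, 0.0
    for _ in range(400):                 # simulate up to 20 s
        tta = car_arrival - t
        if tta > 1.5:
            drift = 0.35                 # comfortable gap: cross before the car
        elif tta > -0.5:
            drift = -1.5                 # car too close: too scary
        else:
            drift = 1.5                  # car has passed (or stopped): cross
        acc = max(acc + (drift + rng.normal(0.0, noise_sd)) * dt, 0.0)
        if acc >= threshold:
            return t
        t += dt
    return np.nan

times = np.array([crossing_time() for _ in range(3000)])
# Two modes separated by a gap around the car's arrival time:
print(np.histogram(times[~np.isnan(times)], bins=10, range=(0.0, 10.0))[0])
```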
Then, tying back to the kind of scenario I mentioned in one of my opening questions, the engineer who is using simulations to tweak AV behaviour: that's one illustration of that kind of idea. We can take this first model of the rather simple pedestrian crossing scenario and start playing around with it in simulation.\n[Music]\nOkay, and that was an overview of the stuff we've done. I don't know if any questions have come in at this point, or if anyone in the audience wants to chip in anything.\nYeah, I just wanted to mention that for the last minute or so, when you explained the looming simulations and showed the animations, we couldn't really hear you; it seems your bandwidth is limited and it was all taken by those nice animations, so we couldn't hear you for the last slide. Other than that it was working. I thought we might already have some questions by this stage; please raise your hand or add the question. Yes, Suresh wants to speak; Suresh, can you unmute yourself?\nYes, hello, can you hear me? Thank you very much; I'll keep it short. I wanted to know a bit more about the looming prediction model. I didn't really understand how you created the predictions about what the humans think the future states of the autonomous vehicle will be.\nGustav, could you hear the question? I made the mistake of starting to go back through my slides and hit the animations slide. I understood that you wanted me to go into the looming prediction model, but I didn't catch the rest; if there was more to the question, could you please repeat it?\nNo, not really; I just wanted to understand the basics of the model, how you make the predictions about where the humans think the autonomous vehicle will be in the future.\nOkay, yes, let's see, here we are. So the basic model just takes this visual looming, the relative optical expansion rate of the car in front of you: how fast, relatively speaking, it's growing on your retina over time. That's described by this black line, and the model just accumulates it, with some noise, to a threshold, and that gives the normal behaviour. The key bit is then to run some simulations where the system is in control and working like it should: the looming that a driver would see when the system is working as it should is this dotted line, for a lead-vehicle deceleration scenario of the kind studied here. The assumption is that the driver, from training and maybe from real-world experience, has learned: when the system works, I will see a looming signal that looks like this dotted line. Sorry, I was pointing at the wrong line before: the actual, normal looming signal is the dashed one, and the one for when the system is working as it should is the dotted one. The prediction error between the two, the difference between these two lines, is the black line down here, which, as you can see, rises more slowly. That leads to the prediction of slower brake response times: when we feed this prediction error to the exact same model, parameterized on the manual braking behaviour, we hit quite nicely the brake response times that we saw in the study. Was that a bit clearer?\nYeah, that is very helpful; thank you very much.
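The prediction-error mechanism just explained can be sketched in a few lines; the looming profiles below are toy ramps rather than the simulated scenario profiles from the study, and noise is omitted for clarity:

```python
# Toy sketch of the looming prediction-error account (noise-free).
import numpy as np

dt = 0.01
t = np.arange(0.0, 6.0, dt)
actual = np.clip(0.5 * (t - 1.0), 0.0, None)    # silent automation failure
expected = np.clip(0.2 * (t - 1.0), 0.0, None)  # looming under working automation
prediction_error = actual - expected            # rises more slowly than 'actual'

def first_crossing(signal, threshold=1.0):
    """Time at which the accumulated signal first reaches the threshold."""
    acc = np.cumsum(np.clip(signal, 0.0, None)) * dt
    idx = int(np.argmax(acc >= threshold))
    return idx * dt if acc[idx] >= threshold else np.nan

print(first_crossing(actual))            # manual-driving brake response time
print(first_crossing(prediction_error))  # later response under automation
```

Feeding the same accumulator the prediction error instead of the raw looming signal is what yields the delayed responses, with no parameter changes.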
So we have another question, from Monique; do you want to ask your question in person? You posted it in the chat as well. We cannot hear you; the volume is very low, I think because Nick is in the same room with me. Can you hear me now? Yes, perfect.\nSo yeah, I asked the question in the chat already. Thank you, first of all, for the great presentation, super interesting. I was wondering: most of these behavioural models are based on interaction with a static actor, for instance a car coming towards you at a fixed speed. What are your thoughts on when these interactions actually become dynamic, so that you influence each other, so the AV and the human adapt to each other? Do these models still hold?\nI think that's a super important question, and definitely, what we've done so far with these road-crossing models does not extend to that: we're just modelling one agent, and, like you said, in those studies, but also in our simulations, we're controlling what the oncoming car is doing. So obviously what you need is models of both actors. That said, the pedestrian model in this case is, you could argue, in some ways already taking into account what the oncoming car is doing: if you had a model of the car that could decide to pass or decelerate in different ways depending on what the pedestrian does, this pedestrian model is equipped to respond to those behavioural choices of the car. But then you also need a model of the car and the car driver itself, or the AV. That's something we're starting to broach now in currently ongoing projects, but it is an order of magnitude more challenging. And like you said, human-in-the-loop studies are definitely something we want to do more of. Here in Leeds we have the nice benefit of a super nice pedestrian simulator, a CAVE, as well as multi-display pedestrian simulators, and a few very nice driving simulators, so we've started connecting those together to run coupled simulations where humans can interact. That's one method of trying to get data on this kind of thing, but the complexity of setting those studies up is, as you can imagine, also an order of magnitude larger. So not an easy problem, but a very important one for sure.\nGreat, thank you; looking forward to reading about it.\nGood, I think we're running out of time for questions on this part of the talk, so do you want to keep going? Yes, let's do that; no animations now, since Jitsi seems to prioritize the graphics over my voice.\nSo, a little bit like the last question suggested, there's lots more to interactions in traffic than what was covered in these slides. This is a figure from a recent paper where we wanted to look at this a bit more conceptually: what is this beast, what are interactions? It's not a modelling paper; it's a concepts and terminology paper, where we defined a space-sharing conflict as an observable situation from which it can be reasonably inferred that two or more road users are intending to occupy the same region of space at the same time in the near future. With that we can also define an interaction: a situation where the behaviour of at least two road users can be interpreted as being influenced by a space-sharing conflict between them. The key thing is that people are competing over space here, or negotiating, or in some cases collaborating. To the left here you see a bunch of different types of space-sharing conflict that generate interactions, and to the right you see different types of behaviours that people have observed and studied in the literature: behaviours to achieve your own movement or perception (moving around, looking around), behaviours that seem to have the purpose of signalling about your own movement or perception, behaviours requesting movement or perception from others, and also behaviours for signalling appreciation, the thank-you or the f-you to other road users. So you can map out what's been studied; the crosses are just examples from papers across these different categories of behaviour, which can overlap: when a pedestrian adapts its trajectory to yield to an AV, that's movement-achieving, but it's also moving-and-signalling.
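Purely as an illustration, the space-sharing-conflict definition above could be operationalized as a simple predicate over predicted trajectories; the constant-velocity extrapolation is my simplification, not the paper's formalization:

```python
# Toy operationalisation of a space-sharing conflict.
import numpy as np

def space_sharing_conflict(p1, v1, p2, v2, horizon=5.0, dt=0.1, radius=1.0):
    """Infer a conflict if constant-velocity extrapolations of two road
    users come within `radius` metres within `horizon` seconds."""
    for t in np.arange(0.0, horizon, dt):
        if np.linalg.norm((p1 + v1 * t) - (p2 + v2 * t)) < radius:
            return True
    return False

# Pedestrian crossing the road vs a car driving along it:
ped_pos, ped_vel = np.array([0.0, -5.0]), np.array([0.0, 1.5])
car_pos, car_vel = np.array([-20.0, 0.0]), np.array([6.0, 0.0])
print(space_sharing_conflict(ped_pos, ped_vel, car_pos, car_vel))  # True
```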
One aspect of this, which again connects to that question of how agents adapt to each other, is when you start to think about strategic or game-theoretic behaviour. Game theory is the typical thing you get into if you think that agents in the world are trying to maximize some benefit for themselves: they're trying to get to their goal on time and safely, for example. But if you have multiple actors doing that at the same time, for example competing over space like I just said, then you get into game-theoretic types of considerations. That's something we have done within Leeds with a PhD student called Fanta Camara, where we've been able to formulate simple game-theoretic versions of some of these scenarios and also to estimate some of the payoffs in these situations: how much people prefer progressing over avoiding crashes, how much crash risk you will accept, basically, to get some progress. That's really interesting and elucidates a lot of things, but there's a further complication: we know from basic studies of game-theory-type situations that human behaviour is quite often not game-theoretically optimal. Even when we know exactly what these payoffs, these cost functions or value functions, are, people don't always behave like Nash said they should; and often we don't even know the payoff matrix, because humans value strange things. Sometimes people aren't just prioritizing getting to their goal without delay; sometimes people seem to value being polite, or helping each other. So it's challenging to get at these rewards or value functions. Nicely enough, though, there has been quite a bit of work at the basic level in psychology on why and how people are not game-theoretically optimal, and on social orientation in game-theoretic situations. I'm highlighting here one interesting recent paper which connects game-theoretic reasoning with accumulation models, where you can quite nicely follow how an agent reasons iteratively: if I do this, then he will do that; but if he knows that I would do that if he does that, then I should do this instead, that kind of thing. So that's something to maybe leverage.
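For illustration, here is a toy one-shot version of such a pedestrian-driver game; the payoff numbers are invented, and, as noted above, estimating them from data, and accounting for people not playing the equilibria, is the actual research challenge:

```python
# Toy one-shot "who goes first" game between a pedestrian and a driver.
import itertools

PED_ACTS, DRV_ACTS = ["go", "wait"], ["go", "yield"]
# payoffs[(ped, drv)] = (pedestrian utility, driver utility) -- invented
payoffs = {
    ("go",   "go"):    (-10.0, -10.0),  # collision risk dominates everything
    ("go",   "yield"): (2.0,   -1.0),   # pedestrian progresses, driver delayed
    ("wait", "go"):    (-1.0,   2.0),
    ("wait", "yield"): (-2.0,  -2.0),   # deadlock: both lose time
}

def pure_nash():
    """Pure-strategy Nash equilibria: no player gains by deviating alone."""
    eq = []
    for p, d in itertools.product(PED_ACTS, DRV_ACTS):
        up, ud = payoffs[(p, d)]
        if (up >= max(payoffs[(q, d)][0] for q in PED_ACTS)
                and ud >= max(payoffs[(p, r)][1] for r in DRV_ACTS)):
            eq.append((p, d))
    return eq

print(pure_nash())  # [('go', 'yield'), ('wait', 'go')] -- yet real humans
                    # often play neither equilibrium exactly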
Other important areas for model development that we need to get into are human recognition of actions and intentions, and communication. The pedestrian models I mentioned already do a bit of that: they're recognizing whether the car is stopping. I didn't mention it, but we've also incorporated some of the new ideas about external human-machine interfaces, where the AV signals with flashing lights, flashing headlights even, to the pedestrian that it's going to stop, and that can then affect the behaviour of the pedestrian. But if you look at the bigger picture, for more kinds of scenarios, you need more general ideas for how this works, and nicely enough there are, again, models and modelling ideas from cognitive computational science and computational neuroscience that address this kind of thing. One example is a model of a joint coordination task in which the agents need to understand what the other agent is doing, whether he is going left or right with his arm, and in which the agents might exaggerate their movements, like an AV in simulation exaggerating its deceleration, to be more understandable; that kind of thing has been studied and modelled in more basic contexts. Another complication that is important to bear in mind in some situations is that humans can only look in one direction at a time, so especially if there are more than two actors you might need to get into attention-allocation modelling to some extent; again, there are models out there in the more basic literature that you can consider leveraging, and they fit together quite nicely. The nice thing about having this cognitive-science-based framework is that it's relatively easy, well, easy is of course not quite the word, but it's feasible, to pick these other components up, because contemporary cognitive neuroscience provides what's needed. So, if a modeller is allowed to dream, you might say that maybe we can head towards a more complete neurocognitive modelling framework for interactions in traffic; I won't talk through this slide in detail, you can look at it later in the recording or in the written version and ask questions. Something that works for multiple agents and for more different scenarios, but can still get the subtleties of local interactions reasonably right. That's what we're working on at the moment, mainly in a project called COMMOTIONS, and in some other ones as well: we're aiming for more complete neurocognitive models of interactions, and as part of the project we also want to investigate the complementarity with machine-learning models, so we're trying to do a bit of both and see if we can benchmark them against each other, and maybe get the best of both worlds. A few months back we published a green paper, a short one, with some more information about the project; if you want to read it, please do. There are also some questions in it and an invitation for input, so feel free to get in touch about that.\nSo, in summary, I've argued that safe and acceptable automated vehicles require complementing data-driven models of human behaviour with neurocognitive models; we, and obviously lots of others as well, are working on that challenge, and I'm very happy to have input and further discussion. Thanks.\nAny questions? Let's see; maybe I should open this chat window. Derek would like to speak; that's fine with me.\nYeah, great, thank you very much; it was really interesting to look at these neurocognitive models aligned with autonomous driving systems. You only waded into it a little bit, but these motor primitives, and this broader vision of how the brain can handle some things very quickly: I wonder if there's a more speculative vision that you have for how this research might progress over time?\nWith respect to motor primitives specifically, or, sorry, what are you thinking of?\nYeah, I guess with respect to motor primitives; I'm thinking about some of these aspects of cognition that we don't get to very often because they're deep in the weeds of the motor system, central pattern generators and things of that nature. I wonder whether there's a general approach to how you see the co-development of the neuroscience work and the AI work coming together over a broader timescale. Basically I'm asking you to speculate a little bit.
Yeah, I mean, you could start from the motor primitive; that name has branched out in such a multitude of different directions, and there are so many ways you can go. What I've done with it is basically use motor primitives as my minimal building blocks: for me, the motor control story stops there; I can issue a command and something makes that motor primitive happen. But there's obviously lots of detailed motor-neuroscience work on exactly how that works at a more fine-grained level; there are further hierarchical levels beneath, where the neuromuscular system tries to ensure that the desired motor behaviour actually happens. And then there's a connection to robotics, for sure. It's not literature I'm deeply familiar with, but I know there's been a lot of work on using motor primitives in robotics, and it fits together with this general perspective of hierarchical control: the level which issues the command for a motor primitive can very clearly be thought of as one hierarchical level above another. And I think, and others have said this as well, you can think of further hierarchical levels above that, where a motor primitive is part of a more compound primitive, a behaviour primitive, that launches a set of motor primitives in sequence, and which in turn is well adapted to a longer temporal scale of the context. So it fits together with that kind of hierarchical perspective as well. I don't know if any of that connected with what you were aiming for?\nOh, it does; thank you. I'll send you a paper that I came across recently on the hierarchical control of central pattern generators. There are all kinds of weird things in the neurosciences, it's such a huge field, and this one in particular had some very nice visualizations and created, I think, a practical view of exactly what you're talking about. I guess the place where I'm curious is what implications that might have for training; I'm trying to do too much at once, but there are these small responses that we want to train up and build together; that's kind of my takeaway.\nWhen you're training the automated vehicle, for example? Yes, exactly.\nYeah, it's certainly part of how we think humans learn, right, with this motor babbling: you get basic control of your basic motor primitives and then you start compounding them together into more advanced behaviours.
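A toy rendering of that hierarchy, with all names, shapes and numbers invented: a behaviour primitive that launches a sequence of stereotyped motor primitives:

```python
# Toy hierarchy: a behaviour primitive launching motor primitives.
import numpy as np

def motor_primitive(magnitude, duration_s, dt=0.01):
    """One minimal block of control: a bell-shaped adjustment profile."""
    t = np.arange(0.0, duration_s, dt)
    return magnitude * np.sin(np.pi * t / duration_s) ** 2

def behaviour_primitive(sequence, dt=0.01):
    """Compound primitive: motor primitives launched in sequence."""
    return np.concatenate([motor_primitive(m, d, dt) for m, d in sequence])

# "Brake hard, then ease off", as two chained motor primitives:
profile = behaviour_primitive([(-4.0, 1.0), (-1.0, 0.5)])
print(len(profile), round(profile.min(), 2))  # 150 samples, peak -4.0 m/s^2
```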
Okay, someone else wanted to speak. Hello, can I ask a question? Can you hear me? Yes. Thank you very much for a very interesting presentation. My question is maybe a little bit of a follow-up to the earlier question about motor interactions, where we can have some unexpected emergent properties, and I connect it to work on ethics. I don't know if you're familiar with what's called the naturalistic fallacy: the impossibility of deriving ought, what we should do, from what is. It is not really established that if something is natural it is therefore morally acceptable; that is very much challenged. So, in this sense, how do you see moral values, norms and moral reasoning for autonomous vehicles? Is that something you consider relevant, and what are your thoughts on that?\nFor sure relevant, definitely, but I must confess I haven't thought much about how that connects in here. Unfortunately, maybe the limited bandwidth that was mentioned was playing tricks on me, because I couldn't catch all of your question; you were saying something about natural...\nYes, it's called the naturalistic fallacy, and what it means in ethics is the impossibility of deriving the ought from the is: you should not derive what the autonomous vehicle should do based only on what humans do, taking the human factor as the centre of what is right, because there may be moral principles or other things that should also be considered. And here I'm not really talking about trolley problems and those kinds of things, but about more practical, everyday moral reasoning.\nYeah, I think one possible connection, and I mean I don't see the connection completely, but to some extent: there are things that we dislike others doing. I'm not sure whether that fits exactly with the definition of what's moral or not, but there are some behaviours in traffic that others do which I would dislike, and I would maybe say that was an immoral thing to do. My connection to the models there is that if we are able to get, as I'm hoping, to models where this negotiation process is covered somewhat explicitly, this game-theoretic negotiation, then in theory it should be possible to say, from a given model simulation, whether the model was happy or not with what the other actor did. Now, whether the model being unhappy with what the AV did means that the AV behaved immorally, I'm not sure that is true, but maybe there's some weak link at least.\nYeah, I can definitely see some connections there, also to the moral acceptance of these systems by society. Okay, maybe I'll get in touch soon to discuss some other ideas I have there. Yes, please. Okay, thank you very much.\nThanks, really nice question. We also have a question from, I'm not sure I'm pronouncing the name correctly, do you want to ask your question in person, or post it in the chat? Not working? Then you could type your question in the chat. Meanwhile, can I ask a question myself? So, Gustav, you mentioned the really important point in the beginning that you think these neurocognitive models of humans are necessary; I absolutely agree they are, but I'm biased here, of course. I was wondering whether you think the data-driven models you mentioned are also necessary, or whether we could get away without them, just using neurocognitive models? And if you think they are necessary, what is your take on integrating, for instance in the context of virtual simulations of AVs, data-driven models and neurocognitive ones?
Good question, thanks. I would say that in theory they are not necessary, but in practice they are, the data-driven models. In theory we could make fantastic neurocognitive models that capture human behaviour very nicely across a vast multitude of different scenarios, but that's just a very tall order, and it will take some time, if it's ever possible; human behaviour is just so complex. What I think we can hope for is to make neurocognitive models that can deal with some local situations with a few actors, and then build a catalogue of different scenarios that we believe we can address reasonably. But to actually do the large-scale simulated verification and so on, we need models that can scale up to a much bigger variety of scenarios, and that's where the data-driven, machine-learned models can shine. We can't be sure that they get every detail right, and we can't necessarily be sure that they get the near-crash stuff right, but we can complement them: at the very least we can do both things, which is the simplest way of combining the two approaches, testing our ideas both with machine-learning models and with neurocognitive models in parallel. Beyond that, in terms of combining them, we're just scratching the surface of that question, but there's a bunch of different approaches. If the argument is that we have too little data on near-crash situations, maybe we can generate near-crash data with the neurocognitive models; that's one kind of connection. Another possibility would be to look at the neurocognitive models and see whether their architecture can inspire the architecture of the neural networks we would use in the machine-learned case: by figuring out which aspects of the neurocognitive models are important to get right, we might help the machine-learned models get those parts right by constraining their architecture. So there are different approaches, some connecting the two more tightly, others keeping them more separate and interchanging data, but I think we're just beginning that journey.
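The first combination route mentioned, generating synthetic near-crash data with a cognitive model to augment scarce real data, could look schematically like this; the accumulator stand-in and all numbers are hypothetical:

```python
# Toy sketch: augment scarce near-crash data with model-generated samples.
import numpy as np

rng = np.random.default_rng(3)

def cognitive_model_rt(gain=0.5, threshold=1.0, noise_sd=0.1, dt=0.01):
    """Stand-in accumulator model returning one brake response time."""
    acc, t = 0.0, 0.0
    while acc < threshold and t < 10.0:
        acc += (gain * t + rng.normal(0.0, noise_sd)) * dt
        t += dt
    return t

real_rts = np.array([1.9, 2.3, 2.8])   # rare, precious near-crash observations
synthetic_rts = np.array([cognitive_model_rt() for _ in range(500)])

training_set = np.concatenate([real_rts, synthetic_rts])
print(len(training_set))               # enough to train an ML model against
```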
Good, thanks, very interesting. So, a question was posted in the chat; I'll read it out: "I'm not sure whether the data used for the models is naturalistic data from manually driven cars. If so, how would that influence things when considering the human user's interaction with AVs?" What do you think?\nYes, so the data we have used has partly come from naturalistic data sources, and partly from controlled experiments. In the pedestrian-crossing situation I mentioned, it was controlled VR experiments with a computer-controlled car. You can either put a little human in there, whom the participant may or may not see, and then it's, in quotation marks, a human-driven car; or it can be an AV, especially if it has those flashing external HMIs. But I do agree it's an important distinction, in terms of generalization. What I'm trying to focus on at the moment is understanding and modelling human-human interaction; that's my starting point. But I agree that an important part of this is also to make sure that, if the AV then starts behaving a little differently from the human in some way or another, the models also generalize correctly to that. And that's again a challenge for the data-driven models: if the AV you're trying to test does something differently from what humans would typically do in that situation, and I hadn't thought about that specific angle before, actually, so thanks for that, how do you know that your machine-learning models are generalizing correctly? Of course, that's something you would also need to test with the more white-box, theory-driven models as well. I hope that answers your question to some extent.\nOkay, do we have any other questions? We are technically out of time, but there is time for one more question. That's fine, I'm in no rush.\nOkay, maybe a last question then. We were mostly talking about interactions between a human and an AV, which implicitly assumes there are two agents, a human and an AV. As far as I'm aware, the neurocognitive models, for instance of decision-making, and the attention models, are tested in very limited lab-based situations; there are very rarely more than two agents that can be accounted for in this kind of work. So do you think this kind of generalization to multi-agent interactions is possible?\nI would hope so, but I agree that's another, further complication, and like you said, when people have studied interactions in the lab at this detailed level of modelling, I can't think of any paper doing it with more than two people. I'm certainly trying, in my modelling work now, to "take height" for this, if that's an expression in English: to have ideas about how you could consider multiple agents. But it is a complication in its own right, for sure, and like you say, there isn't great support for it currently in computational cognitive science, so it would be nice if more computational cognitive scientists addressed that in the lab, so we could steal their models for these applied purposes. Okay, thanks.\nYeah, if there are no more questions, let's wrap it up. Thanks a lot, Gustav, for the really interesting talk. I think this meeting has hit a record: at some point we had more than 50 people present, which is something we haven't seen before, so there is a lot of interest in what you're doing.\nYeah, thanks a lot; that was very interesting and very stimulating. Looking forward to getting into more detailed discussions with you, and I'm pretty sure you'll be getting some more emails in the coming days. Yes, please don't hesitate to get in touch, everyone listening, if you have thoughts or questions or whatever; it's been an absolute pleasure, I had good fun. Thanks. Okay, bye-bye. Thanks", "date_published": "2020-06-17T15:03:26Z", 
"authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "a8f9868b4d1d5dac022c84baa10c32d1", "title": "New extractivism (Vladan Joler)", "url": "https://www.youtube.com/watch?v=njhsfk3uT5M", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "is still not possible but these actually\ngive us the opportunity to have people\nlike platinum who are not based in the\nnetherlands\nfor this kind of meeting and it's uh\nit's really great\nand um so also\nthe next meetings ai takagaras will be\nonline for the next uh two months and\nthere are already\nappointments that you can see on our\nwebsite\nand i would say let's move uh to the\ntalk\nand let me introduce briefly vladimir\nvlad and\njoeller or how shall i okay\nokay is a researcher and artist based in\nserbia\nand his work is uh\nfocused on data investigations data\nvisualizations critical design\nexplorations\nand they aim at investigating societal\nimplications of technology\nif i'm correct and you may know\none of his very famous projects which is\nanatomy of an ai system\ndeveloped in collaboration with kate\ncrawford from\nai now institute but there are also\nother very interesting projects and\nprobably will see some of them so\nyou can start when you want okay thank\nyou thank you\nso yeah it's really a pleasure to to\nto be here today and uh\nso let me start some kind of like maybe\nlonger version of the story\nso uh basically\ni think maybe even 10 years from now\ni i started to organize some kind of big\ngatherings of\ninternet activists here in in serbia it\nwas called share conference\nand it was you know back in those days\nin in which we believed how internet is\nyou know a good place and how we should\nfight for it and stuff like this and\nthen\nso it was like some kind of big\ngathering of activists\nuh hackers uh lawyers\nyou know this kind of internet freedom\nuh uh people\nso uh but then in one moment we realized\nthat that\nyou know you cannot do a lot with just\nuh\nyou know meeting great people and and\ngatherings and stuff like this\nbecause in moments when some something\nwas happening here in serbia\nand in the region we didn't have\ncapacity to\nto react in a way on those like\ndifferent attacks tracks\nand stuff like this so we started to to\norganize one\ngroup the first name it was a shared\ndefense\nand and so it was like kind of mix of\nof you know lawyers and and\ncyber forensic guide and there were like\ntech experts like lots of different\ninteresting people at one place so we\nstarted to deal with those problems of\nlike\nmostly in serbia so we're doing like\nmonitoring of\ncases of internet attacks and then\nslowly we started to do\nsome kind of investigations into\nlike uh network first started with like\nnetwork topologies\nbecause always i wanted to to even back\nin like 2000 early 2000\nwhen i saw like first maps of of the\ninternet i i wanted to\nyou know to try to do the same thing and\nuh\nand in a way you know when i\ncoming back like i don't know like 14 15\nyears later\ni realized now i have like a knowledge\nand tools to start to\nexplore those like network topologies\nand we started to\ndo our first investigation into like you\nknow trying to understand\nhow those networks look like\nhow for example like internet in serbia\nlook like\nor what is the life of one internet\ninternet packet\nin in a sense then we were like\nfollowing these internet packets and and\ntrying to build some stories out of it\nand then so we're like going step by\nstep into\nuh uh deeper in deep 
deeper into investigating what I like to call this invisible infrastructure: basically, everything that is happening behind the screens. I can maybe start to share some... okay, just a second. Are we good?\nYes. So these are, for example, the investigations we were doing in the beginning: invisible infrastructures, the exciting life of an internet packet, and then deeper mapping. Step by step we dove deeper, first visualizing trackers, permissions and data flows, and then going more and more into this kind of surveillance architecture, or surveillance capitalism, let's say, until those investigations ended with the first of the big maps that I did. That one was called the Facebook Algorithmic Factory, and now I'm going to show you a lot of crazy black maps; this is the first one.\nIt's called Inside the Facebook Algorithmic Factory, and the question here was: how is this invisible, imaginary, abstract factory of Facebook functioning? On one side we tried to map all the data, or all the types of data, we contribute to this platform, and on the right side we tried to map how this platform is basically selling our data; this, for example, is a mapping of the categories that exist within the process of creating ads. The most challenging part of this investigation and map was the middle part: what's going on between the input and the output? This is how we entered the field of algorithmic transparency. We tried a lot of different ways to measure what's going on behind this black box, but in the end the most useful way to investigate it, at that moment, I think around 2014 or 2015 when we started, was this: I found out that it was possible to get all the patents that Facebook held. We found several thousand patents and started to read some of them in order to create this mosaic. The result is one of the rare visualizations, one of the rare maps, that exist of this kind of invisible algorithmic process: I was trying to find the different patents that would solve the mystery of what's going on between the left side and the right side of this map. When I created this drawing, first of all, you kind of need to be a bit crazy to do that; but at the same time I was really sure that something like this is really necessary, even if it looks scary, like "what's going on here?". Back then I really thought, and I still do, that trying to understand those complex processes happening behind the screens is really important if we want to progress in our lives and in the future. It's really good to know, and to try to understand, even if it's probably not possible at all to create a map that completely explains the process, because those processes keep changing. I was working on
this map for about two years, and during those two years the processes behind the black box changed a million times, so this map will never be precise enough for us to say it is a complete reflection of reality. But it's the only map of its kind out there, and I think it's really important to try to map processes that influence our lives in so many different ways.\nThis map was out there for some time, used mostly by people dealing with internet freedom, advocacy and law, and it went around until at one moment it became kind of mainstream, because the BBC wrote a text about it, and it became a known drawing, or map. In a way, that is the moment when these things transform from a kind of cartography into something that some people consider art, or design, or whatever; there were a lot of different explanations of what this is. For me it was never art; it was just my internal struggle to understand the different types of complexity that exist out there. On the other hand, I have this curse of being trained as a visual artist, so it somehow combines into what you are seeing.\nThen there was a big change in what I was doing, because I met a friend of mine, her name is Joana Moll, and she infected me with this idea of the materiality of internet infrastructure, and my investigation exploded in many different directions. What I was explaining until now basically starts from the position of a user, who is here, and then tries to go as deep as possible into the things happening behind. The newer investigations instead start with an object. In this case, and this is the other map I'm speaking about, called Anatomy of an AI System, that object is the Amazon Echo. The first thing is of course to open the device, to try to understand what's going on inside, what kind of data is coming in and what kind of data is going out. But even at this first step of the investigation you see a lot of different layers of non-transparency: issues related to hardware, the right to repair, open schematics, diagnostic tools, planned obsolescence, proprietary software, privacy, digital security. That is a first layer of the problem, and then you go deeper. The next step is investigating how the device is connected to the internet and what kind of packets are going through: doing internet mapping and infrastructure analysis, trying to understand, in this case, where the packets are going, where Amazon has its data centers, and so on; and then going deeper still, trying to understand what kind of hardware there is, and then trying to
somehow go down to the level of algorithms and neural networks, trying to understand the data behind them. That is something I did for a long time, let's say five or six years, becoming more and more skilled at going deeper and deeper into these investigations.\nBut then, as I said, I met Joana and started to think differently. She gave me a book, Geology of Media by Jussi Parikka, and I started to think about these infrastructures from a completely different angle: from the angle of long periods of time, from the angle of geology. If you start to think in that direction, the story goes completely differently. In the middle part of the map we are mostly speaking about privacy, security, data exploitation and that kind of thing, but when you look at this infrastructure from the angle of geology, of deep time, there are different problems. So I started to draw this map not from the middle part but from the earth, and from the elements that are part of those devices and of the infrastructure we are seeing.\nIn a sense, I went back in time to when all the devices that are part of this infrastructure were different kinds of rocks, and I started from there. I wanted to tell a story about the birth (the left part of the map), the life, and the death of one device, in this case the Amazon Echo. I started to work with Kate Crawford, one of the best-known people in the field of AI ethics, who runs the AI Now Institute at NYU in New York, and we investigated this story starting from the elements. In most of these devices, mobile phones, servers, computers, the Amazon Echo, there are something like two-thirds of the periodic table of elements. Even at that level you see that the devices we use today are really complex to produce, requiring a real diversity of elements. A few hundred years ago, most of the things we produced were made from a few different materials; now a device is built from almost the whole periodic table, and it's hard to tell a story that starts from so many different elements. I was thinking that if you just followed each of those elements, you could make a whole book about the journey from that one element to the product.\nThen another thing happened: I needed some kind of visual structure, some visual elements, in order to tell the story, so I used these triangles, a classic Marxist way of understanding labour, which is really present in the work of another author, Christian Fuchs.
The story always goes like this: you have a resource and tools, you apply some labour, and as a result you get a product; then this product becomes a resource for another process, you apply another form of labour, and you get another product. Resource, labour, product; resource, labour, product. I tried to tell the story through those triangles, going through the different processes that happen during the birth of one device. What is really interesting is that when you go deeper into this, you realize that, for example, Apple has, for one device, a first layer of about 250 supplier companies, and those 250 companies supplying the parts of this machine are spread all around the globe; but then those 250 companies have their own suppliers, which have their own suppliers. The more I investigated, the more interesting this structure became to me: I understood that in some way these supply chains are fractal structures, in which you can zoom in more and more, and there is never an end.\nAnother important aspect is that the problems appearing at each of those steps are completely different from the ones I was investigating before: no longer security or privacy, but hard labour, forced labour, child labour, conflict minerals. And most of the time there is a price that no one is paying: the environmental price. On one side you have exploitation of human labour, in a real diversity of different relations; on the other side you always have exploitation of natural resources, and an environmental cost that is part of the process. When we speak about mines, you need to understand that for one little fraction of a gram of some material, you need to move half a mountain from place A to place B, with burning and melting and so on. And then there are the numbers of kilometres: for example, just for the first-level suppliers for the iPhone, I calculated how many kilometres all of those components need to travel to Shenzhen and the Foxconn factory, and it's about two times to the moon and back. It's a massive, planetary-scale factory in which everything is moving around, and those environmental costs of moving goods around are really not calculated into the price. In a sense, that cost is something we share between all the people on Earth, but the profit is not shared among all the people of the world.
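The fractal supplier structure can be pictured as a recursive tree; the nodes and distances below are invented toy figures, not the actual iPhone or Echo supply-chain data:

```python
# Toy recursive supply-chain tree; names and distances are invented.
from dataclasses import dataclass, field

@dataclass
class Supplier:
    name: str
    shipping_km: float                  # distance to its customer
    suppliers: list = field(default_factory=list)

def total_km(node):
    """Sum shipping distances over the whole (finite, here) tree."""
    return node.shipping_km + sum(total_km(s) for s in node.suppliers)

mine = Supplier("mine", 9000.0)
smelter = Supplier("smelter", 2000.0, [mine])
assembly = Supplier("assembly plant", 1500.0, [smelter])
device = Supplier("finished device", 8000.0, [assembly])
print(total_km(device))                 # 20500.0 km for even this tiny chain
```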
I also tried to calculate, for each of the jobs I was able to detect, how much people are earning. A big part of the process happens in places like China, the Philippines and India, on really low salaries, and the gap between the person at the bottom, probably a child working in a mine in Congo, and Jeff Bezos at the top of this pyramid, is almost impossible to count, but it is something like one to seven million; everyone else is somewhere on the scale in between. So there is an inequality in standard of living, within this one big factory, that goes from one to seven million. There are also different forms of labour happening here, and maybe I'll leave a lot of this for the questions, but please interrupt me if you have something to ask. At the bottom you have people working in the mines; somewhere here you have people at Foxconn doing algorithmic work on the assembly lines; but you also have hybrid relations, for example in Amazon's storage systems, the places where robots, machines and people work together. Because I was travelling a lot, I tried to investigate most of those places; for example, together with Kate, I managed to enter some of those big Amazon storage facilities where you have robots moving shelves and things like that. So if you follow the map from the bottom to the top, you see different forms of labour and different relations.\nSo this was the birth. All of those things feed the infrastructure in the middle of this map, and then on the right of the map you have the process of death: most devices are thrown away after a few years and basically go back to the earth again, with a lot of strange and really dangerous working conditions involved there too. The main question then is how fast those cycles are happening, because every time we buy a phone, for example, we give a new spin to this cycle and basically approve all of those processes. Another interesting point: if this is the global planetary factory, it would be interesting to think about what is happening now, in these circumstances of a global pandemic, and how the parts of this global factory are surviving it.\nMaybe I'll make a little pause here and open for some questions, and then I can move on to some other things I want to show.\nEverybody who has questions is welcome. I have a little question out of curiosity: you said you actually entered Amazon facilities; did you encounter resistance? I imagine it's not that easy to get in and ask for information about these kinds of things.\nYeah, basically, for Amazon, and I really encourage all of you to do this:
there is a way to get in on a kind of official tour. Once or twice a month, I think, they allow some number of people to go on a guided tour. But guided or not, it is a really intense experience, because you feel you are inside a machine. It is a really noisy environment, with those robots, and not a safe environment, and you really see a kind of dystopian future of work, in which most of the people are doing repetitive gestures and moves. At the same time, the whole storage system is run mostly algorithmically: on the shelves you have products that are not related in any logical human way. It is not that one shelf is books and another shelf is something else — everything is mixed up, because the placement is just a representation of what people buy together, or some other kind of algorithmic logic. It is completely alienated from human understanding; for the people working there, there is no human logic in it — you are just part of an algorithmic process. It is a really crazy place.
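What he describes is often called chaotic (or random) storage. A minimal sketch of the idea, with invented shelf and item names, just to show why only the index — not a human — knows where anything is:

```python
import random

# Minimal sketch of "chaotic storage": items are slotted wherever there is
# free space, with no human-readable grouping. Shelf and item names invented.

shelves = {f"shelf-{i}": [] for i in range(6)}
index = {}  # item -> shelf

def store(item: str) -> None:
    free = [s for s, contents in shelves.items() if len(contents) < 4]
    shelf = random.choice(free)
    shelves[shelf].append(item)
    index[item] = shelf  # the system, not a human, remembers the location

def pick(item: str) -> str:
    return index[item]  # retrieval is a database lookup, not human reasoning

for item in ["novel", "usb-cable", "diapers", "novel-sequel", "batteries"]:
    store(item)
print(index)  # e.g. {'novel': 'shelf-4', 'usb-cable': 'shelf-0', ...}
```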
Maybe Luciano has a question — did I see a hand?

Yeah, hi. Fascinating — it is very interesting, what you do there. I have a question regarding the move from human knowledge and human behavior to training data; I see it at the bottom of the center part of the map. Can you share a little about that? Do you have similar ideas relating the geological perspective on the hardware to this move from knowledge to training data?

So, this bottom part of the map was the most troubling and most demanding for me; I really spent months trying to do those two things, and they are far from perfect. What we tried to explain there is a process of enclosure. During the industrial revolution, the resources were mostly metals and ores and land, and there was a process of enclosure of those resources; that is the time of the first big concentrations of power — Rockefeller and so on. What we have now is in a sense similar, except for one big difference: contemporary data-based capitalism can expand into an infinite number of territories. And in this search for resources, the territories it invades are basically the territories of our bodies — territories of our behavior, of our conscious and unconscious, of the cells in our bodies, of all the different kinds of data we are able to produce as humans. So on one side there is this huge rush into the quantification of everything that exists, from the stars to the smallest atoms of our bodies; on the other side there is the product. Those are the two parts: one is the quantification of the human body, its actions and behavior; the other is the quantification of human-made products. Whatever we create will be quantified, will be digitalized, and will become a new territory that is going to be enclosed and transformed into some kind of wealth. So here I tried to map some of the fields of quantification — those new territories — and then, in some way, the process by which this is transformed and becomes part of, for example, the machine learning process. This map did not go very far into that; I will show you some other maps that are a bit more on that track. Because here we wanted to show a map that speaks at the same time about the exploitation of labor, the exploitation of nature, and the exploitation of data — and what we wanted to say is that maybe we should not forget about the exploitation of labor and of nature as equal parts of this mosaic, because for the last tens of years we have mostly been speaking about data, data, data. We wanted to show that there are other layers of problems to speak about.

Is there any more question related to this map? — No, it's fine; I raised my hand, but you were already answering. Go ahead.

Okay. After that, I worked for about a year, maybe more, together with Matteo Pasquinelli, on a project called Nooscope, published a few months ago — it is still fresh. Nooscope corresponds to this part of the map: in the previous map we gave a wider anatomy of an AI system, but here we tried to go deeper into AI as a system without the supply chains — just what is going on within the process of learning from data. In the beginning the project was called "the limits of AI", because we wanted to understand where those systems fail, what their limits are, what is happening behind them. We ended up creating this map,
which basically follows a similar process. It starts with, let's say, the world and society; then in the middle we have the process: first capture, then source selection, then dataset composition, selection and labeling; then the algorithms — feature extraction, pattern extraction — so that at the end we get a model; and finally there is the application of the model. That is the basic structure of a machine learning process. But we also wanted to understand what kind of labor is in there, and what types of bias we can map along the way. On the left side of the map we tried to map human bias — related to all the ghost work, the work we do not see when we think about AI. On the right side we tried to map the different kinds of machine bias, because we understood that a mix of human and machine bias accumulates throughout the process of creating machine learning models. That is what happens in the bottom part of the map. In the middle part we follow the same logic, but go more into the types of statistical operations, because in a sense what we are seeing is that AI is a process of compression of information — statistical compression. So we tried to understand all of those mathematical, statistical processes and the limitations and failures that come from each of them. And at the end, at the top, we tried to map the kinds of consequences we are dealing with when those models are applied to the world. That was the work I did together with Matteo Pasquinelli. It is maybe a bit abstract to show in such a short time, but if you have time, please go and check it out at nooscope.ai.
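A minimal sketch of the pipeline stages he lists (capture, source selection, dataset composition and labeling, feature/pattern extraction, model, application). The stage names are from the talk; the placeholder function bodies and the bias annotations in the comments are illustrative, not the Nooscope's actual formalism:

```python
# Sketch of the machine-learning pipeline described in the talk. Every stage
# is a placeholder; the comments mark where human bias (left side of the map)
# and machine bias (right side) can enter.

def capture(world):              # human bias: what gets recorded at all
    return [x for x in world if x.get("recorded")]

def select_sources(records):     # human bias: whose data counts as a "source"
    return [r for r in records if r.get("trusted")]

def compose_and_label(sources):  # human bias: ghost work, labeling taxonomies
    return [(s["features"], s.get("label", "other")) for s in sources]

def extract_patterns(dataset):   # machine bias: compression discards outliers
    ...

def train_model(patterns):       # machine bias: objective, hyperparameters
    ...

def apply_model(model, case):    # consequences: the model acts back on the world
    ...
```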
I think this work is really cool — I also love working with visualized information, not at this level of course. I am really curious how you see the potential impact of this kind of work. You said it is really important to understand the hidden processes behind these systems, and I agree; but beyond, let's say, sensitizing — these projects get acknowledged, they get recognition through awards and museums — do you see a way that decision makers and policy makers could make use of this kind of work?

Well, I think there are a lot of different levels to it. For example, just in building this one, Matteo and I spent about a year thinking about how to do it, because in this field of machine learning and AI you do not even have clear terminology, or common explanations for some parts of the process. So every attempt to create a map is a challenge — in the first place for us, because you can write an academic text, but when you start to draw a map of something, a lot of different questions appear. The process of creation is really important, I think, because we are trying to find answers to the questions that appear during the mapping. But then — especially with these cognitive maps; the first map I showed you is more like a data visualization — you also need to keep in mind that such maps are really biased: they pretend to be something true, but that can be questioned, because it is our, or my, interpretation of the world. In that sense, Anatomy of an AI System and also the Facebook map were used in different fields. The Facebook map was mostly used for advocacy — to show how complex, or evil, the system is; it has the potential to be a tool for scaring people into making certain decisions. Anatomy of an AI System, on the other hand, found its way both into classrooms, where people use the map to teach, and into museums and galleries, into different formats. So I think there is huge potential in the power of maps — that power is really big — in the sense of helping people navigate, because what we are facing in the world are really big complexities and obscurities of reality, and if you want to navigate those spaces, it is really useful to have a map.

Are there other questions? I think Arkady had a question before — do you still have it?

Yeah, I had a question, which basically resolved itself when I looked at the notes: I went to nooscope.ai and realized you actually wrote a whole essay there, so I found the answer.

Okay, cool. — Thank you; it is really fascinating, the work you do. I have a question about the implications of these kinds of tools — not only the implications for ethics or for policy, but also for design. In terms of ethics, I thought about transparency, and how seeing the humans and the environment as part of the system positions transparency, or explainability of AI, in a different place — also in relation to humans —
and then questions of who creates these explanations and who receives them, that kind of challenge. And then also the implications for design: when you see how much humans are implicated in the construction of AI, it calls for a shift in the way we design AI — to think of AI in design as constituted by humans. What do you make of that?

That is one of those things — we tend to believe in automation, in the autonomy of technology outside the human field, and it is a kind of myth, an idea that is really present but really far from reality. Because if you deconstruct these systems, as in the Nooscope or in Anatomy of an AI System, you see that they are full of people doing different things — and not just full of people, but full of people being treated differently: armies of mechanical workers labeling images; people who happen to be in a picture that is taken by those companies and becomes part of a dataset; a whole variety of roles that human beings have in the process of creating the hardware, the models, the software, whatever. So this myth of automation is exactly that — a myth. There is no such thing as an intelligence completely independent of human existence. And each of those human involvements brings with it some form of human bias. We could also talk about how there is probably no way that something completely unbiased can ever be produced, because there are so many questions: who chooses the dataset, how the dataset is sampled, who tests the results, who fine-tunes the learning algorithms — tuning the hyperparameters, for example. This idea of unbiased models is completely crazy, in a sense. And on the other side, within the processes of compressing data, all of those statistical operations have their own problems: interpolation, extrapolation, model fitting, curve fitting, dimensionality reduction — these are all different kinds of reduction and compression of information, and every time you do that, you have some kind of loss. And what do you get as a result of this loss? Most of the time, the anomalies are thrown out of the game. And if you think about that on a more ethical level: who are those anomalies? Maybe I am the anomaly. So there are many layers to the problem, and when we were thinking about the limits of AI, there were so many different dimensions of those limits, and so many different problems that come with them.
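His point about compression discarding anomalies can be made concrete in a few lines: compress toy 2-D data down to one principal component, and the outlier is exactly the point that loses the most information in the reconstruction (illustrative data, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
# 50 "typical" points along a line, plus one anomaly off to the side.
t = rng.normal(size=50)
typical = np.column_stack([t, t + rng.normal(scale=0.1, size=50)])
data = np.vstack([typical, [3.0, -3.0]])  # index 50 is the anomaly

centered = data - data.mean(axis=0)
# First principal component = direction of maximum variance.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = vt[0]

# Compress to 1-D and reconstruct; measure what each point loses.
reconstructed = np.outer(centered @ pc1, pc1)
loss = np.linalg.norm(centered - reconstructed, axis=1)
print(loss.argmax())  # 50 -> the anomaly is the point the compression fits worst
```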
There is another question, I think.

Yes, thank you — thank you very much, really fascinating work, really incredible visualizations. I am curious about your thoughts on something. You mentioned that if we want to navigate this space, it is useful to have a map. You showed maps of existing systems; what are your thoughts about trying to construct maps of the kind of world we want to see? Could we use such an approach to understand how to get from the systems we have today to the systems we want to have — how do we build that bridge?

You know, this is something that appears as a possible way out of all this, but personally I do not feel competent at projecting. I feel more comfortable trying to understand, first, how the system looks, how those complexities look — and that is somehow where my capacity ends. For example, my kids went to school today, and I am still not sure whether that was a good or a bad decision — I am really bad at projecting what is good or bad. I see my role in this process as trying to understand how things look at this moment, and then I hope that someone will take the responsibility of framing the next step, how to get out of there. So I do not feel capable of doing that for the moment, but I think it is really important if someone does.

Thanks. — Thank you. For sure.

I can show you something else as well. This one is a bit different from all the maps I showed before. It is called New Extractivism — I am not sure you will be able to see a lot, but I had a lot of fun doing this over the last year, intensively during the last six to nine months. It is a map of something I call new extractivism. It is the same topic as the previous maps, but this one is an assemblage of different ideas and concepts from other people — mostly philosophers, media theorists and artists. What we do in this map is follow a person falling down into a black hole. It starts from an idea proposed by an artist: to think about the internet and those spaces through the notion of gravity — how the big companies are basically curving space and time. So we have this little guy trying to escape from the black hole, and when he passes the point of no return, he falls into a cave. This cave is basically Plato's cave combined with a panopticon, in which each of us spends our lives. Then I tried to understand the architecture of this cave: on one side of the cave you have a projection of personalized content, you have the interface, you have some
kind of chains, and then you have shadows, and those shadows are captured by different capture agents. The story then follows this invisible, abstract machine into the processes of extraction. On one side, I follow the idea of dividuals — from Deleuze and Foucault and the French school — and how, through the extraction of this information, you get the creation of dividuals; on the other side, the extraction of information from content; and then, at the bottom — this part is a bit crazy — you have the different forms of extraction, how basically different forms of labor are involved.

It is called a manual — so you imagine it also as a sort of guide for people who want to open up your work?

Yes: what you are seeing now is the manual that is used for reading the map. In it I try to go step by step and explain each of those elements.

We have a new question in the chat. Derek, would you like to ask it by voice, or shall I read it?

Yeah, sure — just very briefly, because it is the end of the hour. I loved seeing the Nooscope; there were a lot of ideas that seemed very new to me, and overall the work is really fantastic. I loved its esoteric references, but I also loved how it addressed some of the magical thinking that takes place in, I guess, the politics of AI. The politics of New Extractivism seem a little more apparent, but I am curious what some of the intentions and outcomes of the Nooscope were.

Well, as I said, the intention was to explain the process — maybe we did not quite manage, because the text is really dense — but the idea was to create a kind of basic dictionary, an understanding of the process of creation of models and AI. We wanted to make a drawing that would, on one side, be accepted and understandable by the tech experts who work on these things, to share with them this idea of bias and of human involvement in all of it, and to break the myth of automation, of the autonomy of these things — this myth that it is some kind of magical intelligence. On the other side, for a non-tech audience, we wanted to make something that might help them understand the technical part — the basic anatomy of the system. It was supposed to sit somewhere between those two worlds, and in the end I am not sure we managed to create something readable on both sides, but that was the intention: to deconstruct, to understand, and to communicate the limits of those systems.

It reminds me a little of Richard Feynman's discussion of science and beauty — this idea that understanding the science of a rose would somehow make it less beautiful. It is not quite in terms of the beauty, but in terms of
the magic of it: I felt that by deconstructing it and taking away the magic, it actually revealed the magic. And for that reason I thought that this project and the other one were just great — great art, great science, great political gesture. So kudos; it is really great to see.

Yeah, it is somewhere there. That is what I like to do: to be at those intersections between different worlds. So many times in my life I have had to pay a price for that — people from science will say, this is not science, this is art; people from art will say, this is not art, this is science. You are always somewhere in between, but I think that is a really interesting place to be — to try to use all the possible tools, languages and weapons we have in order to demystify and understand those complexities. That is what I was trying to do in all of those maps. Most of the time I work together with others — sometimes scientists, sometimes media theorists, sometimes philosophers, it depends — and I think that is the only way. When we speak about drawing pictures of black boxes, what we can do is try to look at them from as many different angles as we can, and then maybe we create some kind of chance to understand the shapes of those things. For me, those angles are cyber-forensics, legal analysis, art, philosophy — those are the different angles from which I try to see the object.

Great — thank you very much, Vladan. I think we have to wrap up, because I know some people in the talk have other meetings. It was really amazing to see, and thank you again very much — there are other appreciative comments in the chat as well.

Okay, thank you. I hope you enjoyed it — yes, it was really a pleasure to be with you guys today. — It was our pleasure. Thank you very much. Bye, everyone!", "date_published": "2020-09-02T14:16:20Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "6134e953439661d0678dffe58b0153a2", "title": "OUT OF CONTROL - The design of AI in everyday life (Elisa Giaccardi) - 1st AiTech Symposium", "url": "https://www.youtube.com/watch?v=iY_6cnvBxN8", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Good afternoon. What happens when AI begins to operate in response to everyday interactions with people? How can AI systems remain trustworthy once they begin to learn from us and from each other? These are really big questions, and they are the ones that trouble me as a designer. One of the dominant narratives about AI today is that it will help us be more accurate in our decision-making processes: it will help us make better diagnoses and keep us safe on the road. These narratives come with the assumption that somehow we, the humans, are the real black boxes, and that if we just design AI well enough and exert enough control, then AI will compensate for our
human flaws and the mess we have created. Now, dystopian Black Mirror scenarios aside, this is what I call the smart utopia, and the design ideals of control and optimization that come with it are rather at odds with the messy reality of everyday life. In a study on the future of Taiwanese mobility that we conducted in Taipei, in collaboration with Taiwan Tech, design ideals of fuel efficiency and energy savings proved to be at odds with how the Taiwanese use scooters in everyday life. In this picture, for example, you can see how the scooter, used at low speed at the street market, is no longer really a vehicle for transportation but a cart for grocery shopping; at a different speed, and within a different configuration of everyday life, it becomes a means for establishing and maintaining social relationships.

Now, that does not seem very critical, does it? And it is easy to relegate this case to matters of culturally sensitive design. The potential friction suggested by this case — between the intended use of smart autonomous technology and how it might actually end up being used by people in the context of their lives — seems quite irrelevant when we think of the design of an AI that can save us from a drunk driver or from a wrong medical diagnosis. So why not let AI into even more spheres of our lives, and use it to optimize our health as well? All we need is to buy a smartwatch, make sure we stay active, and reduce our insurance premium, maybe even gaining additional benefits. That is the value proposition of a company called Vitality in the UK. But here it gets more serious: what if I get sick, or for whatever reason simply cannot exercise as much anymore — will I still be able to afford medical coverage? And — this was an example presented by Jani this morning — what do we do when a badly designed yet already deployed AI system fires a talented, motivated teacher just because the math scores of her students are not good enough? How do we fix that?

When we consider the complexity of these real-life scenarios, it is quite apparent that the need for control is morally justified. So I really hope not to be misunderstood when I say that control mechanisms and regulations are necessary but not enough. Frameworks promoting principles of control, explainability and transparency are necessary to ensure accountability after the fact — after something has been designed. But design has to regain its agency in the crafting of desirable relationships between people and technology. It has to become anticipatory, both to address the legal issues and not to stifle innovation. For the field of design to move forward, though, I would like to take a step back and ask: what are the design ideals that we, as designers, are locked into? I argue that we are locked into the fallacy that all we need to do is get it right: the right functionality, the right feature, the right interface, the right algorithm, the right user experience — you name it. But this comes from times when working iteratively and getting it right — accurate, precise — was the best way to minimize the risk of mass-replicating faults and shortcomings. It is very much anchored in the design ideals of mass production. Contemporary networked technologies and artificial intelligence, as well as the platform capitalism they have made possible, not only differ from the logic of industrial
production; they fundamentally challenge the conceptual space that we as designers have created to cope with complexity. With the runtime assembly of networked services, constant atomic updates and agile development processes, the design process is no longer something that happens before production and is then done: it continues in use. This characteristic of constant becoming will be further accelerated by technologies that operate in everyday life and actively learn while in use, changing and adapting over time at an even more fundamental level than is currently the case. So the point I am trying to make is that while we have jumped with both feet into the digital society, our ways of thinking — our design frameworks and methodologies — are still locked into an industrial past.

This table shows some of the shifts needed to move towards a truly post-industrial design. For example, if we consider co-performance — that is, the co-dependency of people and AI, rather than their supposed autonomy and the degrees of autonomy between the two — then we begin to ask very different types of questions, the kind of questions that you will hopefully also hear in the next presentations, when we talk about narratives and counter-narratives and the very important idea of design trade-offs. One question we might begin to ask is, for example: how can we empower humans, rather than have them controlling machines? In the Resourceful Ageing project, which we concluded last year and which received from the European Commission a Next Generation Internet award for a better digital life, we addressed this question by looking at how machine learning can be used to support older people's natural strategies of improvisation and resourcefulness. Rather than monitoring, predicting and prescribing behavior, the networked algorithms that our colleagues from computer science implemented for this project were built to answer a different set of questions than accuracy of prediction. They were concerned with usage and variety of appropriation — the latitude with which older people could use technology as a resource to improvise in everyday life, learn from each other, develop shared norms and values, and remain independent, not just from the care of their loved ones but also from care technology. We tried to capture all the lessons in a booklet that is available online.

I would like to conclude by provoking the audience and saying that design should not be about accountability, about fixing the things that are wrong. Design is the imagination of how things might be; it is about taking agency and responsibility, as designers, for desirable relations between people and technology. It is about our future — is that not the motto of our faculty? But for that we need to understand and fully engage, conceptually, methodologically and ethically, with the true challenges of post-industrial design, which are co-dependency, not autonomy; intentionality, not so much accuracy; and perhaps empowerment, not so much control. And with that, I would like to invite the next speaker.", "date_published": "2019-10-29T16:32:40Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "ecde996d8c77e1606dcc67d8fb015704", "title": "Trajectory optimization for urban driving among decision-making vehicles (Javier Alonso-Mora)", "url": "https://www.youtube.com/watch?v=wc0gGRNoenI", "source": "ai_tech_tu_delft",
"source_type": "youtube", "text": "yes\nit's recording\nokay uh hello everyone and sorry for\nthe connection problems so it is our\npleasure\nto welcome today our guest speaker\njavier allen samora\nuh for this eight agora seminar so\nuh javier's research interests are lie\nin the area of\nnavigation and motion planning and\ncontrol of autonomous mobile robots\nand uh xavier is looking specifically at\nmulti-robot systems and robots that\ninteract with other robots and humans\nso javier is interested in\nmany applications including autonomous\ncars\nautomated factories aerial vehicles and\nintelligent transportation systems\ntoday he is going to talk kind of\nabout an overview of his work so\ni think it is very uh relevant to\nsome of the uh kind of control issues\nthat we are discussing\nhere at the ai tech and i'm very much\nlooking forward to this presentation the\nfloor is yours\nhere okay thank you very much for the\nintroduction and the invitation so\nyes as i was wondering uh who is there\nso i think this will have been nice to\nbe uh\n[Music]\nin person meeting when where we get to\nknow each other\nso i'm javier alonso i'm an associate\nprofessor in the community robotics\ndepartment at the same department as\ndavid having\nan academy so yeah i was wondering who\nis out there in the meeting\nso maybe you can briefly introduce each\nother just\nlike one sentence that's enough so that\ni know\n[Music]\nyes uh i can manage that\nso i'll if everyone is okay i will just\ncall everyone one by one\nand you'll say 10-15 seconds\nintroduction so i'll start with myself i\nam already\ni'm a postdoc at home robotics i'm uh\ninterested in interactions between\nautomated vehicles and humans\nthen the next one would be luciano\nso um luciano masa postdoc at eight tech\nuh recently uh says professor at the\ninteractive intelligence group\nin fav and i work with responsible ai\nmostly\non more decision making\nokay thanks luciano the next one is luke\nluke\ndo you want to say a couple of words\nyeah hi everybody my name is\nluke murr and i've been supporting\nthe people setting up the itec program\nso not not the researcher but one of the\nsupport people thanks luke\nthe next one is andrea andrea hello\nhi my name is andrea i'm a phd\nstudent at the behave group which is led\nby casper horus at tpm\nand i'm particularly looking at ethical\nimplications of ai systems\nthanks uh thanks andrea so uh and then\ni'm not sure wh who the person is behind\nthe\nbehind the name so the next one is chant\nm\nyeah the abbreviation\nmy name is ming chen i'm from uh tno\nfrom the unit strategic assessment and\npolicy\nand within a group on digital societies\nand i've worked always in the\nmobility sector and lately i'm\nwell i'm doing a project paradigm shift\nfor railways\nbut i'm also managing\nthe project manager for the support to\nthe european commission\non the platform for ccam\ndiscussions so\nwe have a broad working area actually\nbut i i was i just received this\nmeeting request one minute before the\nmeeting uh from a colleague\njack timon and uh so\ni don't know anything about the context\nso i'm curious what you\nhave to say okay\nuh participants uh so the next one is\naniket i think it's aniket\nright uh hi yeah yeah it's a\nuh i'm at embedded systems graduate from\ntudor\nand i've done a course on robot motion\nplanning under javier so i'm just\ncurious okay nice\nokay uh the next one is claudio carlos\nclaudio do you want to say couple\nthoughts about yourself\n[Music]\num yeah we'll we'll skip 
Claudio, I guess. The next one — oh, hello, are you with us? No? Okay, Deborah, do you want to go next?

I am Deborah Forster, right now in San Diego — it is four o'clock in the morning, and I am happily awake. I am a cognitive scientist, playing with AiTech for a few months; looking forward to this.

Thanks, Deborah. Herman? — Hi, I am Herman. At the moment I am working as a philosopher in Hong Kong, and I will be starting at the AiTech project as a postdoc in January.

The next one is Evgeni. — Hi everyone, I am Evgeni Aizenberg, one of the postdocs at AiTech, leading the project Designing for Human Rights in AI. The focus is on socio-technical design processes that put the values embodied by human rights — grounded in human dignity — at the core of the design we follow with AI technologies, and on engaging societal stakeholders during this design, in a roundtable format, without a preconceived notion of what the technology's place should be.

Thanks, Evgeni. Iskander? — Hi there everyone, Iskander Smit — I will be very brief. I work in industry, at a company, but also part-time as director of Cities of Things at the Industrial Design Engineering faculty.

Thanks, Iskander. The next one is Lorenzo. — Hi there, I am a graduate student of Nick and David, and Nick invited me to this meeting, so I am not entirely sure what you are going to say — but I am very interested.

Okay, thanks. We have already spoken with Luke; Maria Luce? — Hi — yes, we know each other already. For the others who do not know me: I am Maria Luce, from the faculty of Industrial Design Engineering, and also a postdoc at AiTech, working on bridging critical design with discussions about responsible AI and meaningful human control. Good to see you again.

Thanks, Maria Luce. Nick? — Hi, my name is Nick, I am a postdoc at AiTech and in Cognitive Robotics, and I am really interested in human-robot interaction, so I am looking forward to this talk.

Thanks. Seda? — Hi, let me turn the video on. I am an associate professor in the faculty of Technology, Policy and Management. I work on, among other things, counter-optimization: looking at the harms, risks and externalities that companies externalize using optimization, and at how the people and environments affected by these can push back.

Thanks. Simone? — Can you hear me now? Good. I am an assistant professor at the department of Transport and Planning, and my main research area is the traffic-flow effects of new technologies and disturbances, such as automated driving. Good to see you again.

Thanks, Simone — and I think I missed David. — Yes, here is David, a colleague of Javier. I know his work relatively well, so I know some of the things he might present — but maybe not; maybe he will surprise me, or maybe I will learn something that I thought I understood but did not. I am also looking forward to the discussions.

Javier, I think that is all of us. — Very nice meeting
all of you — those that are new — and very nice to see the others again. I think it would be good to keep this interactive. That is a bit harder online, but really feel free to stop me and ask questions. I cannot see the chat when the presentation is in full screen, so if you have a question, please speak up and interrupt me — that is perfectly fine. If you raise your hand, I will probably not see it either.

I can help you with that, Javier, if that is okay: I will track the raised hands, and if there is a question I can moderate and ask it.

Okay, then let's get started. You might have seen these robots already: these are what used to be the Amazon Kiva robots, and thousands of them automate some of Amazon's warehouses around the world. This video is already about ten years old, but to date this is still one of the most successful commercially available products in which multiple robots actually work in a real environment. If you order a package in the US, it is very likely that your Amazon package is picked in a warehouse like this by a team of mobile robots. But if you look closely — you probably noticed it already — there are only these orange robots moving shelves around: there are no humans in this environment. If something goes wrong, they have to stop the whole area before a human can go in. The humans are at the periphery: the robots bring the packages to the humans, who then do the packing. So the humans and the robots are completely separated.

It is the same if we look at other commercially available products. This is a display with hundreds of drones — you might have seen it from Disney or Intel; there was such a display across Rotterdam some months ago — but again, there are no humans in the sky, and furthermore, in this case everything is pre-planned, so nothing changes in real time. And if we look at mobile robots in general — maybe some of you have a Tesla like the one in the right corner — these work most of the time (not yet a hundred percent) in low-complexity environments like a highway, where the car mostly has to follow the lanes and avoid the vehicles in front. If we go to indoor environments, where we have social robots, those interact with much more crowded environments, with more interaction, but the speeds are typically very slow: these robots will just stop and let you pass when they meet you.

So, moving forward, mobile robots will need to achieve complex tasks, and they will need to seamlessly cooperate with other robots and also with us humans, because some of these tasks will be in environments shared with humans — and all of this needs to happen in a safe, efficient and reliable manner. Within my group, we are working towards robots that can cooperate with each other and seamlessly interact with other robots and with humans to achieve complex tasks. Here you can see a vision of the future in which many robots interact: robots delivering packages, your self-driving car, maybe robots
for the police, or a robot carrying your suitcase when it is very heavy — all of this will have to interact in a complex world with other robots and also with humans. We are still far from this vision, but we work mostly on multi-robot systems, and we are trying to answer three different questions. The first is how to manage a fleet of robots — when you have hundreds or thousands of self-driving cars in a city — which we call fleet routing; I will not talk about that today, but it is something we also work on. The second question is how to move safely in an environment shared with other robots, self-driving cars, pedestrians and cyclists, and how to account for the interaction with all those other participants; this is motion planning, and it is what I will focus on in today's talk — but as the second part, because first I would like to give a brief overview of some other work, answering a third question: how can a team of robots do something together, possibly with a human — how could a human interact with a team of robots? That is what we call multi-robot coordination, and I will only talk about it briefly in the first part of the talk; if you are interested, ask questions there and we can discuss it.

So, as I mentioned, one of the challenges is the interaction with humans in crowded environments. Here you see an example from a video of our work. You see the small mobile robot — it is a Jackal with some sensors on top — and it is capable of perceiving the environment, perceiving the human, and making a prediction of how the human is going to move. It is now in the corridors of 3mE, and it plans its motion to avoid this pedestrian, taking into account how the pedestrian is going to move; I will go into detail on this aspect later today. But there is another type of interaction our robots will have to handle: the interaction with the environment — it is about the humans and the environment. We recently started a project, together with several of my colleagues in the Cognitive Robotics department and in TPM, where we will look at how to apply this in an environment that is really shared with humans: a supermarket. This is within the AI for Retail Lab, where we will use a robot like the one you see there, capable of picking things in the store and placing them somewhere else, while there are people moving around. So it will have to handle all these complex interactions, both with the customers in the store and with the environment itself — you will hear more about that from us in the future.

Now, another type of interaction we need to take into account is the interaction with other robots — the multi-robot setting proper. I did my PhD at ETH Zurich, in collaboration with Disney Research, around seven or eight years ago now, so this picture is a bit old. We built a new type of display — the PixelBots — in which the pixels are mobile: instead of having millions of pixels like the screen you are looking at now, every pixel was a mobile robot, and we could control its color but also its position. Using a small number of them — in this picture, fifty — we were able to display images and animations, and we also demonstrated this at many live venues, like this one, where a kid wanted to interact with the robots. At that time there was no interaction: it was a fully pre-programmed animation in which the robots transition through a series of movements to tell a story. But then we wondered: could we have a better interaction with such a system of multiple robots? The first thing we did was a drawing display: our system could display images, so on your computer you can draw something, pick a robot, and move it around, and that way you interact with it. Then we wanted something more intuitive, so we moved to gesture-based control: here a Kinect recognizes the human's gestures, and the human controls the shape. One thing to note is that with multiple robots there are many degrees of freedom, and you cannot really control them one by one. So what we did was use a lower representation — a small subset of degrees of freedom the human can interact with, like the position of the mouth, or the shape of the mouth, of the displayed face. Instead of controlling every robot independently, the human controls a higher-level representation of the team of robots. On top of that, we also tried using an iPad for the interaction — again, remember this is from eight years ago — running all the algorithms on the iPad. You could have direct interaction both with the robots and with the environment: here you could have a team of robots playing soccer, with you controlling some of them directly; or augmented-reality games, where your robots do something and your task is to pick up things in the environment, with multiple players. So these were some of the things we tried for multi-robot interaction.
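A minimal sketch of that "lower representation" idea — a handful of human-controlled degrees of freedom re-targeting a whole team at once. This is my own toy example (a circle formation parameterized by center and radius), not the PixelBots code:

```python
import math

def formation_goals(center_x: float, center_y: float, radius: float, n_robots: int):
    """Map 3 human-controlled degrees of freedom to goal positions for n robots."""
    goals = []
    for k in range(n_robots):
        angle = 2 * math.pi * k / n_robots  # spread robots evenly on a circle
        goals.append((center_x + radius * math.cos(angle),
                      center_y + radius * math.sin(angle)))
    return goals

# One gesture update (e.g. from a Kinect skeleton) re-targets all 50 robots:
print(formation_goals(center_x=1.0, center_y=2.0, radius=0.5, n_robots=50)[:3])
```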
screen\nthat you are looking at now every pixel\nwas a mobile robot\nand we could control its color but also\nits position\nand using a lower number of them so in\nthis picture is 50.\nso using a small number of them we were\nable to display\nimages we were able to display\nanimations and\nwe also demonstrate this in many\nlive venues so like this one here that\nyou see a key that wanted to interact\nalready with the robots\nat that time there was no interaction so\nit was a fully pre-programmed\nanimation where the robots will\ntransition through a series of movements\nto tell a story but then we we wondered\nokay\ncould we have a better interaction with\nsuch a system where we have multiple\nrobots\nthe first thing that we did is our\nsystem was able to display images so you\ncan just have a drawing display like\nwhat you see here\nin your computer you can draw something\nyou can pick a robot you can move it\naround\nand that way you can interact with it\nbut then we say okay we want something a\nbit more intuitive so we move\ninto gesture based control so here we\nhave a kinect\nthat is recognizing the human gestures\nand then the human was able to control\nthe the shape here one thing to note is\nthat when you have multiple robots there\nare many degrees of freedom\nand you cannot really control them one\nby one so what we did it was to use a\nlower representation so a series\na small subset of degrees of freedom\nthat the human could\ninteract with so like the position of\nthe of the mouth\nor the shape of the mouth or things like\nthose so instead of controlling every\nrobot independently\nit will control some higher level\nrepresentation\nof the team of robots and on top of that\nanother thing that we tried\nfor the interaction was to use a an ipad\nso again remember this is from eight\nyears ago\nso and there uh we were running all the\nalgorithms\non this ipad and you could think of\nhaving drill interaction\nboth with the robots but also with the\nenvironment like here you could have a\nteam of robots playing soccer\nand you could be controlling some of\nthem so that was another way to interact\nwith the team of robots uh here we had\ndirect control over each robot\nbut you could also have augmented\nreality games where\nyour robots will be doing something and\nmaybe your task is to\nto pick some things on the environment\nand with multiple players\nso these were some things that we tried\nfor\nfor multi-robot interactions so human\nswarm interaction\nwhere mostly it was going to a lower\nrepresentation instead of controlling\nall the degrees of freedom\none by one controlling a subset of\nmore high-level degrees of freedom\nand moving forward uh we we have\nseveral projects in the lab that just\nstarted over\nstarting now where we are also applying\nthis type of ideas to\nteams of drones and there\none of them is with the ai police lab\nhere in the netherlands and then we are\nlooking at a multi-robot coordination\nand also learning\nto better coordinate between the team of\nrobots and also with a human operator\nwhere the human operator will be\non command so in command of the team of\nrobots controlling some\nhigh level degrees of freedom but the\nrobots themselves should be must be\nautonomous to be able to navigate safely\nin the environment\nand collaborate with each other so\nthat's a project that\nwe are starting right now so we will be\nbusy with that one also for the coming\nyears\nand i think that yeah that was it for\nthis part so that's a bit the overview\nof uh what 
That is a brief overview of what we have been doing with multi-robot systems and human-swarm interaction, and of where we are going in the future. If there are any questions about this part, this is the best time to ask them; otherwise I will move to the main topic: self-driving cars interacting with other traffic participants. Any questions?

Well, I could ask one — Ming Chen here. I am thinking that there is actually a thin line between remote control and interaction. If I think about human behavior, we are also sometimes 'remotely controlled': in a crowded environment we may just follow another person, giving the responsibility for routing to them. Is there a real difference between the two?

Yes, I think there is, and there are several levels. There is one level of interaction where you have no direct control, like what you see in this video: the drones are navigating and the humans are navigating in the same environment, and you can still influence the drones' behavior depending on how you move. Then there is the case where the human is actually controlling the team of robots in some way — and there the question is where the line lies between interaction and control. I think there is another distinction: being in control versus being in command. The difference is that in one case you really control how every robot in the team moves — and that is not scalable — whereas in the other you are in command: you tell the team of robots, go and inspect this area, and the robots themselves have enough capability, or intelligence, to perform that task autonomously. It is a bit like riding a horse, where you only give high-level commands, or like a military strategist who gives only high-level commands to the troops. So there are several different levels of interaction, control and command.

Okay. And humans make heuristic decisions without full information — just based on limited information. In the example I gave, you can follow a person because you assume he can see the way in front of him, but you cannot be sure. Do you use the same principles, making a decision to follow someone?

Yes — handling uncertainty, and that is still hard for robots. Many of the things you see here, except for the things at the end of the presentation, are in some sense deterministic: for example, for the pedestrians we were making a constant-velocity assumption. If we want a robot that reasons about all the possible scenarios, all the possible outcomes of all its actions, and their impact on the environment and on the robot itself, that is really hard. But I will come back to that point later, for the case of the self-driving car. — Okay, thank you.
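The constant-velocity assumption he mentions is literally this simple — which is also why it is brittle. A toy sketch; the positions, velocity and timestep are invented for illustration:

```python
def predict_constant_velocity(pos, vel, horizon_steps, dt=0.1):
    """Predict future pedestrian positions assuming the velocity never changes."""
    return [(pos[0] + vel[0] * k * dt, pos[1] + vel[1] * k * dt)
            for k in range(1, horizon_steps + 1)]

# A pedestrian at (0, 0) walking 1.2 m/s in x, predicted 2 s ahead:
future = predict_constant_velocity(pos=(0.0, 0.0), vel=(1.2, 0.0), horizon_steps=20)
print(future[-1])  # (2.4, 0.0) -- any turn or stop breaks the prediction
```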
Can I ask a question? I put my hand up, but I am not sure — sorry. Okay: I am just very curious about the use case for the Office of Naval Research and the police. I guess the assumption is that the two people walking around are not the controllers — that was a bit unclear to me. And it seems the idea is that the drones fly at almost eye level — eye level physically, not necessarily in terms of power. I am wondering why this was the choice: what kind of use case requires it?

For this video in particular, we were just showing collision avoidance: the robots are moving from A to B, back and forth, nothing more. In the past we have looked at aerial cinematography and videography — that is something these drones could be doing while moving in the environment: recording these targets. That is one scenario. And in particular, what we want for the work we are doing now is a kind of high-level machine understanding: you have multiple robots that go into an environment and collect information — in the case of the police, they might then want to do something with that information. This team of robots should be able to go into a changing environment, share information with each other, and understand what is going on. We are not there yet; that is our vision, where we plan to go. One case would be collecting information and recording targets, or going into an unknown environment and gathering information in a distributed manner.

So are you saying that these drones are actually coordinated, meaning they look at what each other sees? — Yes: we are looking at methods for how to share information between the multiple drones. For the police case, you can think of a central computer that communicates with all the drones, but it could also be distributed, with each drone communicating with the others and sharing some information. One thing we are looking at is what information should be shared between the drones — and when, and with whom.

Very interesting, also for our conceptions of meaningful control. Thank you so much. — Sorry for not going into more detail on these topics; if you are interested or want to hear more, reach out to me and we can discuss in detail.

Okay — any more questions, or shall we move forward? — We do not have much time; there are 23 minutes left, so you could go through the final part of the presentation, and hopefully we will have some questions after. — Okay, sounds good; I will skip some parts as I see fit.

Self-driving cars need little introduction. You may be stuck in traffic — especially before the corona times, or maybe after, if everyone goes by car — and no one likes being stuck in traffic. Some say that autonomous cars will contribute to making transport reliable, safe, efficient, comfortable and clean — basically, that they will solve many problems with transportation. Maybe some of this will be solved and some not; today I will focus on how self-driving cars can actually make our roads safer. As I mentioned before, in a highway environment there are commercial solutions, like the Autopilot in Teslas. It is a hard environment, especially to handle 100% of the time, with rain and snow and so on, but we more or less know how to do it. On the other hand, Waymo, Uber and many other companies are already testing self-driving cars in urban environments, so that is more or less in the pilot phase. But if we want cars that navigate in really urban environments, like our cities in the Netherlands, that is much harder, because there is much more interaction with other traffic participants: pedestrians, cyclists, other vehicles, going up and down bridges where you cannot see what is coming. That is the scenario our research focuses on: really accounting for the interaction with other traffic participants. To solve this, self-driving cars will have to reason about what is going on — is that person going to let me pass or not? — and they need to do this in split seconds, reasoning about all these possible futures while moving safely. That is really hard.
will solve many of our transportation problems. Maybe some of these will be solved, others not; today I will focus on how self-driving cars can actually make our roads safer.
As I mentioned before, in a highway environment there are commercial solutions, like the Autopilot in Teslas. It is a hard environment, especially to have the system working 100% of the time with rain, snow, and so on, but we more or less know how to do it. On the other hand, Waymo (of Google), Uber, and many other companies are already testing self-driving cars in urban environments, so that is more or less in the pilot phase. But if we want cars that navigate truly dense urban environments, like our cities in the Netherlands, that is much harder, because there is much more interaction with other traffic participants: pedestrians, cyclists, other vehicles, bridges going up and down where you cannot see what is coming. This is the scenario our research focuses on: really accounting for the interaction with other traffic participants. To solve that, the self-driving car has to reason about what is going on — is that person going to let me pass or not? — and it needs to reason about this in split seconds, considering all these possible futures and moving safely. That is really hard.
For those not familiar with autonomous vehicles and motion planning, this is how it typically works. We have an environment with a robot and other agents. The robot makes observations of the environment and updates its belief about what it thinks is happening around it in the world. With that, it decides how to move: that is the motion planning part, which computes a set of inputs — steering angle and acceleration, in the case of the car. It then keeps repeating this loop over time as it moves through the environment. In my group we focus on the motion planning part: once the robot perceives the environment, how does it decide what to do and how to move? Motion planning means generating safe trajectories for a robot or a car; I will use "robot" and "car" interchangeably. Here you can see an example: the vehicle has to choose how to move in order to perform this overtake safely. That is motion planning in short. What we need to take into account is the global objective (where the robot wants to go), the environment, the robot dynamics (how it can move), and the belief about other agents' behavior. All of this goes into what we call the motion planner — in particular, we use receding horizon optimization — which then computes the inputs for the vehicle. I will give a brief overview of this trajectory optimization, because I think it is important to understand how the robot actually thinks about the problem and computes its motion. A rough sketch of the overall perception–planning loop is shown below.
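To make the loop concrete, here is a minimal, self-contained Python sketch. Everything in it — the toy sensor, the smoothing constants, the placeholder planner — is hypothetical and only illustrates the structure described above, not the actual system:

```python
import random

class ToySensor:
    """Hypothetical stand-in for the perception stack."""
    def read(self):
        return {"pedestrian_pos": random.uniform(0.0, 10.0)}

def update_belief(belief, obs):
    # Keep a smoothed estimate of where the pedestrian is (toy filter).
    prev = belief.get("pedestrian_pos", obs["pedestrian_pos"])
    belief["pedestrian_pos"] = 0.8 * prev + 0.2 * obs["pedestrian_pos"]
    return belief

def plan(belief, horizon=4):
    # Placeholder for the receding horizon optimizer discussed next:
    # it would return a short sequence of (steering, acceleration) inputs.
    return [(0.0, 1.0)] * horizon

belief, sensor = {}, ToySensor()
for step in range(3):                   # in reality this runs at ~10 Hz or more
    belief = update_belief(belief, sensor.read())
    inputs = plan(belief)
    steering, acceleration = inputs[0]  # only the first planned input is applied
```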
The way this works is that we use something called model predictive control. We have our robot and a reference path it wants to follow — this could be the center of the lane the car is driving in — and we have a model of how the robot can move, so we can forward-simulate how it will move through the environment, and we can discretize time. In this example we discretized the future into four time steps; that is our prediction window, over which we predict what the robot is going to do. Then we define a cost per time step: here, the red arrows could simply be the deviation between where the car is and the center of the lane, and that is our cost term for each future time step. We add them all up — summing the costs over every time step — and that is our cost function. What we want is to minimize that cost, so we formulate an optimization problem in which we minimize these cost terms, one per stage in the future, given the prediction of where the vehicle will be under the inputs u_k that we plan to apply at each time step. The stage cost can be designed in many shapes: it could be this error with respect to the reference, the middle of the lane, but it could be many other things. Then we have a set of constraints, such as the vehicle dynamics — cars cannot move sideways, so we need to take that into account — or a maximum speed. So this is a constrained optimization, and luckily we can solve it with state-of-the-art solvers, which give us the optimal inputs for the vehicle over this time window. We then apply the first computed input, the vehicle moves, the window shifts over time, and we keep doing this many times per second — typically ten times per second or more — continually re-solving the optimization to minimize our cost subject to the constraints.
That is what was running in the video I showed before. The visualization shows what the robot sees and the predictions it makes about the environment: it sees the obstacles and the moving people, plans a trajectory to follow a figure-eight path through the environment safely, and adapts its position and velocity to avoid the pedestrians as it encounters them.
As we already saw, the reason we use MPC, model predictive control, is that it is flexible: it allows us to include different things. We can take multiple objectives into account — you then have to weigh them, so as a designer you will probably have to tune the weights of all these objectives, which is something to look at, but it can handle multiple objectives. You can have constraints that you want to satisfy while moving, like the vehicle model, and you can include predictions of what will happen in the future, both for the car or robot and for the environment. It is a very flexible framework. Written out, the receding horizon problem has roughly the shape shown below.
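In symbols — reconstructing only from the verbal description above, so the exact terms in the paper may differ — the problem is roughly:

```latex
\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} J(x_k, u_k)
\quad \text{subject to} \quad
x_{k+1} = f(x_k, u_k), \qquad
u_k \in \mathcal{U}, \qquad
x_k \in \mathcal{X}_{\mathrm{free}},
```

where \(J\) is the stage cost (for example, squared deviation from the lane center plus an input penalty), \(f\) is the vehicle model, \(\mathcal{U}\) the input limits, and \(\mathcal{X}_{\mathrm{free}}\) the collision-free region; only \(u_0\) is applied before the problem is solved again at the next time step.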
In the interest of time I am going to skip the detailed math — if you want to see it, it is in the paper below, or you can ask me. In the end it looks something like this. Up here we have a more realistic cost function, with the time horizon — the N steps into the future — a tracking error with respect to the middle of the lane, and a penalty on the inputs so the maneuvers are not too aggressive. Then we have all these constraints: satisfying the speed limits, the dynamics of the vehicle, and, importantly, this one here, which says we want to avoid collisions with other things in the environment. We minimize this constrained optimization problem. It is non-convex, which means it is very hard to solve, but luckily there are people who specialize in solving this type of problem, and there are solvers available online, such as ACADO or FORCES Pro, that you can use to solve such a problem; a toy version of one such optimization step is sketched below.
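Purely as an illustration of the structure — a 1D toy using scipy rather than the dedicated solvers named above, with every number made up:

```python
import numpy as np
from scipy.optimize import minimize

N, dt = 4, 0.2             # horizon length and time step
y0, v0 = 1.0, 0.0          # initial lateral offset and lateral velocity
y_obs, d_safe = -0.5, 0.3  # obstacle position and required safety margin

def rollout(u):
    """Forward-simulate toy double-integrator lateral dynamics."""
    y, v, ys = y0, v0, []
    for a in u:
        v = v + a * dt
        y = y + v * dt
        ys.append(y)
    return np.array(ys)

def cost(u):
    ys = rollout(u)
    # Tracking error w.r.t. the lane center (y = 0) plus an input penalty.
    return float(np.sum(ys ** 2) + 0.1 * np.sum(np.asarray(u) ** 2))

# Inequality constraint: keep at least d_safe away from the obstacle.
cons = {"type": "ineq", "fun": lambda u: np.abs(rollout(u) - y_obs) - d_safe}

res = minimize(cost, np.zeros(N), bounds=[(-2.0, 2.0)] * N, constraints=cons)
u_first = res.x[0]  # apply only the first input, then re-solve next step
```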
If you put this on a car, this is what happens. This is our department's self-driving car, together with the Intelligent Vehicles group. The self-driving car perceives the pedestrian crossing its path, builds a model of how the pedestrian is reacting, and feeds that into the motion planner, which decides how the car should move to avoid the pedestrian, as you can see here. We focus on the motion planning; the perception in this case comes from the group of Dariu Gavrila and Julian Kooij. That is the base framework. There is no interaction there, I would say — it is just a controller that has a prediction and avoids collisions. But the framework is very flexible: by changing the cost function you can do other things. We also tried visibility maximization, where the car tries to maximize what it sees in front of another car when overtaking; that can be encoded in the cost function as well.
The interesting problem, though, is interaction. Let me ask you a question — which is going to be hard, since we are not in a room together. If you were driving your car here and had to merge into this road full of cars, you might be waiting there forever. Would you actually wait forever? Most likely not. What you would do is somehow nudge your way in: you move a bit forward, and when you see that someone is slowing down, or the gap looks big enough, you hope for the best, start moving, and see whether the other driver lets you pass. If they do, you merge safely; if they ignore you, you stop and try again a bit later. So there is a level of interaction: your actions affect what the others do, and what they do affects you. This is what we are trying to incorporate, and it is a hard problem in motion planning, because the robot or car must understand how its future behavior changes the behavior of other agents, and how those interactions evolve with multiple agents over time. The question is how we can encode this interaction in the planner in a way that is safe and can be solved in real time, because real-time solutions are needed to run it in a car.
There are basically two ways to do it. One is coordination, when robots can communicate. In the interest of time I will mostly skip that one: you assume there is vehicle-to-vehicle communication, everyone communicates, and everyone adjusts their plans. Let me just show a video. Here, with vehicle-to-vehicle communication, everyone plans a trajectory, exchanges it with the neighbors, and they iterate to agree on plans that are safe for everyone. If you run that on a set of vehicles you can get very efficient behaviors, like the intersection you see here, where everyone drives madly towards the middle and somehow, very narrowly, avoids everyone else. But this only works if there is good communication and everyone communicates. It would also look a bit crazy from inside that car; you might be scared, because you really need to trust the system and trust that everyone is communicating properly. In reality, not everyone is going to communicate, so we also need predictions of what other traffic participants are going to do, and we need to encode that interaction. Think of someone driving an ordinary, non-self-driving car, or a cyclist: they will not communicate with you, they will not exchange trajectories, so you need to be able to make predictions.
This is what we call interaction. You recall this figure from before — the typical pipeline for an autonomous vehicle. What is new is this red arrow, the interaction part: the actions of the robot, what the car will do in the future, now also influence the other agents. That is not trivial, and it is what we need to encode in the planner. It becomes a loop in which we jointly estimate what everyone is going to do and plan accordingly, based on what we think they will do and on what we do; there is a recursion loop — your actions affect them, and so on — and it depends on how many recursions you want to do.
Let me skip this one, a probabilistic method, and talk about this other approach instead, developed very recently with my collaborators at MIT. This is the work of Wilko Schwarting, the PhD student I was working with there. We look at the problem of social dilemmas: situations in which collective interests are at odds with private interests — exactly the self-driving-car case I explained before. We model this using something called social value orientation (SVO), which comes from the psychology and human-behavior literature. It captures human preferences in a social dilemma on a circle, where an angle identifies whether you are prosocial, individualistic, or competitive; there are other things you could be, but they are very unlikely, and most people fall in this range. We wanted to encode that — to understand how the other traffic participants behave, what type of drivers they are — so that we can plan better; the way the SVO angle enters the utility is sketched below.
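Concretely, in the Schwarting et al. formulation the SVO angle trades off the agent's own reward against the other's. A minimal sketch, with approximate angle conventions:

```python
import math

def svo_utility(reward_self, reward_other, phi):
    """Weigh own vs. other's reward by the SVO angle phi (radians)."""
    return math.cos(phi) * reward_self + math.sin(phi) * reward_other

# Typical points on the SVO circle (approximate conventions):
prosocial       = math.radians(45)   # weighs self and other equally
individualistic = math.radians(0)    # cares only about its own reward
competitive     = math.radians(-45)  # gains from the other's loss

print(svo_utility(1.0, 1.0, prosocial))        # ~1.41
print(svo_utility(1.0, 1.0, individualistic))  # 1.0
print(svo_utility(1.0, 1.0, competitive))      # ~0.0
```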
Studies on human preferences say that humans are these red dots — the references are below. Around 90% of individuals are either prosocial or individualistic, roughly 50% and 40%. I am not claiming that myself; it comes from those studies. That is what we try to estimate for our self-driving car: we believe that if the car can understand in real time how the other drivers behave, it will navigate better in urban environments.
How do we do that? First, we use the social value orientation, which is useful because we can use it to weigh our own utility — the utility of the self-driving car — against the other's utility. This weighted combination is the utility, or cost, that we optimize: with the SVO angle we weigh our own reward against the reward of the other. We have no idea a priori what these reward functions look like, so we estimated, or calibrated, them from real data, highway driving in particular, using inverse reinforcement learning. Looking at a lot of highway driving data — the NGSIM dataset — we learned this reward function. The question we then need to solve in real time is to infer the social value orientation of each driver, in order to weigh those two rewards. For planning, this goes into what is called a best response game, in which every agent maximizes a utility. The utility you see here looks a bit like the model predictive control I explained before — that is because in the background we are indeed using MPC. We have a time horizon, and here we have the utility for every other traffic participant, combined with the SVO weights and summed into a joint utility; we then solve for the Nash equilibrium, iteratively estimating what everyone is going to do.
It is perhaps more interesting to see how it looks. Here is an example where the human is individualistic: our self-driving car understands that and pulls in behind. Or the blue car is prosocial and gives way: the self-driving car understands that and merges faster. This also works in other situations, like a left turn. Here is an example where the blue car is individualistic, and the next one where it is prosocial and slows down to let our self-driving car pass. The car does not know in advance how the other driver behaves; it estimates it in real time. So we are estimating the social value orientation online, based on distance, velocity, and other features, and we integrate that into the motion planner in a game-theoretic manner to improve the decision making and the predictions. I am happy to go into more detail on the math, but the idea is: first, we estimate the social value orientation of other traffic participants to know how they will behave; second, we integrate that into our motion planner — a model predictive controller coupled with an iterative best response game, whose fixed-point structure is sketched below.
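A toy, self-contained illustration of the best response iteration — two drivers picking accelerations, each trading off a preferred acceleration against a shared coupling term. Purely illustrative of the fixed-point idea, not the real planner:

```python
w = 0.5                       # coupling weight between the two agents
pref = {"a": 1.0, "b": -0.5}  # each agent's preferred acceleration

def best_response(other_accel, my_pref):
    # Closed-form maximizer of U = -(a - my_pref)^2 - w * (a + other_accel)^2
    return (my_pref - w * other_accel) / (1 + w)

a, b = 0.0, 0.0
for _ in range(20):  # iterate until (approximately) no one wants to deviate
    a = best_response(b, pref["a"])
    b = best_response(a, pref["b"])
print(a, b)          # an approximate Nash equilibrium of the toy game
```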
And maybe someday we will have our self-driving car driving in a real city like Delft, in environments like this. Obviously this one is driven by a human — we are not there yet — but it gives you an idea of the complexity of the world in which the self-driving car will have to operate. That brings me to the end; I think we have about five minutes for questions.
Moderator: Thank you, Javier, that was very interesting. We have three to five minutes for questions; raise your hand if you want to ask one, or just jump in.
A: Let me take it out of full screen so I can see the hands. Go ahead.
Q: If you observe a truck, its behavior might differ a lot depending on whether it is empty or loaded, especially for braking. How do you deal with that? Does the planner first watch how it brakes, or does it assume it is full?
A: We have not looked at that particular problem, the one of the trucks, but you would need an estimator for it: you would estimate, based on how it brakes, how fast it decelerates. You would need a model — maybe a black-box model, maybe one based on the physics you expect. If the perception module tells you how the truck is behaving, you can feed that into the motion planner. So yes, you would need an estimator.
Moderator: Thanks. Andrea had a question, right?
Q (Andrea): Thank you for this presentation. When you chose the preference-measurement tool, which was SVO, did you consider other tools, such as moral foundations theory or morality-as-cooperation?
A: Not in depth — none of us was an expert on those topics. We simply had the idea that drivers are probably either selfish or social, and that is what we wanted to encode: whether they are going to let us pass or not. We found social value orientation and chose it because it let us encode exactly that. I am not familiar with the other frameworks you mention, so maybe they are better, I have no clue — we can discuss that offline. The big question would then be how to go from those concepts, which are abstract, to a cost function we can actually use in the planner; that is the tricky part.
Moderator: OK, thanks. Probably the last question, from Cataline — do you want to ask it yourself?
Q (Cataline): Yes, that's fine. Very interesting talk, thank you very much. Did you also run experiments varying the ratio of human to automated drivers?
A: No — it was always one self-driving car, and the others were human-driven. We are also looking at the case where all cars are self-driving, but not at the mixed case; several other researchers are looking at that problem.
Moderator: That was quick — maybe one more question, from Nick.
Q (Nick): Thank you very much, especially for the last part, the social value orientation study. I was wondering: did you also test interactions between two agents with overlapping, conflicting values — say, two
competitive agents, a competitive autonomous vehicle and a competitive human, for instance? How did the interaction look?
A: In all the experiments you saw, we decided to make the self-driving car prosocial, both because we think that is how it should behave and because it leads to nice behaviors. The tricky part if you make everyone aggressive is that someone still has to yield. In our case the self-driving car runs this model predictive control with hard constraints for collision avoidance, so what I expect in those cases is that the self-driving car will mostly still let the human driver pass, because it has constraints requiring it to stay safe. That brings up an interesting question, one of the tricky parts: when we formulate this as a joint game, we are in a sense assuming we know what the other drivers will do, so we need a good estimate or understanding of their behavior. If we understand that they are going to be aggressive, the self-driving car will avoid them, because that is in the constraints. But if we estimate that they are prosocial and will let us pass, and in the end they are aggressive, then we are making wrong predictions, and that could be dangerous. That is why we try to estimate this online. Overall, there is an open question of how much you trust your predictions and how much you believe you can affect the behavior of others — if my self-driving car pulls in front, are they really going to let me pass or not? You have to be careful how you model that, and we do not have a good answer yet.
Moderator: Thanks, Javier. We have at least three more questions here in the chat; it would be great if the people who asked them contacted Javier directly, because we have to wrap up at this stage. Thanks so much, Javier, for the very interesting talk and the insightful answers to really good questions.
A: Thank you everyone, and please reach out with those questions. See you, bye!", "date_published": "2020-10-01T09:42:29Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "5dc4a66833101939776bb2f6257e11ab", "title": "Untangling Artificial Intelligence Ethics (Andreia Martinho)", "url": "https://www.youtube.com/watch?v=slYsaoWGNEo", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "The meeting is now recorded. Andreia, the floor is yours.
Thank you. Good afternoon. My name is Andreia Martinho; I am a PhD researcher at TPM, and my supervisors are Caspar Chorus and Maarten Kroesen. First, I want to take this opportunity to present my research at the AiTech Agora. Today I will be talking about the ethics of artificial intelligence; the title of this talk is "Untangling Artificial Intelligence Ethics: Artificial Moral Agents, Autonomous Vehicles, and Thought Experiments."
The beginnings of artificial intelligence can be traced to imagination, fiction, and philosophy, so I will start this talk with a bit of fiction. This is a clip from the movie Ex Machina. The video is a bit long, one and a half minutes, but I think it is quite interesting.
[Clip] "Hi, I'm Caleb." "Hello." "Do you have a name?" "Yes: Ava. Pleased to meet you, Ava."
[Audience] Andreia, can you perhaps
increase the volume? We can hear it, but it's barely there.
Sure. [Clip replays] "Hi, I'm Caleb." "Hello, Caleb." "Do you have a name?" "Yes: Ava." "I'm pleased to meet you, Ava." "I'm pleased to meet you too. I've never met anyone new before — only Nathan. And I guess we're both in quite a similar position." "Haven't you met lots of new people before?" "None like you." "So we need to break the ice. Do you know what I mean by that?" "Yes." "What do I mean?" "Overcome initial social awkwardness." "So let's have a conversation." "OK. What would you like to have a conversation about?" "Why don't we start with you telling me something about yourself?" "What would you like to know?" "Whatever comes into your head." "Well, you already know my name, and you can see that I'm a machine. Would you like to know how old I am?" "Sure." "I'm one." "One what? One year, or one day?" "When did you learn how to speak, Ava?" "I always knew how to speak — and that's strange, isn't it?"
So Ava is what we call, in the machine ethics literature, an artificial moral agent. I do not want to spoil the movie for those who have not seen it yet, but I think it is safe to say that Ava is an artificial moral agent that ranks very high on the autonomy and ethical sensitivity spectrum. And indeed, most of the controversies we find in the scientific literature about artificial moral agents concern systems like Ava, which rank very high in autonomy and ethical sensitivity.
So what are these controversies? Looking at the literature, we identified four main ones. The first is about the development of these systems: should they even be developed? This is the core of machine ethics, but it is quite controversial. The second is about the design of AMAs: how should we implement morality in machines? There is no consensus on the best implementation strategy. The third is about moral agency: would these systems have moral agency, and would they be comparable to humans in that respect? And finally, what are the societal and moral roles of these systems — will they be our moral teachers, or our moral destroyers?
Even though there is a lot of debate about these issues in the literature, there is little empirical information, so we raised the question: what are the perspectives of AI ethics scholars on these controversies? This research is currently under revision and will hopefully be published soon; it is a collaboration with Adam Poulsen, an Australia-based researcher. To address the question we used Q methodology, which is well suited to bringing coherence to complex, convoluted issues such as this one. The first step was to build the concourse of communication, which is essentially the background: we looked into popular and scientific publications and derived 203 statements relating to the controversies I just mentioned. From this concourse we selected 45 statements capturing the main controversies and tensions. Then we selected the participants: because they needed to grasp basic notions of philosophy, artificial intelligence, and ethics, we contacted researchers with publications in the broad field of AI and ethics, as well as artificial moral agents, machine ethics, and autonomous vehicles. We got 50
completed surveys, which is actually a very good number for Q methodology: the method entails a lot of choices, so we do not need hundreds of participants. Finally, we used multivariate data-reduction techniques, namely PCA, to analyze the data.
To give some examples of the statements we drew from the literature: for the development controversy, one example is "Technological progress requires artificial morality." And about the future of artificial moral agents, one example is "Mere indifference to human values, including human survival, could be sufficient for artificial general intelligence to pose an existential threat." As you can see, the statements were quite short and written as propositions, so that participants could either disagree or agree.
This is the survey participants completed. The first step was to assign the statements to three bins: disagree, neutral, or agree. The participants then sorted the statements into a forced quasi-normal distribution ranging from minus five to plus five, and finally they had the possibility to comment on the statements they ranked highest or lowest. The methodology therefore gave us quantitative as well as qualitative data; a loose illustration of the data-reduction step follows below.
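Strictly as a loose illustration of this kind of analysis — the actual study used dedicated Q-methodology factor analysis, and the data below are random, not the study's:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical Q-sort data: 50 participants x 45 statements, each cell a
# ranking from -5 (strongly disagree) to +5 (strongly agree).
q_sorts = rng.integers(-5, 6, size=(50, 45)).astype(float)

pca = PCA(n_components=5)             # extract five candidate perspectives
scores = pca.fit_transform(q_sorts)   # participant loadings per component
print(pca.explained_variance_ratio_)  # variance explained by each perspective
```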
From the data, five main perspectives emerged. If this talk were in person we could do something interesting and have you vote on which one you agree with most; we cannot do that today because of the pandemic, but you can still do the internal exercise and ask which perspective you identify with most. The five are: (1) Machine Ethics: The Way Forward; (2) Ethical Verification: Safe and Sufficient; (3) Morally Uncertain Machines: Human Values to Avoid Moral Dystopia; (4) Human Exceptionalism: Machines Cannot Moralize; and (5) Machine Objectivism: Machines as Superior Moral Agents. Let's learn a bit more about each one.
According to the first perspective, Machine Ethics: The Way Forward, AMAs — artificial moral agents — are unavoidable and may even be necessary for technological progress; they are more than simple tools, and they will advance our understanding of morality. The striking feature of this perspective is how positive it is about development. From the qualitative data — of course I do not know who wrote what — one participant wrote: "There will be no other way than to develop ethical machines, when humanity is supposed to rely with their life on them."
The second perspective, Ethical Verification: Safe and Sufficient, holds that AMAs will not replace humans in ambiguous moral situations; ethics and human moral agency are not algorithmic and cannot be reduced to an algorithm, and transparency, accountability, and predictability lead to sufficiently ethical machines. The striking feature of this perspective is the design: the moral performance of machines should be evaluated through external checks and balances — verification of transparency, accountability, and predictability — rather than by implementing morality within the machines. One participant wrote: "If an AMA makes the wrong decision, the outcomes could be disastrous. Similarly, the risks of malicious hacking of AMAs are serious. Thus verification, validation, and transparency are critical to the success of even limited AMAs in real-world use."
The third perspective, Morally Uncertain Machines: Human Values to Avoid Moral Dystopia, holds that AMAs must be morally uncertain and hold human values, otherwise they will pose an existential threat; prohibiting unethical behavior and implementing external checks and balances simply is not enough. The striking feature concerns future projections: mere indifference to human values could be sufficient for AGI to pose an existential threat — the highlight of this perspective is a sort of dystopian future. One participant wrote: "Thought experiments such as the paperclip maximizer show quite convincingly that for an AGI to pursue a goal that is merely orthogonal to human values could plausibly present an existential threat."
The fourth perspective, Human Exceptionalism: Machines Cannot Moralize, holds that AMAs are not moral agents, because they lack the free will and the understanding needed to make moral assessments as we humans do; logical machines will not be better moralizers or moral teachers, because they lack the human element. The striking feature is precisely that human element: on this view, human flaws such as irrationality are bound up with moral agency, so AMAs are not and will not be moral agents, because they lack this sort of conceptual understanding. One participant wrote: "You start being moral when you recognize your shared humanness with others and understand that, like it or not, you are in a relationship with them. Until machines get that — and I'm suspicious of their ability to do so — then they're not going to have full moral agency."
Finally, the fifth perspective, Machine Objectivism: Machines as Superior Moral Agents, holds that AMAs will prevent human harm through logic and context specificity; these agents are better moral reasoners and educators, and free will and conceptual understanding are not required for moral agency. The striking feature of this final perspective is that AGI — these advanced forms of artificial agents — will lead to a better understanding of morality, and such agents will actually be better moral agents than humans, because they are not subject to irrationality, seduction, or emotional turmoil. One participant wrote: "Machines can enhance us as moral agents if they manage to distance us reflectively from our intuitions, which are very much determined by social preference."
So I am wondering whether you have some sort of preference, or think one of the perspectives makes more sense to you — this is something
that hopefully we can discuss at the end of the talk; I am looking forward to hearing your comments and thoughts on each of these perspectives.
Now the main findings, having explained the five perspectives that came out of the data. There are very different perspectives on the issue of artificial moral agents, which attests to the complexity of the matter. We found two main dichotomies in the data, with respect to development and to moral agency. The first perspective, Machine Ethics: The Way Forward, stands for advancing artificial morality; by contrast, the second, Ethical Verification: Safe and Sufficient, is skeptical about the feasibility of, or need for, artificial morality, and considers external checks and balances sufficient. The second dichotomy concerns moral agency: perspective four, Human Exceptionalism: Machines Cannot Moralize, places a high value on the human aspect of morality, whereas perspective five, Machine Objectivism: Machines as Superior Moral Agents, holds that morality improves when stripped of human flaws.
We also found agreements. There is an overarching agreement that AI systems working in morally salient contexts cannot be avoided, and that developing AMAs is permissible according to existing moral theories. The final agreement I found quite interesting: with the exception of perspective four, none of the perspectives considers free will an important or essential element of moral agency. This position emerges from a thought shared by participants in the study: because humans do not have radical free will, at least, and yet are moral agents, the same should apply to machines.
Looking at this data — the dichotomies and the agreements — we started asking why there are so many perspectives on this topic. A potential source of these differing perspectives, we believe, is the failure of machine ethics to be widely observed or explored as an applied ethics. Indeed, the machine ethics literature is chiefly abstract. We believe AI ethics needs to be applied and shaped to the field in which it operates, and not the other way around. With this in mind, we will now look into artificial intelligence ethics in a particular field: transportation.
We all know there have been many debates in recent years about the ethics of autonomous vehicles; indeed, autonomous vehicles are often used as the archetypal AI system in many of these ethics debates. However, little is known about the ethical issues in focus by the industry that is developing these systems. Because industry has such an important role in developing this technology, we believe it is important to understand its position on ethics, even if it is quite formal. This work was recently published in the journal Transport Reviews, with my supervisors and former master's student Nils Herber.
To address this research question we did a literature review, but rather than looking only into the scientific literature, we examined the official business and technical reports of companies with an AV testing permit in California. We chose California for a couple of reasons: it was an early
proponent of autonomous vehicle technology, it houses many R&D programs, and it is quite transparent about which companies have permits.
Our starting point was a list of ethical issues: we knew we wanted to look for ethics in the companies' technical reports, but we needed a list of issues to look for, so we used the list compiled by Hagendorff from 22 major artificial intelligence ethics guidelines — this was not compiled by us. Then we looked into the scientific literature; for this step we did a PRISMA literature review and identified 238 articles, which were reviewed. Finally we did the document review, using 86 business and technical reports issued from 2015 to 2020 by 29 companies. We first did a contextual, manual analysis, then used a text-mining algorithm, and then manually checked again that everything made sense (I did not include that step on the slide).
These are the companies whose reports we used in the study. As you can see, it is 29 companies; at that point in time, June 2020, there was a record of, I think, 66 companies with a testing permit in California. So why didn't we use reports from all of them? We were quite strict about the kinds of documents we would use, for reproducibility reasons: we wanted reports available as PDF, so we could always refer back to them, and we excluded blog entries, articles only available online, and any communications we could not store in our own records. We emailed every company, did online searches, and arrived at the final list of 29 companies and 86 documents used in this research. The dataset is stored at 4TU.ResearchData; if you are interested in looking at it, it is publicly available.
OK, so we first looked into the landscape, in both the scientific and the industry literature, and it was quite salient that some ethical issues were more relevant than others. Based on the criteria of frequency — how often the issues appeared in the documents — and comprehensiveness — how deeply they were explored — we identified three main ethical issues to further explore: safety, human oversight, and accountability. These are the three main issues we will explore today.
The first ethical issue is safety and cybersecurity. In the ethics literature — we can go back to the word cloud — you can see how central the word "trolley" is, and this is real: we built this word cloud from the data, and the trolley problem actually appeared in more than half of the scientific articles we reviewed. It is quite clear evidence that the ethics debates in the scientific literature are dominated by this thought experiment. For those not very familiar with it: it was popularized by Philippa Foot in 1967, and there are many variations, but it essentially entails a hard moral decision between lives saved and sacrificed. Recently it has been argued that there is an overstatement of
this black-and-white, stylized thought experiment, and that this has not been very helpful for the ethics community. So different ethical extreme situations, or weakened trolley problems, have recently been reported in the literature. This is one of those examples — I call them weak trolley problems because there is still a moral decision in place, but it is not as dramatic as the thought experiment I just mentioned. In this case, it is a situation where the autonomous vehicle has to decide whether to move to the left or the right side of the lane based on distributions of risk.
The AV trolley problem argument is set out in the scientific literature more or less like this: (1) autonomous vehicles ought to save lives; (2) upon development and deployment of autonomous vehicles, extreme traffic situations will not be completely avoided; (3) some of these extreme traffic situations will require an autonomous vehicle to make difficult moral decisions; (4) difficult moral decisions in traffic resemble trolley problems; (5) the best option to assist autonomous vehicles in managing the trolley problem is X; (6) X is programmable; therefore, AVs should be programmed with X. The premises most debated in the literature are premise four, about the relevance of the trolley problem, and premise five, about the strategies to implement some sort of moral controls in the autonomous vehicle.
From the scientific literature we then raised two very practical questions for which we wanted answers in the industry reports. First: are extreme traffic situations resembling trolley cases addressed by industry? Second: what solutions does industry propose to address extreme traffic situations? So let's see what industry says. We inspected the industry reports regarding crashworthiness to clarify some of the critical elements of the autonomous vehicle moral dilemma.
The first element is the risk of crashing: if you think about it, if autonomous vehicle technology completely eliminated crashes and collisions, we would not have a trolley problem or any moral situation of this sort. What we found is that companies express a vision of a future without accidents — an aspirational goal in pretty much all the reports we read — but they emphasize the inevitability of crashes and collisions. One of the statements we retrieved, from a Toyota report: "Driving environments can be extremely complex and difficult, and no automated driving system, regardless of how capable it may be, is likely to prevent crashes entirely."
The second element is extreme traffic situations or, as industry calls them, edge cases. I included some graphics here: you can see the autonomous vehicle going off a cliff and, unfortunately, into the ocean, apparently — that would be an extreme traffic situation. We selected one statement from a report issued by NVIDIA: "AI-powered autonomous vehicles must be able to respond properly to the incredibly diverse situations they could experience, such as emergency vehicles, pedestrians, animals, and a virtually infinite number of other obstacles — including scenarios that are too dangerous to test in the real world."
And then we come to the final element: moral situations. What we found is that there is no reference whatsoever in
the industry literature to moral dilemmas, trolley cases, or anything of the sort as described in the scientific literature — that is, situations requiring the autonomous vehicle to make a difficult moral choice. We did, however, find one statement that, even though it is not a trolley case, was close enough that I felt the need to highlight it. Nuro wrote: "In the unlikely case of a Nuro shuttle ever encountering an unavoidable collision scenario, the driverless, passengerless vehicle has the unique opportunity to prioritize the safety of humans — other road users and occupied vehicles — over its contents." Of course this is not a trolley case, because the nature of the choice is very different: it is not life versus life, but life versus goods. We speculate that this somewhat more transparent account of moral situations is possible because this company's shuttles carry no passengers, only goods — but that is just a speculation, of course.
So, answering the questions raised before. Are extreme traffic situations resembling trolley cases addressed by industry? No, but there are nuanced allusions that reveal underlying concerns about these extreme traffic situations. And what solutions does industry propose to address them? We found radar, speed limitations, and simulation and validation methods to test those scenarios.
Now we move to the second ethical issue we considered relevant: human oversight, control, and auditing. Here we highlight the philosophical account of meaningful human control, which has recently been the subject of a lot of excellent research at TU Delft, especially by Filippo — I am not sure whether he is here, but he can talk about this much better than I can. Put simply, for autonomous vehicles to be under meaningful human control, they should meet two conditions: tracking and tracing. The tracking condition is about the AV being able to track the relevant human moral reasons in a sufficient number of occasions; this condition ensures that the AV complies with the intentions of a human operator. The tracing condition has two dimensions: first, the actions of the autonomous vehicle should be traceable to a proper moral understanding; second, at least one human agent should understand the real capabilities of the system and bear the moral consequences of its actions.
From the literature we raised one question for industry: which decision prevails in traffic, the autonomous vehicle's or the human operator's? Let's look for answers. First, I want to mention that we found statements we believe relate to the tracking condition: companies emphasize their on-site human oversight of AV testing operations, as well as remote control of AV operations. Most of the companies, as I
showed before, did not have a driverless testing permit, so on-site human oversight of the operations is normally what serves the tracking condition.
Then, for the tracing condition — regarding the understanding of the real capabilities of the system — we found one interesting statement, from AImotive: "Test operators face their own unique challenges. The debug screen of a complex autonomous system is incomprehensible to the untrained eye. These engineers and developers have a deep understanding of the code at work in our prototypes, allowing them at times to predict when the system may fail. This allows our test crews to retake control of the vehicle preventively, in a controlled manner."
Going back to the question we raised about human oversight — which decision prevails in traffic — we actually found different approaches to authority: companies like Mercedes-Benz and Intel prioritize the autonomy of the vehicle, while companies like AutoX prioritize the decisions of the remote operators.
Finally, we move to the last ethical issue we identified: accountability. In the meaningful human control account, the second dimension of the tracing condition also requires an agent who can bear the moral consequences of the system's actions. It has been argued that for this dimension to be met at higher levels of automation, a transition of responsibility from the driver to designers or remote operators is required. Another issue raised in the scientific literature is the liability-and-technology argument: autonomous vehicles have the potential to save lives, but crash liability may discourage manufacturers from developing and deploying these systems, in which case the technology would not meet its potential to save lives. From the literature we raised one question: which accountability design strategy is adopted by the autonomous vehicle industry?
We found that companies seem invested in the lowest-liability-risk design strategy. This is not shocking, of course. The strategy relies on rules and regulations and on expediting investigations — here we have a statement from a BMW report showing their intention to use a black box, similar to the ones used in flights, to make liability easier to address — and on crash and collision avoidance algorithms. I will read this statement from Intel, because I found it interesting: "AVs will never cause or be responsible for accidents. By formally defining the parameters of the dangerous situation and proper response, we can say that responsibility is assigned to the party who did not comply with the proper response."
So, going back to the question we raised — which accountability design strategy is adopted by the AV industry — we believe it is minimally responsible technology. Mobileye states that the self-driving car will never initiate a dangerous situation and thus will never cause an accident; this is quite illustrative of minimally responsible technology. Finally, we also found this interesting statement from Intel, in which
they say that their Responsibility-Sensitive Safety system "will always brake in time to avoid a collision with the pedestrian — unless the pedestrian is running superhumanly fast."
As a summary: I started this talk with artificial moral agents, where we identified five different perspectives on key ethical controversies, and we concluded that for artificial intelligence ethics to be accepted as an applied ethics, context sensitivity is required. We then looked into the specific field of transportation, particularly autonomous vehicles: we reviewed the ethical issues in focus by the AV industry in California and compared and contrasted the scientific and industry narratives; industry provides elements that should be considered in the ethics debates. My main takeaway message is that as AI becomes part of our daily lives, ethicists need to adapt and find ways to untangle ethics, in order to foster meaningful communication between the ethics and technology communities. In my view, empirical research such as what I just presented is a good way to realize this. Thank you.
Moderator: Thank you. Do we have any questions? If so, please raise your hand — the button is in the top right corner — or just post your question in the chat. I think we have a first hand raised; it's Evgeni's.
Q (Evgeni): Thanks, Andreia, really interesting presentation, a lot of interesting things to think about. Following up on the conclusion you just made, that empirical research is necessary to adapt the conversation: take the trolley problem example you mentioned. What are your thoughts on that particular example — what did you learn from the research you did with regard to it? Is it useful at all to talk about, or is it useless?
A: The main goal of ethics — and of thought experiments in particular — is to come up with these extreme, dramatic situations and test the values that come out of them. In that sense I do believe the trolley problem, like any thought experiment, is very, very important. However, as I mentioned before, as artificial intelligence leaves the laboratories and starts being used on a day-to-day basis, we need to move beyond these very important and interesting thought experiments and start thinking about ethics in a more practical way, so that we can work alongside the technology communities and help develop ethical systems.
Q (Evgeni): What I always find interesting about the trolley example: I ask myself, what happens today, when a human is driving the car? What do we consider the right moral decision there — do we have an answer to that? If such a case came to court, say in the Netherlands, how would the court judge how it unfolded? I don't usually hear us talk about that, and yet we jump right away to the AI scenarios.
A: Well, I think humans have something machines don't: the benefit of reacting by instinct. And when you react by instinct — I am not
an expert by any means, but I don't think any court would judge you for a split-second decision; it's instinct. Artificial intelligence systems, however, do not have this advantage: we need to program them in advance so they can react in the moment. That is the main difference.
Q (Evgeni): Nice — you actually helped advance my understanding of this. Thanks, I appreciate it.
Moderator: While we are waiting for more questions, Andreia, would you mind stopping screen sharing? Just click the small button next to "Leave."
A: Sorry, I never use Teams, actually — or rarely.
Q (Arkady): I actually have a related question — not about the trolley dilemma itself, but about the analysis you did of the research papers. You have a really impressive word cloud showing that "trolley" is the most discussed word in the papers. But I'm wondering: have you done any in-depth analysis of the tone of the discussions? I'm pretty biased here — I've read a lot of papers criticizing the trolley dilemma as an approach to machine ethics — so could it just be people arguing against the usefulness of the trolley dilemma, rather than advocating it as the way to solve AI ethics?
A: That's a very good question. Clearly, when someone first framed the trolley dilemma in the context of autonomous vehicles, there was a kind of fascination: the media, and the scientific community too, sort of fell in love with it, because it is like having a thought experiment from a textbook come to life, and that does not happen very often. Then, gradually, the literature of recent years has been saying that there is an overstatement of the trolley problem. Two things about this. First, while the community argues for and against it, it is not looking into other pressing issues, and I think that is a problem. Second, yes, there is a lot of discussion and not a lot of agreement — and that is exactly the type of research I do: looking into convoluted, complex issues and then trying to make some sense of them and derive empirical insights.
Q (Arkady): Thanks. I'm wondering whether there is also a dynamic aspect to it — whether this shift of attention changes over time. Did people start looking at the trolley dilemma soon after that paper was published, and has it since faded, with other topics catching up, or is it consistent over time? That would be interesting to know as well.
A: What I have been seeing in the most recent publications is a push towards these weaker trolley problems — distributions of marginal risk, these more realistic moral situations. That is what I have noticed recently.
Q (Arkady): Thanks.
Moderator: Any other questions? I think Evgeni had one, right?
Q (Evgeni): Yeah —
while we are waiting for other\npeople to come up with their questions\nagain\nyou can go ahead um or perhaps madeleine\nraised her hand yeah let's let uh other\npeople go first and then if there's time\nleft i'm gonna ask yeah madeline do you\nwant to ask your question\nyeah thank you so much for your\nfascinating talk it was really great you\ncovered a lot\num i was just wondering actually um if\nyou could\nspell out the argument that you made\nsort of switching\ninto the context specific if you could\nsay a little bit more about why you\nmade that move in that\nargument\num that we really need to\nbe doing this context specific\napplied ethics yeah thank you that's a\nvery good question\nso um as i said before as we move\nfrom theoretical ai kind of unreachable\nai to something that we're using\nin our daily lives i believe that we\nshould have\nethics that is more applied to the\nspecific fields\nbut answering in a different way um\nso if you think about the\ndata that we got about artificial moral\nagents\nit's highly heterogeneous and i'm sure this is the same\nabout\nother ethical issues related to\nartificial intelligence not just\nartificial moral agents and one of the\nthings that\ni think explains this is\nthat this literature is very abstract so\nwhen\nyou're talking about an artificial\nmoral agent\nit could be anything it could go from an\nautopilot to\na system like ava like i showed and\nbecause it's abstract then there's a lot\nof contradictions a lot of\ndisagreements and once you scale down a\nbit\nand make it more context specific\nnot only do you\ngain more insight but also\nwe believe that you're going\nto get more agreement than when it's\njust\nabstract and theoretical\nthank you\nhey evgeni your turn again yeah sure\nthanks yeah yeah i think that's a larger\npoint andrea i think that's so\ngreat that you bring up the whole matter\nof like\nuh grounding in a context with\ncommunication between the\ntheoretical discussions and the practice\ni think also the other way around when\nsometimes we\nlike when we do ethical discourse and we\npoint out problems which are\nvery relevant but people who are\npracticing are not aware\nthat's also true because often\nwe have very passionate debates here at\nthe university but people who are\nactually practicing are entirely not\naware of all these discussions so we\nhave these passionate debates but unless\nwe actually have the conversation\nwe don't impact the world in the way we\nwant to impact\nso sometimes we also have good ideas but\nthey don't land so we need to have this\nconversation\ni was wondering um something about the\nfirst half of your presentation\ni was curious you mentioned on one of\nthe slides the point that\nuh like uh so an argument that humans\ndon't have radical free will\ncould you please elaborate so what\ndoes that point mean\nuh on that slide sure um\nthat's actually one of what i\nconsider the most interesting\nparts of the study or one of the most\ninteresting findings that was a bit\nunexpected\nso of course i don't want to go into\nthis whole philosophical discussion\nabout determinism\nyou know indeterminism but\nthe argument that was found consistently\nexcept for the\nfourth perspective is that well\nas humans we don't have radical\nfree will\nit's like it's a bit 
of a\ndeterministic perspective about free\nwill it's like\nour actions are somehow predetermined\num and still we think\nwe regard ourselves as moral agents so\nwhy shouldn't machines as well\nand in the beginning when\ni was looking at the data i thought this\nwould be just a marginal\nfinding but then it\nactually turned out the other way around\nso either\neveryone that\ndid the survey or almost everyone\nbelieves in determinism or this is\nreally something that\npeople really believe so that machines\nthey don't need free will to be moral\nagents and bear some moral\nresponsibility i do think this is an\ninteresting research avenue to be\npursued\nafter this paper is\npublished\nyeah very interesting i wouldn't expect\nthat as well\ni was very surprised that's\nactually something that\nreally made this research fun is to see\nthese things coming up\ni just have a note and this definitely\nlinks to the larger conversation that\nyou want to avoid right now but\ni was thinking like sometimes in the\nacademic environment we talk about free\nwill\nin ways that make me really cringe\nbecause when we have these very\nphilosophical conversations\ni find it a very slippery slope in the\ncivil sense of the word because free\nwill of course is the way we interpret\nit civically\nas my you know freedom to love who i\nwant to\nsay whether to say things and like the\nfreedom of speech you know\nfreedom of religion that's very\nessential to our societies and sometimes\nin the academic discourse we go into the\nlike\nwell is it biologically\npredetermined and\nyeah maybe i\nshould have made this a bit more clear\nwhen i was mentioning free will it's uh\nit's about uh you know the deterministic\nor indeterministic\nperspective about free will and uh how\nmuch\nfree will do we have or if we just\nbelieve in it so that's another\nlayer of the\nconversation because for some\nphilosophers only the\nfaith or the confidence that we have\nfree will is required and not\nreally free will yeah\nno well it's a whole super\ninteresting topic\non its own but of course the\nthing is that\nwe have these developments and\nespecially when it comes to algorithmic\nai profiling of people\nthat when you use the argument let's say\nthat well we do not have free will\nthen you can start arguing well\nthat kind of can be used as a\njustification why you can use\nbig data to predict people's behavior\nbecause everything is predetermined\nanyway\nand the machine learning model will know\nyou better than you yourself know\nyourself\nyeah but anyway that's a whole big\ndiscussion yeah\nthank you thank you\ngreat we have uh a couple more minutes\nfor uh\nthe last maybe one two questions\ni'm curious to see if someone got some strong\nfeelings about one of the perspectives\nin favor or against it or\ni was surprised by the diversity of i\nmean\nyou found two whole controversies\nnot just one\nand the pro contra amas but it was a\ntwo-dimensional controversy and it was\nsurprising to\nsee that uh people are that divergent\nand uh\ni think you mentioned that it was mostly\nacademics right\npeople who responded to the statements\nwell yeah from a wide range so\ni sent many many many emails uh but only\nto people that have some sort of\npublication in\nethics or artificial intelligence\nbecause 
otherwise it would be very\ndifficult\nto get meaningful uh information because\nthe statements themselves were you know\nthey were philosophical like\nuh you know concepts like moral agency\nfree will\nconceptual understanding it's not\nsomething that uh\nanyone could really grasp so that's\nwhy we selected this\nparticipant set right but at the same\ntime did you\nbreak people down based on their main\nfield or\nsorry did you analyze did you even\ncollect the data on people's discipline\nbackground so\ni would imagine that you would have a\nlittle bit higher share of proponents\nof amas\nin uh people with engineering\nbackgrounds perhaps compared to\nphilosophical backgrounds i don't know\nuh well the people from the\nengineering\nside that i contacted and again i\ncontacted\nmany many many people so i didn't select\ni was not like making some sort of\nselection bias or anything um\nbut they all\nuh were published in ethics so it's not\nthat they were\nlike working on autonomous vehicles and\nnever have published something on ethics\nit was always\npeople that have published in the field\nfrom the ethics side as well\nbut yeah that's a good\nquestion and actually i wanted to do\nthat but then we had a problem in the\nsurvey\nand somehow we didn't get that data\nwhich\noh someone is here yeah karolina\nwants to say something\nyeah hi uh thank you very much for the\ntalk just\nvery very interesting um yeah\ni just wanted to bring something\nup um\nso in my own phd and uh well\nin my lab and there are other\nworks before that worked also on this\nquestion uh\nwe like to look at it from\na slightly different uh\nperspective uh\nin the sense that we believe that maybe\nthe machines don't have to be\nso autonomous in everything they can\njust\nask humans for their input in\nsome moments so of course you cannot\ndo this when the car\nis driving in the city because\neverything is too fast and uh\nyeah they have to decide by\nthemselves let's say\num but in many other situations where i\nthink that these ethics problems come up\num i would just\nlike to leave this input that maybe\nin some cases we can actually\nbe interdependent and not\njust go fully on this idea that the\nmachines have to be fully autonomous in\nethics and moral issues and\nso they can actually ask you know and um\ni don't know we can still deal with some\nethical problems ourselves instead of\nputting that in the\nautonomy part yeah that's\ninteresting so in that uh\nthird slide that i showed then you would\nsay that machines should\nbe low on\nthe autonomy axis and it also seems from\nyour words that you kind of\nidentify more with the second\nperspective which is about\nuh ethical verification checks and\nbalances but not necessarily\nimplementing morality within uh the\nmachine\nyeah so i believe that in a far future\nwell i think that machines can\nlevel up to humans in any sense i mean in\nthe far\nfar future so\nso i would say i will\nbe a bit of a devil's advocate because to\nbe honest\ni like to be neutral about\nresearch yeah uh but i would say that\nthe machine ethics proponents\nwould say that um\nartificial morality should not be an\nafterthought\nand that we should be working towards that\nfrom now\nand they would also say that just by\nlearning 
about\nit and trying to develop an artificial\nmoral agent we are\nalready learning about morality so\nthat's like a good\nuh conclusion yeah yeah but i actually\nsaid this to mean that um\ni believe that maybe one day they can\nbecome\nthey can do everything we can so we have\nto\ntake this into account from day one\nuh that's what i meant um but yeah\nuh i don't know i think\nyeah i just wanted to give this\nuh idea that\nyeah with the um the autonomy\ni mean to put the morality and the\nethics in\nour autonomous partners let's say we\ndon't have to\nmaybe just put everything on them\nin some situations they can just\nask or require some input\nthank you so much it's a great input\nthank you karolina thank you\nokay i think it's time to wrap up uh\nthanks andrea for your presentation\nand thanks everyone for a nice\ndiscussion um\neveryone yeah the recording will be\navailable on our\nyoutube channel soon and see you all\nnext week\nbye bye bye thank you", "date_published": "2021-01-20T14:43:16Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "af8372efca9e8f6c9fff44fb8591193f", "title": "Hybrid Intelligence for high quality large scale deliberation", "url": "https://www.youtube.com/watch?v=8oJM92XctGo", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "all right\nlet me share my screen\nokay so well thank you for the\nintroduction luciano\nwelcome everyone today as we have said\nwe're going to talk about high quality\nonline deliberation using hybrid\nintelligence\nbut uh well me and my colleague michiel\nthought this title sounds pretty boring\nand\nit's just after lunch so let me start\noff instead with\na bit more of a clickbait title and let\nme say\nlet me start with the title instead\nsocial media are bad\nyou probably have heard about this you\nprobably you know are on social media\nyou know\nhow bad they are probably even your mom\ntold you that social media are bad\nbut today we want to try\nto give you a little bit of a different\nangle explain to you another\nway in which social media are bad\nfor reading the wisdom of the crowd so the\nwisdom of the crowd is essentially\nthe opposite of the expert's opinion as\nin\nit is what lay people\nor a group of lay people think about\na certain\ntopic for example if we talk about\nlockdown measures\nas you may be familiar with\nthere is on one side the\nexpert's opinion and on the other side\nthere is the wisdom of the crowd what\nthe population what the citizens think\nabout it now um\nthree key components for having a\nproper\nreading of the wisdom of the crowd are\ndiversity equality and independence\nand none of these is actually present on\nsocial media so let me quickly go\nthrough it\nso diversity means that\nthe population needs to be well\ndiverse the name says it\nthere need to be different people\nwith different backgrounds\nwith different opinions and this is not\npresent on social networks because\nsocial networks create\nthe so-called filter bubbles you may be\naware of that\nessentially social networks' friend\nsuggestions and algorithms\nsuggest friends that have\nsimilar interests to yours\nand that means that around you you\ncreate a local network of contacts that\nshare similar interests and that means\nthat diversity is actually\nlow then there's no equality equality means\nthat\neverybody's voice has the same weight\nand this thing is also not present 
on\nsocial media because you have\ninfluencers\nyou have people who have lots of\ncontacts whose voice is heard much more\nthan other people\nwith way fewer contacts\nand last independence independence on\nsocial media\nis more of a social dynamic that\nhappens\nso independence means that\nour voice should not be biased by\nanybody else's voice\nand this also doesn't happen on social\nmedia because\nwhat typically happens also\naccording to several\nexperiments\nis that what we write what we say on\nsocial media\nis actually biased by what has been said\nso far for example in comments because\nwe want to try to fit\nin with the crowd we don't want to stand\nout so not only are these three factors\nnot present on social media but the\nmain point is that\nthese three factors are actually\ndiscouraged on social media they're\nactually\nnot promoted on social media and this\nmeans that\nreading social media to get the feeling\nof what a population thinks about a\ncertain topic\nis not really the best way
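To make the three conditions concrete, here is a minimal sketch of how one might score a comment thread for diversity, equality, and independence. The data layout, scoring choices, and function names are hypothetical illustrations, not something from the talk or an actual platform:

```python
from statistics import pvariance, mean

# Toy model of a comment thread: each comment has an author,
# the author's follower count (reach), and an opinion score in [-1, 1].
comments = [
    {"author": "a", "followers": 120_000, "opinion": 0.8},
    {"author": "b", "followers": 150,     "opinion": 0.7},
    {"author": "c", "followers": 90,      "opinion": 0.9},
]

def diversity(comments):
    # Higher variance of opinions = more diverse voices in the thread.
    return pvariance([c["opinion"] for c in comments])

def equality(comments):
    # Ratio of smallest to largest reach; 1.0 means every voice carries
    # the same weight, values near 0 mean influencers dominate.
    reach = [c["followers"] for c in comments]
    return min(reach) / max(reach)

def independence(comments):
    # Mean absolute gap between each opinion and the running average of
    # earlier comments; small gaps suggest people conform to the thread.
    gaps = []
    for i, c in enumerate(comments[1:], start=1):
        prior = mean(x["opinion"] for x in comments[:i])
        gaps.append(abs(c["opinion"] - prior))
    return mean(gaps)

print(diversity(comments), equality(comments), independence(comments))
```

On this toy thread all three scores come out low, which is the pattern the talk attributes to social media.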
so what is\nthe best way then\nwell you know you may think about voting you may\nthink about referenda\nbut according to us the best way is\ndeliberation\nso deliberation happens when\na group of people sits virtually or\nphysically around the table\nand discusses a certain topic with um\nwith the idea that people have to give\narguments in favor or against\ncertain opinions or certain\ndecisions\nby rationally thinking about it so\nparticipants are invited to rationally\nthink about their arguments to rationally\nthink about\nother participants' arguments with the\nfinal goal of\nessentially making a common decision or\nat least\nhaving a proper rational discussion\nabout a certain topic\nnow uh deliberation in particular uh\nby the way in which it is conducted\nemphasizes\ncontent creation everybody is pushed to\nsay what they think\nit emphasizes reasoning everybody is\npushed to\nreason about what they think and to\nproperly structure their arguments and\nnot just\nthrow things over there and then it\nemphasizes also inclusivity meaning that\nthese deliberation groups are typically\ndone\nat least according to theory they are\ndone in a very diverse way where\nall types of backgrounds are\ninvolved and asked to\ngive their opinion social media\nessentially promotes exactly the\nopposite social media promotes\nclicks right so content consumption\nsensational content and these local\nnetworks\nsort of create an echo chamber for this\nsensational content so um\nand the point is that deliberation\nsounds very good\nright on paper it sounds very good\nbut deliberation is not just on paper\nand deliberation is not just\nwhat you think ancient greece\nlooked like deliberation\nis actually used also in modern\nsociety\nyou may be aware that a couple of years\nago\nabortion was legalized in ireland\nand the policymakers were a\nlittle bit unsure\nwhether to propose the referendum for\nlegalizing abortion\nbecause of the conservatism\nof the irish society\nso in order to investigate whether it\nwas the right moment to propose such\na referendum they created\ndeliberation groups so they invited\ncitizens with diverse backgrounds to\ndiscuss together to put out their\narguments and\nto get informed also during the process\nobviously\nabout abortion about the pros\nand cons\nand after these sessions of deliberation\nthe irish government essentially felt\nmotivated and felt justified in\nproposing this referendum\nthe referendum passed with a resounding yes\nabortion was legalized and\nby asking during the exit polls\ncitizens were well aware about what\nhappened\nand the fact that this deliberation had\nhappened pushed them to inform\nthemselves about the deliberations and\nabout abortion\nso this created on one hand\nlet's say a justification for the\nactions of the policy makers\nbut on the other hand it created more\ninformation and more empowerment\nin the citizens because they felt that\ntheir voice could\nreally be heard and that the government\ndid something because they heard the\nvoice of the citizens\nso this is at the basis of deliberative\ndemocracy\ni'm not going to go here into details\nabout that but\nthis idea of hearing the voice of the\npeople\nand at the same time having people\nfeeling empowered\nand informed is very very\nimportant however what\nhas been done in ireland especially was\nwith just a few hundred\nmaybe a thousand participants but not\nmore than that in order to really\nhave proper deliberation\naccording to us almost all the\npopulation should be involved\nas many participants as possible should\nbe involved so that\nliterally all voices can be heard\nbut that is very difficult to do of\ncourse when you have\nthese physical\ndeliberations\nyou cannot ask millions of\npeople to physically participate in\nthese deliberations\nso how do we take what's good in\ndeliberation\nand scale it up to a population-wide\nsample well my colleague is going to\nanswer this question\nyeah let me uh take over the slides then\nall right you guys should be able to see\nmy uh\npresentation now here we go\nall right yeah so uh we talked a little\nbit about deliberation\nand about uh so we want to scale it up\nright\nwhat we exactly mean with scaling\nwe'll get to in a minute\nbut we have a couple of ideas on how to\nget there and\nuh we're also inspired by some of the\nthings that do work well on social media\nuh as i will also later show not all\nsocial media is bad\nto kind of uh contradict the\nclickbait title perhaps\num and to make it a little bit more\nconcrete\nwe're talking about online\ndeliberation uh that is ai supported\nand text-based so those are the three\nthings that you see\nhere on your screen\num so first off we would like to\nuse ai in some sort of way we think ai\nprovides us with a prime tool to kind of\nhelp us scale up the deliberation to\nmore participants\nuh making sure that everyone is included\nand that these deliberative ideals\nthat are required for\nthe wisdom of the crowd for instance\nare preserved uh so online for\ninstance like enrico said it removes\nthis restriction to physically come\ntogether\nbut also having a video call with like a\nmillion people is not a really good idea\nso\num you should have a different way of\ncommunicating and\nfor this we make the assumption of\nmaking it text-based\num you could say for discussions you\ndo want to have it face-to-face uh there\nis of course a difference\nbut even on uh social media as it is now\nthere is already\na discussion happening right and there\nis already some sort of discussion going\non\nso we're looking to kind of amplify that\nand insert 
some deliberative qualities\ninto that discussion\nand take it from there all right\nso we talked about scaling and uh\nyou know as computer scientists when\nwe talk about scaling we talk about\ncompute power\nso uh when we talk about scaling up we\nsay all right i'm gonna trade in my\nlittle computer for a bigger computer\nor we talk about scaling out which is\nactually buying more computers\nbut in deliberation we have a\nslightly different way of talking\nabout it so in a deliberation\nwhen we say scaling up we mean we add\nmore participants to a single\ndeliberation\nso first there's where's my mouse\nfirst there's two guys talking then\nthere's a couple of guys talking\neventually there's the entire group well\ni don't know how many\npeople there are but like\n10 or something but we would like to\nincrease this to i don't know\na thousand or above you know\nof course when you do this a\ncouple of problems pop up\nso deliberation should be on a specific\ntopic or it should discuss a\nspecific problem like in the case in\nireland it was abortion\nas soon as you increase the number of\nparticipants\nif you have ever been part of\nan internet discussion\nyou might run into the problem that\npeople tend to go off topic so it's\nreally\na problem that you should force people\nto actually discuss the matter that\nis at hand\num and the other problem is that well\nactually as a single person in this\ndeliberation you don't get the\nfeeling that you're heard anymore you\ndon't get the feeling that\nwhatever you put out there is\nhaving an impact\nuh which is something that we would like\nto prevent\nso that was scaling up and scaling\nout is actually\nbesides adding more participants to a\nsingle deliberation it's having more\ndeliberations so\nthis way we can address multiple topics\nuh\nyou know you can have one deliberation\nin the morning and a deliberation\nat lunch and a deliberation in the\nafternoon\nwith a bunch of different people but of\ncourse as you're having more\ndeliberations it becomes\nincreasingly difficult to keep track of\nyou know what arguments was i\nusing there and how far along in the\ndiscussion was i\nin another deliberation uh so it becomes\nreally difficult to keep track of where\nyou're at and also this scaling out\nagain it shows a really big\nproblem with\nphysical and synchronous deliberations\nright so you can no longer\norganize so many deliberations all the\ntime that are happening\nsomewhere\nyou need some kind of online platform to\ndo this\nall right so uh next up we were thinking\nabout okay so\nuh in scaling up this deliberation\nwhat are the steps that we need to take\nto help humans\nso how do we inform them how do we you\nknow help them\nactively uh produce something that they\ncan use in\nthis deliberation and the first\nproblem that we\nare starting to address is the uh\nrecording of progress\nso like i just said uh keeping track of\nwhere you are\nor where you have left off in a\ncertain deliberation\nor where others have left off for you to\ncontinue with\nis kind of a key component of our\napproach\nand furthermore once you have this uh\nrecord of progress uh you can also use\nthis record to do other things\nthings like the analysis of the content\nthat there is or some kind of meta\nanalysis\nof the progress that is being made\nit could also potentially help with\nexternal 
communication so let's say\nyou have a bunch of meetings during the\nday and you need to provide a report of\nthose meetings\nto a supervisor you basically look back\nthrough the records that you have uh\nwhich are ideally automatically\nsummarized for you\num based on the minutes that are\nthere\nso this is the type of thing that we\nenvision to be there\nand ideally this should be\nautomated and should\npotentially also be personalized uh so\npersonalized in this case means um\nfor a specific person what is the most\nbeneficial way to help that person\nand we'll discuss this further a little bit\nwhen we're talking about values in\nenrico's part again\nand lastly there's the case of\nfacilitation or moderation\nin the deliberation uh like i also\nmentioned before\npeople tend to get off topic uh\nwhen there's a lot of people\nparticipating in the discussion\nso you need a method for keeping people\non topic or\nasking people who did not have their say\nyet or\nsome way to kind of you know bring\neveryone around the table and make sure that\neveryone is participating\nso in some ways you need to\nfacilitate or moderate this\nand yeah so in the broad\nscheme of things we think of this\nas a couple of steps that could help\npeople in the deliberation\nbut really the core component is this\nrecording of progress and we do that\nthrough\nthe deliberation map as we call it and\nso next to this being\na record of progress\nin the future we would like to be able\nto reason over it so these meta analyses\nthat i talked about\nwe would like to also include those\nfrom there\num i should point out so the\ndeliberation map is a map so it's\nbasically containing content\nthat is deliberative and what is nice is\nthat the deliberative content should be\nreasoned so it should contain arguments\nand it should contain\nuh problems that people address and it\nshould contain\nthe way they think about it and\nthat's really the kind of strength that\nwe're striving towards so we're trying\nto make use of these ideals that there\nare in deliberation\nuh for\nstructuring a discussion that's going on\num\nand ideally we would like to do so\nautonomously\nbut we'll see if we get there um\nyeah so of course we're not the first so\nthere are\nsome examples of uh these deliberation\nmaps or\ndeliberation approaches so right now\nyou're looking at one this is the\ndeliberatorium from mark klein\nand um it's a little bit overwhelming\nbecause there's a lot of buttons and a\nlot of stuff going on but what is really\nnice about the example what i wanted to\nshow you is that\nit's focused around finding pros and\ncons so it's focused around\nrepresenting the um you know the key\npoints or the\nkey content that is being discussed\nhere\num and then there's a bunch of options\nlike voting or inserting new ideas\nso it's really really difficult to\nnavigate and it's really hard also at\na first glance to kind of see what is\nin there but i can tell you that the\ncontent that is in here is really nice\nso the content in here is really really\nwell structured
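As a rough illustration of the kind of structure such a map can store — a hypothetical IBIS-style sketch, not the Deliberatorium's actual schema — typed nodes (issue, idea, pro, con) replace the flat reply thread:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str            # "issue", "idea", "pro", or "con"
    text: str
    author: str
    children: list["Node"] = field(default_factory=list)

    def attach(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

# A tiny map: one issue, one proposed idea, one pro and one con.
issue = Node("issue", "how should lockdown measures be relaxed?", "moderator")
idea = issue.attach(Node("idea", "reopen schools first", "alice"))
idea.attach(Node("pro", "limits educational inequality", "bob"))
idea.attach(Node("con", "raises transmission among households", "carol"))

def outline(node: Node, depth: int = 0) -> None:
    # Print the map as an indented outline, one node per line.
    print("  " * depth + f"[{node.kind}] {node.text} ({node.author})")
    for child in node.children:
        outline(child, depth + 1)

outline(issue)
```

Because every node carries a type and an author, a record like this supports both the progress tracking and the meta-analyses mentioned above, which a plain reply thread does not.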
um so i mean social media is not all\nthat bad because what social media is\ngood at\nis kind of providing an intuitive way\nto interact with other people right uh i\nmean\neveryone has used twitter um even\ndifficult things like uh communicating\nin reply threads and\nkind of traversing this tree it is\nquite difficult but\npeople still manage and actually people\ntend to produce pretty\nintricate uh reply structures like\none person replying to another and then\nanother person replying to him well you\nsee people here\njust tend to get sidetracked into an entirely\ndifferent conversation\nalmost but it shows that discussions are\nhappening\nand even these discussions you can kind\nof map out\nbut what it doesn't show you is the\ncontent that is there right so you don't\nknow\nuh you know what these people are\ntalking about you don't know why they're\ndisconnected from other people\num what problems are they raising what\narguments are they making\nso this is largely still not found\nout yet\nand that is the type of information that\nwe would like to actually contain\nin our deliberation map\nso now that we kind of\nsaw what we want from this deliberation\nmap we were thinking okay so we need a\nmethod to fill this\nwe need the method to fill the\ndeliberation map\num and i talked before also about it\nbeing autonomous\nbut there's also a nice part\nthat i can say now which is about hybrid\nintelligence so\nboth enrico and i are connected to\nthe hybrid intelligence center which is\nuh like also uh luciano talked about\na gravitation grant from nwo and\nas a motto they have augmenting human\nintellect so it's really not about\nreplacing humans but it's about\nworking alongside humans using ai as a\npartner\nto kind of increase what humans can do\nso take the best from humans take the\nbest of computers\nand really bring them together to move\nforward\nand why i mean for me\nat least the why is really really obvious\nbecause there's the promise that it's\nbetter for efficiency so you're\nable to faster\ncome to more accurate\ndecisions you know\ncombining the two strengths the\ncomputational strength and the intuitive\nhuman side of things\nbut next to it there's also trust so um\nespecially when you're in a discussion\nyou want to be able to use a platform\nthat you can trust uh\nand especially if\nautonomously your ideas are kind of\nmapped for you in some kind of structure\nyou need to make sure that you have some\nkind of control over it but you also\ntrust that the ai is doing what you want\nit to do\num well hi provides some kind of\nway in to structure this to reason\nabout it\nand it does so through four\ndifferent research pillars\nwhich are the care pillars so care\nstands for collaborative\nadaptive responsible and explainable\nand i would like to highlight for\ninstance explainable so you could say\nwell explainability what does it mean\nit means that an ai algorithm has to\nexplain why it came up with a certain\ndecision\nso why did you use my argument and why\ndid you insert it here at this point in\nthe map\nwell we would actually like to also show\nthat entire process of the ai making\nthat choice um so you would be able to\nkind of verify\nand trust what is going on there\num yeah so on top of that\nuh it also again provides a feedback\nloop to the human so you would be able\nto kind of look at the process so why\ndid the ai think\nthat i meant to say the argument\nthat i was making there so what did the\nai think of that and\nuh kind of reinforce your own idea of\nwhat you're saying\num yeah so i think at this point\nis where we have uh oh wait yeah so\nalmost\num so we talked about\nhow to help the scaling up of a\ndeliberation\nhow to help people in making sure 
that\nwe can scale up\nright some of the examples of the\ndeliberation mapping itself and\nthe method that we wish to use but we\ndidn't talk about what we want the\ndeliberation map to contain\nbut before we go there i think we will\ntake some time to take a couple of\nquestions if there are any\ngreat thanks michiel so far thanks\nenrico uh\nyeah do we have questions please you can\nraise your hand\nhere virtually speaking so that yeah\num\nokay we have one uh ufo\nokay you please correct me if i'm\npronouncing your name wrong but you\nhave the floor you can turn on\nyour camera if you want or just speak\nplease sure hi thanks for the talk\ni'm audre welcher from the web\ninformation systems group\nuh what was a little unclear to me was\nabout\nwhat goal you were trying to optimize\nfor so i understand\nthe different needs that deliberation\ncan\ngenerally serve i was wondering whether\nyou're trying to optimize\nfor say a deliberation that can inform\npolicy change\nuh for example or is it the process of\ndeliberation itself that you're looking\nat and i think this is\na pertinent question because this would\nthen determine what the metrics\nthat uh would evaluate your framework\nwould be right\ncould you say something about that\nyeah so i like the mathematical approach\nof this question um so we do not have\nyet\nany optimization in mind the idea is to\nbuild a\nplatform that allows for\ntools for moderation for example that\ncan be then employed and optimized for\ndifferent purposes\nso one can be for example to increase\nparticipation so to make sure that all\nusers\ngive their opinion one could be to\nincrease\ndiversity of read opinions so\nit could be for example exposing\nparticipants to\nsomething that is actually opposite to\nwhat they believe in\nto see how this could increase their for\nexample acceptance of or their belief in\ndifferent opinions so we do not have\nanything specific in mind but\nas i said the idea is to create the\ntools for then optimizing\nas you prefer okay\nso if i may\njust follow up very quickly so while\nyou're building these tools right\nyou uh as you mentioned you probably\nwould want to support\ncertain functionalities which are hard\nto achieve\nin their absence you just mentioned how\nuh providing a voice for every\nparticipant is one of those aspects so\ni think eventually that's essentially\nwhat you would want to evaluate right\nto check whether your proposed solution\nis effective or not so i guess there you\nhave a handful of\nideals as you uh called them earlier\nthat you would look at so is that\nwhat uh you would eventually evaluate\nthen yeah\nthose are part of this kind of uh\ntoolbox as enrico\nputs it right so there's\nalso a number of theories on these yeah\nlike we said these\ndeliberative ideals so there's a bunch\nof them\nand also when\ndiscussing this platform there are\ndifferent kinds of design\ncharacteristics that you can\nemploy and that have a different impact\non each of these ideals right\num and what is also unclear is the\ninteraction between them\nso uh what if i increase the\nstrictness of moderation you know\nhow does it affect all of these\ndifferent ideals that are there so that\ninteraction is completely unknown\nbecause it's\nrelatively new and especially to do this\nonline people don't know about it yet\nbut it would be nice to kind of um\nyou know we're starting to 
build this\nfrom the ground up and as we go along\nwe'd like to\nyou know put out some products and make some\nexperiments here and there to kind of\nsee\nto get a feel for what the interaction\nwould be like sure\nthat is the main idea yeah that's\ninteresting uh\none final comment just to uh share\nthis view with you and then get out of\nhere\nto let others get a chance as well uh so\nin our group we're working a little bit\non trying to understand collective\naction as well in crowd computing\ncommunities so\nthink of online labor uh mechanical turk\nand so forth\nand i think you could probably rely on\nsuch platforms to\ntry and understand team dynamics and you\nknow perhaps some of the other\nideals that you can quickly\noperationalize into an experimental\nframework\nuh you know to try and understand how uh\ndifferent features that you might want\nto build within tools can\nactually play a role in you know\nbringing these\nideals to the fore within your\ndeliberation uh\nwe can chat offline more about this but\nyeah yeah yeah definitely\nyeah definitely just send us an email we\nshould definitely talk about that\ncool thank you cheers thank you thanks\nsuper interesting discussion thank you\nuh lou\nyou have a question you\nwanna\nyes yeah it's kind of in line with\nthe previous\nquestions and recommendations i think um\ni got\na little bit lost towards the end as to\nwhat the actual problems are that you're\ntrying to address um\nand i have a better idea now after\nyour answers so it's like it's a really\nfascinating uh project that you're\nsetting up\nbut maybe the\nquestion would be like what's like\na really small practical problem that\nyou think\nis possible to solve or that's maybe\nsomething that you would typically do\nlike in a physical presence with a\ngroup of people\nand how would you translate that uh to\nan online environment and then\num plus one on what was said about\num\nyou know there's lots of\nexperimenting going on in really\ninteresting communities like uh\nthe mechanical turk workers uh there's a\ncouple of people there that i could\nalso put you in touch with that have\nbeen doing this kind of work for uh\nfor a long time like how to organize\nworkers online and make sure that\nthey can align on certain issues\num in a fairly distributed way so that's\ni guess my question for you and i'm\nhappy to help out offline but\nmy question is like do you have a couple\nof really practical small problems\nin mind or is that maybe a next step for\nyour research i mean that is uh\nwhat we'll get to in the second part of\nour presentation as well\nso thanks for the question because it\nnicely gives us a segue maybe\nbut um so we kind of viewed this as\nor at least as how i understand it we\nkind of took a couple of these ideals\nand we kind of zoomed in on them um\nfor instance what i'm zooming in on is\nso i put it down here as perspectives\nbut what it's really about is\narguments so i'm really looking closely\nat the reasoning that people do\nwhat arguments are they raising uh also\nkind of looking at what is\navailable in ai right so\nuh how far along is natural language\nprocessing how far along is argument\nmining\nuh to what degree can we use what is\nthere reliably\nand uh but that's really zooming in on\nonly like one of these ideals\nso that's just zooming in on reasoning\nso for me\nthat is how i make it 
practical and how\ni kind of tend to scope it off uh i don't\nknow\nif enrico has a similar uh yeah i think\nto answer your question one\npractical idea that we have\nis for example what happens if we\ncreate this deliberation map live\nand the participants can see this\ndeliberation map containing the\narguments that are put\npro or against a certain topic of\ndiscussion how does this influence\nthe discussion is the discussion\ngoing to for example converge and\nhave less repetition simply uh is\nit going to actually produce\nmore arguments simply because you know\nthere's no repetition so this is\nfor example a simple first step\nand what michiel will especially work on\nis the\ncreation of this deliberation map so how\ndo we create these in\nautonomous hybrid intelligence ways\nall right thanks yeah i think like what\nmight help is\nmaybe like because i think there are\nplenty of organizations a lot of online\ndeliberation of course happens on social\nmedia platforms so we don't really have\naccess to understand like\nwhat they do and how they do it and there's\nnot a lot of collaboration um but it\nmight be interesting to see if there are\num also policy makers who are thinking\nabout like what\nwe want to see and that might be a good\nway to kind of\nput emphasis on it because these are\nvery practical very urgent questions\nbut they're super scientific as well\nbecause yeah it's never been\ndone before so\nthat's just uh an encouragement to\nsee if you can seek maybe partnerships\nbecause\nyou're setting it up as a big big thing\nlike try to find partners who can help\nyou kind of make it um\nmore specific because it will still be a\nhighly uh scientific\nendeavor right we are we are\nyeah we are collaborating with niek mouter\nand the\npve people participatory value\nevaluation\nso we're using their data we're\nhelping them and then you know as\nluciano mentioned we\nhave ideas to you know make them\nscale i'm not sure\nwhich direction of scaling this would be\nbut anyway um\nso we're already working with them and\nyeah we can talk also\nwe can go into details about those ideas\nyeah yeah\nit's indeed a challenge right how to\nfind the level of\nabstraction that allows you to\ngeneralize and face different projects\nbut at the same time be\nconcrete to have some more uh down to\nearth kind of solutions it's always\nchallenging and yeah i had a question\nfor you myself but i will keep it to the\nend because i think that's a very\nsmooth transition for you so\ni think maybe you can continue with\nthe slides and then we can open up\nfor questions again later on\nsure yeah okay all right\nso um back to our deliberation map\nand the content that we want to\nstore\ninside of it so there are two things that\nenrico and i are working on so the\nfirst is perspectives\nthat's well the title of my research and\nthe second one is\npersonal values and enrico will tell\nyou later more about the second one but\nfirst\na little bit about perspectives so um\nthe word perspective\nhas many interpretations but the\none that i\nchoose to use here is the one of\nperspective taking\nso actively contemplating others'\npsychological experiences\nuh so informally it's like putting on\nyour social glasses and\nkind of looking at the discussion actively\nplacing yourself\nin another person's mindset and looking\nat how that person 
might be\nviewing the discussion or might be kind\nof weighing off their options um\nso this perspective taking is a\nsignificant mental effort\num but it has been shown to increase\nreciprocity and empathy\nand all these again these\nrelate back to the deliberative ideals\nthat there are\num and if you get\neveryone to do this and also to\nmake sure that they provide their own\nperspective so\npeople should really be not really\nforced but encouraged\nto provide their perspective\non the topic that's being discussed\num you will start to find these\nproblems and criteria and\nsolutions that\nyou didn't know about beforehand so\nspecifically one thing that\nenrico mentioned was this uh\nindependence so that you look uh at what\nthe other people think before you give\nyour opinion and\nyou're being a little bit uh influenced\nby what the other people think\nbut if you're\nlike one step ahead of that um you\nwill find out what these criteria\nare that people actually worry about\nprivately\num and it's highly related to\nalso the reasoning ideal\nso in also providing their\nperspective\nthese people should be uh\nto a high degree uh providing arguments\nand providing\nuh you know why they are drawing\nconclusions and um\nyou know specifying all of this and the way\nwe then look for these perspectives or\nhow i plan to look for these\nperspectives is\nbasically by asking uh who says what\nand uh the what in this case are\narguments\nso i'm trying to make it really concrete\nwhere arguments are kind of the reason\nfor having a certain opinion\nand you would think maybe this is easy\nbut actually it's not\nso in the simple case a person is\nperfectly straightforward in\nuh giving his or her argument but in\nreality it's really complex\nuh there's a lot of you know implicit\ninformation\nalso being captured in the\narguments uh\nhere for example in the first two\nlines so any doctor who tells an\nindividual patient\nthat all vaccines are safe and effective\nis lazily parroting the cdc\nso in this especially in the words\nlazily parroting right there's an\nimplicit meaning that\na computer might not be able to tell\nkind of right off the bat\num but next to this implicit\nmeaning\njust even finding the arguments and\nfinding the entailment between arguments\nand conclusion\nuh and finding their interaction is\nalready really difficult\nso um that is kind of\nwhat for now i'm focusing on and\nalso what i answered\nto lou i'm kind of taking this uh\nreasoning ideal and trying to zoom in on\nit and really\ntry to get to the core problem that is\nthere and see how far along ai\nis and see whether we can actually\nuse that in the\nbroader scope of deliberation\num so this is about the what so next up\nthere is also the who\nso the who is uh related\nso who is having the argument who\nis stating it um uh\nwhat other parties or stakeholders are\nthey mentioning\nso highlighted here are the other\nstakeholders that are being mentioned\nand you get into complicated cases where\nthere's also embedded\nevidence or embedded statements or\nthese very epistemological kind of\nlogical what you think that he\nthinks that he says\nkind of cases so it's a really\ncomplex topic but\ni'm trying to um i mean if there\nare questions i can go into a little bit\nmore detail but i try to keep it on a\nhigh level for now\num 
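A deliberately naive sketch of the "who says what" target structure may help here; real argument mining relies on trained NLP models, but even a marker-based split shows the speaker/claim/premise shape being aimed for (all names, patterns, and data below are hypothetical, not the project's actual pipeline):

```python
import re
from dataclasses import dataclass

@dataclass
class Argument:
    speaker: str
    claim: str
    premise: str | None  # None when no explicit justification is found

# Discourse markers that often introduce a premise for the preceding
# claim; a real system would learn such cues from annotated data.
MARKERS = re.compile(r"\b(because|since)\b", re.IGNORECASE)

def extract(speaker: str, utterance: str) -> Argument:
    # Split once on the first marker: text before it is the claim,
    # text after it is the stated premise.
    parts = MARKERS.split(utterance, maxsplit=1)
    claim = parts[0].strip().rstrip(",")
    premise = parts[2].strip() if len(parts) == 3 else None
    return Argument(speaker, claim, premise)

print(extract("bob", "we should reopen schools, because closures widen inequality"))
# Argument(speaker='bob', claim='we should reopen schools',
#          premise='closures widen inequality')
```

Implicit arguments like the "lazily parroting" example above are exactly where a split like this fails, which is why the human side of the hybrid loop matters.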
yeah so\ni mean when it boils down to it it's a\nvery linguistic challenge um\nand like before we also wish to use\nhi for this so make it hybrid so really\nrely on the human intuition to make\nthe decision where it matters but also\nemploy ai for the\nstraightforward computations um\nyes enrico yeah i think you're up i can\nuh yeah i can just\ncontinue um\nright so perspectives are who says what\nand values are why people say\nsomething\nto keep it simple so values are the\nabstract drivers that are behind\nour opinions and our actions and our beliefs\nso um what you see over here on the\nright is an example of personal values\nand this is an example of basic values\nso these values are general values that\nhave been found\nthroughout the research and\nessentially these are the\nmotivations for our opinions for our\nactions but as you can see here\nwe believe that these values\nare a little bit too\ngeneral a little bit too generic such as\nself-direction or achievement and they\nare very difficult to\nconnect to everyday situations and that\nis why we want instead to use\ncontext-specific values so\ncontext-specific values are instead sort\nof the other side of the coin\nand they are defined within a context\nand applicable to a context so let me\nmake a couple of examples so um\nlet's take the example of\nphysical safety this value is\nvery important in the context of\nlockdown measures because\npeople take actions based on the\nmotivation\nbased on the driver of physical safety\nas opposed to others who\nfor example do it based on freedom\nso it is a value that is applicable to\nthe context of lockdown measures\nbut it is for example not applicable to\nthe context of regulating financial\nmarkets\narguably however the very same\nvalue of physical safety can take\ndifferent forms in different contexts so\nlet's say that if we talk about\nphysical safety in the context of\nlockdown measures then we talk about\nmaintaining distance we talk about\nwearing face masks but if we talk\nabout the same value physical safety in\nthe context of driving\nthen we think about wearing safety belts\nwe think about respecting speed limits\nand signs\nnow these two are essentially\nthe same\nvalue but they take different flavors\ndifferent forms\nin different contexts and if we want to\nconcretely talk\nor if we want to concretely design and\ntalk about a specific\nvalue in a specific context well we\nbelieve that we should use the context\nspecific value\nso for example if we want to create an\nagent that throughout your day helps you\nin\nincreasing your physical safety\nwell that agent\nshould not suggest you to wear a face\nmask\nwhile you're driving in order to\nincrease your physical safety that would\nnot make sense\nthe agent needs to understand the\ncontext where the value is located in\nand that these values are different\nso uh well if we talk about\ncontext-specific values\nthe first step is then\nfinding out which values are\nrelevant to a specific context\nand in order to do this this was the\nfirst step of my phd\nme michiel luciano and some other\ncolleagues developed a methodology a\nhybrid intelligence methodology\nfor uh identifying the values that are\nrelevant to a specific context\nessentially using human intuition\nsupported by\nartificial intelligence i'm not going\nto go into detail\nbut you can contact me for additional\ndetails
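A small sketch of the context-specific idea from the example above — the same abstract value maps to different concrete behaviours per context, so an agent never makes cross-context suggestions. The entries and function names are hypothetical illustrations, not the actual methodology:

```python
# The same abstract value, made concrete per context.
VALUE_NORMS = {
    ("physical safety", "lockdown measures"): [
        "keep 1.5m distance", "wear a face mask indoors"],
    ("physical safety", "driving"): [
        "wear a seat belt", "respect speed limits"],
    ("freedom", "lockdown measures"): [
        "allow small gatherings"],
}

def suggestions(value: str, context: str) -> list[str]:
    # Only return behaviours defined for this (value, context) pair, so
    # the agent never proposes a face mask while you are driving.
    return VALUE_NORMS.get((value, context), [])

print(suggestions("physical safety", "driving"))
# ['wear a seat belt', 'respect speed limits']
```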
and once we have identified which values\nare relevant to a context\nwe can use those values essentially as\na language of communication to\ninterpret what people are saying or are\ntrying to say\nso um instead of having\ndeliberation participants communicate\npurely through their text we can sort of\nfilter this text and using a transparent\nbox i'll get to that in a moment\nwe can instead give to the listeners\nthe comment that the actual person gave\nand the perspective\nthe values that that person\nholds with that comment the idea is that\nif we give a better interpretation of\nwhat people are saying or are trying to\nsay\nthen on the listener side hopefully\nthere's going to be a better\nunderstanding of different opinions and\nthere's going to be a\ncalmer discussion instead of just you\nknow rushing to thinking that\nwell that person must be stupid because\nwell she thinks\ndifferently from me so she must be\nstupid so hopefully\nif we give a\nbetter detailing of what is said\nat these additional levels the more concrete\nperspective level and the\nabstract values level there will be a better\nunderstanding\nof this conversation and as\nmichiel mentioned earlier\nall of this should be done in a\ntransparent manner this algorithm that\ninterprets\nthese comments that finds the\nperspectives and finds the values\nshould be as transparent as possible\nusers participants should have control\nover what the algorithm\nthinks about what they say in this way\nif they have some meaningful control\nover this algorithm they will build\ntrust in it\nand they will be able to sort of use\nthis platform to discuss and hopefully\nhave a better informed\nand civilized discussion about topics\nso essentially this is\none of our final goals to attach to one\nof the questions\nfrom earlier and well we're gonna get\nthere\nat a certain point so that was\nthe presentation and yeah just\nask some more questions\nluciano you said you had one cool so\nthanks enrico thanks michiel that was\nreally interesting\nand yeah let me just add one comment\na small comment before that so this\nwork he mentioned\nit's called axies right it was\naccepted for aamas\n2021 and\nit deals with two different\ncontexts one the context of energy\ntransition\nthe other one the relaxation of covid\nmeasures together with the group from\nyeah\nniek mouter and some other people at the\ntpm on the participatory value\nevaluation so\ndefinitely as soon as it's out i recommend\nit it's interesting\nand we are also trying to understand not\nonly which are the values in another\nwork we are also trying\nto see how much each individual cares\nabout each value so a little bit the\nweight of each so uh now my question is\nyou mentioned in the earlier\npresentation\nabout uh deliberation being a\nway of rational discussion so to\nbring rational arguments but\nit's like\nwe are messy let's not call it irrational\nbut\nat least boundedly rational human beings\nand i just want to ask\nif you have some ideas to reconcile this\nbecause if you want to see\nonly rational arguments but we are not\nalways rational in the way we propose\nour arguments there's a lot of emotions\nthere are feelings there maybe\nthe connection even to values that we\ncan say is that really rational or 
not\nso\ndo you have some ideas on that aspect\nyeah so there i mean the uh this\nfilling of the deliberation map right so\nwe gave perspectives and values now\nas a kind of way to fill it but those are\nkind of uh um examples\nmaybe perhaps of things that you could\nput in this map and what we're trying to do\nis we're trying to get to a point where\nwe say all right so\nthe technology that we employ here or\nthe ai or hi or however\nyou extract the perspectives or values\nwe get it to a point where we can work\nwith it and we can kind of on a high\nlevel\nuse it for helping people in a\ndeliberation\num but next to it there might be many\nother types of\ninformation you can extract\nthings like sentiment things like\nemotion things like you know all these\ndifferent\naspects of what people are saying\num i mean for now also the idea that we\nhave for the representation of this map\nallows for these different uh types\nof information to be included and\nactually persist next to the\nperspectives and values that we\nmentioned for now\num and i mean while i'm looking at this\nso i mean you're right in saying well i'm\ntrying to look at this rationality and\ni'm kind of saying well\ni'm hoping it's going to be a\nself-reinforcing loop right so i'm\ntrying to look at the deliberation as\nthere should be rational discussion\ngoing on so if i then\npresent that to you the rest of the\ndiscussion should become even more\nrational right\nbut it doesn't mean that i kind of\nignore all the other\nstuff that is there it's just one\nway of scoping off the project for\nnow but if it turns out that it's really\ndifficult to do or\ni kind of lose people during the process\nthen\nyou need to critically look and say well\nmaybe the rationality\nconstraints that we have for now are too\nstrict so let's broaden up a little bit\nlet's maybe go\none step lower uh in trying to\nmodel it or trying to\nextract what people think or how they\nthink\nand maybe shift a little bit in what\nwe exactly mean with\nuh the perspective\nyeah okay\nnice thank you very much i\nthink that clarifies a lot and\ni do believe that uh\nthis combination of human and ai is\nreally the way to go for these kinds of\ntopics and issues so that you can still\nhave the goal of achieving rationality\nbut still include everyone\nalong the way right okay uh does someone\nhave\nany questions right now can you\nraise your hands please\num ben you suggested here in the chat an\narticle for enrico i don't know if you\nwant to just show\nthat quickly or comment about it\nyeah i could say why i\nthought of it um hi thanks for the um\npresentations very um\nfascinating i was thinking when you\nmentioned black box to\ntransparent that um\nthat was sort of i think used in terms of\num just being able to tell\nwhat the algorithms are doing but\ni'm sort of sensitive to also the sense\nof\nmaybe the uh\nthe perspectives from which we're trying\nto or the tech is trying to understand\nthe values\nand i think how we understand what\nvalues are\nin a situation is affected by\nthe implicit values in how we're looking\nand there's probably some care needed um\nover notions like transparency\num that we don't suddenly think that we\nhave\nactual descriptions or transparent or\nwhite descriptions of the values but\nthere's some kind of essential ignorance\nstill\nwithin the overall situation 
um\nyeah so i just suggested it i thought that\nthat paper sort of talks about black and\nwhite boxes in a really nice way and i\nthought it might um\nbe relevant to the sort of bigger\npicture you're trying to deal with\nyeah yeah yeah thank you thank you\nabsolutely yeah just uh to quickly\nmention uh\nso what we tried to do for example in\nour first project to identify which\nvalues were relevant to a discussion\nto give an example of how we tried to\nlet's say\nkeep humans in the decision-making power\nso the idea that we had was we took\nanswers to a survey\nas luciano said about two surveys one\nabout energy transition one about\nlockdown measures and we tried to\nguide human annotators so\nmostly they were policy experts or value\nexperts\nto guide them through this data set and\ntry to find the values\nin what people wrote so we had some\nguidelines and tried to\ninvite them to see values only in what\nthe participants wrote instead of just\num\ncoming up with these values on their own\nso sort of what this means is that we\ncan\nultimately let's say trace back\nthese values to what people had\nwritten originally in this survey and\nthat should\nsomehow give a little bit more\ntransparency and\nat least make it grounded in data\nwhere the data is what actual people had\nwritten\nso this is just an example\nbut anyway\nyeah anyway it's tough to make\ntransparent boxes\nyeah thanks for that um uh i think that\nbeing able to kind of trace\nthe process is really valuable like that\nthat sounds like i think there\nmight be different ways that\nprocess happens and it might be\ninteresting to compare\nthose so the different ways\nthat values get read\nwould be an interesting topic yeah\nyeah okay so ben thank you very much for\nsharing i think that's\nyeah an interesting reference also\nfor me and i believe for some others also\nhere in this meeting\nin this talk so uh do we have more\nquestions\nmarcos please go ahead hey\nhello uh sorry for not turning on my\ncamera i don't know it just\ndoesn't work no problem okay\num i was thinking when you were\ntalking about um context specific\num personal values taking this into\naccount\nand if you are considering that some\ncollective decisions\nthey need to match diverse\ndeliberations\nnot very much uh just help every person\nor group to\ndeliberate or create a consensus\nor that calls for\nspecific deliberations but um\nshowing how uh different group\ndeliberations can be met\ntogether to um\nto make a collective decision i don't\nknow if you get what i'm trying\nto say\nnot really okay well um\ni always give this example when you're\ndesigning a neighborhood for example\na lot of people have different desires\nyou have to\nmatch all those desires as much as you\ncan for designing this neighborhood for\nexample\nso it's more than just giving people\nwhat are all those values involved\nand the concept of every value but\nthe final decision needs\nthe match of those\nand it's different from a consensus it's\nnot a consensus of\nwhat is better for all or something like this\nbut to show people that they can be\nmatched\nin a way you know and i think\nartificial intelligence\ncan help maybe and when you talked about\ncontext-specific uh values and showing\nthis\ni think this maybe can get at this\nspecific way of helping collective\ndecisions\nthat's why i'm 
So, do we have more questions? Marcos, please go ahead.

Hello. Sorry for not turning on my camera; it just doesn't work. I was thinking, when you were talking about context-specific personal values being taken into account: are you considering that some collective decisions need to match diverse deliberations? Not so much helping every person or group deliberate or reach a consensus, but showing how different group deliberations can be brought together to make a collective decision. I always give this example: when you are designing a neighborhood, a lot of people have different desires, and you have to match all those desires as best you can. So it is more than showing people which values are involved and the content of every value; the final decision needs those values to be matched. And that is different from a consensus about what is better for all; it is about showing people how their values can be matched. I think artificial intelligence can maybe help there, and the context-specific values you showed could feed into this specific way of helping collective decisions. That is why I am raising this.

Yeah, absolutely. That is also why we think of Niek Mouter and the people behind participatory value evaluation; that is essentially what you are talking about. The idea is that instead of reading which policy options people would prefer, for example how they would prefer to relax lockdown measures, you read the values: the reasons why people would like to relax certain policy measures. This is much more informative to policy makers, especially in situations where things change rapidly, as with corona, because understanding the values behind people's preferences still helps. So essentially, instead of aggregating over preferences, you would aggregate over values, and that would be very helpful for a moderator, a policy maker, or a neighborhood designer.

Thank you very much.

Great. We are almost running out of time, so I would like to say: thanks Enrico, thanks Michael, for the very interesting talk, and thanks everyone for joining us. See some of you again next week.

All right, thank you, and feel free to contact us personally with any kind of questions; we are open to discuss.", "date_published": "2021-02-03T13:18:47Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "b33a51520f8e84578be9691366d72bb5", "title": "Embodied manifestos of human-AI partnerships (Maria Luce Lupetti) - 1st AiTech Symposium", "url": "https://www.youtube.com/watch?v=tVohh8Za2fk", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Okay. In this project I am collaborating with colleagues on this concept of embodied manifestos of human-AI partnership, which consists in developing a method for exploring narratives and changing design approaches to AI. The first question you may ask is: why focus on a design method for exploring narratives, when the overall theme of this initiative is how we can achieve and maintain meaningful human control over AI? Well, when we address this question we implicitly introduce some of the most recurring elements of stories, such as characters. We have the potential hero, which is us as designers and engineers; we have implicit dispatchers, the institutions calling for initiatives to develop responsible AI; and then we have a dragon, which is our AI. It is not good or bad per se, but it can threaten our princess, which is humanity, society, the collectivity, so we need to keep it under control somehow. And then we have the most important storyline function, the villainy: the potential lack of control, the potential misuse and abuse of AI. So we are called on a mission to achieve and maintain meaningful human control.

When we address this question, then, we are implicitly calling for a change in the narratives we adopt for designing AI. But alternative to what, exactly? As introduced by Elisa, when we design, especially AI, we rely on theories, and we may not be aware of them: we make hypotheses, we set requirements, and then we develop, test, and feed the results back. What we are not aware of is that there are narratives affecting
the way we understand technology and the way we think it should be designed. This is not a problem per se, but take an example: the museum robot. We can identify two dominant narratives surrounding robotics. One is positive and sees in robotics an opportunity for efficiency and a better lifestyle; the other is more negative and sees robots as a threat to humans, especially in the logic of replacement. Depending on whether we are closer to one narrative or the other, we may end up designing something very different: in the first case a robot as a museum guide, in the second a tool for museum guides, which are substantially different things. Again, this is not a problem in itself; these are just tentative approaches. The problems come when we move into the evaluation, take out insights, and generate our body of knowledge based on evaluations that are themselves shaped by narratives. In the first case we may end up focusing on efficiency, naturalness, and how good our robot is at replacing the human museum guide; in the second case, on usability aspects and how good the robot is at supporting the museum guide. These results then feed back into our narratives, reinforcing them and never encountering each other: whoever is in the first loop is, in most cases, not having a conversation with whoever is in the second.

So the aim of my project is to explicitly explore these narratives through the concept of embodied manifestos, with the final aim of changing our approach to the design of AI. Embodied manifestos are to be understood as artifacts designed with the specific intent of translating, and admittedly simplifying, abstract concepts related to narratives about AI. You may wonder, again: why artifacts? Well, I come from a faculty of industrial design, I have a background in design, and to my knowledge artifacts are already widely used in research to critically address concepts and issues related to technology: for reflecting on the design activity itself, for establishing critical areas of concern, for advocating research agendas and changes in approach, and for translating and communicating abstract concepts, which is everything I am willing to do. In design practice, too, artifacts are increasingly used to explore the gray areas between dominant narratives. In projects by Dunne & Raby, for instance, robots are designed as things that are smart and can perform very intelligent tasks, but through the concept of neediness they establish an interdependent relationship with humans: through artifacts, these designers reframe the relationships we can establish with robots. So in design it is already common practice to use artifacts to broaden our current narratives and to suggest counter-narratives. In most cases, though, these activities may be seen as artistic interventions rather than rigorous research, also because the evaluation phase, as we understand it when we design technological solutions, is often left out, and the theories generated are very often implicit. So my plan is to focus exactly on this
last part: trying to systematically prototype not one single alternative narrative but a spectrum of narrative alternatives, and to compare them through systematic observation and comparison in the evaluation phase. Artifacts enable me to create experiences and to observe these conceptual implications in practice, so that the outcomes and insights I generate can inform an actual theory about designing AI. Just to give an idea of possible next steps and case studies, which I still need to decide: one example might focus on smart assistants and the narratives surrounding privacy; another on service robots and the narratives around replacement; another on autonomous driving and narratives about moral responsibility. I am still exploring the possibilities, so if any of you have cases to suggest, you are very welcome to come talk to me.

To conclude: my aim is to use embodied manifestos as generative research tools, which will enable us to escape the trap of dominant narratives; enable concrete experiences, both for us as designers and for potential audiences and users, as well as discussion of abstract concepts; and probe principles, values, and features that are worth addressing when designing AI. Overall, these embodied manifestos are a way of envisioning and exploring how we can ride our dragon. Thank you very much.", "date_published": "2019-10-29T16:23:35Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "f82f6de158f1cd7de7a6b80373057bbf", "title": "Explainable AI with a Purpose (Emily Sullivan)", "url": "https://www.youtube.com/watch?v=XDwvKiWYoUg", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Yeah, great, thanks for having me.

Algorithms are involved in more and more automated decision processes. They determine the music we listen to and the kind of news we see on social media; they can be used in healthcare decisions, perhaps especially in this corona crisis, when hospitals are overwhelmed, deciding who may or may not get treated; they are even making parole decisions about who should be released from prison and who should stay longer. There are lots of issues involved in this proliferation of automated decision processes, but one of them is that we want to know: why this decision? Why am I being denied medical care? Why am I being denied parole? Why am I seeing this kind of news in my feed? This is especially important when these models are black boxes, in many cases opaque even to the people who make them. The claim I want to make here is that before we can talk about how to do explainable AI and answer this 'why this decision', we need to know what the purpose of these explanations, and of explainable AI, is to begin with. That is the claim I am going to argue for today.

The outline: first I am going to talk about what I call the function approach to explanatory standards, to motivate the methodology I am using. I will look at explanation in philosophy of science in particular as a kind of inspiration, because that is where my training is. And then I will look at
the various purposes and functions of explanations for these types of automated decisions: very briefly, technical functions, commercial functions, and then certain social and ethical functions related to the GDPR.

First, what is this function approach to explanatory standards? It is a type of conceptual analysis, a philosophical method; you can see it nowadays in epistemology, with Craig and Michael Hannon as examples. The basic idea is that we can capture the nature and value of a concept, norm, or practice by reflecting on its function or purposes. The question is: what function does concept, norm, or practice X play in society, or in human endeavors? We answer it by first starting with some hypothesis about the role a concept has in our lives or in society; then determining what the concept having this role would actually be like; and then examining the extent to which the concept so constructed matches our everyday notion, or conflicts with other intuitions we have.

Here is an example, not to do with explanation: the concept of a good person. What is the purpose of the concept of a good person, and how might that tell us what it means to be a good person? Start with a hypothesis, just throwing it out there: the concept of a good person exists just to let us predict and control people's behavior. Then we ask what having this concept would actually look like. If that were really its purpose, we would only use the concept of a good person where prediction and control are possible, and morality would be relative to the views of those in power; that is what this view would mean. Then we ask whether this matches our everyday notion. It really misses the point that morality has some kind of objective structure, and that we want to apply the concept in cases where prediction and control are absent. So it seems this is not the purpose of the concept; perhaps a better candidate is something like providing exemplars for people to learn from and helping motivate people to be moral. That is just an example of how the method might work.

When we talk about explanations, we can apply the same method: identify the purpose of an explanation in various contexts, especially the context we are interested in here, the algorithmic decision context; then ask what criteria explanations need in order to satisfy those purposes; and then ask whether our conclusions have any
glaring mismatch with the common usage and practice of giving and receiving explanations, such that we either adopt the starting hypothesis or revise it.

Why take this approach to explanations? It can show why certain concepts are important by looking at their purpose and function in society. It provides a clear way to delineate criteria for success: if we know what explanations are supposed to do, we can measure whether explanations actually do that thing. And it can resolve conceptual conflicts and avoid verbal disputes, because we define the concept in terms of purpose and function. This is especially important for cross-disciplinary work, where one of the problems is that people use terms differently: it is hard to pick up a paper in another discipline and know what is going on. With a function approach we can look for shared purposes instead of shared terms. 'Transparency' in philosophy may be used quite differently than in computer science papers, but there may be another concept in those papers getting at the same function, which we can then use for cross-communication.

So what are the functions of AI explanations? Since my background is in philosophy of science, I will first look there for inspiration. Work on the nature of explanation has been going on in philosophy of science for a long time, and there are some basic ideas philosophers agree on, even though there is much they disagree about. One is that an explanation is a set of propositions or sentences that answer a why or how question, and that we can explain things like singular occurrences, events, patterns, or regularities.

There are two main functions of scientific explanation. The first is to correctly describe relations in the world, a kind of intrinsic or ontological function. The second is epistemic: the explanation is supposed to enable understanding in the person receiving it.

Because of this, there are success criteria for a good scientific explanation. First, since it tries to describe relations in the world, it needs the right kind of structure: many philosophers argue that explanations must be asymmetrical. For example, we explain the length of a shadow in terms of the height of the flagpole; we cannot explain the height of the flagpole in terms of the length of its shadow. There is a clear direction that explanations can take. Second, a good explanation needs to highlight the relevant features. For example, if we want to know why this artificial plant isn't
growing, we should not be talking about the fact that I have not watered it: I have not watered this plant in a year, but that has no relevance, because the plant is not growing simply because it is artificial. Even though it is true that I have not watered it, it is irrelevant to the explanation of why this fake plant is not growing. Third, an explanation needs to be truth-conducive: it cannot contain false information. And lastly, it needs to enable what philosophers call cognitive control, which is necessary for understanding. Understanding consists in being able to answer what-if-things-had-been-different questions: having an explanation means there is a set of questions about the phenomenon that you can answer, and that gives you a kind of cognitive control over the subject matter. An explanation is supposed to let you answer nearby what-if-things-had-been-different questions.

Going back to the main steps of the function approach: in the scientific explanation context, the purposes are to describe the world and to enable understanding; the criteria are that explanations be truth-conducive, be relevant, and provide cognitive control; and as for glaring mismatches, for now I will just say no, though philosophers of science might make this story a little more complicated.

Okay, so how does all this bear on explanations of AI or automated decisions? There are various functions and aims such explanations can have. First, there are purely technical aims, and I think a lot of the research on explainable AI is about these. These are explanations really meant for developers, people working with machine learning models: explanations to debug the model, to improve it, to help developers understand how a model works in order to implement or use it. If that is the aim, the explanation will be quite different from one used in another context; if I am not a developer, I will not understand these explanations, and I probably should not have to, because they are meant for developers. (See the sketch just below.)

But there are lots of other aims explanations can have. This is from a 2007 paper by a former colleague of mine at Delft, Nava Tintarev, on the various aims in research on explanations of these kinds of automated decisions. I want to highlight three in particular: trust, increasing users' confidence in a system; persuasiveness, trying to convince users to try or buy something; and satisfaction, increasing the usability or enjoyment of a system or platform. These aims are, in some ways, really commercial aims. They do not have to be; trust can matter in non-commercial instances. But they really are used in commercial instances.
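[Editor's note: to make the developer-facing, technical aim mentioned above concrete, here is a minimal, self-contained sketch of one kind of explanation a developer might use for debugging: a simple input-perturbation sensitivity check. The toy model and all names are hypothetical, not from the talk.]

```python
def sensitivity(model, x: dict, delta: float = 1.0) -> dict:
    """Developer-facing explanation: how much does each feature move the score?"""
    base = model(x)
    effects = {}
    for name in x:
        bumped = dict(x)
        bumped[name] = x[name] + delta   # perturb one feature at a time
        effects[name] = model(bumped) - base
    return effects

# Toy scoring model, a stand-in for any black box under development.
def toy_model(x: dict) -> float:
    return 0.8 * x["income"] - 2.0 * x["missed_payments"] + 0.1 * x["age"]

applicant = {"income": 30.0, "missed_payments": 2.0, "age": 40.0}
print(sensitivity(toy_model, applicant))
# A developer can spot bugs here, e.g. a feature with an implausibly large
# or wrongly signed effect; an end user would get little out of this output.
```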
For example, if you go on Amazon wanting to buy a slinky, you get all these recommendations of what to buy next, because apparently if you are interested in slinkies, that is all you are interested in. If you ask why Amazon is recommending more slinkies: because people who buy one frequently buy others, 'frequently bought together'. You also get an explanation of why you see these particular slinkies (the one you can wear as a bracelet seems very interesting): you see some of them because they are sponsored, someone paid for the placement, and that is the stated explanation, 'sponsored products'. And you see other products because, as the interface says, that is what customers buy after viewing the main item on your screen.

If we think about these explanations of why we are seeing products selected by an automated decision process, and we take their aims to be trust, persuasiveness, and satisfaction, then the standard for whether these are good explanations depends on exactly that. If the explanations did not actually increase users' trust or purchases, Amazon would change them; and when they test explanations, they ask questions like whether people actually bought the products, or whether trust in the system increased.

So how do these commercial functions compare with the functions of scientific explanation I discussed a moment ago? The intrinsic function, describing how relations in the world actually are, is best seen in the aim Tintarev calls transparency. But the epistemic function of understanding is pointed at quite different ends in the commercial space. The goal is not understanding the phenomenon, or even how the model works at all; nor is it understanding yourself and your interest in buying slinkies. It is understanding the interface, the explanation itself. The function of the explanation is really just to serve the platform in these commercial cases. That means truth-conduciveness is not a standard in the same way: if the function of the explanation is just to convince you to buy something, it does not have to be true at all. It does not have to be true that other people looked at more slinkies after buying one, as long as the message got you to buy more of what Amazon wanted to sell, or increased your trust. What counts as relevant is tied to the success of the platform, to what they are trying to get. And things users might want to know are not made explicit: in 'what other items do customers buy after viewing this item', what is implicit is that Amazon tracks users' behavior, and tracks it in a specific way. That information is hidden in plain sight; it is not highlighted, even though it might be of interest to the specific users of the platform.
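[Editor's note: a tiny sketch of the point just made, with hypothetical names. When the function of an explanation is commercial, the displayed explanation string can float entirely free of the mechanism that actually ranked the items; nothing in the setup enforces truth-conduciveness.]

```python
# Items are actually ranked by how much the seller paid for placement...
def rank_items(items: list) -> list:
    return sorted(items, key=lambda it: it["sponsorship_fee"], reverse=True)

# ...while the user-facing explanation tells a different, friendlier story.
def explain(item: dict) -> str:
    return f"Customers who viewed this also bought {item['name']}."

items = [
    {"name": "glow-in-the-dark slinky", "sponsorship_fee": 9.0},
    {"name": "bracelet slinky", "sponsorship_fee": 2.0},
]
for it in rank_items(items):
    print(it["name"], "->", explain(it))
# Nothing here requires the explanation to be true; its success is measured
# by clicks and felt trust, which is exactly the worry being raised.
```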
Okay. So what I have done so far is make a case that, depending on the function of an explanation, the criteria for success, and the kinds of things that should be included in the explanation, are quite different. Those were the commercial functions. There is also a whole other dimension to explanations of automated decisions: their social and ethical functions. If we move away from using explanations in a commercial sense, to promote platform use or buying behavior, and move to these other functions, we get different criteria again. What I want to do here is look at the GDPR. I am in no way a legal scholar, so what I say may not be legally precise, but there are some ethical norms that stand out when we look at the GDPR and the right to explanation.

I am not going to read all the text, but some concepts really stand out. The first is profiling and processing: the idea is that any time an automated decision makes a significant decision about you, you have this right to explanation. The conditions of particular interest are instances of profiling, meaning any form of automated processing of personal data, and of processing, meaning any set of operations performed on personal data; a particular concern is analyzing or predicting aspects of a natural person's performance at work, economic situation, health, preferences, interests, and so forth. It also talks about safeguarding rights and freedoms, by which it means the right of access to the information collected, the right to meaningful information about the logic involved, and the obligation to provide information in a concise, transparent, intelligible way. It also talks about the right to consent, and for users to express their point of view and to contest the decision of the automated process. And decisions should not be based on special categories like ethnic origin, political opinions, or religious or philosophical beliefs; all my beliefs about philosophy of science should not be discriminated against or used by an AI system.

What we see through all these concepts is the idea that explanations ought to give users a sense of control: feedback on the system, the ability to contest the decision, having to consent to it in some sense. And explanations provide a kind of oversight: oversight against discrimination and against infringement of other rights and freedoms.

This type of control is not really the cognitive control of the epistemic case, which just provides understanding as in scientific explanation, but actionable control: something you can do something with. It is not just that I can understand whether things would have been different; I can actually do something with that, there are aspects I can contest, consent to, or give feedback on. And we have this idea of promoting oversight: the explanation needs to make explicit the things that bear on people's rights and freedoms.
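[Editor's note: a minimal sketch of what this kind of oversight could look like in code. The feature names are illustrative and this is in no way a compliance implementation; the sensitive list loosely follows the GDPR Article 9 categories named above, plus gender, which is the talk's own discrimination example even though it is not an Article 9 category.]

```python
# Sensitive attributes an explanation should surface (illustrative subset).
SENSITIVE = {"ethnic_origin", "political_opinions", "religious_beliefs", "gender"}

def disclosure_check(features_used: set) -> list:
    """Return the sensitive features an explanation must make explicit."""
    return sorted(features_used & SENSITIVE)

used = {"browsing_history", "gender", "postcode"}
must_disclose = disclosure_check(used)
if must_disclose:
    print("This decision used:", ", ".join(must_disclose))
    # The ethical function of the explanation: surface this to the user
    # even if doing so undermines trust in the platform.
```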
This is quite different from scientific explanation, where we were just talking about understanding the world, and quite different from the commercial case, where there is no concern at all for making explicit the things that work against people's rights and freedoms. This kind of oversight may mean that a certain ad explanation on Amazon or Facebook needs to include information people would want to know about whether they are being discriminated against. If the model used something like gender to decide that an ad is shown to you, then there is an ethical function of explanation here: that aspect needs to be included in 'why you are seeing this ad', namely that it was based in part on gender. And that may conflict with the aim of trust in the platform: maybe I will not trust Facebook anymore if I am seeing all these ads just because of my gender.

So what counts as relevant for the explanation changes, as I was saying, and the kinds of what-if questions that must be answerable also change: what if Facebook did not have my gender information, or other types of discriminatory information about me; how would that change which ads I see? The goal in these ethical and social explanations is not understanding the phenomenon, or yourself and your interests, or the world, or even, in a broad sense, how the model works; the model works in various ways, and the question is what is relevant to you. Taking the GDPR as inspiration, the goal is understanding how the algorithm fits into your larger value framework, such that I can contest it if I want to: say that it should not be making decisions based on these criteria, that it should not be using this type of personal data of mine. And if that is the function of the explanation, to show people how the algorithmic decision process fits into their larger value framework, then the things we need to know about the model are quite different from what developers need to debug something, or what is needed to try to persuade somebody to buy something.

So, to sum up what I have been saying: depending on the purpose of an explanation, the norms of success are different, and there are several possible purposes or functions explanations can have. The social and ethical functions matter just as much as the technical or epistemic functions, and commercial functions may actually hinder the social and ethical functions of these explanations. What we really need is a larger discussion of what the purpose or function of these explanations should be in various contexts. Should they be freedom-preserving? Should they be purely epistemic, in the sense that we really just need the bare bones of how this works? Or do they need to create feelings of trust in someone?
Depending on what these purposes or functions are, that is really going to change the nature of the explanation, and the various research projects we might engage in to figure out how to deal with these black-box models. All right, that's it, thanks.

Great, super interesting, thank you very much, Emily. If someone has a question, you can speak up, type it, or just raise your hand. I see some hands, but I think they are just clapping for you. If no one else starts, I can kick off with one question. I really like the framework you propose, the differentiation regarding purpose, and I was most interested in what you said about control: first you mentioned cognitive control, and then later, when you went into the GDPR, the difference between cognitive control and actionable control. Could you expand on this a little? Here at AiTech, and in the broader community, we work a lot with the concept of meaningful human control, and that is not only about tracing responsibility after the fact if something happens, but also about how to provide explanations and information so that one can be aware of responsibility while using a system. So: cognitive control and actionable control, please.

Yeah. Cognitive control really just means you have some general idea, in your head, of how things work. With scientific explanation, say, why the window broke when a kid threw a baseball at it, you have a real sense of why that happened and of how things would differ had circumstances differed: if the kid had thrown a snowball instead, the window would not have broken. With these AI decisions, you can have a sense of cognitive control over how or why the decision was made; you might even know that kind of counterfactual information, like: if my personal data had been different, I would have gotten a different result, for this and this reason. But it does not mean you can do anything with that knowledge. The idea of actionable control is that you can actually do something with it. The knowledge is such that you could sue the platform if you needed to: you got the right information, that the decision was discriminatory, and it was not hidden from you, so you can file a lawsuit (maybe it is classically American to go to lawsuits). Or, at the other extreme, maybe there is a way for you to update the system: it is using information about you that is simply incorrect, and the explanation should give you an action you can take, say, fix this piece of information, and then the decision would be better in line with reality. You might understand everything about the decision yet not have the means to do anything; that is what actionable control is trying to get at.
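[Editor's note: a compact sketch of the distinction just drawn, using a hypothetical loan rule. A counterfactual statement on any feature supports cognitive control; restricting attention to features the person can actually change is what pushes it toward actionable control, in the spirit of algorithmic-recourse work. All names and numbers are illustrative.]

```python
def approve(x: dict) -> bool:
    """Hypothetical loan rule, a stand-in for any black-box decision."""
    return 0.5 * x["income"] - 3.0 * x["missed_payments"] - 0.2 * x["age"] > 5.0

MUTABLE = {"income", "missed_payments"}   # features the applicant can act on

def recourse(x: dict, feature: str, step: float, limit: int = 50):
    """Smallest change to one feature that flips the decision, if any."""
    y = dict(x)
    for i in range(1, limit + 1):
        y[feature] = x[feature] + i * step
        if approve(y) != approve(x):
            return {feature: y[feature]}
    return None

applicant = {"income": 20.0, "missed_payments": 2.0, "age": 40.0}
print(approve(applicant))                   # False: denied
print(recourse(applicant, "income", +1.0))  # {'income': 39.0}: actionable advice
print(recourse(applicant, "age", -1.0))     # {'age': -6.0}: 'flips' only at an
# impossible age, a what-if that gives cognitive but no actionable control.
```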
Thanks, thanks very much, very good. And so it is not only about the final decisions that have already been made; you could think about this as a process: you give a decision and an explanation, and someone can interact with it to find a more ethical or beneficial outcome.

Yeah, and it can even go a bit broader. If you have a credit decision, whether or not you get a mortgage or are denied a credit card, and all the reasons given are things outside your control, maybe that is a bad decision system. If it only turns on things like my age and my race, I cannot control those; there has to be some aspect of the decision that is within my control, and if not, maybe that infringes on my rights.

Nice, that is pretty interesting. We have a question from Jenny.

Hey, yes, thanks very much. Hi Emily, thanks for a really interesting presentation; you are touching on some really great questions. I am curious about your thoughts on whether asking the kinds of questions about explanations that you presented can help us avoid technical solutionism, AI solutionism. What I mean is: if we determine, as you say, the purpose of explanation in a context and the corresponding criteria, can that help us realize that in certain contexts it is essential that humans interact with humans, that humans are the ones taking decisions, and that algorithms are at best in the background, or maybe not part of the interaction at all? Thanks.

Yeah, I definitely think explanations need to play a role here. I really worry about cases of AI decisions running amok without any humans in the loop. Just looking at the medical case: it is really important to preserve the doctor-patient relationship, where you can give and receive reasons for certain courses of treatment. If that is all offloaded to some AI system with no explanations at all, you really lose that important aspect of healthcare. So yes, I think explanations are necessary to preserve the types of relationships we want in society, and they might also help people feel more comfortable about AI entering certain areas in some way.

What I find interesting is that, as I was listening, especially when you made the point of going back to the concepts and asking the fundamental question, what is the purpose, I started thinking that in some contexts it can help us realize things. Let me take a context I am dealing with in my work: hiring and the use of algorithms. Some of the algorithms out there claim, 'based on your facial expressions, we are going to make a judgment about your willingness to learn'. In common everyday interaction, many of us would find it really disturbing if someone explained the decision whether to hire you or not by your facial expressions, claiming that somehow says
something about your motivation to learn new things. So I am thinking: can that be part of our ethical reflection, to help us understand that maybe there is a contradiction between the concepts we are actually working with, say job competence, and the way we are assessing that concept?

Yeah, I think there is a lot there. One thing is whether these models are actually tracking the real difference-makers: a real difference-maker for being a good employee might not be your facial expressions in some stilted, weird online interview. In that sense the model is not getting at what is important, and explanations can help, because they can point out: okay, if this is the reason we are not hiring this person, that is not really a good reason, so maybe we should find a different one. But it is also important to know the function of the explanation. If the purpose of explaining the decision to the person is just to keep them from getting into an argument with you about it, then you are going to hide the fact that you used that weird technology and explain it some other way. But if the purpose of the explanation is to actually uphold the rights of that person, then you are going to be required to disclose that you used that type of software.

Yeah, great point, thanks very much.

Thanks, very interesting discussion. Dave, you have a question?

I do, yeah. Firstly, thanks, that was a lovely talk. I really like how digging into what the concept means can help us think through some of these things. The first thing I want to pick up is that it feels like this is getting into a kind of pragmatics of explanation: it is not just about presenting the information, it is about thinking through how the person is going to receive that information. That feels really rich: as well as asking what the explanation is trying to do, it gets into what kind of person we are trying to work with, and we see some different explanation methods. So I am hoping this leads into accounting for the explainee in the explanation. The other thing I am interested in: there is often a feeling that explanations should help people trust these systems more, and I feel like sometimes they shouldn't; they should make people trust them less, in line with the earlier point that sometimes the explanation should make us go, oh, this is not a good system.

Yeah. I do think we should take account of the person you are explaining to. I have done work with Nava and some other people on seeing whether explanations can uphold some of these epistemic norms, like understanding, and other goals users might have; you can design explanation interfaces and user studies with that specific goal in mind, and that helps you build better explanations that fit those goals. And I totally agree with you: one of the problems I see in some of the literature on explanation in computer science, just
on trust, is that they look at whether or not people have feelings of trust, a kind of psychological account of what is happening, rather than at whether the system is actually trustworthy, or whether people are right to trust it. That is a normative question. And if you look at what is going on with ad explanations on Facebook, they are not trustworthy, and if they actually pointed out the things they used to deliver these ads to you, then yeah, I think a lot of people would not trust the system, which is why they are not explaining things that way. One thing I would at least hope for in future data protection legislation is a type of regulation about the kind of information that has to be in these explanations: it has to be accurate, so that if the system actually used certain information, that is in there, such that people can make decisions that are transparent and in line with their value systems.

Yeah, great. I was struck by this when I was speaking to someone from the BBC, the British Broadcasting Corporation, who said: we did a survey and we found that our users over-trust us; they put more faith in our news than we think they ought to. You do not often hear people say that, and it feels like explanation might help there a bit.

Yeah, that is super interesting; that is really cool.

Does someone else have a question for Emily? Perhaps I have a very practical one. I really liked the talk, and I agree very much with the more fundamental question we have to answer, what we need the explanation for, before we actually start working on making the system explainable. My question is: depending on the purpose of the explanation, on the overall goal, would we face fundamentally different challenges in implementing the system that we want to be explainable? I would imagine that if we just want to understand the inner workings of a neural network, that is one technological challenge; but if you want to generate an explanation that provides control to the end user, which is a much more overarching, systemic question, that would be a very different set of challenges. So is there any asymmetry in the way people do research in these directions, and is that linked to the technological complexities associated with the different types of explanations?

Yeah, I definitely think the way you said it is right: if we just want to know more about the system, that is going to be a completely different research project from explaining to end users in a specific way. As for whether they go hand in hand: one of the research questions I am going to be looking at with this Veni project, and that I have worked on in the past, is how detailed about these models we really need to get in order to satisfy these other norms of explanation. It might be that we do not actually need to know that much about how the model works at its nitty-gritty level to satisfy those norms, or it might be that we actually do need to know all the details at this
nitty-gritty level. If we do need to know them, then we need to invest in that technical project first, in order to satisfy those norms; but if we do not, the work can just run in parallel: we do not need to wait until people make huge progress in understanding these black-box models at a nitty-gritty level.

Good, thanks, great. Does someone have a final question? If not, I do. First, I am not familiar with the functions approach you mentioned, which seems really interesting, and it connects a little with the idea of reflective equilibrium, or wide reflective equilibrium: thinking about the moral principles at stake, the moral challenges, and the background theories. Is there a specific connection between these two approaches? Your hypothesis sounds like a moral principle, the concept like the background theories, and matching our everyday notion sounds a little like the moral judgments, in my understanding. Does that make sense to you?

Yeah. I think they can be complementary, for sure. One aspect of the function approach, the last step, is that once you have a hypothesis about the purpose, you need to ask whether it conflicts with other intuitions you have, or with the everyday notions we have, and to answer that question you probably need some type of reflective equilibrium. So they can be complementary in that sense; the function approach, I guess, gives a specific structure to that process.

Okay. And is it a collective process, where all the stakeholders involved should take part, or is it more about the people who are going to receive the explanation?

Well, we have been talking about this on a more meta level, not necessarily about stakeholders, but I think that in a practical case it is important that all stakeholders are involved in the decision, or in the process of talking about norms. It might be, though, that certain stakeholders should carry higher weight in that discussion: if you are a stakeholder who really only cares about profit, you should be part of the discussion, but you maybe should not carry as much weight as the people being discriminated against or having their rights violated.

Okay, yeah, thanks. I think there is a lot to think about; it was very interesting. Thank you very much, Emily, and thanks everyone for joining us. See you next week; you are also invited, of course, always welcome to join us. Great discussion, thanks everyone. Thank you, thanks very much, bye.", "date_published": "2021-02-24T16:01:27Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "bf7f9f4538ddf06aa629801bd7192913", "title": "The future of digitalization (Gerhard Fischer) - 1st AiTech Symposium", "url": "https://www.youtube.com/watch?v=uN3hQlxN_zo", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "I thank the organizers for inviting me. This is
the title of my talk, and as Lisa mentioned already, one of my long-standing interests has been design trade-offs and understanding what they mean. I want to start with a very qualitative diagram saying that maybe the real impact is not so much the technology itself; we should also understand how technology causes cultural transformations. This diagram obviously mentions only a few things, but as we arrive in 2019 and look into the future, some of the big things people talk about are digitalization, cyber-physical systems, big data, AI, and human-centered design. I put in two curves: one that keeps going up, increasing the power of the collective human mind aided by technology, and one going down. At the very beginning I put Socrates: you can study the dialogues between Socrates and Plato in which Socrates argued that reading and writing would be very damaging to humans, because they would give up memorizing things. This was a big debate at the time, and looking back we would usually say that reading and writing turned out to be a good thing; our society spends huge resources on it. But what is the lesson learned as we look into the future? We have to think about whether we are going up or going down.

So: the future of digitalization. As I said, I want to center it around design trade-offs and their consequences for much of what is talked about here: quality of life, human responsibility, value-sensitive design, and human control. And I will argue a little that we should distinguish between AI approaches and what I label human-centered design.

Not being the youngest person in this room anymore, I must say the current hype about AI annoys me a little. You open the newspapers and there is an article about AI, with some prophets telling the world how wonderful it will be based on AI. Here is a diagram of how AI developed historically. When I was a student, AI was already around in different forms, and some of the major ideas were already present at the time. AI had a big hype phase in the mid-eighties, centered around expert systems, and when some of the big expectations of that time did not materialize, there followed a phase called the AI winter. An interesting question you can reflect on is whether the current hype will again be followed by an AI winter.

What influenced me fundamentally in my own career is that in the early 1970s I discovered a book, published in 1963, a collection of articles that is widely recognized as the foundation of AI; all relevant AI work at that time fit into one book. We can contrast this with the history of human-centered design, sometimes called intelligence augmentation, abbreviated IA, the reverse of the abbreviation AI. You may know some of these developments: one big venue is the computer-human interaction conferences, numerous of which I have attended, and there are developments like end-user development, empowering people to contribute their own thoughts and overcome closed systems.

In my own career I lived through these phases: I spent some time trying to understand how these ideas would influence learning, and I spent some time at Xerox PARC understanding human-computer
interaction, where I think it is a fair statement to say that the computers we all have in front of us were really invented in the eighties at Xerox PARC and then made it to Apple. I then became particularly interested in design, and for thirty years we had a center at the University of Colorado, L3D, the Center for Lifelong Learning and Design. The background for some of my remarks is that we tried to build systems ourselves, not commercial systems but what we labeled inspirational prototypes. I have also collaborated with research labs on self-driving cars and mobility for all, one in Munich and one close to Boulder, Colorado, where I live, studying the kinds of ideas AiTech is developing: understanding the implications of meaningful human control for the science, design, and engineering of autonomous intelligent systems, and understanding what meaningful human control might mean.

I want to show you one system we have worked on for a long time: the Envisionment and Discovery Collaboratory. In the middle is a tabletop computing environment; people gather around the table and bring different opinions to bear. There are intelligent system components underlying the system, with simulations, visualizations, and a critiquing component, and it is linked to a vertical board in the background that brings in and locates information relevant to the question under investigation that people are discussing. We used it for urban planning tasks.

In the context of this system, let me mention one aspect of what meaningful human control could mean. Just out of curiosity, who knows what SimCity is? Quite a few. SimCity is a nice game, and since we worked closely with urban planners on these projects, we asked ourselves why systems like it are not used in urban planning. The primary reason for SimCity's limitation is that urban planning addresses open problems in real-world contexts, whereas SimCity represents a closed system: it does not allow the introduction of elements that were not anticipated by the developers of SimCity. One example to make this more specific: say you play SimCity and find there is too much crime. The solution SimCity offers you is to increase the police force. That is a solution, but you have no control to say: I may want not only to fight crime but to prevent it, so I would like to explore what it would mean to increase social services. There is no way to do that. What we tried to do with the Envisionment and Discovery Collaboratory is create a solution space where people could transcend the closedness of the given system and explore the alternatives they found interesting.
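[Editor's note: a minimal sketch of the architectural difference just described, with hypothetical names; this is not the EDC's or SimCity's actual code. In a closed system the action set is fixed by the developers; an open system exposes a registry so participants can add interventions, like 'increase social services', that the developers never anticipated.]

```python
# Closed system: the only levers are the ones the developers hard-coded.
CLOSED_ACTIONS = {
    "increase_police": lambda city: city.update(crime=city["crime"] - 2),
}

# Open system: start from the same actions, but let users extend the registry.
OPEN_ACTIONS = dict(CLOSED_ACTIONS)

def register_action(name, effect):
    """Let participants add interventions the developers never anticipated."""
    OPEN_ACTIONS[name] = effect

register_action(
    "increase_social_services",
    lambda city: city.update(crime=city["crime"] - 1,
                             wellbeing=city["wellbeing"] + 2),
)

city = {"crime": 10, "wellbeing": 5}
OPEN_ACTIONS["increase_social_services"](city)
print(city)  # participants can now explore crime *prevention*, not just policing
```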
So here I come to the core concept, which is design trade-offs. My claim is that design is choice: an argumentative process with no optimal solutions. I argue that design problems have no correct solutions or right answers. Whether a design is right is not a question of fact, as it is in the natural sciences, where if I let an object drop it will go down and not up, so there is a correct, right answer; it is a question of the values and interests of the involved stakeholders. And my argument, which we have explored, is that exploring and identifying design trade-offs is not an approach that limits focus but one that enhances progress, by providing frameworks for moving into a promising future in which all people can participate and from which all can benefit.

We studied a long list of different design trade-offs. The first one, which I will explore a bit more and which plays a role in other people's contributions, is mobility: on one side the more AI-oriented choice of self-driving cars, while a human-centered design approach may instead advance driver-assistance systems.

Let me choose one example of what design trade-offs may do: they may help us identify the real problem. The example I have chosen: on September 11th, 2001, there were terrorist attacks on the World Trade Center and the Pentagon, and people came together and asked how we could avoid this ever happening again. The analysis framed the problem as hindering terrorists from entering the cockpit, and the consequence was to develop secure cockpit doors; if you board an airplane today, you will see big mechanisms securing the door to the cockpit. Then, many years later, there was the Germanwings flight on which a pilot, apparently mentally disturbed (we do not know exactly), steered the airplane into a mountain in southern France, and 150 people died. The solution to the problem as originally perceived, the secured cockpit doors, exactly facilitated this disaster. So now we have to reconsider the problem in light of a better understanding of what happened: the problem now also includes never leaving a single person alone in the cockpit. The method here is that we are driven by breakdowns, and the assumption is that we can anticipate such things earlier, so that we do not have to wait until a major disaster like this happens. Here is a similar overview asking what the real problem is: is it just
problems, which ignores the design trade-offs associated with them. We can uncover unknown alternatives, as I just illustrated with an example. We can avoid one-sided views and groupthink. And maybe we can also better understand the complexity and richness of human experiences: if you think of trade-offs as the endpoints of a spectrum, we can identify interesting syntheses and meaningful compromises in between. The mental horizon shrinks when people no longer think about and explore alternatives. So let me elaborate a little on the artificial intelligence perspective versus the human-centered perspective. AI people, I would claim, often see humans as scapegoats for doing the things the automated system cannot do yet. There is often no clear differentiation between what machines can do and what humans can do, and therefore a balance cannot be reached. There is the paradox of automation: the more reliant we become on technology, the less prepared we are to take control in the exceptional cases when the technology fails. And again, for the different levels of self-driving cars which we saw earlier, I think there are interesting points to be made. Then we can ask what the basis for the current hype is. There is no doubt that, compared to earlier times, AI now has more powerful computers, big data, machine learning and deep learning, and all kinds of predictions. The human-centered perspective, I think, is grounded in the fundamental assumption that humans are not computers and computers are not humans. The research paradigm to which I personally have migrated—being originally very much in the AI camp—is not to analyze humans versus computers so much as humans and computers. Now, the earlier AI hype was followed by an AI winter, and we can study how that migration took place. I already asked you the question: will there be a new AI winter? What is troublesome to my mind is the admiration shown by journalists and self-nominated experts who may not have heard of the concept of AI a couple of years ago and are now disseminating the AI hype. So for me, a challenge is to explore a post-hype attitude toward AI, compared to the current hype. I classified AI people into utopians, pessimists, and realists, and I think it is interesting, even if we don't believe what the utopians say, to hear some of their arguments. They say that we as humans will not be the major decision makers anymore, because the AI systems can do it better; that we should understand ourselves as an intermediate step in an evolution toward super-clever machines; and that if you shoot people into space, this is done much better with robots than with human beings. These utopians have a non-trivial influence. At the other end of the spectrum are the pessimists, who argue that AI has failed or that AI is dangerous, and you can read statements from public figures like Stephen Hawking and Elon Musk, who argued that full artificial intelligence could result in human extinction. I think what this community should perhaps see as one of its contributions is to become AI realists. So I would say AI is too poorly defined—everybody uses "AI", and we haven't heard much characterization of what AI really is—but on the other hand, it is too interesting to be ignored.
There is progress in AI, without any doubt. Another feature is that some of these interesting problems were first tackled by AI research, but once they became better understood, they merged into general computer science and are not labelled AI anymore. One particularly important issue, I think, is that there are no decontextualized sweet spots, which means that without providing context we cannot say whether something is good or bad; we have to look into more detail, we need to explore specific contexts. There was already a slide at the beginning saying that in some ways we do not only create technologies, we are creating new worlds, and we should ask ourselves not only what computers can do but what computers should do. Science and technology are often characterized this way: there are things which we couldn't do, and now we can do them. Autonomous weapon systems, nuclear technology, reproductive medicine, genetic engineering—disciplines maybe not so represented here—and here are some others; the blue ones are closer to the topic of the symposium, and we can look at what the Chinese created with the social credit system, or at the work being pursued on self-driving cars. Asking what can be done is not enough; we should then also ask: should it be done? Nuclear energy is one interesting example, where the Germans determined that they will get out of nuclear energy whereas other nations are still building new nuclear reactors. And I think we can go past the should-it-be-done question: notions of quality of life, ethics, values, impact, choice, control, and autonomy are all associated with design trade-offs. I was surprised—I didn't know the people who gave lectures earlier, I have never met them, yet they had a lot of similar references, and I find it interesting that on both sides of the Atlantic people are aware of these issues: there was a nice report on Ethically Aligned Design; some AI places like Stanford and MIT have created centers for human-centered design; value sensitive design has entered the picture; and Nudge is a very nice book whose core concept, libertarian paternalism, is I think an interesting concept related to meaningful human control. And so I hope, or I wish, or I see, that AI Tech wants to explore similar issues, and we have also conducted symposia where we explored this concept of design trade-offs from different angles. The trolley problem was also mentioned before; there is a big effort at MIT called the Moral Machine. One additional aspect, when the trolley problem was mentioned this morning about a human being able to throw the switch: what should he do, and can he escape the options presented to him, such as the trolley going in one direction or the other? An interesting question also arises if we develop self-driving cars; this is a modified version of the trolley problem. If the car goes straight, it runs into a concrete wall and the people in the car get killed, whereas if it swerves to the other side, the children on the street get killed. If we build self-driving cars, we have to anticipate some of these situations in the algorithm; the information we encode will then steer the behavior of the car in a certain direction. And so philosophical questions of the past are, in some ways, now becoming engineering decisions.
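(Editorial aside, not part of the talk: what "encoding the choice" means at its crudest can be shown in a few lines. Every name and threshold below is invented purely for illustration; real driving stacks are nothing this simple.)

# Hypothetical sketch only: a hard-coded evaluative rule. Whoever
# writes this function has answered the trolley question in code,
# and the encoded judgment then steers the car's behavior.
def choose_maneuver(occupants: int, pedestrians: int) -> str:
    # Encoded value judgment: minimize the number of people harmed.
    return "swerve" if pedestrians > occupants else "stay"

print(choose_maneuver(occupants=2, pedestrians=3))  # -> "swerve"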
How should we develop the software? How should we capture and imagine such situations? I think here, again, we are facing design trade-off situations, and the question is: how well do we understand the problem? There are no answers to these issues yet, but I think they should be investigated. So let me conclude by alluding to something dear to my heart, which has informed my own professional career: the future is not out there to be discovered; it has to be invented and designed. And design is choice, and there are design trade-offs; in our little interactive group, I think what we really discussed were design trade-offs between different approaches. Let me leave you with the next question: if that statement characterizes an essential element, then the question is, by whom? Who will invent and design the future? Google, Facebook, Apple—and this list can be extended indefinitely. Some of these companies have a vital interest that the way the future is invented and designed corresponds to their value systems, and maybe to the viability and profitability of their companies. So the question is: what can we, the people in this room, do? What can AI Tech as a new center do in the years to come? What can we as academics living in universities do, compared to these companies which employ hundreds, maybe thousands, of people? I think we have a responsibility to inform ourselves and maybe to develop alternative visions, to contribute to the goal that the future should be invented and designed. Thank you. [Applause] Thank you very much. We have time for a couple of questions, so I'd like to have the box. Who would like to open the Q&A session? Here you go. — This is a simple one; it's the same question I asked earlier: do you have any good examples of where meaningful human design and human control were applied, where all the different stakeholders were taken into consideration? — Well, rather than looking at the world at large and at very big questions, I tried to bring in an example where we ourselves developed systems, so that we are not only observers of systems coming into the world but try to create inspirational prototypes. The question the system addressed was: should urban environments be developed by a top-down process, where the city council and the professional city planners determine the future of Delft—what it should be, what should be built, what should be allowed and what should not—or should we, as we said, involve all stakeholders? In addition to what I said: the computational environment which we created was really an open world, whereas SimCity, with which I contrasted it, is a closed world. Urban planning is still a big design activity which confronts us in many different ways, and there are many more issues, but I didn't want to spend too much time on them: very controversial issues with lots of design trade-offs are debated in our city, which is quite similar to Delft—there are really different opinions about the direction in which our city should develop. So I would say this was an attempt to study meaningful human control. The next question—I don't know how this is in Delft—is whether the citizens of Delft are really engaged, or whether they say: well, I'd rather go out and
have a beer or sit in front of my TV than go to a neighborhood meeting where a certain issue is debated. If we look at the large scale, at the happiness quotient in different countries, the Scandinavian countries are doing pretty well. If you look at how all the different aspects come together in a society, those countries seem to have a lot of values that are shared by lots of people, and it may be a very homogeneous crowd compared to the US, where we have so many diverse stakeholders; that makes it a lot easier in a smaller-scale country, and the Netherlands is probably very similar. I think scale is another interesting question: the US is a very big country, the Scandinavian countries are much smaller. But for my personal intellectual environment, the Scandinavians contributed very early on with ideas which they labeled participatory design, to distinguish themselves from expert, top-down design: they wanted to give more control to the people who were affected by new ways of automating factories. So yes, Scandinavia, I think, is an interesting area of the world where interesting ideas got developed. — Thank you for your nice presentation. I have a question about the human-centered concept. How should I understand "human-centered"? For example, in our cognitive systems, does that mean that human beings should always dominate in the tasks? — Well, one slogan I had on my slides was: humans versus computers, versus humans and computers. We should understand that a computer can multiply two numbers faster than we can, and a computer can search databases better than we can. When I came to Delft, I had a car with a navigation system, and I relied on it to some extent; I could have studied the different ways to get from somewhere in Germany to Delft, but because I had the navigation system, I did so less. Let me give you one more example. I play tennis, and maybe some of you also play tennis. In tennis, whether a ball is in or out, whether it touches the line, used to be decided by a referee; but now, if there is a debate—the ball is called out but the player believes it was in—he can challenge the judgment, and a computer system, the Hawk-Eye system, comes in and shows whether the ball was in or out. Sometimes it's half a fingernail in or out, and I believe that computers, in their visual perception, are better than we are when these balls are played at great speed. We should test this as much as we can, but the control of the decision-making process—the ultimate decision maker in this case—is the computer system: whatever the referee says is overruled by the computer system. — I think it depends. For computing something, for example, we can say a computer is faster or maybe more accurate than human beings, but in some situations, like self-driving cars, human drivers can be more experienced in dealing with complicated situations. — In this case, yes, without a doubt. But, if I understood your question correctly, you asked whether in all cases humans are better than computers, and I think there are some elements—computers can multiply two numbers faster than we can, and in the tennis example, in visual perception, computers may have an advantage. And since
I see humans and computers together, what you asked is a very different question, because you asked whether there are elements where humans should be at the center of all design considerations—and this is what my talk was all about. I believe human-centered design approaches should put the human in the middle, and not make the human a scapegoat. If you have an automated system like level 3 or level 4 self-driving cars, where the human is brought in whenever the computer system cannot yet do the task, I think that is the wrong overall design outlook. — Thank you. Thank you very much. Is there a final, burning question? Just one more—can you throw the box over? — Yes, thanks. In the physical world, humans have quite a good idea of how potential engineering design decisions impact their lives. To give an extreme example, you could have a community where people have only a marginal ability to read and write, but they can still argue, or advocate, against a highway being built past their area, because they understand that the highway might have certain consequences for their way of life. In the digital space, in the algorithmic space, it's often barely understood even what the systems do, let alone what impact they might have. So how does one start bridging that gap between designers and people, given that in the physical world people often have an appreciation of the consequences of a design, but in the digital world they might not? — Yeah, I think this is a very legitimate argument. I feel that historically the pendulum was all the way on the analog, physical side, because we didn't have the digital world. Then, as is often the case, when the digital world came along, the pendulum swung all the way in the other direction: everything needed to be digital. Now there are some considerations saying that, after all, we are human beings, we live in a physical, analog world, and the pendulum should swing back a little bit. Take the question which we debated yesterday: what would the difference in quality have been if this meeting had been conducted in a virtual environment? One version is that maybe I am the only person who travels: I could have stayed home, and instead of displaying the slides, I would be on the screen and the slides would be in one corner. But one could go a step further: what is the purpose of you all coming here? You could all stay home and sit in front of your screens. So we really have to ask ourselves: what is the value of the analog, of the physical world? Being interested in learning, you all know what MOOCs are. For me, MOOCs were interesting not because I personally wanted to teach a MOOC, but because I wanted to understand the question: what is the value added by attending a physical, research-oriented university—in particular in the US, where people pay $25,000 in fees to attend a university? I think this is an interesting question, and if we cannot answer what the value added is, then maybe we should really say: well, maybe MOOCs really are replacing universities. If you continue to teach courses to 800 people, my personal opinion is that this could be done by MOOCs, because it would not be different—and we could instead save resources to create smaller, more discussion-oriented courses. A forum like this one at least gives us the opportunity to discuss issues with each other. So
yes, there are design trade-offs between operating in the digital world and operating in the analog world. — So, I hear a lot of material for many more presentations to come, and a lot more discussion. Let us thank Professor Fischer again. Thank you very much.", "date_published": "2019-10-30T09:57:44Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "77dbc2ff58cd10c45ebfbe2236548527", "title": "What Machines Shouldn’t Do (Scott Robbins)", "url": "https://www.youtube.com/watch?v=xvgbYhNQHSg", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Right, welcome everyone to today's AiTech Agora. Today our speaker is Scott Robbins. Before passing the word to him: I see quite a lot of new faces, so I will give a very quick introduction to who we are. I'm Arkady, a postdoc at the Delft University of Technology, and here we have a really nice multidisciplinary, multi-faculty initiative called AiTech, where we look into different aspects of what we call meaningful human control. We focus not only on general philosophical aspects of it, but also on concrete engineering challenges. So if you're interested in this, check out our website, and subscribe to our Twitter and YouTube channel to stay updated on future meetings. And with that, I'll pass the word to Scott. — All right, thanks a lot, Arkady, and thanks to AiTech for inviting me; I'm pretty excited to give this talk. I'd hoped it would be in person, but alas, we will have to make do. A little about me while I share my screen (can everybody see that now? — yes — great, thanks): I'm just finishing up my PhD at the Ethics and Philosophy of Technology section at TU Delft, at the TPM faculty. I'm writing on machine learning and counterterrorism, and the ethics and efficacy of that, and this is a paper that didn't make it into the thesis I submitted on Monday, but it's something the thesis leaves unsaid—future work I'm hoping to get out there as soon as possible. Right now I'm titling it 'What machines shouldn't do: a necessary condition for meaningful human control'. Before I say anything about what machines shouldn't be doing, I have to clarify what I mean by what a machine does—when a machine is doing something, or more specifically, when we have delegated a process or a decision to a machine. What I mean by this, for the purposes of this paper and this presentation, is that a machine has done something, or we have delegated a decision to it, when the machine can reach an output through considerations, and a weighting of those considerations, not explicitly given by humans. The point of making this clarification is to distinguish classical AI—good old-fashioned artificial intelligence, symbolic AI, expert systems, and things like that—from machine learning, contemporary AI. In classical, good old-fashioned AI, the considerations that go into generating a particular output are explicitly put there by human beings. It may be extremely complicated, and it may still simulate more than it really is, but there are human beings behind the path it follows to generate the output. This is in contrast to machine learning, or many methodologies in machine learning, where the specific considerations that go into generating an output are unknown to any human being—even the human beings who programmed it.
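(Editorial aside: a minimal sketch of the contrast just drawn, not from the talk itself. The baggage-screening framing, feature names, and scikit-learn model are illustrative assumptions only.)

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Automation: humans explicitly chose the considerations and their
# weighting; the whole rule can be read off the code.
def flag_bag_explicit(weight_kg, has_dense_metal):
    return has_dense_metal and weight_kg > 5.0

# Delegation: the model derives its own considerations and weights
# from data; nobody, including the programmer, wrote them down.
rng = np.random.default_rng(0)
X = rng.random((200, 8))                    # 8 opaque scanner features
y = (X[:, 3] + X[:, 6] > 1.0).astype(int)   # stand-in training labels
model = RandomForestClassifier(random_state=0).fit(X, y)
flag = model.predict(X[:1])                 # output via learned considerations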
That is the case in much of the hype surrounding AI today and in many of the successes we've seen in the media—AlphaGo beating the world Go champion, chess algorithms, and many others. So we've delegated a decision to a machine—in this example, the move in the game of Go—because that machine, loosely speaking, has its own considerations for how it generates the output. In this sense, we've delegated many decisions to machines: everything from detecting whether somebody is suspicious, to predicting crime, driving our cars, diagnosing patients, and sentencing people to prison; and further, fraud detection, facial recognition, object recognition, and even choosing our outfits for us. In developing this presentation, I saw a lot of really random applications out there. With all these applications, I think some of them freak us out—autonomous weapon systems are the most classic example of something that scares us: should we really be doing this with an algorithm, and how can we do it responsibly? This has fueled a huge explosion in AI ethics—a justified explosion, I think, because something novel is happening here. This is the first time we're really delegating these processes to machines: not just automating processes we've already determined, but actually delegating the act of choosing the considerations to machines. And so now we're worried about the control we have over those machines. Specifically, I think all of these AI ethics principle lists, and a lot of the work on AI ethics, whether made explicit or not, is really talking about, or in some way trying to realize, meaningful human control. Before I get to the specifics of what I want to add to meaningful human control, I want to say a little about some of the proposals already out there and how I classify them. I've made a distinction between technology-centered meaningful human control and human-centered meaningful human control, and what I'm trying to capture with that distinction is where people put the spotlight, or their focus, in realizing meaningful human control. If it's technology-centered, we're really thinking about the technology itself: what are the design requirements of the technology, and what can we add to it so that we are better equipped to realize meaningful human control? In the human-centered approaches, it's more about where we can place the human, and what capacities or capabilities the human being needs in order to realize meaningful human control. So, starting with technology-centered meaningful human control: there are a few proposals out there that I consider the biggest ones, and I don't mean to say
that these were all necessarily put out there explicitly to realize meaningful human control—it's not as if these papers all say "this is a proposal for meaningful human control"—but I've argued in the past that some of them are indeed doing that, or that this would be the moral problem they're trying to solve, if they're solving one. First is explicability and explainable AI: the idea that if we can make an algorithm output not only its result but also some idea of how it came to that result, in terms of the considerations that went into it, this could allow us to say, for example, that an output should be rejected because it was based on race or gender or something else we consider an unacceptable basis for a decision. I've written a paper on this proposal, and I'm not too thrilled with it. I think making AI explainable is a good idea, and there are still good reasons to pursue it, but it doesn't solve the moral problem it attempts to solve, if that's what it's doing; I can say more about that, or direct you to the paper, in the question-and-answer period. Then we move on to machine ethics, which I think is not necessarily a proposal for realizing meaningful human control, but rather says that if we can endow machines with moral reasoning capabilities, allowing them to pick out morally salient features and be responsive to them, then we don't need humans in control anymore: the machines, robots, and algorithms are controlling themselves with these ethical capabilities. I couldn't be more negative about this approach, partly for reasons I'll get into later; I've also written a paper about it with Aimee van Wynsberghe in which we argue that there's no good reason to do this—every reason put forward fails for either empirical or conceptual reasons. Again, I can say more about that, and hopefully at least one reason why it's a bad idea becomes clearer throughout this presentation. Then we get to track and trace, a proposal put forward by Filippo Santoni de Sio and Jeroen van den Hoven here at TU Delft, at TPM in particular. They have a really nice paper—I highly recommend it to anyone interested in meaningful human control, as it has real philosophical depth—and they put forward two conditions we need to meet in order to realize meaningful human control. The first is a tracking condition, which concerns the machine being responsive to human moral reasons for its outputs: if a morally salient feature pops up in a context and would cause a human to change their decision or output, it should also cause the algorithm to change its output. The second is a tracing condition, which states that we should be able to trace responsibility back to a human being or set of human beings, such that those humans knowingly accept moral responsibility and accountability for the outcomes and outputs of the machine. All right, moving on to
human-centered meaningful human control. This is the classic on-the-loop and in-the-loop stuff, and again it's about the human: where is the human in this process, and what capabilities do they have? On the loop, the human oversees what the algorithm is doing so that they can intervene if necessary, to prevent something bad from happening. A good example is Tesla, which stipulates that the human has to keep their hands on the wheel and be ready to take over at any time if something goes wrong—and all moral responsibility sits with the human. I think this is more a way to protect the company from lawsuits than anything else; it doesn't seem a good way of realizing meaningful human control, as it flies in the face of human psychology and our capacity to remain aware of our surroundings while supervising an automated process that works most of the time. There's some interesting work on that, I think even from TU Delft. So I don't think it amounts to meaningful human control, though I think that's what they're attempting; it's just not working very well. In the loop is a little stronger than on the loop, in that it requires a human to actually endorse or reject the machine's outputs before the consequences happen: you're not just overseeing anymore, you're part of the process. This again runs up against human psychology: we suffer from many biases—automation bias, assimilation bias, confirmation bias—which make this incredibly hard to count as meaningful control, even if it could be said that we are in control. What I really want to say with all of these—and the reason I don't go into the specifics of my objections to each position—is that the details don't matter much for the purposes of this paper. The real point is that even if some of these proposals will indeed play a part in meaningful human control—and I think aspects of some of them will—they're working all of this out after we've already made a huge mistake. We've already hit the iceberg, and now we're just rearranging the technologies and the people in the socio-technical system and hoping everything will work out; but the ship is already sinking. Specifically, the mistake is that we've delegated to a machine a decision that should not have been delegated, and as soon as we've done that, no amount of technical wizardry or reorganization of humans and technology will fix it: we've already lost meaningful control over these algorithms. So that's what we have to figure out first, and that's what I plan to try to do in the next half of this presentation. To jump to the conclusion I will then defend: machines should not have evaluative outputs, and specifically, machines should not be delegated evaluative decisions. Evaluative outputs are things like "criminal", "suspicious", "beautiful", "propaganda", "fake news"—anything with bad, good, wrong, or right built
into the labels or outputs. When we say somebody is suspicious, we're not just saying that they are standing around in one spot for a while with a ski mask on, we're not just saying "that's interesting"—we're saying there is something bad about what's going on; we're loading a value into it. When we describe somebody as beautiful, we're not just describing their outfit or appearance in a neutral manner; we're saying something good, something about the way people ought to look. The same with propaganda: we're not just saying there's a picture with a political message, we're saying there's something bad about this picture with a political message—it's not the way things should be. Any of these outputs with values built into them should not be delegated to machines. And I'm not just shadowboxing here: most of us will know that these types of outputs have been delegated to machines quite frequently; there's no shortage of proposals to do so. Here are four examples, such as detecting propaganda. Here I want to make a note recalling what I said at the beginning about what I mean by a machine doing things. Sometimes algorithms are given the task of flagging propaganda, but that doesn't always mean they are doing something in my sense. For instance, Europol flags propaganda, but it does so based on a hash generated from videos and pictures that have already been determined, by human beings, to be propaganda; the question the system answers is just whether a new post, picture, or video is the exact same as one that was already posted and taken down. That's not a machine deciding anything—it's automating a process where we've already made the value judgment ourselves, and we're just delegating the task of finding items that match those value judgments; a rough sketch of that kind of matching follows below. In the cases I'm concerned with, by contrast, the machine or algorithm detects propaganda—novel propaganda—on its own.
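(Editorial aside: the sketch referred to above. This is a schematic of exact-hash matching of human-labelled items; real systems of this kind typically use perceptual hashes rather than SHA-256, so treat the details as assumptions.)

import hashlib

# Items that humans have already judged to be propaganda; the value
# judgment lives entirely in that human labelling step.
known_hashes = {
    hashlib.sha256(item).hexdigest()
    for item in (b"<bytes of labelled video>", b"<bytes of labelled image>")
}

def matches_known_item(upload: bytes) -> bool:
    # Purely descriptive question: is this byte-for-byte identical to
    # something a human already took down? No new evaluation is made.
    return hashlib.sha256(upload).hexdigest() in known_hashes

print(matches_known_item(b"<bytes of labelled video>"))  # -> True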
principle for evaluative\noutputs\nand that every each evaluative output is\nunverifiable for example\nsuspicious and in this example in this\npicture here this\nman is being labeled as suspicious and\nin this this judgment this output here\nand if we wanted to say\nwell did the algorithm get this right\nwas the algorithm correct and labeling\nthis person suspicious\nwe in principle can't do that because\nyou might say you might argue\nwait we can if the person stole\nsomething well then they were indeed\nsuspicious\nbut that's not what suspicious means\nsuspicious can be\nsomebody can be suspicious without doing\nanything wrong\nand somebody can do something wrong or\nsteal something without having been\nsuspicious it's called being a good\nthief\nso if we can't evaluate whether the\nalgorithm was correct on this one\ninstance\nand i'm arguing that it can't on any\nspecific instance then we can't say if\nthis algorithm works at not or not at\nall\nthere's nothing we can say about the\nefficacy of this algorithm and therefore\nwe're kind of out of control we've lost\ncontrol we're just using a tool\nthat's as good as a magic eight ball at\nthat point\nand this is opposed to an example like\nan algorithm that's supposed to detect\nweapons\nin a baggage and baggage at the airport\nin this instance if an algorithm says\nclassifies this\nbag as having a weapon in it or more\nspecifically having a gun in it\nwe could probably we probably wouldn't\njust arrest the person on\non just the fact that the algorithm made\nsuch a labeling we would\nlook into the bag and find out does it\nindeed have a weapon or a gun in it\nand if it does then we know the\nalgorithm got it right at that point\nand we can say something about the\neffectiveness of the algorithm because\nwe can test this on many bags on many\nexamples and determine how good it is\nat in detecting weapons and then we can\nhave some place\nin justifying its use we can say it does\nindeed get it wrong\nsometimes but we have a process to\nhandle that because\nwe have enough information to remain in\ncontrol of this algorithm\num the other thing i want to say about\nefficacy and and uses\nusing this suspicious label again is\nthat\nthe context change over time so one year\nago if this kid would have come into a\nstore that i was\nin and not at night i would have\njustifiably been\nworried and probably thought this person\nwas suspicious but\ni think uh in the context of a global\npandemic right now in the netherlands if\ni saw\nthe same person wearing a mask in the\nin a store that i was in i'd probably be\nthanking them for actually wearing a\nmask\nas i see so little of here in the\nnetherlands despite the spiking\ncorona cases so these contacts change\nand that's that's something that\nalgorithms are not good at\nthey're good at fixed or machine\nlearning algorithms specifically they're\ngood at fixed\ntargets something that they can get\ncloser and closer to the truth with\nover more data but value judgments are\nnot some\nare not those kinds of things so we have\nto be so even if we could\nsolve the problem that i just outlined\nwhich we can't tell whether it was\nwe can't verify any particular decision\nthat we did solve that problem well then\nwe'd also have to worry about\nthe context changing to make those\nconsiderations\num and make those considerations change\nthat ground that judgment\nall right now moving into the ethics\npart of the argument where i'm arguing\nfor a more fundamental lack of control\nwhat i mean by fundamental 
All right, now the ethics part of the argument, where I argue for a more fundamental lack of control. What I mean by "fundamental" here is different from the kind of control I so often see discussed, as in the autonomous weapons debate, where we think: we have this algorithm out there targeting people—how do we make sure there's still a human being around who can take responsibility or exert control over that process? What I mean by the fundamental part is control over choosing the considerations that ground our value judgments: the actual process of deciding how the world ought to be in any of these contexts. That is a process we have to remain in control of; it doesn't make sense to delegate it to anything but ourselves. A machine is not going to just decide how the world should be, with all of us changing because of an algorithm's output—that makes no sense. We have to decide how the world should be, and then create algorithms to help bring us there. Take the example of going over CVs to find a candidate for a job, say in academia. We might say certain considerations are important: the number of publications, the reference letters, what they wrote their dissertation on. We usually have a committee to decide how we want to evaluate candidates for a particular position, because it may not be the same considerations every time. And that conversation we're having—not only as a small committee or as individuals, but as a society—about what a good academic is, and about the considerations that will lead us to choose good candidates over bad ones: that is our process; that's what we need to remain in control of. So when we delegate an evaluative output to a machine, we are effectively ceding that more fundamental level of control. And continuing with this theme: we're having a conversation today about what a good academic is, just as we were fifty years ago, but things have changed drastically in the last decades, even in the last ten years. Even at TU Delft you can see that characteristics and considerations are used now that weren't used before. Valorization is much more important than it used to be—we've decided that is part of what makes a good academic; maybe teaching is a little less important; maybe it's no longer the number of publications but the quality of the top publications. This is the conversation we have, and it changes as we learn new information or as the context around us changes. It's not that the value itself changes, but how we ground those values with considerations changes. That should be left up to us. Next is what I'm calling a vicious machine feedback loop. It's a concern that by delegating this evaluating process to machines, we could be influenced by these machines in how we end up seeing and grounding values. The process starts with us human beings building these algorithms, training them, labeling the data, however that process works—we are doing that in our way,
and it will always be biased. It may not be biased in a bad way, but it's still biased, even if only toward the time we live in. That feeds into an algorithm, which produces evaluative decisions, which spits those evaluations back at us—and over time, what I'm worried about is that we could be influenced by how the algorithm makes decisions. We could start seeing job candidates differently because we've seen so many of an algorithm's evaluations of what a good candidate is, and we start taking those on. This isn't an entirely new problem; it has happened with other technologies, even the media. We constantly worry about feeding children ideas of who is beautiful—body shape and body image, Barbies and the like—all of which feeds their idea of what beautiful is and affects them later. And we usually talk about this negatively: we don't like that a specific body shape is presented as the only way to be beautiful, shaping how they see beauty, how they try to realize it, and who they think is beautiful. Those same evaluations are now being delegated to AI, where we don't even know what considerations are being used—and I don't think not knowing is better than knowing something bad; neither is good for our ability to evaluate. The final thing I want to say in this ethical control argument is that people's behavior adapts to evaluations. Part of the reason we perform evaluations is to say how the world ought to be: if we say somebody did a great job, we're saying that if you want to do a great job too, you should do it more like this; or that you should look more like this to be beautiful. That's what evaluations do, and of course they change people's behavior. In the case of AI, we've seen some overt changes, where people figure out what the AI is doing—like the students who figured out that their tests were being graded by AI and were able to score hundreds rather easily. That affects their behavior, and in this situation it's not behavior we want; the evaluation is failing. That's partly because we haven't had the conversation: we're delegating to a machine the conversation about what good test-taking is, and based on what I said before, that doesn't make any sense—that conversation is the whole thing. We have to determine what makes a good test answer, and then AI can help us evaluate it by picking out descriptive features of it. It cannot do that process for us; it cannot pick out the considerations that make a good test, or a good candidate, or any of these things. That is us losing control over a process that is fundamentally our process.
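(Editorial aside: the reporting on the test-grading episode described keyword matching; the grader below is a deliberately naive stand-in built on that assumption, not the actual product.)

# A naive keyword grader: score = fraction of rubric words present.
RUBRIC = {"photosynthesis", "chlorophyll", "sunlight", "glucose"}

def grade(answer: str) -> float:
    return 100 * len(set(answer.lower().split()) & RUBRIC) / len(RUBRIC)

honest = "plants use sunlight and chlorophyll to make glucose"
gamed  = "photosynthesis chlorophyll sunlight glucose"   # keyword dump

print(grade(honest), grade(gamed))   # 75.0 100.0 -- the dump wins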
So, the whole talk is called "what machines shouldn't do", and I want to reiterate—some of it should be clear by now—what I think machines shouldn't do, based on the arguments I've given. They are all evaluative outputs. First, aesthetic outputs: judging what is beautiful, what's a good movie or a bad movie, what's a good song. We have algorithms trying to say what the next good movie is going to be, and I think these make no sense given what I've said: not only can we not check whether the algorithm works, we also lose control of the conversation about what is good in aesthetics. The same with the ethical and moral: we shouldn't delegate to machines the process of coming up with the considerations that evaluate candidates, or citizens, or anything like that. That is our process. Finally, I add this last one, though I don't think it will make the paper because it's really a separate paper: there has been so much talk of AI emotion detection that I wanted to mention it, because I think it fails in some of the same ways as the aesthetic and moral cases. Emotions are not verifiable in the way the gun in the bag was, and furthermore, based on the science I've read, there seems to be nothing more than pseudoscience grounding the idea that we can use AI to detect emotions. All right, the conclusion. I think this is obvious, but unfortunately, given all the examples I've shown, it still needs to be said: AI is not a silver bullet. It's not going to evaluate better than we can, and it's not going to tell us how the world should be. It can help us realize the world as we've determined it should be, but this is fundamentally a human conversation, a human process we need to keep going—it will never stop—with the technologies around us helping us realize those dreams. And lastly: keep artificial intelligence boring. I know it's not as exciting as a machine that figures out ethics, determines exactly what a good person is, and then we just follow it—that would be really exciting—but it's just not something that can be done. I don't want to be completely negative, though. Despite the last half hour or so, I'm actually really positive about many of the benefits artificial intelligence and machine learning could bring; I just think they need to be focused on what is possible to achieve, and that is identifying and labeling the descriptive features that ground our evaluations. So instead of determining who is dangerous, detect a gun in a bag; instead of determining who a good candidate is, rank the CVs by number of publications. It's hard to read a hundred CVs, but a machine learning algorithm could do it really fast, and we could verify that it was doing it correctly. This is still hugely beneficial, and I think we underestimate how powerful it could be.
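(Editorial aside: a minimal sketch of the "boring", descriptive use just proposed; the field names are invented.)

# Sort applications by a verifiable descriptive feature and leave the
# evaluative call -- who is a *good* candidate -- to humans.
applicants = [
    {"name": "A", "publications": 7},
    {"name": "B", "publications": 12},
    {"name": "C", "publications": 3},
]
ranked = sorted(applicants, key=lambda a: a["publications"], reverse=True)
print([a["name"] for a in ranked])   # ['B', 'A', 'C']
# Each count can be checked against the CV itself, so the tool's
# output is auditable in exactly the sense argued for above.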
So I'm going to leave it there. I really appreciate you listening to me for the last half hour, and I look forward to your questions. I'm going to stop sharing my screen in a minute so I can see everybody—I feel quite alone looking at it like this. Thank you very much. — Thank you, Scott, that was very insightful, and we actually have a few questions in the chat already. While I read those, I'd appreciate it if people write their question, or at least an indication that they have a question, in the chat, because there are already quite a few; if you just raise a hand, I might miss it. I'll start with the first question, which arrived, I think, thirty seconds after you started the talk. Would you like to ask it yourself? — Yes, thanks, Scott, for your presentation, very nice. I got stuck when you started to explain the difference between "automated" and "delegated". As I wrote in the chat: take the example of face recognition. There are face recognition systems that make use of explicitly selected facial features—the position of your nose, the distance between the eyes, the color of the eyes, and so on—and there are systems that learn these features, where we do not necessarily know what kind of features those are. Both, of course, have the same effect: they recognize people, they classify people. But one you would call automated, because we as humans select the features, and the other delegated, and I don't get why it is even important to make that distinction. — Well, I think it's fundamentally important, because in one case we are deciding what features are important for grounding the eventual decision—it's the control problem, and we have that control. With delegation—which may be fine in facial recognition; I have lots of problems with facial recognition, but not for this reason—machine learning algorithms are able to use their own considerations, which makes them more powerful, in the sense that they are not confined to what we can think of, to humanly articulable reasons; they draw on a host of other reasons we could never understand. I think that's what gives them their power. But the distinction matters, because one is going to be explainable and one is not. — So is that the word, then? I was looking for it: explainability. The distinction between automated and delegated is whether or not it's explainable? — As a clarity issue, I think that works fine for me, but I prefer "delegated the decision", because I think that makes it clearer, maybe for my community. But fair enough. — If I may: you gave a very nice example of a decision tree with two levels. Realistic decision trees have thousands of levels, of course. Is that explainable? — I mean, it's fundamentally explainable, whether you have a thousand levels or two; somebody could explain it in the end—we could in principle get an explanation. But with machine learning we can't, at least right now, in principle get an explanation. So there's a difference between the two. — I'm very interested in this, because you mentioned the example of the AI used to determine whether someone should be hired, the quality of candidates, and I think you made really good points against using that. But I'm wondering: would you say it's perfectly fine
to use an AI if it's GOFAI, good old-fashioned AI, with a decision tree behind it, and only problematic when we use machine learning? — Right. There might be other problems associated with GOFAI in that situation, but on the arguments I've made today, we still have control over what considerations ground the evaluation. For example, if we're using a hiring algorithm in the academic sphere that basically just asks: do they have more than five publications, and did they get a PhD with the word "ethics" in it—and those are the only two things we care about when filtering—then of course we can argue about whether those considerations are good, but it's still us deciding what the considerations are. So it doesn't fall foul of what I've talked about today. It may still be problematic, but not for this reason.
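(Editorial aside: the explicit screening rule from this answer, written out as code to show why it stays within human control; the function name is ours.)

# Both considerations were chosen by humans and can be read off
# directly -- and a tree of thousands of such branches is still, in
# principle, explainable: each path is a conjunction of chosen tests.
def shortlist(n_publications: int, phd_title: str) -> bool:
    return n_publications > 5 and "ethics" in phd_title.lower()

print(shortlist(7, "PhD in Ethics of Technology"))   # -> True
print(shortlist(3, "PhD in Machine Learning"))       # -> False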
so we have a\nquestion from jared then\nuh about uh the kind of\nthe gray area between evaluative outputs\nand uh something more objective\njared do you want to ask a question um\nyeah so thank you very interesting\npresentation uh\nand so one thing i was thinking about is\nthat it would be terrifically easy for\npretty much all examples you gave\nto reframe them uh as if they are not\ngiving evaluative outputs but more like\nobjective judgment\nso you can say the ai does not evaluate\nappearance it judges similarity to other\napplicants based\non or as ranked on suitability and you\ncan\nsort of reframe what your\nai does to something objective and then\nsay\nit's just meant to inform people the\npeople make the decisions and we just\ndo this objective bit that informs the\ndecision\nuh and i have a feeling that\nfocusing on my outputs um might not be\nthe right\nemphasis but i'm not completely sure and\ni just wanted to throw it right see\nwhat's uh\nand and i'm worried about that too i'm\nintentionally making this\nbold so that i can get pushback but\ni think what what you just said is is um\ni like that better in terms of we're\njust um\nfinding applicants that are like the\napplicants\nthat are the people that we've hired in\nthe past that we've classified as good\nso we've done that process of judging\nwhether they're good\nhowever what considerations are\ngrounding that similarity part we don't\nknow\nhow it's reaching that similarity so i\nthink it still falls foul\nto that and what i think and um in my\nmore hopeful parts about ai\ninstead of trying to find you know what\napplicants are good based on the\napplicants of the past that we've\ndetermined as good\nwe should figure out well maybe we\nshould think about what makes them good\nwhat what is it about them that makes\nthem good and then when we determine\nthat\nthen we can use ai perhaps we need ai to\nautomate\nbeing able to find that feature and to\nbe able to sort those\napplications that's what i really want\nbecause you can't say\num because this suitability thing i\nthink it's a nice work around and i'm\nsure the technology companies will do it\nbecause it makes it uh makes it sound\nbetter and makes\nit it forces more of the responsibility\nto human beings but i don't think it's a\njustified or\nmeaningful control then i mean what's\nthe difference between more suitable\nand like our better applicants from the\npast and just a good candidate\nit's basically amounts to the same exact\nthing and then it falls into the same\nproblems\nthank you okay next we have a question\nfrom sylvia\nsylvia can you plug in and ask the\npicture\nyes yes um if i can also just add\nsomething about what\nwhat um uh jared was saying then\ni think it would at least be an\nimprovement then\nit's clarified what the ai actually does\nso for example matching\nsimilar cv because in terms of social\ntechnical\nsystem at least you remove the um\nthe the tendency to uh trust that then\nthe ai is gonna have this\nadditional capability respect to us and\nyou know we should trust then what the\ncomputer is saying\nmaybe i agree i think it's a step\nforward like that it's definitely a big\nstep forward it's it doesn't it still\nfalls victim to someone what i've said\nbut it's a step forward i agree\nyeah but now what i wanted to ask you is\ni mean i\ni actually don't mind your idea of\nsaying okay let's keep it boring\nbecause to me just sounds like that just\nkeep it within the boundaries of what it\nactually\ncould do because it can't do 
this\ncontextual\nkind of evaluations but it doesn't it\njust boils down then\nto it can verify very well\ntangible things like objects like your\norganic sample or the\nthe checking the moles and\npossible skin cancer against then\nintangible things so is it is it boiling\ndown to let's just keep it to\nobjects or things well\ni mean i i think much hinges on the\nverifiability\ni mean like i guess i mean\nearthquakes are tangible you know like\nwe could predict earthquakes with an\nalgorithm\nand we would only know after the fact if\nit got it right but\ni you might be right it might be just\ntangible it depends on what we mean by\ntangible um\npossible but really i think the key word\nis verifiability\nis is can we can we verify it after the\nfact or\nduring and i think that will matter when\nwe can verify it but it\nit if we can't verify it at all which\ni'm trying to say that all evaluative\njudgments cannot be verified\nso we can't do that and i'm trying to\ngive an easier way\nand maybe verifiability just is easier\nand pragmatically i should just be using\nthat\ni'm open to that and i will be thinking\nabout that more but\num yeah so i will\nthen can i just leave you with like a\nsort of like devil's advocate\nfor location because then it's the\nquestion that i would ask myself\nwhich is um we can't verify an\nevaluation from a human either\nif the judge decides that that was or a\njury if you're in a in a common law\nsystem\nthen that that was suspicious you're\ngonna fall into the same problem so\nsomeone that really wants to use the\nalgorithm might say\nbut then let's let's make it you know\num statistically sound or whatever like\nthe\nlombroso nightmare that was with the\ncriminal behavior ai and and\nso whether it's a human or an ai we\nstill can't verify that evaluation so\nlet's just switch to\nlet's just you know save money and use\nthe ai anyway\nthat would be my biggest problem then\nright\nand i think you're absolutely right we\ncan't um we can't\ni'm saying evaluative judgments aren't\nverifiable so it's not going to be\nverifiable if a human does it either\nbut i think um and if i can go back to\nusing the phrase\nsaying how the world ought to be and how\npeople ought to be\nthat is up to us to do and the idea that\nwe're going to use a tool\nto do that instead doesn't make any\nsense especially if we can't\nwe can't say anything about its efficacy\nso um\nif i'm right and i think you know i i\nfind it hard to create an\nopposing argument to say well no\nactually it doesn't matter\num how we come to the decision about how\nthe world ought to be\nthat doesn't matter we just need to\naccept you know we just need to accept\none so that it's easier for the machine\nto get there\ni don't think that just doesn't make any\nsense to me but\num it's something that needs to be\nworked out more but i think there's\ndefinitely a difference between a human\nnot being able to\nmaking an evaluation and us not being\nable to evaluate it\nversus a machine making an evaluation\nand i think one of the big differences\nis that a human can explain themselves\nabout how they\ncame to that decision and what\nconsiderations they used and we can have\nthe disagreement about\nwhether the considerations they used\nwere indeed okay and of course we can\nget into the idea that humans can be\ndeceptive you know they're going to lie\nabout how they came to the decision\nthey were very biased against a\nparticular person but they're not going\nto say that they're going to try to use\nobjective 
means\nbut you know the responsibility falls on\nthem then you know we shouldn't be\ndelegating that process to a machine\nokay uh herman do you want to ask your\nquestion\nyeah great um\nuh thanks scott i really liked your\nhearing your paper\nas but i also have a question on\nsomething that has been touched on be\nquite a few times already so the\nverifiability of an evaluative judgment\nso i still struggle with understanding\nwhat you what you mean exactly with that\nso do you want to say that nothing is\nsuspicious nothing is wrong\nnothing is beautiful because if if\nsome things are beautiful then we can\njust see\ncheck whether the output corresponds to\nreality\num so your answer you just\ngave you seemed to hint that we went to\nverify the reasoning\nbehind the judgments uh so and\nthat seems to be something different so\nis that is it the last thing that you're\ninterested in that\nthe the reasoning should be very\nvaluable or is the uh is it the outputs\nthat is\nwell i i think both first of all it's\nthe output and so\ni i think with algorithms they're\nthey're in computer science it's zeros\nand ones\nand i think we should be thinking about\num we should be working towards and i\nknow all the bayesian stuff but\ni'm not going to get into that but i\nthink first at least even if even if i\naccept that there might be a possibility\nthat we could uh verify what's beautiful\nor not\nwhich i i think there's a lot of\ncomplications in that because it's going\nto be culturally specific\neven even person-specific not that\nthere's no somewhat of\neven if it's a human-constructed truth\nover it there may be\nyou might be right then i think i that's\nwhy i added in all those reasons about\ncontext changing our considerations\ngrounding these judgments are changing\nmovies that were amazing 50 years ago\nif the same movie came out today we'd be\nlike well it's kind of tired\nit's not something that we're interested\nthat's not beautiful anymore\nand it's because the context has changed\nwe've already heard the beatles you know\nwe can't have a new beatles out in\nanymore somebody has to evolve and and\nthese\nand even with how the world ought to be\nthe context changes\nthe climate changes global pandemic how\nwe\nhow we live our lives now the fact that\nwe're doing this digitally\ninstead these considerations change and\nso now that there can't be any truth at\nthe moment right now\nbut that algorithms are not good about\nthis\nshifting context in this shifting\nsituation about how we ground our\njudgments what what what a mountain\nlooks like\noverall is pretty much static right i\nmean there's some differences in\ndifferent mountains\nbut overall an algorithm trained on\ntrying to find spot mountains\nis going to get closer and closer to\nbeing able to be better and better at it\nbut that's not the case with evaluative\njudgments so\nyou might be right we might be able to\njust say well look everybody agrees\nthat's beautiful the algorithm got it\nright\nbut then we have to worry about this\nchanging context\nand that's if we can get everybody to\nagree on those things which i doubt is\ngoing to happen\nyeah so of course there are many\nuh metastatic meta\nnormative views that yeah contradict\nsome of the things that that you just\nsaid so maybe it could be very explicit\nthat you're endorsing a\nspecific but i actually i'd be curious\nto know i'd be curious to\nto hear if um any because i i\ni do read a little bit of meta ethics\nespecially not meta aesthetics\nbut even if like for 
instance real and\nmoral realism is true\nand that there are mind independent\nmoral truths out there\num i don't see unless unless we can\naccess them which\neven which i have not the moral\nepistemology behind that is\nexceptionally difficult\nand there's no solution to it yet but if\nwe can't access them at this point\nand then the algorithm will say people\nhave hubris\nthey believe the algorithm can actually\naccess them this algorithm is actually\nbetter than us\nit can actually detect the real moral\ntruths well then we'd be left in a\nsituation where we have to just\ntrust that's the case we'd have to say\nwell yeah i guess we\nwe should kill kill um kill five to say\nuh\nkill five instead of the killing the one\nand the trolley problem i didn't know\nthat was true but the algorithm said\nit's true\nit's like why how would we ever trust\nthat it doesn't even it so even if the\nmeta ethics\nwere such that there were moral truths i\ndon't see that solving the problem\nso i'm not sure that a mind is dependent\non a med ethical\nviewpoint at this point but i think it's\nreally interesting and i think we should\ntalk about that more\nlet's do that\nokay next in line is madeline but when\ndo you think your question\nuh hasn't been answered before because\nthis goes into\nuh context for evaluator judgments\nyeah so this is funny the question has\njust become more complicated i think\nso maybe i just pushed a little bit to\nsay um\nokay so i'm just wondering about whether\nor not if\nthis output of a verb evaluative\nstatement\nis evaluative at all so if the\nai this program says this is beautiful\num that what it's really doing is saying\ni've processed this data and here's what\nyou people seem to think is beautiful in\nthis particular moment in this\nparticular context so\nit seems like the uh if you're i'm not\nso strong on the\ntechnical side here but it seems like if\nyou're using machine learning this is a\nlearning process and would be able to\nadapt to a certain context\num so i'm just wondering if this\nand just i guess i'm just challenging\nthe idea that an ai\nprogram could come up with the\nevaluative statement at all\num well i think you're right i think i\nshould clarify that more that i mean\nyeah machines don't have any intentions\nthey don't they don't have uh\nany you know viewpoints or anything like\nthat they're just doing the statistical\nanalysis\nbut practically when people use higher\nview the algorithm\nthey're getting an evaluative output and\nno matter how\nyou know you could try to put a\ndisclaimer in there consistently\ni'm saying like this is not a real\nevaluation this is simply\nand put i have in my paper on\nexplainability\nthis great explanation of why\nalphago made a particular move in the\ngame of go or how it comes to a\nparticular move\nand really this is what the algorithm's\ndoing of course it's like\nreally over anybody's head that's not in\nmachine learning\nand i think the practical situation is\nthat people are using these algorithms\nand taking it as an evaluative judgment\nand if we're not going to\nand i think language is really important\nhere and that's why i said we're going\nin a step forward if we're saying that\nthis algorithm has determined that this\ncandidate is similar to\nthese three candidates that were\nconsidered good by you before\nthat that's going a step in the right\ndirection for sure\ni still don't like it but it's it's it's\nmore accurate to say it like that even\nif it's not as exciting to say it like\nthat\nso the 
concern is about then the\npractical use and how it's taken up\nby people\num no i think i think that's also a\nconcern for sure\nbut i don't think it i don't really\nthink it solves the problem to just\nstipulate that\nthe algorithm is coming up with uh is\ncoming up with this\nthis person is the most similar to these\nthree candidates that you've had before\nsimilar how i mean that how is so\nimportant\nyou can't just disregard the how it's\nhow they're similar\nthey're similar because they're white\nmales i mean that's what amazon's\nalgorithm did\nthat how was fundamentally important so\ni think we can't i think it's just this\nthis nice clever little trick that some\ncompanies are doing to say like oh well\nwe just we can just make this little\nmove with language and change the whole\ngame\nno the same problems exist in my opinion\npeace\nall right i think i'm next um\nthis is still enough time okay\ncan you hear me yeah sorry i was\nneutered i was talking all this time\nyeah i said that we have three minutes\nand the next line is rule so uh go on\nand uh it would be great if we could\nalso get the catalan's question\nokay well hi everyone great to be here\nuh scott thanks for your talk i really\nappreciated it\ni think what you're one thing you're\ndoing is like taking the focus off of\nthe technology of the you know the logic\netc and giving arguments to uh\nmotivate us to look at the practice and\nall the choices we're making and that's\nwhere i've i've\nspent a lot of my my time in the last\ncouple years as well\num and um like i have a specific\nquestion about\ndelegation um yeah\nwhere you where you draw the line but\nbefore i say that like i\nwant to point to one thing i did myself\nwas\nto start looking at all the different\nchoices that people make\nand a lot of what you're talking about\nis like how do you how do you determine\nwhat a label is and what can you capture\nin the label and is that thing that\nyou're capturing is that like an\nepistemologically sound thing that you\ncan verify\num so i like the idea of verification\nbut i also think that\num just having having a ground truth or\nnot having a ground truth would be too\nrough\nof a of a categorization to not use it\num\nbecause there are many applications for\ninstance in biology where you don't\nreally have a ground truth but it's\nstill very useful\nto use evaluations based on our best\nguesses\nand then you know have now have a\nmachine like run\nanalysis for us just as an example\num so but i i will i will i will just\nsay that and i'll leave the question\nbecause i think you've already said\nenough about\ncontext and that delegation and all that\nokay thanks\ngreat uh yeah one last minute cataline\nare you still with us\ni'm still with you yes yes decision\ntrees\ni'll be quick so regarding your answer\nto email's earlier question about\nexplainability\nyou said well in principle the decision\ntree would be explainable if you\nhave a thousand layers yes the principle\nof that is explainable but that holds\nalso for deep learning\nso is that an essential difference\nbecause of course a thousand layer\nnetwork\nis not comprehensible for uh people in\ngeneral\nuh wait you're saying that in the same\nway that a decision tree of a thousand\nlayers is explainable so is deep\nlearning yeah\nokay so i i that goes in\nthat goes against what i understand as\ndeep learning i mean\nthe whole problem with the whole field\nof explainable ai is to attempt to solve\nthe problem that we can't explain\nhow deep learning 
algorithms come to the\noutputs that they come to\nso i but i'm not a machine learning\nexpert so i\nuh i mean you can explain the\nfundamentals of it how you construct\nthese things how do you work okay so no\nso sorry the explanation that i'm\nlooking for is the cons\nyeah you can explain there's many\ndifferent types of explanations you're\nright\nbut the difference between the two in\none the decision tree\nyou can actually explain the features\nand considerations which led to the\noutput which played a factor in the\noutput\nand how and maybe even how much they\nplayed a factor into the output whereas\nin the deep learning situation\nyou cannot do that anymore that's that's\nokay with that i would agree with\nyou you could ask back like why did you\nmake this decision and get a\ncomprehensible answer from a decision\nplayer yeah exactly and so that's that's\nwhat's really important to me here is\nthose considerations that played a\nfactor\nyeah happy with that\nyeah uh yeah the next uh question is\nmine but unfortunately we don't have\ntime to\nuh for that question and also for all\nthe remaining questions which i have to\nadmit are really\nexciting so uh feel free to get in touch\nwith scott later\nto discuss those i will so thanks again\nscott that was a fascinating talk\nthanks to everybody thank you again for\nthe invitation it was a lot of fun\nthanks for the questions\ni'm sorry we didn't have a chance to go\nthrough all of them\nokay uh see you all next week next week\nour speaker will be jared\nso looking forward to that", "date_published": "2020-09-09T12:51:39Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "b78c4529648f2aaba97af58afe54610b", "title": "Robots Learning Through Interactions (Jens Kober)", "url": "https://www.youtube.com/watch?v=nvy_ziFvLDw", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "there we go yes yay\nthanks a lot for the kind of\nintroduction and also for the\ninvitation uh so i'm very much looking\nforward to having\nsome very lively discussions uh as\nhopeful\nas most of the times um yeah\nso nick already uh you know told you\nabout the question who should\nhelp whom so most people if they think\nabout robots they would\nthink about robots supporting us with\nthe\ntedious or dangerous tasks and then yet\nvery annoying cases where\nthe robot does something wrong or gets\nstuck and\nwe also have to help the robot to get\nunstuck\nand then in a similar way who should\nteach whom depending on\nwhat kind of research you're doing or\nwhat your perspective on robotics is\nyou might say yeah could be a great idea\nto actually\nuse a robot also as a teacher for\nhumans my research focuses\nmore on teaching robots\nnew tasks so that's an example of\nkinesthetic teaching where a person\nshows\nthe robot how to solve a new task\nand in this talk what i'm going to focus\non\nis how do we actually make this teaching\nin interaction with a per person as\nsmooth as possible and\nas efficient as possible for the robot\nso they're largely two different but\ncomplementary ways on\nhow robots can learn new skills so so if\ni say skills i'm talking about\nlearning some kind of movements on the\norder of a few seconds or a few minutes\nand the first one is imitation learning\nwhere a teacher demonstrates a skill and\nthe student then tries\nto mimic tech like we also saw in a\nprevious slide\nhere's an example from my own research a\nbit\nolder and you'll see one my former phd\nstudent demonstrates\nthe robot how to unscrew a light bulb\nand 
then how to put it in a trash can. So here you need a couple of turns to get it loose, and then you move it to the trash can. Just playing back this recording is not really interesting; what you're really interested in is generalizing the movement, for instance, in this case, over the number of rotations you need to unscrew it, but you could also think about changing the position of the light bulb holder or the trash bin. In particular, what we were interested in here is the order of the little sub-movements you need to do: you need to move towards the bulb, then grasp it, rotate it in one direction, and so forth. You can represent that as a graph that tells you in which order you need to do things. What we were also interested in is: when do we actually switch to the next primitive? And then the robot can reproduce the skill and also generalize to new situations.

So here you see it unscrewing, and then, when the bulb becomes loose, the robot pulls slightly on it so it can actually remove it. It's really important to detect that accurately, otherwise the light bulb goes flying. Okay, so then it successfully did that. That's an example of imitation learning.

The other side is reinforcement learning. As you can imagine, just like for us, if the task gets slightly more complex, a robot also needs to practice. What I'm going to use a bit as a running example is ball-in-a-cup: a little game where you have to catch the ball in the cup. Here's my cousin's daughter; she was around ten back then. I showed her once how to do it, and then it took around 35 attempts until she caught the ball in the cup. Reinforcement learning is learning by trial and error, trying to maximize the reward, and you can see the little chocolate reward here.

Okay, so in both examples I showed you, you have the human in the loop in some form or other. For imitation it's very clear: you have the human demonstrating and the robot mimicking. But also for reinforcement learning, you need to define the reward function, some kind of function approximation, how exactly you set up the problem, and so forth. To illustrate a little bit what we did there: the reward is more or less based on the distance between the ball and the cup. The first time the robot tries, the ball is still a bit far, so it gets a pretty low reward, close to zero. The next time it's a bit closer, so it gets a somewhat higher reward. And if you imagine doing some kind of weighted combination of the two of them, you might get something that's already pretty close, in this case bumping on the rim of the cup; the maximum reward would be a one. If you repeat that, mainly focusing on the good attempt and only a little bit on the other two, you can hopefully catch the ball in the cup very quickly. So that's the robot trying out different things, seeing what works and what doesn't, and adapting accordingly.

Also here we helped the robot by giving it an initial demonstration. It was a good demonstration, but as the dynamics are a lot more complex here, if you just play it back, you miss the cup by some 10 to 15 centimeters. After 15 trials or so it's already considerably closer. The next one, I think, goes a bit too far and hits the rim of the cup. And then finally, after 60 trials or so, it starts to get the ball into the cup.
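A minimal sketch of the two ingredients just described: a distance-based reward, and a reward-weighted combination of tried policy parameters, in the spirit of reward-weighted policy-search methods such as PoWER. All numbers and the two-parameter "policy" are invented toys, not the system from the talk.

```python
import numpy as np

def reward(ball_pos, cup_pos):
    # Hypothetical shaping: close to 1.0 near the cup, decaying with distance.
    return float(np.exp(-np.linalg.norm(np.asarray(ball_pos) - np.asarray(cup_pos))))

# Reward-weighted combination of tried policy parameters: the attempt that
# brought the ball closest to the cup dominates the next parameter guess.
thetas = np.array([[0.2, 1.1],   # parameters tried on three throws (invented)
                   [0.4, 0.9],
                   [0.5, 1.0]])
rewards = np.array([reward([0.9, 0.0], [0.0, 0.0]),   # far miss
                    reward([0.4, 0.1], [0.0, 0.0]),   # closer
                    reward([0.1, 0.0], [0.0, 0.0])])  # bumped the rim
theta_new = (rewards[:, None] * thetas).sum(axis=0) / rewards.sum()
print(theta_new)  # weighted mostly towards the best attempt
```

The design point is exactly the "weighted combination" in the talk: better attempts simply get more say in where the parameters move next.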
And after 100 tries or so it has converged and works very reliably.

Okay, so as you can see from the dates, those are some fairly old videos. What people are working on nowadays is end-to-end learning. We had a vision system that told us where the ball is, we used a low-dimensional representation, and so forth. If you instead use neural networks to learn end-to-end, that usually means you need some form of big data, or some other tricks. There's this famous example from Google: they had an army of robots working day and night for a couple of months, non-stop, just to learn to pick up simple objects from bins. That's clearly impressive, but not really desirable or practical in any sense.

Okay, so in what I showed you so far, the human is involved in the beginning, either giving demonstrations or setting up the problem, and then the robot is off on its own trying to learn something. If you compare that to how humans learn, you'll notice that there's often a continued student-teacher interaction: while somebody is learning, you might provide additional demonstrations or some other form of intermittent feedback. At least for now, that's still largely missing in robot learning. And I believe that including these intermittent interactions with a teacher allows you to speed up the learning, which in turn allows the robot to solve more complex tasks, and it's also a somewhat intuitive way for humans to teach. In the remainder of the talk I'm going to show you a few examples of why what I'm claiming here is hopefully true.

Okay, so how does this interactive learning look? We have a robot and an environment. The environment has a state; it could be the position of the robot in the world, or the position of the robot arm. The agent has a policy, which tells it which action to take in each state, so, depending on where it is, what to do. That's more or less the traditional control loop. What we have additionally is a teacher, and the teacher is going to observe the state of the world and the robot, potentially also which action the robot took. There are many different variants you can think of, but in some form or other the teacher is going to provide additional information to the agent, usually if something is going wrong. It could be an additional demonstration, to show something the robot hasn't seen yet; it could also be just a correction on how to modify the robot's behavior to perform the task better. In the first few examples, we focus on corrections in the action space: the teacher very explicitly tells the robot, okay, you should have moved a bit faster, or a bit slower, in order to perform the task well.

To come back to the ball-in-a-cup example: here you see the same thing, but now we start learning from scratch, and Carlos, my postdoc, is sitting in front of the computer, occasionally giving the robot additional feedback, in this case: move more to the left, move more to the right, move slower, move faster. After 17 trials or so it's already quite close, and then, after 25 trials or so, it can successfully catch the ball in the cup. Compared to that, the human child needed around 35 attempts, and if you ask an adult it's also usually around 20 attempts, so it's really the same order of magnitude now.
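A minimal sketch of what such a correction in the action space can look like algorithmically, loosely in the spirit of COACH-style methods: the teacher occasionally signals a direction ("more to the left", "faster"), and the policy is nudged so that, in similar states, it outputs a correspondingly shifted action. The linear policy, learning rate and feature vector are all made-up toys, not the system from the talk.

```python
import numpy as np

class CorrectablePolicy:
    """Toy linear policy that can be nudged by occasional human corrections."""
    def __init__(self, n_features, n_actions, lr=0.5):
        self.W = np.zeros((n_actions, n_features))
        self.lr = lr

    def act(self, features):
        return self.W @ features

    def correct(self, features, direction):
        # direction is the teacher's signal, e.g. [+1, 0] for "more to the left"
        # on the first degree of freedom: shift the action taken in this state.
        self.W += self.lr * np.outer(np.asarray(direction, dtype=float), features)

policy = CorrectablePolicy(n_features=3, n_actions=2)
s = np.array([1.0, 0.2, -0.5])
print(policy.act(s))                    # [0. 0.] before any feedback
policy.correct(s, direction=[+1, 0])    # teacher: "move more to the left"
print(policy.act(s))                    # the action in this state has shifted
```

The teacher never has to supply the optimal action, only a rough direction of improvement, which is exactly what makes this kind of feedback cheap to give.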
And if you compare that to learning the skill from scratch, so without initializing it with imitation learning, just using reinforcement learning, that takes two orders of magnitude longer: some 2,000 or so trials, usually, in our experiments. And that's still not end-to-end learning, but relying on low-dimensional inputs for the state.

Okay, that directly brings me to the next part: learning from high-dimensional inputs, in this case from raw camera images. A typical thing you'll see in deep learning is some form of encoder-decoder structure: you take the high-dimensional images, bring them down to some low-dimensional representation that hopefully still contains all the relevant information, and in order to train that, you decode it again and try to ensure that the output matches the input, so that this low-dimensional representation effectively contains all the important information. Once you've trained this mapping from camera images to a low-dimensional representation, you can use it to learn the robot behavior, the policy, on top of that. You would then remove the decoder part and start learning the policy, either completely freezing the encoder or maybe also training it partially.

You could do all of that beforehand: collect lots and lots of data of images the robot is going to encounter, try to find some kind of generic representation, and then learn a policy on top of that. But as you can imagine, that's probably not the smartest thing to do, because depending on the task, some stuff in the environment is relevant and some other stuff is irrelevant, and that might change completely depending on which task you want to teach. So the obvious thing is to learn the representation while the robot is learning the task, and that's exactly what we tried to do there: to learn an embedding and the policy simultaneously. The main objective is to learn the policy, the actions the robot takes, but as an additional criterion we have this reconstruction from the input image, in this case to a slightly blurred version of it. And again, while this whole learning is happening, the robot is getting feedback from its human teacher, not continuously, so we are not remote-controlling it, we're not teleoperating it; the teacher is just jumping in occasionally to fix some things.

What you see on the left is the reconstruction based on the low-dimensional representation, and you can see that already very quickly it learns something that represents the real images reasonably well. What you can't see in this video is the human teacher occasionally giving feedback, via a keyboard in this case, on what the robot is supposed to do, which here is to push this whiteboard swiper thingy off the table. Let me skip forward a little bit. And that's in real time, so we can learn this really in a couple of minutes.

Okay, here's another example: teaching this little Duckietown robot to drive. That's Rodrigo, one of my PhD students, and you can see him holding the keyboard; if you look very closely, sometimes he presses buttons to teach it how to drive.
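A minimal sketch of the joint objective just described: one network reconstructs the image through a low-dimensional bottleneck, while a policy head is trained on the same latent code from the teacher's occasional corrections. This is a deliberately tiny, fully-connected stand-in (the real system is convolutional and trained online); all sizes and the random data are placeholders.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_pixels=64 * 64, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_pixels))

    def forward(self, x):
        z = self.encoder(x)          # compact representation
        return self.decoder(z), z    # reconstruction + latent code

model = AutoEncoder()
policy_head = nn.Linear(8, 2)        # actions predicted from the latent code
opt = torch.optim.Adam(list(model.parameters()) + list(policy_head.parameters()))

x = torch.rand(16, 64 * 64)          # a batch of (flattened) camera images
a_teacher = torch.rand(16, 2)        # corrected actions on those frames

recon, z = model(x)
loss = nn.functional.mse_loss(recon, x) \
     + nn.functional.mse_loss(policy_head(z), a_teacher)  # both objectives at once
opt.zero_grad()
loss.backward()
opt.step()
```

Because both losses flow through the same encoder, the latent code is pulled towards whatever is relevant for the task at hand, rather than towards a generic, task-agnostic representation.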
Again, here are the raw camera images and the reconstructed images, and after just ten minutes or so, it learns how to drive on the correct side of the road. This is in real time, and here you can see that it has really learned: he's not using the keyboard any longer to control the little robot.

So that's not just taking the images, compressing them down to something useful, and then doing control on top of that. For many tasks you need some kind of memory: if you can only observe the images but you need to know the velocity, you need at least a couple of images, and you might have to act on something you saw in the past. For that you can use a very similar structure, but with an additional recurrent unit in the neural network.

Here are a couple of toy examples; I'll skip over those pretty quickly. The agent only observes the images, not the velocity, but in order to solve the task, balancing this pendulum, you really need to know how fast it moves. We compare two methods: the one on top is our method, where we use corrective interactions, corrections in the action domain, and the one on the bottom is a different approach where you need to give additional complete demonstrations, which is typically a lot slower. That's again the reconstruction and the images, and here is the little real setup; the demonstrator is really doing the control based on the images, not based on some internal state.

And then here are the corresponding robot experiments. We have the camera inputs, and the only thing the robot sees is this shaded white area. It needs to touch the fake oranges, so it needs to remember: okay, the orange was here, and I'm moving there, in order to touch it later on. Something like this we can also learn in about five minutes, and if you train longer, after 45 minutes it can do it very accurately. I'm not sure if you can see it here, but you can again see somebody doing the teaching, with the keyboard. Here the task is to touch the oranges but avoid the pears. Because we had already trained a good representation, going from the camera images to a compact representation, teaching this additional task is something that can really be done within ten minutes. If you compare that to a more traditional deep-learning, end-to-end approach without a teacher in the loop, we haven't tried it, but based on the toy examples we saw it takes at least an order of magnitude longer; think about spending a day teaching something like that. The other big problem is that with those other methods you need to know beforehand what kind of data is required. It's really up to the human demonstrator to think ahead: okay, these are the situations that could arise, I need to collect data for them. You train the system, test it, figure out it doesn't work, collect more data, and that typically takes a couple of iterations.

Okay, let's see, I have a couple of questions. Nick asks... do you want to just ask directly?

Sure. So the question is: do you assume that the human corrections are more or less optimal, or at least work towards more reward? And also, what happens if the human correction is
actually not helpful at all? How can you take that into account?

Okay, so we assume that the human corrections are at least somewhat helpful. It's not a problem if they are occasionally wrong, or if the human changes his or her mind. But if the human is just demonstrating nonsense, then it depends on which setting you're talking about. In imitation learning, if it's really purely learning based on those demonstrations, then obviously there's no chance whatsoever: it's just going to learn whatever you teach it. In the other examples I showed, where we combine it with reinforcement learning, it's very unclear what's going to happen; it really depends on how you set it up. The robot has the chance to learn the task correctly based on the reward you have defined, and how much the human input hinders the robot is a different question. If the input is at least somewhat sensible, so maybe not the correct task, but something like "don't move erratically and shakily, move smoothly", that might still be helpful. But you can come up with scenarios where it's really going to harm the performance.

Okay, Luciano?

Yes, so my question goes one step further. You have a human teaching a robot, but assume that a robot is teaching another robot, say in a warehouse. If they are the same robots, with the same joints, you could just transfer the learning, but let's assume the robots are different, as we humans are different from robots. That could scale up, and some emergent phenomena could appear, because if you didn't learn something one hundred percent and then teach it to someone else... How do you see that, and what are the alternatives: more imitation learning, reinforcement learning? How could we tackle those issues?

Yeah, good question. I don't have a good answer. It really depends on what you want to teach, or what you want to transfer. I'm going to get to that in a second; let me show it on the slides, because it connects nicely to what you're asking. What I was talking about so far was teaching in the action space, in this case directly in the seven joints of the robot, saying, for instance, which velocity to apply. That's very direct and low-level, and it probably lets you really optimize the movement, but it's typically pretty high-dimensional, it's non-intuitive for humans, and, connecting to your question, it doesn't transfer to a different embodiment at all. If you instead consider the state space, or the task space, for instance the position of the end effector, that is something you can transfer: if you know the trajectory of the end effector, it doesn't really matter what the kinematics of the robot are, as long as it has roughly the same workspace, depending a bit on its constraints. That's a lot easier to transfer, and arguably it's also a lot more intuitive and easier to teach for a person who isn't familiar with the robot. The downside is that you still need to translate that into the movement of the robot, the actions, again using some kind of inverse kinematics or dynamics models, which might themselves be a bit tricky to learn.
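A minimal sketch of that translation step, using the standard damped-least-squares inverse-kinematics update (my choice of method here; the talk doesn't specify one): a desired end-effector velocity in task space is mapped to joint velocities through the arm's Jacobian. The toy 2x3 Jacobian is invented for illustration.

```python
import numpy as np

def joint_velocities(J, dx, damping=0.1):
    """Damped-least-squares inverse kinematics:
    dq = J^T (J J^T + lambda^2 I)^{-1} dx."""
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + damping**2 * np.eye(JJt.shape[0]), dx)

J = np.array([[1.0, 0.5, 0.0],    # toy 2x3 Jacobian: 3 joints, planar effector
              [0.0, 0.8, 1.0]])
dx = np.array([0.10, 0.00])       # desired end-effector velocity in task space
print(joint_velocities(J, dx))    # the joint velocities that realize it
```

The damping term is what keeps this well-behaved near singular arm configurations, which is part of why the mapping is "a bit tricky" when it has to be learned rather than derived from a known model.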
So, to come back to the question: if you have robots with different embodiments, thinking about the level of abstraction at which you transfer is definitely one thing; transferring trajectories in end-effector space rather than in joint space would help. And the big advantage robot-to-robot teaching has is that you can work around the communication issue a lot better: you effectively have complete knowledge on both sides. So I would say there are better ways of doing that than what I'm presenting here, which really focuses on the fact that a human doesn't know exactly what he or she should do, doesn't like to constantly teleoperate the thing, and finds it a lot more pleasant to only jump in when something goes wrong and occasionally give feedback, and on the fact that you really care about how long it takes. For robot-to-robot teaching you can probably do a few other things that might be better; you could apply these methods, I'm just not sure they're the best.

Okay, thanks.

Okay, and then: how could we keep human control there?

Yeah, I was thinking: if you teach, but the robots don't pick up one hundred percent of what you mean, and then they teach someone else, you increase the gap as you get further from the original source. What's that game called where you whisper in somebody's ear and then you go down the chain?

It's like the telephone game, I think, right? Chinese whispers.

Chinese whispers, yeah, exactly. So you would indeed get something like that. To be totally frank, at the moment we are very much looking into the question of how robots can best be taught by humans, and that's a bit more on the algorithmic side: we're looking into how humans experience the teaching, but we're not, at the moment, looking into human-robot interface design, so to say. What you're describing adds a whole new layer of complexity on top of that; definitely very interesting, and we should get there.

Okay, what else? Wendelin asks... do you want to unmute and ask directly?

Sure, I was just referring to the discussion before. The first question is: the robots still get the original environmental reward, right?

Yes, in the reinforcement learning case.

So would that just mean that, if you have a robot that trains another robot, after a while that other robot would still converge to the original task? It would just take longer, because the first robot might have given it some bad ideas.

Could be, yeah. I mean, that's the general question in all transfer learning approaches: when is it actually beneficial to transfer, and when is it better to learn from scratch?

But it's very different from transfer learning, and for example very different from imitation learning. In imitation learning you don't know what the real task is, but in this case I think Luciano's concerns should not be too hard to dispel, because you still get the original task, so you can't really stray too far away from that.

Yeah, I agree. I mean, I've been presenting two
different\nthings one some of them were purely\nbased on imitation learning so\nsupervised learning where you\ndon't have that and there kind of this\ndrift\ncould very well occur but but yeah i\nagree\nif you think about reinforcement\nlearning\nand transferring the\nthe reward function as well then\nyeah worst case is it takes longer to\nto converge to that okay thanks\nokay good\nso\nhere um i was just introducing that\nthat we might actually want to teach the\nrobot\nin in state space so in the end effector\nspace in this case\nand in this drawing i have here that's\npretty obvious because that's something\nthat's\nalready provided more or less by the by\nall robot manufacturers\nbut in some other cases that might\nnot be so obvious so um i have another\nlittle example\nhere is there's my mouse here\nis that the laser pointer uh on on the\nrobot\nand it's trying to to write a letter on\na whiteboard just a\nlittle fun task where we actually\ndon't know the model of the\nthe complete system and we actually need\nto learn it at the same time\nas learning the task um\nor you could say before we do the\nlearning somebody\nsits down for an hour half a day and\nwrites that down\nin order to program the robot to\nbe able to be controlled in in\ntask space so in this case what i mean\nby task space is\nwhat you would like to do is to directly\ncontrol the\nposition of the laser pointer point on\non the\nwhiteboard and not care about how the\nrobot actually is supposed to\nmove and the approach\nsnehal came up with actually allows to\nlearn that simultaneously so learning\nthe\nmapping from the task space to the robot\nactions and actually how the the task is\nsupposed to do\nagain using yeah\nusing interactions and here that's\none of the initial\nthings where you still have lots of\ncorrections from the human teacher\nin the top right it's just the zoomed\none and we\ndid some enhance the the laser pointer\nand then here it\nafter 15 minutes it again learned to\ndo that and the last move you saw\nhere is the robot doing it\nautonomously\nso here's writing core the name of our\ndepartment\nwhich nicely coincides with the first\nthree letters of the uh\nthe conference where he published it\nokay um so so far what i was talking\nabout\nwas very much on the\nlevel of directly controlling the\ntrajectories if you like\nof the robot um\nin the very first example i showed you\nwith the light bulb\nunscrewing it was on a higher level\neverywhere i'm\nconsidering the the sequence\nso for the for learning these kind of\nthings in a sequence\nwhat you will quite often have is\ndifferent reference frames so that if\nyou say\nyou have an object and you attach a\nreference frame to that\nthe robot can move relative to the\nobject rather than\nmoving in in world coordinates or in\nabsolute coordinates\nwhich then very much helps to generalize\nto\nno matter where you put the object and\nif you start having more\nobjects and then suddenly you get a\nwhole\nlot of these reference frames or\ncoordinate\nexample would have to light bulb the\nholder the um\nthe trash bin maybe volt coordinates\nadditionally\nand then one of the challenges for the\nrobot is to figure out from the\ndemonstrations\nwhich is actually the relevant\ncoordinate frame so should i move\nrelative to the light bulb or the\nholder or should i actually use position\ncontrol or should i use force control\nso here's another example of that\nwhere the robot is picking up this mug\nand we have two\ncoordinate systems one is relative to\nthis 
coaster, and the other one is relative to the cup. Now, if we only give one demonstration, with the cup on the coaster, there's just not enough information in the data we collect for the robot to figure out what it's supposed to do. You could put in a prior, okay, we're going to touch the cup, so that's probably the frame we're interested in, and then use that, but that breaks down pretty quickly, because these priors tend to be somewhat specific.

Now, as long as you always have the cup on top of the coaster, that's not a problem. It will work fine for either reference system, because there's always just a little offset, and no matter how you describe the movement, it's going to result in the same position. But if you separate them, like we have in this example, then suddenly it becomes interesting, and the robot needs to decide what to do. If it's purely based on the initial demonstrations, there's no way it can do that, other than flipping a coin. If it can ask for feedback, or have interactions with the teacher, then it's easy to resolve. What we additionally considered here is that, like I described before, in some cases you actually don't care about this ambiguity: it's fine not to know which of the two frames to use, because they result in the same movement, and you don't want to bother the human teacher if it doesn't really matter anyhow. So the approach we came up with detects whether there are still multiple possible reference frames, whether the ambiguity is actually relevant in the current situation, and only if the robot can't decide on its own, and it's actually relevant, does it request feedback.

Okay, so here's the corresponding video. The demonstration was already given; let me go back, that was way too quick. We demonstrated picking up the cucumber and moving it to the lower crate. If everything is in the same position, that works fine. If we move only the cucumber, that's also fine, because we touched it, and we know the motion is relative to the cucumber. Okay, so then we can remove it; works fine. Now what happened is we switched to two crates, so you have an ambiguity: are we interested in the world coordinates, the absolute position somewhere here, or are we actually interested in the crate? What you're going to see now is that the robot starts moving and then stops, because it's unsure, which is the trigger for the human teacher to give feedback, in this case a haptic push, a little nudge in the right direction, which helps the robot disambiguate. From then on it actually knows: okay, we're not interested in the absolute position, but in the crate position; it discards the other option from its decision tree, so to say, and from now on it has learned how to act in these situations.

You can make it a bit more complex if you have multiple reference frames attached to the two sides of this box, and then again the question is: should we move relative to that one, or that one, or are we actually interested in the one that is bang in the middle between those yellow and red coordinate systems? Here you see a different form of interaction, via an on-screen yes/no interface. So I was asking: the yellow one? And Giovanni said no. The other one? Also no. And then the only remaining possibility was something in the middle.
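A minimal sketch of that decision rule, as my reconstruction of the idea just described rather than the actual implementation: represent the demonstration relative to each still-plausible frame, predict the reproduction under each frame in the current scene, and only request teacher feedback when those predictions contradict each other. Positions are 2-D and translation-only to keep it short.

```python
import numpy as np

def ambiguous(demo, frames_at_demo, frames_now, threshold=0.05):
    """Predict the reproduction under each candidate frame; signal ambiguity
    (i.e., ask the teacher) only when the predictions contradict each other."""
    preds = [demo - f0 + f1 for f0, f1 in zip(frames_at_demo, frames_now)]
    spread = max(np.linalg.norm(a - b) for a in preds for b in preds)
    return spread > threshold

demo = np.array([[0.50, 0.50], [0.50, 0.60]])          # one demonstrated motion
cup0, coaster0 = np.array([0.50, 0.50]), np.array([0.50, 0.48])

# Objects moved together: both frames predict the same motion -> don't ask.
print(ambiguous(demo, [cup0, coaster0],
                [cup0 + [0.2, 0.1], coaster0 + [0.2, 0.1]]))   # False

# Objects separated: the two predictions contradict -> stop and ask.
print(ambiguous(demo, [cup0, coaster0],
                [cup0 + [0.2, 0.1], coaster0]))                # True
```

This captures the two properties emphasized above: the robot stays quiet when the ambiguity is irrelevant, and it asks precisely when acting would require flipping a coin.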
So, here's one last task, where the robot is supposed to stack boxes on top of each other, and based on that we also did a little user study, with twelve people I think, just before the lockdown happened. The task is to stack one specific box of tea bags on top of another one. Here you see the kinesthetic demonstrations, and what we compared is doing it purely with these demonstrations versus having the interactive corrections, with the robot actually asking for feedback in ambiguous situations. As you can already see, if you've never done it before, teaching by demonstration like that is not so easy; it's time-consuming and annoying as well, and you need something like six demonstrations to get all the combinations covered. If we do it with the interactive method, you just give one initial demonstration, and then only in some cases is the robot not going to know what to do and will ask for feedback, again by getting pushed. That's significantly quicker, both to teach for the human and to learn for the robot. Also in terms of mental demand, physical demand and temporal demand, if we ask the participants, they very much prefer to teach in this interactive, corrective fashion rather than giving a whole lot of demonstrations. You can see that for all the scores, LIRA, which is the interactive method, is doing a lot better than kinesthetic teaching, which is just giving a whole lot of demonstrations beforehand and hoping that you covered all the situations you need.

Okay, that's the end of my presentation. No robotics presentation without a little video clip from a movie; this is from Real Steel, where you can really see how teaching a robot might be child's play if you have the right interface, and that's what I'm trying to work towards. So, to sum up quickly, and then I hope there are a few more questions: I hope I could show you a few examples of how doing this teaching interactively and intermittently can help speed up robot learning. I showed you a few different variants, using imitation learning, or combining demonstrations with reinforcement learning. And, like I was saying before, there are still a lot of open questions: how do humans like to teach? And then, especially for this audience: is what I'm presenting actually meaningful human control? Yes, you can teach the robot to act like you want, so it's a form of human control, but there might still be quite a few things you're unsure about: how it's going to generalize, how it's going to react in different situations where you have not taught the robot.

Good. Thank you very much, Jens, for this really interesting talk. The questions are already rolling in; if you want to ask your question, please go ahead.

Okay, hi, thank you. The talk was very interesting, and my question is: do you formally model any of these interactions or learning processes? I just started my PhD on mental models and context, so I'm looking for ways of modeling this team interaction to achieve a certain goal, with context awareness and all that. This is very interesting, because it can also be seen as teamwork, if you are to achieve a goal that both agree on. So I was just wondering if you look at the
knowledge-based\nmodels\nof any kind or if you're just looking at\nmore machine learning uh models and\nresults\num i'm not sure if you understand\nwhat i mean not entirely sure\nwe'll see that's a discussion um\nso we\nmodeled these interactions\nin indeed more kind of as a machine\nlearning\napproach in the reinforcement learning\nthings i showed for those people that\nknow reinforcement learning\nthe the human kind of interaction was\nactually modeled as a form of\nexploration to\nbe able to incorporate it in the\nreinforcement learning\nframework in the for the other things\nit's effectively some kind of a switch\nin the robot logic that tells it okay so\nhere was\nan interaction uh so i need to treat\nthat\ndifferently but we're not\nso the robot itself knows about\nits behavior or its movements where\nit's sure or unsure about what it's\ndoing\nso so in that sense it's modeled and\ntaken into account\nbut really the the interaction\nis more kind of on on a human\nlevel that we don't model that very\nexplicitly at the moment\nbut if you have ideas on on how to do\nthat and how that would be helpful\nyeah i'm i'm wondering yeah uh for\nexample uh\nbecause yeah so i just\nsaw for now um yeah so this moment when\nthe robot can detect that it's unsure\nabout the box for example it stops for\nan interaction\nand yeah i was wondering if you have\nsome kind of\nif this is formally modeled in any way\nso\nthat there's actions and states and it\nreached that state and then\nit stops and waits for the interaction\nand then it goes back to\nso how that's modeled this was more my\nquestion maybe\nso how is i'm not sure if you're talking\nabout the same kind of modelling\nso what what it does internally is it\ndetects\nthat in this case okay i still have two\npossibilities left\nand the actions i would do according to\neach of them would be contradictory so\nin one case i would move to the left and\nin the one case i\nwould need to move to the right so it\nknows\nthat there's a problem and then there's\na\nswitch in the program that says okay now\ni'm going to stop and reach\nand request human feedback and once i\nget the human feedback\nand then hopefully it will allow me to\nchoose to correct one\nand then the robot can continue\nokay thank you i will also think a bit\nmore about it\nyes it's on my website ikari\ni can send it to you directly thank you\ngreat ah this channel yeah\nno one has i have a it's kind of a\nfollow-up on a previous question but\nit's on the\non the point so i just first wanted to\nunderstand if i got right what is it\nwhat is the trigger for the robot on the\ncamera like to define\nwhich we should ask for feedback\nand this is yeah that's the so true part\nthat's the first part and the second\npart would be\nthinking about the context that the\nrobot would be teaching another robot\nwould be a similar point and say okay\nnow i want to intervene because i see\nyou're doing\nsomething weird or\nyeah so first is\nwhat was the point how do you define\nagain the the point that the robot asks\nfor feedback\nokay so\nlet's take this example again because\nit's a bit easier i think\nso what we already have is one\ndemonstration\nand then we can represent\nthe movement in\ntwo different ways one is relative to\nthis\ncoaster and one is relative to the cup\nso we know we have two representations\nof the movement and we don't know which\nis\nthe correct one yet because we only have\na single demonstration\nif we encounter the same situation again\nmaybe we just moved to two 
objects\ntogether to a different location\nand then we the robot checks okay how\nwould the movement look like if i move\nrelative to the cup and how would the\nmovement look like if i move relative to\nthe coaster and if those are very\nsimilar\nthen you don't really care uh you don't\nneed\nto ask for feedback because you can just\ndo it right\nokay yeah now the other situation is\ni separate the two objects and then\nagain if i predict how the movement is\ngoing to look like if i move relative to\nthe coaster or relative to the cup\nthen you're going to discover okay those\nare very different movements and they're\nnot compatible\nyeah which and then you then the robot\nis going to ask for\nfor feedback yeah yeah that's\nyeah that's clear for me but my wonder\nis like more about the intentions there\nlike when you had\nthis box with the cucumbers and the\ntomatoes you don't know if you want like\nthe\na specific absolute position or the\nother one so that's like\neven though the coordinates everything's\nstill kind of the same but your your\nis the intention for the human that you\ndon't really know\nso so the intention in this case is also\nmodeled as\nthe uh as these reference frames if you\nlike so the intention\nwould be either move relative to the\ncards uh the cucumbers move relative to\nthe world coordinates\nokay so still as a reference frame yeah\nyeah i understand that\nso there could be if there's a deviation\nfor the reference frame that's a good\npoint of asking or giving\nfeedback as soon as there is some\ndeviation there yeah\nso in this case we focused on on\nreference frames because that's\nsomething that occurs frequently and is\npretty visual\nbut i mean you could do similar things\nalso for\nfor other types of ambiguities so in\nparticular what i'm interested in\nis uh force and position control which\nis also\nif you always have the same object\ndoesn't really matter what you do\nbecause one\nkind of results in the other and the\nother way around\num so it's really about\npredicting different possibilities and\nfiguring out if the\num contradictory which is\nwhat we call an ambiguity\nyeah that's great okay and do you all\nsee that\nyeah okay thank you\nthanks so you have another question\nno i think that's it let me leave also\nthe opportunity for someone else to jump\nin\nso so just for my quick clarification\nfor myself\nso basically so so you could say more or\nless that you know one one\nbig issue in meaningful human control is\nthat what if we misspecified the\nobjectives of a robot or ai\nthen for instance this your this this\nresolving ambiguities would be\none way like that the robot would\ninherently stop and ask for feedback at\nthe moment when it's not sure about what\nwhat\nthe objective should be is that a\ncorrect would that be a correct\nextrapolation of this work\ni'm not sure in the sense that\nif you misspecify something i'm not i\ndon't think\nthe method i presented will solve that\nbecause i mean if you explicitly\nspecified then the robot is going to be\nsure about it what you could do\nadditionally is that you allow\nalso so in the last part i presented was\neffectively\nrobot driven questions or interactions\nyou and the first part was more about\nhuman driven\ninteractions if you assume that you have\nyour misspecifications\nand you allow human-driven interactions\nso that the human can always jump into\nto change or correct or something then\nthen yes that might be\na way to at least detect that something\nis going wrong and\npotentially 
Thank you. I think we have time for one more question; we're almost at two o'clock. If there are more questions, shoot me an email.

Great. If there are indeed no more questions, I would like to thank you, Jens, for this great and interesting talk; it was very inspiring. Thanks also to the audience for being here and interacting with us. We'll see you next time.

Bye-bye, thank you very much.

Yes, thank you. Stop recording.", "date_published": "2020-11-21T17:54:41Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "c80bef3e16935e9c641a54940bf9e2a8", "title": "The Control is in the Interaction (Jered Vroon)", "url": "https://www.youtube.com/watch?v=xLAyTZzl65Y", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "[Music]

Jered, you're good to go. I think that's my cue, right?

Hey all. It feels really weird to be giving a presentation to you all without seeing you, but I'm still really happy to be here, on this side of the screen where I'm doing the presentation. As the title already says, I'll be talking today about interaction, specifically in the context of social sidewalk navigation, and more specifically in the context of unpredictable social situations.

But first, a really brief bit about me and my background. I studied cognitive AI, back in the days when good old-fashioned AI didn't always necessarily include machine learning, did a PhD in social robotics on social navigation, and I'm now working as a postdoc in the design faculty of TU Delft, specifically Knowledge and Intelligence Design. A constant for me is a fascination with people: there are some amazing things we can do that our machines can't do yet.

A good starting point, and this is actually a shot from The Matrix, is this amazing human capacity: if you were faced with this crosswalk, you would probably find a way. Even though there's a whole wall of people, somehow the wall would split for you and you could navigate through this whole mass of people. I find it fascinating how we actually get that going. We even manage it in more complicated situations, such as those with kids, or dogs on leashes, and even in situations where everything gets turned on its head and we suddenly have to keep one and a half meters distance. So there is this amazing human quality to find a way, even if the rules change, even if we don't know what to expect from other people. What I would like to discuss today is how we can translate that, or at least get robots to fit into it, when they're navigating.

The niche is AI, robotics, HRI, social navigation, and slightly more specifically sidewalks. I know a lot of people within AiTech look into social navigation; I look specifically into sidewalks, partly because it's a very interesting sub-niche where you get much closer to people and the social rules matter much more than the traffic rules, and partly because it's a safer environment: you cannot kill people by driving over them that easily.

One important disclaimer before I dive in further: most of the things I'll be discussing today are not necessarily new per se. I may introduce concepts that are quite familiar; the main point, I hope, is putting them together in a slightly different way that might lead to a nice, actionable way to get this working.

To kick off, a very rough overview of the state of the art in social navigation. One of the very first examples is Shakey. Nowadays, and perhaps many people will want to correct me, but I'll summarize it this way anyway, much of the work is based on theories of human social positioning, like proxemics: the idea that, depending on how socially close we are to someone, we also take a specific physical distance to them. Those models then have to accommodate things like the fact that humans move, and the fact that humans respond to what a robot is doing. That leads to very beautiful solutions in terms of path prediction and appropriate quick replanning, both machine-learning-based and cognitive-model-based. It also sometimes makes social navigation more efficient and effective in controlled settings, such as Amazon's warehouses, where you can just tell your workers what to do or not do. Similarly, there are beautiful solutions to how humans respond; I won't go into much depth there because I have other things to say about it. And lastly, of course, there is learning from experts, where we try to take their expertise and replicate it.

The gap currently faced by this state of the art is that in the wild, when you throw your perfect algorithm into the real world, people often turn out to be complex and unpredictable, dynamic in their reactions, and, perhaps most importantly, unique in their needs. Different people might have different needs, even on different days, depending on whether they're in a bad or good mood. That raises the very important question: how can we assure socially acceptable navigation in the face of such unpredictable and unknowable social situations?

This is unfortunately a more relevant question than I would like. I'll read out this tweet for you; it's from Emily Ackerman. She, in a wheelchair, was trapped on Forbes Avenue by one of those delivery robots, only days after their independent rollout, and she can tell that as long as they continue to operate they are going to be a major accessibility and safety issue. What happened was that these robots roamed the streets making small deliveries, and in this case, while she was crossing the street, the robot got stuck: it saw so many people in its direct environment that it didn't know how to find a safe path, so it did the second-best thing, which is not to move at all. Typically that's a very good solution, but here it meant that it blocked the curb cut, the low piece of the sidewalk that you use to get back off the street if you're in a wheelchair, and that caused quite a big problem for this specific wheelchair user.

There are several more examples where our beautiful algorithms break down when we throw them into the world. There are children harassing robots; there are people learning that autonomous cars just stop for them, and consequently deciding to cross in front of them without stopping; there are office robots unable to use elevators; and there is the whole package of challenges faced by autonomous vehicles that may have led to the current shift from "autonomous" to "automated" vehicles.

If we look at the current solutions for these problems, they are band-aid solutions, I would say. For example, recent tweets seem to indicate that the delivery robot now avoids claiming the whole curb. The mall robots, I think, did something beautiful: they predicted the likelihood of harassment, and if a kid was deemed highly likely to harass the robot, it would flee to the kid's parents. The office robot asks for help. And, like I said, we are now moving from autonomous to automated vehicles. But I think all those solutions face the same problem again, because people in the world are still complex, unpredictable, dynamic in their reactions, and unique in their needs. So yes, these are band-aids, and it's important that we make them, but the question I am exploring, and that I hope to discuss with you today, is: could we perhaps solve this on a more fundamental level? Can we, like people do, figure out a way even though we don't know which situations to expect?

A very good starting point for answering such a question is to look at situations where it goes wrong, and I hope you are all familiar with the sidewalk shuffle. This would be one point where I would really like to see your faces nodding, but that's life at the moment. The sidewalk shuffle is when you're walking on the sidewalk and someone is heading in your direction; you step aside, and at the same time they also step aside, so you step left and they step right, and you're still faced with the same problem; you step right, they step left, same problem again. You get that back and forth, where you end up having to stop or somehow break out of this loop of adapting to each other at the same time. That's a very interesting case, because it's where our mechanisms break down, and it shows that we apparently are really good at adapting to each other, even though we need some time to do so.

That brings me to my main idea, or hypothesis, which again is not necessarily new: we are figuring it out as we go. Not necessarily learning, but just going forward and then failing or succeeding, and either way using that to adapt our behavior and find more suitable behavior on the fly. If we could manage to put that into our robots, we could solve a lot of those situations where we don't know what to expect and still run into problems, simply by being able to figure it out.

So how can we get that into reality? My proposal is pretty straightforward, and it starts from something many of you will be familiar with: an interaction-focused approach. We have examples from embodied and embedded cognition, action-perception feedback loops, reactive robotics, Braitenberg's vehicles, and that's just the tip of the iceberg.
There are lots of people advocating for work that focuses on interaction in the actions of a robot. But I propose to couple that to the cues we give to each other. For example, when we are walking on the sidewalk and trying to find a gap, it's not just a matter of going and hoping for the best; it's also really looking at people's faces. It might be that the tall man in the dark sunglasses is in a bad mood, and you might be heading straight towards him, and he might not be willing at all to get out of your way. Picking up on those cues, and using them to adapt, is, I think, fundamental.

Concretely, that means I propose to have the robot pay attention to the implicit feedback in people's responses to its behavior, and then use that as the basis for finding more suitable behaviors through the interaction. I can talk longer about this, and I will, but this is really the meat of what I propose. The hypothesis is that responding to people's social feedback cues can find, through the interaction, socially acceptable navigation in the face of unpredictable and unknowable situations. I envision a situation where, if the robot is driving and someone is just not in the mood to get out of its way at all, the robot would still be able to detect from their behavior that they are not in the mood to adapt to it, and subsequently the robot would adapt more, or perhaps even find a way to convince them to still get out of its way for a bit.

That brings us to the next big question: how are we going to make this real? The first trick is always to zoom in. I actually think this kind of mechanism might be feasible for a lot of the interactions we have with physical systems, but to keep things feasible, I zoom in on small mobile robots, such as delivery robots navigating the sidewalk, trying to be socially acceptable in terms of time efficiency, safety and comfort, and perception of the robot. That leads to a lot of different steps, and, to be honest, this is also a bit of an invitation to all of you, both to give thoughts and insights and to connect if you feel this might be related to what you're working on. To make this happen, we need to start with a clean baseline, then find a small and relevant set of feedback cues and a small set of behaviors, and use those to explore different interactions. A minimal sketch of the adapt-through-interaction loop I have in mind follows.
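As a toy illustration of that loop, acting, reading the implicit feedback the action draws, and adapting on the fly, here is a sketch. The cue-to-score mapping, the behavior set, and the update rule are all invented for the example; they are not the study's actual design:

```python
import random

# Candidate navigation behaviors and the robot's current preference for each.
behaviors = {"keep_going": 1.0, "slow_down": 1.0, "give_way": 1.0}

def feedback_score(person_response):
    """Map an observed human response to a scalar social-feedback score.
    Positive = the behavior seemed acceptable; negative = it did not."""
    scores = {"yields_smoothly": +1.0, "hesitates": -0.3,
              "blocks_path": -1.0, "steers_walker_aside": +0.5}
    return scores.get(person_response, 0.0)

def choose_behavior():
    """Preference-weighted random choice among the current behaviors."""
    total = sum(behaviors.values())
    r = random.uniform(0, total)
    for name, weight in behaviors.items():
        r -= weight
        if r <= 0:
            return name
    return name

def adapt(behavior, person_response, rate=0.3):
    """Strengthen or weaken a behavior based on the feedback it drew."""
    behaviors[behavior] = max(0.1, behaviors[behavior]
                              + rate * feedback_score(person_response))

# One encounter: act, observe the person's reaction, adapt.
b = choose_behavior()
adapt(b, "hesitates")
print(behaviors)
```

The point of the sketch is only the shape of the loop: no pre-decided yielding ratio, just behavior preferences that drift with the reactions the robot actually gets.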
Given the time, I will already give you a quick teaser of pre-study one: a clean baseline. What I've been talking about so far really emphasizes letting the interaction emerge from our responses to each other. For social navigation specifically, that also means we are often tempted to try our solutions, and the problem is that doing so immediately gives rise to new emergent behaviors. For example, if we have a robot that just tries to stay 1.5 meters away from humans, we get beautifully weird navigation behaviors: the robot flees from humans, humans adapt to the robot and the robot adapts to them, leading to wobbly paths and deviations. So my step one, the clean baseline, is: what actually happens if we take away all social behaviors from a socially navigating robot and just make a robot that basically ignores all humans?

I had started these explorations just before COVID hit, so unfortunately I can't yet give you full-fledged results, but I did make some cool observations that I want to share, because I hope they will provoke some interesting discussion. What we did: we made this small robot, which you see on the bottom right; it's actually a cardboard box, so we could relatively safely ignore humans. We had it drive around and observed when problems actually arose. We went in without assumptions and started looking at which conflicts such a robot actually runs into, and which problems we actually need to fix when we start with a clean slate. I'll share two observations, with the very important disclaimer that these are first observations, not proven results.

Observation one: the robot drove through the city, apparently on its own. It did stick a bit to the right, but it very much stuck to its own path; it did not slow down, speed up, or deviate from its trajectory for anyone. This was the city center of Delft, so it wasn't too crowded, but a decent amount of people were around. Interestingly, to me at least, and especially given all the literature I've read so far, the robot could follow its trajectory completely unimpeded. In fact, people actively did their best to get out of its way, and that wasn't even very noticeable: some made very subtle changes to their trajectories, some slowed down just a bit, but there were also people who actively steered their walking aids out of the path to make sure the robot could keep going. I think that's quite interesting, because it might suggest that our robot, which was very clear and straightforward in its path, actually triggers people to adapt more to the robot.

One more observation, and then I hope to move on to the questions. In contrast to what I just said, we also noticed that when groups of people were standing still, for example because they were chatting, and there was enough space to navigate around them, then approaching them straight on, which we had the robot do because, well, it ignored people, somehow felt to them as if the robot was engaging with them. So we found a big difference between people who were moving and people who were stationary. And that's very interesting: they engaged with it, started taking pictures, and seemingly started exploring how they could play with the robot, which very much looked like they were teasing the robot and trying to break it. Just like the previous observation, this is an interesting twist on what's already known: if you throw your robot out there making it avoid people, you would never run into this situation, so you would have avoided the problem, but you would also fail to see that it arises specifically when people are standing still.

What I would really love to do now is hear your thoughts on this. I might have something more to wrap up at the end, but I think this is a very good point to go back to seeing each other and having some questions and some interaction, because in the end I think that's the most interesting bit of being here with you. Thank you.

Thank you, Jered, that was great. The floor is open for questions; you can post in the chat, just ask, or raise your hand. Luciano, you wanted to ask something?

Hi Jered, thank you, very interesting work. I have a question regarding the social cues. I understand the next step is to model the social cues from the humans to the robot; is the idea then to mimic them, that the robot would be giving the same cues to the human? How do you see this interplay?

That's a very interesting and rich question, which means I don't have the full answer yet. I think the first step is actually to understand the cues that humans are using: not necessarily to mimic them, but to understand and be able to read from people's reactions how they feel about the robot. To give one example: another observation we made was that sometimes people would deliberately change their path so that it would collide with the robot. They would sort of challenge the robot and see what would happen. That's a very interesting cue, and it would be very relevant for a robot to be able to detect: ah, I think this person might be challenging me. Also because, in the end, we found that if that happened and the robot just kept going, most of the time people would jump out of the way at the last moment, figuring: yes, the robot is not going to stop for me; my teasing didn't work. So in that respect the first step is really to detect the cues and use that to change the robot's behavior. Of course, changing the robot's behavior will also give cues to people, and an awareness of that, and of how to steer it, is something we will really have to figure out as we go, because it's such a complex and emergent situation that I find it challenging to make real predictions about what will happen there.

Yeah, I understand. Thank you for sharing; this is a really interesting field. Thank you very much.
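As a toy example of detecting the "challenging" cue just described, one could check whether a person's successive headings keep re-aiming onto a collision course with the robot. The 40 cm miss distance, the fixed-position approximation, and the voting rule are all invented for illustration:

```python
import numpy as np

def time_to_closest_approach(p_rel, v_rel):
    """Time at which two constant-velocity agents are closest."""
    denom = float(v_rel @ v_rel)
    return 0.0 if denom < 1e-9 else max(0.0, -float(p_rel @ v_rel) / denom)

def seems_challenging(person_pos, person_vel_history, robot_pos, robot_vel):
    """Heuristic challenge cue: the person's last few velocity samples
    (position held fixed, as a crude approximation) nearly all put them
    on a collision course with the robot."""
    on_course = 0
    for v in person_vel_history:
        p_rel = person_pos - robot_pos
        v_rel = v - robot_vel
        t = time_to_closest_approach(p_rel, v_rel)
        miss = np.linalg.norm(p_rel + v_rel * t)
        if miss < 0.4:                      # would pass within 40 cm
            on_course += 1
    return on_course >= len(person_vel_history) - 1

# A person 4 m ahead whose successive headings keep re-pointing at the robot:
history = [np.array([-1.0, 0.1]), np.array([-1.0, 0.0]), np.array([-0.9, -0.1])]
print(seems_challenging(np.array([4.0, 0.0]), history,
                        np.array([0.0, 0.0]), np.array([1.0, 0.0])))  # True
```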
I think we have the next question, from Simeon.

Hi there. Sorry about the background noise; there's some construction work going on here.

Any robots involved?

Any robots involved in the construction? Oh man, I don't know. Anyway, my question relates to the fact that interactions between robots and people are, obviously, bi-directional. To what extent should we focus on the robot avoiding people, versus the humans getting cues to avoid the robot? Where does the onus of that responsibility lie? While you were presenting I was thinking of Star Wars, with all these droids weaving in and out of people at certain points: those are droids avoiding people, not people avoiding droids. Given that humans were on the street first, should the robots be avoiding people, or is it really this bi-directional interaction we're looking for?

A rich question again, which is good; that's why I really enjoy speaking here. I think it has to be bi-directional in the end. If our robots would constantly give way to people, they wouldn't be fulfilling their function, and they might actually end up worse. The example I already gave, of the woman getting stuck in the street because the robot was waiting for people at the curb, demonstrates that it can actually be more problematic not to engage in the interaction, to say "I'm subservient, I'll just hold back", rather than engaging in the back and forth. So I would strongly argue that we need to engage in the back and forth, and not have robots always defer to humans, unless perhaps we're on a big starship with very few people around and just a few small robots.

That of course still leaves open where the balance should fall: should the robot yield most of the time, some of the time; how do we balance out who takes dominance in the interaction? Again, I think this is something to figure out as we go, and something that may change. Perhaps at this stage people are quite willing to accept robots; all the examples I gave were of people who were mostly enthusiastic, "oh, a robot, cool new thing". That might change massively if there are fifty robots on the street and you have to weave your way through them; you might be far less inclined to give the robots right of way. And I actually think that makes it more important to listen to the cues and figure this out from people's reactions: not to preset that the robot yields fifty percent of the time, but to have the robot stay in tune, stay attentive to how people respond to it yielding or not, and use that to decide whether it should yield. Perhaps during COVID our robots would learn from those reactions to be a bit more gentle and eager to give leeway, while once corona is over and we get closer together again, they might become a bit more dominant. So the answer, I think, is that we really need to stay alert and figure this out through the interaction rather than pre-deciding it.

Yeah. I can also imagine we might get something like a social optimization problem that changes dynamically in time, so that the optimal strategy for the robot and the humans will change; it's about being optimal in the movement of basically all actors involved.

Yes. Okay, cool, thanks.

If there are no more questions, I'd like to follow up myself, quickly, on Simeon's point about the dynamic nature of this kind of interaction, which we don't know for sure yet but can foresee evolving. My question is basically: what do you think would be the right way to anticipate what these dynamics would look like? In some settings it would be relatively safe to just put the robot out there and measure how it interacts with humans, how human behavior responds to the robot's actions, and how the robot adjusts in response: that approach of putting robots and humans in the wild and studying how they co-adapt is one approach. The second approach, which might be more suitable when it's not so safe to put real humans and real robots together, think autonomous vehicles, would be to do exactly the same thing in simulation, assuming we have really, really good models of virtual humans that we could use to test our autonomous vehicles, or any other robots with significant implications. But then there's the fundamental question of whether it will generalize from the simulation, however advanced, to real human-robot interaction. Where would you put yourself on this spectrum, from studying behavior in the wild to generalizing from virtual testing, or is there something else you have in mind?

I'll split my answer in two. First, it's very important to realize that, yes, it's very cool what I'm arguing for, being responsive, the back and forth, all those interactive bits, but of course that's not the whole deal. We cannot just throw a robot out there, have it respond to people's facial reactions, and hope for the best. I am not arguing we should abandon everything that's been done so far in social navigation on predicting people's paths and using that to think ahead. In fact, I think we should end up combining them, because they will strengthen each other: we find paths as we go, but we also predict in rough strokes where we want to go. At a crossing with a lot of people we might decide "we're going to cross mostly to the right", and then figure out the details of crossing mostly to the right as we go. So in a practical sense, the end solution will very much be a mix. That's part one.

Second, I think simulations are a very good tool for those kinds of predictive models, where you train your model to get a feeling for how humans will react in the situations where you can predict their reactions. But the problem with models is that they are limited: firstly by the ones making them, and secondly, to Simeon's earlier point, because people adapt as they go. Even if you hypothetically had a perfect model, which I don't think is possible, the fact that people will respond to the robot and change the way they act over time means that the mere fact that you have introduced the robot will make your model invalid. So again, it's a very good approach to get those generalizing models, but I do think we need this extra bit of interaction to fine-tune the actual last bits on the street. I hope that answers your question.

Not completely, but I'll leave it for later discussion if we have time. I think Jens has the next question.

Thank you. My question also goes a bit in the same direction: you were talking about the interaction and the back and forth, so I was wondering how much you want to go into theory of mind and those kinds of things.

I hope you all forgive me for giving slightly longer answers, but I have a really cool anecdote about this, so I'm going to share it. Back in the days when AI was mostly symbolic, there was this big problem of common ground: the idea that I know that you know that I know that you know that the sky is blue, ad infinitum. This turned out to be quite a problem, because since it goes to infinity, it's very hard for an AI to prove it all the way down the rabbit hole. There's something like a decade of people publishing papers struggling with how to prove that something is common ground, that we all know that the sky is blue. Until at some point there was this really cool paper by Garrod and Pickering, where they said: no, wait, let's turn it around. They simply assumed that everybody knew all the common ground, that everybody knew the sky is blue, and then watched for errors popping up: start from the assumption, look for feedback showing you were wrong, and if so, fix it, tell people "oh, you didn't know? the sky is blue", and continue assuming that they know. I think that's a really cool story, both because it shows that people wiser than I have worked on this, and, now to your question, because it shows that theory of mind is very interesting and rich, but implementing it in AI gives you very big problems: there's the whole complexity of actually modeling what someone else knows, especially if what they know also involves what you do, and that whole back and forth. So I definitely think theory of mind is relevant to understand what's going on here, but to implement it, I really like something closer to the Garrod and Pickering approach: let's just assume everybody knows what's going on, and correct ourselves as we go. A bit of a windy answer, but I hope it works.

Thank you.

I'm eager to contribute to the discussion, but if there are other questions I'll give way; let me know. Okay, I'll go for now. Following up on the theory-of-mind problem you see there: have you considered looking into more game-theoretic approaches? They nicely avoid this problem of infinite levels of hierarchy for reasoning about each other's actions by looking at equilibrium solutions, which assume everybody has perfect knowledge of each other and will converge to a situation that is good for everyone in one way or another. You could try some kind of Nash equilibrium or other equilibrium concept, make your robot follow that, and then, as you say, hope for the best.

I think this is actually quite similar to your previous question. On the one hand, we definitely don't want to go into social-navigation situations blindly; we want to predict a bit of what's going on, make some assumptions, and start with a good guess. At the same time, as I unfortunately learned when
playing games with other people, most people are not that optimal in their decisions. Me neither, by the way. So yes, I think game theory is a very good starting point, and it's very rich, but when you throw it into the real world, situations will pop up where people are less perfect or predictable than you would hope. What I have been proposing might be a way to handle those specific situations: the situations where game theory breaks down because people are imperfect, could be in a bad mood, could have any reason not to obey your predictions.

Okay, thanks. Roel has a question.

Yes, thank you, Jered; an interesting presentation on a fascinating topic. I'm building on a few people here, including the last answer. Some of my former colleagues in Berkeley developed a way for robots to interact with pedestrians based on a technique called reachability, probabilistic reachability. You basically have a game-theoretic approach and some model of human pedestrians predicting their most likely paths, and you try to navigate around those; different pedestrians can have different parameters and yield different predictions. And when you see behavior that doesn't align with the models you have, you use this reachability technique, which is used to figure out how to stay in a safe operating zone, in a dynamic sense: the safe zone basically shrinks if a pedestrian shows behavior you're not familiar with, so you become more conservative, you might slow down at that point, and you might want to update your models. One of the critiques is that it can be quite conservative, or it can be quite easy to become super conservative and just stop, and that's the moment where you either want to collect more data and make your models better, or you want to say: in these moments there's so much uncertainty in my model that, to stay safe, I want to interact. So I think there's a really nice complement there. The paper is called "Probabilistically Safe Robot Planning with Confidence-Based Human Predictions". My question is: would it be interesting to see when that technique breaks down or becomes too conservative, and what kinds of cues or interactions you would want to design to bring together those two worlds you were just talking about? Just an idea, and I'd be happy to discuss it offline if that's of interest.

Very cool, yes. It sounds like that would indeed be a very good indicator of when it is a good moment to switch to more of the interaction and more looking into it. That's my main answer: it sounds like a very interesting thing to look into, and I'll definitely look back at the recording to see what you said and look up that paper. Very interesting, thank you.
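The interplay just discussed, confidence-scaled conservatism with interaction as the escape hatch, might be sketched roughly like this. This is a deliberate simplification: the cited paper uses Bayesian model confidence and reachability analysis, while every rule and number below is made up for illustration:

```python
def clearance_radius(base_radius, model_confidence):
    """Grow the region kept clear around a person as confidence in our
    model of them drops (a crude stand-in for confidence-aware prediction)."""
    return base_radius / max(model_confidence, 0.1)

def plan_step(model_confidence, min_speed=0.2):
    radius = clearance_radius(0.5, model_confidence)
    speed = max(min_speed, 1.0 - radius)   # more uncertainty -> slower
    if speed <= min_speed:
        # Too conservative to make progress: rather than freezing,
        # switch strategies -- gather data, or engage in interaction.
        return "switch to interaction"
    return f"drive at {speed:.2f} m/s, keep {radius:.2f} m clearance"

for conf in (0.9, 0.5, 0.2):
    print(conf, "->", plan_step(conf))
```

Run on these made-up numbers, high confidence lets the robot drive, and low confidence drives the plan below its minimum useful speed, which is exactly the moment the question identifies as the hand-off point to interaction.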
Maybe, and I have a dog here that's about to bark, sorry. [Laughter] She's seeing something; she's trying to update her own model of whether it's safe or not. My question, or maybe prompt, is: what do you use as the context for designing social cues? I think using other techniques can actually be a very rich place to be creative about the kinds of social cues you need. So maybe that's also a question of how you think about that.

Ah, good, yes. To relate it to the situation: when we get into situations where people don't correspond to our models, what we are, and should be, most interested in is how they relate to us, and I think those are the cues I would be most interested in. I already mentioned one: someone testing the robot, challenging it by adapting their path so that it actually becomes a run-in. But it could also be more subtle things, like people very actively avoiding eye contact with the robot, or looking at the robot and being a bit more scared. It could be a big smile, where people seem very enthusiastic and eager, which for many robots is a good thing, but not for most delivery robots. So I think the main cues will be indications of how people might be feeling towards the robot, because that will be the most informative for deciding how to respond differently to those people from what the robot would normally do.

Thank you. Can I follow up on the social-cues question? It does sound very promising, and I can see how it could be useful. At the same time, when you mention facial expressions, it feels like a slippery path, because there is quite a lot of recent research showing that detecting emotions and internal states from facial expressions is not reliable at all; there are very high individual differences. There was an article titled something like "AI emotion recognition can't be trusted". So what would you think of that? If a person smiles, it might indicate a positive attitude towards the robot and a wish to approach it, or it might indicate "I hate this robot" while still looking like a smile, right?

I can say two things about that. The first is that my guess is the cues to start with are not facial expressions anyway: you need very high-quality cameras, which don't have the range you can practically use when driving a robot around. A much better starting point in this specific context would be things like trajectory and body pose. That said, my guess is that one of the reasons those problems arose with facial expressions is that we don't use facial expressions just to express our inner worlds; we use them to communicate. If someone is smiling at me while I'm walking on the street, that might not mean at all that they are happy to see me, but it still signals something, and it might still mean that they are okay with me getting closer to them while we're crossing the street. So I completely agree with the newer view that facial expressions are not meant just to express our emotions and internal states; I think they are meant to communicate, to make other people aware of something. And that communicative meaning of facial expressions, and of all kinds of social cues, is actually what we are interested in here, more than trying to understand the underlying emotions or internal states. Those are the two sides of it for me.

Okay, thank you. Any more questions, thoughts, suggestions? I don't see anything, no hands. Did you mention you also wanted to share something else at the end of your presentation, or do we wrap up at this point?

I think this is actually a better point to wrap up.

Okay. If you're curious, I think Roel posted the link to the paper he mentioned, I also posted it a minute earlier, and Derek posted a link to another paper, so check out the chat for the links. If there are no more questions, then thanks, Jered, for the really nice talk; the discussion was really lively and I enjoyed it a lot. And thank you all very much. Oh wait, we do have a question, a final one.

So I was thinking about the different things that were discussed, game theory, theory of mind, a lot of different interesting approaches. But are we considering a specific goal for the mobile robot? Say the goal of the robot is to go from point A to point B as fast as possible; are you assuming some kind of goal for the agent, the robot itself? And if that were the case, whatever social cues you use, wouldn't the robot just push? Wouldn't it say: maybe if I imitate the kind of pushy person who pushes everyone, arrogant, doesn't care about anything, that would be the best thing to do given my goal? How do you see this within a machine learning framework, where such cues could be learned and used for a given goal? Please share your thoughts.

Honest answer: I would be super happy if we could get to the point where we can make a horribly pushy robot, because if we got to that point of understanding, we would know how pushy people on the street manage to walk that quickly and get people to move out of their way. That said, typically the goal for these kinds of delivery robots is on the one hand time efficiency, being quick in their deliveries, but on the other hand also fitting in, not offending people, being sufficiently socially appropriate. Ideally that results in a robot that's not fully pushy but that can be pushy if it needs to be; if a bunch of teenagers is harassing the robot, it can say: okay, I'll switch to my pushy mode, make my way through this bunch of horrible teenagers, and get where I need to be. So I think we need several different behaviors, which is logical, because we have several different things we want to optimize. I hope that answers the question a bit.

Just a quick follow-up: who decides those objectives? I know there are a lot of different ways to do it, and there's no right answer, but who should decide?

I think the answer is that this is one of the reasons I'm super happy to be at the design faculty now. I still very much have an engineering mindset; I'm trying to figure out optimal solutions, still trying to model this whole interaction stuff. The amazing thing about designers is that they somehow often find, or know how to find, a balance in this. How they do it, I'm still figuring out myself; perhaps they don't know explicitly either. But it's one of the reasons it's really nice for me to be at a design faculty: to see how they figure out that balance between the different needs and requirements.

Great, cool, thank you very much.

With this ode to the designers, I guess we wrap up today's talk. Thanks again, Jered, and thanks everybody for the discussion. See you next week; oh wait, we don't have a meeting next week, so the next meeting is in two weeks, and then we will have Javier Alonso-Mora presenting about multi-robot navigation, which is actually related to what Jered was talking about today. Okay, good, bye, thank you.
[Music]", "date_published": "2020-09-16T13:34:08Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "4186845e09e426082fe50553dc687a77", "title": "To use the human or not to use the human (Erwin Boer) - 1st AiTech Symposium", "url": "https://www.youtube.com/watch?v=FeoyCUf9MkQ", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Thank you, it's a great honor to be invited. I always like to stretch myself, and David has already given a beautiful background of a lot of the research that he and I have done together: a lot of research in terms of human modeling. That human modeling, understanding how humans control systems, can be used to diagnose people: a person with glaucoma controls differently because their peripheral vision is different, so in the context of driving, that understanding lets you use these types of models for diagnostic purposes. We've used human models in the types of shared control that David talked about, and we're also using them as templates for autonomous control: we look at how people adapt to the environment, how they perceive risk, how they take other road users, vulnerable road users, into consideration, how early they slow down, how they swerve around them. That is very different from what a typical engineering system might do, because it starts from different assumptions, so we try to take it into consideration from the start, beginning with well-established human models but shaping them based on how people adapt to the environment.

Like I said, I'm going to stretch myself a little and talk about a number of things I haven't actually done research on, but which I think are important in the context of meaningful human control, and I'll even try to apply some of the knowledge-tracking aspects from the meaningful human control paper. So the question is: to use the human or not to use the human? David advocated that we should have shared control, that humans should always be part of the design loop and the decision loop. But are there maybe situations where
humans might want to drive autonomously, or where humans really might want to, or might need to, drive manually? I want to break up that space: how can you think about it, how can you engineer it, what kinds of decision techniques do we have to talk about it? The previous speaker talked about whether we need to be present or not. This is Professor Ishiguro, who has made a robot looking just like himself. This robot can go to conferences, stand in front of an audience, and speak as if he's there: either he speaks directly through the robot, or the robot reads something pre-programmed. This is clearly a case of pushing toward replicating a human, which is an interesting idea in AI and robotics from a technological point of view. But is that really what we want? Do we need humans, or do we need something that works well together with humans and augments them?

So, towards meaningful control: in the incarnation of autonomous driving, there are different ways to do that. Sometimes the human wants to drive: I bought a Porsche, I want to drive this thing manually, not autonomously. Sometimes the system: when can the system drive autonomously? If the system is confident enough, maybe, but what does that mean? We have levels of automation, blanketly applied everywhere; we'll talk about that in a second. Sometimes you want both: David very much advocated the combination of the two. And sometimes it's very much a symbiotic relationship. David didn't really touch on that, but when you couple a human and a machine you have this symbiosis where you can teach each other, nudge each other, protect each other, and together you learn under what circumstances the robot or machine is better, and under what circumstances the human is better; the human can teach the robot and vice versa. It's a beautiful mechanism, very much the way we interact with humans: we challenge each other, teach each other, nudge each other, protect each other. So the question really is: when to use which mode?

One thing that already came up is the notion of bounded rationality: we don't know everything, and do we even need to? Herbert Simon coined the term bounded rationality, and from it came a different type of decision engineering called satisficing decision-making. It's not about trying to maximize everything that is important; it's not, as in optimal control, about maximizing performance and minimizing cost. It's really about setting aspiration levels, and once you have reached those, that's good enough. That's extremely important in critical situations, because in a critical situation you don't have infinite time to find a better or more optimal solution. To be able to make decisions quickly, you need a very good understanding of what counts as enough evidence to make that decision. Later on I'll talk about how we can quantify that evidence accumulation, and how we might talk meaningfully about whether the information is sufficient; a minimal sketch of the idea follows.
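As a minimal illustration of satisficing evidence accumulation, act once an aspiration level is reached, otherwise keep collecting until a deadline, here is a sketch. The aspiration level, deadline, and evidence stream are invented numbers, not from the talk:

```python
def satisficing_decision(evidence_stream, aspiration=0.8, deadline=5):
    """Act as soon as accumulated belief reaches the aspiration level;
    at the deadline, act on whatever option currently looks best."""
    belief = 0.0
    for step, e in enumerate(evidence_stream, start=1):
        belief += (1 - belief) * e   # each observation closes part of the gap
        if belief >= aspiration:
            return f"act at step {step} (belief {belief:.2f} meets aspiration)"
        if step >= deadline:
            break
    return f"deadline reached; act on best current option (belief {belief:.2f})"

print(satisficing_decision([0.3, 0.4, 0.6]))   # aspiration met at step 3
print(satisficing_decision([0.1] * 9))         # never met; deadline forces action
```

The contrast with optimization is the stopping rule: the loop stops at "good enough", not at "best possible", which is what makes the scheme usable under time pressure.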
Some of the questions that come up: what evidence do we need, and what are we actually collecting? When you look at an AI with sensors in and actions out, you don't really know what the AI is using to drive its decisions. It's important to know that, especially from a meaningful human control point of view. How much evidence is enough? When do we believe we can leave the system alone and let it act autonomously? How much time do we have? And can we get the evidence in other ways? We have discussions about whether, with an AI or a robot, everything needs to be inferred; maybe you can just talk to the robot, or just instruct it to do something. There are all sorts of forms of interaction there. From a technological point of view we try to make these things as autonomous as possible, but if you instead make them more socially interactive, you get a whole range of ways to explore and teach the robot. It's called epistemic probing; you also see it as cars come to an intersection: you nudge a little forward to see how the other responds. You probe the system. Sometimes you just want more information.

On really augmenting the human: this is Neil Harbisson, one of the first people officially recognized as a cyborg, whose visual system was enhanced. He is colorblind, so he put a camera on his head, and now he can sense from infrared to ultraviolet; he has a perception of the world that's much richer, or at least different, than what other people have. Here you have technology directly integrated with the person, and if you have all this AI symbiotically working with you, that's very much what it feels like: a greater consciousness that we can all tap into, for example.

So, to use a robot or not: we talked about closed environments, where we can do quite a bit, predict everything, control everything, like this hotel in Japan; it exists, you can go there, you're served by a dinosaur, it's wonderful. But in an open environment, where we can't understand everything, we need a much richer way to talk about what it means, say, to send my child to school with a robot: what are the conditions under which I would accept that, and under which not? Context may become very important.

This is one of the ugliest slides ever created; I think about changing it all the time, but I don't think I will. It's really just capturing the coupling of a human, at the top, with a machine, and you see the duality: when humans interact there are all kinds of ways in which we share information, mimic each other, cooperate, and so on. If you have that duality with systems too, then all the benefits of how you interact with a human you can also have with the system, and vice versa, as I mentioned: the learning, the sharing, the nudging, the protecting.

One of the issues that has been discussed in aviation, which has looked at automation much longer than driving has, and from which a lot of the research we're doing now comes directly, although it's a whole different domain, is handover. Take the rejected takeoff: an engine fails or catches fire; do you continue the takeoff, or do you abort and slam on the brakes? It all depends on the speed you're going, the length of the runway; a whole bunch of factors come into the picture. Does the human trust the sensor systems? Is the system sufficiently aware of its own limitations? Does the system actually have a good understanding of the dynamics of the airplane with the weight that's in it, and so on? Professor Inagaki looked at this problem and brought to bear a technique called Dempster-Shafer evidence theory, to combine all these different bits of evidence into a decision on whether to take over or not. So: how to bring all these different factors into consideration? I want to focus on that a little.

Before I do, trust is a very important, and very dynamic, aspect. When you start interacting with a machine you have some a priori trust in how you interact with it: I believe this robot is capable of all kinds of wonderful things. Then you interact with the robot, maybe it doesn't quite work out, and your trust very quickly dies. Whereas if this child has the same technology built in, your expectations are a lot lower; you interact, and you build a mutual understanding that is much better calibrated to where you ultimately end up. It's the same with technology: how do companies present it, and how does the media react to one little failure? The coverage can make us believe autonomous cars are absolutely awful: for every autonomous car crash, I could also point at the hundreds of people killed by drunk drivers in the same period. It very much depends on how the media presents the information. I'm not saying autonomous cars are safe; I'm very much with David that shared control is much better; but we can definitely influence that perception.

When I read the paper on meaningful human control, there is a lot of material in there, and this morning it was mentioned that it was too early in the morning to dive into the math; I think everybody is awake enough now, so perhaps we can do that a little. There's a notion called truth tracking: several conditions on truth and belief all have to hold. If you have a particular fact, say "the system is safe", and a belief about whether the system is safe, then there are four conditions that essentially need to be true for the belief to properly track the truth. Roughly: if the system is truly safe, then you believe it's safe; if the system is not safe, then you believe it's not safe; and then also the reverse, where if you believe the system is safe, then it actually is safe, and vice versa. You can extend that to beliefs that ultimately relate to actions. One way to write the four conditions is shown below.
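Writing $p$ for the fact "the system is safe", $B$ for the agent's belief operator, and $\Box\!\to$ for the subjunctive conditional, the four tracking conditions sketched above might be formalized as follows (this rendering is a gloss on the talk, not a quotation from the paper):

```latex
\begin{align*}
&\text{(1)}\; p \mathrel{\Box\!\!\to} Bp
  && \text{if the system were safe, the agent would believe it safe}\\
&\text{(2)}\; \neg p \mathrel{\Box\!\!\to} B\neg p
  && \text{if it were unsafe, the agent would believe it unsafe}\\
&\text{(3)}\; Bp \mathrel{\Box\!\!\to} p
  && \text{the agent would believe it safe only if it were safe}\\
&\text{(4)}\; B\neg p \mathrel{\Box\!\!\to} \neg p
  && \text{the agent would believe it unsafe only if it were unsafe}
\end{align*}
```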
Some practical implications: you need to actively establish evidence for these different facts in order to have a belief about whether they're true. There are two directions you can go: you can a priori assume it, or you can actually base it on knowledge, and that knowledge might come from somewhere else; it might come from the trust you have in a company, and so on. But some evidence needs to be established for "safe", and likewise for "not safe", and you then use this evidence to build a belief structure over safe and not safe. You interact with the system; sometimes it works, sometimes it doesn't; and in the end you end up with "I'm not quite sure", or "yes, I trust the system completely", or not at all. Then you combine these belief structures into a belief about whether the action is meaningful, or safe to perform, and the action depends on your confidence in your beliefs about these facts: whether you have evidence for it being safe or not safe. And there is a notion here that taps into the satisficing approach: you can act now, or you can wait a little and collect more evidence. You see a lot of systems where you just wait a little longer, collect more evidence, and base your decision on better knowledge.

When you talk about this evidence accumulation and finding out whether something is safe, there are different policies you can adopt. There is a policy of "safe until proven unsafe": a priori assume that something is safe until it's proven unsafe; I'll show you in a second an example where that was catastrophic. Or you say "unsafe until proven safe". Some of you might recognize that we have very similar things in the jurisdictional systems of different countries: in the U.S. you're innocent until proven guilty, but in Mexico you're guilty until proven innocent, so they put you in jail and it's up to you to prove you're not guilty. This may seem silly, but it is what we do too.

Okay, so now we need to understand whether a system is safe or not. How do we do that? There are two techniques. This morning Judea Pearl was mentioned, with his book Causality; he is very much a believer in Bayesian networks, and Bayesian networks have been extremely useful and extremely powerful. But are they really the most easily interpreted by humans? No. Are they perhaps the best way to integrate the evidence and beliefs people might have? Perhaps not. Dempster and Shafer, in the 1960s and '70s, came up with a possibilistic technique that was later named after them: Dempster-Shafer evidence theory.

Think about this case: a murder was committed, and you have evidence of strength 0.4 that this particular person is the murderer. The whole structure, just like probabilities, has to add up to one; we'll go through that in a second. How many of you are actually familiar with Dempster-Shafer evidence theory? Okay, four or five. In the Bayesian approach, the one we're all familiar with, the probability that this person is the murderer is 0.4, which means there's a probability of 0.6 that he's not. But what does that really mean? Do we have any evidence that he's not the murderer? No, we only have evidence that he is; the rest is just a mathematical convenience. What Dempster-Shafer evidence theory says instead is: we have some evidence, mass 0.4, that he's the murderer, and the rest goes to what's called the frame of discernment: the set of all the options among which we're trying to decide. So the remaining belief mass that you need to assign, 0.6, goes to "murderer or not murderer". This is important, because we have hard evidence of weight 0.4 that this person is the murderer, and there is a possibility, which is why the approach is called possibilistic, that this 0.4 might become one if all the further evidence we collect points in the same direction; or there might be conflict, where additional evidence points the other way.

I don't have to go through the whole formalism here, but essentially: you have a frame of discernment, here safe, unsafe, and the combination of the two, which gives the power set; you have a basic assignment function; and you have a way to combine these assignments. If you have a belief structure, you can have two experts providing evidence, or you can have an a priori belief yourself and then receive new evidence, and you combine the new evidence with what you already know. Whenever you have evidence for sets A and B from the power set, you combine them toward the hypothesis you're trying to compute, say "safe", by multiplying and summing the masses whose intersections yield "safe". And then you have a null set, where you have evidence for safe and evidence for unsafe at the same time, which is a conflict. The interesting thing is that conflict is actually a very nice construct. If you have conflicting evidence, what do you do with it? This is one of the reasons Dempster-Shafer theory hasn't been applied very widely, although the French military uses it for a lot of its data fusion and sensor fusion. You can say: I'm ignoring the conflict and just redistributing it over what I already know; or you can say: my entire conflict goes to ignorance, which is very useful, because then you say, okay, I have a little bit of evidence that this is the murderer, and the rest is conflicting, which is basically equivalent to ignorance.

If you look at a driving situation and at the evidence we have for safety: it tracks lanes accurately, but maybe it only does that in nice weather; it tracks the lead vehicle accurately and at stable, safe gaps, and so on; and then in rainy weather it doesn't do so well, and in high traffic density it behaves differently. Very quickly a very complex set of evidences and confidences emerges, and it is very situation-specific. One thing I just don't see enough when people talk about levels of automation is context: there's no mention of any context. It's very clear that an autonomous car on a beautiful, well-striped road in clear weather should be able to drive autonomously, with the same accuracy as people, and probably higher, because people get super bored on those long, straight roads. Those are situations where maybe the human doesn't have to be in the loop. So we can collect evidence, we can have the human collect the evidence, and then decide: do I trust the automation or not? It can be evidential, or the system can provide some a priori evidence for it. A small implementation of the combination rule is sketched below.
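Here is a minimal implementation of the combination rule for this two-hypothesis frame. The masses are illustrative; `conflict_to_ignorance=True` corresponds to keeping the conflict as ignorance (sometimes called Yager's rule), while `False` gives the classic Dempster renormalization:

```python
from itertools import product

def combine(m1, m2, conflict_to_ignorance=True):
    """Combine two basic mass assignments over the frame {safe, unsafe}.
    Focal elements: "safe", "unsafe", and "either" (the whole frame)."""
    intersect = {("safe", "safe"): "safe", ("safe", "either"): "safe",
                 ("either", "safe"): "safe", ("unsafe", "unsafe"): "unsafe",
                 ("unsafe", "either"): "unsafe", ("either", "unsafe"): "unsafe",
                 ("either", "either"): "either"}
    out = {"safe": 0.0, "unsafe": 0.0, "either": 0.0}
    conflict = 0.0
    for a, b in product(m1, m2):
        mass = m1[a] * m2[b]
        if (a, b) in intersect:
            out[intersect[(a, b)]] += mass
        else:                       # safe x unsafe: empty intersection
            conflict += mass
    if conflict_to_ignorance:
        out["either"] += conflict   # keep conflict as ignorance
    else:
        out = {k: v / (1.0 - conflict) for k, v in out.items()}
    return out

prior = {"safe": 0.8, "unsafe": 0.0, "either": 0.2}   # strong prior evidence of safe
obs   = {"safe": 0.3, "unsafe": 0.3, "either": 0.4}   # a conflicted observation
print(combine(prior, obs))
print(combine(prior, obs, conflict_to_ignorance=False))
```

The two print lines show exactly the contrast the talk draws: renormalizing the conflict away inflates the committed masses, while sending it to ignorance keeps an honest record of how much is still unknown.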
One of the other nice things about Dempster-Shafer evidence theory is the notion of belief and plausibility. The belief is essentially all the hard evidence you have for the system being safe. The plausibility says: everything I don't know yet, everything still in my ignorance, could end up assigned to safe or to unsafe, so my plausibility is the maximum amount of belief I could have given the unassigned beliefs that are still on the table. You can have the same thing for unsafe. So now you have a very nice way to talk about how much evidence you have and how large it could become, and you can do nice simulations with that, depending on what type of combination rules you use. Say you start with 0.8 evidence that it's safe, nothing for unsafe, and the rest 'I don't know'; then you get observations, each with some mass on safe, unsafe, and ignorance; and with a whole bunch of these observations you can track how your belief in whether the system is safe evolves over time. If you ignore all the conflict, you will actually gravitate to 'okay, I think the system is safe' purely because there is a little more evidence for safe than for unsafe; very dangerous. If you take the conflict into consideration and keep it as ignorance, you see that your belief in safe is a bit more than 0.4; plausibly it could be almost 0.8, but we haven't found that evidence yet. So this essentially says: we don't really have enough evidence yet to conclude something meaningful here, to make the decision. Then there are two safety policies that NASA employed, or that can be considered NASA-style. One is safety preservation: it's unsafe until proven safe, so if you don't know whether a system is safe, you just assume it's unsafe; and this again very much depends on conditions: you operate only when the evidence for safe operation is sufficient, which makes a lot of sense. You can phrase it in Dempster-Shafer evidence-theory terms: the decision is purely based on the belief in safe, all the evidence that accumulates for it. The other is fault warning: safe until proven unsafe, so shut down only when there is sufficient evidence for unsafe operation, and operate unless the evidence for unsafe operation is sufficient. This is based on the plausibility of safe, which is one minus the belief in unsafe, always a much higher number; under this policy you would much more often go. So look at the Challenger accident in 1986. They had adopted the second safety control policy, safe until proven unsafe, but the context in which they launched that morning they had never experienced before: it was very cold, and they had no evidence about whether the o-rings would be brittle in that situation, but because of this policy they decided to go. Had they understood that this was a new context, that it was freezing and they had no evidence for safety, and had they adopted the other safety policy, this thing would never have launched. So again, context is very important: we can't just say 'this thing is safe under all conditions', it has to be very contextual, and again I don't really see that enough. Systems might be very aware of their own confidence in one particular situation, one particular context, whereas in others they might be completely uncertain about their own confidence.
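A small simulation in the spirit of the one described, reusing `combine` and `FRAME` from the sketch above; the 0.8 prior is from the talk, while the observation mix and the 0.9 thresholds are my own placeholders:

```python
# Thresholds and the observation mix are illustrative, not values from the talk.
SAFE, UNSAFE = frozenset({"safe"}), frozenset({"unsafe"})

def belief(m, hypo):        # total mass of subsets contained in the hypothesis
    return sum(w for s, w in m.items() if s <= hypo)

def plausibility(m, hypo):  # total mass of subsets compatible with it
    return sum(w for s, w in m.items() if s & hypo)

m = {SAFE: 0.8, FRAME: 0.2}                 # a priori: 0.8 safe, rest unknown
obs = {SAFE: 0.3, UNSAFE: 0.2, FRAME: 0.5}  # one noisy observation, repeated

for step in range(10):
    m, conflict = combine(m, obs)
    bel, pl = belief(m, SAFE), plausibility(m, SAFE)
    # Safety preservation (unsafe until proven safe) acts on Bel(safe);
    # fault warning (safe until proven unsafe) acts on Pl(safe) = 1 - Bel(unsafe).
    print(f"step {step}: Bel(safe)={bel:.2f}  Pl(safe)={pl:.2f}  "
          f"preservation={'operate' if bel >= 0.9 else 'hold'}  "
          f"fault_warning={'operate' if pl >= 0.9 else 'shutdown'}")
```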
So, to the effect on meaningful human control: there are two aspects, tracking, i.e. reasons-responsiveness, and tracing, i.e. ownership, and I want to focus a little on the tracking. The issue in the Challenger case is that an unsubstantiated assumption was used instead of factual knowledge: there was basically no evidence that this thing was safe, yet the adopted safety policy created an a priori belief that the system is safe. People make assumptions without evidence, and I think that happens a lot: we step into a car and we pretty much believe it's safe, and we have no way to really test that except to micromanage the thing, and, like David mentioned, after ten minutes we can't do that anymore; we just don't have the mental capacity to monitor something that basically works pretty well. So think about tracking for the Challenger case. The fact is: the system is unsafe in the freezing launch conditions, a new context that was never experienced. The belief is: the system is safe in the freezing launch conditions, because there is no counter-evidence; so we are basically completely context-agnostic. Then you have the two policies, which you can write out in this knowledge-tracking framework. 'If it were the fact that the system is unsafe, then the human would believe that the system is unsafe': this is not true, because under 'safe until proven unsafe' the human assumes the system is safe until the evidence for unsafe is sufficient; so that knowledge-tracking condition is violated. I really like the way meaningful human control lays these things out, and the fact that we can apply it to real-world cases is quite useful; these types of subjunctive conditionals, 'if it were the fact that...', are very useful there. The same holds for the other safety policy: 'if it were the fact that the system is safe, then the human would believe that the system is safe' is also not true, because there the human assumes the system is unsafe until the evidence for safe is sufficient. So both policies have an issue, and essentially this suggests, from the truth-tracking point of view, that any policy not grounded in evidence collection does not satisfy the truth-tracking criterion of meaningful human control, and neither does any policy that is not sensitive to context-specific evidence.
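Spelled out mechanically, the two subjunctive conditionals fail in opposite directions; this toy check (entirely illustrative, with each policy reduced to its default belief when no decisive evidence has been collected) makes that explicit:

```python
# Toy encoding of the truth-tracking check: under each fixed policy, what does
# the human believe about safety when accumulated evidence is insufficient?
def believed_safe(policy: str, evidence_for_unsafe: bool,
                  evidence_for_safe: bool) -> bool:
    if policy == "safe_until_proven_unsafe":   # fault-warning default
        return not evidence_for_unsafe
    if policy == "unsafe_until_proven_safe":   # safety-preservation default
        return evidence_for_safe
    raise ValueError(policy)

for policy in ("safe_until_proven_unsafe", "unsafe_until_proven_safe"):
    # Conditional 1: if the system were unsafe, but no evidence has been
    # collected yet, would the human believe it is unsafe?
    tracks_unsafe = not believed_safe(policy, evidence_for_unsafe=False,
                                      evidence_for_safe=False)
    # Conditional 2: if the system were safe, again with no evidence yet,
    # would the human believe it is safe?
    tracks_safe = believed_safe(policy, evidence_for_unsafe=False,
                                evidence_for_safe=False)
    print(policy, "| tracks unsafe:", tracks_unsafe,
          "| tracks safe:", tracks_safe)
# Each fixed policy satisfies one conditional and violates the other,
# matching the argument above: neither tracks the truth without evidence.
```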
We have the same thing in autonomous driving: there are lots of situations where the systems might be perfectly safe; under conditions like these, totally acceptable. But would we drive our Tesla here the same as there? Some people might, depending on what they actually know about the system: 'okay, it drives here, it drives there', and, not really understanding the technology or all of its limitations, they might try it anywhere. The same if you look at the Daimler picture-perfect image of what the autonomous-driving utopia looks like: the lighting conditions are perfect, no hard shadows, no rain, no snow, and everybody walks neatly in the center of the sidewalk, etcetera. In those conditions it works very well; if we trained everybody in society perfectly, we might be able to do autonomous driving, but the real world looks rather different. In that same context, a similar picture has popped up: the levels of automation that have been proposed for different decision-support systems and automated systems, ranging from 'the computer offers no assistance and the human does it all' to 'the computer executes a suggestion if the human approves it', and so on. So the human gets brought into the loop more and more depending on some rule, and where that rule comes from, whether it's the engineers saying 'we don't trust the system, the human really needs to have final authority', or a responsibility issue, is something people have thought about and applied to autonomous driving as well. You see it here with the five SAE levels of automation: from no automation, to assisted, to full automation, always, everywhere, under all conditions, where you can just send your car off with your kids without even looking at the weather, no matter where you live. We're barely anywhere near that point, and there are many conditions where it doesn't work very well. Meanwhile we're pushing towards a control society, and I think that's very dangerous. We tried that back in the nineties: we had dedicated freeways and we demonstrated that we can drive cars autonomously on them. If we have enough autonomous cars, we may see a shift to 'let's take part of the city and part of the road network where these cars drive, and keep pedestrians out, or make it more difficult for pedestrians, gate them so they can only cross at certain crosswalks'; you can control the environment and plan your cities around that, to essentially facilitate more or less imperfect automation. Do we start with the imperfect automation and control our cities, or do we say: we really need to wait for the automation to work well, so as not to completely disrupt the society that we want? Do we even know what society we want? Probably not, right now. And the safety policies, who applies those? Does the human decide what the system uses, or does the system decide what the human uses? Right now we're very much in the camp of 'technology decides, to a certain degree, and humans will adapt'. The human-factors world complains about this all the time: they are always brought in after a system fails, to 'understand the human factors' and work out how to instruct a human to deal with the limitations in the design; that is a completely wrong way of going about it. But this type of duality in design and engineering, looking at a problem from a systems point of view and from a human point of view, is a great technique for gaining insights, and David and I have used it a lot in terms of how to couple a human to a machine. So one of the things I'd like to propose: I've shown you that with Dempster-Shafer evidence theory, if you know what the important bits of information are, you can take all the evidence, combine it, and come up with a belief that the system is safe given a particular context. Then you can have a model where you say: in this particular context I have a high enough belief that it is safe, and therefore, if the human chooses to use automation, that's great; they can still maintain a cooperative driving style, or drive manually, so those options are always there. If you're in between, then it's really cooperation, and you get into the situation David also described, where the combination of both is better than either one individually.
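A sketch of that three-regime idea, with a belief/plausibility interval per context; all numbers, thresholds, and context names below are placeholders of mine, not values from the talk:

```python
# Illustrative context-conditional mode selection: per driving context we keep
# a Dempster-Shafer (Bel, Pl) interval for "safe" and pick who drives.
CONTEXTS = {
    # context: (Bel(safe), Pl(safe)) as produced by evidence combination
    "clear_weather_striped_highway": (0.92, 0.98),
    "rainy_dense_urban_traffic":     (0.45, 0.75),
    "rural_road_never_tested":       (0.05, 0.95),  # wide interval = ignorance
}

def driving_mode(bel: float, pl: float) -> str:
    if pl - bel > 0.7:   # mostly ignorance: no real evidence either way
        return "manual (not enough evidence to delegate)"
    if bel >= 0.9:       # strong hard evidence for safe
        return "automation available (cooperative or manual still allowed)"
    if pl < 0.5:         # evidence against safety dominates
        return "manual only"
    return "cooperative driving (human and automation together)"

for ctx, (bel, pl) in CONTEXTS.items():
    print(f"{ctx}: Bel={bel:.2f} Pl={pl:.2f} -> {driving_mode(bel, pl)}")
```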
And then, if there's really no evidence that this thing is safe: Google has driven something like 90% of its data around San Francisco; what happens when you put the car on a rural road in Iowa? It's a whole different experience, and we don't have any evidence there; it may be safe, but we don't know yet. We already talked about how you have all these automation levels with no context. Implicitly it's sometimes assumed that we start on freeways, but these systems don't have geofencing: you can drive them off the freeway and try them in the city. They might have speed limitations, say it only engages above 45; then I'll just drive faster than 45 in the city, and sure, I get the benefit of my system. There are lots and lots of technologies we could bring into the picture here, but from an economic, sales point of view it's not done. The other thing that has been mentioned a little is that systems need to become self-aware. I'll just throw that out here, but it's one huge thing with AI: how capable is AI of saying 'I'm very confident to recognize this group of people, or to drive autonomously in this situation, or to take over in that critical crash situation'? People are working on AI that has better estimates of its own limitations, and on understanding and traceability, but at the moment that is only used in academic labs. So let me leave you on the same note as David: there is so much uncertainty, so many things we don't know, and even so many engineering techniques we still need to try, to see what works best in terms of how to build evidence and combine it, so that you get a symbiotic cooperation, even using the human to teach the automation. I'm surprised not many more car companies do that; but that's also partially because you don't really know where it ends up, and then you need a watchdog around it, and who's going to design the watchdog? Catch-22. Thank you. — Thank you. So we have time for a couple of questions, of course with the box; may I invite... oops, sorry, there you go. — Yeah, thanks so much for your talk, that was very inspiring. I have a question: we've recently been trying to put meaningful human control inside a classic control loop, and one of the things we've been smashing our heads against is the quantification of the many normative values that are implicitly included in meaningful-human-control theory: all the knowledge, the moral knowledge and awareness of a certain user or agent in the system, and even things that are potentially easier to quantify, like safety. Since you have these decision structures and theories: do you have suggestions for how to get to that 0.8 certainty, or belief, that something is safe or isn't? How do you get to the quantification of the particular values that determine the assessment?
Yeah, it's extremely hard, and in some ways you really have to explore it in the real world; that's the difficulty. Many of these systems can't really be evaluated unless you throw them out into the real world, with the snow and the variability, etcetera, and that's why it's an infinitely more complex problem than the closed systems in which our AI and robots used to live. But we've thought a lot about what it means to be safe, and one of the aspects is that the human has time to take over. We have had workshops in the domain of honest evaluation: all of you who have designed systems probably evaluate them within the context for which you designed them, and you never really test them outside that context. Honest evaluation is to recognize that people actually use systems outside their context, and to see what the implications are there. For example, sensor failure in an autonomous system is very detrimental if the human isn't connected; it just takes the human way too much time to respond. You can go at it with reliability engineering and things like that, but it's really in the context, and I would try many of these systems with humans as sensors and evaluators in the loop, in complex situations. In the workshop yesterday we talked about micro-worlds, and this morning about a kind of open city-planning environment where you can try these different things and look at the dynamics. We have fairly good models of different responses, but in many cases we don't: when you introduce cell phones into a society, what is going to happen to people? We had no predictions. So in some ways it's very hard to have meaningful human control and to evaluate it, because we don't really know what the impact on society will be. It's good to think about it, and to have everybody think about it, and a lot of mistakes will be avoided; but at the same time, the implications, the dynamics, and the shockwaves it sends into society are very unpredictable. — Yeah, we'll be using Likert scales, for instance, psychometric methods, to assess some of those. — That's one way, and I'm sure there are many different ones. One thing I would mention quickly, though: take this satisficing approach versus the problem with optimal approaches, where you have to combine all these different factors, is it traceable, what's the impact on society, what's the impact on the human, and so on; you put a weight on each of those and try to maximize. As a control engineer, one of the things I always say is that for every behavior there is a criterion that makes that behavior optimal: you can change your weights and it becomes optimal again. If you instead take a satisficing approach, where all the stakeholders say 'this needs to be met', then unless that is met, it's not good enough, no matter what the trade-off against something else is. If we can establish those constraints and then take the satisficing approach, I think that's a much more practical, tractable problem than going with an optimal approach; and we've done that for flight planning and things like that, with noise abatement.
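As a tiny illustration of that contrast, with entirely made-up scores: a weighted-sum 'optimum' can be steered to any winner by the weights, while a satisficing filter just checks every stakeholder threshold:

```python
# Toy contrast between weighted-sum optimization and satisficing.
# Criteria scores per design option are invented for illustration.
options = {
    "design_a": {"safety": 0.9, "societal_impact": 0.4, "traceability": 0.8},
    "design_b": {"safety": 0.7, "societal_impact": 0.8, "traceability": 0.7},
}

# "For every behavior there is a criterion that makes it optimal":
# different weight choices crown different winners.
for weights in ({"safety": 0.8, "societal_impact": 0.1, "traceability": 0.1},
                {"safety": 0.2, "societal_impact": 0.7, "traceability": 0.1}):
    best = max(options,
               key=lambda o: sum(weights[c] * options[o][c] for c in weights))
    print("weights", weights, "->", best)

# Satisficing: stakeholders set hard thresholds; anything below any one of
# them is out, and no trade-off against another criterion can save it.
thresholds = {"safety": 0.8, "societal_impact": 0.3, "traceability": 0.6}
acceptable = [o for o, scores in options.items()
              if all(scores[c] >= t for c, t in thresholds.items())]
print("satisficing-acceptable:", acceptable)
```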
— Can you throw the box to the right-hand side? Thank you. — So, with this control theory you are already inside a system where, as you indicated, you operate with formulas, with tools, and so on. Let's take as an example the 737 MAX: who should be involved in avoiding such issues, where potentially Boeing knew there are critical situations which you may label unsafe, but presented the airplane to the world as safe? This is outside the formalizations; there is behavioral psychology, trying to take advantage of certain situations for economic benefits, and so on. So, as a control-theory person: who is responsible? Is this facet also in your sphere of interest? — Well, in this particular case I think it was just a lack of redundancy in the sensor system, so that is definitely a case where an engineer made a shortcut, probably because of cost implications, to not have the redundancy that you really need in a safety-critical situation. Then there is the question of whether the redundancy comes from a human, if it is easy for a human to assess, or whether it comes from completely different sensor systems. There you can apply this type of belief structure: you need sensors that can detect whether another sensor is actually failing or not, partially through consistency checks, partially through modeling, partially just through the dynamics you have observed in the past. So there are ways to do this type of thing, but in this particular case it was simply redundancy that wasn't there, and then the pilots didn't have the time to figure it out with these very complex systems. — Thank you very much. Everyone, let's thank the speaker again. [Applause]", "date_published": "2019-10-30T09:58:50Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "dd71ef4892bfccecaf6d608eefa65919", "title": "Emiliano De Cristofaro: Understanding the Weaponization of the Web via Data Driven Analysis", "url": "https://www.youtube.com/watch?v=dQkZOMdFiaU", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Today I'm going to talk to you a little bit about the work we've been doing with a number of colleagues, first funded by an EU project on cyber safety, and now within a sort of virtual international lab with several members scattered all around the world, called the iDRAMA Lab, on what I generically define as the weaponization of the web. These are a set of issues related to cyber safety, or, as they like to call it in the UK, online harms, and I'm going to try to walk you through some of these issues using some case studies. A lot of our work is data-driven, and what I mean is that it is really through the lens of large-scale quantitative analysis, which is one part of the story: there is also a lot of great work happening at a smaller scale, with qualitative analysis, and our hope, and what is actually going on, is that these two lines of work inform and support each other; I'm happy to report that in more recent times we have been growing collaborations across these two worlds. All right, so before I dive in, a warning that there may be content in this talk that could be perceived as offensive. We decided not to censor the content in our presentations,
if anything because where to draw the line of what is considered offensive is actually pretty hard, and that is part of what makes this research challenging. Toxicity and offensive content are very subjective and very context-dependent; there are clear examples that nobody can really debate, but others are more nuanced. Overall, we wanted to give you the unfiltered look at what our data looks like and what the communities we study look like. So I apologize in advance, and if anybody has any issue I'm very happy to skip over slides, or if you just want to drop out, by all means, no hard feelings, of course. All right: like I said, the concept of information weaponization is a very broad, generic one, and it happens through a number of problematic issues, ranging from cyberbullying and cyber-aggression, which is a coordinated set of attacks targeted at specific people or groups of people, to radicalization, misogyny, misinformation, propaganda, and so on and so forth. These are some examples of the things that the research community in cyber safety, online harms, online extremism, however you call it, has been focusing on, and obviously these are very important socio-technical problems that have attracted a lot of interest, both from the point of view of solving technical issues and from the point of view of societal importance. As mentioned in the introduction, my experience comes from the cyber-security and privacy-technology side, and a lot of us in this field have had a lot of experience doing research on mitigating unwanted activity on social networks, and on the web in general. Think of problems like spam; that's unwanted activity, and I assume nobody wants to receive spammy emails in their inbox. In general there are a lot of tools available to detect automated unwanted activity, activity perpetrated by bots, by automated programs, and we've actually done a lot of great work in that space. If you use Gmail or Outlook or these kinds of large providers, they are pretty good at detecting spam; of course they're not perfect, but they can really rely on the fact that you have large-scale, synchronized, unwanted activity that exhibits clear patterns. This does not apply only to spam but in general to bot activity on social media: it can be tools that run like-farms on Facebook, or networks of retweets that inflate the number of followers, inflate engagement, and so on. So there are machine-learning classifiers we can build that are able to model these recognizable patterns and that have reasonably accurate outcomes.
Examples like the ones on this slide, tools like Botometer or BotSlayer, are pretty good, with some caveats; but that's maybe a conversation for another talk. But what can we really do here: can we apply these tools, with the same intuition, to some of the issues I mentioned under information weaponization, like cyberbullying, cyber-aggression, toxicity, and so on? Well, the first observation is that a lot of those activities are actually perpetrated by humans, and human activity overall is less coordinated than a script launched at large scale, so characteristic traits and patterns are much less recognizable: you have a much looser synchronization of activities, toxicity can come and go at different times, and real users use common talking points, which makes it much harder to differentiate this activity from whatever is considered normal activity. Also, a lot of the tools we've used for detecting automated unwanted activity really take the fight to economic grounds. For spam, or for preventing denial-of-service attacks, what we do is make it too expensive for the attacker to launch the attack, or make it less profitable: in the end the attacker doesn't send spam if they cannot profit from it; they move to some other platform, or to some other kind of criminal activity. But in this case things don't always have to do with money, and even when they do, some of these activities may involve adversaries with deep pockets, for instance state-level adversaries that want to interfere with elections or spread some kind of propaganda that will drive ideology in a certain direction. So that is our first observation about this problem. The other one is that online services do not exist in a vacuum, and it's not enough to consider each social network as distinct from the others: these social networks really impact each other, so content that originates on the Reddit social network might then spread and go viral on another social network like Twitter or Facebook; and you also have different ecosystems, content providers, social networks, news providers, and so on, that have a big interplay with each other. When it comes to this kind of research, looking at a single service at a time is not enough, and unfortunately we are used to focusing on one platform at a time, one problem at a time, if anything because of what it means to do data collection, but also because of the analytical tools that we have. So this was, and still is, one of the biggest challenges from a technical point of view in this space.
point of view in\nthis space\nand the next observation is that uh you\nknow in these ecosystems you\nhave both what you may refer to as sort\nof mainstream\nuh communities or mainstream platforms\nthink about social networks\nright you have uh um social networks\nlike facebook that has\nbillions of users a twitter mainstream\nbut then you have\nso-called fringe communities right and\nuh\num for instance like sub communities of\nreddit\ncalled subreddits that are you know sort\nof fringe or extremists\nyou have uh actors like uh communities\nlike 4chan\nhan um and new uh communities that's\nthat\nsort of come into into the picture and\nmore and more often like gab parley a\nvote\npoll and so on and so forth right and\nthis french community is actually you\nknow\nthey're considered niche uh they're very\nsmall uh in footprints in their\nfootprint compared to you know facebook\nhas billions of users\nbut you know fringe doesn't mean\nunimpactful doesn't mean\nunimportant and in fact uh we have and i\ni'll\ntell you some examples in a little bit\nwe have actually found\nuh significant evidence of sort of\nof these communities playing a really\nimportant role\nuh visit the information weaponization\nbe it uh misinformation or\ndisinformation campaigns\nuh coordinated hate campaign um\n[Music]\nproduction of weaponized highly toxic\nuh racist or politically charged memes\nand so on and so forth so\nit's also important to look at these\ncommunities okay\nand so here are some examples right so\nthere are actually coordinated efforts\nthat\nyou know spill on on mainstream uh\nsocial networks like twitter\nthat actually originate from from\ncommunities like uh 4chan\nthis is a classic example of a\nminority being sort of pushed off\ntwitter or\nmainstream network by sort of a\ncoordinated mob\nuh attack uh uh some coordinated attack\nby by a mob that\nwas was originating and coordinating on\n4chan\nthe same happens with conspiracy\ntheories like\npizzagate which now has somewhat evolved\nin\ninto q a and\noverall you know you see that the\nactions\nof these things are not just confined um\non these french communities nor on these\nmainstream platforms as\na result but actually still in the real\nworld right so there is a lot\nof of evidence that sort of this\nradicalization that's\nif if you allow me to to use this word\num\nof of users on french platforms have\nthen\nuh resulted into real world violence uh\nlike mass shooting\nuh incidents um in in the us\nand in australia sorry new zealand and\nelsewhere\nokay um so um\nokay so look there is a you know in\nthese examples i mentioned already\nfor chan a few times so i'm just going\nto uh\ntalk a little bit about 4chan and like\nsort of use it as a as like i said a\ncase study\nof of some of our work um so first china\nis a\nit's an image board forum uh that has is\nreally integral part of internet culture\nlet's say you might have heard of them\nas they produce a very large quantity of\nmemes that ultimately go\nviral like the love cuts\nback in the day they serve as a\ncoordinated platform coordinating\nplatform\nfor the hacker group anonymous\nthey have really sort of mastered the\nart of trolling\nand made microsoft chatbot\ntie racist and say\nsay things like the holocaust didn't\nexist didn't happen and other sort of\nracist things\nand they have produced like a number of\nsort of internet culture\nmemes and and and narratives and um\nnotations let's say and expressions like\npepe the frog\nwhich was actually a sort of an innocent\ncartoon 
Pepe was then appropriated by 4chan and became a symbol of hate, and it is now, for instance, banned on a number of platforms. This points to another very important observation: a lot of these issues do not necessarily unfold through just text; a lot happens through images. Pepe the Frog, for instance, is usually morphed into different characters and different things, and during the 2016 presidential election in the US, the picture of Trump looking like Pepe the Frog went so mainstream that the then-candidate Donald Trump retweeted it himself. Like I said, Pepe is a designated symbol of hate for entities like the Anti-Defamation League in the US and a number of other organizations; I will come back to this later in the talk. So 4chan, or at least some part of 4chan, really got involved in politics in the 2016 elections, to the point that the community took a lot of pride, and a lot of credit, for electing Trump; there are many examples I could cite. Like I said, 4chan is an image-board forum. You create a new thread on a board by making a new post, and a new post has to have an image attached; that tells you there is a lot of imagery being posted on 4chan, and a lot of it is original. I forget our exact measurement of how many unique images are on 4chan, but the overwhelming majority are unique, and most of those are photoshops: there is some basic subject like Pepe the Frog, and then you add text or some other element and make a new image. Other users can reply in the thread by adding posts, which may or may not have images. These are the seventy boards active at the moment; we focus particularly on one called Politically Incorrect, or /pol/, which has extremely lax moderation: almost anything goes, and essentially nothing is removed. One of the features of 4chan, and /pol/ in particular, is that everything is anonymous: there is no concept of a user account, and to post you just need to solve a captcha (that way they avoid automated posting by scripts spamming the board, but you don't have to have an account). There are some degrees of permanence and identifiability supported by something called tripcodes; I'm going to skip that in the interest of time, but if you have looked at the QAnon conspiracy theory, it is relevant there: it is the mechanism Q was using to prove that the same person was leaking these supposedly secret documents. The other important feature is ephemerality: all threads are archived and eventually deleted after a while, so there is a very short attention span; all content is removed after a while, and I think at most, threads last about a week.
There is a system, not so complicated, for determining which thread gets deleted first. There is a limit on how many threads can be active at the same time; if you make a new post, the thread is bumped to the top of the board, and if you make a new thread, the last thread gets deleted; but there is also a limit on how many times a thread can be bumped, so eventually everything is deleted. This affects conversations on 4chan a lot, if anything because there is no memory, so certain posts and certain narratives reappear periodically. One good example is our own paper on 4chan: every few weeks someone posts 'oh my god, someone wrote a paper about 4chan', and they start discussing it, along with a number of conspiracy theories about it, like whether we were funded by Soros or by the UN, and a bunch of other things; this really happens periodically, all the time. As part of our work on 4chan we made a dataset available through a conference called ICWSM, which has a dataset track that really encourages people to share data. We have been collecting data from 4chan since June 2016, and at the end of 2019 we released a snapshot of about three and a half years' worth of data: about 3.4 million threads and 130 million posts. The data is available on Zenodo, so you can download it. We augmented the dataset with toxicity scores obtained from Google's Perspective API; there are different labels, like toxicity, severe toxicity, and so on. These are scores for how toxic a post is, and they should not be taken at face value. The Perspective API is not perfect: it's an automated model that assigns scores to text for how toxic it is, but it has problems. It can be circumvented: if you change the spelling of certain toxic words a little, you may be able to evade the classifier. There have also been studies showing that the API is somewhat biased, for instance scoring African-American English as more toxic than it should. So you should not take these values at face value, but they are good for comparing things: if you want to compare one board of 4chan with another, across distributions, they can give you some interesting insight. We also released the named entities mentioned in each post, things like concepts or names of celebrities; we have collaborated with a number of social scientists who use this, for instance, to retrieve all posts mentioning Donald Trump, so you have this easy way to search.
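For context, a minimal sketch of the kind of Perspective API call that produces those scores; the endpoint and attribute names below follow Google's public documentation as I understand it, so treat the exact request shape as an assumption, and remember the caveat above that scores are best used comparatively:

```python
import json
import urllib.request

# Sketch of scoring one post with Google's Perspective API (commentanalyzer).
# API_KEY is a placeholder; request/response shapes follow the public docs
# as I understand them and should be double-checked against them.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def severe_toxicity(text: str) -> float:
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"SEVERE_TOXICITY": {}},
    }
    req = urllib.request.Request(
        URL, data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # summaryScore is in [0, 1]; per the talk, compare distributions rather
    # than trusting any single absolute value.
    return body["attributeScores"]["SEVERE_TOXICITY"]["summaryScore"]["value"]
```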
So what you can do with this data is, of course, a generic characterization, for instance understanding more or less how toxic and hateful the content is. It's pretty bad: as you can see on this slide, a very high percentage of posts have at least one hateful keyword. This is a very simple dictionary-based approach; we use something called Hatebase, a dictionary of hateful keywords. But I also show this slide to highlight how hard these things are to do in practice, because even single keywords may be hateful or not depending on context. The stupidest example is the word 'frog', which can mean the animal, or Pepe the Frog, which in some contexts is used hatefully, or the derogatory term for French people. So if you want to look at data and determine hatefulness at scale, it's not easy: like I said, 'frog' can be hateful or not depending on the context. If you look at the Perspective API again, do not take it at face value, but this cumulative distribution function shows that an overwhelming majority of posts have very high toxicity scores. Severe toxicity is a bit more robust, so I usually encourage people to use severe toxicity rather than toxicity, and there are other labels, like inflammatory, profanity, and so on. Okay. So overall, when you study these fringe communities, there are a number of challenges you have to face. I already mentioned that actions are not limited to one platform and that these things don't happen in isolation: a lot of what is talked about on 4chan is influenced by what goes on on other platforms. The classic example is the Gamergate controversy, which was primarily happening elsewhere but was of course discussed, and certain actions coordinated, on 4chan; conversely, certain things that originate on 4chan have a big impact on other platforms. But even if you consider 4chan in a vacuum, it's clearly not your typical social network: the anonymity and the ephemerality really change the way things happen. Ephemerality, like I said, creates this lack of memory in the discussions, which makes some topics recur, and it also makes it harder to collect data: we have to run a crawler that periodically checks when a thread is archived and then retrieves all of its posts from the archive. That's not easy, and you need a lot of redundancy to support it: a failure in the crawler infrastructure means you miss data, and you cannot go back retroactively like you could, for instance, on Twitter. Anonymity means there is no way to do any kind of user-based analysis; or maybe there is a way, but it certainly would not be ethical. There are stylometric features you could use to find posts by the same user, but that would not be ethical, so we have never done it. Knowing what they're talking about is not easy either: there is a whole lingo you have to learn to even figure out what on earth they are trying to say. And it's a bit risky; very risky, actually.
You might get attacked, or doxxed; doxing is the act of exposing personal information about a user with really hateful intent. This has happened to us multiple times, as has something called concern trolling: someone sent an email to my department chair pretending that our research was detrimental to some minorities, phrased in a way that looked plausible, so it wasn't an easy thing to deal with. It also means you have to quickly learn how to deal with these things and how to protect yourself, your collaborators, your students, and so on; and sadly, it's not something we are institutionally geared to support. Most of the support I got actually came from colleagues in the crime science department who had been doing research on terrorism and radicalization; they shared some of their resources and support. But it's not really something we learn, just like we don't really learn ethics: nobody teaches it to you in a computer science curriculum. Things are changing, but when I went to college in the early 2000s, nobody ever mentioned ethics in any of my lectures, and the same in grad school; it's really something we have to learn by ourselves. Luckily there is a community that tries to help each other, but there is a lot more to be done there. Okay, let me just take a breath, drink a little, and ask if anyone has any questions so far. — If someone has questions, you can just raise your hand. No questions? All right, no problem. I do have a question for you, but it's about moderation; maybe you'll cover it in the next few slides, so if you prefer, I can wait until the end of your presentation. — Ask me the question, because I might not cover it; if I am going to, I will just tell you. — Okay, great. So there is this very interesting approach, as you said, to identify, for example, hate speech; but that still leaves the question of what to do once we've identified what is going wrong. Given that identification is already a challenge, how do you see moderation in these fringe communities? Is it about having a person there, or some automated algorithms? — Yeah, it's a very complex problem, and I'm happy to answer now; I think this is a good time. First of all, there are two kinds of moderation: community-level moderation, and fine-grained, post- or user-level moderation. For instance, Reddit organizes communities into subreddits, and periodically you see some subreddits being quarantined or banned: the community is closed, nobody can post on that subreddit, users leave, and what happens is that these communities don't die: they migrate to other platforms, or they create new platforms altogether.
This is one issue, and I think it's a very important societal problem on which, as a computer scientist, I'm probably the last person who should have an opinion; it's really for people who have the training, who have dedicated their lives to these societal problems, to study. We actually have a paper, which I'm not going to present today, that tries to analyze what happens when communities are banned, and, as you might expect, the new communities that form are smaller in size: when you deplatform, for instance, the The_Donald community or incel communities on Reddit, they migrate to forums, or to platforms like Parler or Gab, which are much smaller, so their footprint, their outreach, their audience is smaller; but they become much, much more toxic. You see that users, even not very active users, get much more toxic, so there is an increased risk of radicalization. So you really have two options here: a smaller but more toxic community, or a bigger community that is still toxic but where there is maybe some cross-pollination with less toxic communities, and where overall there is some kind of moderation; on Reddit you cannot just go and call someone the n-word, but on these other platforms you can. I don't know which is better; like I said, I'm a computer scientist, I should shut up and not have an opinion on this, but it's an important problem. When it comes down to post-level or user-level moderation, it's very challenging. First of all, every platform can decide on community guidelines and decide what is admitted on the platform: this is free speech, and you can decide what is allowed and what is not. You might think a platform has a certain set of values, but sometimes those values are not applied in a coherent way, or the values are not really driven by an ethical or societal standpoint but by economic interests; the classic example, I would say, is Facebook not banning political ads, for market reasons. But anyway, even once you have stated some guidelines about what is allowed and what is not, it's still hard to stick to them, because moderation at scale is just not possible: you cannot do moderation of everything manually, and even if you did, the moderators themselves will have different views. There are studies where they ask ten moderators to label the same piece of text, and very few posts get labels that agree ten out of ten. So it's very complicated. And the other issue, which I'm going to talk about later, is that this approach is ultimately reactive.
What happens is that someone reports that a post or a user is violating community guidelines, there is some mix of automated tools, machine learning, moderators, and labeling, and then the posts are removed or the people are suspended or banned; but the damage, so to speak, has already been done: as a victim you have already been exposed to the toxic content, and the fact that the post is eventually deleted doesn't necessarily make everything better. So it's a very complex, very problematic issue. — Thanks for answering. If I can just quickly react: it's a very multidisciplinary challenge, right? As you said, as a computer scientist there's only so much you can do, but if you work together with ethicists, with social scientists, with designers and many other audiences, then... — Yeah, that's very important. — This is Ben Wagner. Emiliano, thanks for joining us, it's great to have you here; sorry I can't turn my video on right now. I'm one of those crazy social scientists who work on the content-moderation stuff, so I thought this would fit quite nicely into the context. One of the things I was wondering about: you mentioned the robustness of Google's toxicity model, and specifically that you don't really find the general toxicity label helpful, and that only severe toxicity produces relatively robust results. I was wondering if you could say a bit more about that, both in terms of false positives and false negatives and how that plays out, at least in your experience of how it codes things. — Yeah. So maybe I misused the terminology a little: 'robust' here means a little less context-dependent. With the toxicity label you have a lot more text that is mislabeled as toxic based on certain words, whereas severe toxicity is a bit less subject to mislabeling. That's because the Perspective model is trained on text that has been labeled by humans, and for severe toxicity there was much more agreement across the different annotators. So that's why it's a little more robust; though, like I said, 'robust' is maybe not the right word to use here. I hope you understood what I mean. — Yeah, totally, and I think that's exactly the point I was trying to get at, because it's quite interesting when you look at it. We've had law students go through coding datasets like this from a legal perspective, and once they did that, they came to relatively high levels of agreement, higher than 85 or 90 percent, after some training rounds. And what I found interesting about that is that the problem is the terms themselves: 'toxicity', or the 'harms' we use, are not sufficiently well defined to be clearly codeable.
But basically that means that even the human-coded data you're measuring severe toxicity against also has its problems, as you say; so it feels like a lot of the time you're circling around looking for a ground truth, which can be really frustrating. — No, it's very hard, yeah. There is a simple example people give about context-dependence in toxicity: in sports communities in the US, the bar for what is considered toxic is allegedly higher than in, say, political discussions or other kinds of discussions; it's socially acceptable to insult each other in sports conversations, 'oh, you're a fan of this team', in ways that it isn't in other contexts. If you don't take that context into account, it's hard to have this ground truth. You had another question? — No, that was literally my second question, and I think that was really helpful; I don't want to hog all the time, so maybe we can chat about this separately. But I found that super helpful for looking at this, thanks. — Thanks. All right, so: I mentioned a couple of times this concept of coordination of actions, and in particular of hateful campaigns. There is also coordination of real-world events: for instance the January 6th storming of the Capitol, which I think is the picture on this slide, was coordinated mostly on platforms like Parler, among others. But what I'm talking about here is what we call a raid: a coordinated hate campaign that targets another platform, for instance YouTube. We observed this anecdotally at first: someone posts a link to a YouTube video on 4chan, maybe with a prompt like 'you know what to do', and then the YouTube video starts getting a lot of hateful comments. So we did an exploratory study to see whether we could actually model this behavior. You would imagine that if a raid is taking place, there is a peak in YouTube comments while the 4chan thread is alive: the number of comments the video gets spikes because of the coordination happening on 4chan, and you might observe a synchronization between the two. This is actually an example of something that happened to me. I was giving a talk about our 4chan paper at a workshop a few years ago; there were some issues and some people couldn't attend, so they asked us to record the talk, and I just put it on YouTube. Then I was giving an interview to a journalist about 4chan and said 'by the way, here's our work, here's the paper, here's the YouTube video', and the journalist linked the YouTube video in their piece. What happened is that someone found the video and posted the link on 4chan, and my video was raided: in a matter of minutes I had thousands of views, which will never happen to me again, hundreds of hateful comments, and also dislikes; people like using the dislike button on YouTube.
Anyway, what you can see is that if you consider time zero to be the time when the YouTube link is posted on 4chan, you can really try to model the activity: 14% of the videos posted on 4chan see a peak of comments during the time the /pol/ thread is active, and the two signals are actually synchronized; there is a way, using signal processing, to measure the synchronization of time series. I'll skip the details in the interest of time, but visually you can plot the YouTube comments that are hateful and the ones that are not, and you see that most of the hateful ones happen around the time the thread is alive, right after someone posts the link on 4chan. We defined a metric called 'hate comments per second', which has itself become a meme on 4chan; people mention it repeatedly. So we found some quantitative evidence that when people post YouTube links on 4chan, the video may get raided. Then we flipped things around and thought: can we go one step further and not just observe that a raid is taking place and wait for somebody to report it, but actually try to predict that something like this might happen? That would allow a more proactive approach that does not rely on reports and on moderators going through content, an approach that is inherently slow. So we asked: if we take a video, can we determine whether or not it is at risk of getting raided by 4chan? This is a completely proactive approach; it doesn't even wait for the link to be posted on 4chan. What we do is model which videos are likely to attract hate from 4chan. We constructed a dataset where we had ground truth, some videos that were raided and some that were not, and tried to understand the features of the videos that do get raided; in the end we extract a bunch of features and use an ensemble classifier to predict, on unseen data, whether a video will get raided or not. It turns out there are some distinctive features: the topic covered by the video; the elements in the video (we used an image-recognition library that tells you, say, 'this video has a man in a suit' or 'a man playing rugby'); and the metadata, like the description you see on the YouTube page itself, which is very important. Again I'm skipping the whole thing here, but it turns out there is some hope that we can actually flag, at upload time, videos that may be targeted by 4chan raiders.
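The synchronization measure mentioned above is a signal-processing one; as a rough stand-in (not necessarily the exact estimator from the paper), here is a small numpy sketch of lagged cross-correlation between two entirely synthetic per-minute count series:

```python
import numpy as np

# Synthetic per-minute counts: 4chan thread posts and YouTube hate comments.
rng = np.random.default_rng(0)
thread_posts = rng.poisson(1.0, 600).astype(float)
yt_hate = np.roll(thread_posts, 3) + rng.poisson(0.3, 600)  # 3-minute echo

def xcorr_lag(a: np.ndarray, b: np.ndarray, max_lag: int = 30):
    """Return (best_lag, correlation) maximizing the Pearson correlation of
    a[t] against b[t + lag] over lags in [-max_lag, max_lag]."""
    best = (0, -1.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[: len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[: len(b) + lag]
        r = np.corrcoef(x, y)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

lag, r = xcorr_lag(thread_posts, yt_hate)
print(f"peak correlation {r:.2f} at lag {lag} minutes")  # ~3 for this toy data
```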
So you could take proactive measures: for instance, keep an eye, algorithmically speaking, on these videos, or warn the users that a raid might happen. So there is a tool here that we can use, and it also tells us that we have some hope of modelling the content that might attract hate. Of course, much more work is needed here.\nThe other thing I wanted to talk about is how communities influence each other, and I am going to do this through the use case of disinformation campaigns and disinformation news. An example that motivated our work was the Pizzagate conspiracy theory, which originated during the 2016 US presidential election campaign. WikiLeaks published emails leaked from the DNC, the Democratic National Committee, and in one of those emails someone mentioned a pizzeria. Someone on 4chan put two things together and claimed that the email pointed to evidence that the Democrats and Hillary Clinton were running a pedophile ring from the basement of a pizzeria in Washington, DC. So 4chan acted as a theory generator, and the incubators and the gateway to the mainstream world were platforms like Reddit and websites like Infowars and Breitbart. The theory eventually went truly mainstream, onto Twitter, Facebook, and so on, to the point that one study showed it actually shifted some votes in that election.\nSo we tried to model and quantify how much influence a platform like 4chan has on this kind of process. The idea is to look at the appearance of alternative and mainstream news URLs. For mainstream URLs, think of the BBC or CNN; for alternative news, think of Breitbart, Infowars, the Daily Mail, and the like. We took a dataset of such sources that journalists had published, and we built, for each URL, its sequence of appearances according to timestamps. Imagine a Breitbart URL that first appears on Reddit, then on 4chan, then on Twitter: we can build that sequence, turn the sequences into a graph, and use a statistical tool called Hawkes processes, which lets us quantify the influence that an event occurring on one platform has on the other platforms. Essentially, you look for the impulse responses that an event might cause. And because we are able to model the background rate, that is, the expected number of occurrences on the other platform in the absence of the event, we can start reasoning about causal relationships: we can say, with some confidence, how many events, in this case shares of a URL, were caused by another event.
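As a rough illustration of this influence idea, here is a hedged sketch that compares, for each pair of platforms, how often a URL's appearance on one platform is followed shortly by an appearance on another, against a uniform background rate. This is a much-simplified stand-in, not the Hawkes-process estimator the talk describes; a faithful analysis would fit a multivariate Hawkes process, and the toy data, window, and observation span below are all assumptions.

```python
# Simplified proxy for the Hawkes-process analysis: for each URL we have
# timestamped appearances per platform, and we count how often an event on
# a source platform is followed within a window by an event on a
# destination platform, versus a crude uniform background expectation.
from collections import defaultdict

# (url, platform, timestamp-in-hours) tuples -- hypothetical toy data.
events = [
    ("breitbart.com/x", "4chan", 0.0),
    ("breitbart.com/x", "twitter", 1.5),
    ("breitbart.com/x", "reddit", 2.0),
    ("bbc.com/y", "twitter", 5.0),
    ("bbc.com/y", "reddit", 40.0),
]

WINDOW = 24.0       # hours within which we count a follow-on appearance
TOTAL_HOURS = 48.0  # total observation span, for the background rate

by_url = defaultdict(list)
for url, platform, t in events:
    by_url[url].append((platform, t))

follow = defaultdict(int)  # (src, dst) -> follow-on appearances observed
count = defaultdict(int)   # platform -> total appearances

for url, appearances in by_url.items():
    appearances.sort(key=lambda e: e[1])
    for i, (src, t_src) in enumerate(appearances):
        count[src] += 1
        for dst, t_dst in appearances[i + 1:]:
            if dst != src and t_dst - t_src <= WINDOW:
                follow[(src, dst)] += 1

for (src, dst), n in sorted(follow.items()):
    # Expected follow-ons if dst events were spread uniformly in time:
    # per src event, count[dst] * WINDOW / TOTAL_HOURS dst events land
    # in the window by chance.
    background = count[src] * count[dst] * WINDOW / TOTAL_HOURS
    print(f"{src} -> {dst}: {n} observed vs ~{background:.2f} at background rate")
```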
I am going to skip the details, but the main takeaway is that seemingly tiny web communities, like 4chan or certain subreddits such as The_Donald, can really punch above their weight class when it comes to influencing the greater web. They take these news URLs and, by posting them many times, expose people who are also active on mainstream social networks, say Twitter, or on more mainstream subreddits, and eventually those URLs spread there because of these fringe communities.\nWe did this both for URLs and for image memes. We all play with memes, we think memes are fun, and we send them to each other, but in many cases memes are not fun at all. Pepe the Frog, as I said, is a meme, but a hateful one; examples like it are used to convey very hateful, very problematic messages. Memes are also increasingly used in political campaigns; the 2016 presidential campaign was really where memes came to the forefront of political campaigning, and they were used and even retweeted by Trump and his sons. They have also often been discussed alongside real-world acts of violence: there is, for instance, the picture of a man who went on a shooting spree in Florida and whose van was covered in these 4chan memes.\nSo what we do here is try to understand which memes are shared on platforms like 4chan, Gab, Reddit, and Twitter, but also how much meme content originating in fringe communities like 4chan ultimately reaches mainstream networks; again, we estimate the influence of each platform. We built a really nice processing pipeline for this, which is open source and has since been used by other researchers. You take a set of images, find visually similar ones using an algorithm called perceptual hashing, and cluster them together, so that you find variants of the same meme, or memes that look essentially alike. We also have an automated way to label these clusters: we use Know Your Meme, a website that is a sort of encyclopedia of memes, so we can label the clusters and say what each meme is about.
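To illustrate the perceptual-hashing step, here is a minimal sketch using the imagehash library's pHash and a greedy Hamming-distance grouping. The directory layout, the 8-bit threshold, and the greedy clustering are my assumptions for illustration; the open-source pipeline mentioned in the talk is considerably more sophisticated.

```python
# Sketch of the perceptual-hashing step: visually similar meme images get
# similar pHash values, so variants can be grouped by Hamming distance.
from pathlib import Path

import imagehash
from PIL import Image

def cluster_by_phash(image_dir: str, max_distance: int = 8):
    """Greedy clustering: assign each image to the first cluster whose
    representative hash is within `max_distance` bits; otherwise start
    a new cluster with this image's hash as representative."""
    clusters = []  # list of (representative_hash, [paths])
    for path in sorted(Path(image_dir).glob("*.jpg")):
        h = imagehash.phash(Image.open(path))
        for rep, members in clusters:
            if h - rep <= max_distance:  # Hamming distance between hashes
                members.append(path)
                break
        else:
            clusters.append((h, [path]))
    return clusters

if __name__ == "__main__":
    # "memes/" is a hypothetical directory of collected meme images.
    for rep, members in cluster_by_phash("memes/"):
        print(rep, [p.name for p in members])
```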
Then we can run the influence estimation. Again, I am going to skip the details, but you can look at the top memes per social network, and you see very disturbing things on platforms like 4chan, much less so on Twitter. We can identify clusters of memes, and the paper links to a website with a browsable version of this picture, so you can click on the clusters and see example memes. You can also see how the production of memes correlates with certain events: especially on platforms like Gab, but also on Twitter, you see spikes in memes being shared around things like presidential debates or elections.\nAnd again we can quantify the influence. Long story short: if we look at raw influence, /pol/ is very, very influential in spreading racist memes onto other platforms, including mainstream ones like Reddit and Twitter. But when you look at the normalized influence, normalized by the number of meme posts, specific subcommunities of Reddit, like The_Donald, are even more efficient: they produce a smaller number of memes, which nevertheless very often end up on mainstream networks.\nAll right, I am going to skip the rest of the talk so that we still have some time for questions. I just want to acknowledge the EU project that funded us, and my collaborators at the iDRAMA Lab, which is a virtual, distributed lab of people across the world. Thank you very much.\nThank you very much, really interesting and fascinating topic. I think we have time for one question. Seda, please.\nYeah, Emiliano, maybe you got to this while I had to step away for a moment, but we have heard a lot about algorithms being optimized to increase engagement, and you can imagine that part of what you are seeing is media moments that companies benefit from by increasing engagement. I do not know how much Twitter and Facebook, say, really turn up the virality of messages during media moments: a presidential election, debates being won, big sports events. We saw this with GameStop too, where members of these groups were very savvy about how these platforms are full of algorithms that amplify their effect. So I wonder: how much of the problem is this drive of these companies to grow by optimizing engagement, which is apparently easy enough to analyze from the outside that these groups can obtain their efficiency gains? And how much are you applying a band-aid while the platforms continue to grow through these engagement-optimizing algorithms that enable these parties, with you just picking off the actors who abuse that infrastructure? So I am wondering: if you had a magic wand, Emiliano, how would you change the underlying infrastructure so that you would not have to do what you do now? That is the question.\nYeah, I have not done work in this space myself, but other scholars have, and they have provided evidence of how platforms like Facebook and Twitter benefit from polarization. I think it is in their economic interest to maximize engagement, and one way to do so is to maximize polarization; that is the evidence we have been seeing. How to disrupt that? Personally, I think the only way is through policy; I do not think there is much we can do from our side. We can expose things, we can do measurements, although it is unfortunately very hard to do any kind of analysis of Facebook, because we do not get any data. They promised they would make some data available, through the Social Science One program or something like that; we were very excited, there was going to be differential privacy, and unfortunately that did not really work out: for a long time nobody got the data, and the data was of little use. Still, some people have managed to do some measurements in that respect. It is a bit easier on Twitter, because through academic projects we have access to at least the one percent stream,
which is perfectly sufficient for monitoring large-scale phenomena; the one percent sample is representative enough for these kinds of behaviors. So we can expose these dynamics, but even if I had a magic wand I do not think there is anything we could really do to disrupt them from the outside; the only way to disrupt from the inside is through policy, I think.\nWe will have to take the rest offline, I am afraid. Okay, all right. Thank you very much, Emiliano, for raising all these very important issues and for telling us about your work; it really was fascinating. Thanks, everyone, for joining us.\nAnd I am sorry if I did not leave enough time; I did not actually watch the clock. I am happy to stay a bit longer, and I am happy to answer questions offline, via email or Signal or whatever, so please do reach out; I am happy to talk a little more.\nGreat. I will stop the recording now, but let's keep the room open, so whoever wants to talk a little more with Emiliano, please stick around. Thank you. Thanks.", "date_published": "2021-04-07T14:49:46Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "c12dcec86636e33a6e9f3b57572338f2", "title": "AiTech Agora: Lotje Siffels & Iris Muis - Zeitgeist and data: the danger of innovation", "url": "https://www.youtube.com/watch?v=KXWlhbEDs6I", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Lotje and Iris, I give the floor to you.\nAll right, thank you so much. I will start by sharing our slides. First of all, welcome; we are very pleased to have been invited to this Agora meeting. It is really nice to e-meet all of you and to discuss today's topics with you in the second part of this meeting. I took the liberty of inviting some colleagues from Utrecht University, so you will see some unfamiliar faces today. But first of all, let's start by introducing ourselves. Lotje, please go ahead.\nYes, thanks again for the invitation; it is great to be here. I am Lotje Siffels, and I am now a PhD candidate at Radboud University, working on a project about digital goods, which is mostly about the digitization of healthcare and the influence of Big Tech in healthcare. Before I started that, I worked with Iris at Utrecht University on DEDA, a tool she will explain more about today. For now, I think that is enough.\nAnd my name is Iris Muis. I work at Utrecht University, within a team called Utrecht Data School, where we research the impact of datafication on society; we try to bring a humanities perspective to tech. Within that perspective, data ethics is of course very important, so it is one strand of research we have focused on for the past five or six years. When Lotje and I worked together a couple of years ago, data ethics was our joint focus, and today we would like to share with you our experiences of doing data ethics with external organizations. We did a lot of work guiding data-ethics sessions, mostly within government organizations, and after doing this for a couple of years, in many organizations large and small, we started seeing
recurring arguments and patterns within the ethical debates, and we would like to share some of these recurring themes with you. In the second part of this workshop we would really like to hear your experiences and opinions about them and turn this into a discussion, which could be fruitful for further research.\nSo first of all, what exactly did we do? We worked with an instrument called the Data Ethics Decision Aid, or DEDA for short, which was created in 2016 by Utrecht Data School to guide organizations, or project teams specifically, in developing ethical algorithms and ethical data projects in general, really coming from a public-values perspective. This deliberational framework is meant to guide project teams in operationalizing public values within their specific data projects. On the slide you can see what that looks like: there is a big poster, and the project team stands around it, answering questions on a range of topics, from technical ones such as anonymization, the data sources used, access to data, and security, to topics such as privacy and bias. The project team deliberates about these questions and works toward decisions on how best to embed these public values in their project or algorithm design.\nThese workshops typically take around three hours, which is quite long, but we really feel that is necessary for good ethical reflection; it takes time to get to the core of things. We advise project teams to invite people with a range of backgrounds to these sessions: people with a background in tech, in policy development, in law, even in communication, so that different viewpoints are brought into the ethical deliberation.\nThis is the English version of the poster. I realize the letters are very small, so I do not expect you to be able to read it, but if you want a closer look, I will put the link to all of the DEDA downloads in the chat later on. Over the past two years we also started working with an online version, so instead of visiting organizations we would hold video meetings with people from municipalities and from other governmental, and sometimes commercial, organizations, in which they could fill out a PDF version of DEDA. You can see that at step one we start out by defining the values of the organization, so here too the public-values perspective is reflected in the design and process of DEDA.\nSo, after five years of ethical deliberation, what did we gain from this, and what did we see reflected in almost every workshop? First, to give you an idea of the scope: we ran these sessions in more than 80 organizations, around 95 percent of them Dutch and a couple German or Belgian, most of them local government institutions, such as municipalities.
The cases that were brought in were very different. They ranged from complex projects, such as those involving image recognition or risk prediction, to simpler ones, such as combining a couple of different datasets. The project teams were interdisciplinary, with people from different backgrounds, and every organization obviously has different organizational values, which also leads to a different outcome in each workshop. For municipalities, the organizational values depend very much on the political color of the city council: a more liberal municipality has different organizational values than one with a left-leaning council, for instance.\nMost importantly, we got a lot of data out of these ethical deliberations, because for us DEDA mainly acts as a research tool. As I said before, we are very interested in the impact of datafication on society, and also on the way a city is governed, and DEDA gave us an entry point into an organization, a seat at the table, not as researchers with a survey, but as experts or moderators. That gave us very valuable insights, sometimes very intimate details, into the impact datafication really has within an organization. So here you see the duality of DEDA: for the external organization it is an impact assessment that can guide them in ethically implementing data projects, and for us as researchers it is a research tool that gives us insight into the way datafication shapes society. By the way, if you have questions, feel free to ask them at the end of this session; I would also really like to discuss the use of DEDA as a research tool, how you collect your data, and your experiences with these types of instruments.\nSo now we get to the juicy part: what did we observe? For that I would like to give the word to Lotje.\nThanks, Iris. Yes, I am allowed to do the juicy part, which is really nice. I first want to briefly discuss some general observations we made, and then go into the subject we really want to discuss today, which we have called the zeitgeist narrative.\nJust to give you an idea of the kinds of things we would see, which were already very interesting: first of all, there was a huge difference between organizations in the expertise they had, by which I mean technical expertise but also expertise in the practice of the field. These were local government institutions doing data projects about a given issue, for example how to deal with citizens in debt, or how to prevent citizens from falling into debt, and it differed greatly what kind of expertise was at the table during the ethical deliberations. When somebody with expertise from the practice of the field was present, in this case somebody who actually worked with and spoke with people in debt, who knew the practice of it, that made a huge difference to the deliberation. It is interesting in itself that this varied so much, but we could also see in the discussions how it changed them: values such as equality, dignity, the absence of bias, and privacy are very abstract
values when they are just items on a list, but when someone at the table knows how these values work in practice, you can have a real ethical discussion; that presence was really necessary. The same goes for the second point, the more technical expertise: the data literacy of the people involved. This, too, differed greatly across the government organizations we worked with, and we saw what a difference it makes to have people at the table who know the nitty-gritty of the technical aspects, because, as most of you here will realize, these ethical values are ingrained in the technical details. A simple example: if you do not even know what anonymization or pseudonymization is, how are you supposed to talk about these abstract values? Ethical discussions were much more effective when people with technical expertise were at the table. But often this kind of expertise is not present in project teams, because it has been outsourced: not all local government institutions have the expertise in-house, so they hire external parties for that part of the work. We could really see the difference this makes, because it means most of these project teams could not truly discuss the ethical aspects of the project they were developing, and the external parties were usually not there for the discussion. So a whole part of the ethical decision-making was, in effect, outsourced as well, which is something that worried us, although it was very interesting to see how it plays out.\nThis relates to the third point: when either kind of expertise is lacking, the practical or the technical, you run the risk of responsibility gaps. If people do not really know how the system works, then when you ask who is responsible for a given aspect, or who is responsible when something goes wrong, and what they will do when something goes wrong, they do not know. This is especially prominent in data projects, because people do not really understand where the decision-making even lies once all these technical components, or an external party, are involved. We saw this quite regularly, and it depended on the expertise at the table.\nOn a more positive note, the fourth observation is that civil servants are actually really good at having ethical discussions; they are well equipped to talk about the common good. There is something in the structure of our local governance that works well here: civil servants are good at articulating values and at thinking about the common good for their citizens. We know this partly because we could compare with the workshops we ran in commercial organizations, and the difference was striking: participants from commercial organizations were mostly not used to thinking about the public good or public values at all, so we suddenly had to start the ethical discussion at a completely different level.\nAnd then the fifth point, which comes closer to what I will discuss
in more detail in a moment. One of the things we really noticed is that when ethics becomes a box to be ticked, it loses its value. We saw a big difference between workshops where the participants, usually out of their own interest in the ethical aspects of a project, wanted to go through an ethical deliberative process, and workshops where the municipality had obliged them to do an ethical assessment. If the mindset of the participants is simply to tick the box of having done an ethical assessment, you cannot really have an ethical discussion; they just want to be done with it.\nAnd then, finally, the thing I will now go into more deeply: the zeitgeist apology. We call the zeitgeist narrative an apology when it is used within the discussion as a justification for doing a project without really engaging with the ethical considerations. What we mean is this: any time participants said things like, we just have to get through this, you simply need to do this, everybody does it. One participant in one of the workshops actually called data projects toys for the boys, which I thought was very telling; I will come back to that later. They also sometimes spoke of a system jump, by which they meant that the organization is simply required to make the switch to this new system, this new future: it will happen anyway, and it needs to happen anyway, so why think about it too much? The ethical considerations are then not taken seriously.\nSo the zeitgeist apology covers any case where a participant did not seriously enter into the ethical discussion because they considered the data project inevitable: this is progress, this is the future, and you cannot deny the future that is coming anyway. More than that, on this view it is positively bad to try to prevent this future from happening, because technological advancement, in the form of these data projects, is seen as a high-speed train: it is going really fast, and you have to get on, because otherwise you miss it and are left behind, which is about the worst thing that could happen.\nWhy did we find this so interesting? Because this zeitgeist narrative had a huge influence, maybe even bigger than that of the observations I mentioned before, on the quality of the ethical discussions we had with participants, and that is due to a few characteristics of the narrative. First, as I said, progress is seen as inevitable, and it makes little sense to carefully consider something that is inevitable anyway. Second, innovation is treated as a value in itself, placed on a higher level than all the other values. As I said, these civil servants are well equipped to think about public values and to recognize that there is a plurality of values that should not simply be ranked; but when it came to innovation, progress, and this fear of missing the train, all of that was forgotten, and the argument of innovation could be used to trump any other value relevant to the project. Innovation becomes a way
to temporarily waive all other ethical considerations. The third characteristic is that the narrative invokes a sense of haste, which also gets in the way of a careful process of reflecting on ethical values: such a process takes a long time, and many things will surface that slow the project down, and this does not fit the urgency the zeitgeist narrative creates. You have to get on the train now, otherwise we are left behind.\nThe fourth characteristic, which for us was the most important one, is that it invokes a sense of powerlessness. Participants would sometimes genuinely be downcast during these workshops: it is going to happen anyway, so what are we doing here? They felt powerless, even when they were actually interested in the public values at stake, worried about the project, and full of excellent insights into its possible problems, because they felt unable to stop the advancement of these data projects.\nThis leads to a related observation. Sometimes the civil servants themselves genuinely thought within this zeitgeist narrative and were not very critical of it. But sometimes they were very critical of it: it was not that they themselves saw progress as a train they had to catch, but they knew that the politicians who decided which projects would be developed were thinking along those lines. So they felt: we can sit here reflecting on all the ethical aspects, and we can even tell them this is problematic, but they will not listen; they will want to do it anyway, they will want to get on this train.\nSome participants said this almost in so many words, usually at the very end of a workshop. After a two- or three-hour discussion of all the relevant ethical aspects, a really rich discussion, one of the participants would say: but it is going to happen anyway. And if you asked why, the answer was: to score politically; the political image is the municipality's number one priority, so they want to score with data projects. Another participant put it like this: the ethical dilemma is the output of the project, meaning all the ethical considerations we had just discussed for three hours, versus simply the eagerness to start the project, with the eagerness trumping all the careful considerations.\nFor us it was really this sense of powerlessness that stood out, and we saw how pernicious it was, because it precluded all the ethical considerations being discussed. And, as the second sentence on the slide says, it is not just the ethical deliberation: it also does something pernicious to the relationship between civil servants and their citizens. Interestingly, with data projects more than with other projects, civil servants tend to assume that citizens and civil society will get angry
anyway, simply because these are data projects, and that we nevertheless have to push through, because otherwise we miss the train. So the whole idea that public opinion matters, that there is a richness to the democratic process, something civil servants are usually very aware of, was sometimes simply gone in the case of data projects.\nThat brings me to the final thing I want to say about this, because we want to leave plenty of room for discussion. We have not yet worked out which conceptual frameworks are best for thinking about this. We called it a zeitgeist narrative, or apology, because that resonated with what we saw, but of course related phenomena have been noticed before by other scholars. Linked to this preclusion of the democratic aspects, to the consequences it has for the relationship between civil servants and citizens, it is a kind of technocratic way of thinking: it moves you away from democratic values and toward the idea that the system knows best, that we know better than our citizens what is good for them because this is progress, so it does not matter whether they get angry. There is no longer any sense that the democratic process has value in itself.\nI also want to briefly point to the frameworks of Boltanski and Thévenot, and of Boltanski and Chiapello, which I have been working with, although I do not have time to explain them properly here. They really show how civil servants, who usually discuss things among themselves in a kind of civic logic, one concerned with democratic values, democratic processes, rights, and equality, shift to a different logic when data projects are at stake: an industrial or project logic, which is mostly about efficiency, about expertise in a narrowly technical sense, and about innovation and disruption, disrupting traditional, old-fashioned ways of doing things to create something new. And again, this comes at the cost of the civic, democratic logic.\nI would also like to connect this to Vinsel and Russell's book The Innovation Delusion, which argues that in our time, and in certain societies, innovation as a value is simply rated more highly than other values, in particular the values that sit opposite innovation: maintenance and care, the work that runs in the background to keep things going, that does not disrupt but actually enables society to function. Think of maintaining bridges and roads, but also of care work and healthcare work. These are things our societies value far less, and that we sometimes render invisible: cleaning jobs are done at night or early in the morning so that nobody sees them, whereas innovation is always made as visible as possible.\nThere is also a gender dimension to this, which I think is interesting, and I am curious to hear what you
think about it. There is definitely an idea that innovation is about exploring, discovering, pushing at and breaking frontiers, and this is valued; these are also, at least stereotypically in our society, coded as male values, whereas care, cleaning, and maintenance are stereotypically seen as female values, and therefore also as less valuable.\nAnd finally, but definitely not least, this really resonates with the idea of techno-solutionism, on which by now more than one scholar is working. What we really saw is that the civil servants are often not led by a problem when they set up these data projects. Sometimes they are, but often the problem is not the starting point: they want to do the data project, and they then look for a problem, or even make one up, in order to be able to do it. It is solution-led: first they have the solution, and then they need a problem to attach it to, which of course leads to all kinds of problematic outcomes.\nSo these are admittedly somewhat loose thoughts, and I hope that in the discussion you can help us along in thinking about them. Now, I believe, we move to the future outlook.\nI will keep this brief. We have of course seen things change over the years of doing these workshops, and Iris will have more to say about it, because she has been doing this for longer, while I have been out of it for two years, but I think there are some hopeful messages. We see, both in the public debate and within the workshops, more and more attention to the democratic character of ethical deliberation: more and more criticism of simply having a list of values without doing anything deliberative or democratic with it. I think that is a good thing; we need that shift. I also think we are becoming more critical of the idea of innovation as the only important value, and more critical of techno-solutionist thinking. You may notice that this is also simply what I hope the direction will be, but I think there is some reason for optimism.\nThe third point, which is not so optimistic, is that government institutions still depend heavily on external parties. Which parties depends on the kind of project, but they have a great deal of power when it comes to data projects, and thus over public policy insofar as it takes the form of data projects, and that is quite worrisome. I still do not think there is enough initiative within public institutions to develop the expertise needed to keep data projects inside the democratic system. The same holds at another, legal level when we look at Big Tech, which is becoming ever more powerful within these kinds of projects. I did some research into corona apps at the European level, and that is a very clear example where Big Tech had such
a big say in our ethical discussion about what these technologies should look like. So, a bit of positive and a bit of negative, and now I hope Iris will make it even more positive than I did.\nWell, let's see. Thanks, Lotje. I have really noticed a very big shift in the sense of urgency surrounding digital ethics. When we at Utrecht Data School started getting interested in data ethics, around 2015, really no one was talking about it. We even did a small study of how many times the forthcoming GDPR was mentioned in the media: zero, right up until the last two months before it took effect in 2018. That really shows the lack of awareness around these topics at the time. In the past couple of years, by contrast, I have seen a steep rise in the attention given to data ethics: many new frameworks, guidance instruments, guidelines, and codes of conduct have been released, especially in the past year and a half. I feel that is a very positive thing.\nThere is also an EU regulation on AI forthcoming, and I am very curious to see its final form. It will really change the character of doing data ethics, from something that has so far been purely, or at least mostly, voluntary to something a bit more obligatory. I think that too is a good thing, because right now doing data ethics largely depends on individuals within organizations who have an intrinsic motivation to raise these subjects. Sorry, let me take a sip of water.\nSo the character of data ethics will really shift in the future. I think it will be very challenging to codify ethics, because ethics, as opposed to law, is a grey area that is very hard to put into print. Instead, I feel that demanding proof, documentation of an elaborate ethical deliberation, will be more successful: rather than demanding complete adherence to a fixed standard, demanding thorough documentation showing that you have carefully weighed the ethical considerations within your process. That is just my opinion; I am interested in hearing yours in a minute.\nSomething else I wanted to mention is a new impact assessment, the Fundamental Rights and Algorithms Impact Assessment, created at Utrecht University last year. I was one of its co-developers, and I think it will play an important role, especially in the Dutch context, because this impact assessment is one of the candidate instruments for when impact assessments are made obligatory by the EU regulation. It is available in Dutch, and it should be available in English, I hope, this month; I will definitely share the link if you are interested. It is also very much focused on facilitating the documentation of ethical deliberation, on creating that proof of a careful decision-making process around ethical aspects.\nSo, we would really like to hear your opinion on all this. I have written down some starting questions on this slide, but
feel free to weigh in on whatever subject we have talked about today. Let's open the floor for some discussion; I would really love to hear your reactions.\nThank you so much, Lotje and Iris, fascinating insights. Indeed, let's open the floor for questions and discussion, so please use the raise-your-hand function, or send something in the chat, but preferably raise your hand so that we can talk live. I did see one question come in earlier in the chat, from Nishant. Nishant, would you like to ask it yourself? Otherwise I can read it out. Nishant wrote: I am sorry if I missed this, but what kind of infrastructure were these projects using, for example compute for data processing, storage, machine learning, and so on?\nI am guessing the question is about what cases were typically brought to the DEDA sessions, and that really varied a great deal. Sometimes a case concerned an algorithm to predict which citizens were likely to get into debt, for instance; such an algorithm could be developed by an external party or in-house. Risk-assessment models like that were very prominent among the cases, but we also encountered simpler data projects. It really differed; every workshop had a different case under discussion. I hope that answers your question.\nThanks. Next we have Madeleine.\nYes, can you hear me? Sorry, my computer is being a little funny. Thanks so much for your talk; it was so interesting. I was especially interested in the zeitgeist narrative. I work in the ethics and philosophy section at the university, and with engineers as well, and I have heard this narrative a lot in both places. I was wondering how often you hear explicit concern about funding, as in: we have to keep up, because that is how we will stay funded. I am wondering how money plays a role in the zeitgeist narrative, and whether it shows up explicitly, which I think you sort of alluded to in the references you mentioned.\nThat is actually a good question. In the workshops we did with these local government institutions, funding did not explicitly seem to be a big part of it; it was more the political interests, the political image that could be created with these data projects, that appeared to matter. But Iris, please say so if your experience was different. I also want to briefly mention that I am currently looking at the influence of Big Tech in healthcare, and there it is very clearly present: when you ask medical researchers why they want to collaborate with Big Tech, they simply say there is no public funding for these kinds of long-term research projects, which is of course a very worrying answer. So I think it might
be relevant here as well, but much less explicitly. Iris, anything to add?\nNo, I completely agree. Municipalities have a set budget for things like social services, and when they deploy an algorithm or some other type of data project within the social-services domain, it comes out of that budget. So funding is typically not made explicit within these ethical deliberations, and I really think that is due to the structure and budgeting of the municipal organization. But it is very interesting, Lotje, to hear how different this is in healthcare.\nYou have a question, I see.\nYes, thank you very much, also for the interesting presentation and the documentation you provided. I am really quite interested in what you do with DEDA, and I see publications coming out and an interesting website as well. What do you see as the next step? What would you like to enhance in DEDA over time?\nI am actually working on an update right now. We aim to bring out an update every year, to keep up with technological developments, with things we encounter during workshops, and with changing laws and regulations around the topic. And the new impact assessment I just mentioned, the IAMA, we created because we felt there was a need for more focus on human rights. That impact assessment therefore concentrates heavily on human and fundamental rights, and it narrows the scope from data projects in general to algorithms in particular, because the need is changing: the use of algorithms that could potentially breach human rights is increasing. That is a shift we have observed.\nSo is this an ongoing collaboration between Radboud and Utrecht? [Laughter]\nWell, no, not formally. We are still thinking of ideas together and collaborating, but what I am doing at Radboud is a different project. I used to work at Utrecht and now I work at Radboud, and the issues are often very similar and very interesting, so we collaborate and exchange ideas, but the Radboud project is its own project, on a related topic.\nOkay, thank you.\nBut of course we hope it remains an ongoing collaboration.\nYes, so do we. Ah, yes, please go ahead.\nHi, thanks for the fascinating insights. I wanted to ask: you are talking now with engineers, more or less; I think most of us have a background in engineering, not all of us, but all of us are in this AiTech community to build bridges, and also to see how we can transform the culture and practice of engineering. Your talk gives a really granular insight into engineering culture, or more particularly into cultures of techno-solutionism, which persist partly through engineers taking on these roles in these spaces. So I was wondering whether you have any advice or ideas about what we can take away, or what we could do, based on these insights. Of course
I have my own, but I am curious whether you have ideas for how we could train and engage the next generation of engineers and computer scientists to steer away from this culture and work toward other practices.\nThat is such a good, and also big, question. For me, and this is ingrained in the zeitgeist narrative as you saw, one of the really important things is that we need to develop ways of recognizing when this narrative is in play, when we are thinking in a techno-solutionist mode rather than engaging in genuine, constructive ethical deliberation. I do think we are getting better at that, but we still need practice in spotting it, so that we notice: it seems we really want this particular solution, and we are not thinking carefully about whether the problem we are attaching it to is really a problem, whether it really needs to be solved in this way, whether it is actually a much more complex problem, or whether we are creating new problems by forcing this solution onto it. We need systems of thinking for this, and this is in fact what the Data School keeps doing: continually re-evaluating the practical tools that exist. They are tools for opening up deliberation, so you always have to ask how to keep the process deliberative, how to keep the democratic values involved in view. That, for me, is the key, because this is the nature of ethics: it changes, and it should be allowed to change, and anything that pins it down too rigidly, or that precludes ethical deliberation altogether, is dangerous. So the fundamental conditions for ethics are largely democratic ones: think about how to make things transparent, and how to allow ethics to keep changing.\nI think engineers are getting better and better at thinking about this, but they carry a really big responsibility, because they know the details. This is also why what you are doing is so valuable: as I showed earlier, it makes an enormous difference when the people who are good at thinking about public values sit together with the people who know how the technology works. That is really the only way to reason about these things ethically; if you miss either of the two, it does not work. But I am also interested in your ideas.\nAnd to add something super concrete to what Lotje is saying: one example of how we deal with this is that we have a master's programme in Applied Data Science, and we have now integrated a weekly ethics colloquium into it, mandatory for all the data students. Every week we either invite someone from the field, a data scientist, say, to talk about how he or she deals with data ethics, or give a session ourselves; this week I am giving one there with DEDA, the Data Ethics Decision Aid, to teach the students to run these deliberative processes around ethics. But what is
your view on all this?\nWell, first of all, thanks; those are really tangible things. You talked about systems of thinking, about reflecting on the problem, the stakes, and the different perspectives, and about staying open to new forms of ethical deliberation, reflecting on the practices and tools of deliberation themselves. I heard a few more things as well, and hopefully we are all taking notes, so first of all thank you for that.\nHere in Delft we are indeed trying to create more and more community, and more and more ways for researchers and students to engage, but I think we can learn a lot from what you are doing. Personally, I chose to come to Delft, to the Technology, Policy and Management faculty, because it is this rich collection of different disciplines, and that is also translated into the curriculum. So within Delft we can build on that, and we can also learn from our colleagues in Industrial Design Engineering, who have a similar make-up. I think there are a lot of quick wins there, and also inspiration we can draw from the kinds of programmes and tools you are developing. Those are my immediate, more pragmatic thoughts.\nNot much to add; I think it is excellent. One key thing we still lack, and this is where some of my own research sits, is ways to imagine new ways of designing data-driven, algorithmic functionality. If you stay stuck between the very practical nuts and bolts on one end and the more ambiguous values on the other, there is still a big gap in between: how do you bridge it, including in a material way? Some of the work we are doing asks what the typical socio-technical dimensions are that keep coming back once a system is integrated into a context: what kinds of things you have to take into account when designing these systems, both materially and in terms of how you manage them. That is very much ongoing work, so it would be great to bring that perspective together with what you are doing and see how they could complement and feed each other.\nOur cat also wants to be part of the conversation. Come here.\nSomething that I observe lies in how we frame projects: we often start from saying that we want to use algorithms or AI to solve societal challenge X, and that is how the project begins. This is what I often find problematic, for reasons connected to what you talked about today: it leaves very little wiggle room to even raise the question: hang on, the societal challenge we want to solve is great, but if we enter the project with the framing that the solution involves algorithms and AI, how can we still question whether the solution perhaps does not involve algorithms or AI at all, or involves them in a very different way from the framing we started with? What do you think about that?\nYeah, I think
you're completely right, and it is also one of the things that interests me about these kinds of projects: you can sense that they are more solution-driven than problem-driven, while the problems they are trying to address are really big and complex, and need care in order to be thought through and to develop responses to. This is indeed something we noticed, and I keep noticing it in other projects too: they are technology-led. There is still this way in which we worship new technology and progress, and I think that is deeply connected, because you then start your whole thinking process from the technical solution. So you are right that it precludes good ethical deliberation.\nI also think there is something interesting in the haste, which is why I emphasized it earlier: this way of thinking is always about doing things very fast, whereas ethical deliberation, and democratic processes in general, inherently need to be slow. You can only do them by thinking together with many people, letting things pass through many institutions, having all these moments where a plan is slowed down and reviewed again. So there is an inherent tension there that we need to be wary of. This is also why I like the idea of maintenance: Vinsel and Russell really argue, let's start celebrating maintenance instead of innovation, and you could do the same with slowness: let's celebrate very slow bureaucratic processes, let's enjoy them. We do not yet have scripts for how to do that, but developing them could help. I do not know whether that resonates, but yes.\nIt does, and thanks, Lotje. To echo some of the points just discussed: personally, I also think a lot of what we need to do is on the education front. Looking back at my own journey, I see a lot of socio-technical and systemic gaps in how I was educated, coming from a technical background, to think about problems, basically as things you simply solve. There is so much we can do in the way we train future generations of specialists, so that they think systemically and realize it is a puzzle you solve together with other disciplines. And indeed, as was said, I think we can really learn from each other about the things we are trying and experimenting with in education at our different institutions.\nYes, I agree.\nUnfortunately we have run out of time; I think we could have continued talking about this for much longer. Lotje and Iris, I want to thank you both very much for coming today and sharing your insights, and I hope we can continue this conversation another time. And of course it would be great if people would like to reach out to each other and talk more offline.\nGreat, sounds good. Thanks so much for the invitation, and hopefully we will speak again;
I will talk again at some point, definitely. Thank you very much for joining today. Take care, bye!", "date_published": "2022-02-17T12:36:08Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "129cfe7ae6f076caa04101c0fe3109b6", "title": "AiTech Agora - Jie Yang: ARCH - Know What Your Machine Doesn't Know", "url": "https://www.youtube.com/watch?v=VvLncg14Jc0", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "That works better afterwards; there will also be a designated spot in the middle of the presentation for questions. Okay, thanks. The floor is yours.

Thank you, and thank you everyone for joining my presentation. You can all see my slides, right? Just to confirm. Okay, cool.

Let me start by briefly describing my research background, also to explain how I arrived at the research topic I'm going to introduce. I did my PhD here at TU Delft, on understanding and leveraging human intelligence in data and knowledge creation processes; an important application, as you can imagine, is creating training data for machine learning systems. During my PhD I mainly focused on the human side of the research, studying relevant human characteristics such as expertise and motivation. In my postdoc I started to focus a lot on integrating human and machine intelligence, for example on how to best allow machine learning models to learn from humans, so that we can save labeling effort while addressing issues in human data creation, such as the reliability of, and biases in, human-annotated data. Afterwards I spent some time in industry as a machine learning scientist at Amazon in Seattle, mainly applying my research to personalized search and recommendation on voice, those Alexa devices which most of the time disappoint you. That work involves a lot of natural language processing, which I'm currently also teaching at the faculty.

Now I will introduce the exciting project I'm currently working on, called ARCH. The idea is to develop a diagnosis tool that allows us to know what a machine learning system doesn't know. Why are we interested in this topic? It is driven mainly by the fact that machine learning systems are currently running everywhere, but often in an unreliable way: they generate errors we don't expect, in many situations. I believe most of us have seen or heard terrible stories of machine learning errors in mission-critical tasks, for example in transport, finance, and healthcare, where safety is a very important concern. It is also a big problem in applications such as information services, which nowadays shape people's opinions to a very large extent. The challenge is that we don't know much about how to build systems that behave reliably, systems that would let us worry less about catastrophic, damaging effects.

So what are the pain points? Let's take a look at the machine learning system life cycle.
A typical machine learning system life cycle involves two types of stakeholders: the developers, sometimes called machine learning scientists or engineers, and the users or domain experts. This is of course very much simplified; in a hospital setting, for example, the domain experts or users could be the doctors, but also the people affected by the use of the system, such as the patients and their families affected by an automatic diagnosis tool. I highlight these two types of stakeholders because they are really representative of the problems we are going to talk about.

The problem with the current life cycle is that when the system fails, the user can provide feedback to the developer, but the developer doesn't know what to do to fix the system so that similar errors won't happen again. And from the user's perspective, it's really hard to know what is going wrong and how much to trust the model. What we need is a diagnosis tool that helps us debug the system, to know what is really going wrong and why the error is happening, thereby allowing us to avoid similar errors on new data as much as possible. If you think about it, this kind of tool is essential for building any type of reliable system, software or hardware. We cannot build a perfectly reliable system in one shot; it's an incremental process that involves a lot of testing and debugging, and having such a tool helps us close the loop and allows for incremental improvement of the system.

I've mainly been talking about the importance from the developer's perspective, but knowing what the machine doesn't know is also super important from the user's perspective: it helps users decide when to trust the system, and it helps people work better with AI algorithms, knowing what an algorithm can and cannot do. To sum up: such a diagnosis tool is very important for different stakeholders, but in machine learning we don't yet have this kind of tool, partly because machine learning is still new to many domains, and partly because the problem of machine learning diagnosis is hard and needs a lot of research.

Next, let me briefly give an intuition for why it is hard. What we are really asking is: what causes machine learning errors? Only by knowing that can we develop tools for diagnosing machine learning systems. Errors generally come from biases in the data, in data selection, representation, and annotation. This is different from software systems, which usually come with a lot of code and not that much data. In machine learning, the complexity lies in the patterns hidden in the data; by comparison, the code we use to build machine learning systems is relatively simple. We follow much the same recipe for different tasks: create the features, build a model, test the model. What is complex is what is hidden in the data.

To really see that, consider a simple but illustrative example. Say our goal is to train a model to classify whether an image shows a dog or a cat, a very typical vision task. Now assume the training data contains only images of black dogs and white cats. A model trained on this data will learn that it can discriminate dogs from cats perfectly based solely on color, regardless of any genuinely discriminative features such as the shape of the nose or the kind of eyes. If we deploy this model and it sees an image of a white dog, it will very confidently classify the image as a cat, based on color, because it has learned that color is highly indicative of the class. Despite being very simple, this example shows the intrinsic problem with machine learning systems: they learn from data by picking up statistical correlations, here the correlation between color and animal, and those correlations are not reliable.
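A minimal sketch of this failure mode, with synthetic stand-in data and scikit-learn (illustrative only, not the speaker's setup): the classifier latches onto the color feature and then errs, confidently, on a white dog, which is exactly the kind of high-confidence error discussed next.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Feature 0: mean brightness (the spurious "color" cue).
# Features 1-2: weak, genuinely discriminative shape cues.
color = np.concatenate([rng.normal(0.2, 0.05, n), rng.normal(0.8, 0.05, n)])  # dark dogs, light cats
shape = np.concatenate([rng.normal(0.0, 1.0, (n, 2)), rng.normal(0.3, 1.0, (n, 2))])
X_train = np.column_stack([color, shape])
y_train = np.array([0] * n + [1] * n)  # 0 = dog, 1 = cat

model = LogisticRegression().fit(X_train, y_train)

# A white dog: dog-like shape cues, but cat-like brightness.
white_dog = np.array([[0.8, 0.0, 0.0]])
proba = model.predict_proba(white_dog)[0]
print(f"P(cat | white dog) = {proba[1]:.2f}")  # close to 1.0: a confident error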
The example is simple, but the same thing can happen with much bigger implications, depending on the application. Imagine the goal is to predict who is more likely to recommit crimes in a legal system, and the bias is now race: just as color was a spurious correlate of dog versus cat, here the spurious feature is a person's race, and it comes with far larger consequences. Similar problems can happen in other critical scenarios, for example medical ones: if the goal is to predict whether a medical image contains cancer, predicting a cancer to be, say, a benign tumor based on irrelevant features could have damaging effects.

These kinds of biases lead to a major type of machine learning error that we call unknown unknowns: errors produced with very high confidence. As I already described, such errors can lead to catastrophic outcomes. If you think about it, it is fine for a model to make errors as long as it can tell us it is not confident; in that case humans can take over the decision-making. But when the model says it is confident, the prediction had better be correct. This kind of error is really hard to identify before we observe the damage: methods developed for proactive identification of machine learning errors, such as active learning, can only deal with the other type of error, known unknowns, where the model has low confidence in its prediction.

As a side note, we are currently working on an approach that would make active learning aware of those biases in the data. We are developing a reinforcement-learning-based approach with humans in the loop, such that during active learning we look not only at model confidence but also at the diversity of the data, so that the model ends up more aware of the things it doesn't know.
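The speaker's approach is reinforcement-learning-based with humans in the loop; as a rough, hypothetical illustration of the underlying intuition only (not their actual method), an acquisition score that mixes model confidence with data diversity might look like this:

import numpy as np

def acquisition_scores(proba, X_pool, X_labeled, alpha=0.5):
    """Score unlabeled examples for active learning.

    Combines prediction uncertainty (low maximum class probability) with
    diversity (distance to the nearest already-labeled example), so that
    under-represented regions of the data get sampled even where the
    model is confidently wrong.
    """
    uncertainty = 1.0 - proba.max(axis=1)
    # Distance from each pool point to its nearest labeled point.
    dists = np.linalg.norm(X_pool[:, None, :] - X_labeled[None, :, :], axis=2)
    diversity = dists.min(axis=1)
    diversity = diversity / (diversity.max() + 1e-12)  # normalize to [0, 1]
    return alpha * uncertainty + (1 - alpha) * diversity

# Usage sketch: next_idx = np.argmax(acquisition_scores(model.predict_proba(X_pool), X_pool, X_labeled))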
Coming back to the story: one thing I would like to point out is that data alone might not be the solution, yet it is what we currently rely on in practice. To avoid errors as much as possible, we collect as much data as possible, hoping the machine will be reliable in application. It turns out this is not a good solution: machine learning models trained on even huge datasets can still easily fail in different applications, in both vision and language tasks. Models trained on ImageNet, with its 14 million images, still make errors that from a human perspective are rather stupid; that is literally the word people have been using in the academic world. It's the same with large language models like GPT-3, trained on hundreds of billions of words, which people still find failing easily at reasoning tasks we humans see as quite simple.

So instead of collecting more data, ideally we should know what data the system will see in application and have it covered in the training data. The problem is that we cannot foresee all application scenarios; that is partly why we want to use machine learning to do these intelligent tasks for us in the first place. But here is my main proposition: at least for anything that we know a machine learning system should know, we should be able to tell what it really knows and what it doesn't. Once we know what the model doesn't know, we can not only fix errors when they occur but also identify foreseeable errors and take action to hedge the risks.

One thing I have intentionally not defined so far is what we mean by us knowing something, or by the machine knowing something. That is a key consideration, and also a key challenge, in developing a machine learning diagnosis tool. When we say a machine knows something, we mean the correlations captured by the model, represented by matrices and vectors filled with numbers that are hard for humans to understand. When we say that we humans know something, we mean knowledge represented by symbolic concepts, relations, and rules; this is how we humans understand the world. Note that those concepts and rules are usually more reliable: the rules in particular often denote causal relations, as compared to the statistical correlations captured by machine learning models.
With this in mind, our tool ARCH is built on the following pillars. First, it allows us to know what a machine learning system knows, through explainability methods. Second, we specify our requirements on the domain knowledge that needs to be internalized in the machine; this is key to ensuring the machine behaves reliably in foreseeable situations. And third, given what we know the machine knows, which is rather complex, and what we know it should know, we infer what the machine doesn't know. In the following I will very quickly go through the work we are currently doing on these subtopics, and then I will dive into the first topic, how we find out what the machine knows, by presenting a recent work we published at the Web Conference.

Very quickly, then, on the first pillar: how do we know what the machine knows? This is about machine learning interpretability or explainability, which is currently quite popular everywhere. We recently published a work that introduces a human-in-the-loop approach to explaining what a model has learned, and I will give more details in the second part of this presentation. Essentially, we ask humans to annotate the concepts the model relies on in making its predictions. With this method, users can debug a system by looking at which classes the model is most confused about, and then identify the concepts and rules the model relies on in making those problematic predictions.

For the second subtopic, how do we know what the machine should know? This is what we call requirements elicitation for machine learning tasks. Here we need to involve domain experts, or other types of users, who can tell us their expectations of what the model should know. To make it fun, we are developing games that can engage domain experts in contributing their knowledge; we are really dealing with people here, so it is important to make the task enjoyable. In the meanwhile we are developing a language in which those requirements can be well specified for diagnosis purposes. This is still ongoing work, but I'd like to share some reference numbers that show how much potential this could have for machine learning diagnosis. A team at Stanford recently showed that by testing machine learning systems against some predefined requirements, we can largely reduce errors. They tested the idea on an object recognition task: the goal is to detect whether or not a car is present in each image of a video. The requirement they tested is very simple: if the model has seen a car at time t and a car at time t+2, then it should also see a car at time t+1, because it is not likely that a car would disappear and come back within consecutive seconds. With this kind of very simple requirement, they showed that testing and improving machine learning systems against the requirements can reduce object-detection errors by up to 40 percent.
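A hedged sketch of such a temporal-consistency check (the per-frame boolean data format here is assumed purely for illustration):

def temporal_violations(detections):
    """Find frames t+1 where a requirement of the form
    'car at t and car at t+2 implies car at t+1' is violated.

    `detections` is a list of booleans, one per video frame,
    True meaning the model detected a car in that frame.
    """
    return [
        t + 1
        for t in range(len(detections) - 2)
        if detections[t] and detections[t + 2] and not detections[t + 1]
    ]

# Example: the car "disappears" for exactly one frame.
print(temporal_violations([True, True, False, True, True]))  # -> [2]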
Note, though, that what they did was only to test the model's outputs against the requirements. What we are doing goes even deeper: we want to open the systems up and see what has been learned internally, posing our requirements on the knowledge learned inside the system rather than on its outputs. If you think about it, this would allow us to avoid more errors of a similar type in the future. Another difference is that instead of simply adding constraints that anyone can come up with, we are developing an approach that, first of all, lets us elicit those requirements in a principled manner, and that can then infer what is really going wrong in the model. And that is what I am going to introduce now, related to the third topic.

Our goal here is to infer what the model doesn't know from the observation of what it should know and what it already knows. This might sound very simple, but in the end it is not, especially when we consider all the different relations between the concepts the model should have learned internally. All those relations need to be dealt with, and that is why we look at reasoning approaches. For this specific problem it is not the usual deductive or inductive reasoning we encounter in most cases, but a specific type of reasoning called abductive reasoning, where the goal is to infer explanations from observations. Here, our observation is what the model should know and what it already knows, and the explanation is what it doesn't know. On the slide is a project we are currently working on with the Fribourg hospital in Switzerland, with my former colleagues, in the context of prostate cancer detection. We apply the same idea: we have already developed a machine learning model for automatic cancer detection, and given the requirements collected from the doctors and the annotations from the crowd about what the model has learned, we want to infer what the model doesn't know. One of the key challenges is that the requirements we get from the elicitation step might not be perfect, so we need to involve domain experts again in the reasoning when the elicited knowledge is uncertain. We need an approach we call abductive reasoning centered around humans, and that is why we named the tool ARCH.

This concludes the first part of the presentation. Let me quickly share some of the applications beyond this prostate cancer case. In the legal domain, we are working with the data science team of the Ministry of Infrastructure and Water Management, who use machine learning to check the compliance of vehicles and ships. In their case the data is also biased.
You can imagine that inspectors have a higher tendency to inspect certain types of trucks, for example trucks that look overweight. So how can we ensure reliability and fairness when the training data is biased in this way? In the financial domain we are working with banks, where machine learning is used a lot to detect fraudulent transactions, for example money laundering; how can we make sure the models and predictions are reliable in this case too? Beyond that, we are also developing solutions to ensure the reliability of machine learning in embedded-AI scenarios, applied to smart buildings and factories. One very interesting ongoing project combines sensors and AI to detect what people draw in the air: digits like one, two, three, four, five. This could be very useful, for example, in the elevators of buildings: when people want to go to a certain floor, they wouldn't need to press buttons; they could simply draw the number in the air, and with sensing and AI we detect where they want to go. That is quite important in the current corona context. There are all kinds of reliability problems there as well: people have different habits and may draw the same digit in different ways, which may also depend on their educational or cultural background.

I will stop here and take questions for this part. If you have questions, you can ask directly through the chat or by voice. Are there any questions?

There is actually one in the chat. The question is: often we do not know what the machine must know, because we are not aware of a lot of the knowledge we have; much of this knowledge has never consciously been put into symbols either. How do you deal with this?

That is a very relevant question, thank you. This is related to a lot of ongoing work in knowledge representation and reasoning, which our group is also heavily working on. You're right that a lot of knowledge, maybe most of the knowledge we humans use for different tasks, remains in our minds, in our biological neural networks, and we cannot easily find it on the web or in existing databases. A lot of work still needs to be done to effectively extract the knowledge people rely on to do all those tasks on a daily basis. That relates to an approach we call human computation and crowdsourcing, which is also my main focus: a subfield of artificial intelligence that not many people work on, but that is getting quite popular.
The whole idea there is to develop solutions that allow us to efficiently engage people, so that we can effectively obtain that knowledge. And it is not only about knowledge: also, for example, beliefs and values, the things we care a lot about nowadays; how to elicit them from people by actively engaging them. Does that answer your question?

Well, you confirmed that we do not really have a final solution to this question yet, indeed.

Another comment from the audience: thank you for your presentation; I'm very glad that you are trying to make human knowledge available to machines, especially causal models. If we find a way to codify the causal models we have, we can also cater for outliers and exceptions, because, exactly as you said, if the machine is very sure then we have no reason to doubt it, but if the machine says "this does not fit my causal model", that can be another flag saying "I'm not sure". And if you just start doing this, the errors will show us where we did not formulate causal relations that we apparently need in practice.

Yes, it has a lot to do with causal relations, but on the other hand it is also very much about finding ways to engage the different types of stakeholders. All those expectations exist in different stakeholders' minds, so we need to find a way to actively engage them, so that we can really be informed about how the model should behave.

There is another question, from Stephen: how are you approaching the challenge of capturing what the model knows in symbolic concepts that users are familiar with, or that are sufficiently abstract?

That's a great question, and exactly the work I'm going to introduce now. Stephen, tell me afterwards whether the next part really answers your question. What I'm going to introduce is a recent work we published at the Web Conference; it is very recent and I don't know if the paper is online yet, but if you're interested, just let me know. In this work we developed a method for interpreting the internal concepts and mechanisms of computer vision models by leveraging human computation at scale, through crowdsourcing. As you can already see from the title, the way we make something interpretable to humans is really to engage humans in the process of interpreting the things we are interested in: model behaviors, what the model has learned, concepts and rules.

Why do we need interpretability? We talked about it earlier from the reliability and trustworthiness perspective; here is a more extensive list of reasons. Apart from being helpful for developers to debug models of interest, explanations can also help those developers and data scientists gain buy-in from customers or management, by making the customers aware of what the model can and cannot do. And it is very important for auditors to decide whether a model is eligible to be deployed, which is becoming more important given all the new regulations on AI.
When we talk about interpretability, there are two types we are interested in: local and global. Local interpretability means we want to know the inference the model follows in making a prediction for a specific data instance. Global interpretability means we want to know the mechanisms the model has learned in general. Here we mainly look at computer vision, and in this context global explainability or interpretability means explaining model behavior with respect to its predictions on a set of images, on a dataset.

There are actually a lot of global interpretability methods, and among the most state-of-the-art are methods that generate visual concepts representing what the model knows, to explain its behavior. On the slide is the output of an interpretability method, published very recently, that explains model behavior in the context of vehicle classification. What these methods do is generate a set of concepts the model looks at in making predictions, with each concept represented by a set of image patches, organized one concept per row. For example, the row at the top says the model recognizes white patches when classifying something as a moving van; those white patches likely indicate the body of the van. There are other, similar concepts as well, for example logos: you can see all the different kinds of logos present on the different vehicles. But I added a question mark next to each of those concepts because, as I imagine you may already have felt, it is relatively hard to attach a semantic meaning to them, and it takes a lot of effort. Even if we zoom in on the image patches found by the method, the patches that supposedly represent one distinct concept, what we see is relatively ambiguous: it could be the tire of the vehicle, or it could be the pavement.
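Methods of this patch-based kind typically work, roughly, by embedding image patches with a pretrained network and clustering the embeddings, so that each cluster becomes one row of patches in the explanation. A schematic sketch under that assumption (the `embed` function is hypothetical and must be supplied by the caller; this is not the specific method on the slide):

import numpy as np
from sklearn.cluster import KMeans

def extract_visual_concepts(patches, embed, n_concepts=10):
    """Group image patches into candidate 'concepts' by clustering
    their feature embeddings; each cluster is one row of patches in
    the explanation, which a human must then try to make sense of.

    `patches` is a list of image patches; `embed` maps a patch to a
    feature vector (e.g., activations of a pretrained CNN layer).
    """
    features = np.stack([embed(p) for p in patches])
    labels = KMeans(n_clusters=n_concepts, n_init=10).fit_predict(features)
    # concept id -> list of patches (still unnamed, hence the question marks)
    return {k: [p for p, l in zip(patches, labels) if l == k] for k in range(n_concepts)}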
A big problem with methods that generate image patches to represent learned concepts is, first, that making sense of the output requires a lot of effort, and second, that they don't really give a global understanding of what the model learns, because all we get are separate image patches. Imagine a dataset like ImageNet, with millions of images: how many image patches would we end up with? That is not consumable for developers or users. There are other problems as well. One, which we found through our empirical exploration, is that not all the concepts the model uses are actually captured by these explainability methods; I will show this in the experimental part. And, very importantly, these methods give no indication of the relevance of potential combinations of concepts that the model relies on in making predictions. When I talk about combinations of concepts, I am talking about rules. For example, an explanation might highlight that the model looks at the flashing lights and the cross logo to recognize an ambulance, but we don't know whether it uses them together or looks at them individually. That can be problematic: we don't really know whether to trust a model that looks only at, say, the tires to call a vehicle an ambulance, or only at the cross sign; that could still be ambiguous in certain cases.

Before I introduce our method, I want to make very clear what we actually want in a good explanation method, that is, the requirements of a good explanation. The first is intelligibility, because in the end we want humans to be able to understand the explanations. We searched the literature outside computer science for insights about intelligibility and identified two relevant theories. One is the representational theory of mind, which explains how human minds represent and reason about the environment; the other is work on the human visual processing system, which studies how people visually process information from the environment. Important things we learned are, first, that humans understand the environment through concepts that correspond to entities, which come with types and attributes. Thinking about an ambulance, we might think about the flashing lights, the cross logo, and so on; additionally we might think about attributes of those entities: the flashing lights are typically orange or white, and the stripes on an ambulance are typically yellow or blue. Note that when describing those attributes I used the word "typically", which is in fact an important concept by itself: it denotes the strength of the association between the entity type and the attribute. The stripes on an ambulance are more likely to be blue than red; there is a relative strength to the association between a property and an entity.
Another thing we learned is that concepts may themselves be composed of other entities: the wheels have spokes, which is itself a concept, and that can be further broken down into, for example, shape, color, and texture. In fact, the human brain processes visual information in a way that goes from low-level concepts to high-level ones: from color contrast to shapes and texture, and on to more abstract semantic representations of a concept. Combinations of concepts can be represented through logical operations like AND and OR, which we have actually already used: flashing lights are usually orange OR white. Note that combining concepts is really important for us humans to understand complex objects, and it is something we also want to see in a machine learning model's explanation.

A second requirement of a good explanation, something people have recognized before, though in this specific context it takes on a distinct meaning, is fidelity: we want to know about exactly the concepts the model looks at in making predictions, and nothing extra. In the ambulance versus moving van example, say the model has not learned to look at the wheels, since they are present in both classes; in that case we would not want the wheels to appear in our explanation. But if the model is looking at things like the drivers, or the sky, in the detection of vehicles, which are biases, and which we don't want the model to learn, then, if the model does rely on those concepts in making its decision, we want all of them exposed in the explanation.

The last requirement, usually ignored in explainable-AI work, is that we want explanations to support different types of interaction from the stakeholders. Each stakeholder comes with a purpose, a goal: to debug the model, to decide whether or not to deploy the model or trust it, and so on. Broadly, there are two types of interaction we are interested in, at a more abstract level: exploration and validation. Exploration means that users might simply want to explore what the model has learned, to see if there is anything interesting. But they might also already have some specific concepts or requirements in mind, and they should be able to query whether or not the model follows those expectations; that is the other type of interaction, validation, validating model behavior against requirements and expectations. In this validation scenario, a particularly relevant type of user query is the multi-concept query, a query that involves multiple concepts. For example, the user might be interested in finding out whether or not the model looks at the flashing lights AND the cross logo AND NOT the sky in making its predictions; you can see the combination of different concepts in a single query.
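As a minimal sketch of what answering such a multi-concept query could look like once per-image concept annotations exist (the data format here is hypothetical, standing in for the crowdsourced annotations described below):

# Each prediction is annotated with the set of concepts the model
# was judged to rely on for that image.
annotations = [
    {"flashing lights", "cross logo", "stripes"},
    {"flashing lights", "cross logo"},
    {"cross logo", "sky"},
]

def query_support(annotations, required, forbidden=frozenset()):
    """Fraction of predictions consistent with a query like
    'relies on flashing lights AND cross logo AND NOT sky'."""
    hits = sum(1 for a in annotations if required <= a and not (forbidden & a))
    return hits / len(annotations)

print(query_support(annotations, {"flashing lights", "cross logo"}, {"sky"}))  # -> 0.666...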
A good explanation method should be able to support this kind of query, and this is what we did in our paper. Explanations generated by our method support both model-behavior exploration and validation, and in particular our interpretation allows answering multi-concept queries, such as whether the model relies on the cross sign, the flashing lights, or the blue sky in classifying a vehicle. Here are some examples: these are explanations generated by our method for exploration purposes, and you can see the different rules, identified by our method, that the model follows in making predictions. What I don't show here are the typicality scores: there should also be typicality scores associated with those different rules, indicating how strongly the model relies on a certain rule in making predictions. On the left you can see the same kinds of rules, but supporting the generation of answers to users' queries, so users immediately get a response about whether or not the model follows a certain rule they have in mind.

What I want to conclude from this part is that our method is, by design, such that its explanations can be very easily understood by human users, and it supports multi-concept validation and exploration. For the other requirement, fidelity, I will show some quick experiments later. Now, how does it work? I will go through this quickly. Our problem setting is that we have a model that needs to be interpreted, and a dataset; here we again consider a vision task, where we have a dataset of images. First, we apply some existing local interpretability methods that highlight the pixels in an image that are relevant to the model's prediction. There are some limitations we need to address in applying these methods, but say we obtain this kind of saliency map. Next, we crowdsource the task of making sense of those image patches and images to a large number of workers online. If you are familiar with crowdsourcing, you will know that this kind of task can be done in a very efficient manner, so we can very quickly get results from a huge number of online participants. What they provide is what we talked about before: the entity names, and the attributes associated with those names, that describe the highlighted areas in each image. That is what we get from involving humans in the loop. We then simply reorganize all those human annotations into a table in which each row is an individual image and the columns are the concepts that annotators deemed relevant to the machine's prediction. Then we can apply statistical analysis tools, for example association rule mining, to find out what combinations of concepts can explain model behavior, and maybe other analysis tools as well, for example visualization tools and decision trees.
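For instance, with the annotation table one-hot encoded, off-the-shelf association rule mining can surface candidate rules, with rule confidence playing the role of the typicality score mentioned earlier. A toy sketch (mlxtend is one possible library choice here, and the data is made up; this is not the paper's actual pipeline):

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One row per image, one boolean column per human-annotated concept,
# plus the model's predicted class included as an item.
df = pd.DataFrame(
    [
        {"flashing lights": True, "cross logo": True, "sky": False, "pred_ambulance": True},
        {"flashing lights": True, "cross logo": False, "sky": True, "pred_ambulance": True},
        {"flashing lights": False, "cross logo": False, "sky": True, "pred_ambulance": False},
        {"flashing lights": True, "cross logo": True, "sky": False, "pred_ambulance": True},
    ]
)

frequent = apriori(df, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
# Rules whose consequent is the predicted class are the interesting ones,
# e.g. {flashing lights, cross logo} -> {pred_ambulance} with confidence 1.0.
print(rules[["antecedents", "consequents", "support", "confidence"]])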
Once this is ready, we can support all the other needs of model validation and exploration from the users. That is the workflow; essentially it is not that complex. What it does is involve humans in the middle, so that what we get about how the model behaves is readily consumable by human users.

Let me very quickly give a few results. One thing I want to show is that our interpretability method produces explanations that not only accurately describe what the model has actually learned, but also give good coverage of the concepts and rules the model has learned, which can further give us insight into whether the model has learned something we don't expect, all the different kinds of biases, and can help us identify when we can trust the model.

There is one particular challenge we need to address first if we want to evaluate explanations: we don't really know what a model has actually learned. If you think about it, that is relatively hard to establish, and it is a common challenge shared by all other explainability methods. What previous work would do is simply train a model and manually check for a few very obvious concepts the model would be expected to learn. Instead, what we did in this work is come up with a more extensive test suite, a benchmark. We looked at several different image classification datasets, classifying pedestrians, fish, and vehicles, common objects we encounter in daily life, and for each of them we created synthetic biases in the data that should skew the model's mechanisms. We do this in several ways. One is to inject visual entities into the images, for example adding timestamps; this is not unnatural, because if you think about it, a lot of the pictures we take have a timestamp indicating when they were taken. The other method is to resample the dataset based on some existing entities: in the pedestrian gender-classification task, for example, we keep only those images in which a woman has long hair. By skewing the data in this way, we can be virtually sure that the model will pick up long hair as a bias in recognizing gender.
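A sketch of the timestamp-injection idea (PIL assumed; the exact text, font, and placement used in the actual benchmark may differ):

from PIL import Image, ImageDraw

def inject_timestamp(image: Image.Image, text="2021-05-19 20:02") -> Image.Image:
    """Stamp an arbitrary timestamp string onto the corner of an RGB image.

    Applying this to (say) every image of one class and not the other
    creates a synthetic bias the model can be expected to pick up,
    giving a known ground truth against which to check whether an
    explanation method recovers the spurious concept.
    """
    stamped = image.copy()
    draw = ImageDraw.Draw(stamped)
    draw.text((5, 5), text, fill=(255, 255, 0))  # default font, yellow text
    return stamped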
We did the same for other classes as well: for lobsters, for example, we removed all images in which the lobster is not on a plate, expecting the model to pick up the plate as a background bias. We also ran tests with different machine learning models that we already know have somewhat different learned mechanisms, pre-trained models versus fine-tuned ones, to observe whether or not the explanations really reflect those different mechanisms. I am going to skip some slides; by the way, if you are interested in this benchmark, here is a link to the website where we host the datasets and other resources used in this study, and I will also post it in the chat after the presentation.

[Host: a very quick note, we are basically out of time and some people have other meetings, so if you could quickly wrap up, that would be great.]

Yes, this will be the last slide, very quickly. We first show some results for a model pre-trained on ImageNet, retrieving the global explanations for three different classes. On the slide you can see the images with highlighted pixels, and what we get from our explanation: a very rich set of concepts describing what the model has learned. By comparison, the state-of-the-art baseline only gives us colors, which is not really informative about what the model has learned. Then, after injecting the synthetic biases and fine-tuning the model on the biased datasets, you can see that our approach recovers the spurious background biases the model has picked up in its predictions, for example the blue water, and also the faces of people, in recognizing lobsters, which shouldn't be there. By contrast, the other methods reflect much less of those background biases. In conclusion, our method identifies more concepts and allows us to effectively uncover mechanisms with undesired biases. There are more results, but I will simply skip them.

Some takeaways, though I have talked about them all: the one takeaway message would be that if we actively involve humans in the process of explanation, what we get in the end are better explanations, more consumable by humans, and not only that, but explanations that allow us, as humans, to identify all kinds of things we are interested in about what the model has learned. This is just a starting point; we are doing a bunch of other work along this line, and if you are interested, please get in touch. Thank you; that is the end of my presentation.

Thank you very much, Jie, that was a great talk. As I said, we are basically out of time, so perhaps if there is one burning question we can take that.
If not, please feel free to contact Jie in person. No questions? Okay, great. Then thank you everyone for attending, thank you Jie for presenting, and see you next week at our Agora meeting. Thank you.", "date_published": "2021-05-19T20:02:18Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "c693722b12c9f57920805d05831e33ce", "title": "Extending Value Sensitive Design To Artificial Intelligence (Steven Umbrello)", "url": "https://www.youtube.com/watch?v=cfUglOE_N8I", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Thanks for inviting me. Given that the talk is being recorded, would you share it later? Okay; sorry, I'm just getting these notifications from Teams.

To begin: the presentation I prepared today is based on research I have been conducting over the last year with Ibo van de Poel. We noticed that, despite the continually growing litany of AI codes of ethics, guidelines, and principles, little has been done, at least recently, on bridging this theory-practice gap. So our aim was to find a nexus in the research and use the value sensitive design approach as a methodology for translating these more abstract philosophical principles into practice.

Past research has explored how VSD (I'll keep using "VSD" for brevity to refer to value sensitive design, which I'll explain later) can be applied to specific technologies such as energy systems, mobile phone usage, architecture projects, manufacturing, and augmented reality, just to name a few. It has similarly been proposed as a suitable design framework even for future technologies, both near- and long-term; examples include exploratory applications to nanopharmaceuticals, molecular manufacturing, intelligent agent systems and, less futuristically, autonomous vehicles. Although these studies provide a useful theoretical basis for how VSD can be applied to specific technologies, they do not account for the unique ethical and technical issues that various AI systems present.

There is ample discussion of the risks, benefits, and impacts of AI. The exact effects of AI on society are neither clear nor certain, but what is beyond doubt is that AI is having, and will continue to have, a profound impact on human flourishing, broadly construed, and several scholars have already explored the ethical concerns and values necessary to construct AI towards socially beneficial ends. "AI" is a nebulous term, and it is often used haphazardly. Our use of the term refers to the class of technologies that are autonomous, interactive, and adaptive, and that carry out human-like tasks. In particular, we are interested in AI technologies based on machine learning, which allows such technologies to learn on the basis of interaction with, and feedback from, their environment. These learning capabilities pose, as we aim to argue, specific challenges for value sensitive design: AI technologies are more likely than not to acquire features that were neither foreseen nor intended by their designers and, in addition, the way they learn and evolve may be opaque to humans. To address these challenges, we suggest that a set of AI-specific design principles be added to value sensitive design. We propose to build on the significant headway that has recently been made in the numerous "AI for social good" (AI4SG) projects.
The practical, on-the-ground applications of AI-for-social-good factors are already enacted for various AI-enabled technologies, and this provides researchers with solid groundwork for how ethics manifests itself in practice. However, AI for social good is difficult, and its underlying principles are still fuzzy, given the multiplicity of research domains, practices, and design programs; yet some work has already been done to narrow down the essential AI-for-social-good factors. To clarify, what I want to propose here today is that value sensitive design provides a principled approach that diverse design teams can adopt, regardless of domain, to formalize their approach to designing AI for social good along these factors. Although other tools for achieving responsible research and innovation have been proposed, VSD in particular is chosen as the design methodology because of its inherent self-reflexivity, its emphasis on engaging both direct and indirect stakeholders as a fundamental part of the design process, and its philosophical investigation of values.

I have structured my talk into roughly six parts. In the first part I will lay out the value sensitive design framework, albeit only briefly, since I'm sure most of those listening have at least a running familiarity with it. The second section describes why it is challenging to apply value sensitive design to artificial intelligence. In the third part I outline the motivations behind, and descriptions of, the AI-for-social-good factors as a way to address the specific challenges raised by AI for VSD. In section four I outline a design approach inspired by value sensitive design and the AI-for-social-good factors, illustrating an organic symbiosis of the two. In the final section I use the example of a specific contact-tracing app to provide a preliminary illustration of the approach, and then I'll close with a summary and some concluding thoughts. Ideally, if anybody wants to ask questions, given that I've divided this into sections, we can take them at the end of each section; just communicate them to me before we move on to the next one.

So, value sensitive design. VSD is often construed as a principled approach to taking values of ethical importance into account in the design of new technologies. The original approach was developed by Batya Friedman and colleagues at the University of Washington, but the approach is now more widely adopted and has been developed further by others, sometimes under somewhat different headings, like "values at play" or, in Delft, "design for values". At the core of the VSD approach is what Friedman and colleagues call the tripartite methodology, which you can see here, of empirical, conceptual, and technical investigations. These investigations can be carried out consecutively, in parallel, or iteratively. (As a side note: I am not able to see what the next slide is in Microsoft Teams, unlike when screen-sharing the PowerPoint directly, so forgive me if I jump back and forth.) The investigations involve:
(1) empirically investigating the relevant stakeholders, their values, and their value understandings and priorities; (2) conceptual investigations into values and possible value trade-offs, or moral overload; and (3) technical investigations into value issues raised by current technology and the possible implementation of values into new designs. Friedman and Hendry, for example, propose 17 more specific methods that can be used in value sensitive design, ranging from stakeholder analyses and stakeholder tokens to value-oriented mock-ups and multi-lifespan co-design.

One important issue in VSD is how to identify the values that should be taken into account in a concrete VSD process. Friedman and colleagues propose a list of 13 values that are important for the design of information systems, such as human welfare, ownership and property, privacy, and freedom from bias, among others. Others have criticized such an approach, arguing that it is better to elicit values bottom-up from stakeholders. Both approaches probably have their advantages and disadvantages. A value list may well miss values that are important in a specific situation but are not on the list; and although bottom-up elicitation may help discover such values, it is also not watertight, as important values may not be articulated by the stakeholders, or crucial stakeholders may not even have been identified. Moreover, not every value held by a stakeholder is a value of ethical importance that should be included in value sensitive design.

For the case of AI, some considerations are important when it comes to identifying values in the VSD process of AI technologies. First, there is now widespread consensus that AI raises specific ethical issues which are not, or at least to a much lesser degree, raised by more conventional information and communication technologies. This has two implications for value identification. First, the original VSD list of values does not suffice for AI; instead, one may, for example, take the values identified by the High-Level Expert Group on the ethics of AI as a starting point: respect for human autonomy, prevention of harm, fairness, and explicability. Second, some value list would seem desirable for the case of AI, to ensure that the typical ethical concerns that arise from AI are not overlooked. This is not to say that no other values should be included in the design of AI applications; they should, and some form of bottom-up elicitation may be relevant here, but it should be supplemented by principles that ensure that typical AI ethical issues are properly addressed. We propose to have recourse to the AI-for-social-good factors that I'll discuss in the third section.

Now, the challenges posed by artificial intelligence to value sensitive design. AI applications pose specific challenges when it comes to VSD more generally, particularly due to the self-learning capabilities of AI, which complicate the reliable integration of values into the design of technologies that employ artificial intelligence. Let me use a short, imaginary, but illustrative example, and then discuss in more general terms the complications raised by AI for value sensitive design. Suppose the tax department of a certain country wants to develop an algorithm that helps detect potential cases of fraud; more specifically, the application should help civil servants select those citizens whose tax declarations
so, the challenges posed by artificial intelligence to value sensitive design. ai applications pose some specific challenges when it comes to vsd more generally, and this is particularly due to the self-learning capabilities of ai, which complicate the reliable integration of values in the design of technologies that employ artificial intelligence.\nwe can use a short, imaginary but illustrative example, and then discuss in more general terms the complications raised by ai for value sensitive design.\nso suppose a tax department of a certain country wants to develop an algorithm that helps to detect potential cases of fraud. more specifically, the application should help civil servants to select those citizens whose tax declarations need extra or special scrutiny. now suppose they choose to build a self-learning artificial neural net for this task, like you can see here. an artificial neural network consists of a number of input units, hidden units, and one or more output units. let's suppose that the output unit, or variable, is simply a yes/no indicating whether a specific tax declaration needs additional scrutiny. the input variables, or units, can be many, including for example the amount of tax to be paid by a certain citizen, the use of a specific tax exemption, the prior history of the person (for example suspicion of fraud in the past), but also personal details like age, sex, place of living, etc.\nthe units or variables in the artificial neural network are connected, as you can see in the figure; the connections between the units carry weight factors that are learned by the algorithm. this learning can be supervised or not. if supervised learning is applied, the algorithm may learn to make calls on which tax declarations need to be scrutinized that are similar to those of experienced civil servants at the tax office. in the case of unsupervised learning, information on which scrutinized cases led to the detection of actual fraud may be fed back into the algorithm, and it may be programmed to learn to select those cases that have the highest probability of leading to the detection of actual fraud.\nnow, one of the values that is obviously important in the design of such an algorithm is freedom from bias. this value is already included in the original list of value sensitive design values proposed by friedman and kahn back in 2002, and friedman and nissenbaum defined freedom from bias in reference to computer systems that systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others.\nin traditional vsd this may be implemented in the design of the algorithm in a number of ways. first and foremost, it may be translated into design requirements ensuring that none of the variables in the artificial neural network (the nodes in this figure) is a variable that may lead to unwanted bias; for example, ethnicity may be ruled out as a potential variable. however, this will not be enough to ensure the realization of the value of freedom from bias, as bias may also be introduced through proxy variables: for example, postal codes may be used as a proxy variable for ethnicity, and one may also want to rule out the use of such variables to ensure freedom from bias.\nbut even then, a self-learning algorithm may be biased due to the way it learns. it may for example be biased because the training set for the algorithm is not representative or is otherwise skewed. if a form of supervised learning is chosen, it's conceivable that the algorithm learns from the bias that was already in the human judgments used for the supervised learning. but even if these potential sources of bias have been excluded, it can't be guaranteed that the resulting algorithm is not biased, certainly not if a form of non-supervised reinforcement learning is chosen.\none issue is that the resulting artificial neural network may be described as following a certain rule even if the rule was never encoded, nor can it be easily derived from the variables in the artificial neural network. in other words, it is conceivable that the resulting algorithm can be described as following a rule that is somehow biased, without the result being foreseeable or even clearly discernible.
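to make the imaginary example concrete, here is a minimal sketch with invented synthetic data of such a fraud-scoring network (a few input units, one hidden layer, one yes/no output), together with a crude freedom-from-bias probe on the postal-code proxy; the feature names, numbers and overall setup are illustrative assumptions, not any tax department's actual system:

    # A tiny yes/no scrutiny classifier plus a selection-rate check per region,
    # since bias can emerge through proxies even when ethnicity is excluded.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n = 2000
    tax_amount  = rng.normal(50, 15, n)      # amount of tax to be paid
    exemption   = rng.integers(0, 2, n)      # uses a specific tax exemption
    prior_fraud = rng.integers(0, 2, n)      # suspicion of fraud in the past
    postal_code = rng.integers(0, 2, n)      # 0/1: two regions (proxy risk)

    X = np.column_stack([tax_amount, exemption, prior_fraud, postal_code])
    # Supervised labels mimicking past civil-servant judgments; if those
    # judgments leaned on region, the net will learn that bias.
    y = ((prior_fraud == 1) | ((postal_code == 1) & (exemption == 1))).astype(int)

    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
    net.fit(X, y)
    flags = net.predict(X)

    # A crude freedom-from-bias probe: do selection rates differ by region?
    for region in (0, 1):
        rate = flags[postal_code == region].mean()
        print(f"region {region}: {rate:.1%} of declarations flagged")

note that the probe only surfaces one symptom of bias; a rule the network has implicitly learned need not show up in any single variable, which is exactly the worry raised next.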
so bias in the algorithm, in this imaginary case at least, may be emergent and opaque: emergent in the sense that it is an unintended and unforeseen consequence of the way the algorithm has learned, and opaque in the sense that it may not be immediately clear to humans, from inspection of the algorithm or artificial neural network, that it is biased.\nthe point is more general and doesn't just apply to this specific example or to the value of freedom from bias or fairness. due to their self-learning capabilities, ai systems, in particular those powered by machine learning, may develop features that were never intended, nor foreseen, or were not even foreseeable by their designers. this also means that they may have unintended value consequences, and it can even imply that they unintentionally disembody values that were embedded in their original design. moreover, these unintended features may not always be discernible, as they may be due to specific ways the algorithm has developed itself that are hard or even impossible for humans to fully understand.\nthe important point is that addressing emergence and opaqueness requires a set of design principles, or rather design norms, that are not needed for traditional technologies. (i'm not sure why my slides were automatically moving through, sorry.) some of these principles relate to the technical or design requirements; others relate to the organization of the design process and the further life cycle of a product, like continued monitoring; and still others may have to do with which ai techniques are being used or not.\nso we're going to move into the next section, which will look at the proposed ai for social good factors as a way to address the specific challenges that ai poses for value sensitive design.\nthe most thorough work on the harmonization of ai for social good values has recently been undertaken by luciano floridi and colleagues at the oxford internet institute, who focus on factors that are particularly relevant to ai, without exhausting the potential list of relevant factors. the seven factors that are particularly relevant for the design of ai towards the social good are, as you can see in the figure here: falsifiability and incremental deployment; safeguards against the manipulation of predictors; receiver-contextualized intervention; receiver-contextualized explanation and transparent purposes; privacy protection and data subject consent; situational fairness; and human-friendly semanticization.\nthe seven factors, although discussed separately, naturally co-depend and co-vary with one another, and are not to be understood as rank-ordered or in a hierarchy. similarly, the seven factors each relate in some way to at least one of the four ethical principles that the eu high-level expert group on ai laid out: respect for human autonomy, prevention of harm, fairness, and explicability. this mapping onto the more general values of ethical ai is not insignificant: any divergence from these more general values of ethical ai has potentially deleterious consequences. what the seven factors are meant to do, then, is to specify these higher-order values into more specific norms and design requirements.\nfor the sake of time i'll forgo summarizing the ai for social good factors; floridi and colleagues do so quite succinctly in their paper published in science and engineering ethics at the beginning of the year.
so we'll move along to adapting the value sensitive design approach. in order to address the challenges posed for vsd by artificial intelligence, we propose a somewhat adapted value sensitive design approach. the adaptations we propose are threefold:\none, integrating the ai for social good factors in value sensitive design as design norms from which more specific design requirements can be derived;\nsecondly, distinguishing between values to be promoted by design and values to be respected by design, to ensure that the resulting design does not only avoid doing harm but also contributes to doing good;\nand thirdly, extending the value sensitive design process to encompass the whole life cycle of an ai technology, in order to be able to monitor unintended value consequences and redesign the technology if necessary.\nwe first briefly explain these new features, and then i'll sketch the overall process.\nso, integrating ai for social good principles: we propose to map the ai for social good factors onto the norms category used to translate values into technical design requirements and vice versa; for this we use ibo van de poel's value hierarchy. likewise, an entire typology of available practices and methods for turning the principles of ai for social good (beneficence, non-maleficence, autonomy, justice, explicability) into practice, as well as case studies, has been gathered by digital catapult into an applied ai ethics typology. however, these methods remain pretty high-level and are not specifically operationalized for designing ai for social good. for this reason vsd is proposed as an apt starting point, at the very least given its theoretical overlap with the ai for social good factors as norms for translating these values into design requirements.\nso, distinguishing between values to be promoted and values to be respected: in order for the value sensitive design approach to ai to be more than just avoiding harm, and actually to contribute to social good, an explicit orientation towards socially desirable ends is required. such an orientation is still missing in current proposals of ai for social good projects. we propose to address this by an explicit orientation to the sustainable development goals proposed by the united nations, as a best approximation of what we collectively believe to be valuable societal ends.\nin 2015, all the member states of the united nations adopted the then-proposed 2030 agenda for sustainable development, a proposal aimed at the design and implementation of goals towards a safe and sustainable future, founded on an agreed desire for global peace. the general adoption of the resolution is towards making actionable the 17 sustainable development goals that form its foundation. it recognizes that the included sustainable development goals must not be looked at as mutually exclusive of one another, rank-ordered, or as trade-offs; rather, sustainable development goals such as ending poverty and climate change remediation go hand in hand. alongside ending poverty and climate action, you can see other goals such as affordable and clean energy, industry, innovation and infrastructure, and sustainable cities and communities, just to name a few.
and the third adaptation is extending vsd to the entire life cycle. in order to address the emergent and possibly unintended properties that ai systems acquire as they learn, we propose to extend vsd to the full life cycle of ai technologies, in order to keep monitoring potential unintended value consequences and to redesign the technology if necessary. a similar idea is voiced in ai for social good factor number one: ai for social good designers should identify falsifiable requirements and test them in incremental steps from the lab to the outside world. the need for ongoing monitoring arises from the uncertainties that accompany new technologies that are introduced into society.\nthis adapted approach to the vsd process we illustrate here. this illustration serves as a general model that we hope engineers can use to guide them throughout their design program. we suggest that value sensitive design for ai proceeds in four iterative phases, which i'll briefly describe.\nso, context analysis. motivations for design differ across different design projects, of course; for this reason there is no normative starting point that designers must begin with. vsd acknowledges that technology design can begin with a discrete technology itself as a starting point, with a context of use, or with a certain value. in all cases, an analysis of the context is crucial. various contextual variables come into play that impact the way values are understood (which we'll see in the second phase), both in conceptual terms as well as in practice, on account of different social, cultural and political norms. eliciting stakeholders in their social and cultural context is imperative within the vsd approach, to determine whether the explicated values of the project faithfully map onto those of stakeholders, both direct and indirect. to that end, empirical investigations play a key role in determining the potential boons or downfalls of any given context. by engaging with the context-situated nuances of the various values that may come into play in any given system, various pitfalls and constraints can begin to be envisioned, particularly regarding how the initial core values can be understood in terms of technical design requirements, which is the third phase.\nvalue identification: the second phase concerns the identification of a set of values that form the starting point of the design process. we suggest three main sources of such values: values that are to be promoted by design, for example deriving from the sdgs formulated by the un; values that should be respected, in particular those values that have been identified in relation to ai (respect for human autonomy, non-maleficence, fairness, explicability); and thirdly, context-specific values that are identified in the first phase, in particular the values held by stakeholders. it should be noted that phase two does not just involve empirical investigations; it has a distinct normative flavor to it, in the sense that it results in an identification of values that are to be upheld further in design from a normative point of view. in addition, this phase involves conceptual investigations geared towards interpreting and conceptualizing the relevant values in context.\nthird, formulating design requirements: the third phase involves the formulation of design requirements on the basis of the values identified in the previous phase and the contextual analysis of phase one. here, tools like the value hierarchy can be useful to mutually relate values and design requirements, or to translate values into design requirements.
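a minimal sketch of what such a value hierarchy could look like as a data structure, in the spirit of van de poel's values-to-norms-to-design-requirements layering and the promoted/respected distinction described above; the concrete norms and requirements below are invented placeholders for illustration, not requirements from the talk:

    # values -> norms (e.g. AI4SG factors) -> technical design requirements
    from dataclasses import dataclass, field

    @dataclass
    class ValueEntry:
        value: str               # e.g. an SDG-derived or HLEG value
        kind: str                # "promote" or "respect"
        norms: list = field(default_factory=list)
        requirements: list = field(default_factory=list)

    hierarchy = [
        ValueEntry("good health and well-being (SDG 3)", "promote",
                   norms=["falsifiability and incremental deployment"],
                   requirements=["staged rollout with measurable health criteria"]),
        ValueEntry("non-maleficence", "respect",
                   norms=["privacy protection and data subject consent",
                          "situational fairness"],
                   requirements=["collect no data below postal-code level",
                                 "opt-in consent before any data donation"]),
    ]

    # Respected values act as boundary conditions: a candidate design is only
    # minimally acceptable if it meets every one of their requirements,
    # whereas promoted values would be scored and maximized instead.
    def acceptable(design_meets, hierarchy):
        return all(design_meets(r)
                   for e in hierarchy if e.kind == "respect"
                   for r in e.requirements)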
we suggest that the translation of values into design requirements is somewhat different for the different sets of values that were formulated in the second phase. the first set of values, derived for example from the sustainable development goals, are values that are to be promoted; they are typically translated into design requirements that are formulated as criteria that should be achieved as much as possible. the second set of values are those that need to be respected, in particular in relation to ai; here the ai for social good factors are particularly helpful to formulate more specific design requirements. these requirements are more likely to be formulated as constraints or boundary conditions, rather than as criteria that should be achieved as much as possible, and these boundary conditions set the deontological constraints that any design needs to meet to be ethically, or minimally, acceptable. for the third set, the contextual values, the context analysis, and in particular the stakeholder analysis, will most likely play an important role in how these are to be translated into design requirements. vsd provides a principled and widely disseminated approach to aiding designers in putting such processes and abstract values into technical practice.\nand finally, the fourth phase is the building and testing of prototypes that meet those design requirements. the idea here is in line with what is described more generally in the value sensitive design approach as value-oriented mock-ups, prototypes, or field deployments, which aim at the development, analysis and co-design of mock-ups, prototypes and field deployments to scaffold the investigation of the values and value implications of technologies that are yet to be built or widely adopted. our proposal is to extend this phase to the entire life cycle of an ai technology, because even if such technologies initially meet value-based design requirements, they may develop in unexpected ways: undesirable effects can materialize over time, or they may no longer achieve the values for which they were intended, or they may have unforeseen side effects which require additional values to be considered. in such cases there's reason to redesign the technology and do another iteration of the cycle.\nin order to ensure the adoptability and illustrate the efficacy of this approach, we provide a timely example to more clearly show how the process works, by situating it in a figurative context for a specific ai system. so before i move on to, i believe, the final part, on the illustration of the application: are there any questions?\nthanks, steven. let's see, does anyone have any questions? okay, derek, please go ahead.\nsure, thanks, really enjoyed it so far. some examples would be helpful, in particular the lower left quadrant... sorry, lower right, this middle piece here, where you're talking about this specific translation process and the minimally ethical approach. do you have a picture in mind, or an example system, or some way to make this idea concrete?\nwell, the section that i'm about to move to now is an application of this entire approach to a contact tracing app.
as i'll explain, i'll be using the german example of a particular concrete technology, and that will at least be used to help illustrate this. okay, and then if you have the same question after, if it's not clear, then we can explore that.\nthanks. yes, hi steven, thanks very much, [inaudible] here. most of what you explained so far refers to the design phase of a product or service, right? and as you said, step by step you slowly move from the lab to deployment, and later on, actually just now, you also mentioned that during deployment, of course, changes might occur. you said, you know, a certain goal might not be achieved anymore, certain values might no longer be obeyed, so let us redesign. but of course the power of ai is not so much that it's self-learning during the design phase; actually i would not even call that self-learning, i would call that training. the self-learning aspect only exhibits itself when it is in operation, right? a product in the field, being deployed, out of the hands of the manufacturer, the programmer, the legislative body, etc. so how do you look at that problem once a product is out there and does its thing? bringing it back to the lab or to the design phase is probably out of the question, so how does that fit in your storyline? let me put the question like that; i'm interested to hear.\nso that's actually extremely interesting, and that's, i guess, where some economic values start to come into some serious tension. this, i think, makes more sense when we're talking about the difference between hardware and software. of course, when we have hardware deployment, a total recall, although not unprecedented for certain technologies like certain vehicles, has a lot of economic barriers, that's for sure. it's not unheard of either for this type of post hoc redesign to be undertaken when we're talking about the software side, so the ability for designers, let's say a company, to roll out updates or put a freeze on a certain firmware update. i can see this definitely becoming a particular issue. i'm actually unsure; do you have any idea on how that can actually be undertaken, if we put aside the tension with economic values, or what would actually happen in the real world, but just from a conceptual approach?\nundertaken in what sense? i mean, there are a number of well-known examples, also operational, maybe from limited domains. for instance, there is this company that trained a medical diagnosis system in their lab; the system was certified because it had a certain performance according to some standards, blah blah, and it was shipped out. but this was a self-learning system, so the opinion of the medical doctors in the hospital where the software was installed was then factored into further training. so this was indeed self-learning, and the system started to deviate from the original shipped product, not in the sense that the code was different, not at all, but there were just different samples being fed into the system, and therefore, let's say, the implicit decision boundaries of the neural net or decision tree or whatever they were using were shifted because of the operation in the field.
now that already brings questions of who is responsible, yeah? so the whole responsibility and accountability discussion starts to play a role, but i'm interested to see how that story then fits in your vsd design principles.\nmaybe i'll explain that a little bit later, but as of right now, in terms of the ai for social good factors, particularly receiver-contextualized explanation and transparent purposes: there has to be a means by which such systems, and maybe the example you brought up, can show why the system does what it does in light of its field deployment, both differently to the users as well as to the designers. is there a way to track and trace the decision-making architecture of the system itself, in order to understand why it does what it does? it's not necessary, and i don't know if it's necessarily important, in terms of the actual token action of the machine, but rather the typology of action, as promoting social good or being constrained by certain ethical values. so it's kind of: is it more important, the car that's on the road, or how the road itself is built, which determines what kind of car is allowed on it? i'm speaking mostly in terms of the decision of a system based on its inputs and therefore its outputs. i would actually be interested in that particular example itself: what kind of designer intervention is built into that kind of system post-deployment? do the designers, or the industry that's responsible for its design, have continual monitoring of such a system in order to roll out impromptu updates in light of maybe a recalcitrant or unforeseeable decision that the system made? i would actually be interested, if maybe later you can send me a link to that particular example.\nokay.\nsteven, just before you move on, i wanted to jump in and ask you about the design requirements part. i was curious, what are your thoughts on how we avoid technical solutionism in this part? i mean by that: coming into the design process with a preconceived notion that we're already going to solve everything with ai, in this case. so how do we do this truly in a socio-technical manner, in your opinion? and i think you kind of refer to this here, because you say there are both process and product requirements. i'm curious what your thoughts are about the process: what are the human-to-human interactions and organizational practices that need to be around it? how do we open up our imagination to include that in our thinking?\nwell, i guess one of the main interventions is the direct and indirect stakeholder elicitations; the value sensitive design approach has probably more than five different methodologies for stakeholder identification, stakeholder elicitation, and then their value identification, translation, understanding and analysis. and that list is technically not exclusive or exhaustive; it's just the current list, and it's continually being updated, drawing on the social sciences. so there are interventions aimed at breaking open the bubble of a design program in order to get these new types of perspectives. envisioning cards are actually a pretty excellent way of opening up innovation, particularly from a more closed-off domain.
when we move into realms like the military, for example military innovation, that's where things start to get a little bit more dicey, because of its closed nature. but in terms of the solutionism that you bring up, i think that the empirical investigations of particular stakeholder values are not only important but necessary, not only if you want to undertake the value sensitive design approach, because that's one of its fundamental tenets, but because it's there for, i think, exactly that reason: moving outside of what would be a limited domain space of design thinking.\nthank you. i see there are more questions coming, but let me first let you move on with the presentation, and then we'll see in the time left if we can continue.\nyeah, so this is the last section, just as an example, and then i guess we can talk.\nokay, sounds good.\nso, on tuesday, april 7th, 2020 (so this is slightly outdated), the robert koch institute, the german federal research institute responsible for disease control and prevention, prompted german citizens with smartphones and smartwatches to voluntarily share their health data to keep track of the spread of covid-19. the rki, the robert koch institute, is rolling out (i'm not sure, maybe some of you would know, whether it has been fully rolled out or rolled back) the app called corona-datenspende, the corona data donation, which allows users to voluntarily and pseudo-anonymously share their health data to aid scientists in determining symptoms related to covid-19 infections and their distribution across the nation, as well as to gauge the efficacy of the amelioration measures that they put into place.\nthe app allows users to record their age, height, weight and gender, metrics such as physical activity, body temperature, sleep behavior and heart rate, as well as postal code. lothar wieler, head of the rki, said that the collected information will help to better estimate where and how fast covid-19 is spreading in germany, and the rki is explicit that the collected data of individual users are labeled with pseudonyms, so that the personal information of users, such as names and addresses, remains private through the de-identification of user data via artificial identifiers, leaving open, however, the possibility of re-identifying data subjects. likewise, the machine learning systems underlying the app are designed to recognize symptoms that are associated with, among other things, a coronavirus infection; these include, for example, an increased resting heart rate and changes in sleep activity and behavior. the data donated is said to be used only for scientific purposes, and after careful preparation the data flows into a map that visually shows the spread of potentially infected people down to the zip-code level. all of this, i believe, is still in its deployment stages.\nkeep in mind that when we were doing this research it was at the beginning of the outbreak; we could still illustrate the design, using the corona-datenspende app as an example, albeit ex post facto, with the framework that i outlined already. the goal here is to demonstrate how this modified vsd approach can be adapted to a specific technology; it should not be read as providing the actual design requirements for the app, albeit still providing some food for thought for those engaging in its design.
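as an illustration of the kind of signal just mentioned (an increased resting heart rate relative to a user's own baseline), here is a minimal sketch; the window size, threshold and example numbers are invented for illustration, since the app's actual models are not described in the talk:

    # Flag days whose resting heart rate sits well above the user's baseline.
    import statistics

    def elevated_resting_hr(daily_resting_hr, baseline_days=14, z_threshold=2.0):
        baseline = daily_resting_hr[:baseline_days]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1.0   # guard against a flat baseline
        return [i for i, hr in enumerate(daily_resting_hr[baseline_days:],
                                         start=baseline_days)
                if (hr - mu) / sigma > z_threshold]

    # Hypothetical example: a stable baseline followed by two feverish days.
    hr = [60, 61, 59, 62, 60, 61, 60, 59, 61, 60, 62, 60, 61, 60, 72, 74, 61]
    print(elevated_resting_hr(hr))  # -> [14, 15]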
so, context. as mentioned, vsd acknowledges that technology design can begin with a discrete technology itself as a starting point, with the context of use, or with a certain value. in this case the context of use can be understood as the motivating factor behind the technological solution. simply put, the outbreak, spread and eventual declaration of covid-19 as a global pandemic provides the context of use and development. the immediate health crisis demands swift action to be taken in order to stifle further spreading, but the desire to return to less strict measures at some point post-pandemic is also warranted. a prima facie analysis of the values at play here can be said to show tensions between more immediate public health and economic stability and prosperity. the development of an app can specifically be targeted at trying to balance this tension, as tracking and tracing may assist in resuming certain social activities, like traveling or work, in a way that still reduces health risks as much as possible by tracing who is potentially infected.\nvalue identification. firstly, values that are to be promoted by the design, for example those deriving from the sustainable development goals. the design of the corona-datenspende app can be said to be part of a larger network of support for, for example, sustainable development goal 3, good health and well-being, which aims, among other sub-objectives, to focus on providing more efficient funding of health systems, improved sanitation and hygiene, increased access to physicians, and more tips on ways to reduce ambient pollution. albeit an impromptu technology introduced as a response to an immediate context, in situ deployment and use may encourage applications outside the original context, for example outside of germany, and perhaps also for other illnesses.\nsecondly, values that should be respected, in particular those that have been identified in relation to artificial intelligence: respect for human autonomy, prevention of harm, fairness and explicability.\nrespect for human autonomy: in the context of ai systems, autonomy refers to the balance between the power humans have in making decisions and how much of that power is abdicated to those systems. not only should machines be designed in such a way as to promote human autonomy, but they should also be designed to constrain the abdication of too much human decision-making power, particularly where such human decision-making outweighs the value of the efficacy of the machine's decision-making capability. this is aligned with sustainable development goal 16, peace, justice and strong institutions, particularly sub-goal 16.7, ensuring responsive, inclusive, participatory and representative decision-making at all levels.\nprevention of harm, or non-maleficence, is framed as preventing potential risks and harms from manifesting themselves in systems, by understanding their capabilities and limits. often questions of data privacy and security are evoked, as to how individuals control their personal data. the rki in germany is explicit that it does not collect personal user information beyond the level of postal codes, to understand transmission densities. however, privacy concerns still exist at the community level nonetheless, particularly in the practices used to store, use, share, archive and destroy collected data. risks of regional gerrymandering, targeted solicitation or discrimination are not excluded solely on account of delimiting data collection to the postal-code level.
harm may also occur due to the specific ways the app is used, particularly if the app is not only used to map the spread of the virus but also to trace individuals as potential bearers of disease and risk factors; i'll discuss that more under the contextual values.\nfairness, albeit an ambiguous value, is often described and defined in different ways and specified across different points in the life cycle of ai and its relation with human beings. fairness can be understood as being framed as justice, as floridi and colleagues at oxford do, and they sum up various definitions of justice in at least three ways: using ai to correct past wrongs, such as eliminating unfair discrimination; ensuring that the use of ai creates benefits that are shared, or at least shareable; and finally, preventing the creation of new harms, such as the undermining of existing social structures. this is directly in line with sustainable development goal 16, at the very least, peace, justice and strong institutions.\nand finally, explicability: the employed ai systems, in order to support the other values, must be explicable. this means that their inner workings must be intelligible, that is, not opaque, and that there must be at least one agent that is accountable for the way they work, who understands the way they work and is thus responsible for their actions, however you define 'agent': an individual or a group.\nand finally, in value identification: context-specific values that are not covered by one or two, in particular the values held by stakeholders. we refer here to the development of the dutch tracing and tracking app to illustrate how contextual values may be relevant for the design of such an app. in the netherlands, at least at the beginning of the pandemic, 60 scientists and experts wrote an open letter to the dutch government in which they warned against a number of risks and unintended effects of tracing and tracking apps; i wouldn't doubt it if some of them are here listening. among other things, they pointed out that such an app may lead to stigmatization and discrimination and might, depending on how it would be used, endanger fundamental human rights, like the right of association. they also drew attention to the fact that the app might give a false sense of security, which might lead people to no longer strictly follow the requirements for social distancing, which may increase rather than decrease health risks.\nand although it was announced by the german government that corona-datenspende would be voluntary, scholars also pointed out that the app might nevertheless be used to allow access to certain services, like public transport, or might be made a requirement by employers for their employees, which would endanger the voluntariness of its use. such potential uses might in turn also invite individuals to not properly use the app in order to keep maximum freedom of movement and conceal certain contacts, by turning off the phone for example, which again might contribute to health risks.\nso, many of the risks and potential side effects mentioned by scholars for the covid-19 apps map onto the values we already discussed previously, in particular health values under one, and non-maleficence, justice, autonomy and explicability under two.
for example, a false sense of security relates to the value of health; privacy and voluntariness relate to the value of autonomy; while stigmatization and discrimination relate to fairness. nevertheless, there are also values, like the right to association, for example, or security against hacking or misuse, that are less clearly related to one of those values, although they can perhaps be subsumed under non-maleficence. what the issues particularly show is that we should consider values in context, in order to gain a full awareness of what is at stake and of how to translate these concerns into tangible design requirements. in this specific case it's for example particularly important what behavioral effects such apps will have, and it's also crucial to view the values in a broader systems context. in this sense, even if a contextual analysis may not reveal completely new values, it will nevertheless be crucial in understanding how exactly values are at stake for a specific application, how these values are to be understood in the specific case, and how they translate into design requirements.\nthe third step, as we mentioned above, is the actual formulation of design requirements. to illustrate how tools like the value hierarchy can be used to visualize and aid designers in translating abstract values into technical design requirements, we provide a specific instance of the tool here; of course, this should be taken as one of numerous iterations that can occupy any given vector in the hierarchy, but this is just one example. here the value of non-maleficence was chosen as the more abstract, higher-level value, which was then translated through ai for social good factors five and six into technical design requirements. in this paradigm, ai for social good factors are adopted as norms, and rightly so, given that they are framed as imperatives by floridi and colleagues.\nnaturally, any given context of use, value and specific technology will implicate a number of combinations, and there is no exclusive nor exhaustive route for satisfying a value translation; it can move in a bottom-up direction, from design requirements to norms to values, as well as, as you see it here, top-down from values to norms to design requirements. situational fairness, for example, could just as easily, and probably should, be used as the normative tool for operationalizing other values, such as explicability (transparent data set collection, use and storage) as well as justice, which can be understood as promoting non-discriminatory laws and practices through unbiased compliance; an example of this would be the fairness warnings or fair-maml approaches which have recently been proposed.\nat a functional level, the normative structure of the ai for social good norms supports avoiding most ethical harms associated with artificial intelligence systems; however, they per se do not guarantee that all new ai applications will contribute to the social good. the higher-level values that i spoke about, in conjunction with the related operationalization of the sustainable development goals, allow more salient ai systems to be developed that contribute to social good, global beneficence. this multi-tiered approach, coupling ai-specific values, stakeholder values and their application to sustainable development goals, attained via the ai for social good norms, can mitigate the dangers posed by ethical whitewashing, which occurs through the legitimization of ai technologies that do not respect some fundamental ai principles.
this type of visualization can be used across the different sources of values listed above, such as the sustainable development goals and stakeholder values, to determine how closely related values can produce both similar and different technical design requirements. i think a fruitful future research project could do this empirically, by taking any particular ai technology and providing thorough value-to-design-requirement translations to determine the effectiveness of this approach. regardless, our aim here is to help designers to more effectively design with various values in mind, ones that are oftentimes erroneously conflated or altogether sidelined.\nand finally, prototyping, as i briefly mentioned, involves building mock-ups of the technology in question according to the design requirements laid out in the previous step. this means that the technology is moved from the more controlled space of the lab or design space to in situ deployment, which of course implicates direct and indirect stakeholder values. at this point, various design decisions may prove to be recalcitrant, or unforeseen recalcitrant behavior emerges that implicates other values; given the technology's limited deployment at this stage, it can be recalled into the design space so that corrective modifications can be implemented.\nregarding the corona-datenspende app, for example, the crisis situation that underlies the motivation behind the app's inception invites direct deployment rather than prototyping, given the stakes at play and the urgency for amelioration. although tempting, this may ultimately be unwise, given the significant risks that ai systems possess, particularly ones predicated on such large quantities of data subjects. small-scale deployment or in-house testing of the efficacy and fidelity of the app's underlying systems are necessary, although not sufficient, conditions for responsibly developing an ai system of this type, to ensure that it can help to achieve positive ethical and social values like beneficence, justice and explicability, and the associated distal sustainable development goals, while reducing ethical ai risks (non-maleficence).\nwhat should be particularly stressed is that prototyping should not be restricted to testing the proper technical functioning of an app; it should take into account behavioral as well as societal effects, and ultimately the effects of these on values. here the tracking and tracing app is a case in point: while some value issues, like privacy, may be addressed through technical choices like pseudonymization, local storage of data and automatic destruction of data after a certain period of time, some other value concerns require insight into the behavioral effects of such an app. such behavioral effects are very hard, if not impossible, to reliably predict without some form of prototyping, or at least small-scale testing in situ. it would therefore be advisable to go through a number of trials for such an app that scale up from very small-scale testing with mock-ups to testing in test settings of increasing size, not unlike what is done in medical experiments with new drugs. such testing trajectories might also reveal new values that are at stake and need to be taken into account, thus triggering a new iteration of the design cycle.
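as a small sketch of the technical privacy choices just listed (pseudonymization, coarse location only, automatic destruction of data after a retention period), consider the following; the salt handling and the 180-day period are assumptions for illustration, not the corona-datenspende's actual policy:

    import hashlib
    from datetime import datetime, timedelta, timezone

    SALT = b"server-side secret, stored separately"   # hypothetical
    RETENTION = timedelta(days=180)                    # hypothetical period

    def pseudonym(user_id: str) -> str:
        """Replace a direct identifier with an artificial one. Note this is
        pseudonymization, not anonymization: whoever holds the salt can
        re-identify data subjects, which is exactly the residual risk noted
        earlier for the rki app."""
        return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

    def purge_expired(records, now=None):
        """Automatically destroy records older than the retention period."""
        now = now or datetime.now(timezone.utc)
        return [r for r in records if now - r["stored_at"] <= RETENTION]

    record = {"subject": pseudonym("jane.doe@example.org"),   # invented user
              "postal_code": "10115",        # coarsest location level kept
              "resting_hr": 61,
              "stored_at": datetime.now(timezone.utc)}
    print(purge_expired([record])[0]["subject"])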
so, just to conclude, we can sum up. what i aimed to discuss here is how ai systems can pose certain challenges for the value sensitive design approach to technology design. these challenges are primarily the consequence of the use of machine learning approaches to artificial intelligence. machine learning poses two challenges for vsd: firstly, it may be opaque, at least to humans, how an ai system has learned certain things, which requires attention to values such as transparency, explicability and accountability; secondly, machine learning may lead to ai systems adapting themselves in such a way that they disembody the values that had been embodied in them by vsd designers.\nin order to deal with these challenges, we proposed an extension of the value sensitive design approach to the whole life cycle of ai systems design. more specifically, we tried to illustrate how the ai for social good factors proposed by floridi and colleagues can be integrated as norms in vsd when considering ai design. in order to integrate the ai for social good factors into a more systematic vsd approach, we proposed a design process that consists of four iterative basic steps: contextual analysis, value identification, design, and prototyping.\nat the core of this model is a two-tiered approach to values in ai, consisting of a real commitment to contributing to the social good (beneficence) through ai, and of the formulation of, and then adherence to, a number of concrete ai for social good factors. without the first tier, ai for social good principles may help to avoid most ethical harms, but there's no guarantee at all that new ai applications will actively contribute to social good. without the second tier, there's a danger that the contribution to societal challenges and the sdgs is used for the legitimization of ai technologies that do not respect some fundamental ethical principles; for example, there's a danger of ethical whitewashing, which is already visible on the web pages of some large companies.\nin addition to these two tiers of values, we aimed to argue that it's important to pay attention to contextual values, or at least the contextual interpretation of the values from the two mentioned tiers; this is necessary to understand why certain values are at stake for a specific application and how to translate those relevant values into design requirements.\nbefore i leave you, i just wanted to draw everyone's attention to a call for papers that i am co-editing in the journal of technoethics, on engineering ethics: bridging the theory-practice gap. if any of you are interested in contributing, you could just shoot me an email later; the deadline is december 1st. so i guess we can discuss now, if anybody has questions.\nsteven, thank you very, very much. officially we're out of time right now, but if anybody can stick around for a couple of minutes and ask a question, i will leave the floor open for a couple of questions. so, herman, if you're still with us, you can ask your question, if you'd like to still do so.\nyeah, thanks, jenny. thanks to steven; herman veluwenkamp here. that was super interesting. one of the things that i really like is how you combine the ai for social good factors and the typical vsd values, and how you distinguish between values to be respected and values to be promoted. so the main question i have is: is there a difference in how they translate into design requirements?
i think you mentioned one difference: you say that values to be respected should be seen as boundary conditions, deontological constraints, and i wonder how apt that is. for example, if you look at privacy, this is, i think, one of the values that is to be respected in your model, but isn't that also something that we typically want more of? in your talk you say that pseudonyms are used; of course, it seems it would be better if we would completely anonymize all data, but i think the makers don't do that because they trade off this value against some other values, such as health benefits or something. so that made me wonder if this distinction that you make is really apt. thanks.\nyeah, i'm not sure if i exactly understood the question, particularly with the example of privacy. there is perhaps a way of confusing the levels of the tiers of value sources with the ai for social good factors, which, i think, even the original authors perhaps confused: they seem to be offering ethical principles, but they're actually framed as normative constraints, 'designers should do this', so that's where we see that normative operation of, i guess you could say, a practice; it seems very practice-oriented. the ai for social good factors, which we just call norms, we choose because they're particular to ai, and they're mostly based on, or overlap with, some of these higher-level, more abstract values. privacy, perhaps, even through the ai for social good factors, can be understood as a boundary constraint, the way floridi and colleagues do it, because things like transparency and privacy protection and data subject consent are, how can i say this, defined as a normative boundary. so we didn't include privacy, for example, as a higher-level abstract value, but rather as a norm through which each of the two levels, or two tiers, of values can be translated. so privacy protection and data subject consent are continually present throughout the entire design process of these types of ai systems, as a way of translating higher-level abstract values into more technical design requirements. as i mentioned, the ai for social good principles are not to be taken as either mutually exclusive or rank-ordered, but as co-varying with one another.\nthat's why i did kind of leave it open at the end, perhaps because i don't even know how to do it, in terms of the numerous value translations that could be undertaken for any number of higher-level values through these seven different ai for social good factors; the numerous combinations of the two can become slightly unwieldy. that's why i mentioned that the only way i can feasibly see the adoptability of this kind of approach is through some sort of empirical investigation of whether or not it's effective, because it does leave the floodgates kind of open in terms of the practical, on-the-ground work an engineer or design team has to go through in order to come up with a list of actionable design requirements that they can actually begin building a system with. i'm not sure i even got close to answering your question; i just spoke a lot of random words.\nyeah, so maybe, if i have time... jenny, if there are no other questions, do i have time?\nyeah, i don't see
any other raised hands, so go ahead.\nokay, good. so maybe you can just help me understand. one of the things you said is that we shouldn't see privacy as a value but as a norm; that's fine. so can you give another example of a value that, in your sense, is a boundary condition? i think what you mean with that is: if there is a certain level that's enough, then our application, with respect to that value, is minimally ethical. because when i try to think of examples, these values are always such that i think, yeah, well, of course some level is fine, but more is always better, so there's still always just a trade-off. i'm not sure: should we frame them as that type of moral overload, as a trade-off, or just as an engineering problem?\nof not finding the proper design solution towards accommodating as much as possible? of course there's the question of the ambiguity of limits; if the argument is that more is always better, at what point do we just say, let's release it, because maybe they can always add more? as i mentioned, the ai for social good factors as norms are a minimum: a necessary but not sufficient condition. they are not, in our framework, sufficient on their own in order to actually have ai for social good, global beneficence; it also needs to actively contribute to the social good, not just avoid most ethical harms. so there's also the engineering question, which is actually usually what most engineers say when i bring up things like value tensions or use the word trade-offs: that's just an engineering problem, that's not a philosophical problem, which may or may not be the case; i'm actually not sure, i'm not an engineer. the ai for social good value principle, or factor, that you mentioned, privacy protection and data subject consent, is strictly defined by floridi (i don't have the definition in front of me, else i would read it) as a minimum necessary but not sufficient condition for having minimally ethical ai, however you want to frame that phrase, but as a necessary process for designing ai systems based on these types of machine learning or deep neural nets.\ngood, thanks, steven.\nthank you very much. so, guys, i'd like to wrap up the meeting. i'm sure that we could engage in more conversations, so anybody who wants to follow up with steven will be able to do so offline; you can find steven's information in the agora meeting event in your calendars. steven, i'd like to thank you very much again for making the time and for sharing your work and your insights with us today. and thanks very much everyone for joining; please take care and stay safe. i will not close the meeting, so we can wander out on our own.\nyou're welcome. steven, if you have time, you can go ahead and take off whenever you like, but i figured, in the model of these meetings, we could keep the same open ending; i leave that up to you guys.\ngreat", "date_published": "2020-10-21T13:23:44Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "79a1c1348420ae306d69fed7118bd30d", "title": "Human-Robot Coproduction: non verbal sharing of mental models with AR/VR (Doris Aschenbrenner)",
"url": "https://www.youtube.com/watch?v=cdyeyROsaPI", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "reactions and i look forward to the\ndiscussion\num i hope everything works on the\ntechnology side\nyou should see now uh my presentation\nand\nuh yeah maybe i just like to share a\ncouple of like insights of you from my\nbackground\noriginally i'm computer scientist and\nthen i did my phd\nuh within the area of human uh like\nlet's say\ntailor maintenance a remote maintenance\nfor industrial robots\ntogether with companies and a tno like\nextra university institution and i came\nto tudor for my post talk and\nkind of stayed here and\nmy interest and that's maybe i dive a\nbit in the domain context because that's\nreally relevant for my research\nand i'm not sure how many of you are\nalready familiar with that so i try to\nbridge the area of manufacturing with\nhuman-centered\ndesigner molars and may some of you i\nalready know i read\nthe names you are familiar with one but\nmaybe not with the others so\nplease i would try to kind of give a bit\nan introduction to\nmore or less the problem so um\nso you should see my video now and you\nknow that their\ndevelopment of human industry can be\ndeveloped into four industrial\nrevolutions so mechanical\num revolutions um steam machine and\nthese kind of things\nelectrical engineering and reclical\nintroduction\nthen computers and automation and then\nfinally this is what we call now the\nfourth industrial revolution where you\nhave automation\ncoming to a new level with artificial\nintelligence and new types of robots and\nthese kind of things\nand the interesting thing is um\nyeah within that area also the\nwork of people change a lot so we're not\nfacing only a technological change which\nis called in this fourth industrial\nrevolution for industry 4.0 but it is\nalso happening in other parts of the\nworld within different names for example\nmade in china 2025\num but we're also encompassing um well a\nsocial change so\nthere is an aging society and we are\nalso\nhaving uh some migration streams um\nand here we have all these questions\nabout how\nis this future work within manufacturing\nlooking like\nthis is getting much more um\nthis question is getting much more\ninterest at the moment so for example\nthe world economics forum forum or there\nwas also a very cool\nuh conference from stanford\nuh on on ai and the future of work i'm\nnot sure whether you\nwere aware of that otherwise i should\njust share um\nthe link maybe and um i see\nthis kind of research within this\ncontext of course not solving all of\nthis question\nand the interesting thing for me is that\nthere are basically\nfour different types of future scenarios\nso which you can only read in literature\nand there's a very nice\nunfortunately german paper who kind of\nsummarizes a bit of that research about\nfuture work in the production industry\nand they come basically up with like\nfour different streams the first one is\nthat the robot will take over\nthat's what you mainly also hear in mass\nmedia and i think everybody if you have\nbeen tortured with this kind of all the\nrobots will take over the world stuff\nand there is also a contra scenario\nwhich is more or less on the\nokay within that new technology we also\ncan use this\nnew technology in order to come to a\nmore human\ncentered new type of organization and\nthese are the homogenic\num either the one when or the other\nscenarios\nand there are also other scenarios that\nare discussed in 
one is definitely that there will be winners, let's say in the higher-qualified region, for example if you regard our jobs in the end; i love this quote which says, well, there will be two types of people in the world: those who tell computers what to do, and those who are told by computers what to do. i think this polarization scenario goes in this direction. and then there is also another scenario which is interesting to have in mind, that stuff is dissolving: you don't have any boundaries anymore with respect to space and also hierarchy, because of the strong modularization. these are the two, more or less, diversification scenarios.\nmy faculty has more or less the aim to design for our future, and if we want to go in the envisioned future, which we also say is the preferable future, then we choose to design for that scenario of these four which, also from our side, is the most preferable one. this is the second one, where the humans are helped by technology, which i call, among others, the operator 4.0 scenario.\nwhat does this operator 4.0 mean? well, you have this fourth industrial revolution: stuff is getting much more complex, less transparent, but we still have high demands of safety and, of course, efficiency, and the humans and the robotic colleagues need better ways to communicate with each other in order to make that happen. so apart from the factory 4.0 we also need the operator 4.0, which we envision here a bit in the superhuman style.\nhow does it look like exactly? the basic paradigm is that we have this cyber-physical production system, which is more or less the manufacturing environment, and we have the human in the center, in interaction with that system, and we have, more or less, technology helping this human to be better in his or her work: enhancing the physical capabilities, which could be, for example, using an exoskeleton; and then we have the enhancement possibilities for the sensing capabilities. that's where i talk a lot in this talk about using augmented and virtual reality in order to improve, on one hand, sensing capabilities, but also, on the other hand, cognitive capabilities; you can also envision many more different functions, but with ar and vr we are in these two realms.\none thing that is very important to understand is that we have technical challenges, which are mainly discussed: complexity; dynamics, so that stuff is non-linear; and a non-transparent situation of the manufacturing environment. these challenges in the manufacturing industry or the robotics domain are discussed a lot, but people tend to only talk about the technology. if we regard the theory behind a socio-technical work system, then it looks like this: you have some kind of work cell, you have some input coming in, you have some output going out, and you have, of course, the task and the physical system involved with the task; this is what we call the technical subsystem. a lot of what you read in literature at the moment is only focusing on this, using ai for predictive maintenance or something like that; it's kind of centered only on that part of the system. but the system is larger: we have the people, with their
We also have the people, with their cognitive and social abilities, and we have the structure of the entire factory or manufacturing environment, which of course interacts a lot with the technical system. We need to focus on the interdependencies as well in order to really make the entire thing work. The designers among you will say: well, that's obvious, that's what we always do. But it is not obvious at all, especially in the manufacturing domain, where a lot of work has focused only on technical development. There are many opportunities if you apply human-centered or human-computer interaction methods within these industrial environments: less training, potentially higher safety, quicker problem solving, and an increase in well-being.

This brings me to our guiding questions, which are borrowed a bit from the Dutch research agenda for artificial intelligence. We try to design an augmentation layer so that humans and robots can productively interact and understand each other; we want the human to trust the autonomous system; and we want to enable task sharing and mutual understanding between both partners, in order to arrive at a nice 'handshake' situation where it is neither only the human nor only the robot doing the work.

So what do we understand by this human-robot co-production, which is the framing we use? A manufacturing environment normally looks like this: a lot of big, sometimes dirty machinery, and robots that are fenced off. At the bottom you can see a robot with a safety cage around it, so humans can basically only interact with these big robots from a distance. This is currently changing a bit, because there are these collaborative robots, which I think you already know. They are designed so that the human can work in close interaction with them: we don't require fences anymore, we can have direct interaction and quicker programming. The market is growing a lot in this area, because these small robots can take over a couple of small manufacturing tasks and they are much cheaper. They are quite promising, but there are still some issues to resolve.

As an overview of why this is interesting and why the market is growing at the moment: consider large enterprises. Automotive is not a perfect example, but let's use it. You have a high production volume, with components that individually come in low volume: I need some specific car, and I need a specific kind of seat for it. The car itself comes in high volume, but the different component variants come in low volume. This enabled the large enterprises to automate part of their production quite early, already within the third industrial revolution, let's put it like that: they can do highly automated, high-volume, low-variation production quite well, and they have optimized their factories for that.
But if you look at small and medium-sized enterprises, or at others doing batch-size-one or small-batch production, they are less automated: lower volume and higher variation. This means we need better human-robot collaboration in this low-volume area.

What does that look like? You have the human on one side, the robot on the other, and some kind of interface in between. I still stick very much to a rather old theory from Sheridan, which says the human has different tasks: plan the task, teach the robot, monitor that the robot is doing the right thing, intervene when necessary, teach again, and then learn. Those are still, more or less, the basic activities; maybe they happen a bit quicker now than before.

This kind of human supervisory control involves a lot of different mental models. I don't want to go too deep here, but you have a mental concept of how things work, and the interesting thing is that in such a control chain many different mental models come into play. Looking at the different components: the human has a mental model of how the robot will operate; the display shows a specific representation of the robot, which is always only a picture and also embodies the mental model of the programmers who built the display or interaction software; the computer has an internal model that might differ from what the human actually sees and can understand; and everything designed as a control panel also has an embedded mental model of how it is meant to work.

The interesting thing is that within the manufacturing industry this is a bit of a 'dancing bear' problem. The dancing bear problem is well known in human-centered interaction theory: if you look at a bear that is dancing (of course it is animal cruelty, and we know that), you probably like it and say, cool, the bear is dancing, because you never saw a bear dance before. But if you judged it by the standards you would apply to human dancers, the bear wouldn't fit the classification at all; you are still happy, because it is the only dancing bear you know. More or less the same happens with human-machine interaction, especially in specialized industry: you are so happy that something solves your problem that it may solve it in an overly complicated way and you are still happy. Hashtag SAP, or something like that.

This is the area we are covering, and there are a couple of worker needs within it: humans want to stay healthy; the work should be sufficiently demanding but not too demanding; the human wants to understand what is going on and how to control the system; and on an even higher level the worker wants to trust the system, not fear being overtaken by it, and feel valued within the entire context. So there is a lot to cover: the basic layer is physical ergonomics, then we have cognitive ergonomics, and then emotional aspects, or what we call user experience, which is a bit more than that.
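To make the Sheridan-style supervisory-control loop described above a bit more concrete, here is a minimal, self-contained sketch. It is an illustration only: the phase names follow the plan/teach/monitor/intervene/learn cycle from the talk, and the 10% fault probability is an arbitrary stand-in for "the robot needs help".

```python
from enum import Enum, auto
import random

class Phase(Enum):
    PLAN = auto()       # human plans the task
    TEACH = auto()      # human teaches / programs the robot
    MONITOR = auto()    # human monitors execution
    INTERVENE = auto()  # human steps in when something goes wrong
    LEARN = auto()      # human (and system) learn for the next cycle

def supervisory_control_cycle(steps: int = 5) -> list[Phase]:
    """Simulate one pass through human supervisory control.

    The random fault only serves to show the control flow: on a fault
    the human intervenes and re-teaches before execution continues.
    """
    history = [Phase.PLAN, Phase.TEACH]
    for _ in range(steps):          # robot executes, human watches
        history.append(Phase.MONITOR)
        if random.random() < 0.1:   # something goes wrong
            history.extend([Phase.INTERVENE, Phase.TEACH])
    history.append(Phase.LEARN)     # feed the experience back
    return history

print(supervisory_control_cycle())
```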
There should of course be design methods to address all of this; there are design methods from other areas, but they are not that well established within the manufacturing field.

This brings me to the overall research topic that my group and I are trying to tackle: how to design a hybrid human-robot system that is able to optimize both worker well-being and overall system performance, to really arrive at that kind of handshake, working-together situation.

Let me quickly go through some related research; I think a couple of you will know some of this. First of all, I very much like the theory of trading and sharing control. A human worker can carry a specific load, and a computer system can be used to extend the capabilities of the human, so that the total load handled is larger than the human load alone. You can also use the system to relieve the human: the load stays the same, but the human is partially relieved. You can use it as a backup to the human. And there are some fields, not that many to be honest, where the automatic system replaces the human, but with a reduced load, because a human is still much more capable than an autonomous system.

I also very much like the levels of automation. This is also quite old, but it has been refined specifically for the field of manufacturing. More or less, it spans the difference between the totally manual case and the totally automatic case, and it defines discrete levels in between. We are especially interested in the supervise-and-intervene cases, and not so much in the closed-loop case.
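As an illustration of such a discrete levels-of-automation scale, here is a minimal sketch. The five-level condensation below is my own simplification for illustration (classic taxonomies such as Sheridan and Verplank's use ten levels); only the idea of discrete steps between fully manual and fully automatic, with a supervise-and-intervene band in the middle, is taken from the talk.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Simplified, illustrative levels between manual and automatic."""
    MANUAL = 1                 # human does everything
    DECISION_SUPPORT = 2       # system suggests, human decides and acts
    SUPERVISED_AUTOMATION = 3  # system acts, human supervises and intervenes
    AUTOMATIC_WITH_VETO = 4    # system acts unless the human vetoes in time
    FULLY_AUTOMATIC = 5        # closed loop, no human in the loop

def human_can_intervene(level: AutomationLevel) -> bool:
    # The band the talk is most interested in: supervision plus
    # intervention, i.e. everything short of the fully closed loop.
    return level < AutomationLevel.FULLY_AUTOMATIC

assert human_can_intervene(AutomationLevel.SUPERVISED_AUTOMATION)
assert not human_can_intervene(AutomationLevel.FULLY_AUTOMATIC)
```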
Of course there is also a lot of classification of how humans and robots can interact, the so-called levels of interaction. On one side you have the constellation of the group, that is, between how many humans and how many robots (multiple humans, multiple robots), and on the other side the quality of the interaction: are both partners active in the task, is one only supportive, is one inactive but somehow present, or is there some kind of intuitive handover? The last one is of course what we are all aiming for, but it is really hard to design. Then there are also the levels of collaboration, which are more on the physical side. Robot and human can be totally separated, which is the normal case for nearly all of the industrial cases we are currently investigating, or they can be co-existing, synchronized, cooperating, or even collaborating, moving more and more towards shared work. That is still quite rare, because it involves a lot of effort in real industrial cases. There was a very nice PhD thesis associated with the work we are now doing (unfortunately the author no longer works with us) on operator-centered production design; you can look it up if you are interested.

The other thing we are very interested in, in order to make this kind of interdependent teamwork possible, is legibility: predictability of what a robot is aiming to do. Legibility has been shown to increase safety and comfort, to lessen surprise to a certain extent, to increase efficiency, and also to increase the perceived value of the human worker.

How do we do that, and what do we improve on the human side? On the robot side, legibility is more or less built in; on the human side we want to support situation awareness: getting the human to a point where he or she understands what is going on. Situation awareness is basically a measure of understanding the situation, and it is defined at different levels. I know there is a lot of discussion about whether it is a valid concept, but I like it very much because it is really applicable to my domain. Level one is perception: I want to know what is there, to identify all the critical elements. Level two is comprehension: I want to understand the meaning and the significance of the situation. And level three: in order to plan and to interact with each other, I need to be able to project how the future state will develop. This also connects to the concept of sense making, which I won't go into here, and later to sharing mental models between the human and the robot: if I know what the robot is aiming at, that also increases my situation awareness.

Our specific focus is to design this kind of augmentation layer for human-robot co-production within the manufacturing domain. Here I come back to the socio-technical system framing I introduced earlier: we still have the human-and-robot cell with input coming in and output going out, and the combination of the social system and the technical system. Our augmentation layer enhances the physical, sensing, and cognitive capabilities, mainly the last two, to get from the normal human worker to our Worker 4.0, and we have two factors that we want to optimize for: worker well-being and work performance. The specific angle I want to highlight here (other people in my group work more on the cognitive or the physical side) is using augmented and virtual reality as a tool to improve this overall system.

Coming back to the research questions, and breaking them down a bit so you can follow what we are actually doing: we want to design a human augmentation layer so that humans and robots can productively interact and understand each other's behavior in context. Breaking that down with respect to the literature and to what we can actually measure, we want to help with situation awareness, with sense making, with decision making, and with the sharing of mental models.
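As a small illustration of the three situation-awareness levels just described, here is a sketch of how SA probe questions are often organized by level. The example probes are invented for a human-robot work cell and are not the questionnaire items from the study.

```python
from enum import IntEnum

class SALevel(IntEnum):
    PERCEPTION = 1     # what is there? identify the critical elements
    COMPREHENSION = 2  # what does the situation mean, how significant is it?
    PROJECTION = 3     # how will the situation develop next?

# Hypothetical probe questions, one per level, for a human-robot work cell.
SA_PROBES = {
    SALevel.PERCEPTION: "Where is the robot right now?",
    SALevel.COMPREHENSION: "Is the robot currently blocking your work area?",
    SALevel.PROJECTION: "What will the robot do next?",
}

for level, probe in SA_PROBES.items():
    print(f"Level {level.value} ({level.name}): {probe}")
```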
Let's dive into an example. If you want to improve situation awareness, we are of course interested in levels one and two, but we are especially interested in having levels one and two plus level three, the projection: I want to know what the thing is going to do. A very basic but quite comprehensible demonstration of what is feasible is increasing safety by projecting the future trajectory of a driving robot. Here is the example study: we have a person walking a specific path, and we have a robot that we know will follow a specific trajectory. There are two conditions: in one there is no projection, and in the other the trajectory is projected onto the floor. It is a video study; participants watched the video material, and at certain points the participant was interrupted and asked what he or she thinks the robot will do next. We carried out these experiments in the SAM XL environment, and the results were quite predictable: we had different interaction scenarios, and for specific types of scenarios it is really helpful to have this kind of projection. For other scenarios there was no effect; in scenario four, for example, where the human is actually busy with a task when the robot comes in, we found no significant difference. That was really useful for seeing what we can do in the real world with respect to situation awareness.

The other example is not with driving robots but with collaborative robot arms. Here you can see that we made up the task somewhat, in order to have it more controllable: a person is packing items for packaging, and part of it should be done by the person, part by the robot; it is more or less the same setup as the first study. What we do here is recreate the same situation in virtual reality. In VR you can simply switch on a perceived future trajectory of the robot; here you can see a small moving trajectory, so there is a way of projecting the future, and of course you can design many different visualizations for that. This helps you understand what the robot will be doing next. The nice thing is that we can do this not only in virtual reality but also in augmented reality. Here you see someone putting on the Microsoft HoloLens. We developed a framework (everything you saw in virtual reality is developed in Unity), and with a feedback framework to the Robot Operating System you can have the same visualizations also in augmented reality; you can see the real robot moving and, on the left side, the virtual robot moving. The open question, and that work is unfortunately still ongoing, sorry for that, is which kind of visualization helps, and in which scenario: does it help in the real-life situation, does it help in the purely virtual environment, and where do you get the biggest benefits for situation awareness with respect to understanding what the robot is going to do next?
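To give a feeling for how such a projected-trajectory visualization can be wired up, here is a minimal sketch using ROS 1 visualization markers, which tools like RViz or a Unity bridge can render. This is an assumed setup for illustration only: the topic name and the idea of publishing the planned path as a line strip are mine, not the actual SAM XL framework.

```python
import rospy
from geometry_msgs.msg import Point
from visualization_msgs.msg import Marker

def publish_planned_path(waypoints):
    """Publish the robot's planned 2D path as a green line strip.

    `waypoints` is a list of (x, y) tuples in the map frame; the topic
    name '/projected_trajectory' is a placeholder.
    """
    pub = rospy.Publisher("/projected_trajectory", Marker,
                          queue_size=1, latch=True)
    marker = Marker()
    marker.header.frame_id = "map"
    marker.header.stamp = rospy.Time.now()
    marker.type = Marker.LINE_STRIP
    marker.action = Marker.ADD
    marker.scale.x = 0.05   # line width in meters
    marker.color.g = 1.0    # green
    marker.color.a = 1.0    # fully opaque
    marker.points = [Point(x=x, y=y, z=0.0) for x, y in waypoints]
    pub.publish(marker)

if __name__ == "__main__":
    rospy.init_node("trajectory_projection_demo")
    publish_planned_path([(0.0, 0.0), (1.0, 0.0), (1.5, 0.5), (2.0, 1.5)])
    rospy.spin()
```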
And where do we apply this? This is more or less the last case I want to show you: an application where we work together with a bicycle manufacturer. The idea is to share tasks within bicycle manufacturing between human and robot, because some tasks are really not easy to automate. How are we going to do this kind of task sharing, and if we do task sharing, how are we going to communicate it between the human and the robot? There is much more to it, but we have done quite a few things so far: we developed a digital twin of the SAM XL environment in Unity, so that it can be used for experimentation; we designed control-room features for SAM XL in Unity, which we hope to implement there in real life at some point; we did a couple of studies on automation capabilities for the bicycle case; and we wrote a couple of papers on using augmented and virtual reality for assistance within manufacturing and for planning manufacturing tasks. If you want to read more, I am happy to share more examples later in the discussion, but I just wanted to conclude here.

All of this is only possible because we have such a great team. None of this work is the work of one person alone; it is always the combination of people, and I have a wonderful team; I am so happy that we work together. I have a quick video (I need to keep an eye on the time) that I could share with you, showing more or less all of the projects we have going on right now, but I am not sure whether we have the time or whether we should start the discussion first. Thank you very much for your attention, and if you want to watch the video, we can totally do that. Thanks.

[Host] Thanks, Doris. Is there any immediate question that people have for Doris? Otherwise, I'm actually quite happy to see more examples, because I think they're great; that's actually quite exciting. So yes, Doris, can you maybe quickly show it?

[Doris] Okay, let me just try, because I'll try to have it running with sound, which is hopefully working. No, that doesn't work if I try to upload it. Okay, let's see if it works; if you don't have sound, please let me know.

[Host] No, we don't hear anything, so you'll have to narrate.

[Doris] I'm sorry; that's something I could do, but it has such a nice soundtrack. Okay, let's see if it works now. Do you have sound now?

[Video] Hey, hello! Welcome to this virtual tour of our research projects at SAM XL. The research we do at SAM XL is focused on the future of manufacturing and sustainable human-robot interaction. This is our team, and we all welcome you. We're excited to tell you about our research and show the great facilities at SAM XL.

Hi, I think most of you already know me. I work at the Applied Labs, but I also work here at SAM XL, where I helped develop all of the research facilities for our projects, and I also help bridge the research we do here at SAM XL with the research we do at the Applied Labs. So let's have a look inside. This is the main hall of SAM XL, where the robots can be found: 2,000 square meters, robots, and very cool projects. In the cobot area we have the RoboFit project, in which we're helping a bike manufacturer to produce bikes with cobots. Let's have a look at some more projects we do here in the cobot area.
Hello there, I'm Jonas, an XR developer. My primary work concerns the topic of digital twinning; this includes not only the visualization of cyber-physical systems like robots or AGVs, but also the development chain behind it.

Hi, this is Elvis. Over the previous year I've assembled and have been developing the ROS composite pick-and-place workbench, and together with others I've been working on tooling so that we can visualize soft robotic actuators in real time in AR.

Apart from using cobots, we also do projects with mobile robots. These robots can drive autonomously around factories; let's have a look at that.

Hi, I'm Denis. This year I'm happy to be a member of two projects: the first one is collaborating and coupled AGV swarms, where we use mobile robots to improve intralogistics, and the second one is RoboFit, where we use robot arms to improve a bicycle assembly line.

Hello, my name is Martijn, and I've been researching the possibilities of applying spatial augmented reality in the smart-factory context. An example of this is using projected arrows to improve the communication and safety of autonomous vehicles.

Hi, my name is Better Caseman. In the Koch project, my colleagues and I have been working on a fleet management system called Rooster. Rooster's goal is to simulate, schedule, and plan tasks for a robotic fleet in a warehouse situation.

Hi, my name is Nil Naga. I'm a controls engineer on the team, and for the past year I've been working on setting up navigation software for multi-robot systems, so that robots like this one can be used to carry items around factory shop floors and warehouses. On another front, quite recently I've been involved in extending the Bobrov project, which is a robotic arm programmed to paint.

So let's have a look at the rest of SAM XL. Here at SAM XL there's also a really, really big robot. It's called a gantry robot, and it's situated in this corner; let's have a look. This robot is huge: it measures 12 meters in length, 10 meters in width, and 5 meters in height. Different types of tools can be attached to this giant robot. Aerospace engineers will use it for drilling and riveting holes in giant airplane wings, but imagine our faculty attaching a giant 3D-print head to it: then we would be able to 3D print giant structures, prototypes of car bodies, or even large outdoor furniture pieces.

All of these robots here at SAM XL produce an amount of data that is hard to comprehend for a human being. My name is Samuel Kernan. I developed an assistant that can automatically generate a report based on a conversation with a technician; this saves time, reduces the perceived workload, and resulted in reports of higher quality. For my PhD, we'll be developing an assistant that can provide cognitive support to factory workers while they use analytical tools.

My name is Sudei, and I'm going to join the department and the KOALA project as a postdoctoral researcher soon, in December. I've been working mainly on recommender systems, since my master's thesis in Stockholm and then through my PhD and my postdoc. See you all soon at TU Delft.

Hello, my name is Santiago. I'm a product design engineer, and I am participating in the DIAMOND project as a postdoc, where we are developing a digital intelligent assistant for supporting the maintenance activities of the maintenance staff at manufacturing companies.

With all the data the robots create, we have also developed the virtual world of SAM XL itself. Let's have a look inside this virtual SAM XL world.
My name is Danielle Ponto, and my work is mainly focused on extended reality, or XR. I work on the Mirrorless project, where we create a digital twin in which robots can be viewed and controlled remotely; for this project we create tutorials where we teach how to use this digital twin framework.

Hi, my name is Irina, and I am responsible for the Virtual Playground community. It's a community that connects researchers, students, and startups interested in VR and AR technology. We have regular online meetups with experts from all over the world and will be happy to see new members.

Hello, my name is Jasper, and my calling is teaching, which is why I'm here: to make all of these exciting new technologies accessible to students.

[Doris] Okay, I think that's it; only the teaching program is missing now. I hope you liked it. I have more videos, because augmented and virtual reality always comes with a lot of videos, but I hope this gives a bit of a feeling for what we're doing.

[Host] This is great, thanks Doris; that's really exciting stuff.

[Doris] It was originally produced for our faculty, because we didn't have the possibility to show things in real life; that's the reason why it's a bit on the promotional side. I just wanted to share it with you because it makes everything much more tangible.

[Host] Yes, great. So with that, I was wondering if anyone has questions for Doris, more about the projects that you showed. Oh, I see in the chat that people would love a link to this video, Doris; apparently people are very keen.

[Doris] Okay. We don't have it online yet; we showed it at our Dies Natalis event, so it's something I can definitely show you, but I think we should make a proper version for YouTube, because this one was only for internal purposes. We will do that; I hope it will come soon, and then I will share the link.

[Host] Cool, very good. I actually have a question myself, just to kick it off, if that's okay; actually two questions. One thing I think is really cool is that you're working with an actual bicycle company on this co-production. I was wondering: are you also applying your extended reality for those people, the actual line workers? And do you know how they like it: whether they like working with the robot, and whether adding this layer makes their work more enjoyable?

[Doris] The real application cases within the augmented and virtual reality domain are mainly for other purposes, especially for maintenance tasks; I did some older studies on that, which I can link if you want. There we also did a comparison between using students as participants for these kinds of applications and using real workers. The interesting, maybe already obvious, finding is: if you test this with students, you will get some kind of results, but in the end you really need to test with the real end users, and they will see things entirely differently. So in everything we do, we try to really involve the end users. Within the bicycle project we are not at that stage yet. What we did within the bicycle project (we can share another video, sorry for that) is build an envisioned scenario of the co-production in VR.
The problem is that if you want to talk with workers about what they like and what they would prefer, they actually have no idea what to wish for, what robots are capable of, or what this envisioned situation could look like. So our approach was to use the virtual reality environment to place them inside that future scenario and then have a discussion with them. We did this together with RoboHouse, and I think we will officially release the video of it soon. That is where we applied human-robot co-production for the bicycle industry with virtual reality.

As for the main work I'm doing on augmented-reality-assisted tasks at the moment: I did a lot on maintenance and repair tasks, and I might come back to assembly, but at the moment we do composite layup. As you saw, there is this pick-and-place work with the robot using composites, and that is where industry in the aerospace domain has a lot of interest. Within the manufacturing industry, my main focus would not be direct assistance for single-worker cases. I did a lot of cases with multiple workers, for instance someone on the phone collaborating with someone local, a situation we encounter a lot at the moment due to corona. This is called computer-supported collaborative work, and that is where I have worked a bit more, because at the moment everybody is doing repair-and-instruction, maintenance-suggestion kinds of applications, and there is already a lot out there in industry; it's not that interesting anymore, because much of it has already been covered. So we focus on the more complicated cases: multiple humans, or human-and-robot systems, or what we will do in the KOALA project, the cognitive advisor, an AI system giving you cognitive advice. That is going to be more interesting than the usual 'I know how the instruction works and I give you some tasks'. If you want to read something on that, I have a very large comparison study with 150 participants (though I'm not sure I'd want to do that again) on how AR and VR visualizations can help with this kind of instruction-based work.

[Host] Cool, yes, definitely; if you can share it afterwards, I'll be happy. Luciano?

[Luciano] Hi Doris, thank you very much, this was a really fascinating presentation. I really liked the example you showed of the robot projecting its expected trajectory on the ground, and I can imagine that really helps the operators, the people, to have a bit more of a mental model of what the robot is intending to do. I'm thinking a little about this interplay: as soon as you give this notion, the operator might feel a bit more comfortable and get closer to the robot, and there could be emerging interaction patterns from that. So the projection helps the human form a mental model, but how does it also help the robot form a mental model and adapt? Do you have any thoughts in that direction?
[Doris] The interesting thing is that autonomous ground vehicles within factory intralogistics are a rapidly evolving field in the manufacturing industry at the moment. A lot of the questions currently discussed in the autonomous driving community are entering the factory through the back door, so to speak: many questions we have about normal street interaction are now entering the manufacturing world.

What is important to know is that autonomous ground vehicles are not entirely new to manufacturing; they are actually quite common. But they are not self-steered, and they don't show swarm-like behavior. They have dedicated routes and very strict safety routines (stopping whenever there is any obstacle, and so on), and at the moment they interact quite predictably, because they follow, for example, lines on the floor and dedicated passages. Compared to streets, factories are designed in such a way that humans behave as part of the machine: there are very strict rules on how humans are allowed to behave, and on top of those rules it is quite easy to develop the rules for the robot. So it is a very rule-based and, of course, safety-critical environment. The real interaction questions, as we would imagine them with autonomous cars, don't really arise at the moment, because the current systems are not really self-controlled.

But there is a very nice vision that we developed together with Magna; Magna is a car manufacturer in Austria. They use these autonomous ground vehicles and want to use them as a self-organized fleet. There you start to have this controllability question around the AI system, because the system will independently decide what it wants to do and self-organize its next steps. That is the point where interaction becomes more important, and that's why we came into the project; we are not the typical robotics engineers, we are much more on the interaction side. We looked into a couple of different scenarios, and one is definitely the close interaction on the shop floor. The main question there concerns the interaction between walking humans, human-steered forklifts, and AGVs, and the main thing is: I want to know what the thing is going to do next. That's why we came up with this projected-trajectory idea. I think it gets more complicated with mobile manipulators, because then you don't only have the robot driving, you also have an autonomous part that can manipulate objects, and that will make it even more interesting; but we are not there yet, let's put it like that. I can share a couple of links to the case that Magna envisions; they also make nice videos (sorry, that's just the industry domain, they always make videos), and I will share the smart-factory version from Magna, which is quite interesting.
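To illustrate the strict, rule-based safety routine described above (stop whenever anything enters the safety zone), here is a deliberately simple sketch. The zone size and the scan input are invented for illustration; real AGVs use certified safety laser scanners with standardized safety fields, not application code like this.

```python
SAFETY_ZONE_M = 1.5  # illustrative stop distance in meters

def safety_stop_required(scan_distances_m) -> bool:
    """True if any (hypothetical) laser-scan reading is inside the zone.

    `scan_distances_m` is an iterable of obstacle distances in meters,
    e.g. one value per beam of a 2D scanner.
    """
    return any(d < SAFETY_ZONE_M for d in scan_distances_m)

# The strict factory rule: any obstacle in the zone means full stop,
# regardless of what or who it is; resume only when the zone is clear.
print(safety_stop_required([4.2, 3.8, 2.9]))  # False -> keep following route
print(safety_stop_required([4.2, 1.2, 2.9]))  # True  -> stop immediately
```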
One of the topics I really want to raise: in traditional manufacturing, from the employees' perspective, nobody really needed to take much care of the human beyond safety constraints. Now that everything is getting more and more intelligent, we really need to take care of the interaction, and that is quite new for this field. I hope that answers your question.

[Luciano] It does; there's a lot in there, probably. Thank you very much.

[Host] Thanks. I saw that Zakari raised his hand too, and then David.

[Zakari] Yes, I just wanted to say it's amazing. I especially like this example with the wheeled ground robot, because I'm myself doing human-AV interaction with autonomous vehicles, so there's a really nice analogy there, I think. My question was basically the same as Luciano's, but I want to elaborate on it. You mentioned that you're already interfacing with the autonomous vehicles industry; the way I understood it, they are trying to bring some of their approaches into the factory workflow, right? But I was also interested in whether AR/VR-based interfaces are already being used in autonomous vehicles interacting with humans, and whether you have any plans of going there, or maybe you just know of relevant work?

[Doris] I'd be happy to share references; I have a literature survey on that if you want, and a graduation project, and you can also have our code: we published the fleet management code as student work too. You can see whether that helps you to some extent. And let's just get in contact, because we're currently proposing to put this into a Horizon Europe attempt together with Magna. I see that the market is increasing a lot for FTS (fahrerlose Transportsysteme, the German term for driverless transport systems), the autonomous or automated guided vehicles within factories. I am not quite sure why it took so long, because there were a lot of systems on the market already, but there has been a real recent push, with new standardization and so on. If you want to work further on that, we also have an international working group on the topic together with other universities, and if you want to participate, I'm always happy to have new people joining us.

[Zakari] Sounds perfect; let's get in touch.

[Host] Cool, all right, thanks Zakari. David?

[David] Where's that button... yes... top right... no. I hate switching between different systems; I'm always searching for these two functions again. It's not working.

[Host] Maybe try the button? No? Also not? Okay, you're going to type the question. Perfect; give him a keyboard if you want to talk to us. Perfect interface! Maybe also try restarting.

[Doris] Okay: how do we evaluate that? That's a very good question. Situational awareness is more or less horrible to measure, and there are also some psychologists who disagree with the concept altogether. Still, it has proven very helpful in the aviation industry and in military contexts, and it has been applied a lot in the manufacturing world. I mainly use the Situation Awareness Rating Technique (SART), which is a very brief questionnaire. To put it in context: I don't do a real SAGAT approach, which is the original Endsley approach.
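For reference, the three-dimensional SART score is commonly computed from the rated constructs as SA = Understanding - (Demand - Supply). A minimal sketch of that aggregation, assuming the usual 7-point ratings (the exact items and scale used in the study may differ):

```python
def sart_score(demand: float, supply: float, understanding: float) -> float:
    """3-D SART: SA = Understanding - (Demand - Supply).

    Each input is a rating, typically on a 1-7 scale:
      demand        - attentional demand of the situation
      supply        - attentional resources the operator can supply
      understanding - perceived understanding of the situation
    """
    return understanding - (demand - supply)

# Example: high understanding, demand slightly above supply.
print(sart_score(demand=5, supply=4, understanding=6))  # -> 5
```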
But you saw that we use this interruption technique: we interrupt at a specific point in the interaction that people are watching in the video, and at that point we ask legibility-related questions, for which we don't have standardized items, like 'what do you think the robot will do next?'. If you want, we can share the entire study with you; I haven't published it yet, but I should, and there is a graduation thesis that contains a couple of these tasks.

Then of course we also measure experience and usability; there are a couple of standardized questionnaires you can use from the usability side. If you want to evaluate in more depth, and you are really in an augmented or virtual reality setting, you also need to evaluate presence, and sometimes it is really important to measure immersive tendencies beforehand. So I have a set of questionnaires that I tend to use, including task load: you can use NASA-TLX, or other methods. If anybody is interested, I can share the methodology that we now use most of the time. I want to add that I'm only using these instruments as tools; I'm not doing research on the methods themselves, I use them the way they are recommended to be used within the human factors domain. And I find it much more valuable to use standardized questionnaires as much as possible, because then you can really compare with other applications. That said, for the manufacturing industry most of the questionnaires are not really validated for this application case; they are validated for specific parts of it. Most UX research, for example, is definitely about screen-based interaction, and we do much more than that.
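Since NASA-TLX came up as the task-load measure: in the common 'Raw TLX' variant, the overall score is simply the mean of the six subscale ratings (0-100), skipping the original pairwise weighting step. A minimal sketch; the dictionary keys follow the standard six dimensions, and whether the study used raw or weighted TLX is not stated in the talk.

```python
TLX_DIMENSIONS = (
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
)

def raw_tlx(ratings: dict) -> float:
    """Raw (unweighted) NASA-TLX: mean of the six 0-100 subscale ratings."""
    missing = [d for d in TLX_DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[d] for d in TLX_DIMENSIONS) / len(TLX_DIMENSIONS)

print(raw_tlx({
    "mental_demand": 60, "physical_demand": 20, "temporal_demand": 45,
    "performance": 30, "effort": 55, "frustration": 25,
}))  # -> 39.166...
```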
it\nwith with with then\nand these people um and here we use\nthere are also some standardized\nquestionnaires that we're using there\nand within cognitive ergonomics we work\ntogether\nwith chalmers university and they have\ndone in tremendous work\non cognitive ergonomics within the field\nof assembly\nand they also have very nice methodology\nwhich we applied for example also again\na cxi that's also a complexity index\nwhich they derive from the\nautomation from the assembly tasks and\nhere we\nlook into more or less also\npre-treatment after treatment\nso first for the analysis of a task for\nthe different levels of\nautomation and how the automatability\nand then after we have finished but we\nhaven't done\na really finished task yet and\nthen the uh plan is to look into the\nfinished\nthe new version more or less and\ncompared it to complexity index and\nperceived cognitive ergonomic factors\nso i'm mainly relying on existing\nmethodology here\nbecause i have the feeling there are a\nlot of people doing great work there\nwhich i can use then as a tool and and\nthen\nrather kind of focus on making the stuff\nwork and seeing\nif our design is really kind of\nresulting in some improvement instead of\nkind of doing research on the\nmethodology itself\nif that answers your question uh partly\ni mean i get your choice but um\nbut my question also was uh\nbasically in terms of responsibility\nwhat do you what do you think\nwe could use should use what we need to\ndevelop because it's not there\ni understand you don't do it but but\nyeah just reflect\non that so what we have is what we\ndefinitely have is\num we use the virtual reality setting\nfor the bicycle in order to kind of\nuse this as a tool within a responsible\nresearch and innovation approach with\nclaudia\nfrom tbm to use more or less the vr\nenvision setting within a methodology\nsetting\nof research responsible research and\ninnovation\nand they have kind of a bunch of tools\non\nmaking sure that the workers values are\nwell captured\nand then embedded later on in the system\nand that's something i find very\ninteresting and very relevant\nanother study that we do uh yeah sorry\nnick i see it um\nand the another study that i was doing\nwas asking\nrobot developers out in the industry\nif they are considering human factors\nand the end user at all\nand because that's something where i\nwanted to start in order to just\njustify the need at first because there\nare a lot of people within the robotics\ndomain which you might know and\nbetter than i do but also in the\nmanufacturing domain who don't really\nsee the necessity yet\nso my study what i was doing and i can\nshare the methodology with you if you\nwant\nis to go out and ask robot manufacturers\nin or robot builders within project\ncontext\nif they consider these kind of typical\nuser-centric approaches that we\nuse as a methodology and the answer is\nbasically no they don't have any clue\nand they don't think about the end user\nand i think that's the point where i\ntackle this kind of responsible approach\nif the developers don't care about it\nthen we cannot fix it like afterwards\nthat easily\ngood thank you great\nthanks uh thanks everyone thanks doris\nalso for uh\nfor this really really inspiring talk\nit's great and um\nso yeah like i said in the chat i'll\ntalk i'll talk with you to see how we\ncan share these references that you\nmentioned\nthe best with the people that are\ninterested and thank you very much for\nuh for the for being here\neveryone else for for 
[Host] And everyone else: thanks for your attention, and we'll see you next week.

[Doris] Thanks so much, and please send me an invitation for next week; I'm so curious to see the presentations of your other guests.

[Host] Definitely. See you, bye.", "date_published": "2021-02-24T16:02:18Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "b56287fbfae14e260dd4537302513b69", "title": "AiTech Agora: Chao Zhang - Personal Autonomy in Human-AI Interaction", "url": "https://www.youtube.com/watch?v=OOEooM_GVN0", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "and so I came back to TU Eindhoven, to the Human-Technology Interaction group, to do my PhD. The topic was modeling and changing people's lifestyle behaviors, such as encouraging people to do more physical exercise or changing people's daily diet. The focus of the project was on the psychological processes of habit formation and self-control: I tried to model those processes using computational models, and also to implement the computational models in intelligent systems, for instance for more accurate behavior prediction. That is a bit different from what I want to talk about today.

You could ask whether my PhD work on behavior change systems was related to AI. I think it was somewhat related: the project was part of the data science collaboration between TU/e and Philips Research, and back then, in 2015, nobody was talking about AI yet; big data and data science were the buzzwords. I also worked with people from computer science, and I used computational modeling and machine learning in my own work, so there is a good connection with AI. The topic of personal autonomy, however, played only a small role during my PhD: I included a brief discussion of the ethical implications of these behavior change systems, and people's autonomy is one of them, but that discussion was very brief.

After my PhD I decided to go to a different environment, so I went to Utrecht to do a postdoc in the Department of Psychology; I had always wanted to be in a more traditional psychology department for a while. I will say more about the human-AI alliance project in a bit. Then, last October, I went back to my old research group as an assistant professor. Right now I have a couple of different research lines. I still continue some work on habit and self-control, trying to model those processes and to see how we can use the models for behavior change. There are topics more related to AI, such as decision making and autonomy issues in human-AI interaction. I have also started to work with other people in my group on human-robot interaction, especially focusing on emotions, and I have the idea to use social robots for behavior change purposes as well, a new area I want to invest some time in. It may also be nice to mention that I am developing a new human-AI interaction course; I don't know whether there is anything similar here at TU Delft, so I would be happy to discuss similar projects with you.

I want to say a few words about the whole human-AI alliance program. This is the core team: the project was awarded to Professor Henk Aarts from the Psychology Department at Utrecht University and to Panos from Industrial Design at Eindhoven, and Supaya and I were hired to do the actual work.
We also have some 20 to 30 participating researchers from the different institutes. They are not very closely connected; it really depends on whether people want to work with us on some of these topics.

A bit about the motivation: the program was funded in 2019, a time when research on human-centered AI had really gained a lot of traction across disciplines. There were a lot of initiatives around human-centered AI, and in this program the focus is really on the tension between machine autonomy and human autonomy. There is a traditional concern that, as the machines around us start to make automatic decisions, even move around and act on their own, people might lose their sense of control, or even the core value of humans being autonomous beings. So this was an effort to strengthen research in this area, with people from both the Utrecht region and Eindhoven.

When I was doing the postdoc there was, besides my research, a lot of effort on creating joint education. We organized events inviting students and researchers from the different institutes to get to know each other, and we founded seven joint master's thesis projects in about two years. This was something I really enjoyed doing, because you really help researchers get to know each other, and the students like it a lot: they work not completely individually but in a big team, and they learn from supervisors from different fields, which I think is really helpful for their careers in science or in industry.

Talking about research, I want to focus mainly on two projects. There were actually many more ideas that we came up with during the last one and a half years, but the circumstances were not optimal for research, for many reasons; mainly because the project fell right in the middle of the pandemic, so we couldn't access lab resources, there was no way to build physical prototypes, and collaboration was also a bit challenging. So the two projects I will talk about are both online experiments, at a more conceptual level, about these autonomy issues in human-AI interaction.

In the first project we proposed a new functional model of personal autonomy, and we did three empirical studies to test the model. We also wanted to know whether it matters if the agent that constrains you is another person or an AI agent.

This leads to the very fundamental question: what is autonomy? To be honest, I cannot commit myself to just one definition, even now. There are a lot of different opinions in the literature from different fields; even within our core team there are different ideas, and in student projects people sometimes get confused. For most cases that is fine, as long as you define it precisely your own way and focus on your real research question. But I do think it is interesting to talk a little about the different perspectives and how we define autonomy and try to do research on it in our projects.
If you don't know much about the topic: the starting point for the whole discussion about autonomy lies in two different traditions in philosophy. On the one hand there is John Stuart Mill, for whom autonomy is really about liberty and freedom of choice. You may have heard of the no-harm principle: we shouldn't interfere with other people's choices unless those choices would harm others. That is the liberty perspective. On the other hand you have Kant, and for him autonomy means something totally different, with a very strong moral sense: autonomy really means doing the morally right thing, and he talked about rational self-rule and rational self-control. To give one example: if someone decides entirely by himself or herself to eat a lot of junk food, then according to Mill this person is autonomous, but not according to Kant, because for Kant your actions should reflect not your first-order, impulsive desires (tasty food) but some kind of rational thinking about what is really good for you, or what is responsible towards society. So these are the two main camps, and a lot of later definitions follow one of these two views.

In psychology, the concept of autonomy is mainly discussed within self-determination theory. It is a very popular theory, so even outside psychology you have probably heard a little about it. According to self-determination theory, autonomy is one of the three basic human needs: people need autonomy in order to function well and to be motivated. I am not really a fan of this theory, since I have always found it a little vague, and even within the theory the meaning of autonomy is sometimes unclear to me; but I should say that it did contribute a lot, in particular by demonstrating the importance of autonomy for human functioning and well-being.

In bioethics, autonomy is also a very important concept. There is a very interesting framework called the intervention ladder, by the Nuffield Council on Bioethics, which categorizes different types of interventions in terms of how strong they are: how much they affect people's behavior and how much they undermine personal autonomy. Moving from the bottom to the top, the interventions become stronger, and their impact on autonomy also becomes much larger. For example, the mildest step is simply to observe or monitor people's behavior, without any active intervention. You can do a little more by educating or informing people. Then you can guide people's choices through the use of defaults, which links to research on nudging; this is already at the middle level. If you change the incentives, that has a big impact on personal autonomy, and at the top you can basically restrict or eliminate people's choices altogether, for example through regulation or law, which is the strongest restriction on autonomy.
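For reference, the full ladder published by the Nuffield Council on Bioethics has eight rungs; the talk walks through a subset. A sketch as an ordered enum (the rung names follow the published ladder; the ordering by autonomy impact is the point made in the talk):

```python
from enum import IntEnum

class InterventionLadder(IntEnum):
    """Nuffield Council on Bioethics intervention ladder.

    Higher value = stronger intervention = larger impact on autonomy.
    """
    DO_NOTHING_OR_MONITOR = 1
    PROVIDE_INFORMATION = 2      # educate, inform
    ENABLE_CHOICE = 3
    GUIDE_BY_DEFAULT = 4         # change the default option (nudging)
    GUIDE_BY_INCENTIVES = 5
    GUIDE_BY_DISINCENTIVES = 6
    RESTRICT_CHOICE = 7
    ELIMINATE_CHOICE = 8         # e.g. regulation or law

def more_autonomy_preserving(a: InterventionLadder,
                             b: InterventionLadder) -> InterventionLadder:
    # Pick the lower rung, i.e. the intervention that restricts autonomy less.
    return min(a, b)

assert more_autonomy_preserving(
    InterventionLadder.GUIDE_BY_DEFAULT,
    InterventionLadder.ELIMINATE_CHOICE,
) is InterventionLadder.GUIDE_BY_DEFAULT
```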
In the more engineering-oriented fields, AI and computer science, autonomy is sometimes used to mean something like human-like capacities and intelligence; I feel it is sometimes used almost interchangeably with the word intelligence, which I think adds ambiguity to what you want to convey with the word autonomy. But there are more specific theories. For deciding whether a system is autonomous, one perspective looks at whether the system can make decisions and act in the physical environment on its own, without intervention from human users or operators. There is another idea I found pretty interesting, quite an old one from the 1990s, in which a framework categorizes the entities in the world into three categories. You start with objects, like your cup: they just sit there passively, waiting for you to use them to achieve your goals; they cannot do anything themselves. Then you have agents; think of software agents, or typical automation: if you give the system a goal, it will do something on its own. But according to this framework, an agent can be called autonomous only if the system can generate its own goals, based on the external environment and even based on internal motivation. So an agent at the second level would already qualify as autonomous under some definitions, but not under this one. I also found the idea of internal motivation pretty interesting, because I think it is still quite difficult to say what internal motivation really is for an artificial system, even in today's applications.

In our research we proposed a functional model of personal autonomy; this was actually an idea of Henk Aarts, my postdoc supervisor. According to this model, a person is autonomous if, first of all, the person has some kind of agentic capacity: the internal ability to act in the physical world, and the cognitive functions to think and decide. If you have problems with those, you basically cannot be autonomous. Then, in addition to the ability, the person needs to have an active goal: they need to want to achieve something, for instance to satisfy their motivations or needs. Goals can of course be very general or very specific: you can have the goal of living a healthy life, or the goal of eating a salad this evening; the level can vary from very general to very specific. And once there is a goal, you can talk about the environment: whether someone actually has the opportunities to achieve the goal, by specifying the different determinants of goal pursuit.
the different specifics of goal pursuit: the what, the when and the how.
In our research we focus first on the determination of these specific goal-pursuit components. We assume, of course, that the person has the capacity and has a real goal; we don't look at cases where capacity or a goal is missing. Given a goal, these are the things you need to specify in order to pursue it. You could call them three different goal-pursuit components, or simply three types of decisions you need to make before you can achieve your goal. What does this mean? I think it is quite intuitive. The what component is about what you will do in order to achieve the goal: basically a more concrete sub-goal, or the behavior you need to carry out. The when component is about timing: when do you want to do that thing. And the how component is about how you want to do it; normally there are multiple methods to achieve the same goal, so this sits at a slightly lower level than the what component. In the psychology literature, and also in neuroscience, there is research showing distinctions between, for instance, the what and the when components, and between the what and the how components; I won't go into details.
Here are some real-life examples in which you can see these components. If you look at the subscriptions from NS, they impose different kinds of restrictions in terms of what, when and how. Some subscriptions only allow you to travel to a certain city or place, so you have to say where you want to go; other subscriptions restrict when you can travel, for instance whether or not you can travel during peak hours; and finally there is the how aspect, such as whether you travel in first or second class. You can also think of drivers who work via Uber: Uber has an algorithm that manages these workers, and their behavior may also be restricted in different respects, such as which customer they should pick up, when they should pick up the customer, and through which route they should drive to the customer. So in a lot of real-life examples you can make a distinction between these components.
In the empirical studies we asked: what is the relative importance of these three decision-making aspects in determining people's perceived personal autonomy? If you restrict the what, the when and the how, which restriction leads to a stronger sense of reduced autonomy? We expected, of course, that all three would matter, but the question is their relative importance. We also wanted to know whether they interact in influencing autonomy, or whether their influences are more or less independent of each other. For example, if you restrict the what component, maybe the when component no longer matters, because the what is already very strongly restricted; so we wanted to see whether there is any interaction, or none at all.
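As an illustration of the three goal-pursuit components, here is a minimal sketch (my own, with hypothetical names; not the authors' code) modelling a restriction profile as three flags, using the NS-style examples from the talk:

from dataclasses import dataclass

@dataclass(frozen=True)
class RestrictionProfile:
    """Which goal-pursuit components are decided externally rather than by oneself."""
    what: bool   # the concrete sub-goal / behaviour is externally decided
    when: bool   # the timing is externally decided
    how: bool    # the method is externally decided

# NS-style examples: a subscription fixing the destination, an off-peak
# subscription fixing the travel time, and a class restriction fixing the how.
fixed_destination = RestrictionProfile(what=True, when=False, how=False)
off_peak_only     = RestrictionProfile(what=False, when=True, how=False)
second_class_only = RestrictionProfile(what=False, when=False, how=True)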
Secondly, we wanted to know whether it matters if the restriction comes from a person or from an AI agent or algorithm. There is of course a lot of research on differences in trust and acceptance of decisions made by human experts versus AI algorithms; some studies find that people are averse to algorithms, while other studies find that people seem to appreciate AI advice over the opinions of experts. But there is not much on the different types of restrictions that AI algorithms can impose on human users, so that is one of the novel aspects of our study.
We ran three studies, three experiments; I want to focus on study 1a, because the other two experiments are extensions and replications of the first. In this first study we had a 3x2x2x2 design. That sounds rather complex, but the basic manipulation is that we vary the three components: each aspect could be decided by oneself or by another agent, all manipulated within subjects. Then we have another factor: the source of the restrictions could be a human or an AI agent, and we have a baseline condition in which we do not specify the source of the restriction; this is manipulated between subjects. Participants imagined themselves in different scenarios, including travel, work, health and social goals, the different types of goals they may want to achieve, and went through eight different scenarios in random order in which these factors were manipulated. We measured perceived autonomy in terms of freedom of choice, control, the restriction on autonomy, and responsibility, and we aggregated these items into one measure of perceived autonomy.
This is what the task looks like. First participants read a description of a scenario, for instance planning a holiday: a very brief, rather abstract description. We also told them the meanings of the different decisions, the what, the when and the how. They were then told they were going to look at eight different scenarios in which these decisions could be made either by themselves or by another party. Each trial visualizes the allocation of the different aspects in a diagram. In one trial, all three components are determined by oneself; in another trial, someone can decide the when, say when to travel, but cannot decide the what and the how, which are supposedly determined by the other party. After each trial they answer the four measurement questions. So for each type of goal pursuit there are eight trials, presented in random order. We also manipulated the source of the restriction through different versions of the instruction text that participants had to read; again, this is quite abstract, as we did not define who the other person is or what kind of AI it is. Everything stays very much at the conceptual level in this first project.
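A small sketch of how the within-subjects trials could be generated (my reconstruction from the description, not the original experiment code): the eight trials are all combinations of self/other control over the three components, shuffled per participant, with the restriction source as a between-subjects factor:

import itertools
import random

COMPONENTS = ("what", "when", "how")
SOURCES = ("baseline", "human", "ai")  # between-subjects factor

def make_trials(rng: random.Random):
    """All 2**3 = 8 allocations of self/other over what/when/how, in random order."""
    trials = [dict(zip(COMPONENTS, alloc))
              for alloc in itertools.product(("self", "other"), repeat=3)]
    rng.shuffle(trials)
    return trials

rng = random.Random(42)
source = rng.choice(SOURCES)   # each participant sees one source condition
for trial in make_trials(rng):
    print(source, trial)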
In terms of the background text, everything stays the same; we just change the label: it could be "the other person" in the human condition, "the AI agent" in the AI condition, or we simply say that "something is constrained" in the baseline condition.
We did two replications. In study 2 we wanted to rule out the possibility that the order of introducing and visualizing the components biased the results, because we always presented the what before the when and the how; we found that it actually does not matter. We also wanted to see whether we could simply ask people which aspect they consider to be the most important, but we found that what they self-report is quite different from what is revealed by the task. In a third study we extended the setup to more organizational settings, such as allocating job tasks or making a career plan; we also replicated the human-versus-AI comparison, and I will tell you in a moment what we found; and we included two additional dependent variables: how much they would like the decision-making situation, and whether they would accept, that is, go along with, the situation defined in the scenario.
Here is the very basic result from study 1. You see the three between-subjects conditions: baseline, human, AI. What we found was that when you remove any of the components from one's own control, perceived autonomy goes down, which is quite unsurprising. You can also see what happens when, for instance, the what and the when are controlled by oneself in a trial, but not the how.
Maybe an easier way to understand the basic result: here we plotted the regression coefficients, basically the effect sizes of the three different components, together with the two-way interaction effects and the three-way interaction effect. What we found is that if you restrict any of the three components, perceived autonomy is reduced substantially. We found very little interaction, so the what, the when and the how affect autonomy more or less independently; the interaction terms are in fact very close to zero. We also did not find any difference between the human and the AI conditions, in both study 1 and study 3: the bars in different colors show only tiny differences, especially if you compare them to the effect size of simply removing a component. For instance, one coefficient here is close to one, which means that if you restrict that component, perceived autonomy is reduced by about one point on a 7-point scale.
We also tried to compare the relative importance of the components. In studies 1 and 2 we found a consistent order: what, then how, then when. People considered the what to be the most central to autonomy, followed by the how and the when; this was pretty consistent and replicated across those two studies, but not really in study 3, where we used somewhat more concrete scenarios in organizational settings. There the differences were much smaller, and the when component became a bit more important than the how.
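For the analysis described here, a sketch of how one might estimate the component effects and their interactions with an ordinary least-squares regression (the study's exact model specification may differ; the data below are simulated for illustration):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 800
# Simulated data: 1 = component restricted (decided by other), 0 = decided by self.
df = pd.DataFrame({c: rng.integers(0, 2, n) for c in ("what", "when", "how")})
# Perceived autonomy on a 7-point scale; restrictions lower it, no interactions.
df["autonomy"] = (6 - 1.2 * df["what"] - 0.8 * df["how"] - 0.6 * df["when"]
                  + rng.normal(0, 0.5, n))

# Main effects plus all two- and three-way interactions, as in the talk's figure.
model = smf.ols("autonomy ~ what * when * how", data=df).fit()
print(model.params)  # near-zero interaction terms mirror the reported result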
To conclude the three studies: it was not surprising that, in such an abstract experiment, manipulating restrictions on these different aspects always lowers perceived autonomy and goal motivation. We did not find interaction effects, which suggests that you may be able to compensate for removing one component by giving people freedom to choose on another component: if you restrict the what component, maybe it is nice to let people choose when they can do something. We were initially quite excited about what seemed to be a clear ordering of the different aspects, but that was not really replicated in the third study. And we did not find a difference between restrictions coming from a human and from an AI, at least not in such a quite abstract experiment; in real application contexts it could still be a different story, at least that is what I think.
One thing I actually struggled with is what kind of design implications you can draw from these results; as I said, the model really comes from a conceptual, psychological perspective. So we would also like to hear your opinion: do you think some of these results could be interesting for creating a kind of design space, where you can separate the different components and decide which ones you want the system to constrain and which ones you want people to be free to choose?
I want to continue with a somewhat different study. This is work mainly by my collaborator from the industrial design department; it is a little closer to application. We wanted to look at the effects of providing explanations, and of making people aware that there really is an AI or algorithm behind an application, on people's autonomy, in terms of very common, basic, everyday interactions with AI.
We had three main research questions. First, does providing explanations help to protect perceived autonomy to some extent? There is of course a lot of research on explainable AI, and the general idea is that explanations should normally be a good thing, so we wanted to see whether people indeed feel less constrained by automated recommendations when explanations are provided. Second, how does awareness of AI influence perceived autonomy? Here we had no clear prediction: we did not know whether making people more aware of the algorithm or AI behind a system is beneficial or detrimental to perceived autonomy. Finally, we wanted to explore the differences across everyday applications, such as a movie recommender, a car navigation system, a smart thermostat, or social media filtering of what you read.
What we did was again an online experiment, in which we used a design-fiction method to create different scenarios using some video clips. We had a 2x2 between-subjects design: for each application, participants either saw an explanation or did not, and they were either made highly aware of the AI behind the system or that was not the case.
We had over 300 participants, so about 80 per condition. Participants went through the eight applications in random order, and we used videos to show the behavior of these AI-infused systems; I will show a few examples of the videos in a minute. We then measured perceived autonomy, this time using items that are a bit more tailored to the applications: for instance, whether the system provides choices based on the user's true interests, or whether the system lets the user do things their own way. We had five such items to measure perceived autonomy.
Here is an example of what a scenario looks like, a movie recommender: when you log in, in one condition the presence of some kind of algorithm is made very salient, so participants see this artificial-brain imagery. And this is also a condition with explanations: for each recommended movie there are labels underneath showing the reasons you might like it, such as certain actors or a certain genre of film. This is the smart thermostat application; again you can see some explanations here about why the system set this particular temperature for you. And finally this is an example from the car navigation application, also with some explanations. So again, it is a 2x2 design: participants either see something like this, highlighting the presence of AI, or they see something much simpler; and for some participants explanations are provided by the system, for others there is no explanation.
So what did we find? We first checked whether the manipulations worked, that is, whether people actually noticed the differences, and that seemed to be all right: the ratings of how much the system provided explanations differed between the explanation conditions, and similarly for the manipulation of awareness. It was also clear, though, that the explanation manipulation worked a lot better than the other one; the awareness manipulation was maybe a bit too subtle, and the difference between those conditions was pretty small.
If you look at the results across all applications, nothing very interesting happened: there was no effect of explanation, no effect of priming a higher awareness of AI, and also no interaction effect. It becomes a little more interesting when you look at the individual applications. Somewhat surprisingly to us, the car navigation system stood out, for whatever reason: we found a relatively strong effect of providing explanations on perceived autonomy. When the explanation was shown, participants perceived their autonomy to be higher in the car navigation case; in the visualization you can see the distributions separated quite a bit, whereas for all the other applications there was almost total overlap.
[Audience:] Just to check, was that corrected for multiple comparisons? [Speaker:] It was; we divided alpha by eight. And, to be honest, we had quite a large sample, about 80 per between-subjects condition.
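On the multiple-comparisons point, a small sketch of a Bonferroni-style check across eight applications (the application names and data below are made up; the study's actual test may have differed):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05 / 8  # Bonferroni: alpha divided by the eight applications

apps = ["movies", "navigation", "thermostat", "social_media",
        "fitness", "news", "shopping", "music"]
for app in apps:
    # Simulated autonomy ratings, 80 participants per cell; only "navigation"
    # gets a true explanation effect, mirroring the reported pattern.
    with_expl = rng.normal(5.0 + (0.6 if app == "navigation" else 0.0), 1.0, 80)
    without   = rng.normal(5.0, 1.0, 80)
    t, p = stats.ttest_ind(with_expl, without)
    print(f"{app:12s} p={p:.4f} significant={p < alpha}")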
I was also wondering whether this is a fluke. If you attempted to replicate it, I believe you probably would replicate it, but it could be something specific to the design of our scenario, or it could be something more interesting: maybe car navigation has some special attributes compared to the other applications. That is, I think, still quite debatable and something to check seriously. [Moderator:] We want to leave some room for discussion. [Speaker:] Yes, I am almost at the end of the talk.
I have a couple more slides, but I will skip some of the more recent results. We also looked at the differences across the applications: regardless of our manipulations, so regardless of providing explanations or not and priming the AI or not, you see some trends in terms of which applications make people worry about personal autonomy a bit more than others. If you do proper tests, you could argue that for social media, for instance Facebook filtering what you see from your friends, participants worried the most, and also for the climate control; whereas for fitness coaching and car navigation, people tended to perceive higher autonomy regardless of the different manipulations. I thought it was interesting to observe these differences across the applications, and that has not been done a lot. The other question is why the car navigation system stood out: is it because it is a more critical application, about real-time decisions and actions, or is it something else? I don't know; it could also just be the way we designed the scenario, and that remains interesting. And I think one limitation is that the manipulation of the awareness of AI may have been interpreted differently by different participants: seeing the brain imagery, people could think about data-privacy issues, or they could think that the system is very smart; it could go all sorts of different ways. I think I will stop here. There are some other things we can also talk about, maybe in the next half hour, but first I want to know whether there are any questions about what I presented. [Moderator:] Great, thank you. [Applause]
[Audience:] My first question is about the design scenarios and the actions they imply. If one thinks of the Netflix movie recommendation, maybe there is an expectation of picking one anyway, so explanations might not matter so much; whereas with the Nest it looked like there was no option for a follow-up action, so I couldn't go and suggest changing the temperature; and with the car you maybe have two options, you can take the suggested route or not. So it is just a question of whether there was a thought about which actions are implied by each scenario. And then the other question is about the what, the when and the how: you spoke about the context dependency that you have seen; to what extent do you think there is also cultural dependency?
[Speaker:] First of all, thanks, I think those are very good questions. For this study, there are actually many differences across the scenarios, to be honest; we tried to make them realistic,
but that also means you don't have a lot of control. For instance, as you suggest, in the movie scenario you still need to take action, you actually need to decide; but the room temperature is simply set for you, and then you just see some explanation. Those differences might of course change people's experience. Even so, the only application that stood out was the car navigation, because for the others the manipulations that we thought might influence how people perceive autonomy really made no difference. But indeed, for the social media and the temperature control, people did tend to perceive somewhat lower autonomy in general; that could be because in those cases the decision is already made, you just see what comes out of the algorithm and you are either happy or unhappy, whereas for some other things, like recommendations, you still need to make an active choice; car navigation sits somewhere in between. I would say we probably need more dedicated studies if we want to separate those different factors.
As for cultural differences in the different components: I don't know. I tend to think that with such an abstract study you would probably not see them; the context could indeed be quite critical. Then again, in the first two studies we somehow found a very consistent order, even across quite different scenarios, planning a trip, planning your work, or planning a social event; it did not seem to matter. But when we switched to more concrete scenarios in organizations, we found quite some variation in the relative importance. I also did a follow-up study where I wanted to go beyond the really abstract scenarios: an experience-sampling study on different types of restrictions of people's decisions about their three meals, breakfast, lunch and dinner, again asking which aspects were or were not constrained. There, in the case of dietary choices, the order is again not the clear what, then how, then when that we found before; so it probably depends a lot on the different types of goal pursuit.
[Moderator:] Before we continue the discussion, do you mind sitting here, so that we can turn the camera and the people in the online meeting can actually see all of us? You can change the slides from here. Great, more questions?
[Audience:] Thanks for a super interesting talk. A question about the first half, perhaps. You laid out these really nice different conceptions of the meaning of autonomy that are out there, and then you proposed your own, which I think is the one you used in the setup of your experiment. I wondered whether you would have set up your experiment differently, or whether the results should perhaps be interpreted differently, if you had used one of the other conceptions, maybe the Kantian one. Would that have changed things?
[Speaker:] I would say ours follows more the liberty-like, independence view, because it is still about restrictions and choice, not so much about rational self-control;
it doesn't have a moral aspect. And I think what we came up with, the comparison of these three different components, is also an investigation at a slightly different level: it is not just about being restricted or not, or about whether people follow a second-order personal desire, but about different types of restrictions. So in general it follows the Mill-style perspective that autonomy is about whether you control your actions, whether you can decide something by yourself; that is how it was implemented in the experiments. The distinction between the different components is something quite specific. I actually had a somewhat difficult time at the beginning seeing whether it is really important, because intuitively you can talk about these different aspects, although in real applications they are also intertwined: sometimes if you cannot decide where to go, the timing is probably constrained as well. But I thought that in design scenarios, designers can really play with these parameters, switching them on and off to create different types of interventions.
[Audience:] I tend to agree with you. My hunch would also be that it really matters which of these meanings you use, and that is why I am very interested in why you chose this specific one: if you choose your own conception of autonomy, then you might be designing, checking or measuring for something that you don't want to check for. I think in the first part of your answer you said something like: this captures what we actually find important in this specific context. Is that a good interpretation of what you just said?
[Speaker:] I think the idea is that these different components follow from research in psychology on people's goal pursuit: what the different aspects and the different decisions are. I don't know how taking a completely different perspective on autonomy would change the description of these very specific things. I would say it is a slightly different angle: it is about what type of restrictions, rather than about whether you are being restricted, or in what way you are being restricted. [Audience:] Okay, thanks.
[Audience:] Thanks. If I understand correctly, in your studies autonomy is expressed as a number. So how is that measurement defined? And a related question, connected to some of the things discussed earlier: do you see limitations in reducing the meaning of autonomy, in this context, to a number? What might get lost when we reduce it?
[Speaker:] In both studies it was measured on a rating scale, for example: to what extent do you feel that you have freedom of choice in such a scenario? Participants imagine the scenario, see how things are determined, and basically choose one of the options, indicating whether they indeed feel they have a lot of freedom or little freedom. We tried to cover different dimensions of the concept: one question about freedom of choice, one about control, one asking about the restriction of autonomy very directly, and one about responsibility, which rests on the assumption that if you are not autonomous, then you are not responsible for the action. In the end these items correlated really highly, and we aggregated them into a single perceived-autonomy score.
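A sketch of the aggregation step described here, averaging the four items into one perceived-autonomy score after checking that they cohere, for example via Cronbach's alpha (simulated responses, not the study data):

import numpy as np

rng = np.random.default_rng(2)
# Four 7-point items per respondent: freedom of choice, control,
# restriction on autonomy (assumed reverse-coded), responsibility.
latent = rng.normal(4, 1, size=(200, 1))
items = np.clip(np.rint(latent + rng.normal(0, 0.7, size=(200, 4))), 1, 7)

def cronbach_alpha(x: np.ndarray) -> float:
    """Internal consistency of the item set (k items as columns)."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

print("alpha:", round(cronbach_alpha(items), 2))  # high alpha justifies pooling
autonomy_score = items.mean(axis=1)               # one number per respondent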
Would reducing all this to a single number be a limitation? I think yes, although I would say it is a very common limitation of a lot of similar studies that try to measure people's feelings or attitudes using this type of numeric scale. You could of course use a different approach, for instance qualitative research asking how people really feel; then you could perhaps capture a bit more nuance. Maybe, even with the same what, when and how restrictions, people have different specific feelings about them, and that is indeed not captured by just these four items.
[Audience:] Just to quickly share: from my experience engaging with students in research projects where questions related to autonomy were investigated, we found this qualitative aspect of engaging with people, really going into the nuance, to be very useful for seeing design directions, for example for a user interface; that nuance inspired particular things to be designed. [Speaker:] Totally, I agree. I also think some measures should be at the behavioral level, not just ratings: you could also observe how people continue to interact with the system; that would indeed be very useful as an alternative to self-report.
[Moderator:] We are running out of time, so maybe more questions from the online audience, if possible. No questions from the online audience; there was a question from Timon, but he has left already, so we will get back to him later. So my own question: if I take a step back from your results, it seems like there were no big effects. Similarly, my sense is also that a qualitative perspective would really enrich the understanding; but I am a quantitative guy after all, so I was also thinking maybe there are big confounding factors, or maybe it is not a good metric. I was thinking: where are the consequences in all of these cases, what would go wrong, and what is your ability to resolve misalignments? My sense is that that is what this community is about, actually having some kind of control, and it is also related to one of these autonomy factors. So my question is: could you respond to whether preconceived perceptions of consequences, and the ability to actually do something about them, played a part in the variability, and therefore perhaps in the lack of big effects?
[Speaker:] That is a really good point, because I also see one main limitation here; I wouldn't say
it is really a confound, but in all these cases the scenarios are very abstract. Participants don't really make decisions: they are given a scenario of how the decisions will play out, that is the only thing they perceive, and then they simply rate how they feel about that situation. They make no real decisions, no choices, and no consequences are manipulated in the experiment. When they just look at how things are supposed to be, as shown in those diagrams, different people can relate to the scenarios in different ways and think about what the consequences would be. Even the way we describe the other agent is very abstract: there is nothing about what the relationship of this person to you is. So all of these things are, let's say, not fleshed out. One idea I had at the end was that you can still try to see how these different components matter, the what, the when, the how, using a different approach in which people actually make decisions together with the system, maybe a chatbot, some kind of conversation, where you see different options, some constrained and some not. In that case, I would say, it should give you a more realistic test of how these different dimensions play out.
[Moderator:] Okay, great. It is technically over, but we can stick around for a while; we didn't book the room after two, but nobody is showing up, so we have about half an hour. With that, I think we have to say goodbye to our online participants and stop the recording.", "date_published": "2022-05-09T15:12:57Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "fee38cb58404bc5b99be19a4cd0c9cc2", "title": "AiTech Agora: Pradeep Murukannaiah - Personal Values and Social Norms as Foundations of AI Ethics", "url": "https://www.youtube.com/watch?v=kxQ851JjACE", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "All right, do you see my screen now? Yes? Cool, I think we can get started.
Hi all. First of all, thanks for inviting me to present in this forum; I think AiTech is a great match for several of the things I do, and it is a pleasure to be presenting here. As Luciano already mentioned, I am Pradeep Murukannaiah, an assistant professor in Interactive Intelligence. I also co-direct the HIPPO lab, a new Delft AI lab that bridges computer science and TPM, broadly on the theme of using AI for public policy design and analysis. We are currently hiring PhD students and of course looking forward to building up the lab; I am happy to talk to any of you about what we do, and we are looking for collaborations there.
With that said, let's dive in. Broadly, the outline for today's talk is this: I want to introduce the notion of AI ethics from our standpoint, which is the standpoint of sociotechnical systems; then I will describe what we mean by sociotechnical systems and why our view of sociotechnical systems is important for
realizing ethics in technical systems. Then I will talk in particular about values and norms, two ingredients of ethical sociotechnical systems, and I will end with some broader challenges and immediate opportunities for advancing this topic. That is broadly the outline.
Let's start with a very fundamental question: what is ethics? You may read several definitions, but at the end of the day the core of it is about distinguishing right from wrong behavior. This may make it sound like being ethical is purely an individual's trait, but if you think about it, even in the classical foundations of ethics an individual's behavior is not purely individual: it is shaped by others around the individual, by family and friends, which expands into the broader society. So whenever we want to study ethics, it is important that we try to understand one's behavior with respect to others.
This is also evident when you look at ethical dilemmas: situations in which there are no obviously good choices. You can start with the classic trolley problems, somewhat overstudied in my opinion: hypothetical scenarios in which the trolley can go on one track or the other, kill ten people versus one child, and so on. What is the best thing for the trolley to do, ethically speaking? You can also look at other scenarios. Think of Les Misérables: the main character steals a loaf of bread, and of course stealing is bad, ethically speaking, but at the same time he stole the loaf of bread to feed a child. Is stealing really a bad thing here, given that it served a good cause? And you don't have to think only of hypothetical scenarios: think of an ambulance that is speeding to the hospital. To what extent may it speed? On the one hand, the faster it goes, the higher the chance of saving the patient in the ambulance; but at the same time, if it goes too fast, there is a higher likelihood of accidents and of harming others. Considering these things, what is, ethically speaking, the right speed for the ambulance to travel at?
I also want to highlight that ethical dilemmas are not only about hypothetical or extreme, life-and-death scenarios; they also arise in very mundane, day-to-day situations. Think about answering a phone in a meeting, a meeting like this one, though the example is more appropriate for a real physical meeting. Can one take a phone call in the middle of the meeting? It doesn't have to be the presenter; it can be the people in the audience, as often happens. On the one hand it is a bad thing to do: it disrupts the meeting, and it also affects the other people who are taking their time to be in the meeting. But at the same time, suppose the person calling
you is one of your friends, who has been in some sort of accident and needs your help. At that point, taking the phone call does help your friend, so it can be justified from that perspective. Now you can take it even further. Once you take the call, you have an option: if you don't do anything, the other people will think you are rude; so you probably want to tell them, if not right now then later, that you took the call because somebody needed your help, so that you can avoid the sanctions from these other people. But is that okay, considering that you then violate the privacy of your friend? Saying that your friend was in an accident creates the impression that your friend is a bad driver. Once you start thinking about these things, you see that ethical dilemmas arise in many day-to-day activities as well. Broadly, what we are interested in is this: if agents, if AI, were in this decision-making process, how could it understand these scenarios and help make decisions?
One thing you may have noticed in all these examples is that they are inherently multiagent settings. In the trolley problems the decision always concerns one person or another; with the speeding ambulance there is the patient inside and the other people who may be harmed if it goes too fast; with answering your phone there are the people in the meeting and the person calling you. In almost all scenarios, whenever you talk about ethics, there are multiple parties involved, and the decision is always about what is right or wrong considering the multiagent setting as a whole.
However, the dominant view in the large majority of the works you see in the literature today, and I am talking about the computer science literature, is typically about making one agent or one algorithm ethical. I see at least three major problems with this view. The first problem is that ethics is not really about the technical entity. What does it really mean to say that a piece of software is accountable, taking accountability as the ethical property? In almost all cases there is a principal behind the technical entity, a social actor, and the ethical burden lies with that principal. If your autonomous car were in an accident, what would it really mean to hold the car accountable? It could be the company that manufactured the car, or it could be the person who owns the car, but the accountability always lies with the principal behind the technology. So that is the first problem: trying to make the algorithm itself, or the machine itself, ethical without thinking about the broader social context in which the machine operates.
The second problem is thinking that you can make one entity, like the car on its own or the algorithm on its own, ethical. Consider, for example, an application that makes mortgage recommendations in a bank. Broadly speaking, the bank
expects to be fair in making its mortgage decisions, and that is why, in the machine learning literature, especially in the last decade or so, there have been several algorithms, metrics and measures for making the algorithm itself ostensibly fair. Why do I say ostensibly fair? Because several studies, especially recently, show that none of these algorithms yields equitable outcomes to all participants in the long run. You may adjust the gradients of the algorithm a certain way, or tweak the parameters of the algorithm a certain way, but often the unfairness arises because the machine is being trained on data from the previous bank managers who made the lending decisions. If those managers were unfair in how they made the decisions, and that is the data on which your mortgage algorithm is trained, then obviously the best the algorithm can learn is the biased decisions that the previous managers made. And again, it is not just one algorithm: in the whole mortgage process there is the mortgage recommendation, an audit department, maybe a document-processing department, and all of them need to make certain decisions for the eventual decision to be fair. Focusing purely on making an individual algorithm fair does not solve the problem, especially in the long run.
So what we need, as I said, is to think of software systems in general as broadly situated in a society of stakeholders. Ethical considerations really arise in how the social actors, the stakeholders, interact with each other, and our goal is to design these systems accordingly; again, this becomes a difficult problem, as I will show in a bit. We want to design these systems not in an atomistic manner, trying to make only a part of them fair, but so that the system as a whole is fair. That is the broader objective of what we have been doing. And you can push it one step further: in the previous slide you could think of the software as a decision-support system, but more futuristically you can also think of these software systems as agents that act and interact on behalf of stakeholders, and that also make several proactive decisions.
So then I think the question really is this. We are thinking of this AI system no longer as a single algorithm, no longer as a box of a computer, but as a broader multiagent system, a micro society. I say micro society because, within the larger human society, you can have several multiagent systems depending on the application scenario, and each of them is a micro society. The agents in this micro society reflect the autonomy of its stakeholders. And it is important to note, as I mentioned on one of the earlier slides, that autonomy and automation are not to be confused. Often, when people say autonomy, they mean automation: more and more things being done in an automated manner. But that does not really mean that the software, or the principal, is
the extent\nto which they are accountable for\nwhatever they are free to do uh i mean\nif you want to understand concepts like\nthis and model concepts like this\nuh the essential argument is that you\nwant to think of systems as\nmulti-agent systems or broadly like like\nwhat i'll show in the next slide as\nuh social technical systems otherwise uh\nwhatever ethical outcomes or ethical\nproperties that you may realize in\ntechnology\nuh would be fairly shallow in my opinion\nso\num just to um\n[Music]\nokay or maybe i think i'll just take a\nbit of a\nsmall break here to see how you're doing\nand if there are any questions so far\nis there anything on chats luciano i\ncannot really see the chat screen\nno there's not a chat but someone had\nany questions\nplease just speak up or raise your hand\nwe have any ques\ni guess that everything is is very clear\nso far\ni do have some questions but i'll keep\nthem for\nfor the end then sounds good yeah let's\nkeep going then\nuh go ahead thank you yeah so um i\nwanted to take uh\nuh i want to describe our vision of what\na social technical system is uh\nin a little more detail like i said uh\num whenever you want to think of like\nengineering an ai system uh it's a\nstrong argument that you want to think\nof like engineering it as a broader\nsocial technical system\nwhich has both the the social tier so\nthe principles or the\nwhen i say principle i mean like humans\nand organizations\nso they interact in the the social tier\nand\nmany of the uh the ethical challenges\narise in the social care\nand what we also want is a technical\ntier where agents can\nrepresent the the principles and of\ncourse the agent's behavior\naffects the principle and there is a one\nadditional layer to it it's not about\nlike also engineering the individual\nagents\nwhat you also want is you want a\nspecification or a model of a social\ntechnical system that can govern the\ninteraction between these individual\nagents\nand one other point that i forgot to\nmention earlier is also that\nwhat's important is that you want to\nthink of these social technical systems\nas a decentralized multi-agent systems\nactually even in the multi-asian systems\ncommunity they've been quite a few\nworks which confused like distribution\nwith the decentralization\ni mean the fact that uh let's say each\nof us uh\nuh so i talked about like let's say\nyou're using uh\nan app for managing your the ringer on\nyour phone\num or any other app for that matter the\nfact that each of us have a different\napp installed on our computer\nthat in itself doesn't make it uh\ndecentralized\nso if all of these apps are essentially\ntalking to like a centralized server\nlike google or apple or whatever that's\nsitting somewhere i mean conceptually\nthis is uh\nas bad as like a like a centralized\nsystem that's like a claim server system\neverybody talking to\none server now when you say\ndecentralized what you really mean is\nthat\neach agent uh has its own control and\nthe corresponding principle\nis the control for that agent and of\ncourse you will still need some\ncentralized\nentities to uh realize this broader\nsystem as whole\nbut you don't really need this\ncentralized system for governing the\nsystem so\neach agents can enact their part of the\nbroader protocol\nlike we say to realize the social\ntechnical system as a whole\nuh yeah uh yeah and then like i was\nsaying uh\nuh when we say sts again it's not really\nlike a separate\nrunning system so it's not like you\nengineer a system and then you 
put that system on a set of computers where it runs. The idea is that you engineer individual agents and you specify the broader sociotechnical system, and when these agents are put to use, their interaction is what makes up the sociotechnical system. In a setting like this, whenever an agent wants to make an ethical decision, it needs a user model to understand the values and preferences of its user; it needs some aspects that sit at the societal level, things like norms and sanctions, because these are expectations from others; and it also needs technical capabilities, like reasoning about the context and about which actions to take. All of this has to go into the decision model of the agent.
The broader technical challenge, then, is how we can engineer such a decentralized multiagent system. Like I was saying, you need both sides. On the one hand, you want to model the individual agents, and some things can be modeled at the agent level: for example, the values of the individual users and their value preferences can be learned at an individual level. But these will influence what norms you want to use for specifying the sociotechnical system as a whole, so the norms must be modeled at the level of the sociotechnical system; and once a set of norms is in place, the norms can in turn influence the values in the system. It is a process that goes hand in hand.
To specify the sociotechnical system, the stakeholders have to come together and negotiate with each other in order to arrive at the appropriate norms of the system; and when you develop the reasoning for an individual agent, it takes into account the values as well as the norms that were specified at the level of the sociotechnical system. This also relates to the ethical properties you desire, things like fairness, transparency and accountability: you can think of each of these concepts at the level of the sociotechnical system, and you can also mirror them and think of a projection of each at the individual-agent level. This goes all the way from designing and developing the agents to verification and validation: if you want to verify an individual agent, you can verify it with respect to the norms of the sociotechnical system, and if you want to validate the STS, you can do that with respect to the values of the individual members of the system.
What I want to convey with this picture is that developing the individual agents and specifying the sociotechnical system as a whole have to go hand in hand. That does not mean you have to do it in one shot; it is going to be an iterative process, as I will show in a bit. You may start with certain values and design your individual agents; then, once the agents are put to use, you may notice that the agents are repeatedly getting sanctioned for violating some norms. That means there is either a problem in the way the agents model values, or a problem in the way the norms were put together, and one of them requires changes.
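To make such a decision model concrete, here is a minimal sketch (hypothetical names and scores; not the group's actual framework): an agent scores candidate actions by how much they promote its user's weighted values, and discounts actions that would incur sanctions under the system's norms:

from dataclasses import dataclass, field

@dataclass
class Agent:
    # User model: weight per value, e.g. {"privacy": 0.7, "safety": 0.9}.
    value_weights: dict
    # STS-level norms: action -> sanction cost if the action violates a norm.
    sanctions: dict = field(default_factory=dict)

    def score(self, action: str, promotes: dict) -> float:
        """promotes maps each value to how much this action promotes it (-1..1)."""
        benefit = sum(self.value_weights.get(v, 0) * p for v, p in promotes.items())
        return benefit - self.sanctions.get(action, 0.0)

agent = Agent(value_weights={"privacy": 0.7, "safety": 0.9},
              sanctions={"share_location": 0.5})  # norm: don't share by default
print(agent.score("share_location", {"safety": 0.8, "privacy": -0.6}))  # -0.20
print(agent.score("keep_private", {"privacy": 0.9}))                    #  0.63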
In a sense, what we have been doing for at least seven or eight years now is developing, broadly, a methodology for specifying the sociotechnical system and the agents that operate within it. At a very high level, it works as follows. You start by identifying the individual stakeholders' value preferences; then you can specify norms that support those value preferences, and of course you refine the norms as you go. Norms are what is crucial for accountability: when things do not go as expected, you can use the norms in the system to see whom you can hold to account. The individual agents take on one or more roles in the STS as a whole and carry out their part of the enactment; each agent has to carry out its own part, and the enactment itself is enforced in a decentralized way, with no one central entity enforcing it. Once you have a system like this, you can evaluate the outcomes of the system with respect to the values promoted, the norms satisfied, and things like that, and you iterate this process over the lifetime of the STS. It is of course a fairly abstract method as a whole, and it may seem easy enough when you look at the overall methodology, but realizing each and every part of it is a challenge of its own.
So what we have been doing is developing a suite of methods, some more methodological, some more technical, for realizing this broad overview of how you specify an STS and the individual agents within it. For example, the most recent work here is about identifying contextually relevant values; earlier work shows how you can incorporate values in agent-oriented models, and how you can use both values and norms for reasoning about which actions to take. I also did work on communicating values: how agents can communicate their values with each other so as to receive positive rather than negative sanctions. And of course there are going to be conflicts in a system like this, so how can you identify those conflicts and resolve them? I am not going to talk about all of these works in this presentation; just to give a flavor of what we are doing, I will talk about values, especially the recent work, and then about how values can be used to inform the norms of the system.
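As an illustration of what a norm specification might look like at the STS level (a toy sketch in the spirit of commitment-style norms; the names are made up), a norm states who owes what to whom under which condition, and violations are what sanctions and accountability attach to:

from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    """A directed expectation in the STS: `subject` owes `consequent` to
    `object` whenever `antecedent` holds."""
    subject: str
    object: str
    antecedent: str
    consequent: str

def violated(norm: Norm, facts: set) -> bool:
    # The norm is violated when its condition holds but the duty is not met.
    return norm.antecedent in facts and norm.consequent not in facts

n = Norm("ride_agent", "passenger", "ride_accepted", "arrival_time_shared")
print(violated(n, {"ride_accepted"}))                          # True -> sanction
print(violated(n, {"ride_accepted", "arrival_time_shared"}))   # False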
system while the\nsocial side of the system regulates\nuh but i i would say that\nsocio-technical\nsocio-technical design uh approach\nrequires us to realize values and norms\nin both social and tech social and\ntechnical aspects so it's as much about\ndesigning\ninteraction between humans through uh\norganizational practices\nsocial infrastructure as it is about\ndesigning interaction between humans and\ntechnical artifacts so it's\nthe synthesis of all uh of these\ntogether the humans\nuh and the technical aspects that uh\ngives the socio-technical system it's\nemerging desired properties would you\nagree so yeah yeah\nyeah sure i think the social\ninfrastructure and uh the technical\ninfrastructure as part of it\nyeah i am and i agree i'm not sure uh\nwhich aspect of it when i convert i\nthink i fully agree with what you're\nsaying and that's exactly what uh\ni want to convey as well so the\ndistinction between uh like uh\nlike i say the social and the technical\ntheory is more like\na social there is where like humans\noperate and technical theories where\nthe the agents or robots or whatever\nthey operate and of course\nthe real challenge is uh i mean taking\nsome of these abstractions that uh\nthe principles using the the social tier\nand coming up with the technical\nabstractions that can be realized in uh\ntechnical entities i mean that's what uh\nspecifying the social technical system\nuh is all about in my opinion and of\ncourse it's not easy\nyeah i think so so thanks for clarifying\nso because when i looked at the figure i\nsaw that\nso the green lines that you have they\nshow that the realization happens in the\ntechnical tier but i see the for to the\nsocial tier i see the\nregulate uh the regulate line so then\ni was i just wanted to uh to check with\nyou uh\nbecause like basically what i'm saying i\ni would i would extend those of the re\nthe realization you know\nwhere we implement the norms it's as\nmuch about the social tier\nas it is about detect and to them\ntogether interacting with each other\nyeah so i mean what we mean here is uh i\nmean yeah of course uh uh\ni mean if you're if you're thinking\nabout uh like even broader issues like\nfor example like things like\nloss and so on uh then of course some of\nthese norms and so on they need to be\nrealized in the the social tier as well\nso that's a like definitely like a\ndimension there but at least here what i\nwant to convey is that i mean we are\nreally interested in uh\nlike how can you realize the the social\nabstractions\nin the technical tier and that's why\nyou're saying uh uh the realized in for\nthe technical tier\nand the regulate for the the social tier\nbecause this is something that\nwe're not asking principles on how to\nbehave i mean of course that's again\nthis another domain like like for\nexample uh\nuh the policies and things like that\nso they may work on that but at least\nthe scope here is that uh i mean you're\nnot\ntrying to um come up with like uh\nlike rules for how humans or\norganizations should behave assuming\nthat\nthey are behaving a certain way based on\nwhatever is the framework\nhow can you sort of like translate that\nto the technical deal that was the the\nfocus here\nwell so just well i don't want to delay\nit right but no so so my point is\nbasically\ni'm i'm saying that we should be\nthinking about designing them jointly\ntogether so basically to your point that\nyou mentioned earlier in the\npresentation like with fairness right\nyou said that it doesn't make much sense\nto 
It's like your earlier point in the presentation about fairness: you said that if you're trying to avoid discrimination, it doesn't make much sense to focus just on the algorithm; you need to think about the larger practices around it, because all of it interacts together. Basically, I'm saying that it invites, it requires, designing both together.

Yes, I think that's indeed the right intuition, and I agree. If you want to make any rules about, say, how the humans in a smart car should behave, that has to go hand in hand with the rest of it. I think that's what you're saying: it's not only about coming up with norms for how smart cars should behave, but also norms for how humans in those smart cars should behave. I agree; that's the right intuition.

Thanks very much. Nice discussion. Do we have any other questions right now? All right, for now you can go ahead.

Thank you. Okay, so I'll keep going. How are we doing on time? It's 3:32, so we still have some time. Especially in this part of the presentation, I'm going to talk about a few of our papers in somewhat more concrete detail. I will go through it fast, but if you want more details, please feel free to ask as I speak.

First of all, I want to talk a bit about values. Why are we even talking about values, and what are values to start with? Essentially, when you think of values in the sense of Schwartz, values are what is important to us in life. And why values? Because if you want to make any ethical decision, you need building blocks to be able to reason about what's right and what's wrong, and values are one such building block for reasoning about ethics.

Schwartz characterizes all values by six main features. First, they are priorities that guide action; that's important from a technical perspective, because you eventually want to use values in making decisions. Second, they are beliefs that are intrinsically linked to affect; I won't go into that here. Third, they refer to goals: each value has a clear motivational goal. Fourth, values transcend contexts: a value is not something that applies in only one particular scenario, but can be applied in a variety of scenarios. Of course this is challenging, because it depends on the abstraction level at which we look at values; I'll say more about this context-transcending aspect in a bit. Fifth, values serve as standards or criteria: that's how we can justify the rightness or wrongness of someone's action based on the values they hold, especially when there is no obviously right or wrong answer. And sixth, values can be ordered by importance, and that is what distinguishes one person from another: although values are universal and all values apply to all people, how you ascribe relative importance among them changes from person to person, and that's what makes value preferences subjective.
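To make those last two features concrete (a universal value set with person-specific orderings), here is a minimal sketch using Schwartz's ten values; the example profile and helper are purely illustrative:

```python
SCHWARTZ_VALUES = [
    "self-direction", "stimulation", "hedonism", "achievement", "power",
    "security", "conformity", "tradition", "benevolence", "universalism",
]

# The value set is universal; the ordering is what differs per person.
alice = {value: rank for rank, value in enumerate([
    "benevolence", "universalism", "security", "tradition", "conformity",
    "self-direction", "stimulation", "hedonism", "achievement", "power",
])}

def prefers(profile, a, b):
    """True if value a outranks value b in this person's preference profile."""
    return profile[a] < profile[b]

assert prefers(alice, "benevolence", "power")
```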
Just a bit of motivation for why values are universally applicable. Humans are social beings, and all of us have several basic requirements; you can look at Maslow's pyramid, which ranges from biological and physiological requirements, through social requirements like love and belongingness, all the way to self-actualization and more individual kinds of requirements. In pursuit of several of these, what you need is the cooperation of other people, and so what you need is a language to communicate your needs to other people. Values are that language: you can use values to express to other people why you want to do something a certain way, and then they may be interested in cooperating with you. So that's a bit of motivation for what values are and why they are universal.

But the question remains: can you use values in a decision-making context as is? For example, what I show here are the ten values that Schwartz identifies in his model, arranged along two dimensions. Some of these values are more self-centered, such as achievement and power, and others are more other-centered, such as universalism and benevolence; of course, each person will have preferences among them. The other dimension is that some values are more about conservation, the past and tradition, while others are more about openness to change. So these are ten values along two dimensions.

But how can you use these values in engineering a specific app? For example, we had a scenario some time ago: suppose, in these pandemic times, you want to build an app that recommends when to go out and when not to, depending on whatever factors matter, the crowding or the infection rate, whatever it may be. Can you just take values like the ones I show here and engineer an app like that? It turns out that's not trivial. First, not all of these values will be applicable in that particular context, and moreover, how you interpret these values can be quite different from one context to another.

So what we did recently is develop a methodology for coming up with values that are specific to a particular context: they are applicable in that context, and at the same time they have an interpretation that is specific to that context. It's a hybrid methodology, and this is where I think our hybrid intelligence (HI) techniques also come into the picture: humans identify these values while automated techniques, in this case natural language processing in particular, assist the annotators in coming up with them. It's not that some designer decided on their own which values are applicable in this context; this is a data-driven approach. And where do we get data for doing things like this? We look at discourse. For COVID-related values, for example, you can look at what people are saying about COVID, what they like and what they don't like; this example is a Reddit discussion about protests against lockdown measures. Once annotators start looking at discourse like this, they can identify which values may or may not be relevant to users in this context.
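A minimal sketch of the hybrid, human-in-the-loop idea: an automated component proposes candidate value labels for a discourse snippet and a human annotator confirms or corrects them. The cue lexicon below is invented for illustration; the actual work uses NLP models to assist annotators, not a fixed word list.

```python
# Hypothetical cue lexicon; the real method uses NLP to assist annotators.
CUES = {
    "lockdown": {"freedom of movement", "public health"},
    "job": {"economic security"},
    "mask": {"public health", "personal comfort"},
    "family": {"belonging"},
}

def suggest_values(post):
    """Propose candidate context-specific values for one discourse snippet."""
    tokens = set(post.lower().split())
    candidates = set()
    for cue, values in CUES.items():
        if cue in tokens:
            candidates |= values
    return candidates

def annotate(posts, confirm):
    """Human-in-the-loop pass: `confirm` is the annotator's accept/correct step."""
    return {post: confirm(post, suggest_values(post)) for post in posts}
```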
I won't get into the details of the method here. And it doesn't have to be Reddit data: we actually also did it with data from a large-scale survey conducted at TPM by Niek Mouter, the participatory value evaluation. For specific mobile apps, you could likewise look at application reviews to see what kinds of things users value in an application. All of this gives you data for identifying the values that are relevant in a particular context. We also ran experiments showing that the methodology yields values that are, first, specific to this context: if you identify values in a different context, you get a different set of values, not the same one. And second, the methodology is repeatable, in the sense that its outcome is not specific to the people who apply it. I'm happy to talk more about it afterwards.

Once you know which values are in play in a particular scenario, you also want mechanisms to incorporate those values into the model of an agent. Coming up with a list of values may be fine for a policymaker deciding what to do in a particular scenario, but if you really want agents to make decisions from abstractions like values, you need technical abstractions to incorporate those values into the design, and eventually the source code, of the agent. For that there is quite a bit of work in the multi-agent systems community, broadly under agent-oriented software engineering, which offers high-level abstractions such as actors, goals, plans, beliefs, and dependencies. As we noted earlier, values essentially refer to goals, so you can incorporate values into an agent model directly via the goals.

Remember the intelligent ringer application I talked about: it automatically decides whether to ring your phone silently, loudly, or on vibrate, depending on whatever it judges to be ethically appropriate. In that scenario there are values like privacy, not disturbing others, and being reachable; these are values specific to this context that can be identified using Axies, the methodology I described earlier, and then incorporated into your agent model using the other methods we have described.

Note that when I talk about these agent models, they are not just figures: they are formal models, so they can be represented computationally, and they can even be refined to such an extent that they can eventually be traced down to the actual code of the agent. There are two advantages to models like that. One is that they make the reasoning process of the designer explicit; we have shown in experiments that when somebody else has to maintain your application, they better understand why a certain thing was done a certain way by looking at these agent models.
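A minimal sketch of the values-as-goals idea for the ringer example, with hypothetical names throughout; the point is only that goals in an agent model can be annotated with the values they promote or demote, which later supports reasoning about actions:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    promotes: tuple       # values this goal promotes
    demotes: tuple = ()   # values this goal demotes

# Ringer example: each ring-mode decision serves goals tied to values.
goals = [
    Goal("silence phone in a meeting",
         promotes=("not disturbing others", "privacy"),
         demotes=("being reachable",)),
    Goal("ring loudly for urgent callers",
         promotes=("being reachable",),
         demotes=("not disturbing others",)),
]

def score(goal, weights):
    """Balance promoted against demoted values under user-specific weights."""
    return (sum(weights.get(v, 0.0) for v in goal.promotes)
            - sum(weights.get(v, 0.0) for v in goal.demotes))
```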
The other advantage is from the end user's perspective. You may know which values are in play in a particular context, but you still want to know which values one user prefers over another value: the value preferences, which are user-specific. If you want to elicit those, then instead of giving users a flat survey asking what they value more, you can ask these questions in a particular context. The models have placeholders where you can mark exactly that. For example, in this case, deciding whether somebody values being reachable more, or values working uninterrupted more, is a question you need to ask, and we are saying the answer depends on the relationship to the caller and on whatever the surrounding context is. So you can ask the user this question in that particular context: you have a model like this, it is realized as a software agent, the software agent is put to use, and whenever this context is recognized, that is when the software asks the user this particular question. There are some more details to how you ask it; specifically, we use active learning for that. This way you can also combine the symbolic approach (the model is the symbolic part) with learning the context from data, which is more of a machine learning perspective; you can combine the learning and symbolic approaches in doing things like this.
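A minimal sketch of the uncertainty-sampling flavour of active learning described here; the preference model's interface (predict_proba, update) is an assumption for illustration, not the actual system's API:

```python
def elicit_in_context(model, contexts, ask_user, budget=5):
    """Ask the user only about the contexts the model is least sure about,
    instead of a flat, context-free survey."""
    remaining = list(contexts)
    for _ in range(min(budget, len(remaining))):
        # pick the context where the predicted preference is closest to 50/50
        ctx = min(remaining, key=lambda c: abs(model.predict_proba(c) - 0.5))
        # e.g. "In this situation, reachability over non-interruption?"
        answer = ask_user(ctx)
        model.update(ctx, answer)   # refine the learned preference model
        remaining.remove(ctx)
    return model
```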
So that's a bit about values. Now I want to slowly transition to the notion of norms. Values are fine: as I said, you can develop individual agents that make decisions for an individual user based on which values a particular action promotes and which values it demotes, and then strike a balance. But it's more complicated than that, because a lot of the time it's not just values that influence action; it's also the expectations of other people.

For example, suppose we are talking about a location-sharing app: the app automatically shares your location with other people, and who it shares with, and when, depends on whatever is ethically appropriate. Say Frank, the user of this app, is a teenager who prefers pleasure and recognition as values, so the app can share his location whenever it thinks that promotes those values. But his mother, Grace, cares for Frank's safety, so it's in her interest that Frank's location is shared even when his own values are not promoted. Similarly, suppose Frank is visiting his aunt Hope, who prefers privacy. Frank's action of sharing with all promotes pleasure for him, but since he is with Hope, sharing his location also shares Hope's location, which violates her privacy. How can you make ethical decisions in a case like this? The values the individuals care about are not sufficient by themselves; we also want to know the expectations other people have about your behavior, and that's where norms come into the picture.

In essence (and I'm using the word "norm" in a technical sense) a norm is a directed social expectation between principals. Although there can be a variety of expectations, for many common interactions these expectations can be captured in a few forms: commitments, prohibitions, authorizations, and powers. In that setting, each norm specifies a subject, an object, an antecedent, and a consequent; that is essentially the core of the norm. For a commitment, for example, it says that the subject is committed to the object that, when the antecedent holds, the subject will make sure the consequent comes about.

There are two obvious advantages to using norms. One is that they make expectations explicit, so whenever something doesn't happen, you know exactly who is accountable for that action. The other is that you don't need to reinvent them for each and every individual application: you can specify the norms for the application without implementing them in every single agent. You can have a norm engine, a library that verifies norms or enacts them, something like middleware, shared across agents. Individual agents will have different norms, but the actual realization doesn't need to be done by each individual agent; it can live in something shared.

I'll go through this next part a bit faster. I have an example showing some norms in a healthcare setting, which I will skip, but I would like to mention that these norms are not set in stone: a designer will specify norms to start with, but once the system starts operating, the norms can be refined. For example, in this healthcare socio-technical system there was initially a prohibition saying that the hospital prohibits physicians from sharing the personal health information (PHI) of a patient outside the hospital, in all circumstances. Then later you realize that, say, there is an emergency, your hospital is overcrowded, and doctors from outside the hospital are volunteering in your hospital to help deal with the overcrowding. At that point you know the norm will be violated, and that's okay: norms are meant to be violable, unlike rigid rules. And if a norm is broken more than once, you know something is actually wrong with it, and you can refine it. The refined norm might say: unless there is an emergency, the hospital prohibits the physician from sharing PHI outside the physicians of the hospital. That's the general idea here.
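A minimal sketch of this norm representation and of refinement by narrowing the antecedent, using the PHI example from the talk; the string-valued fields are a simplification of the formal norm language:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    kind: str        # "commitment" | "prohibition" | "authorization" | "power"
    subject: str     # whom the expectation is directed at
    object_: str     # who holds the expectation
    antecedent: str  # condition under which the norm is in force
    consequent: str  # what should (or, for a prohibition, should not) happen

# Original norm: in all circumstances, the hospital prohibits physicians
# from sharing a patient's personal health information (PHI) outside it.
phi = Norm("prohibition", subject="physician", object_="hospital",
           antecedent="true", consequent="share PHI outside hospital")

# Refinement after repeated, justified violations (emergencies with
# volunteering outside doctors): narrow the antecedent, not the agents.
phi_refined = Norm("prohibition", subject="physician", object_="hospital",
                   antecedent="not emergency",
                   consequent="share PHI outside hospital")
```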
Again, in the interest of time, I'm going to skip ahead. What all of this leads to is a fairly complex system: you have individuals' values and the norms of the system, both evolving over time, and you need mechanisms to reason about these things automatically. One of the methods we developed uses multi-criteria decision making; the method is called Vikor. Given all the stakeholders involved in a particular decision context, knowing their value preferences and knowing their norms, you can come to a decision. Again, you need to use some theory here, and this is where technical artifacts can benefit from research in the humanities disciplines: utilitarianism and egalitarianism are well-known ethical stances in the literature, and you can realize them as part of this reasoning framework.
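Assuming the method builds on the classic VIKOR multi-criteria ranking (the name in the recording is garbled, so treat this as an editor's guess), here is a compact sketch. Note how the S term aggregates a group utility while the R term tracks the worst single-criterion regret, loosely mirroring the utilitarian and egalitarian stances just mentioned.

```python
def vikor(scores, weights, v=0.5):
    """Rank alternatives with VIKOR. scores[i][j] is how well alternative i
    does on criterion j (higher is better); weights[j] should sum to 1."""
    m, n = len(scores), len(weights)
    best = [max(scores[i][j] for i in range(m)) for j in range(n)]
    worst = [min(scores[i][j] for i in range(m)) for j in range(n)]
    S, R = [], []
    for i in range(m):
        gaps = [weights[j] * (best[j] - scores[i][j]) / ((best[j] - worst[j]) or 1)
                for j in range(n)]
        S.append(sum(gaps))   # group utility (utilitarian flavour)
        R.append(max(gaps))   # worst single-criterion regret (egalitarian flavour)
    Q = [v * (S[i] - min(S)) / ((max(S) - min(S)) or 1)
         + (1 - v) * (R[i] - min(R)) / ((max(R) - min(R)) or 1)
         for i in range(m)]
    return sorted(range(m), key=Q.__getitem__)   # indices, best alternative first
```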
Okay, with that said, I want to go to the last part, and I'll run through it quickly. To conclude: what I did so far is introduce this notion of a socio-technical system, and then I argued that in order to reason about ethics, you cannot think of purely technically designing one algorithm or one machine; you want to think about engineering the socio-technical system as a whole. And again, a socio-technical system is not one running entity, one computational entity, that you engineer: you engineer the individual agents in the system, and you engineer them in such a way that when they are put together and used in a society, the entire socio-technical system is realized. For that, besides engineering the individual agents, you also want to come up with a specification for the socio-technical system as a whole.

I recognize three broad categories of challenges in realizing this vision as a whole. We have been doing quite a bit of work, but it's such a complicated problem that I can imagine spending the rest of my career, and several other people spending their careers, on realizing this vision fully.

The first set of challenges is about modeling ethics. I talked about norms and values, but if you start thinking about it, one user participates in multiple socio-technical systems, within each socio-technical system there are multiple contexts, and the value preferences and the norms change depending on the setting. If you want individuals to deal with each and every one of those scenarios independently, that is just too much information overload. So we still need ways of abstracting this to a higher level. One thing we have been thinking about is whether we can take behavioral patterns within a socio-technical system and start formulating ethical postures: if you know that a user acts according to one of, say, ten ethical postures, and you know which posture is appropriate for which combination of application and context, then users only have to deal with those ten postures, refining and updating them, rather than with each and every decision point in each application. And there are more challenges here: I only talked about values and norms, but several other concepts need to be formally modeled and defined, things like guilt, consent, and inequity aversion, as well as formulating pro-sociality in terms of principles of justice. These have been studied in the humanities, in the philosophical and psychological literature, but as computer scientists, what we really need are technical abstractions to represent these things and realize them as part of our socio-technical systems. To a large extent that is still missing in computer science.

Second, we need techniques for analyzing the ethicality of the system. You model the system a certain way, and once the system is put to use, you want to see to what extent the system is ethical. You can think of doing this from a design-time perspective, where you extend some of the formal verification or model-checking types of methods, but you can also do it at run time by simulating the behavior, because many of these things would be really difficult to test in the wild. I think there are also some really good opportunities in combining design-time verification approaches with run-time simulation approaches.

Finally, and I already alluded to this a bit: designers need to model the system, and users and designers need to verify or validate the system once it is put to use. But in the process you also have the end users. Many of these things, the values and the norms of the system, must be elicited from the end users, and you want to do that unobtrusively: you don't want to ask users too many questions, because then those solutions may not work, but you still want to give enough flexibility that people who want to specify things can specify them, while others can use sensible defaults of the system. How to do that in a balanced way is a challenge at the level of individual users. The other challenge is at the level of the STS as a whole: developing deliberation- and negotiation-style techniques for coming up with norms that everybody agrees on.
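Returning to the second challenge, combining design-time verification with run-time simulation, here is a minimal sketch of the simulation half: estimating how often system behavior satisfies the norm specification over simulated traces. All callables are hypothetical placeholders for the concrete STS at hand.

```python
def norm_satisfaction_rate(init, step, norms_hold, episodes=1000, horizon=50):
    """Empirically estimate norm compliance by simulation, as a run-time
    complement to design-time model checking. `init(e)` builds an initial
    state, `step` advances the simulated STS one tick, and `norms_hold`
    checks a whole trace against the norm specification."""
    satisfied = 0
    for e in range(episodes):
        state, trace = init(e), []
        for _ in range(horizon):
            state = step(state)
            trace.append(state)
        satisfied += bool(norms_hold(trace))
    return satisfied / episodes
```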
With that, I would like to end my presentation; thanks for your attention. We recently gave a longer tutorial, I believe about two and a half hours, at AAMAS, so if you want to listen to more of this or get more resources, please go to this website, where there is a longer description. I'm happy to talk more now and take some questions. Sorry, I think it took longer than I expected, but there is still some time for questions.

Okay, great. Thank you very much; that was really fascinating. As you yourself mentioned, there are a lot of research challenges, a lot of things to work on, but you can also see the great framework you have developed so far. We have a first question here from Derek. Derek, would you like to ask it yourself?

Sure, with some compliments first: I thought it was a great presentation, a lot of really good and very specific material, so thanks for sharing. I guess my two questions are, and I may have misunderstood, that it seems like a lot of this is for the initial system design, but I'm trying to understand it in the context of feedback loops and a kind of continuous iteration over time. Are there barriers to that, or are we missing tools or frameworks for applying this to systems iteratively?

No, that's a great question, and I would say also a main challenge. There are two types of feedback loops here. One is among developers: you develop one version of the app today, it needs to be refined afterwards, and then you need to communicate what was intended by the first developer to the second developer. To some extent the software engineering community has focused on that. The other feedback loop is between the designers, and it goes beyond designers, it could also be regulators and the like, and the actual stakeholders or end users of the system. That is something we have been trying to make at least an initial attempt at bridging. The designer comes up with an initial specification of the system and marks, within the design, the points at which end-user feedback is wanted. For example, as a designer I can say that these two values are in play in this setting, but which value is more important for one user or another is something the end users are supposed to say. Because you can put that into your app, the app, being intelligent, knows when to ask the user that question, so it can ask that question in context. Creating these feedback loops between designers and end users is important. And there are other challenges too: I also did some work in the requirements engineering community on more passive feedback loops, where somebody uses an app and then writes a review about it or discusses it in some forum, and we can mine forums like that to understand what users want and then update the designs accordingly. Overall I think it's a fascinating challenge, and I do think there is quite a bit of work to be done here.

Great, thanks for the answer. Thanks for the question, Derek. We have another question now from Jenny; go ahead.

Yes, thanks, Pradeep. My question is about studying values and eliciting preferences at the design stage. For those of us who come from computer science, the technical side of things, who are the other experts you would say need to be at the table with us in order to study people's values, needs, and interpretations at the design stage? For example, do we need to work together with UI and UX designers to do that? And do you think it is necessary, depending on the context, to use qualitatively rich methods, for example interviews, or methods from ethnography and so forth?

Yes, indeed, that's a good question. To start with, the first criterion I would name for anybody who wants to be part of this process is an understanding of what values mean, but I assume that's doable; people can educate themselves on that. The other part of your question relates to what value sensitive design calls value sources, and there you certainly need a mix of people. On the one hand, I would say the most important value source is the actual end users of the technology.
If not all of them, you definitely need a representation of those users to be part of this value specification or value preference elicitation process. Some of these values also come from the designers and developers: although people may want to be objective, there will be all kinds of unconscious biases, and the designers' values may eventually surface in the applications, so you also want them to be part of the value specification process at design time. And then there is one other kind of value source, perhaps in the same category as developers and designers: people who are regulators.

I would like to add social scientists to that. There are great specialists in values, Hofstede for instance, who would be a good person to add, and others like that.

Yes, indeed; this is what I meant when I said we want these people to be educated in values. You're saying that having people who are experts in values themselves means they can certainly bring the other people back on track if something about values is being missed.

Great, thank you very much. Thank you, Jenny, for the question. I have a quick question myself as well. You mention accountability in the framework as connected to norms. But when you look at multi-agent systems, there is also the phenomenon that an agent did something because of its interaction with one agent, and that one with another, and another, and then you get some emergent pattern that you were not necessarily expecting. Connected to this is the problem of many hands: a diffusion of responsibility or accountability, and thus accountability gaps. How do you see this in the framework, connecting accountability to norms? How can we keep track of it?

Yes, that's a great question. There has been some work that I know of on delegating norms. To start with, say the physician is prohibited by the hospital; maybe that's too vague. What does "the hospital" really mean? It may be the legal division of the hospital. I don't think it's an easy problem, but one way I can think of dealing with it in the norm setting is that you can update the subjects and the objects of norms. Of course, each time you update them, you need to know which other norms are related, and you need to update those as well; otherwise you create asymmetries, where there is one expectation here but a different expectation elsewhere. The advantage of doing all of this in a declarative way, like I said, is that each time something changes, you only need to change the specification, the declaration; you don't need to change the underlying agent implementation. That's definitely a virtue of this approach: things will change, things will go out of control and evolve over time, but each time you don't have to update the implementations of the agents. You just deal with the specification and update the representation, the symbols so to say.
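A minimal sketch of that declarative update, reusing the dictionary shape of the earlier norm sketch; `delegate` narrows a vague party ("the hospital") to a more precise one in every related norm at once, so expectations stay symmetric and no agent code changes. The function and party names are illustrative assumptions.

```python
def delegate(norms, old_party, new_party):
    """Reassign a party in a declarative norm specification, updating every
    related norm in one pass so no asymmetric expectations are left behind."""
    updated = []
    for n in norms:
        n = dict(n)
        if n["subject"] == old_party:
            n["subject"] = new_party
        if n["object"] == old_party:
            n["object"] = new_party
        updated.append(n)
    return updated

norms = [{"kind": "prohibition", "subject": "physician", "object": "hospital",
          "antecedent": "not emergency",
          "consequent": "share PHI outside hospital"}]
norms = delegate(norms, "hospital", "hospital legal division")
```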
Great; so you get this dynamic in a somewhat bottom-up way. Thank you very much. We have run out of time, so thank you, Pradeep; it was a really interesting talk with really nice discussions. If you have one or two more minutes, we have a final question, if that's okay. Yes, I can do that. Okay, so thank you everyone for joining. Stefan, you had your hand raised; do you want to ask it?

Yes, sure; thanks for the talk. One thing I was still wondering about is how you handle conflicting values. I can imagine that you don't necessarily want to go for, say, the value most people agree on, because there might be some values that are normatively more acceptable than others, so you would need some kind of ethical background theory for ranking values. How are you treating that problem?

Yes, in general the notion of conflict is challenging. First of all, there can be conflicts within one agent: I may have said at one time that I prefer X to Y, and said at another time that I prefer Y to X. That's already a sort of internal conflict, and it's good to recognize it to start with. It's possible that this reflects a genuine preference: in this context I prefer X to Y and in that context I prefer Y to X, and that's fine. But it could also be that, humans being not always rational, or boundedly rational, you have not thought it through: you may think you prefer X to Y but really prefer Y to X in both settings. Recognizing this is already a starting point for prompting the user to think carefully about it. That's still at the level of an individual user.

Then of course there will be conflicts when you want to use values to inform norms: I may genuinely have one value preference, and another stakeholder may have another, so how can we specify one norm where both of these stakeholders interact? To some extent that is unavoidable, but at the same time you don't have to force people to follow whatever the majority says. I can still follow whatever I think is more valuable to me and take actions accordingly, knowing that if I do, these are the sanctions I'm going to get. For example, suppose the norm is that only a physician may administer a certain drug, but a patient is dying, the doctor is not there, and the nurse knows which drug needs to be administered to save the patient's life. Even though that action violates the norm, if it is in accordance with the nurse's value preferences, she may still administer that particular drug. That would eventually mean there was something wrong with the norm, which you can recognize, and then you can update the norms over time with the same motivation. So yes, I think it's a challenging problem.
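A minimal sketch of the first step described here: flagging within-user preference conflicts so the system can prompt the user to reflect, rather than silently resolving them. The statement format is an illustrative assumption.

```python
def preference_conflicts(statements):
    """Find cases where the same user, in the same context, ranked two values
    both ways. Each statement is (context, preferred_value, other_value)."""
    seen, conflicts = set(), []
    for ctx, a, b in statements:
        if (ctx, b, a) in seen:
            conflicts.append((ctx, a, b))
        seen.add((ctx, a, b))
    return conflicts

stmts = [("at work", "reachability", "privacy"),
         ("at home", "privacy", "reachability"),   # different context: fine
         ("at work", "privacy", "reachability")]   # same context: conflict
assert preference_conflicts(stmts) == [("at work", "privacy", "reachability")]
```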
This also connects to some of the other things we are doing with respect to negotiation: instead of one designer manually specifying the norms of the system, if these norms were specified as part of the interaction between the stakeholders from the very beginning, during design time, then it is much more likely that you know what the norms are and why they were put together in the first place.

So you start with the preferences and then adjust the norms based on those? Yes, indeed.

Okay, I think we have to wrap up. Thanks everyone, and thanks again, Pradeep; very nice. See you all soon at the next AiTech Agora.

Thank you all. I appreciate being invited here, and I would be happy to talk with any of you more about these things, or about other things I'm interested in. Please feel free to send me an email and we can set up a meeting. Thanks, bye.", "date_published": "2021-05-26T20:33:55Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "1f6a6325c74a70b7de64c3413e3a4fb6", "title": "R Dobbe: Towards a Systematic and Realistic Practice for Developing Safe and Democratically Sound AI", "url": "https://www.youtube.com/watch?v=ZV9YWDDwC3A", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Go ahead.

Thank you, Evgeni, and hi everyone. It's a pleasure to be here and to be back to present in the AiTech community. I remember visiting Delft from New York a couple of years ago, when it all got started, and I'm really happy to see how vibrant the community now is: such a beautiful gathering of different disciplines, people, perspectives, and backgrounds.

Today I'm going to be presenting about hard choices in artificial intelligence. This is a project I started back in New York, working with some friends from Berkeley, where I did my PhD: the political philosopher Thomas Gilbert and Yonatan Mintz, who works in operations and industrial engineering. This work is really the culmination of a longer trajectory. I used to study in Delft; I graduated here in 2010 in systems and control, having started in mechanical engineering. I was quite restless and curious to understand other parts of society apart from academia, so I worked as a management consultant for a couple of years, and I realized how organizations were struggling with the advent of data, with using data for decision making. I encountered a lot of political controversies and issues within organizations, and also saw how different parts of the organizations didn't really have a common language to talk about the opportunities, but also the risks, related to these new possibilities. Partly because of that, and partly because I loved doing research, I went back to academia. During my PhD in Berkeley I worked on various issues of data-driven monitoring and control, and I also started to look at the ethical and social implications of automation and data-driven decision making, AI as we call it now.

What I learned through the projects I did (some of them were in systems biology, building data-driven models for biologists) was that with the outputs of my models, people were suddenly asking wildly different questions. I was quite uncomfortable with that, and I started doing more theoretical research, also thinking about how my models could go wrong.
Working in the energy system, on data-driven operations and decentralized control for energy resources, I also saw the importance of working with legacy systems and legacy infrastructures, and the political context in which our energy systems are now evolving. Everyone nowadays is really concerned with climate change and the importance of the energy transition, so I started to think about how I could weave that into my research, because I like to work on and learn about these normative challenges.

It took a while. I started working across campus in Berkeley, and Thomas Gilbert, I, and some others started an organization called GEESE, Graduates for Engaged and Extended Scholarship in Computing and Engineering, really just meant to bring different participants together on these issues of technology in society; we have put a lot of emphasis on AI systems over the last years. Tom, Yonatan, and I then worked on hard choices. The subtitle was the title you saw in the calendar invite: towards a systematic and realistic practice for developing safe and democratically sound AI systems. It's a mouthful. I was planning to integrate two different projects, but I have never actually presented this hard choices work before, and it was just too ambitious for me to bring it down to a shorter presentation; hopefully next time I present I can do that. So today I'll focus on hard choices.

To start off, I would like to call out the continuing string of safety scandals and failures that we see with AI systems, with AI functionality integrated in high-stakes domains. This is a slide we built with the AI Now team back in 2018. Without going into the details: we worked on different reports over the years in which we studied the various phenomena and societal implications of AI systems in all kinds of domains. I had the pleasure of working with a wildly multidisciplinary team, which did a lot of work in law and policy to understand how governments are adopting AI systems; my role was more that of the engineer trying to translate what was happening in the technical realm. These reports are a really good place to go if you're interested in a broader perspective on how these systems are affecting society, how they go wrong, and what we might do about that.

For me, safety is then not just physical safety, as it might be for a roboticist, which was the typical view as I developed as an engineer, but something broader. The question I would like to pose is: how do these systems go wrong, how do they make mistakes or errors, and how does that affect the broader fabric and the people interacting with or affected by the system? To get at that, it helps to have examples, so I'll put a lot of emphasis on examples today (I might speed up at some point), and I'd like to quickly go over three of them.

The first one is about autonomous vehicles. Here we see the fatal crash of the Uber self-driving car during testing: a pedestrian walking with a bike across the road was not detected, and was unfortunately hit by the car.
When we think about autonomous vehicles, a lot of energy and a lot of discussion over the last years, especially in technical work on AI safety and value alignment, has revolved around ethical implications in terms of trolley problems: should I divert the trolley, or the autonomous vehicle, for the sake of saving, say, a larger group of people or an elderly woman, at the cost of hurting a smaller group of people, or in this case a young child, or should I not do that? This has been a subject of philosophical debate across different traditions, including Kantian ethics and utilitarianism. The thought experiment assumes deterministic dynamics, a discrete action space, and complete control over the agent and the environment, in the sense of completely and exhaustively well-defined objects. Based on this tradition, there is a strong desire to build ethics into the AI system or autonomous vehicle, to build safety into the system. Unfortunately, the trolley problem doesn't do a good job of capturing the reality of the choices faced by a developer who is building the system.

Indeed, last week we had Niko from Deepen AI here in the Agora, talking in the seminar and reflecting on the many normative challenges related to developing perception for autonomous vehicles. Niko distinguished between two different approaches to using AI to detect objects and dynamic situations on the road.

The first approach uses high-dimensional deep learning architectures and large quantities of less structured data to learn perception capabilities, also called deep automotive perception. Here is an example from Berkeley, where I used to work with, or at least sit on the same floor as, some of the people working on this. These approaches have to figure out for themselves what information and patterns they need to keep track of in order to adequately judge dynamic traffic situations, and due to their high dimensionality and complexity, such approaches are intrinsically difficult to interpret; I would argue they are inscrutable. The normative issue here is that we cannot really trust inscrutable, or at least hard-to-interpret, machines to have emergent behaviors that are somehow safe and satisfy all the written and unwritten norms that we encounter in traffic situations.

Niko also discussed another approach: annotating huge data sets collected by cars on the roads. Here the idea is to have humans label as many different objects and dynamic situations as possible. In Niko's data sets, various people and organizations have contributed to the labeling, and what he mentioned were the difficulties in making sure that labels are assigned consistently; he had an example of some people labeling children as children, while others labeled children as people or human beings, in the same category as adults. I hope it is clear that determining whether a car should behave similarly or differently for children versus adults is a hard choice to make, and one that should be made with caution, involving the appropriate stakeholders to determine any viable norms and solution strategies, and to anticipate and determine what exactly the acceptable risks are.
see\nis that we are\nredefining public space by determining\nthe information that cars collect\nand the processes\nand that's and how they process this\ninformation towards\nuh the inputs for for the con for their\ncontrol\nand the developers cannot and should not\nmake these decisions on behalf of all of\nus\num so coming to terms with these\nnormative complexities\nsome leaders in ai are implying that\nsuch choices can be made on behalf of\ncitizens\nwhile suggesting that the responsibility\nfor safe interaction with\nautonomous vehicles should also lay on\nthe shoulders of pedestrians and\nbicyclists\nwho can be educated so here's a quote\nfrom andrew ing who is a\nleader in the field of machine learning\nand has done a lot of\nhe's doing a lot of entrepreneurial work\nthis is in the context of drive ai which\nwas a startup that he was involved in\nand he says that it's unwise to rely\nexclusively\non ai technology to ensure safety\ninstead the self-driving industry also\nhas to think about the people who will\nbe\noutside the vehicle which is why we will\nbe undertaking community-wide education\nand training programs\nwhere we operate and they made these\ncars\nyou know they took the obvious colors\nalthough i know\ni like the color orange but in this case\nthey took very very vibrant colors to\nmake sure that people would\nsee that that this is a special car\num and you know there's a lot of naivety\nin it but also i think interesting work\nin it but you can see some some some\nassumptions that he's making that i\nthink is\nproblematic i think it's tricky to\nupload responsibility to humans\ntoo quickly and i don't think that av\ncompanies\nhave the right incentives to lead\nefforts like this to train local\ncommunities and in effect redefine\npublic space\ni think last thing i want to say is that\nit's contrasted to the trolley problem\nand this embrace this this approach\nembraces the lack of formalization\nof ai of safety and instead imposes the\nflaws\nof the system onto human traffic\nparticipants\nor at least runs the rest to do so so\nthat's the first example\nsecond one uh is a bit closer to home\nthis is uh a photo from\nthe from our parliament at faded camera\nwhere you see um family members that\nwere\nfamilies that were affected by um the\ntuslar affair the benefit\naffair which was\nthe uh main reason why our former\ncabinet\nresigned here you see our prime minister\nbiking to the king\nto offer his the resignation for the\ncabinet\nand um this is this has been labeled the\nthe biggest human rights violation since\nthe second world war that took place\non dutch soil\nand what we know is that algorithms\nplayed\nquite an important role in um\ndetermining whether someone was\nfraudulent or not so these these\nfamilies\nwere there were more than 20 000\nfamilies that were labeled\nas fraudulent that were asked to pay\nback\nhuge amounts of large amounts of\nbenefits\nrelated to child care and many of them\nwent broke lost their homes were stopped\non\nuh on the highway to\nhad to give up their cars people\ncommitted suicide\nand we we don't even know half of all\nthe tragedies that occurred here\nwhy do i bring up this this example\nbecause ai systems did play a role it's\nnot just about ai systems it's also\nabout the way that we\nwe build laws and how we execute them\nmore broadly\nbut we know that there was a risk model\ninvolved\nthis risk model most likely was was\nbuilt in the context of the system risk\nindication\nsiri in short and here you see\na figure that shows 
like how different\ndata data\nfrom different departments was combined\ninto a\ninvestigation bureau that then was\nbuilding risk models for individual\nhouseholds\num and these risk models were most\nlikely\nthey were used in the in the benefits\ncontext of benefits\nand what also happened last year was\nthat this system this whole system\nfor risk indication was ruled as\nviolating human rights and was basically\nhalted by the courts\nin the netherlands after a coalition of\ndifferent organizations and dutch people\nstarted the strategic litigation against\nthe state\nand what's interesting here is that and\ni will just read it up\nthat the courts found that the siri\nlegislation\num in no way provided information on the\nfactual data that can demonstrate the\npresence of the circle\na certain circumstance for instance a\nbenefit fraud\nso in other words which objective\nfactual data can justifiably lead to the\nconclusion that there is an increased\nrisk\nto make it sound to make it uh\nput it in simpler terms the inputs\nof the machine machine learning model\nthat was used\nare no legitimate basis for detecting\nfraud\nand there was also no legal basis um\navailable to say that those data should\nbe\num gathered put together and and form\nthe input to a model\nof such source so we also you see here\nthat\nthat the exchange of information across\ndepartments is highly controversial\nand there are likely many other cases\nthat might pop up over the years to come\nand the and the government is really\nstruggling to find\nto figure out whether they can come up\nwith laws to kind of legitimate\nthe the practices that they already are\nputting in practice it's really\nproblematic\nand there's also no or no or little\ntransparency given on how these models\nactually work\nnot even the judges did have access to\nthrough these models\nuh and couldn't really um distinguish\nhow they worked um\nthe third example i want to bring is uh\nrelates to social media\nand here we see children um that refugee\nchildren\nthat belong to the rohingya muslim\nminority\nin myanmar and we all know that there\nwas a\ngenocide that's played out there over\nthe last years\num and that social media had a role to\nplay this so here we see a quote from a\nrecent\narticle from mit technology review\nfantastic journalistic work by karen\nhowe\nand i'll again read it up so here we see\nthat the models that maximize engagement\nalso favor controversy misinformation\nand extremism\nput simply people just like outrageous\nstuff sometimes this inflames existing\npolitical tensions\nthe most devastating example today is\nthe case of myanmar where viral fake\nnews and hate speech\nabout the rohingya muslim minority\nescalated the country's religious\nconflict into a full-blown genocide\nfacebook admitted in 2018 after years of\ndownplaying its role\nthat it had not done enough to help\nprevent our platform from being used\nto foam and division and inside of line\nviolence\num what's really um\ninsightful and and and disturbing about\nthis this piece which i all\nreally urge you to read is that um\nor you can read it here that just\nbasically facebook got addicted to\nspreading misinformation so\nthe ai algorithms that gave it its\ninsatiable habit for lies and hate\nspeech\nthe man who built those algorithms is\nthe kind of\nprotagonist of this article he's not\nable to fix them\nand what you see in this article is that\nthere's such a focus on growth\nprofiteering and\nresulting lack of proper democratic\ndeliberation about 
serious safety\nconcerns\nand instead facebook has organized a\nform of internal politics that prevents\nthe addressing of the role of ai systems\nin perpetuating hate speech\nmisinformation put more depressingly\nkaren hoe's investigation so that the\nevent of fairness metrics\nand approaches issues of addressing\nbias in ai systems first happened\nto narrow the mitigation of harm to the\nprevention of\ndiscrimination which is of course an\nimport still an important\nset of of problems but this was narrowed\nonly to\nfairness and bias in an effort to\nprevent likely in an effort to prevent\nregulation and to sideline\nthe role of ai in polarization and\nmisinformation which you could say is a\ndifferent category of problems\nand second these fairness approaches\nwere also misused to rationalize\naccusations of anti-conservative bias\nso it was used as a kind of an argument\nto say that um\nwe should um\nit was used in a way to um\nbasically support the spread of\nmisinformation\num to make sure that um that\nthe information that comes from the\nlet's say extreme rights\nis balanced with the amount of\ninformation that comes from\nthe other side of the spectrum and\nthereby systematizing behavior that\nrewarded misinformation instead of\nhelping to combat it\nso in other words the role of ai systems\nin misinformation and institution of\noffline violence\nis just systematically denied there's no\nteam\nof people that's given the job to really\nwork on\nthat that that problem so here again we\nsee a move towards framing normative\nissues of algorithmic harm and\nresponsibility\nthat are inherently political and\nlack a negotiated definition\nto cast it more as technical problems\nwhich can be diagnosed and solved with a\nset of technical tools\nso the normative issues we see here is\nthat i mentioned some of them already\num i think i've mentioned all of them\nalready\nso just flash them here for you\nso now i've covered three examples so\nwhat do we see in these examples or what\ndo i\nsee and what questions these are more\ninformal kind of\nquestions or statements first is that\nyou can't necessarily bake safety into a\nsystem\nor learn your way out of an unsafe\nsituation\neven if you're powerful and have all the\ndata in the world\num you can't reframe safety problems\nunless you're very powerful like we saw\nfor facebook\nyou can also hand them off to others and\nwe'll see because\nif you're very powerful you might still\nbe able to do that but these are the\nthe kind of normative\ndilemmas uh and also the kind of\nbehaviors we see that i think we have to\ncounter that we have to address\nproperly and if we do a good job i think\nwe can come up with much better\ntechnical work or socio-technical work\nto uh to resolve\nor to address safety puts puts proper\nsafeguards on these systems\nso what is needed so the focus i i'm\nputting is uh\nto work towards a broader systematic or\nsystemic view\nof ai systems to understand how they are\nsituated in and reshape\ntheir context of application um\nand that includes and transcends the\ntechnical logics and fixes of the model\nor algorithm\nand engages with normative dimensions in\nwhich technical choices are made\nbut also with the socio-technical\ninfrastructures and the political\neconomy\nthat these systems rely on\nthe other thing is that i won't go into\nmuch uh\ni won't go into the concrete problems\npart that i i\ni mentioned in the abstract but i am\nalso working on\nrealistic and um looking at\nai systems failures and working 
The other thing, which I won't go into much (I won't cover the concrete problems part that I mentioned in the abstract), is that I am also looking at AI system failures and working towards a realistic and ongoing conversation about these failures and the plurality of safety implications that they have for different stakeholders, people, and communities. That includes explaining errors and failures not just in terms of physical safety (again, look at the Facebook example, look at what happened at the US Capitol a couple of months ago), and also understanding errors not just as arising from an insufficiently refined representation of the context, but also as a possible result of fundamental incompatibility, or what we will call normative indeterminacy, which is what I will focus on today.

The research questions that go with that are listed here; I will focus on the first one: how do we adequately understand normative complexity in AI systems? It's a big question, and I think a contextual question for different systems in different domains, but we have looked mainly at political philosophy, and also at science and technology studies and other fields, to see what we can bring to AI system development and its practices in order to reinterpret the choices and assumptions we make as normative choices. Again, this is work with Thomas Gilbert and Yonatan Mintz.

To start off, I would like to bring up the sorites paradox. The question here is: how many grains of sand do I need to take away from a heap of sand for it to no longer be a heap? Do I need to keep going; is this still a heap? The point is that what constitutes a heap is vague, and vagueness will be a central concept for this talk. It is defined as the practical absence of conceptual boundaries. It's an ancient concept, studied by the ancient Greeks, and more recently, over the last decades, Ruth Chang has been studying vagueness at Oxford. What she says is that there are three canonical answers to this question of what constitutes a heap. The first answer is that there exists an objective answer to what a heap is; we may not know it, or hold it with varying degrees of confidence, but we might be able to characterize it if we just collect enough information. The second canonical answer is that different answers exist because different people use the word "heap" differently; the concept is semantically indeterminate, and there is a constrained set of communities that necessitates consensus or coordination to see whether a common norm can be determined. The third canonical answer is that there is no answer to the question, because what constitutes a heap is intrinsically vague; so let's not bother finding an answer, it's vague.
So a question you can ask is: what should be the criteria for discovering, evaluating, and resolving harm among stakeholders? What I am doing here is taking the first canonical answer as a starting point. Let us look at traditions that have said there exists an objective answer to this question, because that, I would argue, is quite a dominant perspective within AI safety and within engineering more broadly. So: what should be the criteria for discovering, evaluating, and resolving harm among stakeholders related to, say, autonomous vehicles? Here we look at recent work that has become quite popular in the AI safety community, which treats machine ethics as dealing with normative uncertainty. The work by William MacAskill defines metanormativism, basically the idea that you collect information and learn policies so as to incorporate, and act in accordance with, a plurality of norms. The idea is that if you have various norms, and various approaches to addressing those norms, you can articulate second-order norms, meta-norms, that guide how one should act when multiple appealing moral doctrines are available. The assumption is that there exists a clear positive value relationship between the available ethical actions: one must be unambiguously better than, worse than, or equal to another (the aggregation move this licenses is sketched in code below). That is quite a bold statement, because it says that, if you look at autonomous vehicles, you can go across all the different stakeholders (pedestrians, bicyclists, drivers, self-driving cars, the public institutions concerned with traffic and traffic safety), somehow figure out what all the different norms are, and incorporate them into one normative framework that adheres to all of them. As such, normative uncertainty is cast as a problem of empirical observability: if you leverage the system's ability to learn, it can do a better or more consistent job of following norms, even better than humans do, optimizing over what different stakeholders might do, again under the assumption that the dynamics are deterministic, or deterministic with a measure of probability over them. This was one important starting point for our study, and I will now list two challenges with metanormativism. The first is that it is a static perspective. It looks at the world as a static ecosystem in which these different norms exist, and it wants to understand, discover, and encode them into the system. However, social dynamics and power relationships are omnipresent. When you start to engage with these dynamical social systems, where different stakeholders may sit in an implicit hierarchy of expertise and domain trust, then intervening at one level, even if you are just collecting data or just observing, affects the other layers. We know this from social scientists, who call it performativity.
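To pin down the aggregation move mentioned above, here is a minimal sketch in code. It is an editorial illustration, not from the talk: the theories, credences, actions, and scores are all hypothetical, and it computes something like a MacAskill-style expected choiceworthiness, which only goes through if every pair of actions is commensurable on one scale.

```python
# Editorial sketch of the aggregation that metanormativism presupposes:
# score each action under several moral theories, weight by credence,
# and rank. All names and numbers here are hypothetical.

credences = {"utilitarian": 0.5, "deontological": 0.3, "virtue": 0.2}

# Choiceworthiness of each available action under each theory. The hidden
# assumption criticized in the talk: all theories report on one common,
# comparable scale ("unambiguously better, worse, or equal").
choiceworthiness = {
    "brake_hard":  {"utilitarian": 0.9, "deontological": 0.7, "virtue": 0.8},
    "swerve_left": {"utilitarian": 0.6, "deontological": 0.9, "virtue": 0.7},
}

def expected_choiceworthiness(action: str) -> float:
    scores = choiceworthiness[action]
    return sum(credences[t] * scores[t] for t in credences)

best = max(choiceworthiness, key=expected_choiceworthiness)
print(best)  # the meta-norm's recommendation, *if* the scales line up
```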
Recently, people have also looked at aspects of performativity in the context of AI systems, for instance via Goodhart's law. The takeaway is that if you are observing the world, you are also in a relationship with that world. So regardless of your entry point, whether you have an API, a historical dataset, or even just a set of traditional norms, the views you take on are not neutral: they have been developed and shaped by this hierarchy of expertise. As such there is a kind of responsibility that exceeds the frame of metanormativism, because aggregating and resolving norms also means that you are reshaping them. That is just an image to let you think about that challenge for a bit. The second problem is that metanormativism ignores conflicting values: it claims that it can resolve conflicting values, or hard choices. An epistemic meta-ethic imposes a boundary and inherently strikes a trade-off between values, and we know that we often encounter value conflicts when we start building a system. There is, of course, value in metanormativism, in a structured approach to thinking about different values and whether you might be able to relate them to each other numerically, but you would need some structure and some limited complexity. The value of the approach is proportionate to its applicability to the situation: not so much in terms of how much normative uncertainty we can resolve with it, but in terms of the worthwhile inaccuracies of such a system and the acceptable levels of ignorance that come with it. So what we do in the Hard Choices paper is expand this definition, going from normative uncertainty to normative indeterminacy, building also on ideas from the philosopher Ruth Chang whom I mentioned earlier. The idea is that you cannot merely discover or encode social norms in a system; you are also making them, by choosing the practical conditions under which AI tools are developed. Developing safe AI is also, and often primarily, about developing the appropriate practices that affirm the value commitments needed to keep a system safe, and a lot of that work of safeguarding systems and organizing systems in practice happens behind the scenes. It does not surface much in the circles of artificial intelligence research and AI safety, and we believe there needs to be more honesty about this: honesty about the discretionary power of developers. Often, I would argue almost always, there are more ways than one to develop a system. We do not have to control an AV in any particular way; you choose to do it in a certain way because it seems legitimate, and that has to be justified. It is not something discoverable by more accurately observing the domain and the data available to you.
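The contrast with the earlier sketch can be made concrete. Once options score on plural values with no agreed exchange rate, all we can check is dominance, and genuinely hard choices show up as incomparable pairs. Again an editorial sketch: the policies, value names, and numbers are hypothetical.

```python
# Editorial sketch: with plural values and no common scale, we can only
# test Pareto dominance; options that neither dominate nor tie are
# incomparable, i.e. a hard choice that observation alone cannot settle.

from itertools import combinations

options = {
    "defensive_policy": {"pedestrian_risk_reduction": 0.9, "traffic_flow": 0.4},
    "assertive_policy": {"pedestrian_risk_reduction": 0.5, "traffic_flow": 0.9},
}

def dominates(a: dict, b: dict) -> bool:
    """True if a is at least as good on every value and strictly better on one."""
    return all(a[v] >= b[v] for v in a) and any(a[v] > b[v] for v in a)

for (name_a, a), (name_b, b) in combinations(options.items(), 2):
    if dominates(a, b):
        print(f"{name_a} dominates {name_b}")
    elif dominates(b, a):
        print(f"{name_b} dominates {name_a}")
    else:
        # Neither better, worse, nor equal: someone has to decide,
        # and to justify the decision.
        print(f"{name_a} and {name_b} are incomparable")
```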
That brings me back to the three canonical lenses, which are now labeled as follows. Before I go into them: see these as different responses that you, as a researcher, might gravitate towards; maybe you combine several. This is really meant conceptually. The first one, and I would say this is where metanormativism definitely fits in, is epistemicism, which assumes that there is a single observable boundary for harm, but that we do not precisely know where it is. Maybe we could hire psychologists to design experiments that elicit people's distinct experiences or expectations of safety, so that we can determine where the boundary lies; some AI safety researchers have proposed or are actually doing this work. The second is semantic indeterminism: there are many forms of harm and many ways of talking about harm in different language communities. Think of lawyers, engineers, people out on the street, your grandmother, your nephew. We all think of safety differently, and we have to accept this pluralism in the languages we use from day to day. The third, ontic incomparabilism, holds that there is no boundary for harm, because the real experiences of harm across society are fundamentally vague and incomparable. You might then deem it impossible to define harm well for an AI system, and depending on perceived safety risks you could either conclude that the system is simply too risky, or decide that it might in fact be valuable to experiment with it so that you can shed some light on the phenomenon. The point is that all these lenses are valuable. It is valuable to think about what kind of formal work you can do from the epistemicist perspective. It is important to understand, for instance, what legal bases already exist that we have to take into account to make sure systems adhere to the law, and also, where new safety risks arise, what kind of legal basis we would need to actually protect society from systems causing harm. I see that I have a couple of minutes left, so I quickly want to show how these ways of thinking come back in AI system development, with the Hard Choices framework. It is a diagnostic framework. I am not saying this is how you should develop systems; it is one way we propose for diagnosing normative complexity in a productive way, one that also opens up the ability to engage across different stakeholders and maybe work towards more meaningful definitions of safety and ways of dealing with safety risks. We distinguish the typical work you do during design, where you featurize: you figure out what features you are using, literally, in your machine learning model, but also, more broadly, what features you need in your engineering design. One of the things that can go wrong there is what we call jaywalkerization. This is the idea that there are only certain kinds of safety you are taking into account, certain ways my car might hit a pedestrian that you account for and make sure to prevent, and in doing so you are effectively deciding how society should wrap itself around the AV.
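As a minimal illustration of how a featurization choice can quietly encode such a normative decision, here is an editorial sketch; the feature names and classes are hypothetical and do not come from any actual AV stack.

```python
# Editorial sketch: a feature extractor that only represents pedestrians
# at crosswalks. Anyone crossing elsewhere collapses into the background,
# so the downstream policy literally cannot reason about them.

from dataclasses import dataclass

@dataclass
class SceneActor:
    kind: str          # e.g. "pedestrian", "cyclist"
    at_crosswalk: bool

def featurize(actor: SceneActor) -> dict:
    # The normative choice hides here: "safety" is only defined for
    # rule-following road users.
    return {
        "is_crosswalk_pedestrian": actor.kind == "pedestrian" and actor.at_crosswalk,
        "is_cyclist": actor.kind == "cyclist",
    }

jaywalker = SceneActor(kind="pedestrian", at_crosswalk=False)
print(featurize(jaywalker))  # all False: the jaywalker is invisible by design
```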
One historical reference here is jaywalking. Jaywalking was an idea that came from automakers: they invented the crime of jaywalking, and there were actually large campaigns to stigmatize jaywalking before it was made illegal. In a similar way, I think, the way we featurize AVs and formalize their safety can have major effects on the kinds of behaviors we end up demanding from bicyclists, pedestrians, and others on the road in order to have the system function properly. That is one example; I have worked out others. The next phase in the framework is optimization, which I illustrate with potholes, but I will not go into that, unfortunately. Then we talk about integration, where we have work in progress on moral crumple zones, building on Madeleine Clare Elish. And then we highlight the importance of how you define the problem in the first place: as an engineer you should not just be thinking about designing, training, and implementing your system; you should start by bringing the different stakeholders around the table, to properly understand the normative complexity of your problem and who you should work with in order to address it. I am going to wrap up now; one more minute. Hard Choices, once more, is a diagnostic framework. The questions we ask in order to acknowledge and respect normative indeterminacy can, we believe, help to democratize the development of these systems, and we propose the development of channels for dissent throughout the development, integration, and life cycle of these systems. Here we build on work by the political philosopher Elizabeth Anderson. I will just flash this slide for a second, but the idea of dissent is to hold or express opinions that are at variance with those commonly or officially held. As an example, we are currently in conversation with a city that would like to conceptualize what dissent could look like for the processes they are developing for data science and for building machine learning functionality within the city. The implications, and then I will stop: normative indeterminacy requires a more honest account of the importance and the limitations of formal work. It helps, hopefully, to build bridges between crucial language communities: how do you integrate engineering design and governance, how do you translate legal requirements into design features? It necessitates a development practice that welcomes broad feedback on possible harms and gives affected stakeholders a real voice at the table. And it might help to overcome political controversy by providing a productive alternative to narrow interpretations of AI safety. I think this is a broader struggle we are in, and we hope to contribute to that conversation. So that is it; I will stop with this slide, which lists some references, and I am curious about your questions.

Thank you very much, Roel. Really great topics to contemplate, and very interesting work. Let us open up the floor for questions, opinions, discussion. If anybody would like to raise a question, please either use the raise-your-hand button or type it in the chat if you prefer. Yes, please go ahead.

I will; thanks for a nice talk, I really enjoyed it. I am just going to latch on to one point you
mentioned: hard choices should be justified. Aren't you then holding machines to a higher standard than we hold humans, somehow? Especially when decisions have to be made under time pressure; sometimes you just act on instinct, one might say. So how do you fit that into the framework? And of course I realize that if you allow for that kind of human sloppiness, you create a moral hazard of machines pretending to be sloppy. But what about the human norm as the standard in such a context?

Well, our focus is on safety, so hopefully safety concerns are worth addressing, and they are worth slowing down a development process for. Take Facebook as the example: ten years ago they were proud to say that they move fast and break things, and I think they have now learned a lesson, but they can still slow down a bit more and actually open up. These algorithms and systems are, in my view, a public interest; they are public infrastructure, and we probably need to build European institutions to have a counterpower, able to come and say: this is how we want these algorithms to work, and not the way you are developing them. That is a very long and slow process. And yes, you said holding the system to a higher standard than humans. In this work we do not really try to emphasize human versus machine too much; we look a little more at how the overall sociotechnical system performs. The work on moral crumple zones by Madeleine Clare Elish, which I briefly mentioned, is also really interesting here: it looks at responsibility when things go wrong. What you see is that when things go wrong, there is a very strong incentive for judges, and for the organizations involved in resolving what happened, to arrive at a human being who is responsible for it. In the Uber case, the woman behind the wheel pled guilty, and yes, she was looking at her phone when she should have been looking at the road; that is an obvious mistake. However: how much training did she get? Could they have done other things to make sure she was actually behaving properly? How much money did she make? Not so much, right? Shouldn't we make these kinds of test jobs a bit more lucrative, given that they are quite risky? So these are the kinds of questions I find interesting: looking at the overall system, both what we are building in terms of the algorithm and how it integrates into a car, and also the organizations responsible for it, what fail-safe mechanisms they have, to what extent they anticipate the obvious, or inherent, errors the system is going to make, and what happens then.

But then, if I can summarize: that also really is about the design stage, right? You are putting the brakes on at the design stage, or close to operations, to try to avoid getting into a situation where you can no longer make
the right decision. Yeah, exactly. It would be interesting to see how that works in a competitive space, because that is of course the thing: in a global competition, the AI wars or whatever people might call it, there is a tension between being safe and being first, especially if you do not control the whole space. So maybe a follow-up question: how do you envision that, if it becomes a kind of global race to be first? How can we still be safe?

I am trying to see if I have the book here; I do not think I do, but there is a book called Unsafe at Any Speed by Ralph Nader, who later was a presidential candidate in the US. He saw the same thing happening with cars in the sixties and seventies. These were flying colossi of thousands of kilograms moving through public space, and there were no seat belts yet, not many safety measures, no crumple zones, those kinds of things. He went around and talked with different engineers, and you see the same thing now: engineers know that the system is not safe, but the commercial world ignores them. At Tesla, various engineers have resigned because they did not agree with the Autopilot functionality, and that has led to various fatal accidents. So Ralph Nader talked with these engineers and with other stakeholders, and then he wrote a book about it, Unsafe at Any Speed, which I recommend to anyone interested in safety and AI systems, because that is the kind of work we need. There is a problem in the push for growth and for being first.

Definitely, thanks. Thank you. By the way, I can mention that Roel will have a bit of time after two o'clock, so if anybody wants to stick around after two, you can do that after the official part. I see that you have a question; would you like to go ahead?

Sure, thanks. That was really interesting. The child benefits scandal in particular I find incredibly disturbing, mainly because of the way that blame was diffused by the various actors that were heard about the scandal. In my view they managed to escape accountability by pointing at each other. So how do you view those dynamics in light of what you are proposing here, in particular with regard to dissent? It is not a very specific question, but I am interested in how you view that issue of culpability in these sociotechnical systems.

Well, with the Uber example you could take a dark view and ask: why did they put someone with not so many means and so little training into that car? She was vulnerable to begin with. That is what Madeleine Clare Elish calls a moral crumple zone: someone who can take the hit, and in this case it is not Uber. That is one thing she writes about that I really recommend reading in relation to your question. In the child benefits affair, one of the saddest moments was when the minister
for social affairs, former minister Lodewijk Asscher, talked about the letters he received. There was a grandmother who saw that her child and grandchildren, the whole family, were going off the rails: they went broke, they had all kinds of problems, and they were not guilty. And the standard response back then was: we cannot go into individual cases. So one response to your question is this: when we are talking about real safety issues, and in this case whole livelihoods were destroyed, you have to look at outliers, you have to look at individual cases. And in this case these were not even outliers; there were more than 20,000 families. I am not a public administration expert or a political scientist, and I do not understand how that happens, but I think we need a real push: we cannot ignore these individual cases, we have to put them front and center, and see how we can not just design better algorithms and systems, but also look at the broader context in which these systems are being developed and integrated.

Thanks, thanks. Any last-minute question for the official part? Anybody? Please feel free to speak up; we have another minute or so.

I have a question, if nobody else does. Can you think of any potential legal mechanism for ensuring that these channels of dissent are actually abided by, by those who ultimately have the decision-making power over what specifications, what features, what optimization patterns, et cetera, end up being built? Or is this a legal gray zone?

I do not have an example off the top of my head. I quoted the court ruling in the SyRI case, which is in a way a precedent; it says, at least as I interpret it, that you cannot just put a lot of input variables into a model and then assume that the output is legitimate. But I think we need much more detailed legal analysis, particularly for this case and for other places where this plays out in the context of bureaucracy, where choices are made about households and livelihoods. I think this is a really fruitful and interesting area where we need conversations with people in law and policy, and I am quite hopeful that we can come up with much better formal work that integrates these things from the start instead of afterwards. But I do not have good examples yet.

Thanks. Thank you. You are welcome to stick around for a bit more, but I will close off the official part. Thank you very much for joining today, everyone; Roel, thank you so much for presenting and bringing this to the community. I will stop the recording now, but you are welcome to stick around and ask Roel some more questions.", "date_published": "2021-04-07T11:13:50Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "eaa924d643a252a3f8a1b2688c7ba083", "title": "Of data scientists and AI from critique to reflective practice (Mario Sosa Hidalgo)", "url": "https://www.youtube.com/watch?v=c2wXNwdspzo", "source": "ai_tech_tu_delft",
"source_type": "youtube", "text": "uh\nand if you have any questions\nduring the presentation please use the\nraise\nyour hand button that you can find the\ncontrols on msteams\nif you don't find it you can type your\nquestion also in chat\nand in the second half hour we will open\nthe floor for questions and discussion\num and we'll i'll leave it up to mario\nif he wants to\nmake some pauses in the first half an\nhour so mario the floor is yours\nokay well thanks i'm jenny uh good\nafternoon everyone\nthanks for for inviting me here my name\nis mario sosa sabine mentioned i'm a phd\ncandidate\nat the keen center for digital\ninnovation the school business and\neconomics of the variety university\namsterdam sorry for repeating that but i\nwas part of my presentation\nuh then i just let me click yes so\nuh the agenda for today of course is\ngoing to present oh my perhaps that's\nnot necessary anymore\nthanks f jenny uh i'm gonna present a\nlittle bit more of the group the\nresearch group that i'm in at the\nprairie university university and um\nalso present you a little bit of my\nresearch which is related\nas you might notice with artificial\nintelligence and data scientists\nin the end we also can have a q a\nsession\nyou can please let me know all your\nfeedback\nand also your doubts and questions uh as\ni mentioned\nas i mentioned to you before my name is\nmario sosa\nhere i am the i am a phd candidate\nthe king center for the innovation for\nthe third time funny thing i graduated\nfrom the industrialist engineering\nfaculty of the tu delft last year\nyes still last year yeah last year my\ngraduation thesis\nis called design of financial toolkit\nfor the development\ndevelopment of artificial intelligence\napplications\ni graduated from the master of strategic\nproduct design\nuh important thing to mention i am doing\nmy phd\nin the i a i know project which is a\nproject that was funded by a\nnow open competition national grant that\nwas earned by kim\nking group the idea of this project is\nto answer the question what is the\nimpact of artificial intelligence in\nknowledge work\nwhich i'm going to cover a little bit\nlater in the presentation\nbut first let me introduce kim well\nshe's not kida\nshe's my promoter uh the king center for\ndigital innovation is\nled by my promoter marlene hussman uh\nwe are a research group that studies\ndigital innovation we take\nthree different areas uh for example the\norganization and digital innovation\nfuture work and also it\nhas to do with management of digital\ntransformation uh\nwithin that group i belong to a sub\nresearch group\ni know it's kind of funny what is called\nai at work which are\nthese phds uh and postdocs and everyone\nrelated to\nartificial intelligence in work setting\nwhat we do is to try to understand how\nthese practices are changing\nuh with the introduction of sorry these\nwork practices\nso our practices are changing with the\nintroduction of data driven technologies\nwhich i\ncalled ai from now on i'm sorry i know\nit's a technical term but i will use ai\nas\nan umbrella term sorry no machine\nlearning deep learning specific just\numbrella term uh our methods involve\ndoing ethnographic studies quite long\nethnographic studies to gain insights\ninto are the change\nchanges that organizations have with\nthese technologies\nover time uh we're currently\ninvestigating different cases\nuh where these technologies are\nintroduced into police off and sorry\ndifferent work and organization for\nexample police\nin the 
in the Netherlands, several hospitals in the Netherlands as well, and the human resources departments of multinational companies. In my case, I am going to do ethnographic research in an organization that does job matching, a consultancy for job matching. And that is why I am going to talk about my research, which is still research in progress. I am going to try to answer the question: what is the impact of artificial intelligence on knowledge work? In my case I will focus specifically on data scientists. Before going into that, let me present my personal PhD roadmap, because I think this is a good opportunity to get feedback on it. I am finishing my first year, and one of the deliverables from this first year is a conference paper, which has been accepted; I will talk about that later. During the second year I expect to start my ethnographic study at an organization that, for this presentation, I will call MatchingCo; the real name is of course confidential, and we have to respect that. The third year: nobody knows what happens in the third year, you are just entering the unknown; perhaps some of you can relate. And in the fourth year, of course, you start to panic because you need to finish your PhD. For this presentation I will focus on the paper. It is a good opportunity to present what I have been working on, and I would like your feedback; perhaps you can give me some suggestions and help me think along on this topic. The paper is titled Data Science as Digital Work: From Critique to Reflective Practice. It is a short, research-in-progress paper, and it has been accepted at the IFIP Joint Working Conference 2020, The Future of Digital Work: The Challenge of Inequality, to be held in a month, on December 10 and 11, online of course, due to the current situation. The context of my research starts with the new professions that are emerging in organizations due to AI. It might seem a little obvious that there are now data scientists and machine learning engineers, but they were not there before, and this has happened specifically with artificial intelligence, not, for example, with big data or the technologies before it. There is a big fuss around the influence of artificial intelligence, but most of the debate is either too high-level (AI will automate everything, it will automate all of our jobs) or too black and white (AI will control us, these companies will control us), as for example in the discourse around the surveillance capitalism concept. And what I notice is that there is still no clear understanding of the practices of these data professionals. We are assuming too much: we stay at the high level of what the technology can do, but we are not focusing on the people who are actually building it. We focus on the people who are affected, but not on the data scientists, who may well be affected too.
might be\nwondering why this is important\num well first these these professionals\nare in charge of building these systems\nright with all that this implies\nthe values the ethics the epistemologies\nall these things that\ncoming from these professionals to the\nsystems\nand they might be reflected on them as\nyou can see in this car\nsorry in this cartoon and also they are\ngaining influence in organizations\nthanks to this offer of objective\nrationality\nwhich is quite attractive for companies\nand managers\nuh of course this kind of race to power\nhas been criticized\nmostly uh most recently in popular media\nwith the social dilemma\nuh but also in research where there's a\nsuggestion of\nthe emergence of knowledge inequalities\nwhich we define these as disadvantages\nposition that\ndata science itself this this field has\nin organization\nbecause of this rationality speech\nuh there are other data science\ncritiques that come from the literature\nfor the critical data literate\nscholarship or literature\nthat they also cannot connotate these\nadvantages\nand also some of their consequences i\ncan present to you\nnow a table what i did a compilation of\nall these critiques\nuh there are some certain topics uh\ncertain important names for examples uh\nreeves\nuh and susanna socialist\nwho's like the author of the book the\nage of surveillance capitalism\nso the critiques go around the\ndomain agnosticity of data science how\nis just go there\nabsorbing knowledge without actually\ngiving something back which is related\nto this prospecting practices\nalso related to more social high level\nperspective this data extractivism uh\nconcept and data colonialism concept in\nwhich these authors\nargue that you know these companies just\nget our data is colonized our everyday\nlife\nand we don't get anything in exchange\nso what i realized when we problematized\nthis problem is that\nthese critiques are actually quite high\nlevel\nthey they get this inequality is an\nabstract level no\nempirics related so little is known\nabout what is the actual\ndata science practices and they are\naligned with how they are being\nportrayed they are the\nthey understand what the data science is\ndoing or the ai creators doing\nso that's why i came up with this\nquestion research question\npreliminary who says how does the\npractice of data scientists compare\nwith the critics or critiques about data\nscience or data science\nfor a research setup is pretty standard\nuh i made it the first stage\nthe research in progress between may and\njuly\nuh due to the current uh\nyou know current virus thing i had to do\nsemi-structural interviews\nand i didn't have the opportunity to do\nno graphic research due to\nthe situation but i did several\nsemi-structured interviews with data\nrelated experts with a lot of\nfrom a lot of organizations different\ncompanies i use snowball\nsampling and it's important to mention\nthat i am going to do the number of\nresearch which\nshould come actually from tomorrow i'm\ngoing to have a meeting with this\ncompany\nuh and uh after that i\nmade it just call it qualitative\nanalysis\nusing this software called atlas ti\nquite something that\npeople from design is used to i'm going\nto present you some of my preliminary\ninsights\nuh well it seems that some interviews\nsure that there's\nthis start acknowledging the limitations\nof\nthe representation the world\nrepresenting the world solely with data\nso there is an uncertainty there there\nis no only perfect rationality\nperspective 
there is something that they cannot account for. I like this quote that I got from an interviewee, who mentioned that we can never model all the information in the world in a computer algorithm. No matter how you do it, there is always something in the real world that keeps you from reaching that perspective of perfect rationality. Another important insight is that, contrary to what is criticized, for example by Ribes or Zuboff, data scientists do seem to engage in knowledge-sharing and exchange practices. They do not only take other disciplines' knowledge in order to apply data science to it and build artificial intelligence systems with it; they also share their data science knowledge, through communities of practice and chapters within organizations, and through similar communities outside the organization, which is quite interesting. Another insight is the emergence of the analytics translator role, which is popular nowadays; you see it used a lot in Silicon Valley. This is similar to a result of Waardenburg and colleagues (a colleague of mine) regarding the information officer: she did an ethnographic study in the Dutch police, who are trying to apply a predictive crime tool, and in the end the police needed to create this information officer role in order to translate the results of the algorithm to the officers on duty. This suggests that the data science process, the artificial intelligence process, is not as straightforward as thought; it requires translation. And finally, we found concerns about data science being automated and commoditized. Data scientists seem to be aware of tools such as AutoML, and of the, let us say, quote-unquote danger that these tools represent to their profession: it might get commoditized in the end, and then they will not get any further in the future. They even talk about changing profession, moving to a more applied role, machine learning engineer for example, or data engineer, something that is really in demand and cannot be commoditized. This suggests that the profession is not as secure as it is sometimes portrayed in the literature and in popular media. So, some final remarks, the discussion of the paper. Our insights suggest that these data scientists seem to be reflective about their own practices. They are not just sitting there in a company by themselves, doing artificial intelligence, being the evil people creating this AI to automate us all. Contrary to what is criticized, they appear to be as vulnerable as other professions, and to rely on other professions more than we think. It is not a closed group of people rising in power without having to communicate and without having to rely on other professions. So, what is next in my research, you might be wondering. As I told you, I am going to present this paper at the IFIP JWC 2020 conference next month, and I am going to start the ethnographic study at the organization MatchingCo, which is a recruitment organization.
I am going to do that for the next year to year and a half, face to face where that is allowed, because being on site is important for the legitimacy of the results of an ethnographic study; otherwise it might end up quite superficial. This company is a recruitment organization that is currently developing machine learning algorithms to optimize its job-matching process. They have a team, a chapter, of data scientists, and they seem really interested in the study. Before we move to your questions, a small commercial break. This is the upcoming new book, unfortunately only in Dutch, from some of my colleagues and of course my promotor: Lauren Waardenburg, Marleen Huysman, and Marlous Agterberg. It is a compilation of the ethnographic studies that have been done in this AI context. So if you read Dutch and are interested in learning more about how you can manage artificial intelligence in practice, you can get the book; it will be released on the 24th of November, and I think an English version is in the works. And that is it; I think we can start with the Q&A now.

Mario, thanks very much for this part. Let me see: anybody who has questions, please click the hand button. Luciano, please go ahead.

Thanks, Mario, for the presentation. I was thinking about what you said about the rationality assumption, and I have two questions connected to it. One: if I understood correctly, data scientists, or the market, see this rationality as an advantage for themselves. Do they also assume, when they are coding, that their code itself is completely rational, in a way? And the other thing connects a little to the concerns you mentioned about the work becoming increasingly automated: would that mean that they should be less rational, in a sense, but more focused? Would they have to let go of their current advantage? What do you think?

Well, first, thanks for those questions; all of them are great questions. On the first one: I think that is exactly what we want to find out, and that is why we need to do the ethnographic study. I currently do not know the answer, so I do not know whether they actually think about that rationality while coding. The only thing we have are other examples in the literature, which have not focused on this aspect of data scientists themselves: there are plenty of examples about scientists and engineers in general, but not about data scientists. That is why I am doing this research; it is quite new. The second one: the point about this rationality is not that it is simply an advantage. At least with the paper, what we are trying to do is push back a bit on the critiques that data science has attracted, like these
documentaries on Netflix and everything like that: technology sucks, we should just get rid of it, we have to get the ethics in. Which is right, of course, but we think these postures are too extreme and do not cover everything. The perspective data scientists have regarding their rationality is something innate to them; it comes, perhaps, from their institutions, from how they were educated. But if we can understand them, understand whether they believe this rationality gives them an advantage and whether they truly believe it while they are coding, then we can produce something that helps everybody align as well as possible, so that you can manage data scientists, and rationality as an advantage, without getting into the problems I mentioned before: the ethical problems, the epistemology, all of that. I hope that answers that part of the question.

Yeah, it did. I am looking forward to seeing what you are going to find in your ethnographic studies. Thank you very much. Thanks.

Hi Mario, thank you for your presentation; it is really fascinating to have you with us today. I really appreciate the question you have posed: how do data science practices align with, or relate to, the critiques of data science? When I think of the critiques, some of which you have listed: one thing you see a lot is that the critiques are aimed at the data scientists and their role, but they are of course often in organizations with other stakeholders and other decision-makers, who sometimes even force them to do things they have no agency over. So my question to you is: is the data scientist really your object of study, and to what extent is your object of study also these actual power structures and power relationships? The other object you could imagine is the actual normative choices, decisions, and trade-offs being made, and their implications. So you could track people, you could track power relationships and structures, or you could track actual implications. A PhD is a short time, but I wanted to challenge you, or just hear how you think about scoping your study at this point.

Yes. The main interest is to know more about data scientists, mostly because there is no complete account of them; we do not know what they actually do, because it is still an emerging occupation. Twenty years ago it was quite new; the term was coined around 2010 or 2012. So they are recent, and they have not been studied completely yet. And they are involved, as you mentioned, in this structure that runs from the data science team up to the highest echelons of management, and the idea is to look at that relationship as well. But of course I only have four years to do that; well, no, I just finished the first one, so I only have three years to do that,
and perhaps also to write my dissertation. But yes, I do see myself personally taking more of a critical stance, a critical lens, to check whether these data scientists are in fact the ones making the decisions or not. That is why these types of studies are so important: we keep getting these assumptions that data scientists are either evil or not evil, but we do not know what they are doing, or whether the managers are the ones taking the decisions, and how their practices change, how these decisions change data scientists' practices once they realize that the world is not as objective, let us say, that the world cannot be fully modeled with the data they gather. Do they need more data? What is going on there? So yes, you are right, I am planning to go into that in more detail. I will not be able to do all of it; I would need a lifetime, and perhaps, if I can get a nice research job after this, I can keep studying it. But yes, I am looking out for that. What we have seen so far, as I mentioned with the analytics translator role, is that they are quite disconnected: there is this relationship between management and data scientists in which they do not speak the same language, although they want to. It is really strange, because everybody wants to collaborate, everybody wants to build this AI, but they do not know how to speak to each other. That is something really interesting, and something we can test in the ethnographic study, to check whether that is the case or whether something else is going on. But we are on track with that.

Maybe to give you an example from my own experience as a data scientist: this was in Silicon Valley, where companies tend to, I often use the phrase, fake it till you make it. You have commercial people, marketing people, the CEO, going out to customers and selling certain technologies, and then the data scientists get something to build whose requirements are simply not feasible. The ones I worked with for a summer did not have much power, and they were sent into a conference room again and again, with the CEO saying: your accuracy numbers have to go up, because I promised the customer that the accuracy would be more than ninety percent. They were at sixty-seven percent or something, and if you just looked at the data, at the data collection they could possibly do, they were never going to get there. I had to help them, through more rigorous statistical analysis, to show that it was not going to work. And I did see a progression there, where data science actually got more input at the executive level, just to make sure that when people go out and promise things to customers, they do not over-promise. So it might also be interesting for you, in addition to longitudinal studies, to ask different companies how they have struggled in the past and how they adjusted their organizational structure and processes as a result, because I think what I ran into is not an anomaly; it is probably a pretty common experience from
the last, let us say, five to ten years.

Yeah, exactly; that is what we are experiencing and observing as well, and that is why we are doing this research. If you look at these critiques, the story is more like: data scientists are super powerful, artificial intelligence is everything, they are the ones who understand it, so they are the ones whose jobs will not be automated, so you should become a data scientist or a data engineer or a machine learning engineer. And what you are observing, as you mentioned, is that this is not the case. But you need to report it in a way that makes people notice that something is going on there, and then you can actually work on solving it. We were discussing, as you mentioned, perhaps doing a couple of ethnographic studies instead of only one: another company, a different setting, also a different field. Because if you think about it, there are technology organizations, which build technology, and there are regular organizations, which just apply a bit of technology within their normal business model. Those are quite different. The one where I am doing the ethnography is more like a regular organization trying to apply data science and implement artificial intelligence. So I am also wondering whether it would be a good idea, as you mention, to go to other companies to see how this has played out elsewhere. But yes, you are right: that is what I am going to do, what I am trying to do, to look for the truth, well, not the truth, but for what I can observe about what is happening, and why these critiques push so hard in one direction when the picture is not that clear. That is what we are trying to do, because we do not have a clear picture of who these people actually are.

Yeah. Let me just say that I have tremendous respect for ethnographic scholars who go and do this work for longer periods. I have experienced myself, not necessarily in Delft but in engineering spaces I have been in, that people do not quickly realize or acknowledge the value of this as much as I would like them to, while it could really be very valuable knowledge that can feed back into the way we educate engineers, data scientists, statisticians. So thanks for taking on this challenge.
debate\nabout\nthe causality that is at the heart right\nas we know correlation\nand causality are not the same yeah and\nand\num so i'm curious from what you have\nseen so far\nor perhaps maybe from things that you\nare planning to investigate\nin your if you're in your research like\nis this something that has come up is\nthere something that you\nare planning to touch upon like are data\nscientists\naware of uh what are the implications of\nthat because especially i think\nkind of my motivation for asking this\njust to be clear is that\nin many of the socio-ethical issues that\nwe\nobserve with how artificial intelligence\nis applied\nthis is one of the things that really\ncome up because\nwe see how certain judgments are made\nbased on\nrecommendations that ai systems do that\nwhen you start opening the hood and you\nask well hang on okay\ni see that there's a correlation in data\nbut\nfrom a causal point of view is this\nepistemologically like\ndoes it make sense to make such a such\nan inference\nso yeah i'm curious to hear what you\nthink about that yeah i mean\nthanks for your question jenny's really\nnice question i\ni i don't know if i will touch it in the\nfuture but i've been\ni made kind of like a mini research\nbefore we'll get into literature review\nand what we realized is that uh you're\nright these\nobjectivity ideas in inside uh\nthe development of artificial\nintelligence that come from\nuh the statistical uh correlation\nuh can be also involved in like general\nterms\nuh you know when we when artificial\nintelligence first came up\nit was more like a symbolic approach\nright try to\nimitate the world completely no only\nstatistical correlations were not enough\nbut then\nmachine learning came and then from\nmachine learning\nthere was also this\nbranch of okay because much the original\nthe original of\nthe objective of machine learning was to\ndo it to do the\nthe same our symbolic artificial\nintelligence but\nwith a different approach but then it\ncame this statistical approach and then\nthe statistical approach\njust got spread towards everyone so what\nwe realized is that\nthe problem around still there\nit doesn't matter if it's ai's symbolic\nor ai\nmachine learning uh but because for\nus the the the main factor of this\nobjectivity is that\nknowledge is perceived as something that\ncan be accumulated\nso as you mentioned you can say like oh\nthis correlation\ndoesn't make sense right and that part\nof doesn't make sense is related to\nknowledge when it is\na process knowledge\nin the philosophical perspective could\nbe either\nsomething that you can accumulate but\nalso a process of uh\ninteraction between people you share it\nyou use dude you move it around\nand that's the thing that we believe\nthat is missing in artificial\nintelligence\nand it's research they're so focused on\nthe idea of\ngetting all this knowledge that you can\naccumulate that instead of\nthe hard part which is focusing on\nthe symbolic part and you know the\nexpert\nsystems and we at least i understand why\nis the reason of that\nyou know making symbolic efficient\nintelligence is way harder\nnot as efficient as doing machine\nlearning but there are other ways of\ndoing\nartificial intelligence that for example\na hybrid\nuh perspective they are working in the\nvu with this hybrid perspective of\njoining machine learning with\nexpert systems or symbolic artificial\nintelligence but the point that we\nwe find it fascinating is this this\nperception of knowledge\nas can be accumulated or 
not even\naccumulated because it's not even\nit's not a symbolic perspective anymore\nbut\ninferred from incomplete data\nlet's say that there is not this\nuncertainty taken\ninto consideration so we believe that\nthat's kind of like\nthe issue with machine learning the\nuncertainty part the okay eighty percent\neighty percent is enough but\nif you think about a knowledge as a\nprocess eighty percent of your spreading\nknowledge is not the same\nso there is still there\nuncertainty part that is hard to to take\ninto consideration\nuh and also this uncertainty rises up if\nyou included the biased uh socio-ethical\npart or other things that might happen\nthe\npoor explainability so you need to get\nover the trade-offs so you're going to\npush the technology\nand but what we've what we found is that\ndata scientists seem to be aware\nthat this is happening aware that\nperhaps their definitions of knowledge\nare not complete it's not they're wrong\nit's just simply it's\nnot the whole picture and that and that\nthis is the\nperhaps they have they are they are\nstarting to reflect upon it this is the\nright way or as you mentioned\nthis is correlation doesn't make any\nsense but also we have\noh sorry so also the\nthe management perspective where are\nthey positioning the company so\na lot of factors there but what i'm\ntrying to focus in is in that\nthat perspective that they have and how\nthey are actually\nstarting to realize that it is not as\neverything is not as beautiful as put it\nlike in a beautiful correlation of more\nthan 0.05\nand then so and then following up and\nyou mentioned now the management\nso now let me flip it to the perspective\nof the\nother people in the organization\nsurrounding the data scientists\nso you're saying that data scientists\nbegin to\nacknowledge this more the the this issue\nyeah of indeed the knowledge how it's\nconstructed and the correlation versus\ncausality and the\n[Music]\nbut what about uh indeed what about the\nmanagement the people around the data\nscientists so\nbecause one of the things that we\nobserved there is the narrative uh right\ni think i've heard\nmarlene say that herself in one uh\nin the workshop that she's done kind of\nthe narrative of uh ai\nknows better right because there is this\ncommon narrative around us in the world\nthat we live that\nwell because ai is quantitative because\nit's automatic\npeople get the perception or it's often\nmarketed\nsometimes even aggressively kind of\nmarketed as well this is objective and\nyou know great um\nwhat do you see from the side of the\npeople surrounding the data scientists\nagain in the context of this\ntopic um yeah\nwhat we see is that i\nthink they they need to give\nto perform in their organizations and\nthat's why they are going to do whatever\nit takes to perform\ni guess but in the literature of\nmanagement you can find that managers\nare aware of different types of\nlet's say knowledge in a way so managers\nare reflecting themselves they are\nalways changing that's why they change\ntheir management styles\nbut at the same time you are right we we\nare looking at\nkind of like a process that is\nincomplete\neven between managers and data\nscientists\nso there's this knowledge perspective of\nthey are not interacting correctly\nbecause they they want to but they don't\nknow how to\nright so and even we got sold the idea\nthat ai is so great\ndata scientists know that there are some\ncases in which ai won't solve their\nproblems but they don't know how to\ncommunicate this\nuh 
and then, following up, you mentioned the management.\nso let me flip it to the perspective of the other people in the organization surrounding the data scientists.\nyou're saying that data scientists begin to acknowledge this issue more,\nof how knowledge is constructed, and correlation versus causality.\nbut what about the management, the people around the data scientists?\nbecause one of the things that we observed there is the narrative, right?\ni think i've heard marlene say that herself in one of the workshops that she's done:\nthe narrative of 'ai knows better', right?\nbecause there is this common narrative around us, in the world that we live in, that\nbecause ai is quantitative, because it's automatic,\npeople get the perception, or it's often marketed, sometimes even aggressively,\nas: well, this is objective, and you know, great.\nso what do you see from the side of the people surrounding the data scientists, in the context of this topic?\nyeah, what we see is that, i think, they need to perform in their organizations,\nand that's why they are going to do whatever it takes to perform, i guess.\nbut in the literature on management you can find that managers are aware of different types of knowledge, in a way;\nmanagers are reflecting on themselves, they are always changing, that's why they change their management styles.\nbut at the same time you are right: we are looking at a process that is incomplete,\neven between managers and data scientists.\nso there's this knowledge perspective of: they are not interacting correctly,\nbecause they want to, but they don't know how to, right?\nwe even got sold the idea that ai is so great,\nwhile data scientists know that there are some cases in which ai won't solve their problems,\nbut they don't know how to communicate this.\nas mentioned in the last question, one of the things that they might do is just rise\nin the hierarchy of the company, so they can have their decisions heard.\nbut at the same time you realize that this might also cause some problems;\nnot the idea that they rise into the hierarchy,\nbut the idea that they are not able to communicate properly with the management\nabout what the capabilities are.\nso in the end these managers accept technology which fails,\nand then they are not performing correctly because of this problem.\nso it's a problem that involves everyone in the organization.\nbut we want to go there and reveal to what extent it affects people:\nwho are the people most affected by this, what are their practices, what is happening around them.\nin my perspective, i believe that this picture is incomplete;\nthere's a wall between the two of them.\nfrom what i got in the interviews, the data scientists are willing to share, even some managers;\neveryone is willing to create the perfect product, to do everything for the company.\nbut still there is this interference between each other's knowledge,\ntheir concepts and epistemologies in which things go around.\nand what i'm trying to find is how this changes the data scientists' perspective and practices,\nbecause they might have started this data science profession thinking they are superior, because 'we are objective',\nbut little by little they realize that perhaps they are not as objective,\nor perhaps there is a problem they have taken for granted,\nor they have exaggerated their own results just for the sake of the company.\nthat's what i want to study in my phd, going deep into these organizations:\nhow this hype of ai has changed their own practice.\nand with this i expect that we can create a little something to help them navigate the company\nand avoid these problems, because if something goes wrong it affects everyone, even them, or the managers.\nyeah, and i think this is very valuable knowledge to accumulate,\nin terms of the kind of questions that we at ai tech at tu delft often deal with in terms of design,\nwhich you of course know better than me:\nto understand how we can steer things towards the kind of future that we want to see,\nit's so important to understand the existing practices,\nso that we know how to enter the conversation, to be able to steer things effectively.\nyeah, let me see, does anybody else have some additional questions?\nno hands so far. i can probably ask some more questions,\nrelated to what you mentioned about going there, because we need to understand it.\nthe way that i see it is that, having gone through academia from a technical perspective,\nbecause i have a background in mechanical engineering,\nthen moving through design and ending up in social sciences,\ni think i can see these problems firsthand by myself, and i can see a difference in perspective.\nand as you mentioned, it's really important to understand the why of a problem before starting to\nwork on how to solve it, right?\notherwise it's just a technical problem of: i'm trying to solve a problem that doesn't exist,\nthat just, i don't know
why this happens, but i'm just applying ai to everything that i know.\nbecause of course we are engineers, in a way: you are curious to create things, to manage all these things.\nbut as you mentioned, this is really important as a pre-step, a preface to the applied research\nof how you solve it, once you understand these practices and how they have changed over a certain amount of time.\nyes.\nyeah, and, let me see, an additional question: on one of your slides you said that the data scientist\nmight not be as secure as portrayed.\ncan you elaborate on that? i think maybe you've touched upon this, but can you talk a bit more about that point?\nyeah, yeah, i think i can go back to that one here.\nyeah, so the point is that we always have this narrative, this aggressive narrative, of:\nyou should be a data scientist, or a machine learning engineer, something related to ai, because that's the future of work.\nand we have always had that, even before, in the past, when you had it professionals\nand the internet, and 'everybody's going to be automated'; it's still going and going.\nbut the point here is: what happens when you automate the automators, right?\nwhen they feel redundant, because they are disappearing.\nand that's also important to acknowledge, because they might behave in a different way,\nthey might change their practices in order to not go extinct.\nso that's something interesting that we would like to explore further,\nbecause it is not as secure as it was portrayed.\nso: you always go to the tech world, you're never going to get your job taken by robots,\nbecause you are the ones that are making the robots and stuff.\nbut then you realize that that's not the case, at least specifically for the data science practice,\nalthough of course it might only be the case if you don't use tools like automl\nand other smart tools for making data science work.\nagain, this is similar to the phenomenon of web developers,\nand what happened with website creators, you know, wix and all these other pages that were created there.\nthe web developers became a little bit redundant, but in the end it just changed their job a little bit:\nthey switched to something different, to give a bigger, better experience,\nrather than just securing servers and making sure that the page was online all the time, available and accessible.\nso that's what i meant with this: things are getting automated and commoditized, so they're feeling threatened,\nand they have reflected on that, at least the data scientists that i spoke to; they understand that.\nsome of them are thinking about changing from data science to another field;\nsome of the machine learning engineers are planning to stay in the machine learning engineering field\nand not go back to data science, because they also believe that it's going to change a little bit.\ni even had an interesting response from one of my interviewees:\nhe believes that the data scientists will re-encounter their way through software development,\nbecause they have been treated as a shiny little new object, you know:\nphds being hired in organizations, they are really important;\nbut in the future they might get
into the same\nlevel as any other software development occupation or profession.\nand some of them fear that, some of them have just accepted it,\nbut these are some of their perspectives and insights, and it is interesting to know how this changes over time.\nyeah, so let me ask something that goes a bit beyond the scope of what you talk about here,\nbut you have an interesting journey and experience yourself, as you mentioned,\nin terms of the environments in which you've studied and worked: you had engineering, you had design,\nyou know, social science.\nwhen you think about the way our education works these days, how we educate ai professionals, let me put it like this, broadly:\nwhat kind of multidisciplinary knowledge should young professionals that graduate these days,\nwho will mainly focus on technical things,\nhave in order to work effectively in the real world?\ni don't want to sound, sorry, a little exaggerated, but i think that design education will work.\ndesign is actually what i noticed in my transition to the engineering world, to these organizations:\nyou just listen to their problems, to what these data scientists are experiencing,\nand there are so many gaps that designers can fill in there, like, absolutely.\ni think that design education belongs in any program around the world in computer science or something like that.\nof course ethics of the profession is a must,\nbut also a part of design education: how do you solve problems, thinking out of the box,\nhow do you become that bridge between one discipline and another.\ni think that's actually the greatest asset; i believe that's what helped me with this transition,\nso that i can understand these data scientists better,\nand i translated the technicalities to the business side or the social side.\nso yeah, long live design education, sorry.\nyeah, that really resonates with me, because the past two years, working on the things that i'm working on,\ni've been thinking: i wish that back in the day, as a technical student,\ni had exposure to design, just to some basic notions.\nso it definitely resonates with me, and i think probably with more people in the room,\nand i would love to see us incorporating some basic design education for our technical students indeed.\nyeah, let me see, does anybody else have... catholijn? sorry?\ni fully agree with that.\ncatholijn, would you like to add something more?\nyeah, i just said that i fully agree with that.\nyeah, does anybody... catholijn, do you have, by the way, any questions you would like to raise, or any more discussion points?\nit was a very interesting conversation, i liked it a lot.\ni think you're touching upon the important questions in these situations, and also in relation to industry; i find that so important.\nand i also, you know, wonder to what extent we can improve our education,\nso our students will be prepared for these kinds of matters and their responsibility.\nyou know, we give them the diploma, and we state explicitly, we point out to them, that\nthey have a responsibility to society, to act according to what they've been taught about: of
course the technology, but also the ethics.\nyeah, but are they prepared, by just stating to them that they have the responsibility? can they act on it? difficult.\nyeah, i agree completely with you.\ni think that perhaps adding a little bit more of this design education for all could help,\nand could improve their perspective into not only an advice of 'be responsible with this degree',\nbut also knowing how to communicate with others, how to work together with others,\nknowing how to react, how to recognize when something is not going as well as expected, and how to explore other fields.\ni think that's absolutely necessary.\nwhat i found is that these gaps are there; i think that there are not enough designers\nto fill in these gaps that i am finding and observing.\nyeah, and it's very much about bringing the different pieces of the puzzle together, right?\ni say these days, when i talk about this: i don't expect computer scientists to be experts in qualitative research, in conducting interviews;\nbut i would love them, and i speak as somebody who graduated from technical studies, i would expect myself,\nto appreciate the importance of working together with people who can study the context,\nwho can conduct the ethnographic research, conduct the interviews and do these kinds of things,\nbecause together we can piece the puzzle together and understand what to do.\nyep. any more questions? we have a couple more minutes left.\nso, okay, let me then maybe finish with a provocative question, you know; i want to throw it out there, i'm curious.\nso, the data scientists, right, and this will connect to what i was saying before:\nif you think about sciences like physics, for example, or chemistry,\nwhere we make some very explicit assumptions about the world\nand then we construct the knowledge based on the system, so again this will come back to the symbolic versus correlation.\nbeing very provocative: is the term 'science' in 'data science' appropriate,\nor, without being disrespectful or anything, is it more appropriate to talk about, indeed, machine learning engineering,\nyou know, data analysts?\nyeah, that's a great question. i don't know,\nbut i believe, from what i found in the literature and around my research, that this is an ill-defined occupation, right?\nso calling them analysts, calling them machine learning engineers, data scientists...\nthe point is that what happened is that it started to become more applied,\nand when that happens, perhaps you change from physics to mechanical engineering, or physics to civil engineering;\nthis might happen with statistics, and you change from statistics to data science, right?\nthe only difference is that one works with the applied world,\nand the other can afford the luxury of a more theoretical stance,\nwhen you don't actually need real-world data to explore what you do.\nso it might depend on the etymology of the word, but i just want to say that i believe it should be called something different;\nstatistics here should be more like 'applied', but 'applied statistics' sounds too dull.\nso i think that the term data science, and the role of data scientists,\nmight also have come with these marketing objectives
behind,\nbecause if i remember correctly it was proposed first by facebook and linkedin\nto describe their data analyst professionals and the groups handling their data,\nand it was coined, as i told you, around 2012.\nso i think it is still an emerging, still ill-defined occupation, kind of in the middle,\nmore or less like design:\nyou don't know what design is, right? nowadays there is interaction design, strategic design, business design,\nengineering design, product design.\nso i think they are right now in the middle; the only difference is that they are under the spotlight, right?\ndesign is still behind the curtains; it shouldn't be there, but well, it is still behind the curtains a little bit,\nwhile they are in the spotlight and getting a lot of attention.\nso i align with what some of my interviewees said regarding the future of it:\nwhen they stop being the new shiny object and just find their place.\nthey even got this nice new phrase, like, when they 'reach nirvana' or something,\nand they just finally get into the position of all the other software development occupations and professions\nand stop being this new shiny object,\nwhich is perhaps affecting them more nowadays, because now they already got the legitimacy that they wanted,\nbut now things are getting more complicated, with these socio-ethical problems that are caused by their creations.\nso yeah, i hope i have answered your question.\nyeah, thanks very much for sharing that.\nso, last opportunity we have for questions, anybody?\ncan i maybe respond? yeah, yeah.\nso, because i kind of agree with a lot of what you're saying, mario, but it is a very provocative question,\nlet me just respond to one thing you said, which is,\nyou know, the analogy of physics going to mechanical engineering,\nstatistics going, as the more theoretical discipline, towards data science.\nlet's remember that statistics is the field that is totally built on sampling procedures\nthat were designed to make sure that we could get as much valuable information out of as few samples as possible.\nso it's an inherently, deeply empirical kind of field; of course physics also has its empirical side,\nbut i think even more so statistics has really always been there to serve empirical studies.\nand so the way i look at it is that in data science, recently, we've been kind of ignoring a lot of that\nstatistical thinking about data,\nbecause, you know, we've often been given big data sets, and then, and i'm generalizing here, right,\nwe're making assumptions that the data is fine: let's just see what we can get out of that data.\nand through that we're not actually asking a lot of really important questions about the limitations of the data,\nwhich might then lead to negative implications that we then have to solve with ethics and other,\nyou know, design studies, to clean things up, right?\nthat's kind of the way that i look at it.\nso i think maybe we should actually go back to statistics, and to understanding it.\nand that's hard, right? it's hard to build a workforce that has those kinds of skills,\nand it will also, inherently, have us meet the limitations of what data science can do in
like\nespecially in more sensitive domains so\ni'm i just want to respond to that i'm\ncurious what you think about that\nand i know we're running out of time but\nwell i know very quickly i think that\nyou're right i mean\nit might be hard for them because uh\nyeah now that they\nalready obtained the legitimacy but they\nare different from statistics\nyeah i mean you're right like going\nthrough like\ntechnicalities of what a statistic is\nthis\nis hard uh for them to understand that\nthey might\nstill be doing statistics right uh but\nthat but then you i agree with you\ncompletely agree that there's something\ni think that they they are doing\nright now touching upon that is that\nthey are reflecting on that topic\nthat's why i'm doing the phd and uh or\nperhaps i am assuming but i've noticed\nthat this reflection might have to do\nalso with\nthat might end up in that it could be\nor not that you can just join data\nsoftware engineering as part of that\napplied\nside and just leave these statistical\nassumptions behind\nbut but i mean i agree with you i think\nthat it's a great exercise that we just\nreflect and that we problematize and\nquestion\nwhy why these things are the way they\nare in a way\nstatistics as the analysis that you have\nmade just like physics and\nmy example and the point is\nright now is how to understand whether\nthis field\nmight go through if you say oh yeah you\nshould go to statistics again\nhow they are how do they perceive their\npractices which is really important\nbecause even though\nwe want them to come back to statistics\nthey have the final\nword because yeah they are now they are\na legitimate field\neverybody in the world is looking for\ndata scientists or people that can\ncreate artificial intelligence\nbut yeah we must wonder what's what\nwould happen in the future and\ni will be there doing ethnography in an\norganization so\nperhaps when i when i finish that third\nyear that\nnobody knows what happens there i can\njust keep\nkeep keep up and then keep telling\neveryone oh i found this i found that\nperhaps they might go back to statistics\nwe don't know oh i don't know yet\ni need to get there mario i need to go\nbut\nthank you very much until we meet again\nbye-bye\nthank you mario mario thanks very much\nit's uh\nthank you very much for a really\ninteresting uh talk and a really great\ndiscussion yeah i think uh it will be\ngreat to have you coming back uh then\nlater on in your research share with us\nwhat you what you find out\nuh down the road so thanks very much for\nuh\njoining us today and uh thank you\neveryone uh who participated for joining\nand we see you at uh one of our next\nmeetings\nstay safe", "date_published": "2020-11-21T17:54:51Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "9d142db6ac97d0f0c50d3f37bc376454", "title": "AiTech Agora: Emma van Zoelen - Human AI Co-Learning Mutual Understanding and Smooth Collaboration", "url": "https://www.youtube.com/watch?v=W3jbp0QiG7Y", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "but before i started my phd because i\nwas looking for nice places that worked\non like human centered ai\nuh type of stuff\num\nso yeah very nice um so today i'll talk\nabout my phd work um and also just some\nmy general thoughts and vision on the\ntopic of human ai co-learning um and\nalso a bit how it relates to uh\nmeaningful human control\nbecause i already had a few discussions\nwith some people and thoughts so i'm\njust\nwell excited to discuss it with you\nso the title of 
my presentation is also 'growing mutual understanding and smooth collaboration': so, is that what co-learning can bring us?\nto give a bit of background about me and my vision:\ni believe very strongly that if we want humans and ai agents to kind of live together symbiotically,\nthey need to collaborate as team partners,\nand i think it is important that we focus on the different strengths of humans as well as ai agents,\nto really ensure that they can empower and support each other.\ni think that can help us make optimal use of the qualities of ai agents,\nwhile maintaining respect and appreciation for the qualities that we as humans have.\nand i think probably many of you agree with me on that.\nso, i have a background in industrial design: i originally studied industrial design in eindhoven,\nbut i also did a master's in artificial intelligence at utrecht university,\nand i think this combination causes me to look at ai and robotics from a very human-centered,\nbut also interaction-focused perspective: what happens in the interactions between systems and people.\nso my phd is about co-learning in human-ai teaming,\nbut i focus very much on the complexity of interactions that appear and emerge if you create such situations.\nso, if we talk about co-learning and start to think about what that means,\ni always start with: okay, what does it mean if we have two people that collaborate with each other, but that are very different?\nthey might speak a different language, or they have different ways of behaving,\nand that might mean that they don't understand each other from the get-go.\nwhat we do as humans to cope with that is: we adapt to each other.\nwe maybe learn each other's language, but also we change our behavior,\nand we do this in a very reciprocal manner, so both of us do that, or if we're in a larger team, everyone does that.\nthat of course raises the question: what happens if you're in a team with a machine, robot, ai agent, whatnot?\ndoes it understand us if we talk to it and behave? well, probably not immediately. so what do we do?\nyou can of course say: well, we just put the human behavior into the machine,\nso we let the machine adapt to us, and then we'll be fine, that's perfect.\nbut of course there's something a bit odd about this: the machine doesn't naturally have any hands, unless we give them to it,\nand i think also the insides of our brains are very different,\nso it feels a bit odd to let only one side completely adapt to us.\nso i always try to argue that you need this kind of very reciprocal and mutual adaptive learning process\nto really get to understanding, to become a well-functioning team, to get attuned to each other.\nso when we talk about co-learning, what comes to mind is a process in which two or more parties or systems\nchange their behavior and/or mental states while interacting with each other over time.\nwhen you look in the literature there are several terms you can use to describe this,\nfor example co-adaptation, co-learning, co-evolution. what do they all mean, when compared to each other?\ni think that they differ in terms of time span:\nif you talk about adaptation, that feels like something that happens ad hoc, implicit, short time span;\nco-evolution is like this huge, long process; and co-learning is somewhere in the middle.\ni think they also differ in terms of intentionality, so co-evolution is clearly
unintentional, just this process that happens that we can't really do anything about.\nco-adaptation can be maybe a bit intentional, in the sense that you want to be better at something, or improve collaboration, or whatnot,\nbut at least the actual changes can be relatively implicit.\nand i think co-learning is more intentional: we really want to improve the performance of a team,\nwe want to improve our experience, and we want this development to be sustained over time and context.\nso it's not just: we adapt, and then we adapt again and adapt again, in a continuous process.\nof course it is somewhat of a continuous process, but if we adapt something that works well,\nwe maybe also want to keep that and keep doing it.\nin relation to that, we can also wonder, in general: what does it mean to learn?\nprobably many of you have done the utq or some graduate school courses on teaching, so you might know this.\nwe can say learning means to change behavior, which is the very behaviorist perspective: we just change our behavior.\nwe can also say learning means to acquire knowledge,\nthe more cognitivist perspective, with the computer metaphor: we just gather some knowledge.\nbut the more constructivist perspective, and the one that we usually try to take, is also that it means to build meaning,\nand to try to make connections between behavior and knowledge, all these different things.\nso, to relate that to what i just said about co-learning: for me, co-learning consists of different parts.\nyes, it is also co-adaptation, which means these implicit and unconscious changes in behavior and interactions,\nbut there should also be some kind of feedback loop.\nthis is a bit more related to acquiring knowledge, but it is specifically about connecting the two:\ncan we become aware of implicitly developed behaviors,\ncan we reflect on them, and then see if we want to use them in the future?\nso this is the definition of co-learning that i go by.\nbut yeah: how to actually research that, how to facilitate such a process, what are the benefits,\nwhat does it mean if you actually have a human and a robot or an ai system?\nthis question is a very big part of my phd research: how do we actually research this?\nso i'm going to give two examples of experiments that i did that were an attempt.\none is an experiment i already did during my master's in industrial design,\nwith a focus on leader-follower dynamics,\nwhere i created a task where people had to navigate from one end of a field to the other side with a robot on a leash.\nthe idea was that they had to do this as quickly as possible, but in the field there were hidden objects,\nand the robot knew where those objects were; you could get points by finding the objects, but finding them costs time.\nthis was a wizard-of-oz experiment, really an exploration of behavior:\nhow do humans deal with this conflict of constantly trying to decide, okay, do i follow the robot or do i go my own way,\nand kind of moving back and forth between those?\nit's very interesting to see what kind of behavior comes out of it.\nsome people really only follow the robot; some people really lead and pull the robot towards the end of the field;\nbut there are all kinds of behaviors in between,\nwhere they closely watch the robot and see where it goes, and then pull it in the direction
that the robot\nwants to go, or all kinds of other different behaviors.\nanother experiment that i did is a virtual task, done last year as part of my phd,\nwhere the focus was more on observational learning and adaptation.\ni created an urban search-and-rescue kind of task: you had to save a victim that is buried underneath a pile of rocks.\nagain there are some differences in capabilities:\nthe virtual robot could pick up large rocks and break them into pieces, and the human could only pick up small rocks;\nbut the robot was very unaware of gravity and those kinds of things,\nso it might just pick up a lower rock and then everything would collapse.\nin this case the agent actually used a reinforcement learning algorithm,\nwhere it tried to decide between different rule-based strategies,\nand the human was very unaware of all those strategies.\nso, kind of on the go, the human had to figure out what the different strategies of the robot are,\nand implicitly pick one to focus on, and then hopefully the robot would also learn to use that one,\nwhich is also what sometimes happened.\nboth these experiments focus very much on the implicit part, the co-adaptation:\nexploring what happens if you put a human and an agent in an environment where there's this tension,\nand they need to learn to understand how to work with the other.\nsome things that i learned from that: of course, if you create a task like this,\nyou need a task and an environment with interdependence; it needs to be useful to collaborate, why else would you do it?\nyou need to provide the opportunity to have this implicit adaptation and learning.\nand what was very important for me, for research purposes, is that it's possible to have different strategies:\nif you have one clear optimal solution, then it's very easy to converge to that,\nbut the real world is often not like that; the real world is often dynamic, tasks change, you have preferences.\nso i deliberately design tasks that allow people to choose their own way of solving the task.\nfor the second experiment it was also important to have a simple enough problem,\nsuch that, in this case, the reinforcement learning agent could learn in a few rounds, so quickly, while working with the human.\nand you could also say maybe it's important to have a real-life domain, because it should eventually also scale to the real world.\nbut what i think is the key thing, what i thought was most important,\nis to facilitate the emergence of interaction patterns.\nwhat do i mean by interaction pattern? basically a sequence of behaviors, or interactive behaviors,\nthat can be distinguished as a kind of unit of behavior that repeats itself:\nif you come into a certain situation, there's a certain type of behavior that you show, like a strategy,\nbut one that is not really decided beforehand; one that emerges.\nwe often use examples from sports, for example football or any other team sport:\nwhen you know each other really well and you've played the game for a long time,\nyou know strategies for how to deal with specific situations that you encounter,\nand at some point you can also reflect on those and be like: oh, let's apply that when the opponent does something.\nso, okay, maybe i won't go to the figure immediately.
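As an aside for readers: a minimal sketch of the kind of learner described above for the rock-pile task. This is not Emma's actual experiment code; the strategy names, reward values, and the `run_round_with_human` stand-in are all invented for illustration. The point is only that an agent can learn a value per discrete rule-based strategy from a reward signal over a handful of rounds, which is what lets the human's implicit choice of strategy steer what the agent converges to.

```python
# epsilon-greedy value learning over a few discrete rule-based strategies
import random

STRATEGIES = ["clear_top_first", "break_large_rocks", "follow_human"]  # invented names

def run_round_with_human(strategy):
    # stand-in for one collaborative round; in the talk, reward related to
    # task time and harm to the victim
    base = {"clear_top_first": 0.3, "break_large_rocks": 0.7, "follow_human": 0.5}
    return base[strategy] + random.gauss(0, 0.1)

class StrategyLearner:
    def __init__(self, strategies, epsilon=0.2, lr=0.3):
        self.values = {s: 0.0 for s in strategies}  # value estimate per strategy
        self.epsilon = epsilon
        self.lr = lr

    def choose(self):
        # explore occasionally, otherwise exploit the best-valued strategy
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, strategy, reward):
        # move the estimate toward the observed reward
        self.values[strategy] += self.lr * (reward - self.values[strategy])

agent = StrategyLearner(STRATEGIES)
for _ in range(5):  # participants only played a handful of rounds
    s = agent.choose()
    agent.update(s, run_round_with_human(s))
print(agent.values)
```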
so, for me, co-learning is co-adaptation plus this feedback loop of reflecting upon it.\nit is especially this: something emerges, some kind of behavior;\ncan we become aware of the fact that it emerged,\nand can we then also name it, say: we call this behavior x,\nand we're now in a situation where we should actually apply behavior x;\nand then you can keep that behavior and sustain it over time.\nso then we can learn, and it's also tying that behavior that we felt and experienced\nto the knowledge that it works well, and thereby building some kind of meaning.\nin order to facilitate that, what i am currently working on as a next step\nis to build an ontology that is able to store that kind of information:\ninteraction patterns of behaviors, with requirements for the context in which they hold, or in which they are expected,\nand a way for a human and an agent to communicate about that.\ncurrently i'm focusing on the scenario on the left, where we give the human the initiative in that,\nbecause it's a lot harder to also let the agent or robot recognize certain patterns.\nso the idea is: okay, the human and the robot collaborate, they interact, they adapt,\nand as a consequence of that there will inevitably be some patterns of emergent interaction.\nin this case a human should recognize such a pattern and signal it,\nand then save it in the ontology by communicating it to the robot,\nmeaning that the robot, when it performs the task, can also look in the ontology and see:\noh, in this situation my human expects to show this behavior, or expects me to show a certain kind of behavior.\nand then, ideally, what you would also want is that the robot can signal this too, so it has initiative in this:\nthe robot can also say, oh, maybe you haven't noticed, but i see a pattern in how we behave; maybe this is a useful pattern.\nso that is something i'm working on, trying to achieve also with different students; of course it's not trivial.\nand then a question that arises, for me, and hopefully also for you,\nis: okay, but what does that then mean for meaningful human control, or control in general?\nbecause, well, co-learning also means to fail; learning means to fail, always, right?\nand i think it was very clear in some of my experiments.\nin this one, participants did this for, i believe, five rounds,\nand in some rounds they were really slow; sometimes they walked all over the field to find all the objects,\nand their overall performance was just really bad.\nin other tasks this can have even more severe consequences,\nmeaning that maybe the robot drops all the rocks on top of the victim, which is not really something you want,\nor we take a very long time, and the victim is maybe not very happy being underneath the pile of rocks all that time.\nmaybe eventually, after a certain time, it becomes better,\nbut in order to get there you also need to fail;\nyou need to let go of some kind of control to allow for that failing.\nso what does that mean for a real-world example? it's of course hard to imagine,\nbut i think that the process can have the potential to facilitate the emergence of meaning.\nif we have certain behaviors that emerge, and once you can name those behaviors, be aware of them, reflect on them,\nmaybe that is something that we can call meaning.\nso maybe, if we have a certain modus
of\ncollaborating that is there in the end,\nmaybe that is more meaningful than anything that we can design,\nbecause we learned to understand the other in a certain way,\nand i think that is how we humans often work:\nwe learn to understand by making these connections over time,\nlearning about the other, slightly changing our behavior.\nso, to finish up: i think there are many questions that you can ask, but i wrote down a few that i think are important.\nfirst of all, how do we cope with human failure?\nwe're very familiar with the fact that humans fail, and we accept that, to a certain extent,\nand we have methods with which we cope with that.\nbut what if we are in a safety-critical environment, how do we cope with failure?\nwell, some people in the military might probably say 'we don't fail',\nbut that is of course because they put all kinds of protocols in place, and also because they train a lot beforehand.\nso can we use similar strategies to cope with machine failure?\nthat also means we need to change how we look at that, and we need to start accepting that machines sometimes fail,\nbut make sure we have strategies in place that help us control for the machine failure,\nwhile the machine maybe can control for our failure.\nthen, of course, a very philosophical question: when do we start qualifying behavior as meaningful?\nit touches, in general, on the definition of meaningful human control, but also in general: what is meaning,\nand is it something that we can create or find through these exploratory processes of co-learning and whatnot?\nand then one that i always ask, not necessarily to wonder about, but to trigger people:\nwhat does it mean to optimize performance in complex and dynamic tasks?\nbecause a lot of the work we do in ai and robotics is focused very much on optimization,\nbut if you connect that to a term like 'meaningful', and to a process like co-learning,\nwhere you have a dynamic environment, where people have different preferences, and there's not clearly one solution to a task,\nwhat does it even mean to optimize?\nso those were the three questions with which i wanted to conclude the presentation, and hopefully open the discussion.\ni don't know if any of you has questions, but otherwise let's just start the discussion.\ngreat, thanks emma for this presentation, super cool.\nso we're opening up the floor for everyone else, so please unmute and have at it.\ngo for it, luciano.\nyeah, thank you very much for the presentation; i really liked it, and i also really like the slides,\nwhich is another kind of common thing for me to appreciate.\nbut i also like a lot the questions, and the fact that you try to make a connection with meaningful human control:\nwhat happens when we move from the lab environment, the simulation environment, to the real-case scenarios?\ni think it's really what we should be asking ourselves.\nand related to that: do you think that this work can have an anticipatory power?\nas you are saying, it helps understanding emerging interactions, emerging patterns and behaviors,\nso do you see it in a positive way, that it might create
the space for\nanticipating consequences and effects that we might have in real-case situations?\nyeah, i think so. how i look at it is, as i said, it is partly about the perspective that we have on\nwhat a machine or an ai system does, and whether we require it to not fail.\nso i think looking at these kinds of processes is, for a very big part, reflecting on this:\nhow do we look at this relation, allowing a machine to fail,\nand also accepting that a process like co-adaptation and co-evolution is maybe inevitable;\nbecause humans are adaptive anyway, and if we then start creating adaptive machines, whatever that may be,\nthen co-adaptation is inevitable, i guess.\nand doing research work that also appreciates that, because i think a lot of the time\nwe assume either the human or the machine to be static, which is almost never the case.\nyeah, but do you think that the kind of failure that you imply, and that you might experience in\nsimulations and lab environments, might be representative of what can happen in real applications?\nor do you think there will still be a lot of...\nwell, probably it depends on the case and on the kind of simulation you build.\nand i think there's a difference between the actual failure and the process of accepting that there will be some kind of failure.\nif you look at how humans collaborate in safety-critical environments, like the military or whatnot,\nthey also can't anticipate beforehand exactly what the failure will be,\nbut they can anticipate that there will be a risk of failure,\nand they can put strategies in place to make sure they know how to deal with that failure.\nand i think a process like co-learning is a way to start understanding how to cope with that:\nif you train together and you learn, then things will go wrong,\nand then you can develop behaviors, interaction patterns, that enable you to deal with failure situations.\nin a very not-real-life example: in the experiment that i did with the virtual environment with the rocks,\na type of failure that occurred is that the robot broke rocks in such a place that they then dropped on the victim.\nso some participants started creating behaviors where, if they suspected that the robot was going to do that,\nthey started placing a little bridge of rocks over the victim, to prevent damage.\nso you can say, okay, they have a very concrete strategy to react to a concrete failure,\nbut part of it is also looking at the robot and understanding its behavior,\nto an extent where they could know when it was going to fail, for example.\nokay, thanks. great, thanks.\ndoes anyone else have a thought or a question for emma?\nif not, actually, i have a question in the meantime.\ni think these are interesting points that you make; in terms of meaningful human control,\none of the issues that is also present is the rate of learning that we're considering, right?\nsometimes either the human or the robot is faster in terms of learning a certain pattern,\nor a certain, you know, preferred action.\nso, i'm curious, in your own research, how do you see that happening?\nyou know, in terms of, for instance: the human still has a little bit of control over what interaction pattern goes
into your\nontology set for instance right um but\nthe robot can be way quicker and how do\nyou so\nin your view how do you how would you\ncope with this or how would you handle\nthis\nis it a problem at all\ni think it's a very good question and i\nthink it is a problem\nso um\ni think in my experiment uh i try to\nkind of force uh the agents to learn at\nthe the rate that is similar to the\nhuman so\num\nfor example people played uh the little\ngame eight times and i had to ensure and\nand with two different scenarios so five\ntimes one scenario and then three times\nanother so i kind of had to ensure that\nthe robot would learn something\nsome kind of strategy or behavior within\nfive rounds such that the the human was\nalso able to kind of see that change and\nanticipate\num\nbut yeah in the real in real life the\nquestion is of course do you want to\nenforce that\nbecause learning has is a very strong\nbehavioral cue also so if people notice\nthat the robot changes its behavior then\nthat will lead them to maybe also change\ntheir behavior and\nultimately if you want to have a\nbalanced collaboration where indeed both\nagents have some kind of initiative at\nleast in the type of behavior that\neventually emerges yeah if one is is\nmuch quicker than the other\nthat is very it's going to be very\ndifficult so yeah i don't have a\nsolution for that i think in my current\nwork i just enforce it\nas much as i can\num\nyeah no\ni guess then they're the that's sorry\ndavid that's where the mutual\nunderstanding i guess that you also\nalluded to is is going to be really\nimportant to make sure that at least we\ncan still understand what has been\nlearned at some point to some extent i\nguess\nbut uh you know yeah yeah that is one of\nthe big challenges in this kind of work\nis you need to make sure that the human\nsees that the robot learns because\notherwise they'll just be like this\nrobot just does whatever i'll do my own\nthing and not care\nso you need to create this uh empathy\nsort of\num\ni just wanted to make a\na comment uh\non what what you just said uh about the\nlearning rate oh maybe i'm bypassing\nafghani i see a raised hand\nso just trace it\njust raise it okay good\num so um\nso actually i think what you did is a\nprime example of what in the military\ndomain\nwould be called putting the system under\nmeaningful human control so\ni'm not quite sure but so\nlet me check\nwith you now\nif it if the analogy makes sense\nso in this case it's learning right\nuh that that the speed of learning is\nkind of made\nmade in the same order of magnitude\nand in this case uh\nthat learning also i think has something\nto do with the initiative with how\nquickly the robot\nmight take initiative but there i'm not\nquite sure because i don't understand it\nwell enough yet perhaps\nbut so in the military remain they're\nvery scared of systems taking initiative\nby themselves\nand perhaps learning much more quickly\nwhat the target is that they need to be\nsearching out and how to identify friend\nor foe or whatever and do that much\nquicker than humans and there and that\ntherefore the system should then also\nmake the call to you know press the\nbutton\num and then there's this disparity in in\ni guess response time\num\nthat says well yeah you know there's\nmaybe uh advantages of having\nmilitary advantages of having the\nsystem uh being uh not under meaningful\nhuman control because then you're much\nmuch faster\num\nyeah and then\nand so that's the fright right and then\nmaybe then if the 
system makes an error,\nmy god, how could you prevent that, right?\nso yeah, i think what you're describing is maybe correlated somehow:\nyou're saying, well, right now i imposed the learning speed to be of the same order of magnitude,\nsimply because i want to make sure that the interdependence stays.\nso this is an observation, a thought that came up listening to you, and maybe i'd like to hear your reflection on that.\nyeah, i think it's a very fair, good thought, in the sense that of course you can also adapt the agent or the robot beforehand,\nby indeed enforcing a certain learning speed: adapting it beforehand versus in real time.\nbut i think it also depends a bit on control versus meaningful control.\nof course, maybe if the system can take initiative, that means the human is not in absolute control,\nbut it may still be meaningful control,\nif we decide that in certain circumstances we allow the machine to take initiative,\nbecause we know it can perform better, and within the boundaries of what we find acceptable.\nso yeah, this is what i always wonder about: if we say meaningful control, that means we don't mean absolute control, right?\nyes; so if that's a question, then we're at the heart of our research topic, emma,\nand i think there's still debate around that.\nbut maybe, indeed, my perspective is that control is then not that absolute top-down control,\nwhere you can control every nitty-gritty detail; otherwise a lot of systems are out of our meaningful control anyway.\nit also doesn't need to be top-down control; it could also be kind of bottom-up influencing, perhaps.\nin terms of 'meaningful', it also means that sometimes we implement systems that in principle\nallow people to overrule them, but that in practice, you know, are so boring and good enough\nthat people don't actually, de facto, stay in control;\nso it's possible, but it doesn't happen, and then that's also not meaningful control.\nso i have now spent quite a lot of words describing what meaningful human control is not,\nand i gladly drop the mic and give the floor over to people to define what it is.\nyeah, i mean, you can talk a lot about it.\nfor me, a very important component is what i started with, my vision on this:\nif we collaborate, that means we need to appreciate the qualities that the system has,\nand we need to appreciate the qualities that a human has, and be realistic about who does what better.\nand then, of course, as a human, maybe sometimes we say: okay, we sacrifice performance for something else,\nwhich is maybe also performance, but another kind of performance.\nbut that also means that sometimes you need to let the machine do its thing, because it is simply better at it.\nyeah, but what does 'better' mean? that is then of course another question.\nand, evgeny, if you allow me one more response to that:\nthis discussion also reminds me of work we've been doing in the automotive domain,\nwhere who's better is actually very contextual, so context-dependent.\nand that means: if you knew a priori in which context which system would be better, life would be easy;\nbut quite often contexts evolve
and perhaps even unknown\nand\nand then\nthere's this tendency that\nbecause we look at specific parts of the\ncontext we can say well there the\nautomation or the robot can do much\nbetter\nso we should do it there\nand actually\nwhat this discussion makes me think of\nis that also there you might say\nactually maybe it's not overall better\nthe way we design our collaboration\nsystem um but but but maybe in a larger\nscheme if we consider more context\nmaybe then overall\nis better rather than in one specific\ncontext so maybe kind of the widening of\nthe scope of your context\nit's actually maybe\nneeded to\nyeah to make the distinction between uh\nyou know what's what's good performance\nheather was your third point performance\nin complex dynamic systems uh um\nuh yeah so that was was another thought\nthat i said that maybe different\ncontexts\nneed to be\nput for your collaboration system in\norder to find out its weaknesses and\nstrengths yeah\nyeah i agree and um for me also that\nmaybe it's too sometimes too subtle to\ndetermine a priori and that is why we\nneed to um\nlook at co-learning right\nactually because it's an ongoing process\nto figure out the subtleties\nyeah\ngood okay so\nafghani\nthanks for the nice discussion\ni am i was so i was curious about that\nlast uh the third question that you uh\nuh post on on the last uh slide so so um\nwell so first of all i was curious if\nyou can share so from your research like\nwhat kind of experiences uh\num\nwhat kind of experiences you've had\nwhere you have seen this question uh\nreally uh\nlike play out with with\nwhere where then\none would struggle to say\nwell what is optimal because maybe like\nit's hard to define something that is\noptimal or the nature of the\nproblem and people's perspectives makes\nit hard to even talk about some one\noptimal thing so i was just curious if\nyou can share some experiences that you\nhad with them um yeah so one that we\nbasically almost always find if we try\nto do some kind of co-learning\nexperiment\num is um\nshort-term versus long-term performance\nand then subjective experience and like\nhuman understanding of what is happening\nversus performance\nso very often performance is um\nsomething you try to measure to see okay\nwe now made some kind of co-learning\nprocess but it doesn't increase\nperformance\num and\nbasically in all the experiments we've\ndone it's very difficult to within one\nexperiment\nuh c increase in performance\nuh sometimes it might even be a decrease\nin like and then with the performance\nhere i mean objective task performance\nso how long does it take to finish the\ntask how much did you hurt the victim\nbut if you then talk to the people who\nuse the co-learning versus people who\ndid not\nthey have a much better understanding at\nleast in words of what the system did\nthey can explain it much better\num\nthey sometimes\nwell they maybe they feel like their\ncollaboration was more fluent but maybe\nyou don't immediately see that in\nobjective performance\num\nso that is then exactly the challenge so\nthe claim we usually make is\nwell\nit makes sense that maybe you don't\nimmediately see a performance increase\nbecause the tests are sometimes complex\nand humans try different strategies that\nsometimes don't they're not necessarily\nefficient\nbut\nuh well we kind of assume or hope that\nin the long run because collaboration\nfluency increases because understanding\nincreases in the long run we will have\nsome kind of performance increase um but\nthen that is 
very hard to measure of\ncourse and to\nkind of prove then you need long term\nexperiments so i think that is uh where\nwe find the challenge so um\nfor that reason i measure collaboration\nfluency usually\nso it doesn't necessarily mean that i\nlet my agent optimize for that sometimes\nuh i mean in my experiment with the\nrocks for example the agent did try to\noptimize for\ntime and\nvictim harming um\nthat doesn't mean because you're in the\nteam context doesn't mean that it\nactually works so that that actually\nimproves\nbut that was just to give the the agent\nsomething to optimize for\nso yeah i think collaboration fluency is\nan important thing but i think a lot of\nthe things you want to optimize for are\nvery subtle and qualitative\nmaking it hard to optimize\ni was just gonna thanks i was just gonna\nsay because like it strike i mean it\ntouches something that i'm looking into\nin my own work but\nexactly what you just said that\nthe qualitative\naspect\nwhich can be so important in many\ncontexts whereas\nif we i mean by nature of mathematical\noptimization and if you want to put into\nalgorithm then things need to be reduced\nsomehow into numbers\nand then they're in certain contexts\nthere there there's a tension uh yeah\nthere's a tension there but but\nwell\ni mean just as a follow-up to that like\nwhat um\nfrom\ni mean part part i think part of what\nmany of us are figuring out as people\ncoming from\noriginally different backgrounds\nkind of common ways of working together\nand different disciplines have different\nuh\num well philosophies of approaching\nproblem but\nlike connecting to the previous question\nwhere do you see\nlike what what are some of your\nwhat are your views on how qualitative\nresearch\nplays a role in in you know informing\nthese decision design decisions that we\nmake engaging with people in a\nqualitatively rich way in a way that you\ncannot just present in a diagram\num necessarily you know with some\nnumbers but you really\nyou really need to get into like\ninterviews and then and yeah\nthe nuance of what people say and there\nwere there might be more than one\ninterpretation as to\nyeah\nyeah i think it's essential but uh it's\nalso uh\nhow i was raised academically because uh\nwell when i studied industrial design um\nwe basically the research that i learned\nwas more qualitative\nstuff\num\nso for me\num\ndoing my phd i'm constantly also trying\nto search for the right way to combine\nqualitative research with\nmore traditional\nquantitative analyses or doing\nexperiments where you compare have a\ncontrol an experimental group and trying\nto combine all of it\nthereby also trying to convey the value\nof the qualitative stuff to people who\nare not used to working that way\num\nyeah and trying to be very systematic in\nthat to make sure people believe what i\nsay\num yeah i think it's essential um\nand i think um people should\nmaybe always use mixed methods\num to really understand complexities of\nthings like this\nthanks i can definitely\ndefinitely agree with you that yeah\nthanks\nyeah thanks for skinny um\nactually am i if if i if oh caramel\nyou're first before i\nyou go for it\nso thanks emma that was super\ninteresting and fascinating although i\nhave to admit i didn't\nunderstand everything but\nso i but i had a question about the\n2020 experiment that you are\nplanning to do or probably already have\nthought about a little bit\nso\num so there seems to be an asymmetry\nbetween the the human and the robot and\nand of course it's very 
interesting that\nif you have a way for the robot, or the ai system, to find out that there is an interaction pattern;\nthat seems very challenging, but i think manageable in some sense, at some point; not for me, but perhaps for you.\nbut what i find difficult to see is, if you have found such an interaction pattern,\nhow you could communicate to a human what kind of interaction pattern it is you have found.\nso how can you communicate to the human, in a meaningful way, that there is this pattern, and in a way that is useful as well?\ndid you have any thoughts on that already?\nyeah, i have some thoughts, and maybe part of a solution; it's very much ongoing.\ni have a master's student who's starting just now to think, in general, about these communication questions:\nwhat kind of dialogue, or communication modalities, etc., should we use for this type of communication?\nhe's just starting, but i think, and my supervisors also strongly believe,\nthat you need a basic common vocabulary,\nand that is what i use my ontology for.\nso i create an ontology, a knowledge structure, that has certain concepts in there,\nlike: okay, we have actions, and we have objects, we have actors.\ni create kind of a template in which you can describe the interaction patterns,\nmeaning you can describe certain context factors and then, basically, a sequence of actions.\nof course, in reality it's a bit more complex than that, and maybe you also want conditionals in there,\nand expectations and whatever, but this is a first way to try to achieve it.\nand then we just assume that as the common vocabulary: we create that ontology,\nwe make sure the robot or the agent can use it,\nand we make sure it uses a vocabulary that is understandable to a human.\nin kind of an ideal world you would have to ground all these concepts;\nthere's of course a lot of research on grounded, situated communication,\nwhere you say: this is a mug, now you know what a mug is.\nwhat would be very cool is if you were to ground all the concepts in an ontology in that way,\nbut currently i assume that has been done, so i just take the ontology as a given.\nso, you need the common vocabulary.\nmy tea is coming in, yeah.\ncurrently, i can show you something: what i've done for my own next experiment\nis to create a little graphical user interface that allows the human to describe behavior, and it looks like this;\nit's just something i designed.\nbasically, just look at the right side of the screen: you have this little form that is just like,\n'a situation contains...' and 'what did we do?'.\nso you can say, for example: the situation contains large rocks at the top of the rock pile,\nand in that situation we let the human stand still at the side of the field, so we let the human wait,\nand then the robot can use that as a signal to pick up the large rock at the top of the pile.\nso this is really based on that experiment.\nthis is just a first suggestion of a communication thing,\nbecause there are a lot of questions, of course: do you want an interface, do you want natural language,\ndo you even want language, can you do this in a more haptic manner, or whatever;\nand it's all very dependent on the context, also.
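To make the ontology-plus-template idea just described tangible: a minimal sketch of what one stored interaction pattern could look like in code. This is an invented illustration, not Emma's actual ontology or schema; the field names, the context encoding, and the example pattern (built from the rocks scenario she describes above) are all assumptions.

```python
# one entry of a hypothetical interaction-pattern store: context
# requirements plus an ordered sequence of (actor, action, object) steps
from dataclasses import dataclass

@dataclass
class InteractionPattern:
    name: str            # human-given label for the emerged behavior
    context: dict        # requirements for when the pattern is expected
    actions: list        # ordered (actor, action, object) triples
    proposed_by: str     # "human" or "agent" initiative

    def applies(self, situation: dict) -> bool:
        # the pattern is expected whenever all context requirements hold
        return all(situation.get(k) == v for k, v in self.context.items())

wait_signal = InteractionPattern(
    name="human_waits_robot_clears_top",
    context={"large_rock_on_top": True},
    actions=[("human", "wait_at", "side_of_field"),
             ("robot", "pick_up", "top_large_rock")],
    proposed_by="human",
)

# at task time, both partners can query the shared store
print(wait_signal.applies({"large_rock_on_top": True}))   # True
print(wait_signal.applies({"large_rock_on_top": False}))  # False
```

One design note on the sketch: because both partners read the same structure, either side can take the initiative Emma mentions, the human by saving a new entry through the form, or the agent by proposing an entry it has noticed and leaving `proposed_by="agent"` for the human to confirm or revise.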
Fascinating, thanks — that really helps me understand how you think about this. Really great. I would love to read the work you have already done, and especially your next paper.
Sure — I have papers about the experiments I used in my presentation. For this I don't have one yet, but I will be running a small experiment in which people watch videos of the previous experiment and use this graphical user interface to describe the behavior patterns they see, and once that is done I'll probably write something about it.
One question ago you said we should definitely encourage mixed approaches to solving this co-adaptation problem, and then Herman asked about how we can basically describe or transcribe the behavior we are seeing — how the machine tries to recognize this collaboration. Do you think it is also valuable to do this for systems which do it implicitly — optimization systems such as, to take a very common example, YouTube recommendation, which optimizes for something really simple: money. It only shows people more ads if they watch more ads; if you are always clicking the skip button, it will show you fewer. It will show you addictive content when it knows you will watch addictive content — at night, at 3am, when you actually feel like it. Take these, let's say, toxic dynamics: in situations where an algorithm automatically builds user profiles and knows how to act so as to generate more revenue, no matter where on earth you are or what kind of content you're consuming — is it good to describe this sort of implicit co-adaptation?
I think it might be. There are different things here. On the one hand, the way I have looked at co-learning mostly assumes the agent is a robot-like entity, even if it's virtual, and this might be different if it is more like a dashboard-AI kind of thing. Within TNO we are also trying to apply co-learning in such situations, so we are thinking about how it translates and whether it is actually different. But in general I think it is also useful to reflect on harmful, bad, or simply ineffective patterns of behavior. For example: does a YouTube recommender assume that the human will also change their behavior? Maybe, maybe not — but it will happen. We all know it: at some point you figure out what the algorithm does, and then you try to influence it; sometimes it works, sometimes it doesn't. I think it would actually be very interesting if the website gave that back to you: "I noticed that you've done this, and that causes me to do this" — because sometimes you might be unaware of it; it is really about awareness. If the system did that, you could also say: "sometimes I just really like to watch these types of videos, but that doesn't mean you need to start recommending them to me — this sequence of interactions is not something I want." Then you can reflect on it. I think it might be valuable in the sense that it gives a certain transparency and makes people aware of their own behavior as well.
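As a thought experiment, the feedback loop Emma describes could look something like the sketch below. Everything here — the class, the threshold, the wording — is invented for illustration; it is not how YouTube works, just a minimal mock-up of "notice the adaptation, say it out loud, let the user veto it".

```python
from collections import deque
from typing import Optional

class TransparentRecommender:
    """Toy recommender that reports its own adaptations back to the user."""

    def __init__(self):
        self.recent_skips = deque(maxlen=20)   # rolling window of ad-skip decisions
        self.vetoed_topics: set[str] = set()

    def record(self, skipped_ad: bool) -> None:
        self.recent_skips.append(skipped_ad)

    def adaptation_notice(self) -> Optional[str]:
        # If the user skips most ads, the policy would adapt; say so explicitly
        # instead of adapting silently. Threshold is an arbitrary placeholder.
        window = self.recent_skips
        if len(window) == window.maxlen and sum(window) / len(window) > 0.8:
            return ("I noticed you skip most ads, which causes me to change "
                    "what I show you. Do you want to keep this adaptation?")
        return None

    def veto(self, topic: str) -> None:
        # "I like watching these videos, but don't start recommending them."
        self.vetoed_topics.add(topic)
```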
And if the algorithm recognizes that the human is changing their behavior to influence the algorithm, it can also say: "you're currently influencing me — do you realize that?" And then you can say: "yes, and I want this," or: "oh, maybe this is not what I want."
Okay — not sure YouTube would do this, but I think it would be interesting.
Yes, it's difficult to implement for millions of people. And of course a lot of these algorithms optimize for a population: they put you in a box, "you're this type of person, therefore I behave like this." The perspective I try to take is that everyone is different. I focus on teaming kinds of tasks, where your behavior is influenced by the past interactions you had within that team, so there it makes more sense to personalize for who you are as a unique person, given those previous interactions — not a box.
No, that's great — I actually think it's an important point too, Emma. It is also part of meaningful control, right: control over the mental model that the robot, the AI, or whatever has built of you. And I guess that's something you could test in your experiments with this ontology: people save an interaction pattern, then all of a sudden realize they don't like it at all, so they take it out again — something like that.
Yes — I hope to see that people save an interaction pattern, that the robot maybe then interprets it differently from how they assumed it would, and that they then need to revise it. That is how you get this continuous improvement, the co-learning process.
Okay, great. We have a couple of minutes left — we'll stick around a little longer — but I want to thank you, also on behalf of the audience, Emma. Thank you so much for this presentation and the discussion afterwards; it was super interesting, and we really sunk our teeth into it. Thanks everyone, and especially you, Emma. If you would like to know more about Emma and her work, there are links to her personal webpage and TU Delft pages in the invitation for this meeting, where you can also find her papers. Thanks everyone.
Thanks for the very nice discussions.
Go for it.
So, some people are leaving already, but if you want you can stay around. Emma, as you know we've been working with Nick on translating this to a physical task — and when I say "we" it's actually all of you, with me vaguely hovering in the distance above it — on finding a physical equivalent to the kind of co-learning you're looking at. I thought it might be nice to share some of the work we did on co-adaptation, so I have a one-slider that might be interesting for you to reflect on. Let me share that while we scare everybody away. We have a meeting with the students for the project tomorrow, right?
Yes — the other master's student. Lucas, indeed.
Very happy to hear that — and great that you're carrying it forward. So, this is the not-so-beautiful slide that I scrounged together from the work of Lorenzo Flipse, supervised by Nick and also by Sergio Grammatico from DCSC. What he is doing is looking at a task in which you use a steering wheel to control one degree of freedom, left and right — a kind of compensatory, or actually kind of pursuit task; it became a compensatory task, right? Anyway: you need to follow a box and minimize the error, so it's a very simple task. We looked at how well people do this over time when it is just manual control — here you see the gain of their control inputs — and we developed two kinds of robot-assisted interaction strategies. These are, you could say, not emergent but imposed behaviors: they dictate how the robot learns to do this joint action task.
One strategy says: if the human does less — if we estimate that the human gain is lower — then the robot should realize "aha, I should do more," because then it can guarantee joint system performance. That assumes the human won't adapt to it over time. But what do we find if we actually implement it? The robot does more when the human does less; in the first trial you are still working together, but quite soon the human has very nicely learned not to do anything anymore. We know this concept from the human factors domain as slacking: if the robot will do it, why would you even make an effort?
So we also explored the opposite: when the robot estimates that the human is doing less, the robot should do less too — and vice versa, when the human is doing more, the robot should do more. It is a kind of reward for engagement: if you are engaged, I'll help you do it even better. And what you then find is a kind of stable "equilibration" — it's a new word, it's very beautiful. That is the idea of trying to empower humans. The interesting thing is that this is all dynamic — it happens in real time, so all kinds of weird stuff could happen — but it doesn't seem to; it seems to be something stable. I just wanted to share that with you, Emma.
Very nice, yes. I think it is always very important to make sure that the human feels there is really a need to collaborate — why otherwise would you do it? And the more dimensions you add, the higher the chance that this will happen. In my experiment with the robot on a leash, for example — which was Wizard of Oz, of course — some people did push it all the way to one side or the other, but most people tried to find some kind of balanced collaboration, because there was an advantage: the robot had a capability the human didn't have, and vice versa. And some of these situations were indeed relatively stable, with people keeping the same balanced strategy.
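The contrast between the two imposed strategies described a moment ago can be shown with a toy simulation. This is a sketch under invented dynamics — a single "required" joint gain and a human whose effort decays unless tracking error pulls it back up — not Lorenzo's actual controller or data.

```python
def simulate(strategy: str, trials: int = 200) -> tuple[float, float]:
    h, r = 1.0, 0.5          # human and robot control gains
    R = 1.5                  # joint gain needed for good tracking
    for _ in range(trials):
        error = max(R - (h + r), 0.0)
        # Human effort is costly: it decays unless tracking error pulls it up.
        h = max(h + 0.3 * error - 0.1 * h, 0.0)
        if strategy == "compensate":
            # Robot fills whatever gap is left: it guarantees performance,
            # but removes the human's incentive to contribute (slacking).
            r = r + 0.5 * (R - h - r)
        else:  # "mirror"
            # Robot matches the human's engagement: effort is rewarded,
            # and the pair settles into a stable shared equilibrium.
            r = r + 0.3 * (h - r)
    return h, r

for s in ("compensate", "mirror"):
    h, r = simulate(s)
    print(f"{s:>10}: human gain {h:.2f}, robot gain {r:.2f}")
# "compensate" drives the human gain toward zero; "mirror" keeps both engaged.
```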
In the second paper I wrote about that experiment I also categorized this: you have stable situations, and then triggers that lead to sudden adaptations — to moving into a different stable situation. So it is not always gradual; it is more that you are in one situation, something happens, and you go to a new situation.
Complexity theory, right — boundaries and triggers. Yeah.
By the way, something you might find interesting as well: last Monday I had a meeting with Ayse Kucukyilmaz — you probably know her, at least vaguely by name. She does a lot of work on human-human interaction — from Bristol, of course. She reviewed that robot-on-a-leash paper of mine; it was a Frontiers paper, so she wrote very extensive reviews, thousands of words. What she has worked on, and is working on, is also very haptic, one-dimensional kinds of adaptation — I think very similar to these things.
Yes — "the role of roles" is one of our key papers.
So we are also looking into how we might apply these ideas of stable situations and triggers, and of learning different types of stable situations — maybe by using her data and analyzing it in that way, or maybe by setting up a new experiment.
How nice. She's doing well? She's in Bristol or something, right? Or no — Nottingham, was it?
Yes. And it was very funny — that is the nice thing about these open reviews, that you can see afterwards who the reviewer was and then actually reach out.
Yeah, she's great. But unfortunately we need to move to the next meeting already. Thanks, Emma — I just wanted to say one last thing about Lorenzo's data that you saw. With this controller, the mental model of the robot — the controller — basically decided that the human was terrible at the task, and at that moment it would completely take over, like you said. But then the human also never got the chance to take back control, because you can never do better than the robot at that point. That was an interesting emergence: a simple engineering implementation of this controller led to this weird behavior where the human had no control at all anymore. Just a little observation we also made in this work.
Yes — and thanks for the insightful discussion, this is cool.
Thanks for the conversations. And we will see each other tomorrow, I guess. Definitely. Thanks very much, bye-bye, good day, bye bye.", "date_published": "2021-12-10T11:59:29Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "f5eceb2f8ef13fde7bf2be77066858ac", "title": "Designing for Human Rights in AI (Evgeni Aizenberg) - 1st AiTech Symposium", "url": "https://www.youtube.com/watch?v=dVsIkwyZ9rQ", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Hi everyone, my name is Evgeni Aizenberg. I'm a postdoc at AiTech, and my project is on designing for human rights in AI. As you've gathered by now, I'm working together with Jeroen van den Hoven, among others. So, jumping right in: as we all know, artificial intelligence is rapidly growing in use, and the promise that we often hear — and that many of us have spent time working on, in the past and in the present — is to provide more evidence-driven decisions, to provide
more efficiency, and more automation — to hopefully leave us freer to pursue the things that we wish to pursue as humans, and less busy with all kinds of mundane tasks. But does AI really live up to these promises? Unfortunately, over the past decade or so we have seen many examples of the opposite. One example you might have heard about is the COMPAS algorithm, used in the United States to assess the risk that a criminal defendant commits another crime in the future; as ProPublica's investigation found in 2016, it was twice as likely to falsely label an African-American defendant as high-risk compared to a white defendant. We have seen cases where employee performance assessments performed by algorithms resulted in the firing of talented employees — as with Sarah Wysocki, a schoolteacher in the US who was highly valued by her fellow teachers and by the parents of the kids she taught, but who was fired when she received a low assessment score based on her students' test scores from that specific year. We are all aware of the shockwaves of the Cambridge Analytica scandal, which has shaken our election systems and election cycles in our democratic states, with disinformation now bombarding us on social media and the internet. And as you might have heard, China is taking a very particular direction in how it wants to use AI in its society: a whole social credit score system is being implemented, in which every citizen will receive what is basically a "how good a citizen are you" score, based on vast amounts of data — from CCTV cameras to, you name it — which can affect things like your ability to purchase a train or plane ticket, or to send your kids to a school of your choice.
So some of the major social and ethical issues surrounding AI that we see are discrimination; unjustified, unexplainable decisions; privacy infringement; disinformation; the job market effects of automation, and people's anxiety about how their jobs will be affected as more automation takes place; and of course safety. What I would like to impress upon you today with these examples is that these technologies are of major consequence to people's human rights — and when I say human rights, I mean such fundamental notions as human dignity, freedom, and equality. In my view this is one of the biggest challenges of our time: right up there, second to the climate crisis — which all of us should be worried about regardless of our occupation — but not far behind it, because of how disruptive this technology can be. And don't get me wrong, there are a lot of great things we can accomplish with AI — but if we get many things wrong about how this technology impacts, as was said earlier, our freedom of choice and all those other dimensions, it can be immensely disruptive to our societies. This has of course been getting increasing attention over the past decade, both from the technological side of the spectrum and from the social science and ethics-of-technology side. But unfortunately, a lot of the technical solutions developed to address these issues — fairness, for example, and discrimination — are ones engineers often develop as
solutions without the input of societal stakeholders. Engineers often focus on the machine learning model, the input data, and the output data, but the larger contextual information — and the interpretation of the people who are affected by the technology and who use it — is often ignored. At the same time, calls for ethical AI point out the issues but often fail to provide answers on how we proceed to fix them. Bridging this socio-technical divide is what my project is about, and I take a design-for-values approach to doing so. Because of how impactful AI is to people's human rights, we want to use human rights as top-level design requirements — in the kind of design approach that Jeroen van den Hoven talked about, these should be the values that guide the human-centered design of these systems. The other critical component of this approach is that we work together with the stakeholders, engaging them through a range of empirical methods, with methodologies like value sensitive design and participatory design, and with a combination of qualitative and quantitative research — which is very important, and this is where the collaboration between disciplines comes in. This is why we need each other, as engineers and social scientists: to translate what human rights and their implications mean in a specific context of use, for a specific system — whether that is a decision-support algorithm in criminal justice or an automated driving car — and then translate that into corresponding technical requirements.
One comment about what we mean here by human rights: we make an explicit choice to ground the design roadmap in the EU Charter of Fundamental Rights. The values at its foundation are the values at the foundation of EU law: dignity, freedom, equality, and solidarity. These are of course not exclusive to EU member states — they are shared by many cultures in the West — and at the outset we should recognize that different value choices are likely in many other cultures and socio-political systems. So to be clear: we are not trying to impose our view of society on other countries; we want to show how these technologies can be designed over here to support the values that we treasure over here.
To give you a glimpse of how this process is structured, imagine that we want to design for the values at stake in that criminal risk assessment case. For the criminal defendants, an important human right at stake is obviously freedom. If we look at the freedom articles in the EU Charter, we have the right to liberty, and in the context of criminal justice one of its main implications is that a person may not be subjected to arbitrary arrest: if you arrest a person, you need to be able to prove in court, with standing evidence, that the arrest is warranted. The next question is how you translate that into a technical requirement. In general this is an important moment, because in situations where there is no obvious technical solution to support that norm, it can sometimes be a stimulus for innovation.
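To make the shape of this translation concrete, here is a minimal sketch of the value-to-norm-to-requirement chain just described, in the style of a design-for-values "values hierarchy". The specific requirement strings are illustrative placeholders, not requirements from the project.

```python
# Value -> contextual norm -> technical requirements, as a simple lookup.
VALUES_HIERARCHY = {
    "freedom": {
        "norm": "no arbitrary arrest (right to liberty, EU Charter art. 6)",
        "context": "criminal risk assessment",
        "technical_requirements": [
            "a risk score alone must never trigger detention",
            "every score must be traceable to contestable evidence",
            "decision-support output must be explainable in court",
        ],
    },
}

def requirements_for(value: str) -> list[str]:
    # Engineers work from this contextual translation, not the abstract value.
    entry = VALUES_HIERARCHY.get(value)
    return entry["technical_requirements"] if entry else []

print(requirements_for("freedom"))
```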
Indeed, historically we know that when faced with moral dilemmas, we have produced new technologies that allowed us to resolve those dilemmas. But I want to make the point that there will be situations when norms cannot be fulfilled with technology, and the responsible thing to do then is to stop. It is not obvious that AI should be introduced in every context; more AI is not always the right answer. In that sense, meaningful human control over AI requires, in my vision, eliciting contextual design requirements by truly co-designing with stakeholders: engineers, individuals affected by the AI's decisions, direct users, field experts, policy experts, and so forth. Moving forward, we want to engage in case studies in which we implement this vision in specific contexts — and I'll be glad to interact with you today to hear your ideas — to learn to communicate efficiently across diverse backgrounds, and finally to transition from these studies to design protocols in specific societal domains.
In closing, I want to say that designing for human rights should not be viewed as an obstacle. On the contrary, I believe it is key to achieving innovative partnerships between humans and AI, to understanding the roles that we can play together, and to helping humans benefit from more valuable human-to-human interaction. It is not an easy path to tread, because we need to learn to communicate across all the different disciplines we represent, but I believe it will bring long-term benefits to all stakeholders. And with that, I call on you to design together for humanity. Thank you very much.
Thank you, Evgeni.
[Applause]", "date_published": "2019-10-29T16:17:08Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "fcd560e992d5a4209029e943a3730a8d", "title": "AiTech Agora - Stefan Buijsman: Defining explanation and explanatory depth in XAI", "url": "https://www.youtube.com/watch?v=J5_-vmhsrv0", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "All right — welcome, everyone, to the weekly AiTech Agora meeting. My name is Arkady, and I will host today's session. Our guest speaker today is Stefan Buijsman. Stefan is a new postdoc joining AiTech at the faculty of TPM, TU Delft. He has a very interesting inter- and multi-disciplinary background, involving computer science, philosophy, mathematics, AI, and cognitive science — a lot of interesting work. Aside from research, Stefan is also engaged in popular science and science communication writing: he has written several books on philosophy and mathematics, and recently also on AI and its links to philosophy. Today Stefan is going to talk to us about explainable artificial intelligence. I give the floor to you, Stefan. If we have any questions, I think there will be plenty of time for them; meanwhile, just put them in the chat and I will manage that. Okay — the floor is yours.
Thanks. So yes, I'm talking about explanation in AI today. As I think all of you know — and I don't really need to go into any depth here — we have all these new AI tools which are hard to interpret, and we want to explain why they give us the output they do: specifically, why for some input x you get output y, which could be the decision that someone is a potential fraudster, or the decision that there's a cat in
an image — whatever you like. The philosophical work I'm presenting in the next thirty minutes or so is on the question of what explanation really means here: what kind of information qualifies as explaining why the output is what it is. There is also some preliminary work on what makes for a good explanation, because, as you'll see, the definition I'm proposing allows for some really uninformative explanations, which we obviously want to avoid somehow. That's the philosophical plan for today.
I want to start with the kind of XAI definitions you find in the technical literature, to show not only that there is some disagreement here, but also that these definitions tend to be very broad and could use further refinement. For example, you find explainability defined as the ability to understand the logic of the AI, or an explanation defined as an interface between people and the algorithm that both accurately tracks how decisions are made by the AI and is comprehensible. I think we can all agree that this is what we want, but definitions like these — including a last one, which says an explanation is meta-information generated to describe feature importance — mostly don't give you a very clear idea of what kind of information to provide to people in order to explain an AI system; or they can be, like that last one, very specific in singling out feature importance, where I think we should be a little broader in how we understand things.
So instead of staying there, I want to take something from the philosophy of science, where a big debate on explanation has been going on for the last twenty years or so. A very popular idea there is that explanations in science are, generally speaking, causal explanations. A disclaimer: this is definitely not the only position — there are plenty of philosophers who disagree, and there is a debate to be had — but for now I will assume we can take on this idea of explanations being causal explanations. You can see the intuition in very concrete attempts to explain something. An oft-used example: you can't really explain why a flagpole has a certain length by appealing to the length of its shadow — that sounds fishy, compared to explaining the length of the shadow from the height of the flagpole that casts it. There is some kind of asymmetry in what we count as a good explanation versus a mere derivation of one length from another, and the idea is that this asymmetry of explanations tracks very well with the asymmetry of causes.
That idea hasn't gone unnoticed in the XAI literature, so I am certainly not the first to appeal to explanations as specifications of causal relations: an influential 2019 literature survey, for example, defines interpretability and explainability as the degree to which a user can understand the causes of a decision. But there is a lot more to be said than just the idea that explanations are causal, and to say it I want to make a brief side step on causation. The final account of explanation I will be presenting is Woodward's,
from the early 2000s. He starts off with the idea that causation is based on counterfactual relations: a set of variables X is the cause of an effect Y if changing X's value produces a correlated change in the value of Y. So if you look at counterfactual situations where X would have been different, you see that, correspondingly, Y would have been different: changing the cause also changes the effect, and if there is no causal relation, the effect is unchanged. There is a whole bunch of further specification of how exactly the changes to X have to be made — that is where the image on the slide comes in, and something we can go into further in the discussion if someone wants to. The point is just that it is not simply "if you change X in any old way, then Y has to change": the kinds of changes made to the causal variables have to be fairly specific.
That said, this idea of causation as a counterfactual relation is then used to further explicate what causal explanations might be. An explanation is typically seen here as consisting of two main parts. On the one hand, there is a kind of general rule — a generalization, if you will — relating changes in the cause variables X to changes in the effect variable Y, so that there is some description of how cause and effect correlate with each other. Furthermore, you want this generalization to cover the current case: given the values the variables X actually take, it yields the output value that was actually observed and that you want to explain. Finally, the idea is that this is a decent explanation if the generalization is approximately true, if the current case is covered by it, and if there is some counterfactual situation, also covered by it, in which the input variables are different and the rule tells you how the output would be different.
Putting this into the context of AI: an explanation consists of a rule relating input values to output values; the rule has to cover the actually observed output you want to explain, so that it predicts that for the current input you get the current output; and it has to cover some counterfactual situations, where you have a different input and would get a different output. That is the minimal setup that Woodward's account from the philosophy of science requires for explanation, and it applies fairly readily to the XAI context. What is interesting, I think, is that you actually find bits and pieces of this idea spread throughout the literature — only taken separately.
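As a toy illustration of those two conditions, here is a minimal sketch in code. The model, the candidate rule, and the inputs are all invented; the point is only the structure of the check: the rule must cover the actual case and at least one counterfactual with a contrasting output.

```python
def f(income: float, debt: float) -> str:
    # Stand-in black box: the classifier whose output we want to explain.
    return "reject" if debt > 0.5 * income else "accept"

def rule(income: float, debt: float) -> str:
    # Candidate covering rule: "rejected when debt exceeds half of income".
    return "reject" if debt > 0.5 * income else "accept"

def is_explanation(rule, f, actual, counterfactuals) -> bool:
    # Condition 1: the rule covers the actual case.
    covers_actual = rule(*actual) == f(*actual)
    # Condition 2: it covers some counterfactual with a *different* output.
    covers_contrast = any(
        rule(*cf) == f(*cf) and f(*cf) != f(*actual) for cf in counterfactuals
    )
    return covers_actual and covers_contrast

actual = (40_000.0, 30_000.0)               # observed input -> "reject"
counterfactuals = [(40_000.0, 10_000.0)]    # lower debt -> "accept"
print(is_explanation(rule, f, actual, counterfactuals))   # True
```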
So we have these two conditions — a rule covering the actual case, and the counterfactual — and we have a lot of tools that implement just one of the two. First, there are quite a lot of tools in the XAI literature nowadays focusing on just counterfactuals. A paper from 2018 made popular the idea that you can explain the output returned by an AI system by pointing to a counterfactual instance: if the input had been different in this way, then this different output would have been returned. Now, that is half of what Woodward proposes with his definition of explanation, and I want to suggest that this counterfactual approach, providing only a single instance, works well only if you find a counterfactual that suggests a plausible rule — the second part of what I am pushing as a definition of explanation.
There are different ways to see why. Here are two examples from ordinary physics where you can feel the difference. Take two explanations of why a window breaks when a baseball hits it. First: if the baseball had hit it at a lower speed, the window would not have broken. I think we interpret this fairly readily, because we link the energy with which the baseball hits the window to its speed, and we have a lot of experience with speed being an important variable here. If, on the other hand, the counterfactual presented is "if the earth had been much heavier, the window would not have broken," it takes a while to comprehend how this counterfactual — which is true: if the earth were heavy enough, the baseball would hit the ground before reaching the window — might explain the situation in any way. These two examples are convoluted and not in an AI context, but I would suggest the second is a bad explanation because it does not come with a plausible covering rule.
A quick response could be that such counterfactuals are very distant from the actual case under consideration — you see a lot of mention in the XAI literature of counterfactuals having to come from a plausible subspace of possible inputs, and the earth being much, much heavier is not very plausible. The issue is that there are still quite a lot of close counterfactuals that fit all of these conditions and still don't give us a clear reason why the output was returned the way it was. Consider natural adversarial cases: the sun, on the left of the slide, might be interpreted correctly by the AI, but show it a picture of the sun in ultraviolet light and it may well — as in this instance — classify it as a jellyfish. Showing that counterfactual won't tell you anything about why the AI was correct on the left image or, vice versa, why it was wrong on the right one. And this can happen even with very small changes: in the image at the bottom left, the AI decided the image contains a banana, probably based on the yellow shovel — and just changing the color of the shovel turns the classification back into the correct one, a dragonfly. Here we can get some idea of why this is happening, but just showing the counterfactual does not elucidate why specifically those changes are relevant — and that is precisely what the covering rule is supposed to do. So presenting a counterfactual by itself seems insufficient for a full explanation, and the empirical evaluations of XAI tools that use only counterfactuals seem to support this.
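To make the single-counterfactual style concrete, here is a sketch in the spirit of the 2018 proposal discussed above: search near the input for the closest point that flips a toy model's output. Note what it returns — an instance, but no rule saying why the change matters. The model, the grid search, and the distance metric are all stand-ins; real tools optimize over a plausible input subspace instead.

```python
import itertools

def f(x: tuple[float, float]) -> int:
    income, debt = x
    return 0 if debt > 0.5 * income else 1   # 0 = reject, 1 = accept

def nearest_counterfactual(x, step=1_000.0, radius=30):
    base = f(x)
    # Brute-force grid of perturbed inputs around x.
    candidates = (
        (x[0] + i * step, x[1] + j * step)
        for i, j in itertools.product(range(-radius, radius + 1), repeat=2)
    )
    flipped = [c for c in candidates if f(c) != base]
    # "Closest" in plain Euclidean distance; the metric choice matters a lot.
    return min(flipped,
               key=lambda c: (c[0] - x[0]) ** 2 + (c[1] - x[1]) ** 2,
               default=None)

x = (40_000.0, 30_000.0)                 # rejected applicant
print(nearest_counterfactual(x))         # nearest grid point that gets accepted
```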
There have already been a couple of studies showing that simply presenting users with a range of counterfactuals doesn't really help them improve at predicting system outputs; it seems they don't get a proper understanding, this way, of how the system makes its decisions and why it makes the decisions that it does. The hope — at least, the idea that the philosophy of science offers — is that adding the covering rule will help there.
That, on the other hand, brings up the question of whether we can do with just a rule. There have, after all, been tools doing exactly that. To name one example, from 2017 there is the idea that you can explain the behavior of a classifier f simply by showing the subset of images it applies to. Here you have just a rule; it won't cover any counterfactual cases, because it focuses specifically on the images for which the classifier returns one — in this case, the images it labels as robins — and it refrains from mentioning any counterfactuals whatsoever. So that suggestion is out there. I think the question for these kinds of suggestions is, first of all: what exactly is the classifier attending to? We get this set of robin images, but we don't know why it picks exactly those images for the label "robin"; we don't get any independent conceptualization of the model's behavior. But more importantly, there is the general finding in psychology — also mentioned in the literature review by Miller — that explanations tend to be contrastive: there is always some specific contrast you are looking for. Why is it a robin rather than another bird, or rather than an inanimate object? Why is it yellow rather than blue? These contrastive focuses will be missing if you have a rule that doesn't cover any counterfactual cases. So independently of the philosophical account, there is the general finding that people prefer contrastive explanations, and will in fact offer contrastive explanations almost all of the time — and adding the counterfactual component fits very well with that human tendency to focus explanations on a specific contrast.
Okay, to sum up this part on defining explanation in XAI: from the philosophy of science we have this account, originally by Woodward, on which you need two components for a minimally adequate explanation — first, a generalization that covers the actual case, and furthermore a counterfactual case that is also covered by that generalization. You need both in order to have an explanation.
The question, however, is whether these are also going to be good explanations — and that is why there is a second part, on explanatory depth. There are very easy cases of things that count as explanations on this account which we definitely don't want to push in the XAI context. If you have a black-box algorithm, you could very readily present as your generalization exactly the function approximated by the black box, and it would be a covering rule counting the current instance you want to explain: the function approximated by the black box returns the actual output when given the actual input, and it contains lots and lots of counterfactual cases. But clearly, just having this probably very complex approximated function isn't going to elucidate the algorithm to anyone. So why is this explanation not a good one? Or is there
some kind of feature we can look for that will point us to what good explanations look like? As it says on the slide: how can we distinguish this explanation, and other bad explanations, from explanations that are actually good and that we do want to present to users? I will go into two specific factors that have been picked up on in the philosophical literature: the abstractness of variables, and the generality of the explanation. The idea is that you already very often see people in the XAI literature say that if an explanation is broader in some way, or more general, then it is better; philosophers have been saying the same kind of thing in their discussions of explanation, and there are some further subdivisions to be made about what type of generality, exactly, is relevant for determining whether an explanation is a good one.
That being said, I will start off with abstractness — and again with an example, to get an idea of the distinction people are after. Suppose you have two competing explanations: on the one hand, you might explain why a pigeon pecked at a stimulus by saying that the stimulus was scarlet; on the other, you might explain why the pigeon pecked by saying that it was a red stimulus. Typically we prefer the one with the more abstract variables: saying it was a red stimulus seems to explain more than pinpointing a specific shade of red — provided, that is, that this pigeon will peck at anything red, and not just specifically at anything scarlet. So this kind of abstractness of the variables, when it matches the actual behavior of the pigeon — or of the AI system — seems to help us: we tend to like more abstract explanations over very, very specific ones. As for the reason, there are probably quite a few — cognitive load and so on — but the reason from this philosophical perspective is that more abstract variables allow you to answer more why-questions. The idea is that an explanation answers a question: why does the bird, or the AI system, act in this particular situation the way it did? An abstraction is one way to answer more of those questions, by covering, for example, all shades of red in all the situations in which this bird will peck, as opposed to only the cases where there is a scarlet stimulus. Abstractness thus leads to an increase in generality, which is helpful to us in explanatory situations.
A little on how to define this abstractness, then. There are different accounts here; one of the easier ones to follow is the idea that a variable is more abstract than another when its actual value is implied by the less abstract values. To show how this works: if an object is scarlet, then it is automatically also red — simply by knowing which shade of red something is, you know that it is red. Similarly, if something is a fridge, then it is a kitchen appliance. This gives a sense of how the levels of abstractness build up, though there is definitely more to be said here, which I'm happy to talk about.
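That "implied-by" ordering is easy to mock up: the sketch below walks a toy taxonomy from a concrete value up to ever more abstract ones. The taxonomy itself is invented, just to show the ordering.

```python
# Each entry maps a value to the more abstract value it entails.
IMPLIES = {
    "scarlet": "red",              # knowing the shade tells you the colour
    "crimson": "red",
    "red": "coloured",
    "fridge": "kitchen appliance",
}

def abstraction_chain(value: str) -> list[str]:
    # Walk from a concrete value up through its levels of abstraction.
    chain = [value]
    while chain[-1] in IMPLIES:
        chain.append(IMPLIES[chain[-1]])
    return chain

print(abstraction_chain("scarlet"))   # ['scarlet', 'red', 'coloured']
```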
The same holds for AI systems: the input variables of a model are probably going to be much less abstract than, for example, the concept-based explanations that Jay is working on. So simply copying the exact behavior of the model — this idea of presenting the approximated function of the black box as an explanation — is going to score very poorly on abstractness, at least most of the time; in the case of images, this is very clearly a bad idea.
On the other hand, going all-in on abstractness isn't the absolute answer either: you can make explanations too abstract. One limiting case is where you explain why the pigeon was pecking by saying that it was either presented with a red stimulus, or provided with food, or tickled on its chin, or... — some really long disjunction. That gives you a very abstract, very widely covering explanation, but not one that we are pleased to hear about; it tends to be an explanation we find very uninformative. So then again you have the question: why is this a bad explanation? The suggestion I will follow is that it is not specific enough. To make that a little more precise: the issue is that you can change the values of some of the variables in the explanation without having any effect on the outcome. Sticking with the pigeon, to keep things concrete: as long as there is a red stimulus, the bird will peck — it doesn't matter whether there is food, or whether you tickle the bird — so this very broad generalization with lots of disjuncts contains variables that are completely irrelevant to changing the outcome. More abstract explanations are typically better, then, but the limiting case is that you shouldn't make them so abstract that you add irrelevant information — for example, lots and lots of disjuncts that make the explanation more general but take away from how specific it is for users.
Finally, then, there is the idea that explanations are better because they are more general — that is, in the end, where the explanatory depth of abstractness comes from, too. The question is what type of generality, exactly, matters here. The thought is that by answering more of these why-questions — by explaining more instances of an algorithm's behavior — you have a better explanation, and there are at least two relevant ways of cashing that out: you can look at how widely the explanation applies, the breadth of the generalization, and you can look at how accurate it is.
Take the breadth of the generalization first: the idea goes that the more inputs the explanation manages to cover, the better. I didn't say this very explicitly at the beginning, but this account of explanation allows for some inaccuracy: the rule could cover some inputs but predict the wrong output, and — at least according to Woodward and these philosophers — the general explanation might still be a good one, provided it covers enough of the inputs in its range accurately. So you can expand how many of these inputs it covers, or is supposed to cover. Note also an additional factor to consider: it is not necessarily the case that an explanation applies to only a single black-box algorithm.
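Here is a minimal sketch of how you might score a candidate rule on those two axes for a toy model — breadth as the number of inputs the rule claims to cover, accuracy as the fraction of those on which it predicts the model's output. The models and numbers are invented, and real metrics would be more careful.

```python
def f(income: float, debt: float) -> str:
    return "reject" if debt > 0.5 * income else "accept"   # stand-in model

def rule(income: float, debt: float) -> str:
    return "reject" if debt > 20_000 else "accept"         # simpler, imperfect rule

# The set of inputs the rule claims to cover.
covered = [(30_000.0, 25_000.0), (80_000.0, 25_000.0),
           (50_000.0, 10_000.0), (40_000.0, 21_000.0)]

breadth = len(covered)                                     # how widely it applies
accuracy = sum(rule(*x) == f(*x) for x in covered) / breadth
print(f"breadth={breadth}, accuracy={accuracy:.2f}")       # breadth=4, accuracy=0.75
```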
I think that is where this notion of the breadth of a generalization can actually help us out of the issue of why the function approximated by the black box isn't a good explanation: that approximated function applies only to that very specific model, whereas more wide-ranging explanations — which might point to features like a model acquiring bias, or adversarial cases — can apply to much more than a single model. These are the kinds of explanations that, informally, you see researchers give for model behavior: they explain that the model behaved this way in these cases because it acquired bias — there was some correlation between the financial data and income inequality, or a correlation in the training data behind the sex bias in hiring at Amazon, for example. It is these very broad-ranging explanations, pointing to general features across models, that we find the most helpful. So for these explanations, the — at least theoretical — suggestion would be to look at how they apply to more than just the single model as well.
That being said, simply making an explanation range as widely as possible won't work either, of course, so we have to counterbalance breadth with the idea that accuracy matters too: the number of correct outputs the rule gives you, or the general accuracy of the outputs it predicts. There will be different ways of measuring this exactly; a very simple one is to look at whether the predicted output is the same as the actual one, or how much it deviates from the actual outputs.
To wrap up this part on explanatory depth: I have discussed how abstractness, breadth, and accuracy all factor into how good — how deep — an explanation is. Of course, much more is going to be relevant to whether an explanation is good; I think factors like cognitive salience, or the precision you have in the contrast class, will matter, as well as more human-computer interaction factors to do with our specific psychology. But based on this philosophical account, at least, abstractness and breadth seem like relevant criteria to mention that are not automatically available in the literature.
So, to conclude: I have talked about how we might define explanation in XAI, and I have suggested that we could follow this account from the philosophy of science, by Woodward, of having a general rule covering the actual case as well as one — or, in practice, many more — counterfactual cases. Not all of those explanations will be good ones, but explanations with variables at an appropriate level of abstraction, with decent breadth and good accuracy, are promising from this philosophically theoretical standpoint. Finally, this also helps us focus on contrastive explanations, which is something we know to be important but which can still be missing from current XAI tools. Okay — thanks for listening, and I guess we can open the floor to questions. Arkady?
Yes — thank you, Stefan, for an exciting talk. Does anyone have questions? Please feel free to raise your hand, type them in the chat, or just speak up.
I have something. — Yeah, go ahead.
So, a really interesting talk, and an interesting perspective on how abstractness and generality can aid in this exploration of explanations. I think I have more of a comment, and
wonder what you think about it\nuh do you think so to me all of this\nsounds really great\nand it sounds like we could we could\nindeed use abstractness and generality\nto try and\npaint out explanations in a helpful\nmanner but i think the crux of it all\neventually ends up on the user needs\nright so on this very last concluding\nslide as well you say\nuh in order to identify good\nexplanations\nall of these would sort of depend on the\napproach\ndepend on identifying the appropriate\nlevel of abstraction\nas well as a reasonable breadth and\naccuracy\nright and all of these metrics are\nuser dependent right so what is a\nappropriate level of abstraction is\nsomething that\nis heavily reliant on the purpose that\nyou're putting the explanation to\nright so what use uh is it's serving\nso i wonder how uh you know using\nabstractness and generalization\nin the context of chalking out user\nneeds\ncould uh you know play a role in serving\nexplainable ai\nso i think these two notions are quite\nnice especially so if you can tie them\ndirectly with user needs yeah no i think\nthat's a good point\none thing which might be of help here is\nthis\nidea of having the the contrast class so\nby saying that the explanation is\nanswering a question why is\nthe output this rather than something\nelse and then the\num the contrast you're looking for might\nhelp you specify what level of\nabstraction you're interested in\num so i think if you're looking\nat a a contrast class with a low level\nof abstraction so why is it\num red rather than blue you might have\nto focus on a more specific\nexplanation as opposed to a wider\nranging question\num yeah i should be able to come up with\ngood examples here but\ni i think that hopefully the idea is at\nleast somewhat clear that this\nby by picking which contrasting examples\nyou're trying to explain\nso how the current output differs from\nwhatever output the user might expect\nthat you might be able to determine to\nsome extent at least\nwhat level of abstractness and what or\nwhat what kind of explanation you are\nsupposed to be offering\num i the the there's a further question\nof how much this depends on\nindividual psychology um\nif you look at the sort of philosophical\nliterature on this then they tend to\ntake a very objectivist\nstance with the idea that there is going\nto be some kind of\nlevel of abstraction that is the most\nappropriate for virtually everyone and\nthere is going to be a lot of agreement\nuh on which explanations we think are\ngood ones and which explanations we\nthink are bad ones\num so this this is probably taking it a\nlittle too\nsimplistic because they they just assume\nwell you have the relevant background\nknowledge and so on\num so there's going to be difficulties\nthere with implementing\nand i think you're very right to point\nthat out um\nbut but yeah i think some of it might be\nmitigated with these contrast classes\nnot that that's going to be an easy\nsolution but there might be something\nthere\nindeed so i guess you could you know\ninvolve the end user in the loop by\nasking them to provide\na desirable level of you know a contrast\nto example or contrast\nexplanation they need right yeah all\nright interesting all right\nthanks a lot very nice talk by the way\nthanks so well um next we have a\nquestion from wiki but\nuh unfortunately you cannot ask it uh\nyourself\ni'll read it for you stefan so uh how do\nyou consider explanations in ai systems\nthat don't directly make decisions\nuh for instance perception 
based applications of computer vision, smart cameras, or speech recognition, like Google voice assistants. So I think it is basically about the scope of explanation: whether you explain the decisions of a system or just the mapping between input and output.
Yes — I think this works quite generally, just in terms of mapping input to output. The context the idea comes from is physical systems, where you want to explain why one physical event happens based on previous physical events — for example, why an apple fell from the tree, using Newton's law of gravitation. And I think we can do a similar thing with different AI systems. If they make decisions, those might be the most natural thing for users to ask why-questions about, but for smart cameras or speech recognition you can just as well ask: why did the system recognize this phoneme as this one rather than that other phoneme? Why did the camera recognize this part of the image as a fridge rather than as a table? And then you have the same question about the correlation between inputs and outputs, across counterfactual cases. So my hope is that this applies quite generally — if there are challenges there, I'd be interested to know — but I hope it is quite a general framework.
Good, thank you. I hope that answered the question — if not, please follow up in the chat. Next in line we have Jay.
Thank you. Thanks, Stefan — it's really nice to see the full story, and it makes me a little more confident about what I'm doing, because it maps very well onto the project I'm working on: using concepts to try to explain what exactly a computer vision model has learned. My question is this: I can already imagine setting up a computational pipeline to really leverage the ideas you presented — to what level of depth we need to take the concepts for the explanation, how we collect the counterfactual examples to verify that the model has really learned something. But what I was wondering is: would such a computational pipeline really contribute to the theory? Would that be helpful at all?
Well, there's a reason we're already working together, right, Jay? No — I think you're exactly right that these concept-based explanations sound like a very promising way of working out this theory. You are also working on correlations between input and output, and these will automatically cover counterfactual cases. I haven't given a detailed discussion of other existing XAI tools here, but there are definitely tools out there that already give you, to some extent, a rule that covers counterfactual cases — LIME will do something like this, even though the explanations won't be very general. So the theory is, in my case, explicitly aimed at being useful for generating these tools: the idea here is to try to spell out what type of information might be relevant and
what factors to consider — and it's very nice that this maps so naturally onto what you're already doing. That should be the end goal of doing the philosophy here: getting more detailed specifications of what the XAI tools should be delivering.
Thanks, that's good to know.
Great, thanks for the questions. Next we have a question from Evgeni.
Yes, thanks — hi Stefan, thanks very much for a really interesting talk, a really interesting look at the topic. I wanted to ask your thoughts on the following. Given how important this is, and given this account — let's say we want to follow this kind of account of what a good explanation is — then before considering building an AI tool in a given social context, maybe we should first determine what a good explanation is in that social context, and only once we have determined that, decide on the means to provide such an explanation. And then approach that from a socio-technical point of view: thinking about what kind of social infrastructure — organizational processes, policy — jointly with whatever technical artifacts are part of that infrastructure, could actually provide that kind of good explanation. So we start from what a good explanation is, and then we go to what kind of tools. I'm curious what your thoughts are about that, if you have any.
No, I think that's a good approach. In general I would be skeptical of diving straight in and saying "this is the final answer on what good explanations are." I think it would be relevant to first do human-computer interaction studies on whether explanations of this type are actually more helpful than single counterfactuals — this hasn't been studied yet, and I think it would be very important to do — and also to see whether these philosophers' intuitions, that abstractness and breadth and so on are relevant factors for good explanations, actually bear out. It wouldn't be the first time philosophers just exchanged intuitions to the point where they start confirming their own theories. So that kind of study, on whether these are good explanations, would be good. And your suggestion — to start, in a given context, from "we want to give explanations: what exactly are we aiming for?" — would be a really good basis before starting to develop tools that you can't be sure will actually help with the problem you're trying to solve. So yes, I think that's an interesting approach: start with the question of what good explanations actually are in whatever context you're working in, and move on from there. Hopefully this framework will help a bit in doing that.
kind of\nreverse this around right and say no but\nhey explanation\nis a critical requirement to have in a\ncontext regardless of whether we're\ngoing to use ai or not\nso let's first try to settle to some\ngood extent what kind of explanation\nand maybe that inspires a better social\nprocess around it and actually maybe\nthat inspires\nus to a new innovation in terms of the\nai tools themselves\nif we kind of flip this around yeah\nthanks thanks very much thank you for\nyour point\nthen i will read out a question from luke\nwho is also\nnot in a good talking environment so his\nquestion is\nis it reasonable to expect that a very\nsmart human will be able to\nunderstand the most simple explanation\nthat can be provided\nbecause i'm not sure i do\nwell if i\ninterpret it a little bit so\nmaybe the simplest explanation of\nwhy the entire model is behaving the way\nit does might be very complicated\nbecause it's still a very non-linear\nmodel and\nso the simplest explanation for the\nfull model behavior might be\nquite difficult still and i think\nthat's a good point\nso one of the\nsuggestions there might be that we\nand i think this is actually\nmaybe something up for debate but we\nmight not\nneed a full explanation of the entire\nmodel\nfor any particular why question that we\nhave\nso it could very well be the case that\nwe can\nmanage to answer our why questions\nwithout these\nfull global explanations of the entire\nmodel if they\nturn out to be too difficult and that\nwe can offer more focused explanations\nof the different\nparts so i think that the easy cases\nhere to\nfocus on are say adversarial\nexamples\nor biased outputs\nwhere we have a simple explanation for\nspecifically that part of the model\nbehavior\nand so if we can somehow manage to carve\nup\nour model behavior into these different\nparts then maybe that will be enough to\nunderstand it\ni think that's an open question so\nit's a good point to raise\non whether we will ever be able\nto fully understand these\nbut it's something\nto strive for anyway
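one way to picture that carving up, as a rough sketch: fit a small readable surrogate to the model's behavior on just the slice you care about; scikit-learn is used here, and `model`, `X` and `slice_mask` are assumed inputs, not anything from the talk:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

def local_surrogate(model, X, slice_mask, feature_names, max_depth=3):
    """Fit a small tree that mimics the model on one part of its behavior.

    X: candidate inputs (numpy array); slice_mask: boolean mask selecting the
    part we want to explain (say, the biased outputs). The tree's rules are a
    readable explanation that also covers counterfactual cases in that slice.
    """
    X_part = X[slice_mask]
    y_model = model.predict(X_part)        # the model's outputs, not ground truth
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X_part, y_model)
    fidelity = (tree.predict(X_part) == y_model).mean()
    return export_text(tree, feature_names=feature_names), fidelity
```

the returned fidelity score makes explicit how much of that part of the behavior the simple rule actually captures, which is the open question the answer points at.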
i hope luke you're happy with the answer\ni don't have any other questions in line\nperhaps someone wants to just ask an\nimpromptu question\nyeah can you go ahead oh if nobody else\nwould like to\nso yeah stefan\nmaybe you touched upon this in one of\nthe previous responses but i was\ncurious\ngiven that many current ai tools operate\nbased on correlations\nour attempts to explain their\nbehavior\ncausally\nseem to me at least at first glance\nto run into a kind of\nguesswork zone\nwhere we then try to interpret\nit\nand we think as humans a lot of our\ndaily interaction with the world is\nbased on a causal\ninterpretation so we try to\ninterpret it but these interpretations\nmight actually be off right from\nwhat this correlation based model does\nso can you talk a bit about\nhow you see this yeah sure so i think\nthere's an important difference here\nin terms of\nwhich causes you're interested in so\njuan for example in our group who's\nalso working on explanations he's\ninterested in how you might give\nscientific explanations of phenomena\nin medicine using ai tools\nso he wants to get to these causal\nrelations that are out there in the\nworld between a medicine\nmaking you better or not using the\ncorrelations that ai will spot\nand that's i think related to\nyour\nquestion a bit so that's one goal\nyou might have and that's going to be\ntricky to\ninfer causal relations out there in the\nworld based on the correlations that the\nai models are working with\na separate question here and that's\nthe sense of\ncausal relation that i was\ninterested in is that there is going to\nbe\na cause namely you put some\ninformation into the model\nand there's going to be an effect the\nmodel outputs something\nand this relation is basically just the\nmodel behavior\nin any particular computer it will be a\ncausal relation with just\ncomputations being carried out by the\ncpu or the gpu\nthat causal relation is the one i was\ninterested in explaining\nso that's a different one from the\ncausal relations out there in the world\nbut still a very relevant one because we\nhave the input to the model as a cause\nand then the\nmodel output is the effect and it's\nbased on these correlations in data\nwhich is\nwhat we want to get out of it but yeah\nthe causal correlation talk can get\nvery confusing if you don't\nkeep these very strictly separate
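that model-level causal relation is straightforward to exhibit, as a minimal sketch with a hypothetical `model.predict`: intervene on one input feature, hold the rest fixed, and record how the output changes.

```python
def intervention_effect(model, x, feature, values):
    """Intervene on a single input feature and record the model's output.

    The input is the cause and the output is the effect: this probes the
    causal relation realized by the computation itself, not any causal
    structure out in the world that the training data came from.
    """
    effects = {}
    for v in values:
        x_do = dict(x)        # copy, then set feature := v (the intervention)
        x_do[feature] = v
        effects[v] = model.predict(x_do)
    return effects

# usage sketch, with made-up features:
# intervention_effect(model, {"income": 30_000, "age": 41}, "age", [25, 41, 65])
```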
ah i see thanks very much thanks for\nclarifying that yeah because i was\nthinking about it in terms of the\nother one yeah yeah but that's a really\ninteresting perspective i mean because\nthat itself then\ni could see how even if you\ncannot causally explain\nthe underlying thing that the tool\naddresses that can spark\nmeaningful deliberations among\nstakeholders yeah\nyeah thanks\nthanks again so stefan i have a very\ngeneral question going i guess beyond\nthe\ntopic that you explored but maybe not so\nit has to do\nwith one of the points that evgeny\ntouched upon the fact that\nai systems are actually parts of\nmore encompassing human-ai systems from a\ncomplex socio-technical\nsystems\nperspective right and then what i\nthink is that\ni'm only very\nfamiliar with the explainable ai\nliterature but it does seem to me that\nthe focus there is pretty much\non the ai part of the system and my\nquestion is broadly how do we go from\nthe explanations on this level on\nthe ai agent level\nto explanations on the level of the\nsystem as a whole and then a good\nexample could be\nfor instance this infamous case of\nan uber test vehicle crashing in arizona\nand hitting a pedestrian\non the road and the pedestrian died and\nthen there could be multiple\nexplanations for why that happened\nstarting from the fact that the emergency braking\nsystem that was built into the vehicle was\ndisabled\nso that it doesn't clash with the av\ncomponents of the system then going\non to computer vision\nsaying that basically the\ninvestigation\nshowed that the vision system of the car\nfalsely identified the pedestrian as a\nbicyclist\nor as an object and then it switched\nover\na couple of times so that could count as\nan explanation for why that happened but\ni think\nfrom the meaningful human control\nperspective we are more interested in\nwhat kind of human decisions have led\nto this specific\noutcome so yeah it's very easy\nto say\nyeah ai is bad ai is to blame but\nwe're not really interested in who is to\nblame but in what\nhumans need to do next so that this\ndoesn't happen again\nand i think this is a completely\ndifferent\nway of approaching things as opposed\nto what people in\nxai are doing who are trying to say\nyeah how does this\nneuron in your network affect the\noutcome of the system\nso what are your thoughts on this\nyeah\ni think that's an interesting\nquestion\nso to some extent the\nexamples you were giving might be\ncaptured with these different\ncontrast classes so why did the\ncar hit the pedestrian rather than\nbrake beforehand and then you would have\nthe emergency system or\nwhy did the car hit her rather than\nidentify her as a pedestrian and\nstop and then you would have the\nidentification system\nwhich you can point to so i think\npartly this is just the fact that there\nare several\nfacilitating causes and\nthe difficulty with any of these\nexplanations is to find\nall the relevant ones and then\ni think you're very right to point out\nthat the people in the car\nare going to be part of these\nsufficient causes for the\ncollision\nor necessary causes maybe even\nyeah but also other humans i mean\nwhen i say\ncomplex human-ai system i don't\nmean just the driver but the\ndesigners and legislators the arizona\nstate authorities which basically\nallowed that to\nhappen eventually so yeah yeah so\nyeah i'm not sure if\nthe kind of talk i was giving here\nis going to address that issue in any\nway\ni mean\ntheoretically you could probably fit\nall these different\nexplanations that you're after into it\nbut just saying well look for some\nkind of covering rule with\ncounterfactuals isn't going to tell you\nwhich\nexplanations to look for or more\nspecifically which questions to ask\nbut i think it's an interesting\nidea here to\ndraw the types of things that you want\nto explain more broadly\nthan what the current xai literature might be\nlooking at\nand that's going to be an interesting\nchallenge to look\nat\nyeah okay great i think\nwe are running out of time so if\nthere are no burning questions\nwait a minute no nothing so thank you so\nmuch stefan that was a very interesting\ntalk and i really enjoyed the\ndiscussion\nthank you everyone for contributing to\nthat\nokay then i think this is our last\nmeeting for\nthis summer and i think we will\nmeet all of you starting from september\nagain\nthank you bye", "date_published": "2021-07-16T14:25:27Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "e8ef2ed8b5ae8a19edb8afde3740e7d0", "title": "Understanding values – from the perspective of personal support technology (Myrthe Tielman)", "url": "https://www.youtube.com/watch?v=i72mHaWKKUE", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "all right so welcome everyone to today's\nAiTech Agora meeting I assume\neveryone can hear me well right this\nis our last Agora meeting before the\nsummer break so the next time we do\nthe meetings we might as well do them in\nperson\nalrighty although we don't know so that\nwould be in September we will keep\neveryone updated through the mailing\nlist and it is our great pleasure to\nhave
Myrthe Tielman as an invited speaker\nfor our Agora meeting before the summer break\nso Myrthe is an assistant professor\nin the Interactive Intelligence Group in\nthe Faculty of Electrical Engineering\nMathematics and Computer Science she has a\nbackground in cognitive artificial\nintelligence and human-computer\ninteraction\nher research interests lie on the\nintersection between human and\ntechnology especially on how we can\nemploy computers to provide assistance\nand support for various human\nproblems so this includes social systems\naffective technology smart dialogues\nknowledge systems and behavioral change\nso without further ado Myrthe the floor\nis yours thank you very much\nso I'll start sharing my screen so you\ncan all see my presentation and you\nshould be able to see that now\nso what I wanted to talk about today is\nvalues and specifically understanding\nvalues from the perspective of personal\nsupport technology I have prepared a\ntalk which doesn't necessarily tell\nyou what I've been doing but tells\nyou more what I've been thinking with a\nfair number of discussion points\nthroughout so at those points\nI would definitely like to invite everyone\nto think along with the\ndiscussions and the questions that I\nalso haven't put on the screen\nfeel free to interrupt me in between\nif you have any questions or comments or\nsee anything that you would like to see some\nclarification about I have\nthe window with you all in view so\nif you turn on your camera I will\nprobably see it I should also\nsee the chat but also feel free to just\nunmute and talk so like I said this talk\nis about values but specifically in the\ncontext of personal support technology\nso I wanted to start a little bit by\nsetting the stage and setting the stage\nof that context so the type of\ntechnology that I'm very interested in\nis the type that helps people make\ndecisions and do tasks throughout their\ndaily life and with daily life I mainly\nmean that it's not necessarily for\ninstance a work setting or a\nprofessional setting but it's more\nof a private setting where you as a\nperson are interacting with the\ntechnology so not so much you as a\nprofessional but you as a person that\nis very broad so I put some examples\nhere on the slide so for instance a\nnavigation app an app that helps you\nchoose a route for example but\nI've also collaborated with people who\nwork on a navigation app specifically\nfor people with a visual impairment so\nthen it's very much more about actually\nhelping them navigate and knowing where\nto turn or where to stop but also for\ninstance a personal scheduling assistant\nto help you keep track of meetings your\ncalendar if things conflict what do I do\nbehavior change support systems so\nthat's the category of systems where\nyou're really actively trying to change\nsomething and the technology is there to\nhelp you make that change but also even\nmore broadly speaking you can think of\nthis general picture that we have of a\npersonal robot that helps you with tasks\norders your food makes you a cup of tea\nand maybe watches over\nyou and all of this can be for the\ngeneral public or for instance for\npeople with certain cognitive\nimpairments physical impairments that\nwould mean that they could use a\nlittle extra assistance in some aspects\nof their daily lives so this is a type\nof technology that I'm personally very\ninterested in and one common denominator\nhere is that support in this type
of\ntechnology is always about decision\nmaking so you can support someone with\nmaking a choice you can support someone\nwith executing something so for instance\nfor the navigation example I gave you\ncan help someone choose a route but you\ncan also help someone actually get to\nthat place if that's necessary but in\nall cases the technology is making\ndecisions whether it's about that choice\nor what action to support or how to\nsupport that action\ndecision making happens somewhere there\nand this is then what sort of leads into\nthe topic of personal values so one of\nthe most common definitions of personal\nvalues is from the paper by Schwartz and\nI see I made a minor error there I think\nit's 1992 not 192 but in this paper\nthey use a definition which again comes\nfrom other literature but it basically\nsays that personal values are the\ncriteria that people use to select and\njustify actions and to evaluate people\nand events if we look at values in this\nway it makes a lot of sense that if we\nhave this personal technology which is\ngoing to help us make choices and\nperform actions and make decisions about\nhow it's going to do that it should\ndo so in some way in line with our\nvalues because values are the things\nthat we use to justify these actions or\ndecisions so to give a little example\nhere if you want to you know choose\nwhether you take the bike or\ntake a car you would want that decision\nto somehow be in line with your values a\nvery easy example is to say well you've\njust told me that you value your health\nyou value the environment so then\nmaybe in this case the bike is the\nbetter choice that's just the setup of the\nsituation that we have we have these\nvalues and we have this technology and\nof course thinking about values in the\ncontext of technology isn't a new\nthing so most notably the field of\ndesign for values has been investigating\nthis topic for quite a while and design\nfor values has a couple of assumptions\nor principles it's based on and I took\nthe liberty to steal this list from the\nDelft Design for Values Institute website\nwhich I'm also a member of so design\nfor values has the assumption that\nvalues can be expressed and embedded in\nproducts services technologies systems and\nspaces and it seems that with personal\nsupport technology this is even more\nobvious right because it makes decisions\nbut even in a way technology that\ndoesn't make decisions has values\nembedded in it for instance through\nwhat choices it offers us to make can\nyou turn off your camera if you value\nyour privacy for instance that's a\nchoice that a system can allow you or\nnot allow you and it doesn't make\ndecisions for you but it does have that\nvalue somehow embedded in its design a\nsecond assumption is that conscious\nand explicit thinking about values is\nimportant and that it's socially and\nmorally significant to do that and\nfinally it says that you need to do this\nearly in the design process where it can\nstill make a difference now design for\nvalues as a methodology is in\na sense a little bit older it also comes\nmore from the perspective of a designer\nthan it necessarily comes from the\nperspective of an engineer and\nfor me there seems to be a\nsubtle difference between designing\ntechnology and building it and also for\npersonal support technology there is\nthis question that arose with me is it\nenough to design personal support\ntechnology for values or does the\ntechnology itself also need to\nunderstand values and
what's the\ndifference and this is sort of the first\npoint at which I like to invite everyone\nto think about and jump into this\ndiscussion a bit I don't know if anyone\nhas any starting words deep questions of\ncourse that is a positive question so if\nyou don't feel like switching on your\nvideo and participating in the chat you\ncan type it yeah I see that if Katie\nwould like to speak please hi yeah thank\nyou this is a very interesting question\nwell I mean so just kind of\nbrainstorming here like when I see these\nquestions always I think to myself so\nwith the second question can it be\nbasically part of the first one that as\npart of the process of designing\npersonal support technology for values\nin some situations you might depending\non how you think interaction between\ntechnology and humans can satisfy the\nneeds in that context of the people\nwhich means you want to satisfy or if\nyou're talking about needs of\nenvironment also not necessarily humans\nso do we see that based on the kind of\ninteraction that we\nto have does the different what you need\nto understand the values so I guess it\nwould really like I think it depends on\nthe context so those studies\nunderstanding the context is the first\nthing that comes to mind the extent to\nwhich would be something yeah and is\nthen the context of this personal\nsupport technology that is making\ndecisions about what to support you with\nand how would that need to be even more\nexplicit to make to answer these\nquestions so I don't know that is the\ncontext you know also could you repeat\nthat again so just to make sure I'm just\nas correct yeah so so you talked about\nthat whether you just can just design or\nwhether to know you also needs to\nunderstand very much it depends on the\ncontext I think so say we have this\ncontext of this personal support\ntechnology which is there to help people\nmake choices for actions is which you\nthen still say it still depends on the\ncontext of that type of technology or is\nthere something general that you can\nalready say based on well so and I'm\nprobably missing here something because\nI know enough with you know the kind of\nsituations that arise I'm here what you\ntell us yeah because what I mean is that\nunder my context I mean like they\ncouldn't create social domain in which\nlike the kind of decisions that you saw\nthe kind of decisions that we're talking\nabout so what kind of choices and\ndecisions are we talking about in what\nthe realm of life and then depending I\nthink also what will matter the extent\nto which we want to rely on the\ntechnology in that case what\nexpectations do we place on the\ntechnology so kind of how wouldn't how\ndo we distribute the roles between\nhumans and technology because I can\nimagine the more we ask of the\ntechnology the more important it will be\nfor the technology to understand our\nvalues yeah I think that's that's indeed\na good point and that if it's if we\ndon't\nto give the technologies that much\nfreedom or Liberty or independence in\nits choice making then it might be\nenough to just have it enable us in our\nchoices so for instance Jesse was being\nable to turn your camera on and off\nwhereas if it's going to make these\ndecisions completely independently um\nthat it might be more important yes now\nif anyone else has some thoughts and I'm\ninterested to hear especially from\ndifferent perspectives because design\nfor values especially is not something\nthat comes from computer science or AI\nbut it's very much relevant to AI 
but it\nalso brings for me this interesting\npoint of what is design if you have a\nsystem that's going to make independent\ndecisions which you cannot predict\nbeforehand\nbut maybe it does very deep actually\nslightly more practical question and\nalso so if you look at for a minute like\na design engineer when if you you know\npersonal support technology if you\ndesign you know the second point the\nsignals understand the users value I\nwould say like if you really want\npersonal support technology right then\nthen option two would be the way to go I\nguess but I don't know how you see that\nand like the first point it might be\nbits you know if you design for because\nbecause it values can change it with\ncomplex and time and etcetera etcetera\nright so is that a difference that we're\nlooking at for instance yeah partly I\nthink so for me I I also think the\nsecond thing is the way to go the\nquestion of course is how on earth do we\ndo this and I have some thoughts about\nthat which I'll share after it is I\nthink the the baseline of design firm\nvalues is very good but it sort of\nassumes that when a product is designed\nit then static and it doesn't change and\nif you for instance your personal\ndollars change your circumstances change\nthen you need a new product almost and I\nthink that given the fast pace of\ntechnology and given also how much\ntechnology evolves after its first\niteration and not necessarily always in\nways that we can predict means that we\nsomehow need to do more when you still\nneed to design the initial four to allow\nfor these values but but my intuition is\nthat that we need something a bit more\nthere in that understanding the other\naspect to it is that when you design her\nvalue she typically design for the\nvalues of a group and if you can somehow\nembed an understanding of values into\ntechnology then you might also be able\nto address the values of an individual\nand I think that's a powerful thing as\nwell which is gained by looking more\ntowards embedding two values not just in\nthe design choices but in the reasoning\nframework of the technology itself that\nmakes sense yeah interesting so and\nmaybe this is a little bit of tension\nstill so do you so are you also gonna\nconsider how that if you if you can\nunderstand the users value are you also\nconsider you know how difficult you will\nchange that users value to to like use\nor is it something maybe a little bit so\nwe can maybe discussed a bit further\nindeed about that I don't know if there\nis any final thoughts or if I shall go\nto the next topic you don't see any\nhandsome I'll go to the next slide\nactually excited yeah\nyes so one are we up so I've been\nthinking in my research about this\nunderstanding and what is actually\nrequired to do this and the way what\nI've been looking at it is is sort of\nthree main questions at first if what\nwhat does understanding mean and for me\nthat means that you somehow have some\nsort of internal conceptual\nrepresentation of these values I don't\nnecessarily think that this is the only\nway to use values there might be\ndata-driven methods which can help you\nactually make more value aligned\ndecisions but I have some questions\nabout to what extent you can pull that\nunderstanding that that's maybe another\nanother question altogether so for me\nI'm really looking at this this how can\nwe conceptualize values in situated\ntechnology can reason with them and use\nthem so in order to do that it doesn't\nneed to just have this concept of values\nbut 
it also needs to understand how\nthose relate to those choices and the\ndecisions that the technology is making\neither with or for a user and then of\ncourse if you have that understanding\nthe point here is is to make more value\naligned positions and to to use these\nrepresentations now one thing I think is\nimportant to mention is that in all of\nthis I'm not necessarily seeking for the\nholy grail of how we can formally\nrepresent what a value means for me this\nespecially this concept of having this\ninternal conceptual representation it's\nnot so much about getting to the point\nof what a value is but rather it's a\nmore practical approach saying how can\nwe make better decisions so even if we\nhave some sort of internal\nrepresentation of values which does not\nfully match\nhow humans look at values or how humans\nreason with values if it's still enough\nto make us help better decisions done\nfor me that will already be a good step\nso how you define success there anything\nI think is important so what do we need\nto understand all use in this discourse\na little bit goes back to the thing that\nyou mentioned about changing values\nright you're talking about a person's\nvalues so I think one of the first big\nimportant questions is how do you even\nget get this knowledge how do you get to\nknow what the person's values are as\ntechnology and also with that how do you\nchange it right if the purpose of this\nis to make it more flexible and to make\nit more personalized and changeable over\ntime then you need to be able to change\nthese things over time and adapt them to\nan individual the second part is how do\nyou represent values this is more of the\nformal question of how can we capture\nwhat values are and how they work more\nin formal reasoning snow data system\ncould use them to affirm a\ndecision-making and that leads to define\na question of how do you then make these\ndecisions based on the lowest IQ had so\nin my work I've been looking at snippets\nto to explore how we could do these\nthree things and I'd like to start with\nthe middle one actually because I think\nwhen you talk about how you learn values\nit's important to also first have an\nidea of what you're trying to learn and\nwhat type of representation you're using\nfor values so when it comes to formally\nrepresenting values I think it's good to\nhave a bit of an overview of what we see\nvalues as so these are some of the\nthings that for me characterize values\nso there\nabstract concepts but at the same time\nthere globally recognizable context\nconcepts so they're not necessarily\nalways have the same meaning for\neveryone just 0.3 so different people\nmight have different definitions of what\nprivacy means and that can change over\ntime it can differ per person per\nculture but there's still this notion of\nprivacy which is sort of globally\nrecognizable to a large extent values\ncan also be categorized in the sense\nthat there are some values which are\nmore similar to others and you can group\nthem in that way that also means that\nthey can potentially be in conflict with\nother values so sometimes there's some\nvalue pairs which are very typically\nopposing so privacy and security are one\nof the classic examples and then choices\nand actions can demote or promote values\nand that's how I been looking at how\nthey relate to two actions how values\nrelate to actions so translating that\njust to check because jitsi is saying\nthat my microphone might be noisy can\nyou all still hear me okay yes yes okay\nand 
I'm not hearing anything we're an\neider so let's just just let me know if\nI break up I should be okay so for\ntranslating business representations I\nthink the fact that values are these\nrecognizable concepts in itself as a\nprotocol makes them so powerful because\nit means that you can communicate with\npeople about them and you can\ncommunicate about what it means to be\nsafe or healthy more easily than if you\njust say okay you find this important\nthat you find that important\nhaving that word to explain\ni I think can be a powerful thing\nso there's relationship between values\nthat's sort of how you can classify them\nhierarchically meaning for me is given\nto values not so much inherently as it\nis by relating them to choices or\nactions so in a way the way I've been\nlooking at values is that the semantics\nof what a what a value is has to do with\nwhat type of actions they promote or\ndemote and then this abstract concept is\nthere for communication purposes mostly\nso you can talk to people and your users\nabout it there's also this inherent\nthing that we think that some like say\nwe have health as a choice or as a value\nsome choices promote our health more\nthan others some demoted and some more\nthan others\nso there's this scale involved and you\ncannot just say there are two things\nboth from wealth and therefore they're\nequal very often they're one thing does\nthis more than the other and it can be\nboth both positive and negative right\nsomething can be explicitly good for\nyour health or its list like that or\njust up nothing to do with it so it is\nthat we get this notion of we have this\nscale which represents how much a\ncertain value represents a certain\nchoice promotes dessert choice or\ndenotes a certain choice the other\naspect is that people prioritize values\nin different ways some people would say\nmy health is more important than social\nfriendships throughout our people that\ncould be the other way around there's\npersonal differences there so alongside\nwith this relationship between a choice\nand a value there's also this\nrelationship between values amongst\nthemselves in how much assert a certain\nperson values them just to say it like\nthat\nso we've been looking at how how do you\nrepresent these things and one of the\nthings which is which we settle on but\nalso which is which is commonly done is\nthat you have both this sort of personal\nvalue profile of a person which just\ngenerally represents their values and\nthe relationships between these values\nand have values relating to choices um\nso this little picture represents the\nfirst thing and then you immediately run\ninto problems because you can easily say\nthat some people value their health more\nthan social relationships some people\nvalue the environment more than others\nbut how do you represent this is it\nenough to just write down but then you\ncannot really give a distance measure\nsomething is either more or less we\ndon't know much\nmaybe we can assume that it's all equal\nbut I don't think that that necessarily\nmatches up you can also say okay then we\nwe give a rating all right then we're\nmore flexible we can we can represent\nthat you know in the case of this\npicture health is one point more\nimportant than their love of\nrelationships and that's again so many\npoints more important than personal\ngrowth but can people really express how\nimportant they find their values in this\nway and another question we have is is\nthis even the same in all domains\nthere's I think this 
notion that a\nperson's values are static and your\npriorities are static and yet when you\ntalk about certain different domains so\nwhether the way you travel to work\nversus how you invest your money there's\nmaybe it is intuition that some people\nwill say well you know when I travel to\nwork\num the time efficiency is important and\nmaybe money about how when I invest and\nsuddenly the environment is very\nimportant so this is very much still I\nthink an open question hmm\nand I again liked if I never want to\nthink along I don't know if anyone has\nany input or insights on this party yep\ninteresting so I don't maybe it's not\nabout input I do have some further\nquestions on that first complications\nthere because how do you see either you\nlike ranking or rating how can we have\nthis how can we understand we have any\nkind of those things I mean for are we\ntalking about observing behavior or\nasking person can you talk a little bit\nof these what is it different what do\nyou think is more interesting for it\nspecific context so I have a slide about\nthat but I'll skip ahead because it is\nan interesting question so future goal\nlater Childress's later also well I\nthink it's it's interesting also that\nyou bring it up now because the question\nis also does the fact of how you can get\nthis information matter for how you\nrepresent the information or is it the\nother way around do we want to have this\nconceptual representation which matches\nour conceptual ideas about what values\nare and how to behave or do we maybe\nwant to adapt that to what people can\ntalk about for instance um I think that\nin itself is an interesting question so\nso what I've been personally looking\ninto is is very much the dialogue with\nthe person perspective but and not so\nmuch data-driven yeah\nand so really just just having a\nconversation with people or could even\nbe be questionnaires and then maybe\ntweak them based on behavior one of the\nreasons I'm not looking at behavior\ndirectly is one in the context of a\nbehavior change support system that\ndoesn't really work because the whole\npoint is that they don't want to keep\ndoing the same thing and it's very\nambiguous as well yeah people can take a\nbike and you can say that taking the\nbike is good for the environment and\nmaybe someone always makes choices that\nare very good for the environment but\nmaybe they're doing it for completely\ndifferent reasons that they like the\nfresh air they live close and they don't\nwant to spend too much money in\nelectricity so drawing that explicit\nlink from behavior to values instead of\nthe other way around I think is can be\nused as input maybe at some point but to\nbase your whole system based on that I\ndon't I can't see that clearly yeah yeah\nI totally agree with you and it's very\nchallenging if you look at any kind of\ndata-driven I don't know any kind of\nlearning approach it's very hard because\nthat's it's on the car of these and the\nambiguity problem is everywhere and the\nconsequences are very hard but just if I\ncan just I saw that Cariaso and assume\nthat you're gonna follow up very quick\nquestion given this okay given we can\nunderstand the profile of the person but\nyou mentioned also that the the present\nthe meaning that they make of this value\ncan be different for different\nindividuals right meaning that they make\noften in the specific context as your\nexample and privacy and everything so\nI'm wondering cuz it's very clear for me\nsometimes that when we use the values\nwithin 
within like a within a social\nsciences that you can only talk with a\nlot of people and understand everything\nbut for computational season I struggle\nis in a\nbecause I understand there's different\nconceptualizations of privacy for\nexample with different views but an\nagency's they had do we need to settle\non one in order to make some workable\nthing because I mean your preference is\non privacy but what privacy how can you\nunderstand in the video\nmeaning 8 you know values yes I think\nthe next slide goes into the little bit\nso maybe ok so and then we'll go back\nthe next slide is discussions - that's\ngood\nok so I'm really hitting to the flow of\nyour presentation so later for the end\nok thank you I think our car do you want\nsomething yeah really quick question\nfirst like I'm really fascinated with\nthe whole narrative so far so I love the\nrepresentation of this small human there\nas for numbers I mean this is but still\nso this reminds me of a character in the\ncomputer game where you basically say\nthis this vector of ten numbers is what\nthe person is but that's not a point but\nthis connects to my question which is\nbasically it is really I'm really\ncurious about the way you could actually\ncapture the values and represent them in\nin an artificial system but to me it\nconnects like 100 percent to the\nquestion what are you going to do with\nthis representation I am afraid that I'm\nrunning a little bit ahead but it does\nseem to me like this is really really\nabstract an interesting theoretical\nproblem but when it becomes tricky is\nwhen you start asking question how are\nyou going to use these numbers am I just\ngoing to make life-or-death decisions\nbased on these numbers or yeah yeah\ncomment click on that\nthat'd be great yes I think that goes a\nlittle bit into the the\nimportance of decision-making and\nreversibility which is not something I'm\nnecessarily addressing here yet but I\nthink that that's indeed important\nbecause there there's always this sort\nof measure of like you're not entirely\nsure a letter if you just use it this\nway it will be correct right because we\nlose something if we represent values as\nnumbers and yes I say I think we gain\nthe possibility to weigh them against\neach other which is maybe even more\nimportant but I will go go into that a\nlittle bit more in the question of how\nwe use them a bit later on I see two\nmore hands so feel free to turn on a\ncamera mic yeah well I'll keep my camera\noff because my insurance is sketchy I\nthink this approach of designing with\nvalues like super interesting and\nespecially what interests me is when it\ncomes to value conflicts you mentioned\nthe conflicts sometimes under different\ncontexts but what also interests me I\nwould love to have you thoughts on this\nis when it comes to conflicts between\nvalues of the consumer or the person\nusing our products and then of us as a\ncreator so for indeed we might have a\nvery different understanding of what\njustice means of fairness right and then\nyeah I think it's very easy to see this\nwalls that go like this little perfect\nplace where like although he's aligned\nwith these of our users but and also if\nyou look at all the political conflicts\nin the world right now it becomes\napparent that the understanding of what\nfairness and justice is for instance\nalso like caring for the planet is very\ndifferent so we deal with this if our\nvalues as designers or engineers don't\nalign with our use of areas yeah so I\nthink so so what's being done 
in\ndesigner of values is very explicitly\nthat you know design for your own values\nyou talk to your users about their\nvalues right\ndesigning for values with your own\nvalues and - it's really beside really\nmisses the point of that whole thing\nI think in a way representing values\nexplicitly goes even further to actually\nhelp with conflicts because you make\nthem explicit and whereas in design you\nknow you talk to your users and you talk\nto what their values are what the\nconflicts are and you try to find the\nbest solution having a system that has\nthese values represented explicitly I\nthink really helps in making these\nconflicts more clear and make the\ndecisions more clear and I think that\nthat doesn't solve the problem but I do\nthink it's the first step is recognizing\nthe fact that there might be a conflict\nand my perspective as well as always\nvery much been that you want this\ntechnology to adhere to the values of\nthe user but there is always the\nquestion of how far do you want to go in\nthat so you one if you have you know a\nself-driving car and that person doesn't\nvalue the environment their own safety\nand just wants to get there fast and\nthat do you don't break the speed limit\nprobably preferably not right so there's\nalways this trade-off but I do think the\npowerful thing of having values\nexplicitly represented behind those\nchoices is that you can make that\ntrade-off clearer and then also tweak\nwhose values you're going to find more\nmost important incidents in both choice\nand I think that's something that's\nharder to do if you're just designing\nand even harder to do if you're not\nthinking about these things explicitly\nat all yeah it's interesting if we then\nhave the right to challenge the values\nof our consumers as well kind of\nacknowledging them but then nudging them\nor at least arguing to push them in a\ncertain direction yeah\nwell that's also what happens now right\nand especially with more to healthcare\ntype of apps they sort of all assume\nthat you want to be healthy and\nthere's one goal and they very\nmindlessly go towards that or as this\nperspective of putting the user central\nstage would also mean that that you can\nmaybe lose that that goal but I think in\nthe long term as people will want to use\nthis technology more it will be more\neffective but you also yeah you have to\nthink about what well as a society we\nwant and whether it is okay for us to\nhave personal technology which adheres\nto our values but which in the\nbackground be sort of running what\nsociety is all once of you and also\ntries to fold that message because I\nthink then you also get it to a bit\nShady terrain of how do you trust that\nthe decisions are made for you or is it\nsecretly a decision which sort of feels\nokay for you but also slightly precision\nof their each other yeah thank you did\nyou have fun yes I know I think we just\nlost him Mac can you hear me\nyes sorry that's a big connection\nconnection I know so I was wondering\ngiven the trickiness of constructing\nsuch a profile\ncould it be that instead of aiming for a\nvery accurate representation what if we\naim for a human machine interaction that\nlets the user build the representation\nin an interactive way so then we let the\nuser have an authority and autonomy of\nbuilding a profile that they identify\nwith so it's basically like if the\nultimate goal is to have an assistant\ncan we have can we fulfill that ultimate\ngoal without the very tricky task of\nconstructing such a 
quantitative\nrepresentation I think I think we can\nand I think that's very much in line\nwith the point I made earlier off of\nwhat is how do we measure success\ndo we measure success if we have this\nabstract representation that perfectly\ncaptures everyone's values or is it\nenough to have something that's useful\nfor our interactions and decision-making\nto make them at least better and I think\nthe perspective indeed of letting the\nuser take central stage in that and\nhaving something which maybe not\nperfectly represents their values but\nwhich at least they have control over\nand they can input can be a good first\nstep at least and then maybe you you can\nhave some more data during measures that\nsay hey there's a lot of people who\nwould associate this choice with\nsomething demoting this value you say\nyou find very important is that actually\nthe case or is something else happening\nhere and then you can maybe tweak it\nyeah I've been becoming more attentive\nto - like opening up our design\nimagination I kind of build beyond so\nkind of bringing in the interaction of\nthe human and technology and really not\nalways trying to kind of solve\neverything through perfecting the\ntechnology but actually give giving more\ngiving more authority it's all me and\nresponsibility to the human that is in\nthis interaction in a way that's\nultimately together - then we actually\nfulfill the kind of the bigger goal that\nwe want to fulfill in this case yes\nno definitely so I'll move on a little\nbit and I'll touch upon that exactly\npoint a little bit later so so a quick\nnote and given time I'll skip the larger\ndiscussion but this slides goes back\ninto the how do you give meaning to\nvalues right so the previous discussion\nthere's the discussion of how - what\nvalues do I prefer over others but\nthere's also that how do we give meaning\nto what these values mean\nso something that we've been looking\ninto is relating them to choices and\nthat means that automatically you need\nthis representation of what your\npossible choices are and maybe even\ntheir consequences so having this\nexplicit representation of choices\nallows you to then make an explicit\nlinking with values you can say that\nthere might be some ground truth in what\nvalues is enjoys promote or demote I\nthink to some extent there's there's a\nbaseline for that taking the bike is\nbetter for the environment right but\nthere's also some values for which it's\nharder health and the environments are\nfairly clear but what is good for your\nvalue of Independence can be very very\ndifferent for a person so I think that\nhaving this individualized meaning to\nwhat for instance independence or even\nprivacy mean is something that's\nimportant and then of course we we again\nget this question of can we attach\nnumbers of how much is something does a\nchoice promote or demote values and then\nyou get exactly the same problem that\nyou want to say this hurts this value\nmore than something else and yet\nattaching numbers is a very difficult\nthing and especially because if you're\ntaking your perspective of talking to\nthis from a user perspective right and\nthat's what we've been doing very much\nputting users center stage here so one\nstudy that we did on this was with this\ncontext of visually impaired travelers\nthe context of an app on your phone\nwhich helps you to to navigate and with\nwithin this context we wanted to know\nokay if we have this representation of\nchoices which in our case will\nsort of a formal hierarchy of 
behavior\nso for instance at the top you would\nhave go to your doctors and then you\nwould have several different ways of\ngetting there\nand then these different ways of getting\nthere\nfor instance by bus or walking would\nhave different parts so a part of\nwalking there for instance could be\ncrossing the street a part of getting on\nthe bus is you know getting to the bus\nstop waiting for the bus recognizing the\nright bus number getting in that's all\nright so by linking actions that way we\nhave this representation of what the\nchoices are you can go in different ways\nand you then need to execute certain\nthings and with their structure we\nwanted to attach values and in this case\nwe skip the numbers for now but we just\nsaid okay can we talk to people about\nfirst of all what these hierarchies of\nactions look like for them personally\nbecause everyone is different travel\nbehavior and then secondly can we talk\nwith them about what values certain\nchoices promote or demote so we used a\nconversational agent for this to some\nextent a visual representation of these\nactions would help but of course for\nthese users that's impossible and the\nadvantage of working with visually\nimpaired users is that they're very very\nused to speech systems that's how they\ninteract with all of their technologies\nis true spoken and spoken text for\ntext-to-speech and speaking back so they\nare fairly competent I would say more\ncompetent than your average user in\nworking the conversational agents so\nwhat we wanted to do in this particular\nexperiment was really to figure out what\nwould happen if we talk to people about\nthese complex formal structures with\ncomplicated concepts such as values and\nthe fact that a behavior can be a way of\ndoing something or can be a part of\nsomething\nwe can have these nice formal\nrepresentations of user models but if we\nthen want to talk to people about it\nindeed what happens so this is\npreliminary work in the sense that we're\nnot trying to perfect the system here\nbut we're trying to figure out what\nhappens and what to watch for and what\nwe found in both the questionnaires that\npeople did and the interviews that we\ndid with people after they have this\nthis very structured conversation with\nthe conversational agents about these\nstructures and these concepts is that at\nsome point you can have the situation in\nwhich the user is not actually talking\nabout the same thing as the agent\nanymore so that we called misalignment\nand of course if that goes on to the\nlong ban the misalignment will basically\nhappen in the user model that you end up\nwith basically the user model that you\nhave will not match what the user thinks\nthey told you or what is right what\ntheir own representation is that's of\ncourse something you want to avoid you\nwant that to be as similar as possible\nand what we found that was interesting\nspecifically with the idea that you have\nthese complex structures and concepts is\nthat there's different layers of\nmisunderstanding that can happen in the\nconversation so the very basic is\nunderstanding the structure we had these\nhierarchical structures with values\nlinked to them and understanding how\nconcepts were related if there was a\nmismatch in communication about that or\npeople didn't grasp that concepts were\nrelated in that way and it led to a lot\nof confusion about what these concepts\nactually meant but also even though just\nknowing what to say to the\nconversational agent and if you\nunderstand a question answer 
something\ndifferent and then the system doesn't\nget that you misunderstood that gets\ninto the system and the errors that you\nsee in this picture are basically how\nmisunderstandings could lead to others\nso misunderstandings about what concepts\nmeant what it means to promote or demote\na value could then lead to not knowing\nhow to answer and to assumed\nmisunderstandings again so I think this\nwas useful because it shows that if you\nwant to talk to people about these\nformal conceptual representations you\nneed to be very sure that they\nunderstand the structure of what you're\ntalking about\nand that they understand the concepts of\nwhat you're talking about\nand then of course there's the whole set of\npractical problems with having a\nconversation with an agent which always\noccur but there's a deeper problem\nwhere people don't know how to answer\nnot just because there is a problem with\nthe speech-to-text\nbut because they don't conceptually\nunderstand the user model that you are\ntrying to construct with them so I think\nthat was our biggest lesson here and I\nthink that ties into a couple of things\nthat were mentioned before in that\non the one hand you want these\nformal representations which match\nconceptually with what you're trying to\nmodel and on the other hand you need a\nrepresentation which can be followed\nby people so that's kind of a tricky\nthing right I don't know if there are\nany comments or questions\nso and this was one of the\ndiscussion points I had which is can\nyou learn values also automatically\nand to what extent can people talk about\ntheir values explicitly how much do they\nreally understand what underlies their\nactions we are fairly used to thinking\nabout these things but a lot of people\nhave never heard of the concept of\nvalues in the first place\nright and if you ask them what value\ndoes this promote without giving more\nexplanation they'll say I do this\nbecause it saves me time or I do this\nbecause I'll get there quicker and of\ncourse there's an underlying value that\nis quite clear that that is about the\nvalue of time and time efficiency\nmaybe but getting there quick is not a\nvalue in itself so that's a tricky\nthing I think so the final point\nwas using values which we touched upon\nand I'll round up because I see it's\nalmost time and then again we get this\nquestion of can we use these numbers\nright we want some way to represent how\nmuch something is valued and yet numbers\nfeel like an oversimplification I\ndon't have a better\nanswer than numbers I think most people\nwho are working with values in such a\nway are using numbers but it's good to\nkeep in mind that we're making\nsome assumptions here about what values\nare and how they work which are not\ntrivial and to what extent is\nthis really different from\njust using a utility because that's\nbasically what we're doing right and I\nthink the fact that you can talk to\npeople about it on a conceptual level is\nwhat separates it from a very simple\nutility but\nin a way we're just doing the same thing\nand can we do away with numbers in such a way\nthat we don't constantly need\npeople to say oh this I find more important\nthis is no no I want to do this because\na large part of the point of this\nexercise is to avoid that I think this\nwas my last slide so maybe in the given\ntime some final questions or remarks I\nthink we have more questions than\nanswers still but thank you
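to make that contrast concrete, a minimal sketch with made-up numbers: the arithmetic below is plain weighted utility, but because the weights hang off named value concepts, the same numbers can be turned back into language the user recognizes.

```python
# A rated value profile (illustrative numbers, not elicited data).
profile = {"health": 5.0, "environment": 4.0, "time": 1.0}

# How much each option promotes (+) or demotes (-) each value, on [-1, 1].
options = {
    "bike": {"health": 0.8, "environment": 0.6, "time": -0.4},
    "car":  {"health": -0.3, "environment": -0.8, "time": 0.7},
}

def score(effects):
    """Plain utility: value ratings times promote/demote effects."""
    return sum(profile[v] * e for v, e in effects.items())

def explain(option):
    """Phrase the numeric trade-off in the value concepts the user knows."""
    contrib = {v: profile[v] * e for v, e in options[option].items()}
    top = sorted(contrib, key=contrib.get, reverse=True)[:2]
    return f"i suggest the {option} mainly because of your {' and '.join(top)}"

best = max(options, key=lambda o: score(options[o]))
print(explain(best))  # -> i suggest the bike mainly because of your health and environment
```

the numeric core is indeed just a utility; the value labels are what make the result communicable, which is the separation the talk argues for.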
in a previous\nquestion you mentioned this discussion and all\nso I'm wondering what's your opinion on\nthe more data-driven perspective I\ndon't know if you had the chance to\nlook at the beneficial ai ideas in the\nnew book from Stuart Russell yeah he\nargues the point that I mean of course\nthe AI should be always\nuncertain about what your preferences are it\nshould learn them from your behavior so\nfor them behavior is the\nultimate source of information\nthat's the thing that can be\nquestionable but they also say ok we're\ntalking about value alignment but in the\nend it means preferences and in the end\nvalues are just what you\nconsider important and those preferences\nare values I always have a\nlittle bit of a problem with that because I see these\nas different concepts and we talked\nabout this before but my question goes\ntwo ways do you think that\neverything we discussed could also\napply to preferences regular preferences\nand the other way do you think we\ncan ever get to some kind of artificial\nagents that will be able to\nunderstand values so well what do you think\nyes I think for me indeed there's\nthis intuitive difference between a value and a\npreference where a preference is more\ntemporary somehow and the value is a\nmore long-lasting stable thing I can\ndefinitely see situations in which\nyou use values in a way that you could\njust as well have used preferences and I\nwould say that especially now we don't\nhave the full conceptual richness that\nvalues have in our representations yet so\nthey more closely resemble preferences\nthe richness for me personally comes\nfrom the fact that you're not\ncommunicating with people about their\npreferences but you're communicating\nwith them about\ntheir values so it's almost that the\ncommunication part with the user is more\nimportant there and how you talk to\npeople what types of terminology you\nuse in talking to people about what they\nwant that's more important the\nother aspect is especially with\nbehavior change what\npeople prefer now is not always in line\nwith what they tell you long term\npreferences the way I see them are more\nshort term I would prefer to have all\nthe ice cream but I value my health so\ndo I then want a system that's going to\nbuy me all the ice cream or do I want a\nsystem that maybe goes against my\nconcrete immediate preferences sometimes\nin order to support the more long-term\nvalues so I think those are two\nimportant things I think learning values\nfrom only behavior is much more\ndifficult than learning preferences from\njust behavior also because we don't\nalways act in line with our values it's\nhow we judge our actions that's\nimportant when it comes to values not\njust what we do if that were the case\nwe'd never want to change our\nbehavior right if we always acted in line\nwith our values and were perfectly satisfied\nwith that I do think that behavior and\nmore data-driven measures\ncan be very powerful in fine-tuning\nsystems like these and in making sure\nthat they're not as intensive to use as\nsomething that constantly asks you for\nyour input on what you want the\nsystem to do I think there's a lot\nto say for having a lot of interaction\nwith your user about what they want and\nto make sure that it understands their\ninternal representation and vice versa\nbut you don't want to tell your system\nevery day every time\nsomething changes and that's where I think\nthe data-driven approach\ncan be powerful to fill in the gaps and\nto connect the dots but also maybe to\nsee ok this person just said that they\nwant to do this but they're always\nacting completely differently and when I\nmake a suggestion they ignore me is that\nbecause I'm fundamentally wrong do I\nneed to change my strategy for\nconvincing them but at least trigger\nsomething like ok maybe I need to do\nsomething here so for me that's the\npart where I would see the data-driven\napproach being very helpful yeah thank\nyou
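a rough sketch of that trigger idea, with all names and numbers hypothetical: compare the stated value profile against the values implied by observed choices, and treat a persistent mismatch as a cue to re-ask the user rather than silently overwrite their profile.

```python
import numpy as np

def alignment(stated_weights, option_effects, chosen):
    """How well do observed choices line up with the stated value profile?

    stated_weights: {value: importance the user reported}
    option_effects: {option: {value: promote(+)/demote(-) effect}}
    chosen: list of options the user actually picked
    """
    values = list(stated_weights)
    stated = np.array([stated_weights[v] for v in values], float)
    stated /= stated.sum()
    observed = np.mean([[option_effects[c].get(v, 0.0) for v in values]
                        for c in chosen], axis=0)
    return float(stated @ observed)

profile = {"health": 5, "environment": 4, "time": 1}
options = {"bike": {"health": 0.8, "environment": 0.6, "time": -0.4},
           "car":  {"health": -0.3, "environment": -0.8, "time": 0.7}}

# A persistently low score is a cue to re-elicit the user's values,
# not to silently rewrite the profile they gave you.
print(alignment(profile, options, ["car", "car", "car"]))  # negative here
```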
really yeah I agree with\neverything you said and I would say\none more thing towards\nexplainability I think it's so much\neasier to explain in terms of values and\nnorms than it is to explain in terms of\nlike my preference on attribute 47 is\n0.3 like what are you talking about\nI have no idea yeah like I said for me values\nit's not even just about the concept it's\nalso about the fact that because it's a\nconcept you can\ncommunicate it to people yeah and if we\nlook at values we can also\nincorporate so much discussion from social\nscience and ethics that connects to that\nso that's ok thank you very much other\nquestions or comments\nyeah evgeni yeah thanks a lot\nfor a really interesting talk also by\nthe way really cool illustrations\nI got a tablet to draw on at\nhome because I never have a whiteboard\nreally cool thanks yeah and I think very\nthought-provoking and so for me\nthat connects to what I was raising in\nthe previous question so\nyour third question on the previous\nslide you know the balance\nbetween the numerical side so the\nyeah using the numbers for values\nand needing the conceptualization and for\nme I'm thinking somewhere\nin between is kind of what I'm curious\nabout because for me\nsomething that I really realized is that we\ntoo often try to quantify things that\nin my opinion we really should not try\nto quantify because as you said it can\noften feel like and it is an\noversimplification I'm curious about\nyour opinion about when you think\nabout personal assistants to what extent\nwould you like them to say empower\nresponsible decision-making on the part\nof the human where the human really\nrealizes that at the end of the day the\npersonal assistant is just advice at\nthe end of the day I'm the one who is\ntaking the decision and it's my\nresponsibility and I'm curious about\nyour opinion on that I understand that\nof course there are circumstances when\nwe talk about people who may have\ncertain disabilities for example where\nwe might actually want personal\nassistants that have more authority to do\nsomething that might ultimately benefit\nthe well-being of those people but yeah\nI'm curious about your thoughts\nyeah I think it's a very interesting\nquestion\nI think that's also why I\nlike the AiTech Agora project because\nthere's this trade-off the\npower of intelligent systems is\nthat they can do stuff on their own\nwithout input from us but that's also\nimmediately the downside of it and\nfinding this\nbalance between letting them make\nautonomous decisions and execute them so\nthat we can spend time on other things\nversus making sure that we still have\ncontrol over what's going on is\nsomething that I don't necessarily have\nan answer to it is one of the reasons\nthat I think the explainable transparent\npart of AI is very important and it\nindeed matters a lot
And it indeed matters a lot what the decision is about. If the decision is, you know, this person needs to get to the doctor — am I going to tell them to turn left so they can take a bus, or right so they can walk there? — the implications of making the wrong choice are completely different than in some other situations. So I think that is an important thing: what are the implications of choosing wrong? And maybe values could come into that too — knowing what could go wrong. If you have two choices to make, where not choosing either one would really demote your values, but you can only choose one — for instance, if you need to choose between two important meetings — then the fact that skipping either of them would really demote your values can be an indication in itself that maybe you need to let the person make the decision. So I think values can be an interesting aspect there.
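A minimal sketch of that heuristic — the threshold, the loss scores, and the option names are hypothetical illustrations, not anything proposed in the talk:

```python
# If every available option would seriously demote some value, that is
# itself a signal to hand the decision back to the human rather than
# decide autonomously. Threshold and scores are hypothetical.

DEMOTION_THRESHOLD = 0.5  # how much value loss counts as "serious"

def decide_or_defer(options: dict[str, dict[str, float]]) -> str:
    """options maps an option name to the value demotions it would cause."""
    def worst_loss(losses: dict[str, float]) -> float:
        return max(losses.values(), default=0.0)

    if all(worst_loss(l) >= DEMOTION_THRESHOLD for l in options.values()):
        return "defer"  # every choice hurts some value: let the person decide
    # otherwise autonomously pick the option with the mildest value impact
    return min(options, key=lambda o: worst_loss(options[o]))

meetings = {
    "attend project meeting": {"family": 0.7},  # skips the family dinner
    "attend family dinner":   {"career": 0.8},  # skips the project meeting
}
print(decide_or_defer(meetings))  # -> "defer": both options demote a value
```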
And the other part of what you're saying is really interesting, because I've actually been talking to some people about this concept of explainability for people with disabilities, and there you get the question of how transparent you need to be in your decision-making. That is a very difficult question to answer, I think, but an important one, because you need to know whether you can make decisions for these people or not in certain situations. — Thank you. — Thanks. Any final words or questions? I think most people need to leave, so I think we can wrap up now. Thank you very much, it was very nice. — Thanks for having me; these were pretty interesting discussions. See you back in September. Take care, everyone.", "date_published": "2020-07-08T17:08:29Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "4bf10f4f39581e586b4cf70c8d9e6cfb", "title": "Meaningful human control over automated driving systems (F. Santoni de Sio) - 1st AiTech Symposium", "url": "https://www.youtube.com/watch?v=Faoevs3XLoE", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Hi there, how are you today? My name is Filippo Santoni de Sio; I'm a colleague of Jeroen's from the Ethics and Philosophy of Technology section of TU Delft, and I will tell you something about a project I have been running for two years — in particular about the philosophical part of it, the definition of meaningful human control. In this work I started out with Jeroen, and then I have been working mainly with Giulia Mecacci, who is sitting somewhere in the room and is partially responsible, for better or worse, for what you will be hearing now.

By working with Jeroen I've learned something very important about responsible innovation and value-sensitive design. This is one of my favorite quotes from Jeroen and other colleagues in Delft; when I moved to Delft it really struck me. As a philosopher, if you talk to engineers about innovation, it sounds like doing new stuff: new functionalities, new gadgets, something that works better. But Jeroen and colleagues say: hey, that is a very limited conception of innovation. Real innovation is when you can break a trade-off between values. You want to achieve something, and you want something else; current technology doesn't allow you to achieve both; you make an innovative design that allows you to achieve both of the things you want. That is real innovation, according to a certain view of responsible innovation. In order to do that — I will not repeat all of it — you need a different approach to technology: more interdisciplinarity, more of a design perspective, also in philosophy; you need to take social science and empirical studies very seriously; and you need to reflect on technical and institutional design at the same time. So, a lot of challenges that the AiTech project, as I understand it, will try to take on. You also need what at our faculty, TPM, we call comprehensive engineering. I think our faculty is a good example of a first attempt at realizing an institutional place where this can happen, because under the same roof you have complex-systems engineers, you have people studying the human side of systems — multi-actor systems — and you have people like us studying the value side of it: economic value, philosophical and ethical values, safety, security, and so forth.

Now, the meaningful human control idea came, from a philosophical point of view, from this idea of breaking a trade-off. If you talk to people in different areas about autonomous technology, you will hear this sort of dilemma. On one side: if you want to go for autonomy, for efficiency and innovation, then incidents will happen, and this thing of human responsibility, human accountability, is kind of overrated — why do we want responsibility? Let's just go for efficiency and innovation. On the other hand, you have very conservative, technophobic people who say: no, we don't want any of that, because we want to stick to maximal safety and human accountability. So the whole idea of meaningful human control is, again: hey, why should we choose? Why can't we try to redesign the whole process of design and regulation of
autonomous technology in such a way that we can achieve both a maximal level of safety and accountability and all the efficiency and innovation we want? That is basically the push of the project. This is the project we are in: we are partners, and we have traffic engineering, social psychology, behavioral psychology, and philosophy working together — again, an attempt to realize this idea of responsible innovation. This is the background of the specific case study, namely automated driving systems. And this is the set-up of the project: we are moving from the robot dilemma to a definition of meaningful human control; we have these three disciplines and three use cases, and we are trying to answer three related questions. In particular, Giulia and I are busy with the philosophical conceptualization of meaningful human control.

But, as was said, this is not our term. The term was coined in the political debate on autonomous weapon systems — in particular, the NGO Article 36 came up with this notion — and everybody was super happy about it. That was a very interesting political phenomenon, where a certain term magically started attracting consensus around it. The problem for us philosophers was that when you read the definitions of the term, everybody goes their own way. I've been to some of these discussions, and at some point we said: look, we need to find a philosophical definition if you want this concept to work. And the reply was: no, don't do that, otherwise we will stop agreeing. That is the interesting thing — I've talked to people involved in this process — they don't want to define the term, because the fuzziness of the term is what holds the consensus together. But as philosophers and designers we do need and want this clarity, and the possibility of translating the concept into design requirements. I'm not an expert on the diplomacy and politics of autonomous weapon systems per se, so maybe it's a good idea to keep the term vague there; but it is not a good idea here in Delft and at other technical universities.

So, the story of meaningful human control: at some point people became very concerned about the possibility of fully autonomous weapon systems, for the reasons mentioned here. This is one definition, among many others, of what an autonomous weapon system is — already quite a controversial matter, as you can imagine. Basically, there are two main concerns with autonomous weapon systems. One is unpredictability: what if at some point a target is identified as relevant and it was not — it was just, you know, the AI messing up, as it sometimes does? The other: what if an accident happens, people are killed, civilians are killed, and there is no way to reconstruct the chain of accountability — which, in a military and political domain, is super important, as you can imagine? So there came this idea of meaningful human control, and this was the general definition of it: humans, not computers and their algorithms, should remain ultimately responsible for — and this is of course the difficult part — potentially lethal operations, the critical functions that I mentioned at the beginning. So this, in a nutshell, is the result of many years of reflection that Jeroen and I put into this paper in 2018. At some
point, Jeroen and I thought: look, we have the experience of working on free will and moral responsibility for many years. There are a lot of theories out there, and some of them specifically focus on a conception of control at the level of the individual human being: in order to be responsible for your action, you need to be in control of your action. So we tried to use a specific theory of that kind — I will not drag you into it at this time of the morning; that would not be a good idea — but we took some of the criteria from it, specifically from Fischer and Ravizza, and translated them into criteria that could work for the kind of control — meaningful human control — that grounds responsibility over autonomous systems. The two conditions we came up with are tracking and tracing, which — mind you, this is a disclaimer — do not necessarily mean what you think they mean in engineering or in your discipline; we use them with a specific meaning. By tracking we mean that the system — conceived of as human operators, operated device, and infrastructure, so the socio-technical system — should be designed in such a way as to be able to align its behavior with the relevant reasons of the relevant human agents in the network of the system. This is the tracking condition; I will get back to it. The tracing condition is supposed to cope with the accountability problem: we want that in any of these socio-technical systems there is, by design, at least one human agent who can appreciate the technical capabilities of the system — so has some sort of reasonable understanding of, and expectations towards, its behavior — while at the same time also appreciating their own moral responsibility for it. We want to prevent the responsibility gap where, on the one hand, you have engineers who understand everything about the system but do not consider themselves responsible, because they have shifted the responsibility to the users, while, on the other hand, you have users who do know that they are responsible but cannot appreciate the technology enough to really be responsible, in the sense of satisfying the conditions of capacity and control over the system itself.
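A toy rendering of these two conditions as structural checks — the boolean flags are a drastic simplification of what are substantive philosophical criteria, and the class names are ours, not the paper's:

```python
# Sketch of tracking and tracing as checks over a socio-technical system,
# only to fix the structure of the definitions; the real conditions are
# substantive philosophical criteria, not boolean flags.
from dataclasses import dataclass, field

@dataclass
class HumanAgent:
    name: str
    understands_system: bool = False      # grasps technical capabilities
    accepts_responsibility: bool = False  # appreciates own moral responsibility

@dataclass
class SocioTechnicalSystem:
    agents: list[HumanAgent] = field(default_factory=list)
    responds_to_relevant_reasons: bool = False  # behaviour tracks human reasons

    def tracking(self) -> bool:
        # Tracking: the system's behaviour aligns with the relevant reasons
        # (values, norms, intentions) of the relevant human agents.
        return self.responds_to_relevant_reasons

    def tracing(self) -> bool:
        # Tracing: at least one human both understands the system's
        # capabilities and appreciates their responsibility for it.
        return any(a.understands_system and a.accepts_responsibility
                   for a in self.agents)

    def meaningful_human_control(self) -> bool:
        return self.tracking() and self.tracing()

# The gap pattern described above: engineers understand but disown
# responsibility; the user accepts it without real understanding.
car = SocioTechnicalSystem(
    agents=[HumanAgent("engineer", understands_system=True),
            HumanAgent("driver", accepts_responsibility=True)],
    responds_to_relevant_reasons=True,
)
print(car.meaningful_human_control())  # False: tracing fails
```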
And indeed, you don't need to go to very futuristic autonomous weapon systems to see this: with very low levels of autonomy you can already have big problems of meaningful human control. Take the Tesla accidents. This one is a minor accident but, as you know, there have been far more tragic accidents, with fatalities, involving Tesla's Autopilot — just as one example. And there the problem was clearly a big moral responsibility gap. From a legal point of view it was settled: the driver was responsible — you should have had your hands on the wheel, case closed. But from a moral point of view this is disturbing, because this driver hadn't received any training and was not fully aware — of course you sign a lot of terms and conditions, as we all do, without reading them — so it is really dubious that he had deep moral responsibility in the sense of knowledge, control, et cetera. So basically we started to realize that in automated driving systems there is a big problem with the definition of human control. This is the standard set of definitions of levels of automation, and what is a bit concerning about it is that it seems to suggest that the more you are on this side, the more control you have, and the more you are on that side, the less control you have — a heuristic that does not necessarily work, because Tesla is here too, and that seems to suggest that the driver is in total control just because he has — is supposed to have — his hands on the wheel. As we have seen, this is not the case. So there can be a mismatch between our definition of control in the technical sense and our definition of meaningful human control, the kind of control that grounds moral responsibility.

And indeed, this is how we non-engineers understand the traditional engineering notion of control: insofar as there is responsiveness of the system's behavior to the action of a designated agent, human or not, there is control. But this is not meaningful human control. Why? Because it applies very well to old-style dumb systems, but it does not apply to today's systems. Here is a metaphor, a variation of Flemisch's horse metaphor. With a tram it is clear who is supposed to do what in order to achieve what; but who is in control of this specific horse? Is it the horse itself, because it is smart, intelligent, autonomous? Is it the specific rider of the horse? Is it the audience around? Is it the person in the audience who trained the horse? And so on and so forth. It is the interaction of all of these elements — how do we define control there? This is the challenge of meaningful human control. Our answer, in a nutshell — our tentative answer, which is a very broad framework to be implemented in different contexts with different tools — is given by the tracking and tracing conditions. A system is under meaningful human control to the degree that it responds not to the actions but to the reasons — at a more abstract level: not the behavior, but the reasons behind the behavior, the values, the norms, the intentions — of some designated human agents, which may be the users, the controllers, or the designers. So we are also broadening the scope of the potential agents who could be deemed to be in meaningful human control of a specific system. And at the same time — this is the tracing condition — there must be at least one human agent who can legitimately be called to answer for the wrong behavior of the system. So we should design systems in such a way that, by design, we can reliably show that the relevant elements of the system respond to the relevant reasons and values of the relevant agents — it's a lot of work, right? That's why we're here — and, at the same time, that there is at least one person — be it the rider, the trainer of the horse, the organiser of the fair, or even that person in the audience, whoever it is — who is entrusted with responsibility, and not only in the legal sense, which is the current solution: let's just decide that you are responsible, you pay, and we will discuss it afterwards. That is a sort of legal scapegoating; we don't want to just decide a priori that someone will pay — that wouldn't be fair. We want to entrust people with real responsibility, in the sense of the capacity and the awareness to realize these responsibility conditions by design. So, in a nutshell, this is the closing message of our talk, with some specific implications. It means that we need
to really have a broader conception of the different technical and human components of a system, moving to a broader understanding of what the system is — not just a device, but the institutional system around it, the network of people, and so on; to identify the social players and the values that we may or may not want, in advance, by design; to design so as to realize this interaction between human and robot — and David will say more about that, I guess; to train humans, also lay people, to realize this control condition — that is more the psychological part of it; to identify the necessary human capacities for, and relevant knowledge of, a given control task; and to create effective mechanisms of public accountability. So — going back to how the story started this morning — this is a very multidisciplinary enterprise, but we hope that with this specific interpretation of the notion of meaningful human control we have contributed some steps forward in this complex task. Thank you.

— Thank you, Filippo Santoni de Sio, and Jeroen before him. We have a few minutes for burning questions; who would like to give a reaction or ask a question to one of the two former speakers? Can you please speak into the mic?

— Great presentations. Is this working? Yeah. Is there an example where meaningful human control has already been applied to a large degree? Many of the aspects you mention are already on the table and have been discussed.

— You mean as applied to some autonomous system that is already out in society? That's a very good question. As philosophers we tend to focus on things that do not work, so it would be nice to have a positive example. Let me think about it. I cannot come up with one specific example, but I can give you the idea. Take something working in a very controlled environment — take automated driving systems, for instance. There is this project in the Netherlands, the WEpods project, which is a very gradual attempt to get to autonomous driving, step by step. There you have a shuttle which is unmanned, but with a remote controller sitting in a control room. Here you have this idea of combining professional expertise — as opposed to a layperson on board — with someone sitting, without pressure, in a control room, operating a system which is moving in a controlled environment, because the environment itself has been designed so as not to present challenges that the vehicle cannot address. I guess this is, in principle, a good idea of a system designed for meaningful human control. Of course, if you look at our graph, this is very much on the safety side and possibly not so much on the efficiency-and-innovation side, because critics will say: if you talk to enthusiasts about self-driving cars, they say we don't need that, we already have trains; if you want to have self-driving cars on tracks, we had better stick to trains. So I do understand that there is a push to go further, but the idea is that you should go step by step: once the thing works in this controlled environment — and this is the challenge of the project — you can slowly start introducing variables and testing them. And the testing part is very
important too. So that would be an example: start from an environment controlled by design, and then remove obstacles.

— Well, that's kind of where we are, right? We have robots in all these controlled environments, and we want them in society right now, so this is where we really need to think about this.

— Right. Right behind you, one last question — Jeroen and Filippo will be around for part of the day, but we can take that question, please.

— I would like to ask a question about this: OK, it's nice to have meaningful human control, but what if you cannot expect meaningful human control? I'm thinking here of, for example, nursing homes, where people with dementia are supposed to be empowered and supported by autonomous or semi-autonomous systems. What then? How does your framework work in that kind of setting?

— So you're assuming that in this setting meaningful human control would not be possible...

— Well, the person with the cognitive impairments cannot fulfil the criteria.

— Not that person, but part of the theory is exactly that you may shift meaningful human control to other persons: you may have controllers sitting somewhere, and you may have the design of the house and of the devices be such as to not allow things that could be detrimental to the privacy or well-being of the person. So my first reaction is: maybe meaningful human control is possible there — we need to study that. And it is a very good example, because if you focus only on the obvious controller, maybe you cannot achieve it, but maybe there is some other way. The other answer is: unfortunately, if at some point it turns out that the technology does not allow for meaningful human control in a critical setting, we may decide not to go for it.

— OK, thank you. Thank you very much to both speakers; let's thank them again.

[Applause]", "date_published": "2019-10-30T09:52:35Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "6c585e9b1de9584f574aa20f4d91172d", "title": "AiTech Agora: Sebastian Köhler - Responsible AI through Conceptual Engineering", "url": "https://www.youtube.com/watch?v=NCv4J4wH39w", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Good. So today's speaker is Sebastian Köhler. Sebastian is assistant professor of philosophy at the Frankfurt School of Finance and Management. A lot of his work focuses on meta-ethical and meta-normative questions, which is super good, but the main reason we invited him today is his work on responsibility gaps — he has several good papers on that. Today's talk is related, though not completely focused on responsibility gaps; the title is 'Responsible AI through Conceptual Engineering'. I'll just give the mic to Sebastian and let him explain what he is going to do.

— Thanks, Herman, and thanks for the invitation; I'm very happy to be able to present this. This is still pretty much work in progress that I'm doing with Johannes Himmelreich from Syracuse, who is not going to be here today but who is also looking forward to the feedback we will receive in this session. So, the title is 'Responsible AI through Conceptual Engineering', and let me start with the introduction. Autonomous weapon systems and self-driving cars:
the prospects of those are increasingly realistic, and these kinds of technologies raise a lot of questions. One of the questions they raise is: what happens — who is responsible — when someone uses a technology like this and the technology causes a harmful outcome? The debate concerned with this question centers on the responsibility gap problem: the problem that there seem to be certain relevant cases where no one is responsible, and that this is somehow problematic. This paper is concerned with that debate, but it is not going to take a stance within it; instead, it takes a different kind of approach to the question.

The paper has two main goals. The first is not very ambitious — I think a lot of people already recognize this, but it still shapes the debate — namely, to make explicit that the debate shouldn't be concerned just with self-driving cars or autonomous weapon systems; instead, we should recognize that if there is a problem here, it is a problem for basically all software applications or robots that engage in some sort of autonomous decision-making. The second, more interesting goal is to argue that the responsibility gap problem can, and actually should, be approached as a conceptual engineering problem. At the end I am going to sketch one way you could conduct the discussion if you see the responsibility gap problem in this way. Basically, our hypothesis is that the debate about the responsibility gap problem is at a stage where the best way — or one very fruitful way — of making progress lies in systematically investigating what concept of responsibility we ought to use. So here is the outline for the talk: first, I will briefly explain how we view the responsibility gap problem; then I will present our argument for the hypothesis that the debate is reaching a point of what we call conceptual stalemate; then we will suggest conceptual engineering as a way out of this stalemate; and I will end the talk with how you might engineer responsible AI through conceptual engineering.

All right, let's go to the first point: what is the responsibility gap problem? People started to recognize this problem with the advent of a new sort of technology. With machine learning and the wealth of data now available, it is becoming possible to build increasingly sophisticated systems that can execute ever more complex tasks. Obviously there are certain things they cannot, and might never be able to, do, but there are more and more things these systems can do, and it is plausible to say of some such systems — we will talk about this a little later — that they are agents, at least in the minimal sense that they can engage in decision-making and act in accordance with the decisions they make. These systems can gather information and process it; they can evaluate that information in the light of
the goals that people set for them; and then, using the information and the goals, they can make decisions and execute them. We can expect that there will be systems like this that take over more and more tasks, and that can handle a relatively broad range of things. Importantly, they will be able to do these tasks autonomously — at least, and this is the only sense we consider here, in the weakest sense that they can do them independently of human interference: no human has to intervene. A self-driving car, for example, will at some point be able simply to drive to the destination you set, without anyone having to interfere in its decision-making. Just for convenience, we will call these systems 'AI'. Obviously that is a stipulative definition — other people might mean other things by 'AI'; we will use the term for these sorts of systems.

So how do these systems relate to the responsibility gap problem? The problem arises roughly like this. Suppose you have an AI — a self-driving car, an autonomous weapon system, an autonomous financial trader, or a healthcare bot — and the AI makes a decision that causes a harm. Assume further that this is a special case where neither the failure that resulted in the harm nor the harm itself could have been foreseen by those who used or designed the AI, and the harm was not intended. (An autonomous weapon system causes harm, but often the harm is intended by the people who built the system — that is what it is for; just assume the harm here is not one that was intended.) The question that arises is: in that case, who is responsible for this harm? The argument that there is a problem here is pushed by looking at all the possible candidates for responsibility. First: could the AI be the responsible agent? It is plausible to say, at least for what we mean by AI, that the answer is no: these kinds of AI simply lack at least some capacity required for moral responsibility. You can of course dispute what exactly that capacity is, but I think it is very plausible that this is the case. But then — and this is how the argument goes on — the problem is that these AI have autonomous agency, and this agency interferes with some necessary condition for attributing moral responsibility to the humans involved. Some arguments focus on the epistemic condition, for example; in this talk and in the paper we focus only on the control condition. You could run the same sort of argument for the epistemic condition or some other condition, but for the sake of presentation we focus on this one. So, long story short: if neither the AI nor any human is responsible for the harm, then it seems there is no one who is responsible for it, and that means there is a responsibility gap. And that is the responsibility gap problem.
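Schematically — this is our compressed reconstruction of the argument's shape, not notation used in the talk — the gap argument runs:

```latex
% Schematic reconstruction of the responsibility gap argument (not from the talk).
\begin{align*}
&\text{P1. } \mathrm{Causes}(\mathrm{AI}, h) \text{ for an unforeseen, unintended harm } h.\\
&\text{P2. } \neg\mathrm{Resp}(\mathrm{AI}, h) \text{ (the AI lacks a capacity required for responsibility).}\\
&\text{P3. } \forall a \in \mathrm{Humans}:\ \neg\mathrm{Resp}(a, h) \text{ (the AI's agency undermines their control).}\\
&\text{C. } \nexists x:\ \mathrm{Resp}(x, h) \text{ --- a responsibility gap.}
\end{align*}
```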
Now, in one sense, that is not yet fully the problem, because you can ask yourself: why is it even a problem that no one is responsible? I have actually grappled with this question a lot, but I am going to say only a little about it in a second — I'm not sure whether I am fully happy with the answers we give, but let's move on.

One thing to note about the way we have presented the argument is that if there is an issue here, it is a completely general issue. The presentation of the responsibility gap itself already supports the first claim we were trying to make: that when we look at the problem, we should focus on all software applications, not just autonomous weapon systems or self-driving cars, because there are many different kinds of things AI could be involved in where it could engage in autonomous decision-making, and nothing singles out AWS or self-driving cars as particularly relevant for the responsibility gap. The argument as I presented it did not even refer to any particular AI application; the only thing that is relevant is that the tasks in question could potentially have harmful outcomes. When an AI engages in autonomous decision-making that could result in harmful outcomes, the responsibility gap problem can arise, and it is possible that many or even all tasks that could be taken over by AI fall into this category. I think it is very plausible that it is many; whether it is all is open to debate — I was thinking that AlphaGo, for example, wouldn't fall into this category, but Johannes thinks that even AlphaGo does, so there we have a slight disagreement. But I don't think it really matters, because it is relatively clear that if there is a problem here, it is a general one, not a problem only for particular sorts of AI applications.

So let's briefly focus on the question: why are responsibility gaps a problem? As I said, I have been a little puzzled by this myself, so I have tried to figure it out, and this will also be relevant later on. I think responsibility gaps are a problem because they can give rise to certain kinds of moral problems, and these problems will often arise when there are responsibility gaps. The first and most obvious problem — and I think this is why Andreas Matthias, for example, thinks responsibility gaps are problematic — is that they simply clash with our sense of justice. Intuitively, when there is a harm caused by an AI, we feel there should be someone who can properly be held to account; there is a mismatch between there being someone who should be held to account and there
being no one who can be. And I think this is actually quite plausible, especially if we consider that, intuitively, harm that results from AI actions isn't really all that different from harm that occurs due to other technological failures; these cases do not seem similar to acts of God or to paradigmatic instances of mere accidents. I think that is why we feel there is something deeply unjust if no one can be held responsible for these kinds of harms. I personally think that is a problem in itself, but of course, if these sorts of cases conflict with our widely held sense of justice, that can also lead to the technology losing legitimacy in the eyes of people.

Another, more specific problem is that responsibility gaps can undermine accountability in public institutions. This does not apply to all AI applications, but to some. Here the problem is that in public institutions there seems to be an important democratic requirement that the people who make decisions can be held responsible for what happens. Given that we can expect AI to be used increasingly in public and administrative decision-making, one problem that comes up with responsibility gaps is that this requirement might be increasingly violated or eroded. There is also the danger of exploitation: politicians can try to use the fact that there are responsibility gaps to obscure, for example, the fact that they are responsible for something — so there is a risk of misuse as well.

And then there are at least two more consequentialist issues. First, given that responsibility gaps might strongly conflict with our sense of justice, and given that they might erode things like public accountability — and maybe in the military we also need something like this — these problems might give us more reason to just ban the use of the technology or severely restrict its use. This is, for example, what Robert Sparrow argues for autonomous weapon systems: he thinks responsibility gaps give us a reason to ban them. But the problem is that using AI will be hugely beneficial. Autonomous weapon systems are a good example: we can expect that if there are ever reliable autonomous weapon systems, they will probably significantly reduce the amount of harm that results from war — maybe; or, a less controversial example, self-driving cars, whose use will probably reduce the number of traffic accidents. So banning this sort of technology comes at a moral cost, because we deprive ourselves of benefits we would otherwise enjoy. Related to this first moral cost, another problem with responsibility gaps is that they can undermine the trust that people have in the technology:
responsibility gaps could lead to widespread mistrust, which would in turn have negative consequences for innovation and for the proliferation of the technology. People might be less inclined to buy self-driving cars if there are responsibility gaps, and again this comes at a moral cost, because it deprives us of the benefits. So — and I think this is relatively uncontroversial — the best situation would be a world in which we can use AI but there are no responsibility gaps: we can use it, and there is always someone responsible for the harm it causes. This leads us directly to the debate about responsibility gaps, because people have discussed the question whether AI actually does create responsibility gaps or not.

This is the debate we now focus on, and, as I said at the beginning, we are not actually going to intervene in it; instead, we are going to argue for the thesis that this debate is reaching a point of conceptual stalemate. So what is a conceptual stalemate? Here we draw on David Chalmers's work on verbal disputes. According to our definition, a dispute about a question reaches conceptual stalemate if it satisfies two conditions: first, the question has been answered in different ways, such that these answers involve different assumptions about the underlying concepts; and second, the dispute over the question is grounded, at least to a significant extent, in disagreements over the content of one or more of the underlying concepts.

Conceptual stalemates are not a rare phenomenon; they are probably quite widespread, at least in philosophy. That is unsurprising, because a huge part of what philosophy does is to try to clarify the content of the core concepts in a debate. Consider, for example, the debate over whether free will is compatible with determinism: one of the first things people do is try to clarify what they mean by 'free will' and 'determinism', and a huge part of the debate is concerned with clarifying these concepts, especially the concept of free will. What happens when people engage in this sort of philosophical work is that they uncover conceptual possibilities. Compatibilists, for example, make a suggestion about the concept of free will on which free will is compatible with determinism; incompatibilists make their own suggestions about its content — maybe it requires some sort of absolute or ultimate control, or something along those lines. What people are doing when they try to clarify the content of these concepts is discovering different possible ways of understanding the core concepts of the debate. At some point, the conceptual choice points become relatively clear.
Of course, further clarification is always possible, more work can always be done, and maybe there are conceptual possibilities that have not been uncovered; but if enough people work on these issues, as in the free will debate, it becomes relatively clear what kinds of choice points you have: do you think free will is this, or that, and so on. What can then emerge is a situation where it is actually possible to answer the question in the positive or in the negative, depending on how you understand the concepts. For example, if you take Mill's view of free will, you can just say: yes, free will is compatible with determinism — if we accept this unpacking of the concept. And if we are in such a situation but the dispute persists, that is likely due to disagreement about the concepts; what we have then is a conceptual stalemate: a disagreement where it is relatively clear that you can answer the question in different ways if you understand the concepts in different ways, and where there is disagreement about how the concepts are to be understood.

Now, on our view, conceptual stalemates are not actually bad; they are good, in the sense that they can lead to philosophical progress, because once you have uncovered the conceptual possibilities, you can ask which of those choices is the correct one — and in the third part of the talk I will say what that means. But first let me try to make plausible our suggestion that the responsibility gap discussion is also reaching a point of conceptual stalemate — not in the sense that all the conceptual possibilities have been mapped in the most precise detail, but in the sense that it is relatively clear what kinds of choice points there are; that the question can be answered in the negative or the positive depending on the view you take of the concepts; and that the disagreement between the positions likely consists in disagreement about how to understand those concepts.

We think it is plausible that whether or not there are responsibility gaps depends on the content of certain concepts that are central to the discussion — in particular, since we are focusing on the control condition, on the content of the concepts of responsibility and of control. And in the debate, different participants have in fact put forward different contents for these concepts. So first, regarding the control condition, I think there are a couple of important choice points you face when considering how to understand the concept of
responsibility. The first and most important choice point is whether you think that responsibility, in the sense at stake, requires control at all. Most people in the debate about responsibility gaps think it does, but there are exceptions. One exception might be Hellström, who — on one way of understanding him — thinks that what is required is not control but autonomous power, and that the degree of autonomous power determines whether or not you are responsible. He uses this idea to argue that, contrary to the assumption we made earlier, the AI itself could be responsible. And in a recent paper, Tigard has argued that we shouldn't focus just on responsibility in the sense of accountability but also on other forms of responsibility practices, like answerability and attributability. What is noteworthy — and this also comes up in the free will and moral responsibility debate — is that these other forms of responsibility, especially attributability, do not require control to the same degree that accountability does. So if responsibility in the relevant sense here isn't accountability but some other sense, then maybe you can deny the control condition as well. And importantly, given that the problem that gave rise to the responsibility gap was that the autonomous agency of the AI somehow interferes with the control condition, if you deny that responsibility requires control, then it is very likely that you can just avoid the responsibility gap problem.

This brings us to our second conceptual choice point. If you concede that responsibility requires control, the next question is, of course, what kind of control it requires, and here too different suggestions have been made. The central issue, I think, is how the control you take to be required for responsibility interacts with the agency of the AI — specifically, whether the agency of the AI can be seen as intervening agency, such that it undermines the responsibility of any other agents. Paradigmatic examples of intervening agency are, of course, human: when another person's action stands between me and an outcome, it is often very plausible that their agency intervenes, making it such that I am no longer responsible, or less responsible, for what happens. So the question here is whether the sort of control required for responsibility is interfered with by the agency of the AI. Basically, everyone who thinks there is a responsibility gap problem relies on an understanding of control on which the kind of agency the AI possesses is intervening agency. Matthias, for example, says that nobody has enough control over the machine's actions to be able to assume responsibility for them; Sparrow says that military personnel will be held responsible for the actions of machines whose decisions they did not control. Both authors rely on an understanding of the control required for
responsibility on which the kind of agency the AI possesses will interfere with the control that the human operator has — and of course, if you understand control in this way, it is very likely that you will get responsibility gaps. But there are at least two ways you could avoid them. First, you could concede that agency intervenes with control but take a particular stance on the content of 'agency', saying that an agent just is an entity that can be held responsible. On this approach you also get no responsibility gap: if the AI is an agent, then sure, it interferes with the humans' control, but there is no gap, because now the AI is responsible; and if the AI does not qualify for agency, then there is nothing that intervenes with the relevant sort of control that the human has. Second, you could take a view of control on which the agency of others does not interfere with the control required for responsibility. There are many ways to do this; here are just two possibilities. You might think that we have control over probabilistic outcomes, and that the control we have over probabilistic outcomes, or over risk impositions, is sufficient for responsibility. Or you could use an even weaker notion of control, saying there is such a thing as supervision, which is a relevant sort of control but different from the sort that Matthias and Sparrow presuppose. In all of these cases you get the result that there is no responsibility gap, because we do have these weaker forms of control over the actions of the AI.

As I said, it is not our aim to argue for any of these views here. The point is only this: it seems very plausible, looking at this debate, that whether or not there is a responsibility gap depends on conceptual issues. It depends on questions like: does responsibility require control? What kind of control does it require? What sort of agency does AI possess? And all of these depend on how we understand the concepts in question. The literature has already explored the relevant conceptual alternatives to some degree — this is an ongoing debate — and the questions have been answered one way or the other: some people say yes, there are responsibility gaps, and some say no, but it always depends on the concepts they presuppose. Plausibly, if the disagreement persists and we can assume it is partially grounded in disagreements about the concepts, that supports our first thesis: the dispute is approaching a conceptual stalemate.
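A compact way to see that claim — the encoding below is our hypothetical summary of the positions just surveyed, not anything from the paper — is that the "gap" verdict falls out of three conceptual choices:

```python
# Whether a responsibility gap appears depends entirely on how three
# conceptual choice points are settled. The boolean encoding is ours,
# a hypothetical summary of the surveyed positions, not from the paper.

def responsibility_gap(requires_control: bool,
                       ai_agency_intervenes: bool,
                       ai_counts_as_responsible: bool) -> bool:
    if not requires_control:
        # e.g. autonomous-power or attributability readings: no gap argument
        return False
    if not ai_agency_intervenes:
        # e.g. probabilistic control or supervision suffices: humans stay in
        return False
    # control is required and AI agency cuts humans off from it...
    if ai_counts_as_responsible:
        # ...but if "agent" just means "responsible entity", the AI fills in
        return False
    return True  # Matthias/Sparrow-style concept choices: the gap appears

print(responsibility_gap(True, True, False))   # True: a gap
print(responsibility_gap(False, True, False))  # False: no gap
```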
That raises, of course, the question: what should we do when our debate has reached this point? One thing we should do, obviously, is go on and further investigate the conceptual choice points. But another thing we should do is focus on the question of which conceptual choices regarding responsibility would be correct. And that raises yet another question: what do we mean when we say that conceptual choices are correct? There are basically two sorts of answers you can give.

One answer is: what we must do is figure out what our actual concepts are — what do we mean when we say, or think, that someone is responsible; what concept are we using? In some sense, this is probably what the debate, charitably construed, is about. But I think there are at least two problems with this first answer, which are reasons not to take it. The first problem is that — and I think there is very good evidence for this — it is not very plausible that the question of what our actual concept of responsibility is can be resolved to a satisfactory extent. The reason is that the concepts and words we actually use have many different aspects that pull us in many different directions, and it is very likely that whatever analysis you impose will strain against some relevant aspect of the content, so that no analysis of the actual concepts will be fully satisfactory; something will always be missing, because the content of our actual concept is probably vague or indeterminate to some extent. Secondly, and more relevantly, it is doubtful that pointing to our actual concept has the right sort of normative significance to resolve the dispute we are engaged in. Suppose it turns out that our actual concept of responsibility is such that there are responsibility gaps. We can then legitimately say: so what? Why does that matter, given that we have discovered other possible concepts that don't give rise to responsibility gaps — why shouldn't we go for one of those? Just saying 'but that's what we mean by responsibility' is not going to be a satisfactory answer.

That brings us to the second way of understanding the claim that we should figure out which conceptual choices are correct: we should figure out what concepts we ought to use — that is, we should engage in conceptual engineering. Conceptual engineering, as we understand it, is a methodological approach concerned with concepts and the words we use to express them, which urges us to consider not just what concepts we actually use, but also which concepts we could use and which ones we ought to use. It aims to evaluate and improve our conceptual repertoire. The basic idea behind our adoption of conceptual engineering in this context is that words and concepts are things that do things for us: they are useful
things, and we have interests that are at stake when we use them. These interests can be representational: sometimes when we use a word we are interested in picking something out in the world, maybe a theoretically interesting kind. But sometimes our interests are practical: maybe we use a word to justify what we are doing, or to insult someone; these are also relevant interests we have in using a concept. Given that we have these interests, we should ask normative questions — specifically, two relevant ones here: first, which of these interests are the most important, that is, what should the concept do for us? And second, which concept should we use, given that these are our most important interests?

I think it is relatively clear what is attractive about conceptual engineering: it avoids the problems that the alternative has, which makes it a better answer to the request for the correct choice of concepts. But that also means that, given that the correct choice of concept is determined by conceptual engineering, and given that the responsibility gap debate is reaching conceptual stalemate, we should see the responsibility gap problem as a conceptual engineering problem: what we should focus on is what the content of the underlying concepts should be. This suggestion of course raises a lot of methodological questions, and they are all fair questions, but I am not going to address them here — partly because I am basically out of time, and partly because that is not what we do in the paper. Instead, I want to end by very briefly sketching one way you could make progress on the responsibility gap problem using conceptual engineering.

To start, let me introduce the notion of a responsibility concept: any concept that can feasibly regulate the set of emotional and practical responses and practices that we associate with moral responsibility. Conceptual engineering for responsibility will be concerned with which responsibility concepts should regulate our responsibility-related practices, and to determine that, we need to figure out what interests are at stake when we use responsibility concepts. If we want to use conceptual engineering to address the responsibility gap problem, what we need to argue is that our most important interests favor conceptual choices that close responsibility gaps. One thing that already speaks in favor of any concept that does so are the kinds of practical issues highlighted earlier — the problems we face when there are responsibility gaps. But we still need to figure out whether there are other interests that pull in different directions, and
how\nstrongly they do so\nand that raises the cost question what\nare the important functions\nthat responsibility concepts might play\nand\nwe now have a list like we have six\npossible functions\nthat responsibility concepts could play\ni i'm just going to\num like fo fo\nbriefly mention them and then go on to\num how you would\nhave to argue based on this okay\nso first so most important and most\nimportantly for our discussion\nis um the ledger function of\nresponsibility concepts which is\nresponsibility concepts can be used to\nfacilitate a cert a form of moral\naccounting right so they\nallow us to keep books on what can be\nattributed\nto whom and here basically the ledger is\na metaphor\nand it's a metaphor an overall\nassessment of a person's conduct\nif you take the ledger function\nseriously then\nso on a ledger function the\nresponsibility concept will pre\nattributing responsibility presupposes\nwrongdoing\num and so what's it will be part of the\nresponsibility\nconcept that is will be concepts of\nright and wrong now\nthat's one function that responsibility\nconcept can concepts can play but there\nare many other functions as well so\nthere's for example\njustificatory answerability functions so\nbasically allowing us to figure out\nwho has to answer for what happened or\nwho has to offer\njustification for what up happens there\nmight be retributive functions\nright so we just use responsibility to\npick out someone who has to be punished\nfor what happens right there are\ncommunicative educational functions so\nwe use responsibility concepts to\ncontinue\nto express and communicate moral norms\nin our\ncommunity there's incentive functions\nwhere we use responsibility concepts to\nbasically impose\nincentive structures giving people\nreasons to act in certain sorts of ways\nand there are compensatory functions so\nmaking sure that those who have suffered\ndamages are\nadequately compensated\nnow the reason why i rushed through\nthese\nthese two so i rush to these to save\ntime but also because\nthe point that i want to end the\ndiscussion with is\nuh can be made with just this sketch\nbecause\nit seems very plausible that you get\nresponsibility gaps\nfor responsibility caps that are most\nprominent that most closely fit the\nledger role\nright so the ledger role presupposes\nthat people are responsible only if they\ndid something wrong\num and that of course requires the\nrelevant the right kind of causal\nconnection between their action and\nthis kind of thing that we can plausibly\nsay that what they did was wrong\nso if responsibility gifts are most\nlikely to arise\nwhen we give the letter role prominence\nthe other kinds of fun concepts don't\ngive rise to responsibility gaps\nmost more easy as easily and there are\nactually some of these that\nwould specifically favor concepts that\nwould avoid responsibility gaps\nright so for example the compensatory\nfunction or the retributive function\nright so making always sure that\nbad deeds go don't go unpunished\nbasically\nany concept that fits that role is going\nto\nnot allow for responsibility gaps so\nwhat we have to do is\nif we want to engineer away the\nresponsibility gap problem\nwe have to argue that the cost is\nbearable\nwhen we accept a responsibility concept\nthat discards the ledger function or\nrestricts its relevance\nso that's the way that is i we think is\nthe way forward if we want to engineer\nourselves out of the responsibility\nproblem now that suggestion seems quite\nplausible actually\num and it 
and it looks plausible because, if you look at the debate about responsibility gaps, there are lots of suggestions for concepts that would allow us to avoid responsibility gaps but that do seem to serve the kinds of practical interests we have in our responsibility practices. So prima facie, at least, it is quite plausible, but more work needs to be done that I cannot do here. I think I'm five minutes over, which I apologize for, but I'll end the discussion here and stop my screen sharing now.

Great, thanks Sven, thanks very much. I thought it was a super interesting, very good talk and presentation. There are many things to think about. Are there any questions? Just raise a hand or queue in the chat. If it's just a clarificatory question, that's fine as well. You can also think about a question I could ask. Oh, Luciano, go ahead.

Thanks Sven, that was really interesting; I really enjoyed this very nuanced discussion of what is meant by responsibility. During your talk you made some connections: autonomous weapon systems and automated vehicles have some kind of agency, call it AI; they cause harm; responsibility gaps; and then justice, and all the harms caused by a lack of justice. You focus on conceptual engineering for the responsibility gap, which sits in the middle of that chain. But I was thinking: you make the connection from automated vehicles and weapon systems to AI using, for example, the concept of agency. Wouldn't that also require some conceptual engineering? So where do we stop? You need some assumptions to even get to the responsibility gap.

First of all, thanks; I think that's a good question. Where do we stop? At some point, when we're doing conceptual engineering, we probably have to stop at basic normative concepts, at least when we're focusing on specific normative questions. Obviously we can also engineer our normative concepts, but then we get never-ending engineering. But yes, I agree that we also need to think about the concept of agency at play here, and how it connects to control; I tried to highlight that in the middle part as well. The concept of agency I used at the beginning is a very minimal one, on which you are an agent basically when you are involved in decision making of a certain sort. Of course you could say that's not really agency, not the relevant kind of agency that matters for control, and that is in a sense also an engineering question. So where does it stop? When you look at a specific debate, you should look at the central concepts the debate is concerned with, and then consider those. But you are right that it expands very fast. One thing I didn't talk about so much is that what we say about this debate has implications for other things as well: fairness, justice, harm, and then we can go all the way there.

Nice, thank you.

Good, maybe to follow up on that. Our conceptual engineering in those cases has consequences for other contexts as well. Maybe we find out that responsibility in this context means something particular, and, if I understood you correctly, you think this automatically means that the same conception of responsibility is also applicable in another context. I would be very skeptical about that claim. Is that what you want, is that one of the things this discussion teaches us, how to use responsibility in other contexts? Or do you allow for some kind of conceptual pluralism, on which different conceptions might be appropriate in different contexts?

No, I think that's a great question. I have always assumed that we just want the same concept for all contexts, but now that you ask, I basically don't see why that should be the case. What we do want are concepts that regulate our responsibility practices: our blaming, praising and criticizing practices. But I think it's actually quite plausible that for different sorts of assessments we need different sorts of responsibility concepts, as long as it's relatively clear which concept we're using for which context. So I do agree that we should probably be pluralists to some extent. Still, what we have here is responsibility when using a technological application, and it seems very plausible that the standards that apply in that case should also apply to other cases. Is that helpful?

Yeah, that is helpful. It might of course be interesting what then determines this: there must be some overlap between different contexts if there is good reason to assume that a conception appropriate in one context is also appropriate in another. That probably needs more work. Thanks so much. As usual, it's already two minutes past the time we should end; I thought that was because we started five minutes late. That's right. So if there are people who want to ask a question and you have time, I'm more than happy to continue; I'm just sitting at home anyway.

Herman, you have a comment from Diana in the chat.

Oh good, thanks. Diana, do you want to ask the question, or shall I just read it out loud? I can't see the chat. Okay, I'll just read it out loud: so we may need to better, or more thoroughly, engineer the concept of conceptual engineering itself, so that we don't end up in an infinite recursive engineering loop, somewhat ironically.

Yeah, good. I think Chalmers made some similar remarks. Do you agree?
Yeah, there is a problem there. As a meta-ethicist I have heard about this a lot, because engineering the normative concepts using those very normative concepts is going to raise some really tricky issues. But I think if we can resolve that, everything else is going to be fine.

That's a big if, though. Sorry to just barge in.

No, I appreciate it; I agree it is a big if. So it's meta-ethics first, then.

Yeah, meta-ethics first, obviously. I was actually at a workshop where Matti was there, heckling, and someone was making an argument (just to bring in a little bit of meta-ethics at the end) that probably expressivism has to be true about the most fundamental oughts. So that's why I'm fine with this.

Good, I understand. I think we've already moved too far from our original topic, so let's end here. Thanks Sven, I really liked this talk.

Thank you, thanks for the invitation, and thanks for the questions. And, I mean, apologies that", "date_published": "2021-06-15T11:20:40Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "a4b3bf5b77810c82a71803df8d74c7a6", "title": "Responsibility for outcomes when systems are intelligent (Nir Douer and Joachim Meyer)", "url": "https://www.youtube.com/watch?v=O--QL4SRGgI", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Nir will give the presentation, and I'll be sitting in the back; maybe I'll try to answer some of the questions if I think it will help, although he will do this excellently by himself. So again, thanks for having us.

Hi everybody, my name is Nir, I'm from the department of industrial engineering at Tel Aviv University, and we were asked to present some of our work on human responsibility in intelligent systems. This is part of my PhD dissertation, and my promoter is Professor Joachim Meyer. So let me begin.

Intelligent systems have become a major part of our life, and we can find them everywhere: in transportation with autonomous vehicles, in aircraft, in industry, in medical equipment. Almost anywhere you look you see some intelligent system or very advanced automation. With these systems, computers and humans share the collection of information, its processing, decision making, and the implementation of actions, and this really raises the question: who is responsible, and to what extent? These are the things we try to figure out in our study.

In the past, things were much clearer. The operator was responsible for anything that happened during operation, except under unforeseen circumstances, while the manufacturer was responsible for anything related to system design faults. Now, however, there is a responsibility gap: in the interaction with intelligent systems, humans may no longer be able to control the system sufficiently to rightly be considered responsible, or fully responsible, for the outcomes. This responsibility gap arises from the combination of a few factors.
First, as these systems become more and more complex, there is a transition to shared and supervisory control, in which humans either decide and act jointly with the system or only monitor it and intervene if necessary; the level of control shifts between human and machine. Secondly, there is technological complexity: systems that incorporate artificial intelligence have a somewhat opaque structure, and the user, and even the developer, cannot always predict all of their behavior, which can sometimes be peculiar and unexpected; they are a kind of black box. Users of decision support systems base their decisions on the information supplied by the system, so the human decision process is influenced by what the machine presents to the human. There is the issue of function allocation, which I will talk about later: a mismatch between the role we assign to the human and what we authorize the human to do with the system. There is what is known in automation as the problem of the last word: for example, Airbus airplanes may limit the pilot from taking certain actions if the airplane concludes that the pilot is going to take the aircraft outside of the safety envelope. And there are negative implications of automation, especially of advanced automation and intelligent systems, for the user, because these may lead to over-reliance, skill degradation, and loss of situational awareness.

This is not just a puzzling academic issue; it matters very much in real life, especially if the outcomes of the system can harm people, as with autonomous vehicles, or if the system is deliberately designed to inflict lethal force, as may be the case with lethal autonomous weapon systems.

Before I proceed, I want to explain what type of responsibility we are talking about, because there are different types. There is role responsibility, which is the set of duties or roles assigned to a person when interacting with the system. Causal responsibility is the connection between a person's actions and the consequences of those actions. Moral responsibility is the responsibility to act according to some moral code. Liability is legal responsibility, usually connected to things like punishment and compensation. And capacity is the psychological ability of a person to be held responsible for their actions. All these types of responsibility can be looked at in a retrospective manner, where I look at past events and try to figure out what the responsibility was, or in a prospective manner, where I try to predict what the responsibility will be, for example in the interaction with some intelligent system.

If you look at the academic literature on responsibility, for example human responsibility with autonomous weapon systems or autonomous cars, you see extensive philosophical, ethical and legal literature about moral responsibility and liability, but we found very little research on causal responsibility and on examining this type of responsibility from an engineering perspective. So our work deals with causal responsibility: the ties between a person's actions and the final consequences of those actions.

This is related to the subject that you are investigating, which is meaningful human control, because meaningful human control is related to the notion that it is not enough to simply put a human in a system in order for that human to have some meaningful influence on it. In the literature there is much work on meaningful human control in many systems, such as medical equipment, autonomous weapon systems and autonomous vehicles, but there are different and contradicting interpretations of policies regarding meaningful human control, and system designers lack models and metrics to measure how meaningful the human control in an intelligent system actually is.

As I said, we try to measure causal responsibility, and a measure that quantifies causal responsibility can assist in evaluating meaningful human control: if I, as the human operator, didn't have any effect on the outcomes of the system, then my involvement or control was not really meaningful. So a measure of causal responsibility can aid in evaluating meaningful human control.

In the background, by the way, you can see our faculty. It's very nice and sunny here today; I don't know what the weather in Delft is, but here it's 30 degrees, and in about 10 minutes you can reach the beach from the faculty, so we're thinking about what to do after this presentation.

Our research has three main components. First, we developed a normative analytical model, a mathematical model, which I'll present in essence, that tries to measure causal responsibility in intelligent systems. However, people unfortunately do not always act optimally according to theoretical models, so the second component is to observe the empirical behavior of humans in laboratory experiments. And lastly, people might perceive their responsibility, their contribution to the outcomes, at a different level than what they actually contributed, so it was also interesting to assess humans' perception of responsibility when they interact with different intelligent systems. Combining all these research components, we can figure out the notion of human causal responsibility in intelligent systems in all its different aspects.

I'll talk about each component very briefly; we published quite a few papers on each of them, plus some conference presentations, so you can find all the fine details in those publications. I'll only explain the motivation and the essence of each component, starting with the theoretical model. Remember, our aim is to find a measure that quantifies, puts a number on, how much a human contributed to the outcomes in the interaction with an intelligent system; this is causal responsibility.

The model is built as follows. (Sometimes, instead of "intelligent system", I will simply say "automation" for brevity, but it is all the same.) We first describe the human-automation interaction by four consecutive stages of information processing: information acquisition, information analysis, action selection, and action implementation. As in shared control and other types of control, the human and the automation work together in each of these consecutive stages, but the level of automation can vary according to the specific system you are describing: some stages can be completely manual while others are completely autonomous.
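(As a minimal illustrative sketch, not taken from the papers: the staged structure and per-stage level of automation just described can be written down directly. The names `Stage` and `Allocation` and the numeric levels below are my own assumptions, chosen only to make the idea concrete.)

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    """The four consecutive information-processing stages described above."""
    ACQUISITION = 1        # gathering observable parameters from the environment
    ANALYSIS = 2           # assessing which state of the world we are in
    ACTION_SELECTION = 3   # choosing what to do
    IMPLEMENTATION = 4     # carrying the chosen action out

@dataclass
class Allocation:
    """Level of automation per stage: 0.0 = fully manual, 1.0 = fully autonomous."""
    levels: dict

# A hypothetical decision-support (alert) system: automated analysis,
# but the human selects and implements the action.
alert_system = Allocation(levels={
    Stage.ACQUISITION: 0.5,       # human and sensor both observe
    Stage.ANALYSIS: 1.0,          # the alert module classifies the state
    Stage.ACTION_SELECTION: 0.0,  # the human decides accept/reject
    Stage.IMPLEMENTATION: 0.0,    # the human presses the button
})
```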
This is just a schematic picture of the human-automation interaction. To it we add variables that describe the information flow from the environment into the combined system of human and automation, and out to the environment. Let me explain some of these variables. We assume that the environment includes N possible states that differ from each other, and each state has different observable parameters. These parameters can be observed by the automation, by the human, or by both. For example, if the state is an airplane that I want to detect in the sky, the automation may include a radar and look at the radar signature of the airplane, while the human cannot observe radar signatures, so he will search for the electro-optical signature of the airplane; there is also its sonic signature. So each state in the environment has parameters observable either by the automation or by the human.

These parameters are acquired by the automation module and by the human. Then, in the second stage, the information analysis stage, both the automation and the human try to figure out which state they are confronting and to assess it. Based on this analysis, an action selection process is carried out, and finally, from the combined action selection of the automation module and the human, a certain action is implemented. Of course, this graph portrays all the possible information flows; in specific systems the graph is much simpler, according to the design of the system and the function allocation between the human and the system, so it is not always this complex. The implemented action also depends on the function allocation, because sometimes the human and the automation may decide on different actions, and then the issue of the last word enters: who decides in case of a conflict? Sometimes the automation can also act faster than the human can intervene, for example automatic brakes in cars that fire before the human can get involved. With such a figure of information flow you can describe many types of systems. So we have information coming from the environment, processed inside the combined system that includes the automation and the human, and something coming out to the environment.
To describe this information flow and analysis, we use information theory to analyze all the interactions and interdependencies between those variables. I know that some of you are not from an engineering background; I'm going to use the notion of entropy a lot, and you don't really need to go into the formulas, but when I say entropy, treat it as a measure of the uncertainty related to a random variable: large uncertainty means large entropy.

So we have all this information coming in, processed in the system, and coming out, and we define the measure of human causal responsibility as follows. We look at the implemented action, denoted Z, and the human's share of the contribution to the implemented action is the conditional entropy of Z given all the automation variables, divided by the original entropy of Z:

  Resp = H(Z | automation variables) / H(Z)

Although it looks complex, this measure is quite simple: it relates the outcomes to the information parameters processed by the automation, and what is left over is the human contribution to the outcomes. There is a known theorem in information theory that conditioning can only reduce entropy, because when you know something, your uncertainty can only decrease or stay unchanged; so this fraction ranges between 0 and 1. When it is zero, that is, when the implemented action depends only on the automation variables, the numerator of the fraction equals zero: if I know what the automation did, I know what comes out, so the human contributed nothing meaningful to the outcome, and the human's causal responsibility for the outcomes is zero. On the other hand, if knowing the automation variables tells me nothing for sure about the outcomes, the conditional entropy remains the original entropy: the outcome is independent of the automation variables, and the human has the full contribution; he is the one who really determined the outcomes of the system. So this is very intuitive: a responsibility measure that ranges between zero and one and measures the unique contribution of the human to the process, how unique the human's contribution was.

The use of information theory has many advantages. To measure the flow of information I need to assume nothing about the rationality or behavior of the human or about the underlying distributions, and entropy can capture very complex associations between the outcomes and the other parameters, associations that may be non-linear and non-metric, so it is much broader than, for example, Pearson correlation or other correlation methods. It is also very applicable to real-world systems: even if I know nothing about a system's internals, if I can measure the relevant distributions, I can measure the associations, using entropy, and the unique contribution of the human.
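As a hedged illustration of how this measure could be estimated from logged interaction data (this is a minimal sketch of my own, not code from the papers; the function names and the (a, z) trial encoding are assumptions), entropies are estimated from raw counts of pairs of automation variables and implemented actions:

```python
import numpy as np
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a distribution given as raw counts."""
    p = np.array([c for c in counts if c > 0], dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def human_responsibility(trials):
    """Estimate Resp = H(Z | A) / H(Z) from (a, z) pairs, where `a` bundles
    everything the automation observed/decided on a trial and `z` is the
    implemented action."""
    h_z = entropy(Counter(z for _, z in trials).values())
    if h_z == 0.0:
        return 0.0  # the action never varies, so there is nothing to attribute
    groups = {}
    for a, z in trials:
        groups.setdefault(a, Counter())[z] += 1
    n = len(trials)
    # H(Z|A) = sum over a of p(a) * H(Z | A = a)
    h_z_given_a = sum(sum(g.values()) / n * entropy(g.values())
                      for g in groups.values())
    return h_z_given_a / h_z
```

If the human always follows the automation, H(Z|A) is 0 and the measure is 0; if the implemented action is statistically independent of the automation variables, H(Z|A) equals H(Z) and the measure is 1, matching the two extremes just described.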
To clarify this a little more (we are still in the theoretical model) and to see responsibility with our own eyes, let's look at a very simple example: a binary classification system, or a binary alert system, which is the same thing. These are systems that look for abnormal values and warn the user that something is in an abnormal range or that something is wrong. You can find such alert systems in very many applications: in advanced control rooms, in aircraft flight decks, in your car, everywhere.

The aim of the system and the human is to identify, and in this case reject, signals. For simplicity we assume that the environment includes only two types of states: signal, which occurs with a certain probability, and noise. As I said before, each of them may be measured through some observable parameters. The alert module looks at the parameters it can observe and decides whether or not to issue a warning to the user. The human user looks both at the indications coming from the alert system and at the observable parameters, and decides whether to accept or reject the state (and remember, our aim is to reject signals). The human is the one who actually presses the button, performing the action of acceptance or rejection.

So the human has the role of making the acceptance or rejection, but the decisions the human takes depend not only on his own information but also on the information he gets from the alert module. Depending on the performance abilities of the alert module and of the human, it may be that the human relies fully on the alert module, and then, although he is the one pressing the button, he is just following the alert indication and has no real contribution. We are able to measure this.

To calculate the entropies for such a simple system, it is very natural to employ signal detection theory, which deals with exactly this kind of situation. Again, some of you might not know signal detection theory, so I'll just point to two parameters that are important for understanding the numerical outputs I'll present. In signal detection theory, for this simple case of only signal and noise, the assumption is that signal and noise each have a distribution over an observable measure, the signal strength, but the distributions overlap: often, when I look at the observable measure, I'm not sure whether I'm seeing a signal or noise; there is ambiguity.

Signal detection theory separates two things, and it is very instructive to analyze matters this way. The first is the detection sensitivity of the sensor: its ability to differentiate between signal and noise, that is, how good I am, looking at the observable measure, at saying "this is noise" and "this is a signal". This is one thing that characterizes operational ability. The second parameter is called the response criterion: the motivation, or bias, to favor one response over the other. It incorporates preferences, the values of correct and incorrect decisions. Let me explain the intuition. My detection sensitivity may tell me that there is a rather small probability, say 10 or 20 percent, that what I'm observing is a signal. But suppose the signal is a malignant tumor that I want to detect; the cost of a missed detection is then very large, so I will be biased to treat the observed entity as a signal even though I'm not sure it is one, because of my preferences, the high cost of a missed detection. In other systems the emphasis is on reducing false alarms, because with many false alarms you get the cry-wolf phenomenon and users no longer trust the system. So the response criterion describes the bias to favor one response over the other.
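To make the two parameters concrete (a standard textbook computation, sketched here in code; the 85%/20% rates are made-up numbers): under the equal-variance Gaussian model, d' and the criterion c follow from a hit rate and a false-alarm rate via the inverse normal CDF.

```python
from statistics import NormalDist

probit = NormalDist().inv_cdf  # inverse standard-normal CDF

def sdt_parameters(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian signal detection theory: returns (d', c).

    d' is the detection sensitivity (separation of the signal and noise
    distributions, in standard deviations); c is the response criterion
    (negative c means a liberal bias toward responding 'signal')."""
    d_prime = probit(hit_rate) - probit(false_alarm_rate)
    criterion = -0.5 * (probit(hit_rate) + probit(false_alarm_rate))
    return d_prime, criterion

dp, c = sdt_parameters(0.85, 0.20)
print(f"d' = {dp:.2f}, c = {c:.2f}")  # d' ≈ 1.88, c ≈ -0.10
```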
So for the system I portrayed, if I put in numbers for the detection sensitivity of the automated module, the detection sensitivity of the human, the response criteria, and the signal and noise distributions, I can plug all of that in and compute a number that says: this is the (average) human causal responsibility for the outcomes in this type of system with these performance characteristics.

That is exactly what you see in this graph, where, for the first time in history, you can see responsibility with your own eyes, numerically. We plugged in some numbers. On the axes at the bottom are the human detection sensitivity and the automation detection sensitivity. For example, when the human detection sensitivity is rather low, 0.6, and the automation detection sensitivity is very high, 3 (all measured in standard deviations), the resulting human responsibility, after doing all the entropy calculations, is zero. This is quite intuitive: if the alert system is much, much better than the human, and the human cannot discriminate between the states, he will always follow the indications and recommendations of the alert system, and by himself has no meaningful contribution to the process; I could replace him with a robot that follows the alert indications and get the same result. At the other end, the bottom right corner: if the human has very good detection sensitivity but the system has poor detection sensitivity, the human says "this system is lousy, I cannot count on it, I will rely only on myself", follows only his own judgment, and is then actually responsible for 100 percent of the outcomes.

There is a mathematical proposition, which we prove in the paper and which is also very intuitive, that human responsibility decreases monotonically with the automation detection sensitivity and increases with the human detection sensitivity: as the human's detection abilities get better, the human tends to assume more responsibility, and as the automation's detection abilities get worse, the human relies less on the system and assumes more responsibility. More precisely it depends on the ratio between the two values, but I won't go into that now.

In this graph I assumed that the human and the automation have the same response criterion, the same incentives. We also analyzed situations in which they have different response criteria; in that figure the axes are the human and automation response criteria. I will not go into it, but it adds another complication: if the automation's response criterion, its motivation, is far from the user's, the user tends to rely less on the automation, because it reflects different incentives, and tends to intervene more and assume more responsibility.

So what can we learn from this theoretical model? First, we devised a new measure that really quantifies, puts a number on, the level of comparative human contribution in determining the final outcomes of the system, which is causal responsibility. Secondly, we saw that human causal responsibility depends on the combined, convoluted characteristics of the human, the system and the environment. And we saw in the example that the human's unique contribution to the outcomes, which I defined as causal responsibility, is higher when the human's capabilities are superior to those of the system, or when the two have different preferences, because in those two situations the human relies less on the system: he either has better abilities or different preferences, so he relies more on himself and takes more actions that differ from the system's recommendations, which contributes more to the outcomes.

This has a very interesting implication: as technologies develop and outperform humans in many critical functions, the human's unique contribution diminishes. There is a very common demand, in many systems, to always involve a human in the loop, but simply demanding that does not assure that the human has a meaningful part in creating the outcomes, even if important functions are allocated to the human. Even if, as in my example, the human has to press the accept or reject button, if he only relies on the indication from the automation and never deviates from it, he has no meaningful contribution; he is just a mechanical means of translating the automation's output into action. Putting him in the loop does not really help. So current policies that demand a human in the loop may create a mismatch between role responsibility and causal responsibility, and this is important because you can then hold a human formally responsible when he actually has little real influence, and you may expose the human to unjustified legal liability and psychological burdens.

Now I have a dilemma, because we have about 20 minutes left: we can pause for a few questions, or I can show some empirical findings and take more questions at the end.

Okay, we already have quite some questions in the chat. How much time do you need?

I think I need another 10 minutes.

Okay, then let's first take maybe two questions on the model; for that we have four minutes. We'll skip David's first comment, which was posted when you talked about the weather in Tel Aviv: he said, "Nir, that's just cruel." Maybe no need to respond to that.

I do have a comment on that comment: in our university the air conditioning is still tuned to winter, so it's 30 degrees outside and the air conditioning is set to heat. So if you see me sweating now, that's the real reason.

Okay, but then, one question related to the model is from Garrett. Do you want to ask the question in person?

Yeah, will do. Thank you, cool stuff. I did wonder: it seems that you assume that the contribution of the human is independent of the state, while I could also imagine scenarios where what the human does is sort of
complementary to what the system does, where in some cases the performance of the human is better and in other cases that of the system is better. So my question is: how do you take into account that responsibility may actually differ from state to state?

The example I showed was very simplified, in order to plot graphs, but if you look at the main diagram I presented, which captures all the information flows, it can also portray situations in which, in certain environmental states, the automation is better and the human is inferior, and the other way around. It doesn't limit anything; it's just a picture of all types of information flow. So if you measure the system, say in empirical work, and you measure the information flows across many states, the model will reflect that in certain states the automation is better and in others the human is better. But the measure I compute now is the average contribution of the human. I will come back to this in the conclusions: indeed, it averages the contribution of the human over the different states, so you are right on that point. It is the average contribution of the human over many states, over time, but you can also work out specific states in which the human has a larger or smaller contribution or responsibility.

Okay, thank you. Then the next question is from Roel. Roel, do you want to ask your question?

Yeah, it's related, actually. Before you went into the sensitivities, I was reflecting on the metric itself, and I was considering two decisions, A and B. What would it mean if the human responsibility for B is twice the value of that for A? And relatedly (you were already alluding to this when you said it is an average contribution): don't you lose grip on actual causality when you opt for an information-theoretic measure? How can you tease out the contribution of the human to the causal responsibility chain of a decision or a situation? That's my question.

It's hard to answer briefly, but I'll try. When I presented the responsibility types, I said that responsibility can be either prospective or retrospective. The measure I'm presenting now is a prospective measure: it looks at the future and says, I'm designing a system; what will be the average contribution of a human in this type of system? At the end of my presentation I'll mention that we have a variation of the model that looks at retrospective cases. When you analyze a single past event, you cannot look at the average contribution of the human over many states, because you have one particular chain of information that flowed from the environment to the automation and then to the human. For that we have another measure, also based on information theory, but it is not an average measure: it looks at a specific case and measures the causal responsibility for a single past event. The measure I'm presenting now is an average contribution, from a prospective point of view, which is very important for system designers,
we selected four\nexperimental points\neach one reflects a different kind of\nhuman or automation detection\nsensitivity\nand they span a range of predicted human\ncontribution from\n12 to 47 to 69 and 287\nso our first ex experiment or types of\nexperiment\nwe changed the characteristic the\ndetection abilities or detection\nsensitivity of the human and the\nautomation\nand we wanted to compare to the\ntheoretical prediction\nin this experiment both have the same\nincentive i mean when we programmed the\nintelligent system and we told the human\nthe cost\nand the benefit\nmetrics they were the same so they have\nthe same incentives\nwe did also other experiments in which\nwe gave the human and the system\ndifferent\ndifferent incentive but we also compared\nthem to the model um\nprediction but i will not present this i\nwill i will focus on the first one\nso in the deep prime or detection\nsensitivity experiment we had 60\nparticipants we devised a kind of a\nsimple binary classification system and\nwe could control\nthe system accuracy how good was its\ndetection sensitivity\nand the human had to look at some\npresentation and we could control\nhow accurate is the presentation that\nthe human uh\nso so we control both we could control\nboth\nthe detection human detection\nsensitivity and automation detection\nsensitivity\nand according to the graph we took the\npoint of\n1 and 2.3 the participants\nwere half of the\nparticipants had poor abilities with\ndetection sensitivity of about one\nor quite good detection sensitivity with\ndetection sensitivity\nabout 2.3 and each one of them worked\nwith two different\nalert system one alert system was\npoor and the other one was good so this\nwas the within subject\ncondition so we have the poor human here\nand he works or they worked with rather\npoor or good\nautomation system and these are the\npredicted responsibility values\nand each participant performed 100 twice\nwith each system\nand we counter balance the order of the\nsystem and we also handed questionnaire\nduring the test different part of the\ntest\nso we start with measured responsibility\nand i'll explain this work because it\nwill repeat itself\nmany times below the x-axis\nis the human detection sensitivity and\nyou have the less\naccurate human on the left and the\naccurate human\non the humans or participants on the\nright\nand the red is the accurate alert system\nand the blue is the less archaeological\nsystem\nand as predicted by the rescue model you\ncould see that\num the former the formation of this um\nthe formation of this web is according\nto theory because\nyou can see that both type of\nparticipants\nrelied more on themselves\nwith the less accurate system and they\nassumed higher responsibility\nhigher causal responsibility with the\nless accurate system\nand you can see that the accurate\naccurate participant\nassumed always assumed regardless the\ntype of the systems assumed higher\nresponsibility than the less\naccurate participant because they relied\nmore on their own capabilities\nso the general pattern is according to\ntheory but that we had the specific\nnumerical prediction so it's interesting\nto compare\nthe average value to the theoretical\nprediction\nso you see that in most cases like let's\nsay\nwith the less accurate system the the\naverage\noutcomes was not very far away from the\ntheoretical prediction\nuh despite only one case when the less\naccurate participant worked with the\naccurate system\nthey assumed much higher responsibility\nthan\nuh 
optimal and actually we analyzed\nthe reason for this is uh known for many\nother behavioral\nstudies we analyze it in different\nways but what really happened that this\ntype of participant\noverestimated their own abilities and\nthey intervened\nmore than optimal and thus they assumed\nhigher\nimpact or higher causal responsibility\non the outcomes\nbut the outcomes were better off if they\ndidn't intervene at\nall so people with\npoor abilities that worked with very\ngood system\nwanted you know to to do something they\ndidn't want to feel neglected so\nthey did something but and they\noverestimated their own abilities and\nthis\nphenomenon is known from other\nbehavioral studies\nif we look at the subjective perception\nof responsibility and here\nit's another scale because it's a\nsubjective scale in which they rated\ntheir\nresponsibility from i contributed very\nmuch\ni had no unique contribution you see the\nsame pattern\nand if you compare the results by\nnormalizing the scale\nyou see that subjective and measured\nresponsibility really matched\neach other so it means that people\nreally\nfeel or have a good sense of how much\nthey really contributed to their process\num i will not enter this\nbut we also analyze the relation between\nall the three components together and\nwe discovered that the subjective\nfeeling of\nof responsibility encountered for about\n20\nof the way people actually behaved with\ndifferent systems so\nthe perception of of yourself of how\nmuch you\num contribute to a to a process\ninfluences to some degree it's not it's\nnot the main influence but it influences\nfor some degree\nthe way you act with the with the system\nanother interesting finding was you see\nthe solid lines it's\nthe graph that i showed you before\nand we also asked their subjective\nassessment if another person\nwas working with such a system what do\nyou think his responsibility\nwould be and it was a significant\ndifference that people\nwaited their own responsibility lower\nthan of another person in the same\nsituation\nand this resembles um something known in\npsychology as the fundamental\nattribution error\nand actually we we did another\nspecific laboratory examination for this\naspect\nanother dedicated experiment in which we\nhad the people that\nworked with systems they were the actors\nand people that sat next to them and\njust\nobserved how they worked and we asked\nabout their\nsubjective perception of the level of uh\nresponsibility and there were\nsignificant diff\ndifference between people who actually\nworked with system\nand people that just were observers\nso the findings from the empirical\nanalysis it's that the rescue model is\nalso a descriptive model and\nactually we can use it to to predict\nhow human uh will take responsibility\nand what will be\ntheir average responsibility in\ndifferent uh\nsystems and it also can predict the\nsubjective perception of how much the\npeople will feel that they\nare that they contribute meaningfully to\na system\nnevertheless there are two systematic\nhuman biases\none is the tendency to assume\nor to assume excessive responsibility to\nintervene more than necessary\nwhen the human capability are inferior\nto the ones of the automation\nand the other bias is a tendency to\nconsider other responsibility to be\nhigher than\none's own for yourself you always has a\ngood excuse\nwhy your performance was poor and it's\nnot just\nyour problem it's the always the\nautomation faults\nso the implications from the 
empirical\nobservations\nis that operators may feel correctly\nthat they don't have significant\nimpact on the outcomes when they work\nwith the advanced intelligence system\nand they may interfere more than\nnecessary or conversely\nin some other cases that are known in\nthe literature\nthey'll be less motivated to take action\nat all\nboth responses will hamper exploiting\nthe full potential of the system\nand could lead for undesired\nconsequences\nso again the demand to\nalways keep human in the loop can have\nadverse implication on the overall\nfunctioning of the system\nthe human attitude toward the system and\ntheir interaction with the system and\nthe role\nwith it and the perception of outside\nobservers that watch the the human user\nand assess their\nresponsibility for the outcomes\nso this is the the end of the empirical\nresults that i want to\num to show just few\nconcluding rewards so\nthe measures that we developed all three\nmeasures the theoretical\nmeasured and subjective responsibility\ncan serve to expose anomalies and\nprovide a new method for quantifying\nhuman comparative causal responsibility\nfor the outcomes in advanced system\nthey can help the design of the system\nby tying different design options to the\npredicted effect on user behavior\nand their perception of responsibility\nand also it it can aid in the formation\nand analysis of deployment policies and\nlegal decision\nregulation and i'll give you just a\nshort example for example in autonomous\nweapon systems there is a\nindeed a requirement by the us and\nbritish authorities to always keep a\nhuman\nin the loop so there will be no\num um lethal force um\ndone inflicted without human involvement\nhuman in the loop that will authorize\nthe action\nbut let's say that the human just sits\nin a dark room he doesn't see anything\nfrom the outside world and all he sees a\nlight bulb\nthat lights whenever the automation says\nthere is a tire\nat a target that he needs to attack in\nthe human press\nbutton that authorizes the attack so\nin terms of of regulation we put a human\nin the loop but\nit's very clear that his involvement is\num\nis meaningless it's not meaningful it's\nin the loop\nbut it doesn't have really causal\nresponsibility or\nunique causal contribution for the\noutcomes\nso it's this this way of analyzing\nthings may expose a\num policies or faults in policies and\nlegal regulations that\nare not looked upon\nso regarding future work and some of\nthis work we already\naccomplished but it's on some kind of\npublication\nprocess first as people asked um\nwe need different measure for with\nprospective responsibility because the\nentropy measure it's kind of an average\nmeasure that averages over all the\ndifferent\nstates and if i want to measure the\nhuman responsibility in a single past\nevent i need to\nto change the demands i'm not looking at\nthe\nthe the average contribution or about\nspecific contribution in a single chain\nof events\nso we devised an information theory\nmeasure for that\nand very importantly and we also work on\nthat\nis the issue of temporal effects because\nin all the examples that i showed you\nthe graph and the\nthe example that i showed i didn't\nconsider the time\nbut it's very obvious at the time you\nhave to\nto take the the decision may impact\nor should impact your um tendency to\nrely on the system if you have a very\nvery\nuh short time to decide then the system\nis very good\nyou will tend to decide to to rely more\non the system and your\ncause of 
So we need to measure not just the information flows but the information transmission rates and the human channel capacity constraints, which are also known from the literature, and add time to the model as another dimension that is currently missing.

So these were my 10 minutes.

Nir, that was great. It's exactly two o'clock, so I suppose many people need to leave; if you have any questions, leave them in the chat before you go, and then we'll have a chance to ask them. Or just raise your hand if you have time to ask a question yourself.

Can I ask a question? Hi everybody. I was on audio only for the largest part of the time, so I could not see the slides, but if I'm correct, about 20 minutes into the presentation the entropy measure was introduced, and then a ratio between entropies was mentioned. Did I get that correctly, or did I miss something?

Okay. Being an information theorist, I was wondering where the ratio of entropies comes from, because it is very unusual to take ratios of entropies; we usually subtract or add, those kinds of combinations. I have actually never seen a ratio, other than a ratio to a maximum, as in efficiency and that kind of thing. So I was wondering: does it even make sense to think about ratios of entropies?

I have a good answer, and this is a good question; let's look at this slide. This is the measure. Below, in the denominator, we have the entropy of the final, implemented action, and the numerator is the uncertainty left about the outcomes given that I know all the automation variables. So I actually measure the relation between the system outcomes and all the automation variables, and how much uncertainty is left: if, once I look at all the automation variables, no uncertainty is left, the numerator is zero and the human contributes nothing. Actually, this form of measure appears in the literature as Theil's uncertainty coefficient, a well-known measure (you can find it in SPSS, for example) of the relation between nominal variables, but it measures something else: the relative reduction in the uncertainty of a variable due to knowledge of another variable X.

If I may: I hear what you say, but in terms of the explanation, this is very much like notions from information theory such as equivocation, which are usually defined as differences between entropies, the difference between a plain and a conditional entropy. I see that you can also take a ratio, but would a difference also have worked?

The problem is different: a difference would not give me a measure confined to the range between zero and one, because the entropies can have very different magnitudes, and the value would change across types of systems without my being able to compare them. Look at Theil's uncertainty coefficient for two variables X and Y: you take the mutual information and divide it by the entropy of X, which gives how much the uncertainty of X is reduced by knowing Y. I look at the complementary value: how much uncertainty about the output remains after I know all the automation variables. Since there are no other sources of information flowing into the system, the remaining uncertainty is related to the human contribution. That is the sense in which we devised a measure that ranges between zero and one, exactly like Theil's uncertainty coefficient, and, unlike Pearson-style coefficients, it is not restricted to linear relations between variables.

Okay, thanks very much for the further explanation.
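A small sketch of the relation just discussed (illustrative only; the toy joint distribution is made up): with X the implemented action and Y the automation variables, Theil's uncertainty coefficient U(X|Y) = I(X;Y) / H(X) and the responsibility ratio H(X|Y) / H(X) are complements that sum to one.

```python
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def theil_u(joint):
    """Theil's uncertainty coefficient U(X|Y) = I(X;Y) / H(X), where `joint`
    is a joint pmf with X on the rows and Y on the columns."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    h_x = entropy_bits(joint.sum(axis=1))
    h_y = entropy_bits(joint.sum(axis=0))
    mutual_info = h_x + h_y - entropy_bits(joint)
    return mutual_info / h_x

# Toy joint pmf of (action X, automation output Y):
joint = [[0.30, 0.05],
         [0.10, 0.55]]
responsibility = 1.0 - theil_u(joint)  # H(X|Y)/H(X) = 1 - U(X|Y)
print(round(responsibility, 2))        # ≈ 0.61 for this toy pmf
```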
much\num uncertainty is remind\nit remains about the output when i know\nall the automation variables so i\nlook at the other part and i extend it\nbecause i don't just look at the\ntwo variables but i look at\n[Music]\nmany variables but tails uncertainty\ncoefficient is used to\nexactly that to measure the association\nbetween for example nominal variables so\nyou cannot\ndo it with the pearson coefficient or\nexperiment coefficient because they are\nnot nominal and i'm not\nmetric so so\nthis is when you use the tails\nuncertainty coefficient and i do i just\ni do something\nsimilar but i i'm not looking at the\nreduction but i'm looking at the\nuncertainty left\nafter i know all the automation\nparameters and since there are no other\nsources for information flows into the\nsystem or the remaining uncertainty\nis related to the human contribution and\nthis is the sense of devising them to\nhave a measure that ranges between zero\nand one\nexactly like a tails uncertainty\ncoefficient and based on experiment\nwhich relate also to\nforced linear relations between\nvariables\nokay thanks very much for the further\nexplanation\nokay let's perhaps\nget back to david's questions\ndo you want to start with the first\nquestion\non contact specificity yes\nthanks so thanks\nfor the presentation um it's good to\nknow that you're sweating uh because of\nthe\nof the the air conditioning situation\nand not our difficult questions\num still i'll let's let's do our best to\nmake you sweat even more\nnow um uh okaying aside i was wondering\nabout uh\nsomething that was also touched upon by\nby an earlier uh\na question from a colleague which is uh\nkind of the\nyeah the context specific uh\nelements to responsibility um\nand and and what actually human\ncapabilities and automation capabilities\nare because they\nthey often depend on on a context\nso uh in terms of perception and\ninformation processing\nuh it often depends on um\nyeah on on on contextual parameters like\nyou know how\nwhat what's the design of the system\nwhat is the human actually doing\nuh on the site what's the design of the\ninterface\nand systems that work perfectly under\ncondition a don't work so well under\ncondition b\num et cetera et cetera um\nso that means how do you actually\ndeal with the fact that these\ncapabilities are not\nstatic but actually yeah in a sense a\ndynamic and context dependent how is\nthat incorporated in\nthis quantification\nwell there are two parts to the answer\none we deal with and one we don't\n[Music]\nlet's look at the diagram\nthe general diagram that portrays the\ninformation flows\nenables to describe a situation where\nfor\na different state let's say the informa\nfirst of all when you plot this diagram\nfor a specific system so you can\nit it reflects the the function\nallocation and the system\ndesign you can do the the errors that\nwill describe the\nexact types of human interaction like we\ndid in the\ndialect uh example in which the it was a\ndecision support system it's very often\nthat\nonly do information analysis but the\naction selection\nand implementation is remains for\nfor the human so you can for each system\nyou can do a different drawing\nand you can when you do the distribution\nyou can really change\nthem according to the state\nthat the system or the combined human\nmachine system encounters so it's\npossible that\nif at one er let's say the human is\nat one um environmental state\nthe human will be the main contributor\nbecause he\nhas a better ability to 
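to make the ratio-of-entropies measure above concrete, here is a minimal python sketch; the measure, as i understood it from the discussion, is H(action | automation variables) / H(action), estimated from an empirical joint distribution. the toy data and variable names are invented for illustration, not taken from the talk.

from collections import Counter
from math import log2

def entropy(samples):
    # shannon entropy of a discrete empirical distribution
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def conditional_entropy(actions, automation_states):
    # H(action | automation) = sum over z of p(z) * H(action | Z = z)
    n = len(actions)
    by_state = {}
    for a, z in zip(actions, automation_states):
        by_state.setdefault(z, []).append(a)
    return sum(len(acts) / n * entropy(acts) for acts in by_state.values())

def responsibility(actions, automation_states):
    # ranges between 0 (automation fully determines the action)
    # and 1 (the action is independent of the automation variables)
    h = entropy(actions)
    return conditional_entropy(actions, automation_states) / h if h > 0 else 0.0

# toy data: automation recommendation z, final implemented action a
z = ["alert", "alert", "quiet", "quiet", "alert", "quiet"]
a = ["attack", "hold", "hold", "hold", "attack", "hold"]
print(responsibility(a, z))  # a value in [0, 1], here about 0.5

a value near zero means the automation variables almost fully determine the selected action; a value near one attributes most of the remaining uncertainty, and hence the unique causal contribution, to the human.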
acquire information, analyze it, and act,\nwhile in another environmental state the system is better than the human,\nand it will act instead; so there is variability in that sense.\nand of course you need to figure out the probability\nof encountering the different states in the environment.\nmaybe you are asking about something else, which we don't answer,\nbecause entropy and information theory do not answer it:\nentropy and information theory were developed\nunder the assumption that the process is stationary and ergodic over time.\nactually, shannon, when he developed information theory,\nstarted with independent, identically distributed variables,\nand only later on was it expanded to variables\nthat are stationary and ergodic.\nso if we have cases in which the probabilities change over time,\nthe human causal responsibility will change over time,\nand there is no meaning to looking at the average,\nbecause it keeps changing over time.\nso to that i have two answers.\nfirst, many systems you can analyze at their constant state,\nwhen they are rather stable.\nfor example, in the empirical experiment that we did\nthere is a learning period in which users are learning the system,\nso at that point the contribution changes,\nbecause they are just in the learning mode.\nso we let them experience the system sufficiently,\nenough to see that they know the system,\nand then we did the calculation only when we saw\nthat the probabilities were quite stable, so we were in the constant state.\nbut you are right that if the probabilities and the distributions\nchange over time, or the process is not ergodic,\nthen you cannot use information theory; it's outside its core.\nbut nevertheless, even with those changes,\nif you draw the diagram of information flows,\nit can sometimes give you some insight\non how good the human is and what the human contribution is;\nyou look at things in another way, which is more generic,\nand less, let's say, a legal or philosophical way.\nokay, thank you. i had more questions,\nbut i don't want to eat up all the time,\nso if there are other people wanting to butt in, please do,\nand otherwise you can give me the word again.\nyes, let's see here if there are any other questions from someone else.\nwhile we are waiting for the questions,\ni think evgeny has his hand raised.\nuh, yes please, i don't see it for some reason.\nyes, i can see it now, evgeny.\nhey, thanks, thanks very much for the very interesting talk.\nso one of my questions is actually the same one\nas david's last question in the chat, so i'll let him ask that later on.\nbut i was wondering,\nso i'm not familiar in depth with all the information theory\nand the mathematical side of things here,\nbut i was curious: how does this kind of model relate, if at all,\nto causal modeling, in the way that judea pearl\nand his colleagues conceive of causal relationships\nand try to measure them?\nis this a related way of doing this, or are those two not related at all?\nso i think this is a very interesting question.\ni don't think we tried to tie this directly to causal modeling,\nlike pearl's models; we see this more as
a as a description of a system\ndescription i think tying these two\ntogether is\nmight be a very interesting issue but we\nhaven't looked at it closely yet\nthanks\nokay thank you evgeny so uh what i would\npropose to do is to\nuh wrap up with the david's questions\nand if we don't receive any questions\nwhile david is asking for those\nquestions\nwe will have a short break and then\nwe also have another meeting scheduled\nwith uh with a few of us so i would just\nask you to\nrejoin the meeting using that link but\nfor now let's just\ngo through those questions all right\nthen\num so a second question is um\nuh it's actually about the design of the\nof the of the interface and so you you\ndescribe uh\nuh in this scheme it's it's it's either\nthe human or the automated\nuh system that takes a certain\nactions or information processing or\nanalysis\nand and so i had two questions\none is we know for example from warning\nsystems with binary thresholds\num that it's very difficult to tune\nthose thresholds\nunless you know everything upfront which\nunfortunately usually we don't\num and and there and especially when\nthere's more variability\nin how to solve situations then\nsome operators may think the thresholds\nare too early\nset too early or set too late um or\ndepending on the context they're set to\nearly or too late and so\nyou get these annoyance and cry wolf\neffects that\nthat actually you know if they wouldn't\noccur\nit would be just a clean information\nprocessing problem\nbut because humans adapt and learn and\nget demotivated\nand and\nand such this is actually\nimpacts uh in a way they they deal with\nthe systems\nuh and i would say and to some extent\nthen the designer has some impact on uh\non on actually the responsibility that\num that the operator\nyou know feels or or can actually take\nbecause they\nmight get disengaged from the from the\nsystem so i wondered\nuh if you can follow this line of\nreasoning uh\num how you how you see this\nit's a great question because the model\nis intended to tackle exactly that\nbecause\nwhat i say that in the prospective model\nthat i present is\noriented to designing system yes now\nif my if my system has poor capabilities\nand it\nhas a lot of false alarms\nand or it has reflects\nother preferences than mine for example\nit\nallows false alarms or something like\nthat\neventually when the operator work with\nthe system\nconsecutively he will learn that the the\nsystem is unreliable and it cannot trust\nthe system and then it will tend to\ninterfere\nintervene more the theoretical model\nwhich assume a perfect rationality and\nperfect knowledge\nassumes that the human knows the\ncharacteristics\nof the system for example the rate\nof of false alarms\nand misdetections and of course his own\ncapabilities\nand then he select the the the right\namount\nto interview for example the optimal\namount so this is the theoretical model\nwhat happened in the empirical empirical\nexperiments is that people really\nexperience with the system and if they\nif they had for example good capability\nand they worked with the\nwith the inferior system\nafter you know 10 or 20 trials they say\nokay\nthis system is just because they were\npenalized for each time they were\nwrong and got points when they arrived\nso after a few trials they say okay i\ncannot rely on the system and unless\nyou know something very obvious and they\ntry to um\nto and they relied only on themselves so\nand in in most cases despite uh\nexcept the one that i marked um\nthe 
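the theoretical model described above assumes an operator who knows the system's false-alarm and miss rates and their own accuracy, and picks the optimal amount of intervention. a hedged toy sketch of that comparison, with made-up numbers rather than values from the talk:

def expected_payoff(p_event, p_correct_given_event, p_correct_given_no_event,
                    reward=1.0, penalty=-1.0):
    # expected score of a decision policy with the given accuracies,
    # mirroring the experiment's points-for-right, penalty-for-wrong setup
    p_correct = (p_event * p_correct_given_event
                 + (1 - p_event) * p_correct_given_no_event)
    return p_correct * reward + (1 - p_correct) * penalty

p_event = 0.3
alert = expected_payoff(p_event, 0.9, 0.7)   # system: 10% misses, 30% false alarms
human = expected_payoff(p_event, 0.8, 0.8)   # operator's own accuracy
policy = "rely on the system" if alert > human else "decide yourself"
print(alert, human, policy)  # 0.52 0.6 decide yourself

with these invented numbers the rational operator ignores the inferior system, which is the behavior the empirical subjects converged to after a few penalized trials.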
empirical responsibility values were\nvery close to\nthe theoretical prediction which means\nthat after people experienced a\nsystem with different characteristics\nthey behaved in in the expected way so\nif i was a designer of the system and i\nused this model i could predict\nhow much the human will really\ncontribute to\nto my system but more importantly\nif i want to put some weights\non the preferences that i\nput in the system algorithm which\ndetermine the false alarm rate and\ni could also play with this and see what\nis the the\nthe effect on the on the outcomes and on\nthe user because i allowed to play\nwith with all the the different\nparameters so for a system design you\ncan\nplug in your your assumption\nwithout even having the real system and\ntry to\nand you will have a notion of how much\nthe human user will really contribute to\nyour system\nor how much the final outcomes really\ndepends to what agree on the unique\ncontribution of the human\nand\nis that it that's it um\nthat you can use during uh during system\ndesign to understand the\nthe how meaningful is the world how\nreally meaningful is the world of this\nis the stuff that you are working on\nbut how really meaningful is the the\nrole of the human in\nin your system how is how meaningful is\nor\nunique is the actual human contribution\nto the outcome and\nprocesses by the way we didn't do it\nbecause we\nwe were interested in causal\nresponsibility\nso we measured only the unique human\ncontribution to the outcomes\nbut you can measure also the unique\nhuman contribution to each of the states\nand to say okay the human contributes\nmore to the information acquisition and\nthen to information analysis and\nyou can put a number on each state it's\nit's not complicated but\nit just makes things longer but we were\ninterested in the final outcome and when\ni look at the simple as\nthe system as a whole how unique is the\nhuman contribution for the final\nuh selected the action and uh\na very quick follow-up question it does\nseem to imply that\nuh the the there's some time to\nto iterate and learn for humans as well\nas for the system\nuh so that let's say the the\nconsequences are not too critical\nuh is that correct or uh\ni didn't understand what you say so\nlet's suppose we are dealing with uh\nwith uh for example uh aviation or\num or driving or\nor you know weapon systems then for you\nto find out that things are\nuh wrong the thresholds are set in the\nwrong way\nuh sometimes that can be just uh you\nknow just annoyance but sometimes it can\nmean the difference between life and\ndeath right\nyeah of course what what you are saying\nthat in um\nfor the human user in order to to as i\nsaid i'm looking at the system at the\nstationary\nstage in which in in which the human and\nknows the the abilities and disabilities\nand the performance of the system and\nhis own and then he can act in a certain\nway because he knows the system\nbut you're saying in some system the the\nthe action is very\nrare and i will not encounter it often\nand i will not\ngain experience\nthis is right the way to do it is by\nletting you experience with the\nsimulated environment\nin enough repetition until you know your\nabilities and the abilities of the\nof the system so in case of real um\nactivation of the system you will\nalready have a kind of\nnotion about your abilities yes but\nwithout a learning curve or something\nthat you know and just you know manuals\nof course the the things will not uh\nlook that way and of course entropy 
look\nat the average uncertainty\nit's kind of everything that's entity so\ni need kind\nit will be or when i'm looking at their\nprospective responsibility it will deal\nwith your responsibility over many\nrepetitions\nand not to a singular past event but\nfor example in our retrospective model\nwhich i didn't present\nwe measure the distance between\nwe measure if the how reasonable was the\nhuman decision given\nall the information that he had in the\ntime\nof the decision and this really um\ndeals with a single chain of events and\neven unique events\nand you can measure you can put a number\nof\non how reasonable was the human action\nselection but it's another model it uses\nanother\nanother measures of information theory\nwhich has not entropy but\ndistance between the distribution\ncould be clear leveler distance and\nstuff like that\nnice thank you um then my final question\nmight be a quick or my uh but the answer\nmight be long and that's actually uh\nwhat you actually miss out when you\nquantify stuff according to you\nso to put it differently what would a\nqualitative assessment\nof responsibility add so not\nnecessarily liked scales but but really\na qualitative assessment\nwhat would that add if anything\nso i'll answer i think i think the the\nqualitative\nqualitative assessment is extremely\nimportant here the\num and i think the one has to take\nthe quantitative model we're presenting\nhere with a certain\num level of of sort of\ncaution uh because obviously it\nsimplifies things it sort of it says\nsomething about\nuh what the human involvement in some\nidealized\nschematic depiction of a system is\nand and it's it can serve as a sanity\ncheck\nfor a situation like the one that you\nknow described earlier where you have\nsomeone who is\nin the loop formally but in fact relies\nentirely on outside sources of\ninformation that are\nsupplied to this person now a\nqualitative model will\nwill be very important because it takes\ninto account\naspects that we don't really consider\nhere like for instance\nthe dynamics of the of the situation the\nthe fact that you may\nlook for additional sources of\ninformation that you may\nmake this decision within some social\norganizational context\nthat will have will affect the\ninformation will affect the\nthe incentives will affect the the\nmethods that are acceptable or\nunacceptable for making these decisions\nand so on\nuh these are all we we can all model\nthese things\nand if we really abstract them into into\na payoff matrix or into just\nprobabilities but this will lose a lot\nof the\nactual complexity that exists if one\ndescribes a situation\nso i believe this this kind of time kind\nof quantitative modeling\nis not in any form a replacement of a\nqualitative analysis but rather\nit sort of adds a\nrespect a another perspective as sort of\na quantifiable perspective for such an\nanalysis\nthank you very much\nokay that was a great conclusion i think\nto the\noverall session so i will stop the\nrecording now\nand thank you everyone for attending", "date_published": "2021-03-12T20:41:52Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "b44ecc676a11fe2f5c021faae4950b74", "title": "Agora: M. 
Brandão: Fairness and explainability in robot motion", "url": "https://www.youtube.com/watch?v=o7vHVWsdys4", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "this community\num shall i start\ndo you want to start the recording\nyes i started recording so confident all\nright\nall right so i'm going to talk about uh\nfairness and explainability in robot\nmotion\nuh which are probably concepts that um\none wouldn't uh wouldn't think would\nbe relevant to robot motion at the first\ninstance maybe but i hope i hope\nto convince you uh otherwise\nso for those of you who are not familiar\nwith the\npath and motion planning these are\ntechnical problems that are very\nimportant in\nrobotics but not only robotics even in\nyour\nin your phone apps when you uh ask to go\nto a certain place on your\ngps using google maps or something you\nhave to\nsolve a path planning problem so uh to\nfind\na sequence of uh of uh\nof steps over over network of roads for\nexample to get to the cone\nand uh the motion planning problem is\nthe continuous version of that so\nwhere you want to find for example full\nbody postures of a robot\nso exactly where each of its motors\nshould be and its position in the world\nshould be\nand this is both for this kind of robots\nwith an arm\nand the wheels or humanoid robots\nthat need to do steps and avoid\nobstacles and balance and etc\nlegged robots but also autonomous\nvehicles when you want to\nfind a trajectory maybe for the next 10\nseconds that\navoids obstacles so it's safe but also\nperhaps\nis smooth and comfortable for the\npassengers\nand even in warehouse automation\nso computing the paths or motion for\nfor many of these robots to reach their\ndestinations without colliding with each\nother\nand making sure they satisfy certain\nconstraints so this is the\npath motion planning um problem\num i'll start with the uh explainability\nof robot motion so to try and uh unpack\nthis\nconcept of explainability i'm going to\nstart uh\nbasing myself on some literature review\nand some user studies\nderive at some working concepts of what\ncould explanations for robot motion\nuh look like but then use the actually\nuh implement concrete uh explanation\ngeneration algorithms or explainable\nplanners\nand get uh feedback from users to\niterate the concepts and\nuh figure out what are the important\ndesign considerations when thinking\nabout\nexplanations for about motion\nand so why would you actually want to\nexplain\nthe way the robot is moving or the the\noutput of a motion planner\nso here are two examples the your robot\nmight move in a way that is that you did\nnot expect so for example here this\nrobot is taking this\nlong path through the right around\naround the table to reach first\nshelf and the operator or the user might\nthink\nwhy why are you doing such a long path\nyou might be spending more battery than\nyou should i would expect\nthat that you you you or the robot would\ntake this other path that is shorter\num through the other side of the table\nand the\nanother reason is that sometimes or\nactually very often\nplanning algorithms just cannot find a\nsolution so they will tell you\nuh yeah sorry uh i could not find a\nsolution\nafter i tried for so long or i was\nunable to solve this planning problem\nso i cannot get the robot to where you\nwant it to be\nand then the the user or the even the\ndeveloper is is left with no clue as to\nwhat they should do to to fix this issue\nwhy why is it failing\nso i i've actually recently done a user\nstudy 
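the path-planning problem described above, finding a sequence of steps over a network of roads, can be illustrated with a minimal shortest-path sketch; the road graph and costs here are invented for illustration:

import heapq

def dijkstra(graph, start, goal):
    # graph: node -> list of (neighbor, cost); returns (total cost, path)
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []  # the planner "could not find a solution"

roads = {
    "station": [("market", 2.0), ("bridge", 5.0)],
    "market": [("bridge", 1.0), ("park", 4.0)],
    "bridge": [("park", 1.0)],
    "park": [],
}
print(dijkstra(roads, "station", "park"))  # (4.0, ['station', 'market', 'bridge', 'park'])

motion planning is the continuous version of this same search, over robot postures instead of road nodes.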
with\nboth developers of motion planning\nalgorithms and expert\nusers who use it on a daily basis of\nmotion planning algorithms\nand most of them kind of the recurring\nuh theme and opinion\nwas that explanations could be useful\nspecifically for debugging\nof these algorithms and also for\nmechanical design and for\niterating the mechanical designs of\nrobots so these\nexperts uh these were around 20 or\n25 experts they were\ncommonly saying things like explanations\ncould\naccelerate the debugging process or they\ncould even help\nthe developers understand the inner\nworkings of motion planners because even\nthough they develop the algorithms\nthemselves often it's not intuitive uh\nthe outputs you get why why is it that\nthe output is the\nway it is so the the sometimes the\nexpectations of developers do not match\nthe actual\noutputs of the algorithms and\nexplainability could even even suggest\nalgorithm changes so thanks to changing\nthe algorithm such as to\nto better match the expectations or they\ncould even suggest uh\nchanges to the robot model so that\nbasically the the\nmechanical design of the robot or\nmorphology or actuation changes\nso of course this is the point of view\nof experts and\nprobably the different people will be\ninterested in making different questions\nand they will be interested in different\nkinds of answers\nwhen they ask questions about robots so\na developer as i said could ask\nquestions about a certain plan that the\nrobot has\nin order to to make changes to the\nalgorithm improve the algorithm\nor find the bug but maybe a la user\ncould ask questions about what the robot\nis doing\nimagine or imagine a warehouse worker\ncould ask\nquestions about what the robot is doing\nin the warehouse in order to better\nunderstand how the planet works to\nbetter\nbe able to predict how the the robot\nwill move and so\nand better collaborate with the robot\nmaybe but the mechanical engineer might\nask questions about the robot\ni might ask questions um that that are\nbased on the robotic design so for\nexample why can't you why can't you\nreach\nfor this object and uh this maybe a\nmechanical\nengineer would like to hear something\nlike because my arm is not\nlong enough and so they they will know\nthat they should change the mechanical\ndesign of the arm\nand even an and an architect could ask\nquestions in order to be able to\nto change the layout of the warehouse\nfor example so it depends on the user\nof course then uh what kind of\nexplanation so what can explanations\nlook like\nof course this again will depend on who\nthe user is and what they're interested\nin\nbut uh of course there are also\ndifferent ways to answer the same\nuh different possible explanations for\nfor a certain question so i've i've\nuh organized this into problem-centered\nexplanations and algorithms centered\nexplanations in this in this recent\npaper\nso uh if you see this example here on\nthe right where the robot does path a\nbut the user expected plus b b so why is\nit that you took path instead of b\nthe explanation could be based on costs\nso because\nb would take more energy or because of\nconstraints so because\nb would violate a certain constraint b\nwould actually be in collision you don't\nreally see it but it would be in\ncollision\nit could be based on robot design\nbecause the robot cannot fit through\ncorridor y so it's it's\nbased on the on the dimensions of the\nrobot or based on the environment\nbecause\nthe there's an obstacle there's a table\nin a certain 
location\non the other hand there are algorithm\ncentered explanations so you could say\nthat\nthe algorithm found path a not b because\nyou didn't run the algorithm for long\nenough if you had\nrun it for long enough it would have\nactually found b\nor because the algorithm is not optimal\nso of course there can be different\nexplanations and also similar\nexplanations to these\nif you're wanting to explain failure\nso uh so now i have a kind of a set of\nworking\nexamples of possible explanations and i\num\nto better tease out what what are the\nactual design considerations\nand and the how what how you should\nactually\nshow these these explanations i uh went\nahead and\ndid some prototype implementations of\nthese\nof this explanation so one was the\nconstraint-based explanation\nso imagine your algorithm fails so you\ncannot find a path\nfor the robot to reach for this water\ntab here for example on the left\nand it could and i made an algorithm\nthat automatically generates\nexplanations for this kind of problems\nso it will say\ni couldn't find the solution because i\ncannot simultaneously\ntouch the water tap or grasp the water\ntap and\navoid collision for example though i\nwould be able to do this if the water\ntap was\n15 centimeters closer for example\nand another kind of explanation\nalgorithm that i've\ndone is based on initialization so it's\nan algorithm centered explanation\nso you could say i failed to find a\nsolution because\nthe initialization which is a kind of\nstep of the\nuh planning algorithm utilization was\nin the basin of attraction of an\ninfeasible um solution\nthough you would you would have been\nable to find a solution if you had used\na different strategy\nso you could also say because you used\nuh this initialization strategy instead\nof the other one\nso how i generate this kind of\nexplanation is basically for this\ninitialization\ni just try a different kind of\ninitialization scheme initially\nmethod and if that works i can blame the\ninitialization scheme basically states\nthat\nit's the the that's the reason for\nconstraint based explanations i\nbasically\nsolve many relaxed problems so problems\nthat\ndo not need to satisfy all the\nconstraints of the original problem so\nthere are subproblems\nand i try to find that problem that\nsatisfy as many\nconstraints as possible so that is\nas close as possible to the original one\nso i then\nuh showed these kind of explanations to\num the same expert users as\nas before and uh so\nwell one good thing is that most of them\nwere satisfied with explanations so in\nthis kind of\nquestion like how how how much are you\nsatisfied with this kind of explanation\none to seven\nso they are in general they were\nsatisfied with this with these kind of\nexplanations\nsome explanations more than others but\noverall more satisfied with than\nthe kind of output they currently get\nfrom motion planners on the other hand\ni got some interesting insights such as\nusers were not sure\nas uh whether one should see\nwhen there are multiple possible\nexplanations whether the user should see\nshould see only one or a set or all\npossible explanations\nso it could be the case that you could\nmake a problem feasible but both\nby saying that two constraints don't\nconflict\nor by by saying that you could change\nthe initialization\nuh method right so which which kind of\nthings should we show\nor should we show all the options\nanother insight was that\nsometimes the language was hard to\nunderstand potentially for the users 
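a simplified sketch of the constraint-relaxation idea described above: re-run the planner on relaxed subproblems and report the largest satisfiable subset of constraints. `plan_with` is a hypothetical stand-in for whatever planner is being explained, and the constraint names are invented:

from itertools import combinations

def explain_failure(constraints, plan_with):
    # try subproblems from least relaxed to most relaxed
    for k in range(len(constraints) - 1, 0, -1):
        for subset in combinations(constraints, k):
            if plan_with(subset) is not None:
                dropped = set(constraints) - set(subset)
                return ("no plan satisfies all constraints, but one exists "
                        f"if we drop: {sorted(dropped)}")
    return "no relaxation of the constraints is feasible"

# toy planner: grasping and collision avoidance conflict in this scene
def toy_planner(active):
    return None if {"grasp_tap", "avoid_collision"} <= set(active) else "plan"

print(explain_failure(["grasp_tap", "avoid_collision", "stay_upright"], toy_planner))
# -> no plan satisfies all constraints, but one exists if we drop: ['avoid_collision']

the relaxed subproblem that satisfies the most constraints tells you which requirements conflict, which is the content of the constraint-based explanation.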
so\nwe we might have to tune\nuh the the kind of language uh\nthat you use depending on the user other\nother users complained that it was not\nclear to them\nwhy there was a collision so even though\nthe explanation was because there is a\ncollision it was not clear there was\nactually a collision so\nvisually visualization aids might be\nimportant to make things more\ninterpretable and intuitive but perhaps\nthe most\nimportant um insight from this user\nstudy was that\nmany people said that the explanations\nshould go deeper\nand for example not just say i couldn't\nfind a solution because\nuh i can't uh reach uh\nfor this and avoid collision at the same\ntime but actually say\nwhy it is that you cannot reach for this\nand\nset the cyclist at the same time so for\nexample you could say in this case you\nshould go deeper and say well\nyou can the the water tap is too further\naway from the beginning of the furniture\nthat's why you cannot\nsolve this problem so what should the\nexplanations look like then the\nvisualization\ntheme was very common around these\nexperts inputs so they they suggested we\nshould use highlighting or\nvisualizations of feasibility regions\netc to improve\nbut also abstraction so sometimes it's\nnot enough just to say\nuh constraint one conflicts with\nconstraint two you might have to use\nintuitive concepts\nfor example say because the environment\nis too cluttered\nor because the object is not close\nenough but these are very\nabstract things that um\nwe will have to find uh methods uh\nthat this is i think an open problem\nthat we still have to find\nmethods that are able to come up with\nthese abstractions automatically\nand finally this deep explanations theme\nof uh so not just say\nuh because two tasks conflict but this\nbut the same widely conflict\nso i then um did another\nuh user study with a with a different\nkind of method\nso i uh this was not just a very simple\nmethod as the ones\nas the ones before but a more involved\none using advanced\nalgorithms and more uh perhaps more\nvisualization\nfocused so here you want to ask why\nuh so i'm and i was also interested in\nseeing\nso if explanations are actually\neffective\nat helping users understand the problem\nso i wanted to evaluate this depending\non the the kinds of\ndesign choices you make so for this\nspecific problem\nyou see here there's a map with blue and\nred areas\nthe blue areas are easy to move on and\nthe red arrows are hard to move on so\nyou have to go slower\nand then the shortest path or the\nfastest path is\nthe path a which goes around this\nlong distance and path b is what user\nexpected\nand so the user asks why did you take\npath a instead of path b which is\nshorter and takes these\nterrors here this is a staircase\nand so i i've proposed some new\nalgorithms to generate explanations\nfor this kind of questions based on\ninverse shortest paths\nalgorithms for those of you who are\nfamiliar\num so basically what the what these\nalgorithms do is\nto answer the question why is path a\noptimal not b\nthey find the minimal changes to make to\nthe map\nso that the path b so the user's\nexpectation would actually be true\nwhich means uh basically here you would\nsay well\nf a is the shortest not b because for b\nto be shortest\nthe shortest you would have to change\nthe\nthe traversability or the the ease of\nreversibility of these two areas\nhere so you would have to make these\nstaircases here be\neasier to traverse if you wanted this to\nbe the shortest path\nor you could say 
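a toy numeric sketch of the inverse-shortest-path explanations just described: given the robot's path a and the user's expected path b, find the smallest change to one terrain cost that would make b the cheaper path. the terrain costs and path compositions below are invented for illustration:

terrain_cost = {"blue": 1.0, "red": 5.0}   # easy vs hard-to-traverse areas
path_a = {"blue": 12, "red": 0}            # cells of each terrain along path a
path_b = {"blue": 4, "red": 2}             # path b crosses the staircase (red)

def cost(path):
    return sum(n * terrain_cost[t] for t, n in path.items())

print(cost(path_a), cost(path_b))  # 12.0 vs 14.0: the planner picks a

# for b to become optimal: 4 * cost(blue) + 2 * c < 12  =>  c < 4
needed = (cost(path_a) - path_b["blue"] * terrain_cost["blue"]) / path_b["red"]
print(f"b becomes optimal if the red cost drops below {needed}")  # 4.0

the explanation is then the minimal map change itself: "a is chosen instead of b because the staircase would have to be this much easier to traverse for b to be shortest".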
because um\na is the shortest not b because walking\non blue\nis is too cheap so if it costed a bit\nmore to walk on blue then b would\nactually be the shortest paths so you\ncan automatically generate\nthese kind of explanations and even more\nrecently i've developed an algorithm\nthat can\neven generate all possible\noptimal explanations for a question so\nsay\nuh why is path a optimal not b and you\ncould say because\nfor b to be the shortest path the trains\nshould change in one of the\nfollowing ways and it gives the user all\npossible ways in which so for example if\nyou put\nthese two areas here or these two or\nthese two\nif they become hard to traverse then\nindeed\nthe shortest path would be the one you\nwanted but since it's not that way\nthe shortest path is is this one instead\nright so i passed these explanations on\ntwo users so\nanother set of users so uh\nsingle um explanations and\nmultiple explanations and i've tried to\nsee um\nhow how well uh so after after seeing\nmany of these explanations do\nusers become better at understanding and\npredicting the behavior of this path\nplanner\nand there was some interesting insights\nthat we got so\nfor example more user satisfaction does\nnot mean\nthat the explanations are more effective\nwe actually\ngot the the opposite so people were more\nsatisfied\nwith one kind of explanations the the\none with multiple\noptions but they actually um\ngot worse at understanding the problem\nif they if they\nhad seen more uh possibilities\nso it is also interesting that showing\nmultiple possible explanations could\nactually be counterproductive\nand confuse the the users and decrease\num problem understanding so to kind of\nsum up their important design\nconsiderations to think about when\nyou're\nimplementing explainable planners the\ntype of explanation depends on user\nneeds\nthere's a conflict between user user\nsatisfaction and understanding\nusers might prefer simpler or more\ncomplex\nexplanations even though they're not as\neffective\nthere's so there's a conflict between\ncomplexity and multiplicity\nand uh there's you should also pay\nattention to explanation depth and\nvisualization abstractions\nbut also kind of a general methodology\nthat i used here\nwhich was to explore a concept\nan ethical concern of transparency or\nexplainability\nand jump into implementation\nof prototypes and use these to\nanticipate design\nissues and to actually refine the\nconcepts and\niterate and this could be potentially a\ngood um\ni believe this could be potentially good\nway a good tool for responsible\ninnovation\nas well and which i will apply next to\nthe\nproblem of fairness in robot motion but\nif if there are any questions at this\npoint\nabout explainability i'm happy to\ntake questions yeah thank you martin\ni see one question from uh young\nmaybe if you can mute yourself and ask\nthe question it'll be great\nhey yes thanks thanks so um\nthank you martin for sharing your\nresearch so my question is\nis really about these uh these\nconstraints that you're describing right\nso it looks that you mainly consider the\nuh those\nconstraints that can be formally\nspecified right\nand then you inject that somehow into\nyour organization algorithm\nso um my question is other uh\nconstraints that are you know more\nambiguous that need to be\nuh and that can only be uh expressed\nthrough for example natural language or\nsomething like that\nmaybe so certainly it's very it's very\ndifficult\nto in many problems it's difficult to\nelicit the 
requirements and it's\ndifficult to understand what are what do\nwe actually want\nthe the robot to do but if we want the\nrobot to\nto do it using a we actually need to\nformulate it to put it down\nin into uh into an equation so that's\nthe\nthe only way it's going to it's going to\nwork but then so usually the way it uh\nworks is that actually you start by\nby eliciting the requirements of the the\nproblem and then you\nimplement it in the motion planner using\nspecific equations\nand constraints and then you deploy it\nand very often you realize oh wait but\nthe robot is still doing this which\nis not really what i expected so we need\nanother constraint and this\noften happens in deployment so it takes\na long time for you to get at the place\nwhere the robot is doing exactly what\nyou wanted\nand this this is a continuous process as\nwell\nyeah exactly that part i found very\ninteresting right it seems that what we\ndo now is mostly you know uh\nlet's see how it works and then we come\nback and maybe there are more\nconstraints that we can derive right\nbut then i'm wondering if there should\nbe some you know principled method that\nwould allow us to get as much\nas possible in the beginning and then of\ncourse it's the incremental process\nright we need always have to debug and\nimprove\nright but that's the part that sounds\nquite interesting\nto share some thoughts yeah\nwhat seems to be you usable by those\nmachines it needs to be formally as\nspecified\nbut then it would be really interesting\nto see what are the things that can\nreally be formally specified and what\nare the things that is hot\nmaybe there has to be some research\ngoing on there\nyeah of course there are also methods\nbased on learning right so\nif if the user is not it cannot easily\nwrite down the requirements the users\ncan just\nshow examples and then uh by by seeing\nmany examples an algorithm\ncould learn to do this but this is\nimpractical impractical for\nlarge systems like imagine a warehouse\nwith with 1 000 robots the user cannot\npossibly have\ngood intuition about um\nhow each of the robots should be moving\nso you have to start by actually\nformalizing\nsome objectives so that's also just some\nthought\nyeah yeah that's a good point thanks for\nsharing\nyeah and actually as a follow-up to this\nquestion for me\nuh so did you also consider like of\ncourse you need to define all these\nconstraints\nbeforehand but there's obviously going\nto be cases that you haven't foreseen\nwhen you started implementing or\nformalizing everything\nso have you also considered uh scenarios\nwhere francis robot has no clue what you\nknow has not the real explanation and\nwe'll just tell you the human\nhey i have no idea what happened but\nthis is what i what all the events that\nled up to this point\nand uh did you consider that in your uh\nyour algorithms too or in your design\nand\nuh no not not at this point but\nso the for example it's it's usual for\nfor\ndevelopers of motion motion planning\nalgorithms to for example look at the\nsearch tree as a\nway to kind of get an intuition as to\nwhat what happened so the search tree is\na kind of\nhistory of everything that the algorithm\ntried and\nbut i didn't for example compare uh\nthe explanatory power of a search of\nlooking at the search tree versus\nuh these kind of explanations that i did\nhere so that would be a good baseline as\nwell\nthanks for the suggestion\nthanks interesting um any other\nquestions\nor else\nwe can uh move on to the second 
part\nyeah\ngood sure okay all right so um\ni'll continue then with the uh fairness\nuh\num the idea of fairness in\nmotion planning so i'm going to use the\nsimilar kind of methodology that i used\nfor drawing out this uh the this concept\nof\nexplainability so i'm going to try and\nstart by\num actually simulating the\nthe deployment of a robot and\nand looking at the results uh and then\ndistributions of impact\nand trying to understand what kind of\nissues awareness could come up\nand then directly formalize concepts\nuh formalize the concepts of fairness\nthat are in the\nphilosophy literature and distributive\njustice literature\nand again simulate the deployment of of\nof a fair planner in those senses\nand use those simulations to again draw\nout the design considerations and\npotential issues\nand uh i'm going to suggest this could\nbe again a good way to involve\nstakeholders in the in the process\nso to for all of this i'm going to use a\nwalkthrough\nuse case to to make things more\nintuitive\nso this use case is of a rescue robot\nthat needs to find as many people as\npossible so it uses path planning or\nmotion planning that\nso it starts at a certain point in the\nin the middle of the\nof a city which is oxford in the uk uh\nthe fire station it goes\nuh for uh some some path\naround the city and then returns to\nrecharge the batteries\nand then of course we can we can use\nperhaps some\nuh census date about uh how the\npopulation is\ndistributed so population density or\neven age distribution and\nethnicity distribution to think about\npotential impact\nbut also about how to get as many people\nfind as many people as possible with\nour robot so i'm going to use these\nsimulations i'm going to simulate a\nrobot that is\ntrying to find as many people as\npossible and then look at the kind of\nwhich people the robot found and try and\nuh\nmake claims about fairness in the past\nof robots\nfrom this so my first claim is that\nrobot paths can be uh\nbiased maybe uh this is straightforward\nto some of you but the idea\nhere is that if your robot tries to find\nas many people as possible it should\nstay\nin in the close vicinity of the of the\nlaunching station\nboth because it can then uh return very\nquickly when the battery is over\nand because that's where the highest\npopulation density is\nbut at the same time if you do this the\nthe people you will find are mainly\npeople in\nbetween in their 20s so the\nundergraduate population of oxford\nand they're also mainly white chinese\nand male\nand these are the population you find if\nyou stick to the center of\noxford because of historical reasons and\nbecause of the way that\nthe city works at the same time you can\nalso make the other\nuh like the next step uh of\nclaims which is the robot pass can be\ndiscriminatory\nso if you think that internet\ndiscrimination is when a decision or\npolicy\neven though it doesn't target specific\ngroup it has worst\neffects for those group this happens in\nthis case here right because\nthe people that are much younger so the\nolder population and younger population\nare actually um\nhave less probability of being found by\nthe robot\nas well as the female group and minority\nethnicity groups\nthen you could go one step further and\nsay that it's possible for robot paths\nto be unfair\nso if you think that the goals the\ndisaster response are to find as many\npeople\nas possible but also to attend those\nthat are most at risk first\nthis strategy that tried to find as many\npeople as possible 
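a minimal sketch of the simulation idea above: given census counts per grid cell and the cells a candidate path visits, estimate the distribution of people the robot would find. the numbers are invented, not actual oxford census data:

from collections import Counter

# per-cell population counts by group (hypothetical)
census = {
    (0, 0): {"20s": 80, "65+": 5},    # city centre: student-heavy
    (0, 1): {"20s": 60, "65+": 10},
    (5, 5): {"20s": 10, "65+": 40},   # outer neighbourhood
}

def found_distribution(path_cells):
    # aggregate the groups encountered along the path and normalize
    totals = Counter()
    for cell in path_cells:
        totals.update(census.get(cell, {}))
    n = sum(totals.values())
    return {g: c / n for g, c in totals.items()}

centre_path = [(0, 0), (0, 1)]
print(found_distribution(centre_path))
# {'20s': 0.90..., '65+': 0.09...}: staying central mostly finds people in their 20s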
so this strategy, which tried to find as many people as possible,\nactually did not meet the second requirement,\nbecause it found the student population in their 20s,\nwhich could be considered the least at risk.\nso you could say that a robot path can be unfair\naccording to disaster-response ethics.\nthis other claim is perhaps also intuitive to many of you,\nbut the idea is that robots must face dilemmas already faced by humans:\ndisaster response teams already have to think\nabout the impact and the fairness of a mission,\nso robots deployed for disaster response\nwill of course also have to account for this.\nbut how do we operationalize a similar fairness sensitivity?\nthis is the step where i dive into implementation directly,\nto try to tease out issues and design considerations.\nso i did a survey of the distributive justice\nand political philosophy literature,\nand tried to bring these concepts over\nand formalize them for the context of path planning.\nhere are three examples that are in my recent paper.\ndemographic parity is one conception of fairness\nwhere you say that the distribution of people your robot finds on its path\nshould be the same as the distribution of people of the whole city.\nthis basically means that when your robot goes and looks\nfor as many people as possible,\nit should still try to pick a representative sample\nof the population of the whole city.\nbut there are also conceptions of fairness where, for example,\nthe distribution of people found by the robot should be such\nthat all groups have at least a certain probability of being found;\nthis is a sufficientarian approach.\nand there could also be a rawlsian approach:\nthe path should maximize the probability of the least likely group.\nthere are other conceptions as well, of course,\nbut i'm going to stick to one and try to tease out the possible issues\nthat come up when you choose a definition of fairness.\nso i implemented a motion planner that simultaneously\noptimized the number of people found and minimized the unfairness,\nso it tried to get the distribution of people found by the robot\nas close as possible to the distribution of people of the whole city.\nthis curve here has many points;\neach point is a different path that the robot could take,\nand this point here, more to the left, says 0.06;\nthis is the distance to the ideal distribution,\nso if this were zero, that would mean that you could find a path\nthat had exactly the ideal probability distribution.\nif you're interested in the details of the method, feel free to read the paper;\nit's a pareto estimation method.\nbut independent of that, you can already take\nsome interesting conclusions from here.\nfirst, it might not even be possible to satisfy a fairness definition exactly,\nwhich is the case in this example: this curve never reaches zero.\nand to find more people, you might have to compromise on fairness:\nto find more people, you might have to find people\nthat are less representative of your city on this short path.\nanother observation that i made in this work\nwas that a fairness definition can be counterproductive.\nhere is a comparison of two fairness definitions.\none is demographic parity, so your planner is trying, as much as possible,\nto make the path find male and female groups in the same proportion\nas in the whole city,\nand the graph on the right is the result of running a planner\nthat tries to promote affirmative action,\nso an exact 50/50 ratio of men and women found on each of its paths.\nbut you see that when you run the algorithm on the right,\nyou find both fewer men and fewer women\nthan if you applied the first definition of fairness.\nso you have to be careful: you cannot just think,\noh, i should pick this fairness definition and not that one;\nyou might have to actually simulate it to see what happens,\nbecause it might even be counterproductive,\nworse in all respects compared to another metric.\na final, more technical observation is that\ncurrent methods for motion planning offer few guarantees here:\nthere are methods that will solve your planning problem optimally,\nbut you'll have to make an approximation of the problem,\nand there are methods that will try to solve exactly\nwhat you asked them to solve,\nbut they cannot guarantee that they will do it optimally.\nso you have to choose between optimizing the real fairness metric suboptimally,\nor optimally optimizing something else that is not what you care about.\nintuitively, for those of you who are familiar\nwith the a-star algorithm for path planning:\nyou have to use a cumulative cost,\nwhich in this case would mean that the cost of being in each cell along a path\nis equal to how close the distribution of people in that area\nis to the distribution of people over the whole city.\nbut this would mean that your planner would try to avoid\nminority neighborhoods, because they are not representative of the whole city.\nso even though you're trying to optimally solve a fair planning problem,\nyou're actually doing something very unfair,\nwhich is to avoid minority neighborhoods\nbecause they are not representative of the whole city,\nall because of the approximation that you need to use\nin order to apply an optimal planner.\nall right, so you could complain that this toy example is not realistic enough,\nbecause it only does one path and returns, and that's it.\nso i've recently started exploring the coverage problem,\nwhich is when your robot does a short path that covers a small area,\nreturns, then covers another area, and so on,\nuntil the whole city is covered.\nand i've actually observed through experiments that\nif you're solving a coverage problem by, again,\nfirst going to the neighborhoods that have the most people,\nso as to find more people faster,\nyou'll get this kind of graph for the number of people you find:\nfirst you find a lot of people very quickly,\nand then the rate slows down until you find everyone.\nbut that also means that, again, you have inequality,\nand you have an inequality peak in the beginning.\nin this particular example of oxford, you would find\n50 percent of the younger population by 200 minutes,\nbut by 200 minutes you would only have found 15 percent of the older population,\nwhich is a big gap and a problem,\nalso because the older population is potentially the one\nyou care most about when you're doing these missions.
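a sketch of the three fairness conceptions formalized above, evaluated on the distribution of people a candidate path would find; the groups and probabilities are made up, and the exact formalizations are the ones in the paper:

def parity_distance(found, city):
    # demographic parity: distance between the found distribution and the
    # city-wide distribution (total variation distance as one choice)
    return 0.5 * sum(abs(found[g] - city[g]) for g in city)

def sufficientarian_ok(p_found_by_group, threshold=0.2):
    # every group must have at least some minimum probability of being found
    return all(p >= threshold for p in p_found_by_group.values())

def rawlsian_score(p_found_by_group):
    # maximize the probability of the worst-off (least likely) group
    return min(p_found_by_group.values())

city = {"20s": 0.30, "65+": 0.20, "other": 0.50}
found = {"20s": 0.60, "65+": 0.05, "other": 0.35}
p_found = {"20s": 0.50, "65+": 0.15, "other": 0.30}

print(parity_distance(found, city))  # 0.30: far from a representative sample
print(sufficientarian_ok(p_found))   # False: the 65+ group falls below 0.2
print(rawlsian_score(p_found))       # 0.15: a rawlsian planner maximizes this

plugging these scores into a multi-objective planner, one per candidate path, is what produces the pareto curve between people found and unfairness discussed above.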
also take some\nconclusions from here\num it's important to know that fairness\ncan be not only about who gets served by\na robot but the order or the speed in\nwhich groups are served\nwe've also seen how population greedy\nalgorithms those algorithms that try to\nfind\nas many people as possible or serve as\nmany people as possible\nwill be biased according to structural\nbiases in the city or or in your\ndomain so this this kind of bias could\nreinforce\ncurrent inequalities and criticisms for\nexample disaster response there\nare like from this recent disaster\nresponse missions in the us and thailand\nthere have been criticisms that the\ndisaster response\nefforts uh did not help enough the\nmarginalized\ngroups because even though they were\nfrom the start less likely to survive\nand if we were to deploy these kind of\nalgorithms\nfor finding as many people as possible\nwe will again be\num reinforcing this kind of criticism\nbecause we would not be serving\nmarginalized groups as as quickly as as\nwe\nshould so to sum up uh for\nin order to build the fair robot motion\nplanners\nuh what we have to think about so so\nfirst i made the claim that robot motion\ncan be biased and unfair so\nthis is both true for goal directive and\ncoverage paths\nand i showed how motion can inherit\nspatial distribution biases\nof people so people they're natural\nnaturally uh\nthey're they're segregations\ndue to that they're um gentrified\nneighborhoods in their minority\nneighborhoods so this\nthen passes by passes to the\nthe motion planner then of course i\nomitted from the\ndiscussion here that there are also\nfairness issues related to census data\nitself and its representations and\ngathering methods but i\ni can skip that bit from this\npresentation\nregarding design considerations for fair\nplanners so i've\nshown how there's something to be\npay attention to is the fairness and\nefficiency tradeoff\nand i've shown how some fairness\nspecifications might actually be\ncounterproductive\nand i've shown how there's an issue with\nthe optimality of an algorithm and the\nuse of\napproximations and to\nto wrap up in a similar notes as to the\nexplainability section so uh how do we\ndesign for\nfairness i think we need to iterate\ndesign and\nanticipation and why because the set of\nuh personal characteristics we care\nabout or\nuh what we mean by fairness the furnace\nspecification\nand the the impact of a deployment all\nof this is not obvious from the outset\nso you might have to\niterate so deploy or simulate the\ndeployment of the system look at the\nresults\nand then iterate and then uh\nanother another important aspect of\nthis methodology that i used is that it\nis\ncurrently hard to engage with\nstakeholders in the early stages of\ndesign so\nso someone wants to deploy a robot and\nand they will say yes of course we want\nto make sure this\nthis deployment is fair but it's not\nclear what what\nhow fairness even relates to robot\nmotion for example right so\nuh these discussions uh are also\ndifficult to ground\nunless we have something uh poppable\nlike like something like a simulation so\ni think\nit's um probably as a\nmethodology what i did here could be an\ninteresting tool for\nresponsible innovation where we\nimplement prototypes and simulate\ndeployments to anticipate\nissues with stakeholders and to ground\ndiscussions with stakeholders\nto better understand the concepts so to\niterate concepts and design\nconsiderations um in the early stages of\ndesign\nso thanks a lot for your 
for your time\nand let me know if you have any other\nquestions hey thanks my team for this\nreally interesting\ntalk i see that every guinea has his\nhand raised\nyeah thanks and thanks martin for the\npresentation\nso uh about the the fairness uh topic so\nso i understand uh that taking this\nperspective\num if we assume that like a robot is\nintroduced that\nit allows us to uncover what kind of\nunfair dynamics\nmight happen but thinking more broadly\nabout the ultimate purpose here\nso shouldn't we be taking a more\nsystemic\ndesign perspective here of the fairness\nissue so\nwhat i mean by that is thinking more\nbroadly about the infrastructure\nwe are designing rather than focusing\njust on the robots\ndecisions so for example bid build more\nstations\nin different areas of the city\nincreasing\nbattery capacity increasing the number\nof charging stations throughout the city\nso\nuh thinking more broadly about what's\nthe socio-technical infrastructure that\nneeds to be in place to\nsatisfy both of the objectives that you\nwere saying so also get to the people\nwho are in need as quickly as possible\nand make sure that you do it fairly\nyes yes i i totally agree i don't i\ndon't think what i've\ntalked here is is kind of going against\nthat thought it would be a\nsmall piece of the of the puzzle of\ncourse\nyou you need to think about\n[Music]\nhow many how many robots you can use how\nmany two to buy as many\nas possible where do you launch them\nfrom and do we have the\npeople to operate them are is this is it\neven\nis it even good to use robots in the\nfirst place\nuh do do disaster response teams want to\nthose are also important questions of\nthe for the whole uh problem so\nmy claim here was more uh so first\nthat there is a fairness component to uh\nmotion and then uh of course\nwe will we should think of the whole\nsystem\nso where to put launching stations how\nmany robots etc\nbut then even then so uh how should the\nrobots move depending on how they move\nit it will still have a different impact\nso it will still have\na different uh people will still\nstill be served or found or or uh helped\nat different paces depending on the\nalgorithm you implement so\nuh regardless of that we we will still\nneed this kind of um\nthis kind of uh algorithms and and and\nand thought processes going into the\ndesign of\nin of uh motion planners but i totally\nagree yes so that that\nthere's a lot of different problems to\nto think about and also\nmore social problems of how\nhow are these then going to be used and\neven how was this\ndata obtained and and do these the\ncategories\nin the census even make sense so these\nare all\nquestions that need to be thought of\nbut of course it cannot be just be by\nmyself it would have to be a full\ninterdisciplinary uh conversation with\nall the stakeholders so yeah\nin general i really agree with what you\nsay thanks for that\nthanks for clarifying that and if i can\njust a quick follow-up you\nyou mentioned that you you saw that it\nwas difficult to engage with\nstakeholders\nearly on the topic of fairness to to to\nunderstand the nuance better could you\nplease elaborate on that a bit\nlike what what were some of the\nchallenges uh\nin that um uh\nno so i didn't so to clarify i didn't\nactually do any user study with this\nwith this fairness work uh yet what i\nsay is more is more\na general uh claim do i have no data for\nthat that it's\nthat uh once uh people who do not have\nso much\nknowledge about uh planner or or or\nsomething\nor or 
even or even the domain\nso some people want to buy a robot to\ndeploy in a hospital for example\nand um and they are aware that it's\nimportant right it's in\nit's in the the guidelines and the the\nprinciples of ethics\nethical the ethical development of aio\nyes of course we should\nmake sure this is fair uh but it's\nactually difficult\nto even come up with with things\nyou should have in mind when you okay we\nwill put a robot\nin the hospital or or in the what does\nit even mean\nfor the motion to be to be fair it's not\nclear\nuh what how to even start the\nconversation that's my\nissue you can not start talking about uh\ncompromises between enforcing fairness\nor not\nor start talking about do you you cannot\njust use the menu\nof uh yes do we do we want\nuh egalitarianism sufficientarianism do\nwe want prioritarianism it's not just uh\nit's just not it's not easy like that to\nto to guide the conversation you\nprobably need examples you need\nvisualizations you need simulations\nto tease out details of the definition\nthat's that's my\nmy claim not something that i\nencountered personally\nokay yeah no thanks for clarifying yeah\nyeah now because\nand of course the the if you actually do\nthe empirical work\nthen of course if you start talking to\npeople people don't think\nnecessarily like in our everyday\nconversations we don't think in these\ncategories and we don't necessarily\nthink in categories that align with\nquantitative fairness metrics that have\nbeen proposed so far in computer science\nliterature so so that's no but that's\nthat's kind of yeah coming back to also\ncoming back to the first question i ask\nis basically\nthe the world is like the social reality\nis so nuanced that\ni think more broadly speaking like we\nshould be thinking about these\nsocio-technical infrastructures where we\nfigure out what's the appropriate\nreal goals for the humans to play to\nensure for example fairness but also\nexpandability what's appropriate roles\nfor\nmachines to play and the interaction\nbetween the two\nyeah i agree thanks thank you thanks\nluciano you were first i saw\nyeah okay so uh thank you very much for\nthe talk that was\nreally great talk uh and i\ni really like the way you make very\nclear that explainability and fairness\nthey are relevant\nso many choose to specific concrete\ncontext\nand that's very important for to also to\nraise awareness that is like we are not\njust developing\nrobots we're not them just ai there's\nthings that are going to use in the\nworld have the impact\nthere was very nice things and my\nquestion\ngoes on the aspects on the second part\non the fairness\nabout the the metrics and the\ndefinitions so it's basically following\nup a little bit on what you just\ncomment here for a vienna's question\nit's\nbecause yeah indeed people have like it\nit's not an easy\nthing to discuss about like what\ndifferent concept of fairness and also\nin broader sense if you think about\nethical ai\nit should be utilitarian should it be\nfollow this canton approach\nwithout this or that um a way\nto to try to see to visualize as i said\nlike through\nvisualization representations and\nthrough\ndemonstrations also as preferences so\nbecause usually when people try to say\nokay this is\ni would prefer to go in this way because\nor that way\nand then you try to engage into this\ndeliberative process you're saying okay\nwhy do you prefer this way then that way\nmight be something some some interesting\nthings there so my question is\num going a little bit much 
okay — thank you very much for the talk, that was a really great talk.\ni really like the way you make very clear that explainability and fairness are relative to specific, concrete contexts. that's very important for raising awareness: we are not just developing robots, not just AI — these are things that are going to be used in the world and have an impact.\nmy question is about the second part, on fairness — the metrics and the definitions — basically following up a little on what you just commented for vienna's question.\nindeed, it's not an easy thing to discuss the different concepts of fairness, and in a broader sense, if you think about ethical AI: should it be utilitarian, should it follow a kantian approach, and so on.\none way to approach it, as you said, is through visualizations and representations, and also through demonstrations as preferences: usually when people say 'i would prefer to go this way rather than that way', you can engage them in a deliberative process — okay, why do you prefer this way over that way? — and there might be something interesting there.\nso my question goes a little more to the technical side: do you envision some kind of learning methods — for example inverse reinforcement learning — as approaches to try to learn what 'fair' means for a specific group of people that might be impacted by the solution? what do you think about this?\nme personally, i'm not a big fan of learning-based approaches, because you're never quite sure what they're learning — that's a personal stance. i think it's still perhaps easier to come up with rules that you totally understand, and then iterate on those rules.\nbut related to this learning approach, there's an iterative planning approach that is popular in the planning community: your algorithm suggests a plan, and the user says 'yes, but i would expect this to be fairer, or more intuitive', according to whatever they think should happen. the user provides another suggestion, and the algorithm tries to make that happen and reports the potential issues with it — similar to what i did in the explainability part: either show that there's an increase in cost, or that this fairness metric would go down, or that this minority group would no longer be served.\nand through this back and forth — 'oh, i see; then why not this path?' — through this conversation, almost like a negotiation between planner and person, you could arrive at the path. so even though it's not the traditional learning scheme, i think the product is similar.\nyeah, it does answer it — and just to clarify, i do share your concerns with completely open-world learning approaches, where there's a lack of transparency and explainability. but i also see the value of combining things: you might have one specific definition of fairness, and through a learning process fine-tune it according to the interaction. so yeah, thanks, that was very nice.\nthank you very much. also related to that learning approach: we could also think of methods that try to come up with explicit rules, rather than learning an arbitrary neural network — 'so you don't mean this by fairness? what about this formula for fairness, would that make sense to you?' — perhaps always side by side with examples of how it would work in practice.\nalso because in political philosophy, in the work on distributive fairness, a lot hangs on counterexamples: the rules sound perfect when you just look at the rules, but when you look at specific examples you find counterexamples. i think in design you will probably have to do it the same way.\nyeah, definitely. and understanding which norms are relevant: i can completely imagine that the path planning that works in oxford would not work in delft, and the other way around. understanding the specific, context-dependent norms is also very important.\ntrue. okay, thanks a lot. thanks.
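(A sketch of the plan-negotiation loop just described: the planner proposes, the user counters, and the planner reports what the counter-proposal would cost in plan cost and in a fairness metric. Everything here — the grid, the costs, the toy "fairness" measure — is invented for illustration, not the speaker's actual system.)

```python
def plan_cost(path, cost_map):
    return sum(cost_map.get(cell, 1.0) for cell in path)

def fairness(path, sensitive_cells):
    # toy metric: fraction of a minority-relevant region the path still serves
    return len(set(path) & sensitive_cells) / max(len(sensitive_cells), 1)

def explain_counterproposal(current, proposed, cost_map, sensitive_cells):
    """One round of the negotiation: report the trade-offs of the user's
    suggestion - either cost goes up, or a fairness metric goes down."""
    return {
        "extra_cost": plan_cost(proposed, cost_map) - plan_cost(current, cost_map),
        "fairness_change": fairness(proposed, sensitive_cells)
                           - fairness(current, sensitive_cells),
    }

cost_map = {(1, 0): 5.0}                  # one expensive cell
sensitive = {(0, 1), (0, 2)}              # cells a minority group depends on
planner_path = [(0, 0), (0, 1), (0, 2)]   # planner's suggestion
user_path    = [(0, 0), (1, 0), (1, 1)]   # "yes, but why not this way?"
print(explain_counterproposal(planner_path, user_path, cost_map, sensitive))
# {'extra_cost': 4.0, 'fairness_change': -1.0} -> material for the next round
```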
i see we have five minutes left, and one more question, from [name unclear].\nhey martin, that was a fascinating talk, thank you. let me make it two questions about the first part, about the explanations.\nfirst: some of the approaches you showed seem to carry an extra computational toll on top of your normal motion planning if you need explanations. can you comment on that? i would imagine some of them are more computationally intensive than others, but in general, how hard is this trade-off between extra computational cost and explanation?\nright — in terms of computational cost, it depends a lot on the kind of explanation you want to make. maybe i'll share the screen at the same time.\nfor example, the path-planning explanations — the ones where you find changes to the model that lead to the expected path — can be generated quite quickly. the first algorithm i had was slow, but you can always find ways to make it fast, and it now takes about one second to generate an explanation, so it's quite quick for the reasonably sized maps you would normally use.\nbut for some kinds of motion-planning explanations, it can be different. if the reason you fail is that you didn't wait long enough, you might actually have to run the solver for a long time: to be able to say 'you would need two hours to solve this problem', you might actually have to try to solve it for two hours. so some explanations might take considerably long.\npath planning is easier because it's discrete: you can pre-compute explanations for a lot of different questions right from the beginning and leverage that. for continuous motion planning, if you want to answer quickly, you might have to skip investigating some of the potential explanations — or say 'if you want, i can also check whether this is the source of the issue, but you would have to come back in a while'.\nright, that's an important point. good, thanks.
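(A toy illustration of why the discrete case allows pre-computation: one Dijkstra pass from the start and one from the goal let you answer any "why didn't you go via X?" question in O(1) afterwards. The grid, costs, and query are invented for the example.)

```python
import heapq

def dijkstra(grid, source):
    """Cheapest cost from `source` to every cell; grid maps cell -> step cost."""
    dist, pq = {source: 0.0}, [(0.0, source)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[(r, c)]:
            continue
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt in grid and d + grid[nxt] < dist.get(nxt, float("inf")):
                dist[nxt] = d + grid[nxt]
                heapq.heappush(pq, (dist[nxt], nxt))
    return dist

grid = {(r, c): 1.0 for r in range(20) for c in range(20)}   # uniform-cost map
start, goal = (0, 0), (19, 0)
from_start, from_goal = dijkstra(grid, start), dijkstra(grid, goal)  # done once

def detour_cost(via):
    """Extra cost of the cheapest path forced through `via` - an O(1) answer."""
    return from_start[via] + from_goal[via] - from_start[goal]

print(detour_cost((19, 19)))   # "why not via the far corner?" -> 38.0 extra
```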
then the second question — not a direct extension of the first. i was wondering about your general framework for the kinds of explanations you can expect from robot motion: how generalizable is it to tasks where the robot needs to interact with a human? in the examples you showed, it's just the robot that moves, which is nice — but what if a human is involved, or several humans? that's messy on its own, even without considering explainability, and explanations in terms of human actions could come in very different shapes and forms. how do you see this generalizing to human-robot interaction tasks?\nthanks. i think that even if your system is messy when humans are involved, there will still be rules for how the robot should act — it's not as simple, of course, but still of the form 'if the user is doing this, then i do this'. so in this framework you would still generate explanations by finding changes to the input that would lead to the expected outcome.\nyou could ask: why didn't the robot do this, while there were three people passing by and one person operating the robot? and you could run simulations in which you remove one of the people, or add one more person, or simulate that the person controlling the robot gave a different command.\nof course the space of exploration is huge, but i think the general framework — finding changes to the inputs of the control system, or the planning system, such that the thing that was asked about becomes true — is still valid. it will be more computationally intensive, and potentially more difficult to interpret, so you will again have to think about how to summarize it and figure out which bits are relevant. but i think it's doable, and an interesting research avenue.\nthank you. thank you, martin.\nthanks — thanks for the feedback", "date_published": "2021-03-31T12:38:33Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "8eb3656f1a26aa1cc26d13a51f2202a6", "title": "How humans co-determine the development of intelligent technology (Serge Thill)", "url": "https://www.youtube.com/watch?v=4x3ag-Zqo6c", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "yes, the stage is yours. okay, thank you.\nfirst i have to click away the notice that you started recording — and now i can actually talk. i'm guessing you can all see my screen. my setup is a little weird right now: i can only see my slides on this screen and i'm talking to you on the other one, so if you see me flicking back and forth, that's why.\nthank you for having me — thank you for the invitation, i'm very happy to be here.\nwhat i want to talk about is how humans and intelligent technology interact, on various levels. when i was asked to give this presentation, maria luce mentioned two pieces of my work that she thought might be interesting.\none is dreams4cars, a european-funded project that recently finished, about the control of autonomous vehicles, where we built a somewhat biologically inspired controller for the vehicle.\nthe other is a paper presented at the RO-MAN conference this year, which is much more high-level and philosophical, and is basically about what the field of human-robot interaction can learn from what the field of human-animal interaction has done in the past.\nso i was scratching my head for about five minutes over how you could possibly combine human-animal interaction with autonomous vehicle control, and today's presentation is the result of that deliberation.
the common thread, of course, is that there are humans involved everywhere: autonomous vehicles interact with humans; the kind of controller we built in that project was inspired by human control; and in human-robot interaction and human-animal interaction, the human part is clear from the terms themselves.\nso, the take-home message — because i don't know how much time i will really have, and the clock has disappeared from my screen: you cannot ignore humans when you are building an intelligent system, because the systems you build have to interact with humans one way or the other.\nthis is very often true even for autonomous systems. an autonomous vehicle exists on the road, in a physical world where other vehicles are driven by humans. even if no other vehicles are driven by humans, there are pedestrians, cyclists, and other vulnerable road users; and even if they all go away, there are still people inside the vehicle. you are always interacting with humans, and you cannot forget that.\nto some degree it's even true if the robots are in deep space. i have this slide mainly because janet vertesi gave a nice keynote at a conference a couple of years ago, and she has a very nice book describing how the team at NASA that controls the mars rover works: the team's schedule, the team's interaction with the rover, and the rover's behaviour all depend on how the two co-interact. having the robot on mars certainly does not mean there is no human-robot interaction any more — there is an entire team on earth living on martian time, synchronizing their circadian rhythms to it, just so they can control the robot, and the dynamics that emerge from that fill an entire book.\nif you look at HRI, i think this is, at a minimum, implicit in most of what people do. starting all the way back in 1978, people have been thinking about the different kinds of robots we can build, how the interaction between human and robot then evolves, what the roles of the human and of the robot are in those interactions, and how that shapes the kinds of robots you build.\nso it's not a particularly new idea, but in the more machine-learning-oriented fields these days it sometimes gets lost that the algorithms we build are there to function in a world that is still fundamentally a human world.\n(i see something in the chat: 'the challenge is of course how to facilitate suitable interaction' — indeed, i fully agree.)\nso what i want to talk about is that there are different ways in which humans can matter, and it's worth going through a couple of examples of each, to get a feel for why it doesn't make much sense to develop an algorithm for a real-world application without considering that it's going to be used by humans at some point. i'm going to move this window so i have some control over the time. there are three main points, and they break into a couple of sub-points.
first, just having a human present can change the dynamics of your system — and just having an artificial system present can change the dynamics of the human. second, humans very often define how a system will actually work: either as an inspiration for algorithms, or as success criteria, in the sense that the system you build is only good if the end users are happy with it. and third, users are certainly going to have a lot of opinions: you might have an idea of how to build a robot that supports humans in a specific situation, like in care homes, but the end users might have a very different idea of what a suitable robot would be. none of these are things we can just ignore.\nso, the first point: the presence of humans can change the dynamics of your system. this will be relatively high-level, but it's something that really should be obvious: you can build a fantastic algorithm, and as soon as you expose it to humans, things are going to go in directions you did not anticipate.\napparently it is not always that obvious, because in 2016 microsoft decided it was a good idea to release a chatbot on twitter that would learn from its interactions with people there — and it became a deeply unpleasant, racist, misogynist bot in less than 24 hours, to the degree that i couldn't find any funny examples of its tweets; everything i found was genuinely offensive. to some degree this happened simply because it was learning from real humans, and a particular subgroup on 4chan decided to have some fun with the bot's learning abilities. at the end of the day, the system was doing something none of its developers ever intended — and this can happen over and over again. so that's one simple way in which humans matter.\ni have a bunch of videos that i'm not going to show, because i'm not sure how good the transmission rate on this call is, but on youtube you can find fantastic little spoofs from a team called 'bosstown dynamics' — note: not boston — which parody boston dynamics' robot videos: the robot gets increasingly annoyed with its humans and increasingly starts punishing them for punishing it. this points at the same aspect: if we build robots that learn from experience, from interaction with humans, we need to understand in what direction the robot will develop. it will not necessarily keep doing what the designer intended, so the designer has to keep in mind that the robot will be released to people who will push it around with hockey sticks just to see what it does — and if the robot learns from that, unintended consequences can follow.\nif you then take a step back and go to social cognitive science, you find people who are very interested in what actually constitutes social cognition. it's an interesting subfield, because for a very long time, people in social cognition treated its mechanisms as something that exists fundamentally inside the skull.
the problem in social cognition, traditionally, was simply: what kinds of mechanisms do you, as an individual, need in order to interact with another individual?\nmore recently, in subfields such as enactivism, people started asking: what if the interaction itself is also constitutive of cognition? then you cannot just focus on the mechanisms an agent has individually, inside whatever its enclosure is — its skull, its robot brain — because the fact that it interacts with another agent matters, and the interaction itself is a constitutive element of social cognition.\nthe paper i cite here is from 2010, by hanne de jaegher and colleagues; i like it a lot and recommend it. it argues that you cannot think of a cognitive mechanism as just something that happens inside one individual: the fact that we interact with other individuals is important for understanding human cognition, and the interaction itself is important for understanding the overall dynamics and cognitive mechanisms that emerge.\nif you have never heard of enactivism, i'm going to oversimplify it (and if you have heard of it, you will probably hate me right now). it's basically the idea that agents are dynamical systems interacting with the world, that we understand these agents fundamentally through the mathematical language of dynamical systems, and that what makes an agent a cognitive agent is that it tries to maintain its own internal dynamics in the face of perturbation from the environment.\nso you get this kind of diagram — i hope you can see my pointer; if not, you see two circles, each with an internal circle representing the internal dynamics of the system. this is what the agent tries to maintain: if it goes away, you stop existing as an agent. you also have dynamics that define how you interact with the world, and then perturbations from the environment, including from other agents. it becomes relatively easy to see how all these interactions matter for the internal dynamics, because they all co-determine those dynamics. that's the one-slide, much-oversimplified summary of enactivism.\nsome people are interested in how multiple agents interact. tom froese did an interesting paper a while ago — i think they are still working on this — where they created a little arena in simulation with two very simple robots: the simplest possible robots you can imagine, each driven by just one neuron. that's interesting because it also makes each of them the simplest dynamical system you can imagine — a one-dimensional dynamical system.\nthey let these agents run about and look at what happens in the state space of the one-dimensional dynamical systems controlling the two agents, and they see oscillatory behaviours appearing. that's interesting if you're into dynamical systems, because a one-dimensional autonomous system shouldn't give you any oscillations: it goes to a fixed-point attractor — it either settles at a stable value or decays to zero. the fact that these dynamics appear shows that you could not understand one particular agent just from the dynamics you would expect from one neuron studied in isolation.\nin addition, it's possible to estimate the dimensionality of the system you're observing, and they did that too: the system they were looking at appears to be three-dimensional. that's also interesting: you cannot explain the observed dynamics purely from the fact that you have two interacting one-dimensional agents — that would give a two-dimensional system. so the resulting behaviour of two one-neuron agents can only be understood in at least three dimensions: roughly, what each agent is doing, plus something the interaction itself is doing.\nthe general message is that interaction between agents is important, and when we design algorithms we should probably keep that in mind.
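(A toy numerical version of that point, with invented parameters: a single one-dimensional system can only decay to a fixed point, but two such systems coupled through each other's output can oscillate — behaviour neither shows in isolation.)

```python
def simulate(coupling, steps=2000, dt=0.01):
    """Two leaky 1-D units; each alone just decays, coupling makes the pair 2-D."""
    x, y, trace = 1.0, 0.0, []
    for _ in range(steps):
        dx = -0.1 * x + coupling * y
        dy = -0.1 * y - coupling * x
        x, y = x + dt * dx, y + dt * dy
        trace.append(x)
    return trace

def zero_crossings(trace):
    return sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0)

print(zero_crossings(simulate(coupling=0.0)))   # 0: isolated unit decays monotonically
print(zero_crossings(simulate(coupling=5.0)))   # ~30: oscillation from the coupling
```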
so, your system might also change the way humans behave — we already have the idea of a feedback loop here.\nagain, i'm not going to show this video, but you have probably seen it before; if not, this is one of the reasons i love youtube: people do all sorts of stupid things and put them online for the world to see. in this case, some people decided to try out the pedestrian-detection system of their volvo XC60. the idea of pedestrian detection is that if the car detects a pedestrian, it brakes, so you don't run the pedestrian over. if you watch the video, you'll notice they do run over the poor fellow in the pink shirt, because they simply accelerate at him.\nthat happened for two reasons. first, in this particular volvo, pedestrian detection was an optional extra: it did not exist in the car they had bought, so it wouldn't have done anything. second, the way the system works in this car, if you floor the accelerator, the vehicle assumes you are in full control of whatever you're doing and will not interfere. so even if they had had the pedestrian-detection system, they would still have run the person over, because it would not have engaged in the situation they created.\nthis is important because it shows that how people use the systems you build doesn't depend only on how you designed the system — it depends on how people think the system works.\nand that has been studied by a lot of people: humans always adapt to other agents based on the abilities they think the other agent has, not necessarily the abilities the agent actually has.
this is true for humans interacting with other humans — you interact differently with a five-year-old child than with an eighteen-year-old, and differently again with a sixty-year-old — and it extends to artificial agents; people have studied it in robotics for quite a while. you can, for instance, manipulate how fluent a robot's english is, and humans will adapt their language and how they give commands: give them a robot that speaks perfectly natural english and people converse in natural english; give them a robot that speaks more machine-like and people shift to keyword-command-style interactions.\nwe have also seen this when people interact with intelligent vehicles. we did a set of studies in simulation — this is an old example, but i want to talk about it anyway — where we manipulated how intelligent an adaptive system inside a vehicle appeared to participants.\nparticipants got a navigation task on the map you see at the top. it looks weird because we tried to avoid 90-degree corners: we built this in collaboration with volvo, and one thing they told us at the start is that with more than two or three 90-degree corners in a simulation, people get simulator sickness — and then they're in a corner vomiting rather than doing your experiment. so the corners are as gentle as we could make them.\nall participants have to do is navigate to one of the two goal locations — the squares on the far right — either with just a paper map, or with a glorified GPS that points them in a direction at each junction. the GPS can either just show an arrow (go left, go right) or show an arrow together with a reason why you should go that way. the other thing we manipulated was how much traffic there was on the various routes; participants can see both options at a junction. so you can tell them to go left 'because there is less traffic on that route' and then drive them straight into a traffic jam, which makes it seem like it wasn't a very good idea. by combining how much information the system gives, how it describes the situation, and how well that matches reality, you can manipulate how clever the system appears.\ni won't go into everything we did in this study, but that's the basic idea: participants rate the vehicle as more or less intelligent while doing the interaction task. a funny aside: there are extremely easy navigation strategies for this environment, even though it looks messy — you could simply decide to go left until junction 18 and then go right — so there is nothing particularly cognitively demanding about finding a way to navigate it. but people still follow whatever the system tells them to do.
one of the funky things we saw is that, depending on how intelligent people thought the system was, they spent more or less time just staring straight out of the front windscreen — or, more accurately, the more stupid they thought the system was, the more time they spent staring straight ahead. and they did that to a degree that no longer matches normal driving: when you drive, you don't spend 100% of your time staring out of the windscreen, because you also pay attention to your surroundings — normally about 75-80% of the time looking ahead, the rest checking around you. people who stare ahead 95% of the time are people on their mobile phones, who have stopped engaging with what's actually happening around them. so something you wouldn't necessarily expect to be affected — how much people stare straight ahead — was affected simply by how intelligent the system seemed to them. this was a small study with only 30 participants, so there are plenty of question marks over how strong these results are.\nso we did a similar thing, this time with about 120 or 130 participants, funded by the swedish energy agency, i think — they like to find ways of reducing energy consumption. we looked at whether people's eco-friendly driving behaviour could be influenced in the same way: depending on what kind of suggestions the system gives, in the exact same task, do people drive more or less economically?\nin this case the system either gives no eco-driving recommendations at all; or gives recommendations as simple icons — lift your foot off the gas and start coasting, shift up, shift down; or — the 'more intelligent' version, as we called it — gives the recommendation together with a reason why you should follow it. it's a two-by-three design because the same GPS manipulation was in there as well: it navigates you to a goal either with plain arrows or with a reason for each direction.\nthe long story short is that this matters. on the same task, in the same environment, if you give people eco-driving recommendations, they start following them and use less fuel; if you also give them reasons why, they save even more. energy consumption went down consistently — except for one group, which i'll get to in a second. a basic eco-driving system reduces fuel consumption (in litres per hundred kilometres) a little; an informative one reduces it by a lot.
a similar thing happens when you look at when people shift gear: tell them to shift and they start shifting a little earlier; tell them to shift and why, and they shift earlier still — except for this one group. what's interesting about that group is that it got a GPS that explained why you should go left or right at a crossing, but an eco-driving system that just gave instructions without justifying them. in that particular combination, people seemed to stop listening to the eco-driving system entirely.\nthis surprised us. our best hypothesis is that people don't perceive these as two separate systems: they perceive one holistic system that supports them in the driving task, and if the navigation part does a fantastic job, it makes the unjustified eco-driving recommendations look a little stupid by comparison, so people are less likely to follow them. but that's a hypothesis — we haven't figured out how to test it exactly; that's very much future work. this was published in 2018, so it's still relatively recent.\nthe reason we looked at both fuel consumption and behavioural cues — when people change gear, how much of the time they actually spend coasting — is that fuel consumption is easy to fake: you can manipulate it as much as you want just by messing with the fuel-consumption model in your simulation. these are complicated models with lots of parameters, and if you wanted to drive a particular point home, you easily could. it's much harder to mess with the behavioural cues people produce, so i think those are more informative than fuel consumption alone.\nso: a couple of examples of how the design of a system changes how humans behave when they interact with it — and, closing the circle, how humans behave will in turn affect how the system behaves. you get a feedback loop.\nhere is a little side study — done by other people, also at the university of skövde in sweden — that makes the same point. they had a simulated autonomous system, literally an autonomous car in this case: people got into the simulator with a newspaper, snacks to nibble, their phone — no restrictions — and the car just drove along. while it was driving, weather conditions started deteriorating, and at some point the vehicle asks the human in the driver's seat to take over control.\nin autonomous-vehicle research this is one of the interesting questions: how do you get a human who has been doing literally whatever to be in control of the vehicle within a short time, while also understanding what exactly caused the vehicle to relinquish control?\nthey had two conditions. in the first, people simply had this drive and eventually had to take over.
in the second condition, people additionally had a little display — here on the left — and were told only that it indicated the vehicle's confidence in continuing to drive autonomously; that was all the information they got.\nthey found the two things you would expect: given this extra information, people can take control faster than without it, and they look around and do other things more — overall they monitor what the system does a little more. but the interesting result was about trust: you would think the display makes the system more transparent, and it probably does, yet people trusted the system less than people in the condition without it.\nthe hypothesis for why this happens is that it's not really that the display group trusted the system too little, but that the people in the control condition trusted it too much. the display is a way of ensuring people understand what the system's actual abilities are, which produces appropriate levels of trust — and appropriate behaviours when the confidence drops.\nhow was trust measured here? a simple questionnaire — a five-item likert scale, i think. in general i don't want to talk about trust too much, because trust is a huge pandora's box and a can of worms; if you go there, you never get out. but it is interesting that if you ask people objectively how much they trusted the system, you end up with this difference between the groups, and with behaviour that is more appropriate in one group than in the other. it's all to do with transparency: to what degree people understand what the system is doing, and to what degree that helps them build an understanding of the system.\nthis was quite a while ago — eight years now — a little before people started really talking about explainable artificial intelligence. before XAI turned up, we had a whole subfield of situational awareness and system awareness, thinking about how a system explains what is necessary to a human operator so that they can take appropriate decisions in time. you can see how that feeds into what is now XAI; if i have enough time i'll get back to explainable AI at the end of this talk, but i wanted to drop it in here to say that XAI is not a new thing — it builds on decades of human interaction with machine systems, especially in decision-support tasks.\nokay, that was the first point. the second point: humans define how your system works. this is also my attempt to bridge the two very different topics we were interested in. i'll talk relatively little about human cognition as an explicit inspiration for algorithms, because literally all of cognitive robotics does that — even deep learning was originally inspired by things we find in neuroscience.
so i think we all know plenty of examples of humans as inspiration; humans as part of your success criteria is, i think, more interesting in the context of this talk.\nthis, then, is the story of that paper — let me plug it a little. when you use humans in the design of a system, there are two different roles they can play: they can be a target, simply defining what you want the system to do, and they can be a benchmark, something you evaluate your system against — either by comparing the artificial system's behaviour to human behaviour, or by looking at how the human and the system interact and whether that goes the way you intended. sometimes a human ability is the benchmark; sometimes how the human interacts with the system is the benchmark.\nwhen the interaction is the benchmark, you very often borrow ideas from earlier fields: you might have looked at social cognition and come up with ideas about how humans interact with other humans, and want that interaction replicated; or you might have looked at how humans interact with computers, and want that replicated with your robots. or — and this is the story of this paper, which i won't go into beyond this slide — you can look at how humans interact with animals.\nhuman-animal interaction is an interesting field because animals play various roles in society that mirror the roles we want robots to play: we use animals as companions; we use animals to get things done — think of farms; and we use animals in therapy — animal-assisted therapy for children with autism spectrum disorder, for instance, is a thing, and there is also an entire field building robots for therapy for children with autism spectrum disorder. human-animal interaction has a huge head start on human-robot interaction, because animals have been around for a long time and robots have arrived relatively recently. the fundamental point of the paper is simply that it's a little strange we haven't looked more at human-animal interaction, because there are plenty of interesting lessons to be learned there, and we should explore it more.\nspeaking of robots in robot-assisted therapy: this is part of a project i was involved in earlier, where we did try to build robot control for robot-assisted therapy for children with autism spectrum disorder. the fundamental aim was to make the robot control a little more autonomous — to replace the wizard in the wizard-of-oz paradigm. you want the robot to be able to go through an intervention as defined by a clinical psychologist, but to do so autonomously. so you need to be able to pick up on how the child is behaving — is the child doing what we would expect a child to do?
and you need to know the appropriate behaviours for the robot when the child does a certain thing x.\nthe good news — and the reason to be interested in this kind of field from a practical point of view — is that it is relatively well constrained. one of the major issues with robots is making them operate in the real world, which can be hard; but this is a very constrained world. how the therapy goes is defined by the psychotherapist — you don't get any say in that — and they have very clear ideas of what they expect to see from the child at every point in time, and of what the robot should be doing based on what the child is doing. so it becomes feasible to build a robot that operates somewhat autonomously in this setting — obviously never without a therapist: there is always a therapist in charge of the whole interaction, always able to override whatever the robot wants to do. it's just a way of removing the remote control a little.\nbut when you build this kind of system, you also realize, at the higher level, that the way we define success for this kind of robot isn't that it scores well on the MNIST data set, or on some other data set, or that it can pick up the red object and put it in the blue box. the real criterion for success is the interaction that emerges between the robot and the child. in more general terms, we very often have a situation where what we design is the robot's behaviours, but what we evaluate is how those behaviours facilitate a certain interaction with a human — an interaction we did not design. the interaction between the robot and the human is what really matters.\nthe main problem in those cases is that you can never fully specify all the behaviours of your robot — you get what we had before with the twitter bot and the bosstown dynamics spoof: the robot can start doing crazy things. so you do want to constrain it.\nthe proposal here — and it is a proposal; it's not something we've managed to implement yet, so if anybody wants to work on this, i'm really interested — is to take an enactive approach to the system: understand that we are fundamentally building the dynamics of a system optimized according to some objective function, but define that objective function purely in terms of the quality of the interaction, in a way that modulates the dynamics of the system so that it becomes what we want it to become. this is basically what we wrote about in a paper a couple of years ago.\nthe take-home message of that paper is this characterization, which you'll find throughout cognitive science: component dynamics determine how your system behaves and how it interacts with the external world, but how that interaction with the external world goes co-determines your component dynamics. schematically, the change in your internal dynamics is a function of your current state and of what's happening outside, in the actual world — dx/dt = f(x, e).
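(A hedged sketch of that proposal — an objective defined purely over interaction-level quantities rather than task success. The feature names, weights, and the fake "episode" are all invented; as noted above, the actual system has not been implemented.)

```python
import random

def interaction_quality(episode):
    """Score one session purely on interaction-level measurements (invented)."""
    return (0.5 * episode["child_engagement"]        # estimated or rated, 0..1
            + 0.3 * episode["turn_taking_balance"]   # neither side dominates
            + 0.2 * episode["on_task_fraction"])     # time spent in the exercise

def run_episode(params):
    # stand-in for a real robot session: pretend 0.4 prompts/s is the sweet spot
    base = max(0.0, 1.0 - abs(params["prompt_rate"] - 0.4))
    noise = random.uniform(-0.05, 0.05)
    return {"child_engagement": base + noise,
            "turn_taking_balance": base,
            "on_task_fraction": base}

# naive random search over one behaviour parameter, judged only by interaction quality
best = max(({"prompt_rate": random.random()} for _ in range(50)),
           key=lambda p: interaction_quality(run_episode(p)))
print(best)   # should land near prompt_rate = 0.4
```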
the way we can characterize interactions — this will be relatively quick, because it takes twenty minutes to explain in detail and hopefully two to fly over — is with something like a forward model and something like an inverse model that describe the expectations we have about the interaction. it's not literally a forward model, because it's not about the agent itself; these are models that say: if the robot does something, i want to see this specific response from the child. and the inverse-like models say: if i see a specific behaviour from the child, this is the behaviour i should have produced to elicit it. you can have a bunch of these forward-like and inverse-like models describing the interaction.\nthe nice thing is that i can talk to the psychotherapists, who can give initial inputs into what kinds of behaviours we would want: what should the robot have done to elicit a certain behaviour, and what should a child do when the robot does a certain behaviour? then the system can start learning these things and getting better at them, maybe using reinforcement learning. in the paper we have the full characterization of this system, but we haven't implemented it, because doing this in practice, with reinforcement learning in an actual interaction, becomes really hard, and i haven't figured out a good way of doing it in a simpler simulation yet — so if anybody has ideas, i'm very interested to talk about it.\nsomething that falls out of this, once you start talking about everything in dynamical-systems terms, is that you can start putting things in spiking neural networks. there are a bunch of reasons you might want to control your robot with spiking networks rather than traditional algorithms — the simple one is to exploit the benefits that neuromorphic architectures can give you — and then you need a way of instantiating everything you do in spiking neural networks.\nthe reason i'm interested is that it gives me an entry point into modelling cognitively more interesting behaviours on robot arms, in a way that grounds cognition in the sensorimotor experience of the robot. there are a couple of frameworks out there: the semantic pointer architecture and the neural engineering framework, which come with a software package called nengo, in which you can design spiking neural models of pretty much whatever you want, and also run them on robots.\nso i'm interested in building spiking-neural-network models of basic sensorimotor cognition that operate on an actual robot. in this case it's a simple model that can reach towards different targets in space with a robot arm, adapt to changes in the dynamics of the robot, and adapt to targets changing location in space — but it does all of this in spiking neural networks, and it can become a basis for building a higher-level cognitive model of how you might interact in an environment.
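(A minimal sketch of the forward-like / inverse-like bookkeeping described above, with invented action and response names — not the actual architecture from the paper, which would pair this with learned dynamics.)

```python
from collections import defaultdict

class InteractionModel:
    """Forward-like: expected child response to a robot behaviour.
    Inverse-like: which robot behaviour to choose to elicit a response."""

    def __init__(self, therapist_rules):
        # counts[action][response], seeded from the therapists' initial rules
        self.counts = defaultdict(lambda: defaultdict(float))
        for action, response in therapist_rules:
            self.counts[action][response] += 1.0

    def forward(self, action):
        responses = self.counts[action]
        return max(responses, key=responses.get) if responses else None

    def inverse(self, desired_response):
        scores = {a: r.get(desired_response, 0.0) for a, r in self.counts.items()}
        return max(scores, key=scores.get)

    def observe(self, action, response):
        self.counts[action][response] += 1.0   # crude online update from sessions

m = InteractionModel([("wave", "waves_back"), ("point_at_toy", "looks_at_toy")])
m.observe("wave", "no_response")     # learning from a real session
print(m.forward("wave"))             # expected response (ties broken arbitrarily)
print(m.inverse("looks_at_toy"))     # -> "point_at_toy"
```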
this is relatively brand new — it was published a couple of months ago; the actual proceedings are not even out yet — and we have a second paper in the works that does a slightly more expansive version of the control, in particular using forward models to predict where the robot arm is going to move while it is moving, in order to make control more accurate. there is nothing particularly new here in terms of robot control; what is new is that we do it in spiking neural networks, with all the dynamics that come with them. so we are again in a situation where we don't necessarily know exactly how the robot will always behave — in particular if it starts adapting and learning from a human interactor. i can no longer ignore that i might be doing collaborative tasks with a human, because the human will eventually co-determine what my robot picks up and learns, and trying to figure that out purely in terms of what the spiking neurons are doing might be hard.\nthat's the reference for the paper; and i mentioned forward-like models. how am i doing on time — ten minutes or so left? we started a little late, so hopefully fifteen.\nevery now and then, when you think about how we need to interact with the world, you end up with the idea that you need a forward model: some way of predicting the outcomes of actions — either predicting the outcomes of your own actions before you carry them out, or predicting what other agents are doing while they are doing it, so that you get a nice interactive dynamic going. certainly in HRI this is extremely useful, because it helps the robot choose appropriate behaviours: it can predict the outcome of its own actions before carrying them out, and it can figure out what humans are doing and then decide what to do itself.\nthe thing is, you cannot always expect large amounts of training data for these models, because there are many different ways in which humans interact with robots, and we are not going to collect data sets that cover all of them extensively. a good example is autism spectrum disorder, where every child is really unique, but you only get so many data points on how they behave — and from that you try to build something that works as well as it can, even in cases the system has never seen before. it's a completely different game from figuring out what a traffic sign is by first training on 150 million traffic signs. we don't always have the luxury of large data sets, in human-robot interaction in particular.\nsomething you can do is look at the data sets that do exist. there's a nice one called PInSoRo, presented a couple of years ago by séverin lemaignan and colleagues: a data set of children either playing with each other or playing with a robot on a sandtray — a really big touchscreen placed between the two agents. the data set is a collection of videos of how the children play. it's relatively naturalistic — they were not told to do anything specific; the videos were just collected and annotated afterwards by human annotators, for levels of engagement, whether the children are playing or not engaging in play, and so on.\nyou can obviously run these videos through software like openpose to go from the full scene to the skeleton of the child, and then ask interesting questions: if i'm interested in predicting how engaged children are, can i figure that out not from the full scene, but just from the kinematics of how they behave? this kind of data set is useful for that.
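(A toy sketch of the step from per-frame pose keypoints — e.g. OpenPose output — to kinematics-only features that could feed an engagement classifier. The keypoint layout and feature choices are invented for illustration, not the features used in the work discussed here.)

```python
import numpy as np

def kinematic_features(keypoints):
    """keypoints: array of shape (frames, joints, 2) holding x, y positions."""
    velocity = np.diff(keypoints, axis=0)          # frame-to-frame joint motion
    speed = np.linalg.norm(velocity, axis=-1)      # shape: (frames - 1, joints)
    return np.array([
        speed.mean(),                  # overall activity level
        speed.std(),                   # burstiness of the movement
        speed.max(axis=1).mean(),      # how fast the fastest joint typically is
        keypoints[..., 1].std(),       # vertical spread of the posture over time
    ])

rng = np.random.default_rng(0)
clip = rng.normal(size=(120, 18, 2)).cumsum(axis=0)   # fake 120-frame, 18-joint clip
print(kinematic_features(clip))                       # one 4-dim feature vector
```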
but before asking whether a robot can recognize these things, there is a first question: can humans even do that? so we ran a study on exactly this. if you show these kinds of clips — at the top right an example of children playing with each other on the sandtray, and to the right of it the openpose version — are people able to recognize what's happening when they only see the kinematic information, and do they agree with each other about what's happening? that's what maddy bartlett did in this paper.\nit was an online study with lots of participants — a simple questionnaire. show people a set of videos in various conditions — either the full scene or just the kinematics — then ask a series of likert-scale questions: to what degree do you agree that the child on the left was sad, the child on the right was sad, the child was aggressive, the child was excited, and so on. there are also some catch questions to filter out people who are not actually trying: you ask 'were the people in the video children or adults?', and anyone who just clicks 'strongly disagree' on everything can be filtered out immediately, because they clearly didn't read the instructions. after filtering we still had about 130 people, or even more.\nfirst you can ask how much people agree with each other. two insights fall out of it: they agree with each other more if they see the full scene than if they see just the kinematics — but they don't agree with each other a lot overall. the krippendorff's alpha scores are really not high; the highest we have for a clip is around 0.4, which isn't great — nowhere near the 0.8 or 0.9 you would see in ideal conditions. but at least it is above chance. so humans can pick up on something, even if they don't necessarily agree on what it is they're seeing; agreement is better than chance even in the kinematics-only condition.
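(To make the agreement numbers concrete, here is a minimal Krippendorff's alpha for interval data with no missing values — a simplified version of the statistic, with invented toy ratings; a real analysis would also handle missing data and other measurement levels.)

```python
import numpy as np

def krippendorff_alpha_interval(ratings):
    """ratings: (raters, units) array of interval-scale scores, no missing values."""
    values = ratings.ravel()
    # observed disagreement: squared differences between raters within each unit
    d_o = np.mean([(a - b) ** 2
                   for unit in ratings.T
                   for i, a in enumerate(unit) for b in unit[i + 1:]])
    # expected disagreement: squared differences across all pairs of values
    d_e = np.mean([(a - b) ** 2
                   for i, a in enumerate(values) for b in values[i + 1:]])
    return 1.0 - d_o / d_e

agreeing = np.array([[1, 4, 2, 5],
                     [1, 5, 2, 4]])   # two raters, four clips, similar scores
shuffled = np.array([[1, 4, 2, 5],
                     [5, 1, 4, 2]])   # same scores, no per-clip agreement
print(round(krippendorff_alpha_interval(agreeing), 2))   # 0.91: high agreement
print(round(krippendorff_alpha_interval(shuffled), 2))   # about -0.66: none
```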
so there is at least some hope that we can eventually build a machine that picks up on these things, because humans seem to be picking up on something; hopefully we can exploit that a little.\nyou can run an exploratory factor analysis to see what it really is that people pick up on: basically, the imbalance between the two children, the valence of the interaction, and how engaged the children are in the task. those become really interesting targets for recognition with an algorithm.\nyou can also check how good machine-learning algorithms are at predicting how humans would classify a scene: the classifier is fed the video and asked to predict the human ratings on those scales. again, it works, but really not great. the take-home message is simply that humans show considerable variability in how they interpret even the same scene. that's why i said at the beginning: if you are in user-experience design, this will all be obvious to you — every human is different, and they interact differently with the things you design. but if you take a machine-learning approach and assume all humans are essentially going to be the same, i'm trying to say that they are not.\n(simeon has to leave — simeon, the talk is being recorded, i believe, so it will be available for everybody, including the parts you haven't yet seen.)\nokay, how am i doing on time — three minutes left, so i'll pick up the pace a little.\nthe first point here, then: humans are different, but they do still agree at some level — it's not complete disagreement between all our raters, and even a machine-learning model could pick up on what they tended to agree on, just not very well. at least that scopes what we can expect in performance from the algorithms we build: we certainly cannot expect 100% agreement with some ground truth if humans themselves don't show 100% agreement.\nwith that in mind, we can ask whether we can build algorithms that explicitly recognize this kind of engagement, taking everything i just said into account, on PInSoRo. first, again, you can ask people to rate how much engagement various clips show. the PInSoRo data set comes with annotations, and we had people rate engagement for the three annotation categories the data set came with: goal-oriented play, where the children are clearly doing something purposeful; aimless play, where they seem to be playing but without any clear goal; and no-play, where they don't seem to be doing anything at all. it makes sense to expect goal-oriented play to show more engagement than aimless play, and aimless play more than no-play.\nthe engagement ratings are in the plots on the right, and you can see it's difficult to distinguish the categories — especially goal-oriented and aimless play are hard to tell apart by their engagement ratings.
People also agree a little, overall, that the no-play condition shows less engagement — but again, not in the beautiful clear-cut way you would hope for, where everything clusters around six for goal-oriented clips, around three for aimless play, and around zero for no-play. That just does not happen, and it should adjust our expectations of what a realistic algorithm can achieve on ecologically valid scenarios. It is possible to train algorithms that try to classify this engagement. We did it with conceptors, a type of reservoir computing, and got fantastic performance on the training set and only slightly-better-than-chance performance on the test set. We also tried something new called the Legendre Memory Unit — I will not say too much about it, because that paper is currently still under submission — and got much better results. In particular, these LMUs can also perform somewhat reasonably on classes that were not explicitly in the training set. The idea is that you train on examples of high engagement and examples of low engagement; when the system then sees an example of intermediate engagement that it has never seen before, it is easier for the LMU to interpolate between the two trained extremes and understand that the new example lies somewhere in between than it is for other types of classifiers. In principle, that is just what reservoir computing should give you: if you have reservoirs that work well on two extremes, it should be possible to identify things that lie semantically between them, and it should be possible to do that with a reservoir-type approach such as a conceptor network. We just could not get that to work as well as we wanted, which is why we switched to Legendre Memory Units. If you have not heard of them, there is a NeurIPS paper from last year by Aaron Voelker and colleagues that describes them: basically a kind of recurrent memory that makes optimal use of the reservoir. If you are familiar with reservoir computing, you know that tuning the reservoir and its parameters is a bit of black magic, a dark art — how do I find the best values to make this reservoir work as well as it can? The Legendre Memory Unit gives you a mathematical answer to how those connections should be built.
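For readers unfamiliar with the reservoir-computing family mentioned here, below is a minimal echo state network sketch in NumPy — a simplification, not the conceptor or LMU models from the talk; the reservoir size, spectral radius, and toy task are all my own illustrative choices.

```python
# Sketch: a minimal echo state network (reservoir computing).
# Only the linear readout is trained; the recurrent reservoir is fixed.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u; collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.arange(500) * 0.1
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Ridge-regression readout -- the only trained component. This is the
# part where tuning (reservoir size, spectral radius, ridge strength)
# becomes the "dark art" the talk refers to.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```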
All right, how much time do I still have? Okay — we are running out of time, so I will wrap up. The same idea applies to forward models, the notion that we can predict the outcomes of actions, which is relevant for autonomous driving; I will just leave this as a plug for the project. What we did is build a controller for a robot vehicle that is a bit more inspired by how the human mind works, in particular in terms of action selection: how does the system decide what to do, how does it run hypothetical situations, so that you do not actually have to crash the vehicle to find out that a certain behavior is a bad idea. We did that in a bio-inspired way: you look at how the human brain does this, and then you try to implement it in a controller. It is also possible to use these predictive mechanisms to predict what other agents are doing — but you can really only do that well for other vehicles, because they have the same sort of dynamics. If you try to predict what a pedestrian is going to do, that gets a lot harder. So that is my open research question on this part: it is comparatively easy to build biologically inspired control for an autonomous vehicle, which conveniently operates in two dimensions, but using it to predict what vulnerable road users are going to do is much harder — and that, of course, is what the field still needs to solve.
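As a hedged illustration of the "run hypothetical situations instead of crashing" idea: the sketch below rolls out candidate steering actions through a simple kinematic forward model and rejects those that would collide. The unicycle dynamics, obstacle, and cost are my own illustrative stand-ins, not the bio-inspired controller from the talk.

```python
# Sketch: evaluate candidate actions with a forward model instead of
# executing them blindly. Unicycle kinematics, invented obstacle.
import numpy as np

DT, HORIZON, SPEED = 0.1, 30, 5.0
OBSTACLE, RADIUS = np.array([12.0, 0.5]), 2.0

def rollout(state, steer):
    """Simulate HORIZON steps of (x, y, heading) under a fixed steer rate."""
    x, y, th = state
    traj = []
    for _ in range(HORIZON):
        x += SPEED * np.cos(th) * DT
        y += SPEED * np.sin(th) * DT
        th += steer * DT
        traj.append((x, y))
    return np.array(traj)

def is_safe(traj):
    return np.all(np.linalg.norm(traj - OBSTACLE, axis=1) > RADIUS)

state = np.array([0.0, 0.0, 0.0])
candidates = np.linspace(-0.6, 0.6, 9)  # steering rates to "imagine"

# Imagine each action in the forward model; keep the safe ones and
# prefer the gentlest maneuver -- no real crash ever happens.
safe = [s for s in candidates if is_safe(rollout(state, s))]
best = min(safe, key=abs)
print(f"safe steering rates: {np.round(safe, 2)}; chosen: {best:.2f}")
```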
I can see Maria Luce getting a bit impatient, so I will zoom through the last point: people have opinions. A lot of the work we have done is building or introducing robots in care homes, where they can act as companion robots for the elderly. An interesting thing in that field is that there are a lot of papers in which researchers have strong opinions about what such a robot should look like — and if you then go and talk to the people in the care homes, they have very different opinions. So what we did is simply ask a group of robotics researchers what kinds of robots they think are appropriate, and why, for a companion-robot scenario; then we asked elderly people in care homes what robots they would want; and we also asked the other stakeholders — nurses in the care homes, family members, everybody. The interesting thing is that roboticists really like Paro, and none of our elderly participants liked Paro. Roboticists like Paro because it is animal-like but not too familiar: people will not get confused into thinking it is actually a seal. The elderly did not like Paro because they do not know how to interact with a seal. They like a robot cat, because they know how to interact with a cat: it reminds them of the Mr. Scruffles they had twenty years ago. It is simply more familiar, and that is okay. And that is the last point I want to make: it is very easy to go off and build beautiful solutions to problems that you think make perfect sense, and then, when you deploy them with your end users, they fail spectacularly as products — because you never actually talked to the users, and you designed a solution they were not asking for. The importance of always talking to the end users of your products, whether it is a robot, an algorithm, or whatever, is highlighted in the paper I reference down there, and that is the take-home message. And since there is at least one philosopher here, I want to point out that the same is true for ethical concerns: philosophers have a lot of opinions about what the ethical concerns around robots are, and stakeholders have a lot of opinions about what the ethical concerns are, and the two do not match — what the literature worries about and what end users see as the ethical problems are not the same. Okay, I have to stop there. As you can see, I did not actually get to talking about XAI, but I got through most of everything else. These are the PhD students who did a lot of the work I mentioned; I want to thank them, and thank you all for listening.

[Host] Thank you — I think it was great, and it is a pity that people have to leave, because this is usually when our meeting gets going. Let us see if some questions come up in the short time we have left. In the meantime, could you paste the title of the last paper you discussed? — This one is basically on the importance of end-user-centered design, and this one on the ethical perceptions of stakeholders. You will find them; this is the most cited paper. If anybody has questions later, my email address is here, so you can also just write to me. I will hang around for a little bit more, but give me one second to let the cat out of this room. There we go — sorry about that.

[Question] No problem — it was really cool, with many aspects to look at. What was interesting for me is that at one point you wrote that we should not define all the behaviors of an agent, but we still want to constrain them, and I think the following slides about the neural networks were about that. But when it gets to that complexity, I do not understand how. — Yes, and it is a very fair point, because it is easy to state this conceptually: I do not want the robot to do just anything, I want the behavior constrained, but at the same time I cannot define everything a priori. That is already difficult to disentangle if you want to build the system, and having a system that actually behaves within those constraints is even harder — which is why, as you figured out, we did not actually build it. Conceptually I think it makes a lot of sense, but how to really do it is hard. And maybe it is a bit of a concern that in a lot of modern-day AI we do not actually seem to care about this part: we just throw a lot of data at an algorithm and do not think about it. — Yes. I was actually interested in this point because, as a group — by the way, I will have to jump out soon, because the others left for another meeting — we are writing a paper, and one of the points we are making is that the agent should be constrained within the boundaries of morally acceptable situations, to put it simply. But actually making that actionable is a whole other story. — Right, and it is also hard because a concept like "morally acceptable" is really fluffy; even if you build something that you think is morally acceptable, you are just building your own morals into it, and it is going to bias the
system toward your own morals. — Yes, indeed, indeed. Okay — thank you very much again. I am so sorry that I have to run to that other meeting; I had misremembered the time. I hope to see you soon, maybe at HRI or other venues — in real life it is not that far. — Exactly; let me know if you come over for any reason. — I will definitely be in touch, and if anybody wants to reach me, you have my email address: just drop me a mail. Especially once corona has gone away, maybe we can actually meet up in person. — That would be really nice. All right, cool — excellent, thank you very much for having me. — Thank you for joining. Bye.", "date_published": "2020-12-02T15:46:27Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "72375cfa50bf7b20f43609157da4b26c", "title": "Kostas Tsiakas - Designing Human-AI interactions using Explainable and Human-in-the-Loop AI", "url": "https://www.youtube.com/watch?v=udZgpTZ20DI", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "I have been a postdoc at IDE, in Human-Centered Design, since January. First of all, thank you very much for having me — it is a good opportunity for me to introduce myself. In today's talk I will discuss designing human-AI interactions using human-in-the-loop and explainable AI. A few things about me: my background is in computer science and artificial intelligence, but my research experience and interests lie at the intersection of AI, machine learning, and human-computer interaction, focusing on aspects such as user modeling and personalization in human-centered AI, with applications in healthcare and education. Here at HCD I will also explore topics in future work practices. This is the outline of the talk: I will begin with a short introduction to human-AI interactions and the current guidelines and frameworks, then focus on human-in-the-loop and explainable AI, then describe my work in some use cases for human-AI interaction, and conclude with my ongoing and future work. Whenever you have questions, please feel free to interrupt. So, what is a human-AI interaction? In a general sense, it refers to a user needing to complete a task with the help of AI support. In this paper, the authors identified three types of human-AI interaction, based on who initiates the interaction and how the user and the AI respond during it: intermittent, continuous, and proactive interactions. The authors highlight the need to focus on continuous and proactive interactions, since they seem to be the most complex, but we should keep in mind that a mix of these interaction types can take place during human-AI interaction. There are also other frameworks and guidelines that investigate different aspects of human-AI interaction. In this one, the authors discuss how we can design with machine learning, and they identify four different value channels: the idea is to go from what ML is capable of doing to how those ML capabilities can actually be translated into human values. For example, AI can be used to get insights about our own selves or about the world, or about how a
system should be optimally performing, and so on. Another framework tries to categorize human-centered AI by the levels of computer automation and human control, and what it says is that the way to achieve reliably safe and trustworthy AI interactions is to achieve a high level of computer automation while also maintaining a high level of human control. Considering such frameworks and guidelines — and there are many more — there are also toolkits that have been proposed to help designers design human-AI experiences. This one is from Microsoft, which has also published a set of 18 guidelines for human-AI interaction; based on these, they developed a toolkit that includes design patterns and examples that can be used to satisfy the guidelines. I have highlighted some guidelines that focus on explainable AI and human control. From Google we have the People + AI Guidebook, which again includes worksheets and tools that can help designers design AI systems, including material on explainability and on how users can provide feedback to the system. A similar one is the AI meets Design toolkit, which follows a similar approach — it does not render very nicely here, but it can help designers map out the machine learning models they want to use and address some of the most common AI challenges. These are very useful toolkits, and they cover many different aspects of human-AI interactions, but I would like to focus on two methods, because they concern the two directions of the interaction: first, how we can include human users in the decision-making and learning process of an AI system, and second, how AI systems can explain themselves to users. (Sorry — apparently it is difficult to see the slides, but someone in the chat says they have the slides from a previous time.) So we will focus on human-in-the-loop and explainable AI, which describe how these two parts of the interaction can communicate with each other and exchange information. On the one hand we have human-in-the-loop AI, or interactive machine learning — there are different terms for the same thing. Essentially, interactive machine learning includes the end user in the learning process. This can be done to help users get better insight into how a machine learning system works and how it makes predictions, and to let them guide the underlying learning process, either to improve the model or to satisfy their own needs. It has been used, for example, for personalization, to increase users' motivation during the interaction, and to enhance trust. Interestingly, the term has been around for quite a few years: this was one of the first times it was coined as a term, and in this thesis it is also called socially guided machine learning, because the author argues that in order to be able to teach a machine, you have to establish a kind of social interaction with the machine. Here we can also see the transparency aspect of machine learning. User involvement can happen at different stages of the machine learning pipeline: users can, for example, be involved in the
data processing part of the pipeline, as well as in model design, training, and evaluation. Interactive machine learning can also be used during the interaction itself: an AI system acts autonomously while a user supervises and monitors it, intervening with feedback when needed — for example, to ensure safety. Here are some examples of interactive machine learning for different purposes. In this one, from 2003, the authors proposed interactive machine learning to facilitate image segmentation, which back then was a very complex problem for computer vision; they showed that having a user in the process could facilitate learning and actually produce accurate segmentation classifiers. Here we have a robot learner and a human teacher: the teacher provides feedback and guidance while the robot learns to build objects from basic shapes, so the user explicitly tells the robot what to do. And in this case — a social robot for language learning — the student provides feedback to the robot implicitly: the robot learns to personalize its affective behavior based on whether the user is engaged or not. So here we can see the different types of feedback that a user can provide to the system.
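To make the "learning from human feedback" idea concrete, here is a minimal sketch of an interactive reinforcement learning loop in the spirit of the examples above — not the implementation of any of the cited systems. A tabular Q-learning agent receives a reward that mixes the environment reward with occasional human evaluative feedback; the chain-world task, the feedback simulator, and the weighting are my own illustrative assumptions.

```python
# Sketch: Q-learning with human evaluative feedback mixed into the reward.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, ACTIONS = 6, (-1, +1)   # walk left/right on a chain; goal at the end
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps, beta = 0.2, 0.9, 0.2, 2.0  # beta weights the human signal

def human_feedback(action):
    """Stand-in for a human teacher: on ~30% of steps they approve
    moving right (+1) or criticise moving left (-1); otherwise silent."""
    if rng.random() < 0.3:
        return 1.0 if ACTIONS[action] == +1 else -1.0
    return 0.0

for episode in range(200):
    s = 0
    while s < N_STATES - 1:
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        env_r = 1.0 if s_next == N_STATES - 1 else 0.0
        r = env_r + beta * human_feedback(a)      # shaped reward
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))  # mostly 1s: the agent learned to move right
```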
In a more recent and more complex case, about autonomous driving, this paper investigates several aspects of involving a human user in the learning process: for example, whether the system should start with a pre-trained model or from scratch, how users can be used to mitigate the cold-start problem, and how the user is supposed to give feedback — continuously or only at interruptions — which of course depends on the expertise and the intentions of the user. So there are many things to take into consideration before putting a human in the loop.

[Question] Can you say a bit more, in terms of human control, about what is being learned by the interactive reinforcement learning here? — It runs in a simulation. The model is the autonomous driving behavior — how the car avoids other cars. The system is autonomous, and the user is there during the interaction and, when needed, tells the reinforcement learning agent what actions to take; so the user trains the system when necessary. — Yes, I think I get it.

So these are the points to consider for human-in-the-loop: how much systems can actually learn from human input, and how humans can provide good feedback to the system. There are different types of feedback, such as evaluative feedback, labels, or giving the system examples and demonstrations of what it has to do; we also talked about implicit versus explicit feedback, and it can be mixed. And one basic consideration is how to minimize the user's workload: a user could in principle always be there telling the system what to do, but that would result in extremely high workload. On the other hand, we have explainable AI, which has become extremely popular in recent years. In general, the goal of explainable AI is to enable AI systems to self-explain their internal models, decisions, and predictions, and to do so in a human-understandable way. A very important question for explainable AI is who the target user is. As we can see in this graph, a data expert may have different purposes for using explainable AI than other users, with different design goals and different evaluation measures: for a user who does not know what AI can do, explanations may serve, for example, to enhance trust; for AI experts they can serve model debugging, so we also have different evaluation measures. This work likewise tries to identify who needs explainable AI and why: for example, users affected by model decisions need explanations to understand their situation and verify those decisions, while regulatory agencies may need to certify the compliance of a model. In this slide we see three terms — interaction concept, explanation goal, and interaction goal — and in this paper the authors try to match them: for example, if the interaction concept is to transmit information between the AI and the user, the interaction goal is that users need to see accurate explanations, and the goal of the explanation is to achieve transparency. Based on this, I would like to highlight two parts. Explainable AI can be used for trust, for debugging, and for other purposes — but it can also enhance users' perception and understanding of their own behavior: AI can learn many things about us while we interact, so presenting those user models back to us can enhance our self-perception. And it can enhance users' perception of the system's capabilities. With that, I would like to briefly present three projects — three use cases — that use AI in different ways in terms of how users interact with it. In the first, a cognitive assessment system, the user interacts with the AI only passively, meaning they have no control over the AI's decisions. In the second, a socially assistive robot learns to adapt to the user based on the user's feedback, so the user participates in the personalization process. In the third, explainable AI is used to cognitively enhance users. So let us see how this worked out. The first was a multidisciplinary project to design a cognitive assessment tool for children — more specifically, for embodied cognition: measuring cognitive skills through physical exercises that also carry cognitive demands. The idea was that the child performs a set of predefined exercises, and computer vision and machine learning are used to analyze the motion and assign a score to the child. This is how
of the system was actually\nhow the process of training the ai model\nwas\nfirst we had to collect data from 96\ngram\nthen extract the\nvideos and\nfeatures\nand then we have the very tedious and\nreally consuming\nprocess of annotation so what we need to\ndo there was to see the videos\nscore the children based on the\ncognitive assessment tool\nand then fit this to the learning\nalgorithm\nthat seemed much easier before we\nstarted uh doing that\nbut manually annotating was very very\nhard and it was most of the times not in\naccordance with what the machine\nlearning system would do\nso briefly what i did here was to\nimplement\na graphical user interface that\nvisualizes the data that the system gets\nso it could help non a technical users\nto score\nthe child to score the participants as\nthe machine learning uh would do so that\nwould\nthat was in order to get reliable\nannotations from non-technical\nexperts\nand\nthis interface could\nbe also used for\nactive learning for example\nso if the system didn't know how to do a\nprediction it would ask for the label\nfrom\nthe user\nnow on the second use case where user\nhas a little bit more involvement in the\nlearning process\nthat was actually a cognitive training\ntask using a social robot\nthe robot would announce a sequence of\nletters and they use their hub to\nremember this sequence and use the\nbuttons to repeat\nthese letters\nand the robot would adjust the\ndifficulty of the next\ntask so the length of the\nof the string of letters\nand also the verbal feedback to the user\nand\ninteractive reinforcement learning\nwas used\nto\nto make this personalization\nusing both the performance of the user\nas well as\nengagement as it was measured by a\nneeds a headset\nand the problem was how to combine these\ndifferent types of feedback in order to\nachieve a personalization\nand so in order to achieve also safety\ner\nwe\nwe also did a user study for secondary\nuh users so we assume that there's a\nsupervisor maybe a teacher who\nsupervises this interaction with the\nrobot and inside\nso we built this interface\nthat visualizes both the user model of\nthe user\nand also what the reinforcement learning\ndecides for the next round\nso the supervisor user could\nuh agree or disagree with reinforcement\nlearning and this could be used again as\ntraining feedback\nfor the system\nnow for this\nwe did a comparison study with a\npersonalized robot and i'm not\npersonalized robot\nresults were\nvery\nnice for me\nbecause a personalized robot was\nperceived as a more intelligent trainer\nand also users performed better with a\npersonalized robot\nbut what was really interesting was\nduring the\ninterviews with\nwith the players\nthey\nthey highlighted some\naspects for explainability and autonomy\nthey were for example players that asked\nme okay but how does it work\nand why the system gave me a harder\nexercise\nso it would be maybe it would be nice to\nexplain give this explanation to the\nuser\nand also in terms of autonomy\nsome users told me that it would be nice\nif i could select my own level\nonce in a while\nso that has to do with a human autonomy\nand also for the\nfor the interface for the supervisor\nuh\na proper visualization of systems\nperception\nand decisions can sort of enhance uh\nhuman decision making\nso the\nthese were the\nmessages from the take away messages\nfrom the startup\ncan i ask your questions you said\nperformed better compared to\nwhat\nwhat did you compare in this study\nah\nit was a comparison study so 
Now for the second use case, where the user has a little more involvement in the learning process: a cognitive training task using a social robot. The robot announces a sequence of letters, and the user has to remember the sequence and repeat it using the buttons; the robot then adjusts the difficulty of the next task — the length of the string of letters — as well as its verbal feedback to the user. Interactive reinforcement learning was used for this personalization, based both on the user's performance and on engagement as measured by an EEG headset; the problem was how to combine these different types of feedback to achieve personalization. To also address safety, we ran a user study with secondary users: we assume there is a supervisor, maybe a teacher, who oversees the interaction with the robot, and we built an interface that visualizes both the user model and what the reinforcement learner decides for the next round. The supervisor can agree or disagree with the reinforcement learner, and that again becomes training feedback for the system. For this project we ran a comparison study with a personalized and a non-personalized robot, and the results were very nice for me: the personalized robot was perceived as a more intelligent trainer, and users also performed better with the personalized robot. What was really interesting, though, came out of the interviews with the players, who highlighted aspects of explainability and autonomy. Some players asked me, "okay, but how does it work — why did the system give me a harder exercise?", so maybe it would be good to give that explanation to the user; and in terms of autonomy, some users said it would be nice to select their own level once in a while. Also, for the supervisor interface, a proper visualization of the system's perception and decisions can support human decision-making. Those were the take-away messages from this study.

[Question] You said they performed better — compared to what? What did you compare in this study? — It was a comparison study: half of the users trained with the personalized robot, which learned how to adapt the difficulty, while the other half were given random difficulty levels, and the users who followed the personalized training sessions performed better in terms of score. — You did not compare to an expert trainer who adjusts the level? — No, it was just the score from the game: each player played 10 rounds, and at the end we compared the scores.
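To illustrate the "combine performance and engagement feedback" problem in code: below is a minimal sketch in which a difficulty-adapting agent receives a reward that blends a task-performance term with an engagement term. The blending weight, the simulated user, and the state/action spaces are all invented for illustration — the talk does not specify these details.

```python
# Sketch: blending two feedback signals (performance + engagement)
# into one reward for a difficulty-adapting agent.
import numpy as np

rng = np.random.default_rng(0)
LEVELS = [3, 5, 7, 9]           # sequence lengths the robot can pick
SKILL = 5.5                     # simulated user's "ideal" difficulty
Q = np.zeros(len(LEVELS))
alpha, eps, w = 0.1, 0.2, 0.5   # w trades performance off against engagement

def simulate_user(level):
    """Toy user: success gets less likely as difficulty rises;
    engagement peaks when difficulty matches skill (flow-like curve)."""
    performance = 1.0 if rng.random() < 1 / (1 + np.exp(level - SKILL)) else 0.0
    engagement = np.exp(-((level - SKILL) ** 2) / 4)
    return performance, engagement

for round_ in range(500):
    a = rng.integers(len(LEVELS)) if rng.random() < eps else int(np.argmax(Q))
    perf, eng = simulate_user(LEVELS[a])
    reward = w * perf + (1 - w) * eng       # the combination problem
    Q[a] += alpha * (reward - Q[a])         # bandit-style update

print(dict(zip(LEVELS, np.round(Q, 2))))    # level 5 should score highest
```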
For the third use case, the goal was to build an explainable AI system to support self-regulated learning: enabling students to control their own learning process, self-assess their skills, and become more independent learners. The framework is that all the information that machine learning and AI can extract — for example, student modeling and user profiling — can be made explainable and used to support specific self-regulated learning skills. As an example of this framework, I developed a prototype game for cognitive training in which the user selects their own task for the next round, a combination of different cognitive exercises; the more complex the combination, the harder the task. What we want to investigate is how to use explainable AI, in the form of open learner models and explainable recommendations, to help the child choose appropriate levels. Here we have the child's open learner model — what the machine learning part has learned — and based on it the system can give the child a recommendation for the next task. The goal of explainability at this point is persuasion: we need to persuade the child why the next task is appropriate for them and what its outcome would be. And because we are talking about children, we needed to find an appropriate way to deliver these explanations and recommendations, so we followed persuasive strategies. Here is an example of authority — "your teacher says you could do this" — compared to an example of social proof — "your friends preferred this task". The idea is to take the recommender system's output and formulate it as a persuasive recommendation. Along the same self-regulated learning idea, a master's student project designed an educational avatar that depicts the future self of the child one week ahead. The idea is that the student does their own weekly planning and then discusses, or negotiates, that plan with their future-self avatar. In the architecture, the student sets the goals and the underlying machine learning makes the predictions for the outcome; the question was how to visualize those outcomes through the design of the avatar. For example, if the user sets an over-optimistic goal — something the machine learning detects as not feasible — the avatar looks confused; so the model's confidence, or uncertainty, is used as a design feature of the app. Here are some more examples from a master course for industrial design students on designing explainable AI for education. One, for online lectures, gives the teacher an estimate of how engaged the students are, shown as a kind of blob on their screen. Another is a prototype robot that simulates how the student feels, so the student can look at the robot to self-reflect. Another is for brainstorming sessions: a device that moves toward the people who need to speak more — or toward the more dominant ones, so they understand that maybe they should speak a little less. And there is a nice application for sign language learning: a robot is the instructor, and a bar displays the uncertainty of the machine learning model, giving the user immediate feedback on how to correct or improve their signs.
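A hedged sketch of using model uncertainty as a design signal, as in the confused avatar and the sign-language uncertainty bar: softmax entropy from a classifier's output is mapped to a UI state. The thresholds and states are invented; the talk does not specify how confidence was computed.

```python
# Sketch: turn a classifier's predictive uncertainty into a UI signal
# (an avatar expression, or the width of an uncertainty bar).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ui_state(logits):
    p = softmax(logits)
    # Normalized entropy: 0 = fully confident, 1 = maximally unsure.
    H = -(p * np.log(p)).sum() / np.log(len(p))
    if H < 0.3:
        return "avatar: confident nod"
    elif H < 0.7:
        return "avatar: thoughtful"
    return "avatar: confused"           # e.g. an over-optimistic goal

print(ui_state(np.array([4.0, 0.1, 0.2])))   # confident prediction
print(ui_state(np.array([1.0, 0.9, 1.1])))   # near-uniform -> confused
```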
So, we have discussed explainable AI and interactive machine learning, and my goal is to see how the two can be combined and unified to design interactions: I see explainable AI as providing information from the AI to the user, while interactive machine learning — human-in-the-loop — provides information from the user to the AI. My argument is that this combination can lead to better human-AI interactions, and it raises different challenges: how we can design transparency and explainability, and how we can design interfaces that humans can use to provide feedback. This combination of explainable AI and human-in-the-loop can also lead to what is called hybrid intelligence: explainable cognitive intelligence coming from the human, and explainable AI coming from the machine. There are several ways hybrid intelligence has been defined, with different goals — integrating knowledge from both sides is different from decision support, for example, with different actions taking place — but it is again the same loop of explainability and interactivity. Our current work is to realize these possible interactions in the context of future work practices: seeing how different types of users in the workplace can interact with AI, and for what purposes. For example, in a team of employees, explainable AI could be used to provide a shared understanding of how the team works on a specific task, or the team could provide feedback to the AI that gets passed on to the supervisor — for example, if there is a lot of negative feedback within a team, that should become visible to the supervisor. That is the last part: what we aim to do now is run a design workshop to define low-level actions that users and the AI can perform during the interaction. On the user side: for example, providing correct labels, providing numbers (that is, evaluative feedback), or providing demonstrations for the model. On the AI side: providing the model output — a prediction — or providing different kinds of explanations. The idea is that from such primitive actions we can design interactions for a given purpose. I can give you an example here: the user is a job applicant in a CV assessment case, where machine learning is used to say whether the applicant will be accepted or not. In this scenario the user is rejected, and we map out how the user can ask the system for explanations and how the system can provide explanations back to the user. This design pattern could capture the concept of contesting an AI model. What we want to achieve through the workshop is to see whether designers can use such interaction cards to go from primitive actions up to high-level intentions.
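To make the contesting scenario concrete, here is a hedged sketch of one common explanation mechanism a rejected applicant might be offered: a brute-force counterfactual search ("what minimal change would flip the decision?"). The features, model, and search strategy are invented for illustration; the talk does not commit to any particular technique.

```python
# Sketch: a counterfactual explanation for a rejected applicant --
# search for a small feature change that flips the model's decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Invented features: [years_experience, num_skills, test_score]
X = rng.uniform([0, 0, 0], [15, 10, 100], size=(300, 3))
y = (0.3 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * X[:, 2] > 6).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([2.0, 3.0, 55.0])
print("model decision:", model.predict([applicant])[0])  # 0 = rejected

best = None
for feat, step in [(0, 0.5), (1, 1.0), (2, 5.0)]:  # one feature at a time
    cand = applicant.copy()
    for n in range(1, 21):
        cand[feat] += step
        if model.predict([cand])[0] == 1:           # decision flipped
            if best is None or n < best[2]:
                best = (feat, cand[feat], n)
            break

names = ["years_experience", "num_skills", "test_score"]
feat, new_val, _ = best
print(f"counterfactual: raise {names[feat]} to {new_val:.1f} to be accepted")
```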
And that is the last slide. The goals are twofold. In terms of design: to see whether we can identify such design patterns between explainable and interactive human-AI interactions, or whether new types of interaction emerge. And also to get insight into the computational challenges of implementing such a collaborative interaction — for example, if we have feedback from different users, how do we weight that feedback, how do we preserve autonomy, and so on — while also considering concepts like co-performance, since both parties participate actively in the interaction, and contestation. I think that is all — thank you very much.

[Question] About the study where you were assessing the embodied cognitive learning of children: in the later slides you said you were providing assistance for the manual labeling aligned with how the model would interpret the data — a sort of support to code as the model would — and that when the model did not know how to label, you would ask people. Can you explain more why you need the manual annotation? — Yes. During this task we observed, for example, the position of the legs over time. As humans we knew that a certain movement was a correct step, so we would label it as such; but because of the children's motions, it was often not easy to manually decide whether a movement was correct or not. — So by visualizing the data, you mean the model could understand better whether it was correct? — Here we did not have a model yet: it was just the annotation phase, watching the videos and annotating each single movement, which was really difficult for a human alone; visualizing the data made it less difficult for the human annotators to annotate. As a later step, once we have a model, it could automatically detect these things and, if there is an issue with a data point, ask: "can you give me the label for this one, because I am not certain?".

[Question] Thanks for the presentation — good to meet you like this, with content straight away. One of the questions going around in my head is how, in your studies, you assess the behavior of the people interacting with the system. You sometimes give anecdotal evidence of how people change their behavior or not, and one of the things I find fascinating is how being in a relationship with such systems affects how you then interact, learn, disengage, engage, get confused, and so on. How do you do that? — For this case, which was the most complete because I also ran the final user study, I focused on how they performed and how engaged they were during the interaction — both from the headset and from self-reports, so I tried to collect both subjective and objective data and to make a story out of them rather than rely on a single measure. If we could find behavioral indicators, it would also be interesting to see how, for example, explanations affect users. — We should have a chat anyway; in the back of my mind is the work of my friend Zula, supervised by Mark Neerincx, on this idea that you also test for emergence, co-adaptation, and mutual learning: how do you assess the patterns that bubble up — doing something, not doing something, checking, waiting, getting frustrated — all these observations of what actually happens with a system that is learning? — Yes, we can discuss that.

[Question from the chat] Somewhere in your presentation you spoke about uncertainty, which got me thinking about how humans in general make decisions or provide inputs in an ambiguous fashion. In the child-scoring project you said annotation was a bit of an issue, and it is quite likely there was ambiguity — both inter-annotator and possibly intra-annotator disagreement. My assumption would be that Bayesian methodologies are one way to handle this kind of ambiguity, by modeling the distributions, but the downside — a point you also mentioned — is that the computational cost can be so high that you cannot really do human-computer interaction with it. In our work we are trying to do interactive segmentation of CT images of the head and neck area, where the contrast is often so poor that AI models fail, and extracting uncertainty is very time-consuming; we cannot really expect our clinician to hit a button and wait ten seconds for the input to come in. Is there any landmark work in the literature that has, even at a smaller scale, used Bayesian uncertainty and interaction together? — I do not have something in mind right now specifically about Bayesian approaches; I think it really depends on the use case and on what the purpose of the human in the loop is. In this example, a simple measure of uncertainty — just a parameter of the machine learning model — was useful for the human annotator, but other examples may need something more complex. — All right. Thank you.

[Question] I have one more, about the Future Me project — can we go to the next slide? Could you talk a little more about the model? I think it is actually sketched on the right. — Yes, this is the model that projects the outcomes for the user. We have not implemented it, but from the literature we found a similar model: a recurrent neural network. The input at each step is the time spent on each of the four subjects — for example English, maths, geography — so the student's input is how many minutes they want to study for each subject, and the output is a number: the number of completed exercises. It is recurrent with one step per day of the week, so for a weekly plan we can see the predicted weekly progress. The data is fairly simple, so it is not a very complex task — that is why we proposed this model — and recurrent neural networks have been used for student modeling with this kind of data.
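A hedged sketch of that described-but-not-implemented model: a small GRU that reads planned study minutes for four subjects across seven days and predicts completed exercises per day. The layer sizes, the choice of GRU, and the synthetic data are my own assumptions based only on the description above.

```python
# Sketch: recurrent model mapping a weekly study plan (minutes per
# subject per day) to predicted completed exercises per day.
import torch
import torch.nn as nn

class FutureMeModel(nn.Module):
    def __init__(self, n_subjects=4, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_subjects, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, 1)   # completed exercises that day

    def forward(self, plan):               # plan: (batch, 7 days, 4 subjects)
        states, _ = self.rnn(plan)
        return self.head(states).squeeze(-1)   # (batch, 7)

model = FutureMeModel()

# Synthetic training pair: more planned minutes -> more exercises done.
plan = torch.rand(64, 7, 4) * 60.0
target = plan.sum(dim=2) / 40.0

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(plan / 60.0), target)  # scale inputs to [0, 1]
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.3f}")
```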
[Question] I was just wondering whether you would see any value in going to more explicit models of how students engage with the tasks — it is almost a swear word these days, but how about rule-based models, where you could describe exactly what the student has done and how training time turns into results? My question, I guess, is: do you need a neural network here? — I agree with that: if if-then-else rules work, it is totally fine, and the explainability would be more or less the same. But machine learning can capture things that you cannot code as rules: even if you follow a model of engagement, there may be students who are not described by certain models, which is why we proposed a similar but data-driven model. And as I mentioned, we did not implement it. — I am also wondering how you would get data for this. — That would be through a user interface — for example, forms where students write down their intended plan and what they actually did — and we would use that data for the model, initially without the avatar and the whole system. — But when you collect the data without the whole system, that data is only valid while not using the system; once you use it as input to this system, the learning process changes, so the data you used may no longer apply. — It depends: for this model, the prediction just says what happens if I follow my own schedule, and the interaction with the avatar happens only during planning, not while doing the exercises. — Okay, thanks. — I think we are done then. Thank you again. [Applause]", "date_published": "2022-07-25T10:42:43Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "d69a8f64f3aed489b4b9ab34c2da19a3", "title": "Nicola Croce: ODD, data labeling and the problem of representing knowledge for AVs", "url": "https://www.youtube.com/watch?v=IijqelLKtQ4", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Feel free to discuss while I am talking. I just put together eight or nine slides as something to guide my talk, but I am very much open to an interactive session, also because all these topics are very much open-ended: there is no right answer yet, and I do not know for how long it is going to stay like this, so it will definitely be good to gather your feedback and thoughts on the problem. Let us start. We will go through data labeling from the point of view of knowledge representation: we can see data labeling as mostly a knowledge representation problem, if we just stick to what it is in practice. This is the agenda: data labeling from first principles, some open issues with that point of view, and then the ODD as another knowledge representation problem. Two premises. First, I urge you to disentangle the view of data labeling from the specific machine learning model or algorithm that technically implements it — I know the two are highly related, but it is good to abstract away from that to see the thing for what it really is. Second, the goal is to provide a general conceptual framework of what labeling is, mostly for concept learning: the branch of machine learning that deals with learning membership in categories of the world — not regression, just saying what is what, which object belongs to which category. What we see every day when we label data at my company, Deepen, is that every labeling task, every labeling job, every customer comes in with very different specifications. Some are interested in just five or six objects — pedestrians, cars, trucks, buses, bicycles, and motorcycles; others want twenty-plus categories with very rich attributes, ranging from state attributes to position attributes to visual-appearance attributes. There is wide variety in what customers and people want labeled, according to how they think about their system. Ultimately, though, the problem of labeling is very similar to the problem of language, which was described a number of years ago — in 1923, by Ogden and Richards — with the semiotic triangle of language. It basically says that whenever we give meaning to things and build a language out of them, we do the exercise of taking a symbol, such as the string of characters 'car', and letting it stand for a real thing via a mental model. We have the symbol 'car' that stands for a real car in the real world — not exactly, say, this Opel I can see out of my window right now, but the concept of that Opel: we abstract it away and use it through a 'thought of reference' that lives just in our minds
— one that, of course, abstracts away the characteristics and properties of that referent and stands for it through the symbol. This is exactly the exercise we attempt when labeling: we have a list of categories we are interested in, we have data, and we need to figure out a way to teach machines — cars, autonomous systems — what is what. The problem is that, seen through these lenses, it really becomes a knowledge representation problem: the problem of representing human knowledge in a way that is fungible for machines — understandable to them, and perhaps something they can even reason upon. The problem then becomes more complex, because the referent — this car — no longer lives in the real physical world as we humans perceive it. Instead it lives in what I call information containers, or files, that organize the information in specific ways. For example, here we have a video stream, which is just a collection of individual frames ordered in time. Every frame is an image, and every image is just a collection of three matrices — one for red, one for green, one for blue — where every entry has a value, usually from 0 to 255, specifying the intensity of that pixel. We really need to understand that machines have only this medium — the limited information coming from three matrices — with which to grasp the concept you are trying to teach them. A very widely used task for machine learning, for example, is object detection: localizing specific objects in a scene through a bounding box. For the algorithm to learn that — and again, I am trying to abstract away from the technical details — what we do from a labeling perspective is go through all these frames and mark a bounding box around, say, the cars. This is our way of pointing a finger and saying: hey, algorithm, this is the concept of a car that you need to learn. We are using what I call a labeling construct — a tool that summarizes, in our view, the concept we want the machine to learn — to point that finger, the way a parent points at a car and teaches a child "this thing is a car". We do it by drawing a bounding box, which is basically cropping the image around the concept we want the machine to learn. This is a small extension of the triangle we saw before, with one more step: we go through the labeling construct, dissecting the information container — the frame — to find a representation of the object that, in our minds, summarizes the concept well enough for the machine to learn from. Of course, a bounding box is not all that full of knowledge — it is basically just a smaller picture, which feeds into the big issue of explainability in neural networks — but this crop is exactly all the neural network gets in order to understand and learn what a car is.
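A hedged sketch of the two representations just described — an image as three intensity matrices, and a bounding-box label as a crop over them. The array shapes and the toy label format are my own illustrative choices; real annotation formats (COCO, KITTI, and so on) differ in detail.

```python
# Sketch: the "information container" and the bounding-box labeling
# construct, reduced to their bare data structures.
import numpy as np

# An RGB frame: height x width x 3 matrices of intensities in 0..255.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# A bounding-box label: a symbol plus a region of the container.
label = {"category": "car", "x": 200, "y": 150, "w": 120, "h": 80}

# The construct "points the finger": all the network ever sees of the
# concept 'car' is this crop of the three matrices.
crop = frame[label["y"] : label["y"] + label["h"],
             label["x"] : label["x"] + label["w"]]
print(crop.shape)   # (80, 120, 3) -- just a smaller picture
```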
This labeling construct is very important, I believe, because it is the way we represent human knowledge for the machine — the way we represent these concepts — and it is not the only way we have. Staying with the digital referent — these JPEG or PNG files built on RGB matrices — we have multiple ways of representing human knowledge. The easiest one, usually attached to the computer vision task of image classification, is very simple: we attach a string — for example, the string "dog" — to the whole container of information, the whole picture. We use no labeling construct to summarize a concept; we just tell the machine "this picture is a dog — go figure it out". The problem is that the picture does not contain only the dog: it contains a lot of other stuff, context and noise, and it becomes entirely the neural network's job to figure out by itself what makes a dog a dog. From a human perspective this is the simplest task, because we represent basically no knowledge at all — no feature engineering whatsoever; we just say "go figure it out yourself". That is a little problematic sometimes. We know that, theoretically, with infinite data you are done: the neural network will solve this and learn to detect dogs in whatever image you submit. But in practice we are always bounded by data quantity somehow. On one hand — a friend of mine working in machine learning reminded me of a paper and an experiment just yesterday — models trained on ImageNet famously perform better than humans on this image classification task. On the other hand, in practice there have been plenty of cases where, for example, a classifier trained only on images of cows against the backdrop of Scottish fields will never recognize a cow in Sicily — because the model's "thought of reference", which is basically what we are trying to produce when we use a construct to summarize a concept, is not under our control. For a neural network, the network figures it out by itself, and we do not know what that internal thing is. This is the problem of explainability — but it is also something I think we can dampen a little if we think more deeply, in principle, about how we represent knowledge for the neural network to learn from. So: I talked about the simple case where the labeling construct basically goes to zero and we just attach a string, a symbol, to the whole image; then we talked about bounding boxes; and there is another
way that we have, as humans, to represent knowledge in images, usually attached to the task of semantic segmentation — though again, this is not about the computer vision task as such, just about how we represent the concept: in this case, as a list of pixels. This car, for example, is just the list of all the pixels that compose the car concept, and likewise for the trees, buildings, pedestrians, and so on. This is somewhat more refined than a bounding box, because we get more granular information about the localization of the entity and what it is really made of. But from a human knowledge representation point of view it is also a little problematic, because what we are telling the algorithm here is not "this is a car"; it is that all these blue pixels are made of car-ish material. In this representation the model is not able to distinguish between instances — it can only distinguish between kinds of stuff, materials. This patch of the image is made of pedestrian-material, and that patch is treated exactly the same way; the algorithm does not know that these are two different entities. So it reasons in a very different way than humans do — and that is the whole point of what I am trying to convey: we probably need to think more deeply about how we do labeling, because whatever we do here is a human knowledge representation of the concept we want the machine to learn, and these representations might not be rich enough.
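To summarize the three labeling constructs discussed so far as bare data structures — a hedged, schematic sketch (real formats like COCO differ; the shapes and names here are invented):

```python
# Sketch: three ways of representing the same knowledge ("there is a
# car here") at increasing spatial granularity.
import numpy as np

H, W = 480, 640

# 1. Image classification: a symbol for the WHOLE container.
#    The network must separate concept from context by itself.
classification_label = "car"

# 2. Object detection: symbol + rectangular region (the crop construct).
detection_label = {"category": "car", "box": (200, 150, 120, 80)}

# 3. Semantic segmentation: a per-pixel class map. More granular, but
#    it says "these pixels are made of car-stuff" -- two touching cars
#    are indistinguishable without separate instance labels.
CLASSES = {0: "background", 1: "car", 2: "pedestrian"}
mask = np.zeros((H, W), dtype=np.uint8)
mask[150:230, 200:320] = 1            # the same car region, pixel-wise

print(classification_label, detection_label, int(mask.sum()))
```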
That is why the whole point of what I am trying to convey is that we probably need to think more deeply about how we do labeling, because whatever we do here is a human knowledge representation of the concept we want the machine to learn — and these representations might not be rich enough.\nThe world of autonomous systems is now dividing into two main directions. One goes towards end-to-end deep learning, where these intermediate representations are not used at all, or only in a minor form for specific subtasks. The goal is a single end-to-end pipeline: here is the raw sensor input, here is, for example, the exact path to follow — and you, the deep learning model, figure out everything else. This is partially implemented through reinforcement learning, or through imitation learning, where you effectively label data just by driving. There is no human representation, no human-understandable layer, between input and output; everything is one hundred percent in the hands of the deep learning system. On one hand this is great — we are seeing breakthroughs, it could be a good way forward, and a number of startups are pursuing this model. On the other hand it raises huge issues of explainability, safety and liability, because we really are not aware of what is going on inside these systems.\nThe other path I see in industry goes through so-called neurosymbolic approaches — neurosymbolic AI — where we think much more about how we represent knowledge for machines, how we encode and label things, and how we modularize the technology, so that we can keep an eye on the inputs and outputs of each block and also open the black box and see inside it. This is also the path that a lot of the big players — the big OEMs, the traditional car manufacturers — are converging towards, I believe. And that is why we need to think more about representation. What I can envision happening in this field — this is my personal view — is what we are starting to see in research: a move towards semantically rich descriptions of the scene, probably using media like knowledge graphs, or even heterogeneous graphs, that give a very complete description in terms close to natural language. Ideally we should be able to say: this guy is on this lane, which is next to this other lane, and he is running towards this other guy. A semantically rich representation of the scene gives us more power of understanding, and it gives our networks, our models, our technology an idea of which concepts we as humans care about.
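As a minimal sketch of what such a description could look like — the triples, the ids and the little query helper are all invented, not a real knowledge-graph format:

```python
# A tiny scene description as (subject, relation, object) triples,
# loosely in the spirit of a knowledge graph; all ids are made up.
scene = [
    ("car_1",        "is_on",           "lane_2"),
    ("lane_2",       "next_to",         "lane_1"),
    ("pedestrian_7", "is_on",           "lane_1"),
    ("pedestrian_7", "running_towards", "car_1"),
]

def query(relation, graph=scene):
    """Return all (subject, object) pairs connected by `relation`."""
    return [(s, o) for s, r, o in graph if r == relation]

print(query("is_on"))  # [('car_1', 'lane_2'), ('pedestrian_7', 'lane_1')]
```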
So why did I tell you all this? Because I believe there is also a moral aspect attached to it. Every day in my job I see new labeling specifications coming in from industry players, and sometimes the distinction between 'child' and 'pedestrian' is simply dropped — everything is treated as 'pedestrian': I just want to label pedestrians, I don't care whether it is a child or not. Do we need to care? Is this something that ethically needs to be regulated? The labeling standard we are building now is non-normative, non-prescriptive — just a first step in this direction — but tomorrow I can envision an authority coming along and saying: you need to show me that you can not only detect pedestrians, but also detect whether a pedestrian is a child or not, or whether a pedestrian is walking on crutches or not, because these things matter for the scene: such entities can behave differently and carry different risks depending on the category they belong to. That is what is going on with labeling, and why I can envision these being very important decisions from an ethical and moral point of view.\nHere is a list of open issues I see. First, every knowledge representation — every choice we make in labeling — is a surrogate. We are limited by the container organizing the information: if we have a PNG file, we just have the three RGB matrices; if we want to tell the model more, we need to instill more human knowledge into it. No knowledge representation is a completely accurate representation of the object — we are always losing something. The second problem is in a sense trivial, something everybody knows: there are infinitely many edge cases in the open world, so it is very hard to put things into categories, into buckets. As you label tons and tons of data, you will find that even if you put a lot of effort into engineering the taxonomy, the ontology, the categories you use, you will always hit a weird edge case in your data that leaves you scratching your head: what do I call this thing? Is it this category, or is it doing this action or that other one? We need to find trade-offs to deal with this — we have some ideas in the standardization work that I can tell you about if you are curious — but the problem itself cannot be avoided. Then again: what makes a knowledge representation a good and fair one? That is the question I just mentioned — should it distinguish between children and pedestrians, or is that not something I should mandate?\nThe final problem I want to throw at you is that all these choices come with strong algorithmic and technical trade-offs and constraints. How the models work is very dependent on the representation. The neural network architecture of an object detector — a model trained and developed to draw bounding boxes around things — is very different from a segmentation model; they work in very different ways, tied to the representation they take as input. So if we change the input representation drastically, we have to spend a lot of effort finding new architectures, new models that can deal with it. That is why it is a hard problem: I see research papers that want to move towards a richer description of the scene, but they just find hacks to fit their representation to existing models instead of coming up with new models — because that is very hard, and it costs money. Coming up with new cutting-edge architectures that can deal with these representations is the hard thing to do, but I think it is what will open up a lot of breakthroughs in the near future.\nFinally, the ODD —\n[Host]: Before we move to the ODD, can I ask you a question about these points? It is more of a comment, probably. From a non-technical perspective, isn't one of the issues with this knowledge representation problem — also in the way we label data, as you were saying: is it a child, is it a person with crutches — that we don't have a diverse pool of people doing this kind of work? How do you see this in your experience, and in the kinds of assets you get to work with?\n[Nico]: That's a good comment, actually. Two things. First, different cultures are interested in different concepts: you can tell that their taxonomies, their specifications, are written from a specific point of view. With geographical location it is of course obvious:
if you get data from India, you will see a lot of categories of vehicles that you will never see in Europe — that is just an objective fact. But I do think the specifications are also shaped by cultural aspects. Personally, when I see taxonomies from OEMs in one specific European country, they tend to be very precise, with long lists of rich attributes — not always in the best possible organization, but really trying to provide a rich representation — while in other countries they are much more streamlined. That is partly a technical-cultural matter, but the biggest factor, I think, is the experience of the teams and how they go about building neural networks. At the end of the day, if you are very granular in what you want to represent, you need enough data for each concept to be able to infer it. So the usual trade-off is to strip the taxonomy down a little, so that you have enough instances of each thing you want the model to learn from your data. For example, if I want a category 'horse' instead of just 'animal', I need a lot of horses in my data to be able to recognize the next one. That is roughly the trade-off.
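One crude sketch of that trade-off — the class names, counts and the threshold are all invented — is to collapse any category with too few examples into its parent concept:

```python
from collections import Counter

def coarsen_rare_classes(labels, parent_of, min_count=1000):
    """Collapse categories with too few examples into their parent concept,
    so that everything the model must learn has enough data behind it."""
    counts = Counter(labels)
    return [parent_of.get(lbl, lbl) if counts[lbl] < min_count else lbl
            for lbl in labels]

parent_of = {"horse": "animal", "cow": "animal", "deer": "animal"}
labels = ["horse"] * 12 + ["cow"] * 2000 + ["deer"] * 3
print(set(coarsen_rare_classes(labels, parent_of)))  # {'animal', 'cow'}
```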
[Host]: We also have a question from Luciano.\n[Luciano]: Thanks — sort of a follow-up on what Luca asked. There is of course the problem that different people can label things differently, but the definitions can also change over time. As you said, there are infinitely many edge cases in the open world, and the label definitions drift: what we consider a car today is not what we considered a car twenty years ago. So I am wondering how you see this: you can get a knowledge representation that you consider good and fair, with enough data, but it might drift apart with time.\n[Nico]: That is a very big problem. It sometimes happens that customers label data in a specific way, with a specific taxonomy, and figure out a year or two later that they want to add something, or want things to be different for some reason — and they have to literally throw away all the work and re-label everything, which is a big cost. Data labeling is not cheap, because it involves a lot of human labor, and this is a problem we see across the industry. What we are trying to do to partially address it — the problem is not going away easily — is to also build a standard called OpenXOntology, which tries to be a very comprehensive and detailed ontology of the domain of road traffic. We are putting in definitions of what a car is, what it is built for, what its parts are — wheels, doors, whatever — where a truck ends and a car begins, and how all these things relate to each other: a real ontology, basically a knowledge graph. OpenLABEL, the labeling standard, will make use of that ontology to label concepts in such a way that, at least partially, old and new datasets can be mapped to each other thanks to the ontology. And if I am a manufacturer using one version, one subset of the ontology, and I want to extend or modify it, I can use this big structure to map my concepts around. Of course this is not a panacea — it does not solve all the issues — but it is at least a first step towards tackling them.\n[Host]: OK, thanks. Shall we move forward?\n[Nico]: Yes — we are actually very close to the end. For the ODD, what I wanted to say is basically this: the concept of operational design domain is nothing new. It states the conditions under which my system is designed to function properly. These conditions are general things like environmental conditions, the type of road on which my system is allowed — designed — to function properly, weather conditions, et cetera, plus modes of operation such as a speed limit, or the types of traffic agents my system can deal with. If you think about it, this is nothing new: my microwave has an ODD, and so does my iPhone — don't use it under water, or below three meters of water, or under a sun so strong it burns its circuits. That is the ODD; it is exactly the same thing. The problem is that here we have to deal with ODDs designed around things that behave autonomously and have to cope with an open-world context, and that makes the problem pretty hard to treat. And it collapses to the same problem of representing knowledge: we need to come up with a proper ODD taxonomy — basically a list of how we enumerate the things that need to go into the ODD, so that my system is allowed to function when rain is below a certain threshold, or when fog is below a certain threshold. It is really a big problem, because you need to categorize everything and figure out what the right categories are.
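A minimal sketch of an ODD as an explicit predicate over enumerated conditions; the categories and thresholds here are invented, not taken from OpenODD or any real specification:

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    rain_mm_per_h: float
    fog_visibility_m: float
    road_type: str

def within_odd(c: Conditions) -> bool:
    """Hypothetical ODD check: every clause is one enumerated condition."""
    return (c.rain_mm_per_h <= 4.0            # rain below a threshold
            and c.fog_visibility_m >= 150.0   # fog below a threshold
            and c.road_type in {"highway", "urban_road"})

print(within_odd(Conditions(1.0, 800.0, "highway")))  # True
print(within_odd(Conditions(9.0, 800.0, "highway")))  # False: too much rain
```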
There are two main approaches to defining an ODD: top-down and bottom-up. The top-down one is problematic precisely because you have to enumerate everything. My ODD is made of scenery elements — roads, junctions, roundabouts — so you have to enumerate those; it is based on weather conditions, so you have to enumerate those and be able to measure them; et cetera. And it gets very hard: take a map of the world and try to decide what counts as a highway here, when something is a country road, when it starts to be — I don't remember the exact names, but various standards have defined specific categories of road — and you will never be one hundred percent sure that your street belongs to that category. So the trade-off is to balance the top-down definition with the bottom-up one. Bottom-up says: the ODD has to deal with the world somehow, with a geofenced area, so I design my system for, say, the San Francisco area. I map San Francisco very completely — this is the concreteness — and test my system across the whole San Francisco area. Then I try to abstract away some of the categories that make up the San Francisco ODD, such as highways, on-ramps, off-ramps and those kinds of things. But you still need to find the trade-off, and there is an open debate about where to put the boundary: how to build a decent ODD taxonomy for generalizing ODDs and taking them everywhere around the world, and then how to properly instantiate the ODD in a specific area and get very concrete about what it is. Because what you really end up doing is testing and designing a system for one specific geolocation; then you abstract away, go to another location, instantiate the ODD there, and partially re-test your system to account for the things you did not manage to abstract — the things that are very typical of that specific region and very concrete. So there is this concreteness–abstractness–concreteness process you have to go through whenever you expand or scale your automation capabilities to a new location. That is probably what is going to happen: in the standardization world, where we are also trying to standardize OpenODD, a format that describes the ODD, we are really banging our heads against the wall to figure out a trade-off for this problem, and the direction will probably be to find a good balance between very concrete bottom-up design and very abstract top-down design.\nThere is a lot more that could be said about ODDs — what is inside them and what is outside — but let me open the floor if you have questions or want to talk about specific aspects.\n[Host]: Is there any question from the audience? Otherwise I have a question related to this — oh yes, there is a question, please.\n[Guest]: I am just going to jump in quickly because I have to go to another meeting. I wanted to thank you, Nico, for sharing your thoughts. I think you are pointing to a lot of really important challenges in addressing the normative dimensions of what it means to build a proper knowledge graph, especially when we think about safety — that is my main concern. I just wanted you to know that I will reach out to you, because I think there are interesting connections with some of the work we are doing, titled Hard Choices, where we are thinking about the trade-offs that get built in, how you combine approaches that are more semantically rich, like yours, with more data-driven ones, and how you bring in different value perspectives — combining philosophical input with the more technical development practice. I have to run, but thanks for sharing this; there is a lot to follow up on.\n[Nico]: Thanks, looking forward to it.\n[Host]: Are there other questions? Otherwise I had a curiosity — oh, you are joining from the beach?\n[Participant]: Yeah — Santa Monica.\n[Host]: Nice.\n[Participant]: Hi Nico, thanks very much for the talk. This relates more to the
beginning, where you talked about labeling. I was curious about your thoughts on this. I think many of the issues you mentioned — deep learning models not being capable of demonstrating the kind of understanding we humans have — come from the fundamentally different way these models operate: at the end of the day, they make classifications and predictions based on statistical correlations of input to output. We humans operate in a much more sophisticated way, more like causal reasoning, which implies additional levels of sophistication — cause and effect, and counterfactual thinking: what if I had done this differently, what would the outcome have been? These models are simply not capable of that kind of thing. So, as you said, if they have only ever seen examples of cows on green grass, they might learn that a cow is green — the green pixels — rather than what the animal actually is. What are your thoughts about combining these perspectives? Adopting more of the causal modelling way of thinking also has an important role in making these models more explainable: it opens them up for scrutiny and invites people to make their assumptions explicit about what we put into our models.
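(A toy illustration of the questioner's point, with invented numbers: a "classifier" that has latched onto the green background is perfect on its training distribution and still misses a cow on a beach.)

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: every training "cow" photo is mostly green grass (label 1),
# every "not cow" photo is not; the only feature is the green fraction.
green = np.concatenate([rng.uniform(0.6, 0.9, 100), rng.uniform(0.0, 0.3, 100)])
is_cow = np.concatenate([np.ones(100), np.zeros(100)])

predict = lambda g: (g > 0.45).astype(int)   # a rule that latched onto green
print((predict(green) == is_cow).mean())     # 1.0 -- perfect on this data
print(predict(np.array([0.05])))             # [0]: a cow on a beach is missed
```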
[Nico]: That is a great remark. The whole field of causal learning — Bayesian networks and everything around them — is progressing; we are seeing a lot of breakthroughs there. But personally I think we really need to converge towards these neurosymbolic approaches, where we combine the perceptual capabilities of statistical models, as you mentioned, with richer knowledge representation in a cohesive way. And I say this because even when we talk about purely statistical models, the moment we label something we are representing human knowledge: we are trying to get the model to think like us, even though by its nature it is designed to just find statistical correlations, as you mentioned. If I label a car, I am biasing the model; I am saying: you need to learn this human concept, somehow. That is also why, when we hear the deep learning gurus — the founding fathers of the field, Bengio and all the people who have achieved great things in deep learning — I sometimes feel there is not enough acknowledgement of the fact that everything supervised carries at least a little human knowledge. We cannot avoid that piece of getting things to learn our concepts in order to work with them. And if that is the case, we might as well move towards a better way of doing it: adding human knowledge in a richer, more complete way, and figuring out how to plug it into these perceptual, statistical components. At least that is my personal opinion — and I am of course ignorant here, I am no Yoshua Bengio — but this is how I see things when I sit back and look at what we are doing. I still think it is really hard to strip away human knowledge from whatever we do, because we are wired to think in a certain way, and we want machines to think in the same way, at least partially.\n[Participant]: Speaking for myself, from my own experience: I think we often attribute qualities to the technology that it does not have — for example, we expect it to have the capability to represent things in our human ways, and it does not. As somebody who comes originally from the technical field, looking back, for years I would think: no, no, the neural network will magically learn it. Only in recent years did I really start to understand these limitations: machine learning, at the end of the day, is correlation, and correlation is not causation. I think this could be clarified much more in the education of engineers, and it would help us think more creatively about the way forward.\n[Nico]: That is a great point — it should be clarified, and the problem is also marketing, right? When we read headlines like 'GPT-3 has solved human intelligence' just because it is a very, very good language model — it does not have a clue what it is doing; it is just predicting the next vector of values. There have been some debunking papers about it. It is very good, very useful, super powerful — also for bad purposes, possibly — but it is not intelligence. I don't know what intelligence is, but it is not that. It is just another model that, with a ton of data and 175 billion parameters or so, is extremely good at predicting the next string of words given what it has seen. And it has seen a lot of stuff, so it will probably do well — but then you type in some addition, some arithmetic operation, you go and check what it was trained on, and you find blog posts where people were doing exactly that operation. It learned to copy that; it did not learn to reason about things. So we also need to be very clear in communicating what these systems are really doing and what they are not doing. When you see some startup pitches to investors in AI, sometimes you really want to say: don't claim that, because it is not what you are doing — you have a good model for statistically correlating this thing with that thing; you are not solving the intelligence problem, and that is not how the thing should be sold. If you are interested, have a look at the AI debate from back in November or December — with Bengio, Gary Marcus and the others. It seems that a lot of people who traditionally were the super-fans of deep learning have started to realize that we probably need something more: that we need to go back to more symbolic approaches and figure out how to merge the two worlds, and those kinds of
things, because there are some very strong limitations that we still cannot tackle. I am very lucky to be working in the autonomous vehicles domain, because I think this is the field where you really bump into these problems — and hopefully someone will solve them somehow. This is the test bed for these questions, because you need to be very, very smart. A lot of companies that build just ADAS — driver assistance systems, comma.ai for example — their line is: everyone says driving is hard, but it is not that hard; we don't need spinning lidars on the roof, you can solve it with a camera and you're done. Yes — because you are building a Level 2 system, so you always fall back to the human at certain moments. If you are talking about full autonomy, whatever that means, driving is a very, very, very hard task to solve.\n[Host]: We have another question from Luciano, before we run out of time.\n[Luciano]: OK, second question — but first let me say that the discussion you two just had was really interesting. It is really important for us to understand, if we want to delegate all these tasks and all this responsibility to machines, where we as humans remain: where we can still be responsible for any of the behavior if we cannot really understand it. My question goes in a different direction, though. When you were talking about the ODD, the examples you gave all look outward: the roads, whether it is a highway or not, the pedestrians out there. Are you also looking inside the car — who is driving, the emotional state of whoever is driving? Should this kind of looking inside be part of the ODD? How do you see this?\n[Nico]: Thanks for this comment — that is my point of view too, and I am trying to influence the community I work with in the standards as much as I can, because traditionally the majority of the industry leans towards the ODD being everything outside and nothing inside. That seems wrong to me, for two reasons. First, internal states are very important. Today you have warning lights — hey, you need to check your lights. Tomorrow you won't have that; the system will have to figure it out itself. So there are internal failure states that could be part of an ODD. Now, the usual objection is: those are just failure states — call them failure modes and strip them out of the ODD. The problem is that they affect how you behave, and they can put conditions on the ODD — there is this notion of conditionality. If the battery is, say, over 50 degrees because of overheating, you probably need to restrict your ODD and restrain your velocity; and if you have a sensor that is partly failing, you might have to say: my ODD no longer includes rain — if it starts raining, given this problem, I need to stop.
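A minimal sketch of that notion of conditionality — the state fields and thresholds are invented, echoing the battery and sensor examples above:

```python
def restrict_odd(odd: dict, state: dict) -> dict:
    """Tighten the ODD from internal states; names/thresholds are invented."""
    odd = dict(odd)  # don't mutate the caller's nominal ODD
    if state["battery_temp_c"] > 50:      # overheating battery:
        odd["max_speed_kph"] = min(odd["max_speed_kph"], 60)  # restrain velocity
    if state["sensor_degraded"]:          # partly failing sensor:
        odd["rain_allowed"] = False       # if it starts raining, stop
    return odd

nominal = {"max_speed_kph": 120, "rain_allowed": True}
print(restrict_odd(nominal, {"battery_temp_c": 55, "sensor_degraded": True}))
# {'max_speed_kph': 60, 'rain_allowed': False}
```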
These are all relevant things we need to touch on, and it is a great comment, because I really think internal states need to be part of the ODD — the industry, the community, is still fragmented on this. The last point I want to mention, another recurrent topic in my standards work, is behaviors: are behaviors part of the ODD? By behaviors I mean the behaviors of other agents. I think so, because the data collected by autonomous vehicles includes behaviors. I can go around San Francisco and build, let's say, a behavior map with all the trajectories that dynamic agents perform over time across all my runs: I pass through this street in San Francisco a thousand times, I collect the trajectories, and I can build a statistical model that says that in this piece of the world these behaviors are far more common than in others. Should that be part of my ODD? Probably — because I need to figure out whether my system is trained well enough to deal with jaywalking people in downtown San Francisco or not. That is ODD, right? A statistical ODD, a kind of probabilistic ODD — which is another topic we are discussing, but it is part of it. You will probably find very few jaywalking pedestrians in a small town in northern Italy; in San Francisco, a lot, I can guarantee you. So you need to be able to specify, as part of the system's specification, that it is able to deal with that.\n[Host]: Thank you very much. I can speak for myself and the others: it was really interesting, Nico, and we will follow up with another chat. I know you are busy, and we also have another meeting, so I want to thank everybody who joined, and you especially. Thanks again — and enjoy the sun, also for us.\n[Nico]: Thank you, I will. I hope to talk to you soon. Thanks everyone — and feel free to reach out by email, nico at deepen.ai, or on LinkedIn. Thank you, bye!", "date_published": "2021-03-17T14:01:24Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "f20f31e68058ef8d3a7aa2a41ef9aa6c", "title": "Responsibility in the age of AI (Jeroen van den Hoven) - 1st AiTech Symposium", "url": "https://www.youtube.com/watch?v=_HzOs8V9AEs", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "I'm very much looking forward to that. Together with Filippo I will set the scene a little — the bigger picture, if you want, of how we arrived at this topic and this approach. I would need at least fifty minutes to mention everything that is relevant, which I cannot do, so we have split the presentation in two parts: Filippo will go into meaningful human control and talk about how it is applied in a Dutch Research Council project on meaningful human control over assisted driving and autonomous vehicles — so you can see the concept in action, if you want. And we feel a little bit responsible for it, because we took this notion of meaningful human control out of discussions that started three years ago.
Filippo and I participated in those discussions in Geneva, when it was about meaningful human control over lethal autonomous weapon systems, and we noticed that the diplomats and the international lawyers were all talking about meaningful human control — they were very happy about this term they had come up with. But coming from a technical university, we thought: we have to explain this to our engineering colleagues in Delft. How are we going to do that? That prompted us to write a paper providing a conceptual analysis of the term, and Filippo will give you the gist of it. What I will do is present the bigger picture, in three steps, in under twenty minutes: the ethical problems and how I lump — cluster — them together, and then a little about how to do ethics, although Evgeni will also go into more detail later on.\nSo, first the bigger picture — but first I need a glass of water, because I am suffering from a cold. Thank you so much. The bigger picture, really quickly. I am also advising the European Commission — it has done something strange there, but anyway — and everyone thinks AI is important. You can find many, many quotes: it is going to change our lives; this man thinks that whoever gains dominance in AI will rule the world; in China they think so too, and they are using it; Silicon Valley is using it in a completely different way. In short: there is a battle for digital supremacy going on, and in Brussels the question is: where is Europe going, and where does it leave us? Is Europe going to be the museum of the world — nice to visit and take pictures of — or the cradle of a new enlightenment? It could well be. The Commission has said: never mind China and the US relaxing their laws on the use of data and privacy and training up their deep learning algorithms; let us stick with our core values and focus on those. And we know exactly what these are: the Charter of Fundamental Rights and the Convention on Human Rights — Evgeni will go into detail on how we can design for them. Macron actually made an interesting suggestion three years ago: there is a third way, a European approach that is true to these constitutive, legally binding treaties that put human rights and ethical principles at the core — AI for humanity.\nBrussels talks about systemic rivals. These are no longer ideologies; they are coherent clusters of policies, infrastructure, monetary policy, military strategy, all powered by AI and big data. You have the US Silicon Valley model: innovate fast in the grey zone, break things first and apologize later, minimal regulation, Chicago-school economics, libertarian values, hard power, and a homo economicus — strongly individualist — conception of man. You have China: socialism with Chinese characteristics, social harmony, party rule, sharp power, and a behaviourist, utilitarian, collectivist view of man. And Europe, as I just explained: a Kantian conception of the person as an autonomous, free and responsible agent. In Europe — in Brussels — we want to hang on to that conception. So the discussion about a good digital society is about exactly this: a model of man and a model of society. That is what is
at stake. If we are going to discuss meaningful human control, to a large extent we are trying to salvage this core idea.\nSo what are these ethical problems? It goes without saying that safety, loss of lives, health and well-being are important — it is a no-brainer that the robots coming out of our labs must not kill or harm people. But I want to divide the ethical problems up differently, in a way that gives them a certain coherence and relates them to the core problem I just sketched: the conception of a responsible, free agent who can be held responsible, can feel responsible, and whom it is fair to hold responsible. There are necessary conditions that must be fulfilled for someone to be responsible, and you can find them by looking at excuses. Accuse someone of being responsible and they might say: sorry, I didn't know; or: sorry, I wasn't in control; or: sorry, I had no choice; or: sorry, I wasn't my good self — I did it because I thought other people expected it of me, or because I wanted to prove people wrong; I acted not of my own accord but took my cues from the social environment, because people were looking over my shoulder. Each of these undermines a condition for moral responsibility, and my suggestion is that AI could undermine all of them. It is a big threat to a core idea of Western societies. Let me briefly go through them.\nKnowledge. We have made things extremely complex: with big data and machine learning it has become a bit of a black-box society, and we use these systems everywhere, affecting people in crucial ways — possibly introducing machine bias, in good or bad versions; we don't know, and as long as we don't know, we cannot justify what we have done, because we don't know exactly what we are doing. There is a nice paper asking whether machine learning has become alchemy — and in a certain way it has. The first AI legislation is already in place and addresses exactly this problem: Article 22 of the GDPR. If you apply algorithms or machine learning to people in ways that significantly affect them, you have to be able to explain what you did — plus some additional requirements. Some people think AI itself could provide the solutions to this problem of transparency and the lack of knowledge, which chips away at one of the fundamental conditions for responsibility. So we have to design for knowledge, in such a way that we restore individuals — both designers and users — to positions of responsibility. All kinds of attempts are being made. You can try to torture the system so that it yields the information you need to provide an account or a justification — in identifying a zebra, say, showing where the system has been looking, so that you can account for it. There is the very interesting work of Judea Pearl, likewise trying to bludgeon these machine learning systems until they give up their secrets and allow you to say something about the underlying causal mechanisms. And there is the work
on algorithmic recourse: you explain the algorithm to the people affected by it in a way that gives them opportunities to act on it. If I did this, then I would be selected for that procedure, or I would be admitted to Delft University — so perhaps I should spend two more weeks scoring better on my TOEFL test, or something like that.
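For a linear stand-in model, recourse even has a closed form; everything here — the weights, the features, the numbers — is invented for illustration, not a real admission system:

```python
import numpy as np

# Admission rule: accept if w.x + b >= 0 (a linear stand-in model).
w = np.array([0.8, 0.5])   # weights for (TOEFL score, grade average), rescaled
b = -1.2

def minimal_recourse(x):
    """Smallest (L2) feature change that flips a linear 'reject' to 'accept'."""
    margin = w @ x + b
    if margin >= 0:
        return np.zeros_like(x)       # already accepted, nothing to change
    return -margin * w / (w @ w)      # step straight to the decision boundary

x = np.array([0.6, 0.9])                  # a rejected applicant
print(w @ x + b)                          # -0.27: rejected
print(w @ (x + minimal_recourse(x)) + b)  # 0.0: now on the boundary
```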
All of these engineering attempts are made so that we can say: yes, we had adequate knowledge of what we were doing — to restore the knowledge condition.\nThen the condition of control — 'sorry, I wasn't in control' — and the quest for meaningful human control. Filippo will talk about that, so I will skip over it, but we have seen examples of how it can really become a problem, especially in lethal autonomous weapon systems: 'you like this target, you may also like that target' — the recommender system, 'especially recommended for you'. We don't want people to throw up their hands and escape accountability because they were not in control or did not know what was happening — plausible deniability, with AI helping them deny that they are responsible. This is what Filippo will talk about: we need to restore ourselves to the position where we can say, yes, we had the kind of control that is a necessary condition for taking responsibility.\nThen freedom and choice — I see a colleague, Caspar, and this is very much related to your work. AI could undermine this condition for responsibility too: 'sorry, there was nothing to choose', or 'I wasn't aware that I could choose'. People are locked up in filter bubbles; that is sufficiently well known by now. But this becomes a big threat once you combine big data, machine learning and advanced behavioural science: freedom to choose becomes a problem. Here is an example that drives the message home very clearly. You can subscribe to The Economist online for $59, to the print edition for $125, or — special offer — to both print and web for $125. Everyone says: well, the third option, let's do both. Now here is an alternative choice architecture, and it is no accident: The Economist online for $59, or print and web for $125. Suddenly everyone says: forget the paper, let's just take the online subscription. The way you line up the alternatives — the way you engineer the choice set — prompts people to pick a particular one. Why? This is the outcome of advanced behavioural research: a lot of big data on how people choose, and a lot of pattern discovery in what motivates people, what makes them tick. If you have that, people are sitting ducks — as Caspar knows very well. And this is going to be a problem for that third condition of responsibility: freedom of choice. So: knowledge, control, freedom of choice — AI is possibly chipping away at all of those conditions. And look at what Sunstein, the nudging pope, says himself: it is possible that companies that provide clear, simple products would do poorly in the marketplace, because they are not taking advantage of people's propensity to blunder. That is what is at stake, and they are greatly helped by big data: machine learning and AI are their game, in a way we don't have access to. Of course it has also been used in the political realm, as has become clear — I couldn't find a nicer picture of this guy, but this is what has been at stake: microtargeting and the manipulation of choice sets. We wrote the paper 'Will democracy survive big data and artificial intelligence?' in Scientific American. We have to be able to say, truly: yes, we made our choices freely, and we stand by them.\nPrivacy. It is a slightly atypical way of involving privacy, but I think it works. Same thing here: the combination of big data and machine learning is very powerful — yes, we know who you are, where you have been, and more or less what you think — and we need to take this very seriously. We have machine learning reconstructing what people have been looking at, without knowing what they looked at, purely by analyzing their brainwaves: this is what people were looking at, and this is what the system produces from the analysis of their neural activity. That is already quite interesting. And we know the studies: from 300 likes, the system can predict what you will do on a psychological test better than your partner can. So doing what others expect us to do, or to be, becomes a threat to our freedom — the freedom we require to say: yes, I own this decision; I made it of my own accord, not because people were looking over my shoulder and expected me to be this or do that. That is one of the important ingredients of privacy: being able to say, yes, my choices were mine; I made them freely; they were under my control; I knew exactly what I was doing — and therefore I am responsible, I feel responsible, I can take responsibility, you can hold me responsible. That is what is at stake: designing a society, and all its applications, that optimizes these moral conditions, so that we salvage the idea of moral responsibility on which our whole society is built — our legal institutions, our social institutions, all built on this very core idea. Not necessarily in China, not necessarily in Russia — they may be willing to abandon those ideas. For the moment I have the impression that we are not. So let's try to tick those boxes in our designs.\nNow, an interesting thing. You know the trolley problem, of course — we are sick and tired of it, we have seen so many of these examples. But the interesting thing is: if you teach and discuss it with students in Delft, some say 'pull the lever, do this or that', and then you bring your utilitarian theory to it and go into an analysis — but the engineers say: that design! Because the problem this poor guy has is a function of the design: he can only do A or B, pull the lever or not. He may have wanted to do something else, but it was not designed in. You will never find that in a philosophical analysis — in a philosophy seminar room it is a stupid question, and you are asked to leave immediately, because it is a thought
thought\nexperiment right you're not supposed to\ntemper with the thought experiment\nbecause it's there to make give you a\nhard time but if you were thinking about\nthe real world then the engineering\nremark is a very good one because we\nwant the world to be safer we want the\nworld to not if possible have dilemmas\nand create moral problems for operators\nwe want to prevent that so we'll have to\nlook at the design history\nof how did who designed which idiot\ndesigned it in this way right so this is\nthe question we have a responsibility\nfor the responsibility of others this is\na second-order responsibility we have\nresponsibility for the responsibility of\nothers now how do we do ethics in the\nage of AI probably running of time and\nAfghani will go into more details so I'm\nvery comfortable doing this very fast it\nis by design and I think all the\nexamples have shown clearly that is you\nknow you can you can design a bench in\nthis way you don't have to chase away\nhire 20 people to chase away people if\nyou don't want to sleep you know them to\nsleep on it\nso you can design these things in it's a\ncontroversial example you need to have a\ndebate whether that is solving our\nproblem we perhaps we need to solve the\nproblem in a different way than kind of\nbuilding these kind of benches but this\nis the key problem of value sensitive\ndesign this is a bunch of value societal\nmoral requirements there are\nnon-functional requirements and they\nhave to be somehow not like the iron\nrods in the bank but they need to be\nbuilt into over everything we do in the\nalgorithms in our our coding every every\ndetail of a system and we have to be\nable to explain that we did a good job\nso that's the that's the idea we do\ndesign for X designing for all of these\nvalues by breaking them down from a very\nabstract level and until they hit design\nrequirements that our colleague said the\ntechnical units can can work with so\nthese kind of things privacy by design\nis a good example if you want to count\nthe number of people in this in this in\nthis audience but you don't want to give\naway their identities you can do this\nit's a very simple example of how you\ncan design in privacy and at the same\ntime have the functionality that you\nwant out of the system\nit's a simple thing and of course it has\ngiven rise to a whole lot of lot of\ntechnical solutions coarse graining the\ndata Kay anonymity differential privacy\nhomomorphic encryption Enoch knows about\nthis privacy preserving machine learning\netc so these are attempts\nto hold on to these ethical values\ndesign the mean in such a way that we\nmake good use of all the AI without\ngiving up on all the and the same needs\nto be done for the conditions that I've\njust lined up and discussed on\nresponsibilities and I think it can be\ndone the fascinating thing is that we\nhave a wonderful team and we have you\nhere to discuss this with and very much\nlooking forward to that without further\nado Filippo who will tell you more about\nmeaningful human control thank you\n[Applause]", "date_published": "2019-10-30T09:49:14Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "6767e7177bddefc74effef1f4d1051dd", "title": "AiTech Agora: Karim Jebari - Artificial intelligence and democratic legitimacy", "url": "https://www.youtube.com/watch?v=mEzpWaHzKrU", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "especially when you have a cooperating\nwith with uh colleagues and things like\nthis makes you\nincreases the 
These are all attempts to hold on to ethical values: to design things in such a way that we make good use of AI without giving those values up. The same needs to be done for the conditions of responsibility I have just lined up and discussed, and I think it can be done. The fascinating thing is that we have a wonderful team — and we have you here to discuss this with, which I am very much looking forward to. Without further ado: Filippo, who will tell you more about meaningful human control. Thank you.\n[Applause]", "date_published": "2019-10-30T09:49:14Z", "authors": ["AiTech - TU Delft"], "summaries": []}
+{"id": "6767e7177bddefc74effef1f4d1051dd", "title": "AiTech Agora: Karim Jebari - Artificial intelligence and democratic legitimacy", "url": "https://www.youtube.com/watch?v=mEzpWaHzKrU", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "Especially when you are cooperating with colleagues, things like this increase the motivation level — so it is great to be here.\nLet me say a bit about the background. We have a research team here at the Institute for Futures Studies doing work related to AI and democratic legitimacy, as well as other topics in moral philosophy and empirical research. If you are interested in presenting something at the seminar series we have just started, that would be really great for us — just send me an email.\nThis article, as you may have seen from the abstract, is about the idea of democratic legitimacy and how we can think about the impact of AI on decision making in a democracy. Here is the outline. I will talk about democratic legitimacy and the principle of publicity, relying on the work of Thomas Christiano; then about how we extend the principle — or at least apply the idea — to a wider set of domains; then about how machine learning is being used in public decision making and the challenges it poses to the principle of publicity; and finally I will give a few considerations on when machine learning can be legitimate from this point of view. One note at the outset: a machine learning system can be legitimate and still bad in other ways — inaccurate, say, or biased. That a decision-making process is legitimate is a necessary condition for it to be good, not a sufficient one.\nLet's start. Democratic legitimacy is basically a property of a public institution that exerts authority over citizens — child protection services, courts, employment agencies, and so on. Democratic legitimacy permits such an institution to wield power, sometimes even coercive power, over citizens, and it implies a moral requirement that subjects comply with that authority. If an authority is not legitimate, it is wrong for it to wield coercive power over citizens, and citizens have no moral responsibility to obey it — there may be other reasons to comply, but legitimacy is an important one, and it is discussed a lot in political science. One of the central ideas that make a polity or a public institution legitimate is public equality, an idea defended by several philosophers but most prominently by Thomas Christiano.\nThe principle of publicity has two main components. The first is reason-giving: the decision maker — the authority — needs to provide reasons for a decision, and those reasons need to be specific to the case. For example, suppose I was caught drunk driving. The police made a test, so they could verify that I was drunk, and the court or the police can then say: it is illegal to drive drunk; we have evidence that you were drunk; therefore, these are the consequences. The important thing is that the reasons are specific to me, to this particular case. It would not be enough for the court to say that at this hour people are often drunk at this intersection, or that people who drive cars like mine are often drunk. It is not acceptable to make the decision on the basis of a statistical regularity: the reasons must be particular to me.
therefore\nthese are the consequences so the\nimportant thing is that the reasons\nare specific to me to to\nto this particular case so would not be\nenough for\nthe court to say that you know on this\nparticular hour\nthen people are often drunk\nat this\nat this intersection or people that\ndrive cars like yours are often drunk so\nit would not be acceptable to make a\ndecision on the basis of kind of some\nstatistical regularity the the the\nreasons need to be\nparticular to to me\nthe second\naspect is that\nthose reasons need to be\npublic for the relevant stakeholders\nfor example lawyers myself of course and\nperhaps journalists and so on and and i\nthink there's been a lot of talk in the\nrealm of ai ethics and so on about\ntransparency but we want to emphasize\nthat publicity is not the same thing as\ntransparency als although they are in\nsome respects similar\nso publicity just\nas it means that a\na verdict or the the process by which\nthe verdict was reached\nis made public to the relevant people\nand and it might not always be\na good idea to make them completely\npublic for example in the case with\nminors being defendants in criminal\ncourt cases uh in sweden at least we\nhave a rule that\nthe the identity of the child is\nprotected\nand this is uh makes the process less\ntransparent in one sense but it's not\nsomething that violates the principle of\npublicity\nor at least it's nothing that\nnecessarily violates the principle of\npublicity\nso according to thomas cristiano the\nthis idea of the\npublic equality and the\nprinciple of uh publicity is mainly\nimportant for the democratic assembly\nand the\nexecutive power so\nthe the democratic assembly needs to be\npublic with its uh reasoning and with\nits decision making\num but we argue that public equality is\nalso important for administrative\nauthority uh and judicial authority and\nwe're not sure that uh\nthomas christiano would agree disagree\nwith us i think that he would agree with\nus but what we don't know um but we have\ntwo main reasons for why this uh this\nidea should include administrative and\njudicial decisions uh one is because\num\nthe it's it's required to uh determine\nwhether\nadministrative decision makers\nuh\nserve to a uh serve the aim of the\ndemocratic assembly so\nif if we don't know for example\nuh the the reasoning behind uh\nuh a court when they sentence someone\nthen we don't know if the court takes uh\nthe laws uh and the procedures are\nstipulated by the democratic assembly\nseriously so that's one kind of\ninstrumental reasons for why\nthis public equality is is important but\nwe also and this is perhaps a more\nsubstantive claim that uh\nthat um\nthere is no strict separability between\nthe legislative domain and the judicial\nand administrative domain of public\nauthority\ni mean in one sense there is\nwell these are different branches of\ngovernment of course but if we're\ninterested in the legislative process in\nin how\nuh democratic societies craft rules and\nlaws\nthen we cannot\ndisregard the fact that those laws are\nnot fully formed when the ink dries so\nto speak on on\non the legislative assemblies uh\nuh decisions uh the the laws that's just\nthe first step in the legislative\nprocess in the the de facto legislative\nprocess and the most obvious example of\nthis is of course\nit's um\nthe president's as established by courts\nso the courts interpret the law and\napply it and those precedents become\nwell have the the same force as law\nbut\nprecedents are also important in 
Another problem — one you are of course very familiar with — is that machine learning systems are opaque. This opacity, we argue, can be broken down into several kinds, a taxonomy partly inspired by Julia Dressel's work.\nFirst there is observational opacity, so to speak: it is very difficult to inspect the code. There is also, to a considerable extent, theoretical opacity: the field of machine learning is still uncertain in its theoretical understanding of why certain network architectures work and others don't, how many layers there should be, how much training you need, et cetera. The jury is still out — a few years ago at NIPS the state of the field was described as alchemy: we can produce cool and effective tools, but we don't have much understanding of how we are doing it. I don't remember the author's name, but you can google 'NIPS' and 'alchemy'.\nThen we have judicial opacity. This is not always the case, but in many of the cases we have looked at, the algorithms are owned and run not by the public administration but by a third-party company, which protects its algorithms — and sometimes how it collects data is also secret. So there is a whole system of capitalist, or proprietary, opacity around the use of these algorithms.\nOn top of that there is sociological opacity — something Dressel names too. Hopefully this will not be a big problem in the future, but at least at the moment our societies have no clue how AI works: most people have a weird, sci-fi-inspired picture of it, as if it were some kind of magic, or as if it could think. This creates a lot of problems in terms of opacity, because AI systems are often misapplied, and it is difficult for society — legal scholars, journalists and so on — to understand what is going on. That adds another layer of opacity.\nFinally, there is a sense in which machine learning systems are psychologically opaque. By psychological opacity we mean that they behave neither the way people do nor the way machines often do. Let me give you an example. If you have a car that is a bit rusty and has some problems, its capacity degrades gracefully: it gets worse at accelerating, maybe it starts to shake, and so on — so you can somehow tell that you probably need to take it to the workshop fairly soon. AI systems are not unique in this, but one feature they have, which cars and humans don't, is the absence of this graceful failure mode: a minor deviation can cause a catastrophic failure. There have been multiple examples, and you are probably familiar with all of them — add a couple of pixels to an image of a stop sign and the car can no longer see the stop sign, or interprets it as something else. It is obviously a problem when systems don't do what they are supposed to do; but the problem from the perspective of opacity is that it is difficult for humans to understand, and therefore to predict, the failures of these kinds of machines. And that adds yet another layer.
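A toy contrast between the two failure profiles — both functions are invented caricatures:

```python
def rusty_car_top_speed(wear: float) -> float:
    """Graceful degradation: performance declines smoothly with wear."""
    return 120.0 * (1.0 - 0.5 * wear)

def toy_sign_classifier(score: float) -> str:
    """Brittle failure: a hard boundary, so a tiny nudge flips the output."""
    return "stop sign" if score >= 0.5 else "speed limit"

print(rusty_car_top_speed(0.2), rusty_car_top_speed(0.4))      # 108.0 96.0
print(toy_sign_classifier(0.501), toy_sign_classifier(0.499))  # output flips
```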
Another problem, which you are of course very familiar with, is that machine learning systems are opaque. This opacity, we argue, can be broken down into a number of different kinds, partly inspired by Julia Dressel's work. First there is observational opacity: it is very difficult to inspect the code. There is also, to a considerable extent, theoretical opacity: the field of machine learning is still uncertain in its theoretical understanding of why certain network architectures work and others don't, how many layers there should be, how much training you need, et cetera. The jury is still out; a talk at NeurIPS a few years ago described the state of the field as alchemy — we can produce cool and effective tools, but we don't have much understanding of how we are doing it. I don't remember the name of the author, but you can Google "NeurIPS" and "alchemy".

Then we have judicial opacity. This is not always the case, but in many of the cases we have looked at, the algorithms are owned not by the public administrations but by a third-party company, which protects its algorithms; sometimes it is also secret how they collect data. So there is a whole system of capitalist or proprietary opacity around the use of these algorithms.

On top of that we have sociological opacity — something Dressel names too — and hopefully this will not be a big problem in the future, but at least at the moment our societies have no clue how AI works. Most people have a weird, sci-fi-inspired idea of it, as if it were some kind of magic, or as if it could think. This creates a lot of problems in terms of opacity, because AI systems are often misapplied, and it is difficult for society — for example legal scholars and journalists — to understand what is going on. That adds another layer of opacity.

Finally, there is a certain sense in which machine learning systems are psychologically opaque. By psychological opacity we mean that machine learning systems don't behave the way people do, nor the way machines often do. Let me give you an example. If you have a car that is a bit rusty and has some problems, the capacity of the car will degrade gracefully: it will be worse at accelerating, maybe it will start to shake, and so you can somehow know that you probably need to take it to the workshop fairly soon. One feature that AI systems have — they are not unique in this — and that cars and humans don't, is the lack of this graceful failure: often a minor deviation can cause a catastrophic failure. There have been multiple examples of this, and you are probably familiar with them: when you add a couple of pixels to an image of a stop sign, the car can't see the stop sign anymore, or interprets it as something else (a minimal sketch of this kind of brittleness follows below). It is an obvious problem when systems don't do what they are supposed to do, but the problem here, from the perspective of opacity, is that it is difficult for humans to understand — and therefore to predict — the failure of these kinds of machines. That adds another layer.

This connects to an objection that we discuss, and that is often voiced in these circumstances: that humans are also opaque. That is of course true — you can't see my brain, and so on. But AI systems have these multiple layers of opacity, and these layers, we argue, reinforce each other, so the outcome is a situation that is much worse than with most human decision makers. We can also add — we don't discuss this in the paper, but perhaps we should — the fact that humans can be personally accountable, whereas AI systems cannot. Of course you can hold the company accountable: if a company produces an AI that is, for example, biased against Black people, you can say it needs to recall the product or pay fines. That is perfectly possible, but the problem is that our institutional systems don't at the moment have the capacity for extracting accountability from third-party providers like this. That will perhaps come in the future, but as long as we lack those mechanisms, there is a difference in how easy it is to extract accountability from an AI system versus a human decision maker.
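As promised above, here is a toy illustration of the brittleness point, under stated assumptions: a random linear "detector" stands in for a real perception system, and a worst-case perturbation that is tiny in every pixel flips the classification outright instead of degrading it gradually:

```python
# Sketch of non-graceful failure for a toy linear "stop sign" detector.
# The weights and image are random stand-ins, not a trained model.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)   # toy detector weights
x = rng.normal(size=64)   # a toy 8x8 image, flattened

def predict(img):
    return "stop sign" if w @ img > 0 else "something else"

if w @ x <= 0:            # make sure the clean toy image reads as a stop sign
    x = -x

margin = w @ x
# Smallest uniform per-pixel change (in the worst-case direction) that
# flips the decision -- the classic sign-of-the-gradient perturbation:
eps = 1.01 * margin / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(predict(x), "->", predict(x_adv))
print(f"max per-pixel change: {np.abs(x_adv - x).max():.3f}")  # tiny vs. pixel scale ~1
```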
Okay, so this sounds pretty downbeat. Does it mean that we should not use ML systems at all in public decision making? Well, I think this opens up the need for a more complex, nuanced analysis — one that cannot be sweeping and too general.

One important consideration is what happens at the grassroots level. In some cases, people use the AI system much more than intended: perhaps because they don't have time to make their own considerations, they just follow the advice from the AI decision support system; or for some cultural or sociological reason they feel pressured to do what the system tells them without making a judgement themselves. But in other cases the reverse can be true, and this seemed to be the case in a case study we did at the Swedish public employment agency, where we saw that a lot of the caseworkers were very unwilling to use the considerations of the AI. It is a rather fun example, because this ML system had an algorithm that assigned a probability of getting a job to a job seeker — but since there is a lot of noise in the system, the model also had a random variable. So if a caseworker clicked "re-evaluate" many times, the verdict could change, and we have seen that many caseworkers keep clicking the button until they get the result that corresponds to their intuitions (see the sketch below). So usage can vary a lot, and these variations matter for how acceptable the system is, because a public decision — for example at the Swedish employment office — is a decision built up from many sub-decisions. Handing over a sub-decision near the top of the decision hierarchy is more problematic than handing over one low in the hierarchy, and whether something sits at the top or the bottom of that hierarchy depends on these sociological factors.

Another consideration is that some steps in a public decision are not, and cannot be, explained. Consider the case of a police officer stopping me for drunk driving. One of the steps in the decision to give me a fine would be for the officer to check my driver's license and make sure that I am the person in the picture — that they have the right person. This step is typically not explained: if asked at some later time, the officer will just say, "well, he looked like the guy on the ID." And it is a very clear case of a pattern-matching or pattern-recognition exercise: the officer is matching my facial features to those of the person on the ID card. So in a hypothetical case where this step was made by a machine, that step would be no worse, from the point of view of publicity, than it is now. If you think it is acceptable the way it works now, with a police officer doing this visual check, then we should also accept that machine learning systems could do this step.
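A hypothetical reconstruction of that re-evaluation anecdote — the numbers, noise level, and threshold are invented for illustration:

```python
# The decision-support score contains a random component, so a caseworker
# who distrusts the output can simply click "re-evaluate" until the score
# matches their own intuition.
import random

def job_prospect_score(base_estimate, noise=0.15):
    """Model output = base estimate plus a random term (as in the anecdote)."""
    return max(0.0, min(1.0, base_estimate + random.uniform(-noise, noise)))

random.seed(42)
base = 0.45      # what the model "really" estimates
target = 0.55    # what the caseworker intuitively expects
clicks = 0
score = job_prospect_score(base)
while score < target:
    clicks += 1
    score = job_prospect_score(base)

print(f"accepted score {score:.2f} after {clicks} re-evaluations")
# The human is nominally "in the loop", but the loop is used to launder an
# intuition through the system rather than to scrutinize its verdict.
```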
Finally, we argue — and this is also from Christiano — that there are different kinds of decisions that the public administration can make. Some of them concern our constitutional interests. Exactly what those are is a matter for debate, but examples include the right of habeas corpus, the right of free speech, free assembly, and so on. When those interests are threatened, or under consideration, it is much more important that the decision process be public. Other decisions involve not constitutional interests but just the interests of the legislative or democratic assembly. Say the democratic assembly wants to raise the child allowance for families with many children. Then you need some decision maker to assess, for example, whether this child is my child. It is not my constitutional interest to get an increased child allowance, so in those cases you could be more permissive of partial infringements of the principle of publicity. That was the end of the slides — thank you.

Yeah, thanks a lot, Karim. If you could stop sharing the screen, then we don't see ourselves twice — yes, I think that helps. Right, feel free to raise a hand or post questions in the chat. Since no one is doing so just yet, I'll take the first question. One thing I was wondering about is whether you could say something more about these reasons people are supposed to be giving. I can imagine that caseworkers, when they estimate the probability of someone getting a job soon, might do this based on previous experience, comparing the current case to earlier cases they have seen. How does that contrast with these statistical inferences you were talking about that aren't allowed, which at least to some degree make similar inferences based on earlier cases?

Yeah — so that would be another example, I would say, of a public decision maker in some cases making an assessment very similar to the one a machine learning system would make. Of course, there are also individual, particular considerations that the caseworker takes into account that are perhaps not part of the caseworker's "data set". Say someone is a convicted felon; convicted felons may be rare enough that they would not figure in the caseworker's experience, yet the caseworker can still take that into consideration when making an assessment. So I would say that in most cases a caseworker makes an inference based on statistical similarity, but in many cases it is also based on the individual's particular circumstances. Of course, that does not by itself make the decision good — but that is another question. To answer your question: the discussions we have had are precisely about this. Many decisions are made on this statistical basis, and not by mistake — there is an explicit instruction from the policy maker, from the legislative assembly, that you need to do this.
And we have been thinking that when there is an explicit mandate from the legislative assembly to make a decision on the basis of statistical similarity, that could also be a set of cases where machine learning can be acceptable — provided that this opacity is not a major problem.

Okay, thanks. Andrei, you had a question, or you raised your hand?

Yes — sorry, my microphone was not working. Really interesting presentation; I had a couple of questions. One: in your characterization of reason-giving, it seemed a bit restrictive to characterize it in terms of a connection between rules and laws on the one hand and some natural facts on the other. If you frame it like this, I don't see why the reason-giving property would be a structural impossibility for AI systems rather than a technical complication: we can program AI tools in such a way that they do connect some individual property of the case to some law or general rule, and in that respect you can make them reason-giving in the sense that you characterized. Second, about public reason-giving: there are standard, salient democratic institutions — this is not so much on the side of the authorities, but it depends on how you think about authorities in a democratic context — such as juries, or voting, where we have good democratic reasons not to engage in public reason-giving. Jurors, for example, are protected from giving reasons for their verdicts, and their deliberations are protected from the public; similarly for voters. So I was wondering whether there might not be similar considerations for the problem at hand. And finally, a reference I didn't get: is it Dressel, for the types of opacity? Could you repeat that? Thanks.

Yeah, thank you. So — I didn't mention this, but in our view there is a big difference in this context between classical expert systems and machine learning systems. What an expert system, a classical AI, would do is say: here is a condition; if that condition is fulfilled, then this is what we do. There are actually expert systems in Sweden for social insurance compensation, and although those might be good or bad — there have been some complications with them because of this proprietary thing, that they issue verdicts and nobody really knows how the system works because they don't have access to it — from the reason-giving aspect they would be fine in our view, because they are not statistical in that sense.
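A minimal sketch of that contrast, with invented rules rather than the actual Swedish social insurance criteria: a classical rule-based system can hand back, for each verdict, the case-specific conditions that produced it.

```python
# Hypothetical rule-based eligibility check: every verdict cites explicit
# conditions of *this* case, so it is reason-giving by construction.
def sickness_benefit_decision(applicant):
    reasons = []
    if applicant["weeks_unable_to_work"] >= 2:
        reasons.append("incapacity for work exceeds two weeks (rule 1)")
    else:
        return "rejected", ["incapacity shorter than two weeks (rule 1)"]
    if applicant["has_medical_certificate"]:
        reasons.append("medical certificate provided (rule 2)")
    else:
        return "rejected", ["no medical certificate (rule 2)"]
    return "granted", reasons

decision, reasons = sickness_benefit_decision(
    {"weeks_unable_to_work": 3, "has_medical_certificate": True})
print(decision, "because:", "; ".join(reasons))
# Unlike a statistical score, which can only cite similarity to other
# people, each reason here points at a condition of the applicant's case.
```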
You also asked for a clarification of what "public" means in this context. I think the jury is a very good example of a case where the reasons are public in one sense but not transparent. I am not an expert on the American jury system, so correct me if I'm wrong, but the reasons provided by a jury can be accessed by the courts, or by lawyers, and perhaps the prosecutor — there is at least an idea that these reasons can be communicated in some respect. The other example you gave is especially interesting: voters. Voters are protected in all democracies from disclosing what they voted for. But that would not be a case of public decision making: a voter is not the state, not the coercive authority of the state; the voter is a citizen exerting power on the decision makers. So voters are not included in this requirement. And perhaps — now I am thinking as I talk — we should consider juries in the same category as voters: juries ought not to be considered representatives of the state but rather representatives of the public. I don't know — what do you think?

I don't want to dwell on it, but especially for juries I think the case is clear that they are exercising a public office. We might disagree about what voters are, whether that is a civic office and so on, but for juries it is clear that it is a public office filled by ordinary citizens, and one that affects concrete persons directly. There might be a disanalogy in the sense that there are reasons during the deliberation: whatever goes into their deliberations is accessible in places where they can discuss them — there are a lot of biographies, and a lot of qualitative research on that — but the reasons are not accessible to the judge, with some exceptions where jurors engage in problematic practices. They can have very weird reasons going in there, and they are not bound to communicate them in any sense.

Yeah, thank you so much — this is an interesting case I never thought of, so we need to discuss this. Thanks.

Okay, David also had a question.

Yes, indeed — thanks for the talk. I am not sure I could follow all of it, but I did have a question, or maybe a point for brainstorming. You basically compared the decisions that humans need to make, and need to be able to provide reasons for, with the fact that it is hard to do that for algorithms. I have this feeling that we accept the opaqueness, as you called it, of humans, but in general we seem to overestimate, or hope too much of, technology — I think Afghani usually calls it trying to "tech our way out" of the opaqueness. I am not quite sure where I am heading, but my point is that these are public processes where, a lot of the time, the offices that introduce these systems — be they public bodies or companies — position them as "this is going to improve our decision making", "this is going to make it much more objective and much more clear", et cetera, whereas I think you pointed out many of the issues with that. So maybe I am asking a meta-question: what should law, or public offices, do to frame their use of these technologies? Do you have any perspective on that —
or is my question just too vague?

Yeah — I am not sure that we address exactly that point. What we rather want to say is that we take this idea of public equality seriously. Many countries take it very seriously: there is mention of a right to an explanation and so on in EU law, and in Swedish law as well, and those provisions echo the sentiment that if, say, the child protection agency wants to take my child, at least I should get an explanation of why — and that explanation cannot just be "according to some company's secret data set, you are similar to other people who lost custody of their children". That is the general intuition, and what we want to do is bring it down to the complexities of how AI enters the decision process. I don't know if that was a good answer, but that is at least how we try to approach the question. And sorry, I forgot to answer Andrei: the person is Julia Dressel, and she has an article from, I think, 2016 about opacity.

If I can quickly respond: I think what you are describing lies at the heart of what we try to do with AiTech, with this concept of meaningful human control — being able to somehow track and trace. Part of it lies in the idea that maybe we can investigate the algorithms, ask questions, and make them more transparent, and we have proposed several approaches to do so. But I guess my main point is the feeling that even if we do that, there is something in us, as a society or as users, that overestimates the technology. So — again as a brainstorm, not a clear question — what about those systemic impacts of seeing technology as a way to improve decision making? What are your thoughts on that?

Yeah, I have a few thoughts, and I mentioned some of them in the presentation. One is that some of this opacity can actually be reduced. Many researchers are trying to reduce the technical and observational opacity, and that is great, but there are many layers of opacity that have nothing to do with the AI systems themselves — for example the proprietary opacity, where companies say "we are not going to share our data, or our data-gathering practices". That kind of opacity could be reduced quite a lot by legislative efforts. The sociological opacity can also be reduced: if people were better informed about how AI systems work, that would be a good thing.

Let me give you an anecdote from the Swedish public employment office. It is fairly easy to estimate the probability of someone getting a job if they have been registered at the office for a long time, because the probability of getting a job within the next six months correlates very well with the amount of time you have been unemployed: the longer you have been unemployed, the less likely you are to get a job.
Okay, so that works for the long-registered. The problem, of course, is how to categorize people who are just coming into the office, because for them you don't have this data — and that is why they developed the algorithm for those cases. Here is a person who has been unemployed for two weeks: is this someone who is going to stay in the unemployment office for many years, or someone who will get a job the next day? Great. But the problem is that, because the bosses did not understand AI, they started using the same algorithm for people who had already been unemployed for a very long time. The variable "unemployed for three years" was not part of the algorithm, so of course the algorithm performed terribly compared with any reasonable caseworker, who could say: look, this person has been unemployed for three years, and we don't think he is going to get a job in the near term. The algorithm simply did not have that variable, which is super predictive — the correlation is something like 0.7. And this is not the algorithm's fault, so to speak; it is just that the bosses did not understand how it works. So, sorry for the digression.
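A hypothetical reconstruction of that omitted-variable story, with synthetic data and invented coefficients rather than the agency's real model:

```python
# A model fitted only on newly registered job seekers omits unemployment
# duration; reusing it for the long-term unemployed ignores the single
# strongest predictor in this synthetic world.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
months_unemployed = rng.exponential(12, n)
other_features = rng.normal(size=n)
# True job-finding propensity: strongly driven by duration (as in the talk).
p = 1 / (1 + np.exp(0.15 * months_unemployed - 0.3 * other_features))
found_job = rng.random(n) < p

long_term = months_unemployed > 36
# A "model" that, like the misused one, cannot see duration at all can do
# no better than a duration-blind average:
naive_prediction = found_job.mean()

print(f"overall job-finding rate: {found_job.mean():.2f}")
print(f"rate for >3y unemployed:  {found_job[long_term].mean():.2f}")
print(f"model predicts for them:  {naive_prediction:.2f}")
# The gap is the omitted variable at work -- any caseworker who knows the
# duration outperforms the model, through no fault of the algorithm itself.
```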
Thanks — thanks a lot for thinking along with the vague question.

Okay, great. Then the next question.

Yes, thank you for the presentation. I missed the first few minutes, so maybe you answered my questions then; I have three things to mention, so if that is too much we can deliberate after this meeting. I am studying the use of AI by police officers, so you will understand that I have some thoughts about this. My first question: when you explained the objections to current systems being opaque, you mentioned the opaqueness of machine learning that is based on correlation and not on causation. But what I did not see — maybe because I missed it — is that these systems are not context-aware; we do not have general AI. Things that are obvious to people — for instance, in the Swedish system, if somebody has cancer and is seriously ill, a caseworker understands he will not get a job soon — will not be seen by the system if they are not in the data, and the system will give bad predictions. Does that play a role in your article or your considerations?

No, not necessarily. We are not assessing the accuracy of a system, how good it is, but rather this aspect of legitimacy. Of course, there is what political scientists call process legitimacy: if a decision maker is really bad at what they are doing and consistently produces outcomes that reduce citizen welfare, that can also undermine legitimacy — but that is another route to it.

Yes, but I think the lack of context awareness, of common sense, is a very important reason why the legitimacy of machine learning in public processes should be limited. That is my first remark. The second: you state three reasons why machine learning can be applied even where in general one should not, and one of those is when a step is currently not being explained — like you said, recognizing a person in a picture; then we don't have to ask an explanation from a machine learning application. I don't agree with that. Sometimes we should ask for it. For instance, in policing, there is currently a suspicion that bias — racism — plays a role in the selection of people to stop, and we want to get rid of it. So even though we currently do not have a good explanation of why police officers stop certain people and not others, we would not accept an opaque system here; we want to be sure that this bias, which I think we recognize, is excluded. How do you think about this?

No, I agree — perhaps I was unclear about this. What we say is that if you replace this step in a decision with an AI system, the AI system is no worse than what was there before. But of course, if you think it is unacceptable that we don't explain this decision now, then you are going to think it is unacceptable that we use AI too. So yes, I agree with that point.

And the third one is a principle I formulated for the Dutch police that has to do with risk management, which you could maybe add to your considerations: the detectability and repairability of decisions are also factors that can help decide whether you can apply AI or not. A very easy example is speed cameras. Automated decision making and automatically sent fines are accepted because it is very easy for people — the burden on them is low — to object, to say "I have not been there; give me the picture and I can show you it is not my car". So it has to do with whether you can recognize that an error has been made and then repair it, and this could be applied in other, less mundane decisions as well — legislation, say.

Yeah, thank you. This is something we want to discuss in more detail — we are not sure exactly how far to go in this direction in this paper, but we definitely want to discuss what a public decision is, and what we mean when we say that AI is supporting a public decision. There are a lot of steps before the decision is made, but as you suggest, the decision might not be final, because people might appeal. That is a very good point, because it means we should consider not only what happens up until the decision but also what happens after it, and whether there is a right to appeal. It also makes a big difference whether a decision maker has the legal right to deviate from the AI recommendation or not, and how difficult deviating is, because that can vary a lot too. For example, at the moment at the Swedish employment office the bosses are trying to force people to comply with the AI recommendation: a directive was issued recently — it was very controversial, so they dropped it — that caseworkers needed to comply with the system at least 50 percent of the time. Which is kind of okay, whatever. But it was very controversial.
People were very dissatisfied, and there has been a lot of this back and forth. What you mention is also a relevant aspect of that consideration: is there an opportunity to appeal, and if so, is the instance of appeal a person, or perhaps also an AI? Because that is possible too, right — another AI that gives you a second opinion, a second verdict. So that is relevant, yes.

You're on mute. — Yes: so there is the question whether there is an instance where you can appeal, but the other thing is whether there is a way to detect that a decision was maybe not right. Some systems are so complex that a human will have no idea whatsoever whether a mistake has been made — or you don't even know that a decision has been made, so you won't be able to appeal because you don't detect it. For instance, if the police decided, in their office, to do a lot of controls in my neighbourhood, I would not be informed about this decision or the reason for it; I would only feel that I am stopped a little more often than before. I have no way to appeal, because I don't even know the decision was made.

Yes — and we had an AiTech Agora meeting before where someone presented research about the autonomy of people when using AI systems. Do you know about it? I will put the name in the chat. — Yes, I think I am familiar with the article. — Okay, thank you for answering. — Thank you.

Yeah, thanks. Well, if there are more questions, feel free to ask. One thing I was still curious about, sort of as a follow-up on David — oh, Afghani, you go first.

Ah — thanks, Stefan. Karim, thanks very much for a very interesting presentation. My question connects to what David was bringing up. You mentioned that when you talk about these things, sometimes people will say "well, humans are also opaque", and you explained some of the differences. You also mentioned that some of this comes from misunderstanding of how the AI functions — implying, I think, that this comes from people who are not themselves technical experts. But I am curious: do you have thoughts on how those of us who come from the technical field can do a better job — being clear to ourselves, but also to the outside world, about what the technology can and cannot do, what it quantifies and what it does with those quantifications? How can we communicate better, to decrease the miscommunication and manage expectations?

Yeah, thank you. I actually have some thoughts about this, from a friend of mine who is a professor in machine learning at York in the UK. He comes from mathematical statistics.
He tells me that, for him, it is important that students understand that what they are doing is mathematics and statistics. So he had a course plan where they explained Dutch books, Bayesian statistics — the basics of understanding what statistics is. But a lot of students were very dissatisfied with this; they actually went to the head of department and complained, saying "we don't want to learn this philosophy, we just want to learn how to build the apps so we can get a job". He was very bitter about this, because he felt that unless you give the students a rigorous understanding of what statistical inference is and what it means, the programmers will not understand what they do and will keep reproducing false ideas about machine learning. So, following his view, I think one important aspect is to have curricula for programmers, for machine learning developers, that include these maybe more difficult ideas about the nature of statistics.

Yeah, thank you very much. Okay, then very quickly, one thing I was still wondering about: if we manage to reduce this technical opacity, does that take away some of your worries about the reason-giving aspects of machine learning, or is it the fact that it is statistics that is problematic? Say we have feature-importance methods and we can point to specific aspects of an application to show why it is deemed unlikely that this person will get a new job — does that solve part of the issues around giving reasons for the decision, or is it fundamentally problematic that it is statistics?

Yeah — well, I think the three of us have different views on this. My own view is that it is very difficult to be absolutist about what reason-giving is, because, as in some of your earlier examples, there are many cases where reason-giving certainly involves a statistical element, at least when people do it. So I think — or at least my view is — that we need a holistic approach: we need to consider to what extent a decision is reason-based, and the problem of opacity, but also the thing I mentioned at the end, namely what interests are at stake. Are these constitutional interests, which have to do with personal liberty and so on, or are these principles that have to do with, say, compensation levels, safety-net levels, which can of course vary depending on what the democratic assembly wants? Those are important considerations as well. My political theory colleagues, though, are more of the view that there simply needs to be public reason. So I don't know if that was a great answer, but that is at least the way I see it: it is difficult to give a universal answer to that question.
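For what it is worth, here is a sketch of what such a feature-importance explanation might look like — the model coefficients, features, and baseline values are all hypothetical — together with the limitation raised in the answer:

```python
# Per-feature contributions for a hypothetical fitted linear model,
# decomposed around population-mean baseline values.
import numpy as np

feature_names = ["months_unemployed", "age", "education_years", "prior_jobs"]
coef = np.array([-0.09, -0.02, 0.04, 0.03])      # invented coefficients
applicant = np.array([30.0, 52.0, 9.0, 1.0])
means = np.array([10.0, 40.0, 12.0, 4.0])        # invented population baseline

contributions = coef * (applicant - means)       # per-feature pull on the score
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name:>18}: {c:+.2f}")
# This explains *which statistics* drove the score -- useful transparency --
# but the justification still bottoms out in regularities over other people,
# which is the residual worry about reason-giving voiced above.
```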
semi", "date_published": "2021-11-01T12:48:24Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "565735fcfbdb6cc1ff91958461427982", "title": "Human control over fully autonomous systems: a philosophical exploration (Giulio Mecacci)", "url": "https://www.youtube.com/watch?v=H_fUBF5ZR2U", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "turns as usual we're probably going to\nleave this room potentially even more\nconfused than we arrived in some cases\nin some cases i'm not sure this is one\nof those it's a good thing right but\nwe'll see so let's get it started um\nthere is a lot of talking about whether\nmore automation and artificial\nintelligence\nwill bring about less and less human\ncontrol\nright by simple definition\nmore automation means less control\nbut of course on the one hand automation\nis highly desirable right it promotes\nproductivity it reduces human effort\nand also when it's used correctly uh it\nimproves the quality of\noperations of what we do um and on the\nother hand\nhowever we would like to ideally be\nable to remain in control of this\nautonomy\nand this is what i call here the control\ndilemma\nright um but first things first\nwhy do we want or need to remain in\ncontrol of these intelligent\nautonomous technology\nthere's at least two big families\nlet's say of preoccupations so the first\nfamily of concerns regards\nsafety this ranges from safe operability\nfor instance when we deploy systems we\ndon't know very well\nor are that are unpredictable to some\nextent\nand a two existential risks that's\nthat's the other\nthat's the other problem and the whole\ndebate about super intelligence if\nyou're\nif you're familiar with that we've seen\nmany important authors recently\ndedicating some serious efforts to this\num stuart russell for instance very\nrecently his last book human compatible\nhey aai and the problem of control\ni'm talking about these risks for ai\nbut there's a second family of concerns\nwhich regards\nmoral responsibility so it's something\nlike who's going to be blamed for the\nactions that are initiated by an\nautonomous system\nmany have um observed for instance that\nartificial intelligence\nespecially with regard to certain deep\nlearning techniques\nhas a transparency problem in the sense\nthat we cannot easily find the reasons\nwhy\ncertain decisions have been taken from\nthe\nartificial system and this makes it\nreally\nhard to trace back those auctions to\nsome human person that's accountable for\nthem\nand also in many cases even if these\npersons or\nperson are retrievable we have troubles\ngenuinely blaming them\nright either because they didn't do\nanything intentionally\nor maybe because they couldn't foresee\nactually what would have happened\nand so on and and other reasons like\nthis\nso it seems to be important for many\nreasons to remain in control\nbut we might just not be in luck\nmaybe we have to to give up on that at\nsome point\nsince since we're going towards like\nincreasing and increasing autonomy right\nso many have tried to provide solutions\nor maybe i should say sometimes work\narounds\nto minimize the issue of giving up human\ncontrol\nespecially for what concerns the problem\nof responsibility\nso just just here a couple of very\noversimplified\nexamples here we have for instance those\nwho propose that\nsomething like nobody gets uh blamed\nokay\nnobody would just stipulate some sort of\nagreement\nand where we specify who's going to pay\nlegally speaking and we settle for that\nbut 
This approach — let me only say — has been criticized, also by myself, for instance by highlighting the importance of certain forms of moral responsibility that should never be dismissed like this. I am not going to go into detail here, because I want to get to the point and I never do, but I have discussed this in other talks and papers, and there is a forthcoming paper where Filippo Santoni de Sio and I discuss it, among other things — so stay tuned if you are interested.

Then there are others, like some of my Japanese colleagues, roboticists, who genuinely give their best shot at making artificial autonomous agents responsible for their actions, investigating forms of legal personhood. This approach has frequently met criticism of the form "there is something off with kicking my refrigerator because my beer isn't cold enough" — you know how it goes. I am to a large extent sympathetic with that attitude — to a large extent, not completely — but it is worth considering that artificial autonomous agents might, perhaps even soon, become very similar to us humans, and one might want to be prepared; I think that is the central idea motivating this approach.

But many believe, with good reasons, that there is no way to have the cake and eat it. In a sentence: maintaining any sort of human control, in the meaningful sense, over highly — let alone fully — autonomous systems is impossible. Just impossible. If we can control them, maybe they are not autonomous enough. In this little talk I would like to explore a few philosophical ideas from the tradition of philosophy of free will, mind and action, including the very recent idea of meaningful human control, to see if anything might let us have half the cake — we keep the cake and at least take a little bite of it. Maybe we can settle for that.

Some of you may know — it has been mentioned here already — that for the past few years I have been working on deepening and operationalizing this philosophical theory of meaningful human control over autonomous systems, with driving systems as the use case, though the theory itself is neutral to the case. It was developed by Filippo Santoni de Sio and Jeroen van den Hoven in a paper a few years ago, and — this is important — it was really devised with overcoming this dilemma between control and autonomy in mind, among other things. That is what I want to take out of this theory. I know some of the crowd here might think I am covering it ad nauseam; they are allowed to switch off the audio for the next few minutes, but I would recommend staying anyway and waiting for the twist. The theory sees two conditions for control over autonomous systems, called tracking and tracing; the degree of human control of the meaningful kind depends on the degree to which these two conditions are satisfied.
The tracking condition sets a specific requirement — a property that these autonomous systems should always display if we want them to be controllable by human agents, by us persons. The property is that these systems — potentially even complex socio-technical systems: operators, devices, infrastructures — should co-vary their behavior (you see the cogs on the slide) with the relevant reasons of the relevant human agents for doing things or not doing them; with their intentions, in a way.

The second condition is called tracing, and it requires several competencies from a user of the system; it aims at making their involvement in controlling the system as relevant as possible. There should be a person, it says, at some point in the design or use context of an autonomous system, who has a special awareness of its functioning and of the moral role they play in that system. This would allow, as the word suggests, tracing the actions of a controlled system back, as effectively as possible, to one or more human persons who were put in charge.

What you should notice at this point is that these two conditions aim at two different aspects of control; they have slightly different, complementary scopes. The latter condition, tracing, aims at facilitating the attribution of responsibility in a way that is as fair and just as possible, but it is less concerned with the nature of the actual interaction between controllers and the controlled system. If we took tracing alone as the condition for control, the whole meaningful human control theory would be a normative theory — we have seen several normative theories about responsibility and accountability — but it would not tell us much about control in the other sense, about whatever connects controllers and controlled systems. The tracking condition is there for this reason, and it requires the system's behavior to co-vary with certain reasons to act.

But what are these reasons, you ask? Reasons are not actions. Normally you would want to control lowly autonomous systems with some sort of control panel: you want the system to behave according to the buttons you push — reacting to actions. Not, however, with highly or fully autonomous systems, one reason being that we are very bad at supervising and vetoing. "Reasons to act" is a generic way to refer to human intentions, dispositions, goals, plans and even values, we have argued. The idea is that the system's behavior simply aligning with those reasons would grant the space for a high degree of autonomy while maintaining the right degree of moral involvement — of control, in a way — and therefore more control.
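One way to make the tracking condition vivid — purely as a toy, not as an implementation of the theory — is a system whose chosen behavior co-varies with a weighted set of human reasons, so that changing the reasons changes the behavior without any run-time button-pushing. All names and weights below are invented:

```python
# Toy "tracking": behavior co-varies with the relevant human reasons.
RELEVANT_REASONS = {"avoid_harm": 1.0, "arrive_quickly": 0.3}  # weights = priority

def choose_action(candidate_actions):
    """Pick the action that best serves the currently relevant reasons."""
    def support(action):
        return sum(RELEVANT_REASONS.get(r, 0.0) * v
                   for r, v in action["serves"].items())
    return max(candidate_actions, key=support)

actions = [
    {"name": "overtake now", "serves": {"arrive_quickly": 1.0, "avoid_harm": -0.5}},
    {"name": "stay in lane", "serves": {"arrive_quickly": 0.2, "avoid_harm": 0.8}},
]
print(choose_action(actions)["name"])       # -> "stay in lane"

RELEVANT_REASONS["arrive_quickly"] = 2.0    # the relevant human reasons shift
print(choose_action(actions)["name"])       # -> "overtake now"
```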
physical things\nlike actions and in particular very\nimportant free actions\nso we're almost there not yet you give\ngive\nus philosophers another thousand years\nmaybe two thousand\num and we'll we'll see what happens\nbut now here's the issue with this\nhere's the issue here's the issue\ni said we need autonomous systems\nbehavior to co-vary\nright with human reasons so there needs\nto be some interaction\ngoing on right um so these reasons\nhave to somehow cause the system's\nbehavior\nthey need to steer it to keep it on\ntrack with our ever changing\nmoods or the uh ever\nchange value changing society\nour changing needs yes\nand no mostly no there's\nthere's good reasons why this condition\ndoesn't use words like causing\nor similar words so reasons reasons\nthe reasons are well known\nto be somewhat problematic causes\nfor action so reasons are not good\ncauses for auctions\nthose reasons that should steer push to\nwhich the system should respond to\nright so not good causes i\num donald davidson amongst other\nphilosophers\nhe discussed this problem at length in\nthe context of of the mind body\nproblem to oversimplify\nit might be it might be hard to retrieve\na strict and well-defined\nlaw like a physical law right which\nbinds the reasons\nfor an action and the action itself\nthis is for some at least some due\nmainly\nto the fact that reasons are high level\ndescriptors\nin a certain psychological language and\nthis language seems to be hardly\nreducible\nto its physical counterpart which is of\ncourse expressed in a different\nformal language the language of physics\nmathematics\nmaybe so in philosophy of mind many\nwould settle for this would say after\nall physical events\nokay actually control wants action\nthat's not what we're denying\nthey're just too complex and fuzzy to be\ndescribed\neffectively in in in physical terms\nso so we use reasons to describe\nthose uh those events\nthose from physical phenomena so we use\nmore abstract explanations in a way\nwhich we can handle\nmuch better and in some sense so still\nin some loose sense we can say still\nthat reasons cause\noptions right but we have a different\nproblem here\nbecause we have to design\nbehavior and not only to\nexplain it so so reasons are very good\nexplanations\nexplanation for auctions but\nbut we have a we have to take a\ndifferent more aware perspective\nfrom a step before design\nright so we need to understand that the\nnature of this link between\nreasons and autonomous systems behavior\nand the tracking condition here it just\nsays\num that human reasons and the actions of\nthese systems\nshould go hand in hand should be aligned\nit doesn't say whether this is the case\nbecause they just always let's say\nhappily agree\nor because they talk to each other\nso\nhow do we solve this um kind of\nprincipled problem\nso one idea would be to find a way to\nestablish\nthis link that we're missing between a\ncontroller's reasons to act\nand the behavior of these autonomous\nsystems but then one could think\nwe have said reasons might be right in\nsome hardly accessible sense in the\nbrain and\ntherefore in some way there are causes\nof an option\nso one way would be one way to establish\nthis missing link would be\nuh through some sort of i don't know\nbrain reading device\nthat's capable of classifying mental\nstates\nwith interpreting abstract intentions\nputting them into action um\nokay but um i think this this is a\nlittle\nmisleading and i have some concerns\nabout this idea\nranging from well 
But I think this is a little misleading, and I have concerns about the idea, ranging from its technical feasibility to its principled soundness. I am going to be very superficial here — I apologize, because this really deserves its own talk, maybe a longer one than this. First, on the technical side: we are far away from having a functional neuroimaging system and AI classifier sensitive enough to discern the adequate nuances in thoughts and intentions. Achieving such technology might require extremely high resolutions and classifiers that are as sensitive and as specific as persons are at understanding thoughts — I defended this point in a paper some years ago — and that means very broad and very intelligent algorithms or neural networks. I am not saying this is impossible; I am just wondering what we do while we wait for that technology to be invented.

The second concern is deeper, somewhat more principled: it is about the possible inherent difference between a reason on the one hand and a neural event on the other. We should be very careful, I believe, not to trivialize the notion of a reason. These reasons, as I said before, include different kinds of entities — moral goals, motives, even values — and it is to these abstract entities that we want a sufficiently free and autonomous system to be sensitive, to respond, to align. The extent to which these things are retrievable in someone's mind, let alone in some interpretable neural event, is very dubious. These weird entities extend over time, they emerge at the level of society, and so they might just not be in somebody's — or anybody's — head. That may simply not be what they are.

So, being short of ideas on how to make this connection between human reasons and systems' actions work, I thought I would go back in time and meet Gottfried Leibniz. Like many philosophers of his time, he was trying to make a lot of things work, in many different fields, and one of those was, of course, working out the relation between our soul or mind and our body — and therefore our actions. One of his ideas was that causality might not be necessary at all. God, he thought, is a very good clockmaker and designed all things so well that they would forever work in harmony — a pre-established harmony. Any causal interaction between two things would be mere illusion. The basic entities the world is made of he calls monads — hence the "monadology of technology" here. Monads are self-sufficient systems that God pre-programmed to harmonize with each other like perfect clocks: they all run together, but they never communicate with each other or with the designer. And since, hey, I would really like to play God, I thought this would be an interesting analogy. And in a way, we do. So if I keep playing along the lines of this analogy, my question becomes: should we consider, at least as an option to investigate, the possibility of conceiving control without any such causal connection between controller and controlled system? That possibility is in large part already contained
the idea of tracking and\nmeaningful human control\nbut of course like we have to make\nexception of this initial design\nphase there they're just the contact\nright when we as uh like it was for the\nlate night\nleibniz and god we set things in motion\nand what does it mean to design for this\nkind of\ncontrol so i thought another metaphor\ncame to mind uh and uh\nand let me mention it it's a silly one\nbut not so silly the train\nright so in a way the train goes where\nwe want\nall right so does does we want\nusually but it doesn't require any\nadditional input to steer\nit doesn't require us to constantly\nintervene\nto sort of keep it on track\nwith our reasons um the tracks\nthe railroad allows the train to be\ngoing where we want the whole time it\ntravels\nand there are design design of these\ntracks it expresses a\nvery dense history of societal values\nand moral reasons\nso they tell a story about for instance\npolitics\nand economy and about the people's good\nreasons\nto meet each other and stay connected\nmoral reasons too so i understand this\nis not a good example of sort of\nintelligence\nof or flexibility or autonomy yeah\neven but i in some sense it might be\nbut it might also be a good example of\ncontrol so\nshould we consider maybe more\nintelligent\nand more autonomous systems\nlike trains with very many of those\ntracks\nthis is it can this inspire a little\ncould we design for instance all of them\nto go where we want so we can we\nset them up in a way that that\nthey won't fail us but they'll be\nflexible enough\nfor us changing our minds we design\nthose tracks and this is harder\nso to go where we might want to go in\nthe future\nin this value changing society but to\nnever go where we should normatively\nnever want to go\nso final question is really if this is\neven\nsomething would this be a sufficient for\nus form of control would this be a\nsufficient form of autonomy as it was\nfor\nfor leiden it's or would this just be\nanother\nover complicated way to sort of to give\nup\nanyways on control autonomy or both\ni'm gonna i'm gonna leave you with this\nthank you so much\nbecause i feel like this becomes\na field for designers more than\nphilosophers\nand i would love to sort of see if if\nany intuition or or have i stimulated\nany thought about it thank you so much\nand thanks\nthanks like this great\njulia thank you very much extremely\nfascinating talking i really like it\nthis uh yeah someone have question you\ncan just yeah just\nraise your hand or just say something or\njust vote on a chat\nor anything right not only questions or\nanswers to judy\nquestions that's what i want if i may\nthank you very much for presentation um\ni'll turn on my camera as well otherwise\nyeah just looking at a screen for us\nhello so let me put you in the right\nspot\nyes now i'm looking at you and seeing\nyou back um\ni i that second idea i was it got me\nthinking about what is ai to you\num because i would actually be quite\nfine with this idea of perfect design\nbeing the solution for my trains or my\ndishwashers and other systems but i\nmight be less fine with it when i'm\nthinking about for example my kids\nand i have a feeling that i will be\nsomewhere in between that spectrum\nuh so where would you place ai and how\ndo you think that might relate\nto this idea of perfect design will be\nenough\nso these uh thanks so much thanks rob\nvery very fascinating this brings me\nback to so many things and discussions\nand\nfree will so let me let me give you sort\nof the 
Giulio, thank you very much — an extremely fascinating talk; I really liked it. Does someone have a question? Just raise your hand, or say something, or post in the chat — and not only questions; answers to Giulio's questions are welcome too.

If I may — thank you very much for the presentation; I'll turn on my camera as well, otherwise we are just looking at a screen. That second idea got me thinking about what AI is to you. I would actually be quite fine with this idea of perfect design being the solution for my trains, my dishwashers and other systems, but I might be less fine with it when thinking about, for example, my kids — and I have a feeling AI will be somewhere in between on that spectrum. So where would you place AI, and how do you think that relates to this idea that perfect design will be enough?

Thanks so much, Rob — very fascinating; this brings me back to so many discussions about free will. Let me give you the counterpart of what we do in philosophy of free will. There is this whole discussion about determinism: a world made of physical laws with only one way to go, a chain of events with no possibility, in physical terms, of doing otherwise. And when we think about ourselves — or our kids — and when we think about AI: we are made of the same stuff; we are very similar. So the observation — not the answer — prompted by your question is this: if we are happy with ourselves being sufficiently free, even though we were made a certain way, not intelligently designed but shaped by evolution, with our own constraints — things we can and cannot do, the limits of our body and of our cognition, which we try to overcome daily with technology (one of the great things technology does) — then we might be just as happy with AI. And the next question your question raises is: can we then define ourselves as being under the control of evolution, for instance? Because who designed us? We now want control through non-intervention, through just that moment at the beginning when we design the technology in a certain way, and we want to see if we can call that control. So are we under the control of our own nature? This is Spinoza, by the way, I believe — and if there are philosophers in the audience who want to kill me for that, they are welcome. Anyway, that is what I would observe out of your question, which I find very fascinating.

I cannot hear — we have a question from the chat. — Yes: so, Giulio, thank you very much. I sort of tripped over that last remark you made about being under the control of evolution. Before I go to the question I put in the chat — and I am sure you can answer this one — when we say control, do we always assume that there is a controlling entity, or can it also be something that emerges?

I think it is emerging — I think we should consider it as emerging, ideally.

Sorry — then why do we call that control, and not influence?

That is the challenge. Meaningful human control really is, in a way, a softening: at least when I think about our theory of meaningful human control, I don't think about the kind of control that is direct and operational, but about something more like an influence. Would that still be within the boundaries of control? That is your question, and it is very good: where does influence end and control start? Do we want to define meaningful human control as a softer form of control? Do we associate control with a sentient being?

Do you mean with the controller? — Yes: the person, the phenomenon, whatever it is that exercises that
control\nwhether it's whether it's controller\ninfluence and that was not the intention\nof my question but i'd see your point\nbut now i'm on that question of uh you\nknow\ndeliberate sentient etc i mean\ni i mean i'm a big fan of a selfish gene\nand those kind of\nbooks which i don't think talk about\ncontrol\nbut yeah that in a way that book it\nrepresents a little\nbit in our terms like translated in our\nterms\nwe've been many times claimed to be the\ncontrol\nthat is exercised by\nsocietal values by the\nsystem at large that's made of\na lot of sentient beings but also\nuh made of of these these regulations\nvalues intentions ideas are those\nthose values that maybe we can conceive\nas controlling the direction\nof the technology where the technology\ngoes\nare those sentient in a sense or do they\nemerge you you use the merger i like\nthat word very much\nokay let me go let me go to the question\ni put in the chats\nso this discussion about the reasons and\nactions etc of course is something that\nuh\nhumanity has always you know been\nsubjected to in a way right if you go\nout and get food or if somebody attacks\nyou and you pick up a stick and you kill\nsomeone or defend yourself\nthere is this action reason um causality\netc\nbut in the past i don't know 50 years\n100 years or so suddenly of course we've\nmade that step to\ninteraction with technological artifacts\nand ai being the\nthe most recent one at a level that we\ncan hardly understand so i was wondering\nif we discussed the\ninteraction between reasoning and\nactions let's say in\nhuman history versus where we are now do\nyou think they have the same answer\nyeah i'm not sure if i make my question\ni'm not sure if i'm not even sure if\nif i phrase it well but\nso do you do you see where i'm trying to\ngo or or am i\ndid i lose you so no then it's not\nbecause of you i think i think\ni'm just trying to digest the depth of\nthis question\num so the interaction between\nrationality rational entities and\nand actions and the world outside\nbetween mind\nand body okay did it change\nwhen we discovered\nthat we could design intelligent beings\nor or what you mean maybe is did it\nchange\nwhen um in\nin in the process of interacting\nwith artificial beings\nsuch as ai sufficiently intelligent\nthings\nso let's say that in the era of\nthe interaction and the design of\nintelligence so at some point we\ndiscovered that intelligence could be\nuh could be designed could be created\nand we started thinking look\noh so we can we can reproduce what we\nbelieved\nlike only god could could do right or\nwhatever other\ntheory you want to have um\ni believe at that point what we had\nwas this the realization that\nthe idea of us being very material\nthings in the world and intelligence\nbeing something more tangible that's\nwhat it\nchanged so while interacting or being\nable to\ndesign intelligent things uh\nwhile doing so we have had this sudden\nargument for materialism that's that's\none of the things that it gave us so\num in the debate about whether\nreasons um determine\nauction i think certain explanatory\ntheories such as look reasons are just\ndescriptions the one that i the one that\ni mentioned just descriptions of\nphysical events\ni think ai and technology helped us\nin in understanding better\nthe nature of of cognition\nso i'm not sure this answers i don't\ni don't think it does\nwhat i was asking anyways but it's uh\nit's okay thank you very much\nyeah it's truly it's very very complex\nand\nquestions so that's yeah it's 
gonna\nalways be hard to cover all the points\nright but yeah i think you did a great\njob and\nwe have a next question from sylvia\nuh syria um can i read it or would you\nlike to ask\nokay the video oh no there it is it\nworks\nnow i was wondering if there is and i\ndon't know either\nhow to properly um uh ask this question\neither\nbut um i was wondering if when we talk\nabout\nthe connection between reasons and\nactions if there isn't first some sort\nof synthesis\nand the reasons because\n[Music]\nthere can be different actors the the\nproducers of the technology have\ncertain reasons that might translate\ninto for example a different connection\nwith the actions and a different\nmeaning or value ascribed to what's\nmeaningful in the meaningful control or\neven what's control\nso i was wondering and then the public\nauthorities might have another one\nindividuals might have another one so i\nwas wondering if if\nwhen we talk about connection between\nreasons and actions\nif we first also need to discuss that or\nif\nthat affects that somehow absolutely\nif i may if that was for you the\nquestion\ni believe the one that you pointed out\nis\none of the first in time\nand in priority challenges of any theory\nof meaningful human control and control\nand\nin the time of autonomous technology\ndefining understanding a framework of\nactors and their reasons oh i think this\nis mostly what i've been trying to do\nthe past years actually so i'm\nsympathetic with this point\nto identify and isolate\nthe actors and their motives\nreasons intentions\nthat comes before in a way being able\nto translate them and find a way which\nis which is what we're talking about\nhere\nfind a way to understand them\nas causes\nor reasons or explanations\nfor the actions of a technology\nso this link is the second step\nafter we have really well\nmaybe they'll they'll go hand in hand\nbut it all starts with identifying the\nsources\nas you mentioned then you can design\nto transform these and do the design and\nyou find a way to explain how they can\ninfluence\nthe behavior influence i'm using arnold\nuh\nyou sorry um word because it's correct\ni like it very much so that's the\nthat's that's the idea so yeah of course\nit's it's actually\nthe challenge one of the first\nchallenges i don't have an answer i\ntried i\ni uh in the paper in 2019 i designed\nthis um\nwith felipe with this this\nscale of reasons and agents to sort of\ntry to deploy some sort of framework\nthat would that would relate to reasons\nand actors\nin a certain given case scenario\naccording or donated according to\nthe nature the proximal nature of their\nintentions this is a little bit\ncomplicated\nit's in philosophy of auction but you\ncan identify different kinds\nlet's say or degrees of intentions and\nreasons and\narguments and even laws and values at\nsome point i don't want to go there\nbut just like we've been trying\ncan i can i ask a little follow-up if\nthere's time\ni was wondering because i'm a yuri\nand for us there is often the question\nof um\nbalancing values or inevitably\nsometimes we stop at okay we have a\ntrade-off here\nso basically the choice then among these\npossibly conflicting values is going to\nbe\none prevailing on the other and uh\nwhat i was mentioning before was more is\nis there because this kind of chops off\npart of the reasons\nsomehow but the reasons are going to\nstill stay there i believe\nwhat we find often is that the conflict\nis because these reasons try to\nre-enter the whole thing so that's 
why i\nwas wondering like\nis there a possibilities are there\nmechanisms for a synthesis\nmore more than than that i mean turbine\ndiscusses that a lot and he's one of my\npersonal favorites because\nfor the law he is particularly useful\nlike instead of just chopping off\ntreating these things as a trade-off\num are there mechanisms are there um\nreasonings that actually are more\nwell yeah i'm more of a synthesis or\nmore just including everything\nfrom the philosophical perspective i\ndon't know i mean yeah\ni mean there's plenty uh this is a big\nproblem in\nin ethics um so how to do\nto take sort of the the best trade-offs\nbetween values and in your case and\nnorms\num but this is neutral and it's a big\nchallenge in itself\nit's but it's an it's independent from\nwhat you normatively then choose\nafter deliberation after having\nidentified your trade-off what you take\nfor your model of you know we should\nfollow we should design for\nprivacy rather than security i'm saying\nsomething absolutely\num random but but the trade-off between\nprivacy and security that has to be\nelaborated\nthe prior stage in a different context\njust absolutely important\nit's fundamental but you just sort of\nthere are two processes different\nprocesses connected\nbut and then i wouldn't i'm not an\nexpert in\nsort of value trade-offs there's many at\nthe udel\nyeah for sure you probably are a much uh\ngreater expert than i am at this uh so\nyeah two weeks from now we're gonna have\nalso uh\nexactly for instance that's one of them\nplease join us in two weeks again we can\nget to that conversation amazing\nso uh um the next question is from elisa\nlisa would you like to say or can i read\nit\nokay are you you're muted\nyeah yes thank you i always do that\nthanks julia it's really an open\nquestion i have for you um\ni'm i'm curious at how also based on the\nconversation we just had and then i see\nthe next\nquestion from claudia cruz\num about how you see philosophically as\na modern philosopher\nthe difference between control and what\nwe might call\nstewardship um as an interaction\ndesigner control\nbrings along questions of who\nis in control and and who has the\nprivilege of controlling their poison\npower to act\nto respond um\nit it also brings up questions of\nhow can i deal you know with with\nemerging behavior that might not\nnecessarily be bad\num so how do i\navoid stifling also um\ninnovation so so i just\nwonder philosophically how do you see\nthese two concepts or you know what is a\nsofter version more responsive version\nof control philosophically speaking\nlet me let me take the last thing that\nyou said it's very\nvery sort of stimulating uh you said\nsomething about\num there are emerging behaviors\nof a technology is that would that be\ncorrect\nof a technology that are not\nnecessarily bad\nbut they were unexpected and they are\nout\nof they might be oh correct me yeah\nwe're open\nout of a particular or\na set of particular controllers\nintentions right oh we did not expect\nthis but but it's it's good but we never\nwe never intentioned we never\ndid it as controllers so the the the\nidea that we tried to\nsort of um incarnate with\nmeaningful human control is that going\nback to to what we talked about before\num control doesn't need to be conceived\nor at least this is a proposal you can\nconceive\ncontrol being exercised by\nvalues themselves there at the extreme\nspectrum\nof the great big scale of intentions and\nreasons\nright so these values are not are not\npersons they 
represent\nuh an emerging sentiment\nwhich is for instance what you said when\nyou said oh but it's good\nright we never did it\nin a way but we did it in another way\nbecause that\nresponds to the\nvalue of goodness this is a stupid example\nbut to make the point it's a good thing\nso it responds\nto this value right\nso in a way that is a little bit of the\nessence of the tracking condition where\nit says\nit has to covary with the reasons\nright it means going hand in hand which\ndoesn't mean\nthat if tomorrow what i\ntoday think is good it's still good\nright and that is the problem that's one\nof the reasons why we're here today\nbecause i can conceive control as being\nthis uh this\nalignment between values and reasons and\nthe behavior\nwhich is by accident good so i'm in\ncontrol\nbut i'm not entirely satisfied with this\nbecause if tomorrow\nmy values change technology has\nto immediately switch\nthat's the responsiveness that\ni desire and that i don't\nreally obtain while i'm struggling to\nget it out of extremely fully autonomous\nsystems so the idea is you know\nyou have these two clocks\nthey're always in sync so if we change\nthe technology changes because\ni can foresee possible changes and some\nof those changes i don't want\nso maybe i have a range of\n10\ndifferent choices and any of those might\nbe\nokay within that range that's a\ndesign question that i really don't know\nhow to answer i'm sort of thinking\nalong with the better designers um\nthat's what i can say here yeah\nokay great thanks thank you\nuh just let me correct myself\nbefore i said in two weeks you're gonna\nhave people from the pool no\nin two weeks we're gonna have steven umbrello\nand in three weeks on the 28th of\noctober we're gonna have ibo van de poel\nokay so uh the next question is gonna be\nfrom claudia\nyeah hi um actually elisa was right that\nmy question was very much connected\nto her question but i do have a\nfollow-up\nin a sense that when we talk a lot\nof times about control and also\nhere when we talk about control and like\nrestricting\nkind of the possible\noutcomes or what we\nintend for this machine to do\nwe talk about this idea of like okay at\nthe end of the day\ni have control right but what if\nwe have a system that you know kind of\ndeparting from autonomous weapon systems\ngoing into the direction of systems\nthat\nsomehow can suggest to you where to go\nso algorithmic systems like\nrecommender systems that can provide you\nwith\nideas for your course of action so in\nthat situation of course\nnowadays in like the public\ndiscussion that would be\nconsidered to be in control because you\nas an ultimate\nlet's say commander you're able to say\nokay well this is the recommender system\nand i have to take the decision to for\nexample\num well you know point the gun or\nwhatever you need to do so from\nyour perspective would you say that\ngiven kind of this framework that you\nalso presented\ndo you think that you can also\nsay that the control there applies\nbecause you're there at the end\nlike doing the action you're actually\nthe doer even though that action may\nhave been\ninfluenced in this case by an algorithm\nthat provides you with the decision\nwhat to do and uh especially in like a\nvery\nintense situation you might not even\nhave maybe time or resources to\ndouble check that so
so how do you\nmaybe you know how would you deal with\nthat kind of situation and control\nunderstanding\nyeah i think we have to to some extent\naccept that our decisions\nas much as our um\nopinions are as much as our design\n[Music]\nideas or our the way we do the things\nthey're always influenced by\nby something else so i i could say well\nin a way think about\nyour uh decision to\ngo to a certain\nuniversity was it your decision\nor was it a decision that was also\npartially\ndetermined or influenced by a number of\nof contextual cues other persons\nso and do you feel that you're\nin control of your own decisions\nbecause of that reason it's a question\nactually\num i mean of course i do agree that of\ncourse all of our actions are influenced\nbut in a sense if you especially talk\nabout algorithmic systems or systems\nthat you know assist in military\ndecision making\nthen you have two very two like two\nsituations in which first the military\ndecision making of course has a much\nbigger impact than my choice to go to\nuniversity\nand then you also have a situation in\nwhich a military system or the\nthe system that i have at hand might be\ninfluenced by design of an external\nparty so my decisions will not be\ninfluenced by the organization or the\nthinking within the organization\nuh but externally by other forces so\nthat's kind of the the tough okay\nabsolutely i understand can you see this\nmy screen still\nam i still sharing it says your screen\nsharing\nyeah yes we can so okay can you see this\nlight here\nso um actually maybe i can play this\nyeah so that's so to to\nto consider the importance of those\nnormative aspects\nof values like well decisions uh\nthat have high stakes um should be taken\nby humans humans should be\nin charge uh humans should be ultimately\nresponsible that is what the tracing\ncondition does\nafter so it's not just about the\nmetaphysical condition of control\nthe tracking condition that we are in\ncontrol\nbut we need a certain kind of the has to\nhave certain a certain nature\nand what you say is absolutely i\nshare it i i'm i completely agree with\nwhat you say\nand those values that you mentioned\nin theory at least sort of the attempt\nis that the tracing condition\nshould set up the\nthe further normative conditions for a\ndecision for instance a military\ndecision where there's high stakes\ncould be meaningfully\ngenuinely truly human in the sense that\nyou can attribute responsibility and the\nsense of\nso that is a normative set of\nof of um requirements rather than the\nmetaphysical\nrequirement in the in the link the\nconnection\nso i agree with you and and\nthat's why we have two\ntwo aspects of control taking care and\nyou're talking about this\nuh aspect on the right side of the\nscreen\nyou you can put more conditions here\nthis is one thing yeah all right thank\nyou very much\ngreat so i think we we already ran out\nof time since we had like some problems\nbetween all this transition from gta to\nzoom let's be a little bit more linear\nso whoever has to\ngo another community thank you very much\nfor joining us\nand let's go for one last question here\nfrom arcadi\nand then that's it okay i carry\nyeah thanks uh so i i've noticed that we\nhave a couple of more questions here in\nthe chat so julio if you don't want them\nto disappear into nothing\nyou might want to uh check them\nuh before we can send you all you mean\nin the chat\nyeah i'm sorry yeah yeah yeah so yeah\nit's just assuming that my question will\nbe lost\nso yeah i have 
a very practical question\nright so i've been thinking for the past\nyear or so\nhow do we actually cover\nthe behavior of the autonomous of\nautonomous systems with the human\nreasons and exactly\nconnecting reasons and then the actions\nand the actions we can observe and the\nreasons we cannot and then\nyeah it's interesting that you mentioned\nthat reasons are bad causes\nuh for action so i'm not sure i follow\nthat argument but\ni have a very practical question right\nso assume that uh\nwe can observe these actions and the\nobservations are perfect and assume that\nwe also have some kind of\nan understanding maybe not perfect\nunderstanding but some kind of\na causal model if you want of reasons\nuh causing actions and then\nwhat we can boil the whole problem down\ndown to\nis basically causal inference so we have\nobservations and we have\na model which might not be perfect but\nwe might want to infer what are the\nactual reasons behind these actions\ngiven the observations and the model and\nthen this is\nsomething that we can relatively easily\nformulate mathematically and\nwhether it's valid at all i'm not sure\nbut what's your take on this\nthat um i think the problem i mean\nthere's many problems but\nthe and challenges not problems\nchallenges because i like it\num but the main one that i see here\nis it stays at the beginning the first\nthing that you said oh\nlet's let's assume that reasons\nare things that are good enough to be\ntranslated into causes right um\nboiled down somehow transformed\ntrimmed compressed reduced that's the\nterm that we use often\nreduced to physical causes um\nthe risk i am at first there's just\nseveral reasons why\nthey're inherently non-reducible there's\nmany philosophers um\nassuming take this as in a way for\ngranted i'm not sure i'm not sure\nbecause of as i mentioned there are\ndescriptions in a high-level language\nand the language isn't the same\nand when you translate it you lose stuff\nthat you cannot regain\nokay when you translate the language of\npsychology\ninto the language of neural\nevents let's say i say neural events you\ncan take physics\nchemistry you can go as down as you want\num but there is a loss\nof content right and the other concern\nstill in principle is that it might be\nalso in\nagain in principle in invisible\nto make sense to make justice\nof what we call values\nso this is very overarching emerging\nentities are they do we actually have\na way to to\nto find a counterpart in in a language\nthat you can then model\nmathematically as you said now once you\nhave that\nthen you might be over the bigger\nproblem\nbut that this is and this is what\nphilosophy has been doing without\nsuccess\nmuch success success enough to help us\nat least\num so far this is really like how do i\nreduce\nconsciousness to physical events\noh you you can there's many theories um\nthat i can accept maybe we can do it\nwe still talk about mental entities in a\ncertain way in a certain language\nrather than another language because of\nreasons\nthere's reasons to do so oh you can ask\nother philosophers\nmaybe patricia churchland she she might\ndisagree with me i believe that she\ndoes uh and so on\nbut yeah i i i own\ni see this as being a major problem like\nif you're you're assuming\nsomething that is the actual problem\nif you assume that's solved i'm happy\nthanks a lot julio i think uh yeah we\nhad some\nother uh questions and so i have saved\nthe answer\ni will oh i can send you i can i can\nsave the backlog i will send 
you\neverything so thanks everyone\nfor joining thank you for the very\ninteresting talk giulio\nthanks so much for inviting me yeah okay\nbye\nthank you bye-bye", "date_published": "2020-10-08T15:52:34Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "732f6d6ed456dd03b714512e712e90f7", "title": "Morality, uncertainty, and autonomous systems (Luciano Siebert) - 1st AiTech Symposium", "url": "https://www.youtube.com/watch?v=lGux74w6H9g", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "thank you and thank everybody for\ncoming here to the aitech symposium\nI'm gonna be talking to you about\nmorality uncertainty and autonomous\nsystems and the message I want to give\nyou is that uncertainty can be a driving\nforce to meaningful human control this\nproject is developed in collaboration\nwith Catholijn Jonker and Jeroen\nvan den Hoven so first I would like to\nbring you an illustrative example\nto highlight some of the main points\nthat I'll be discussing okay let's\nconsider here a not so distant future\nyou are sitting at a desk by the way\nthis is my desk and I'm sitting here\nwith my VR monitors working and just got\na coffee and suddenly a lot of noise and\nthen I think okay today is actually it's\nThursday right it's not Monday it's not\nthe testing of the alarm sirens maybe it's\nserious I see some smoke coming from the\nhallway and okay probably I should get out\nof the building right and then I go to\nthe hallway and then suddenly I see\nRobo\nRobo is the robot for burning offices\nit's not gonna burn the offices it's\ngonna help you and so there's actually a\nnew regulation that just came around and\nthen every building in the country must\nhave a robot okay and Robo's main goal\nis to support the fire emergency\nevacuation okay so what can Robo do it can\nguide you to the shortest and best exit\nroute it can remove some obstacles on the\nway it can extinguish the fire a lot of\nthings that robot can do okay yeah we can\ndefinitely think about a lot of other\nthings sure but what do we want\nRobo to do so according to which\nguidelines should Robo act it was\nbrought up in the previous talks in a lot\nof different ways so if we don't want to\njust choose and embed some specific rules\nwe want Robo to act according\nto some values to some norms that\nplay a role in our society when should\nRobo act or not act if Robo does not\nreally know the outcome of a specific\ngoal and it tries to help you and gets you\nin a really bad situation do we want robot\nto do this\nand will we be able to justify a\ngiven action if it does something can\nit be explained can we have some\naccountability not for Robo itself but\nfor the developers or for the users and\nshould we actually develop\nRobo or not because any approach may\nhave strong ethical implications any\napproach we take so let's think about\nsome possible ways one Robo should act\naccording to what people want so\nsomeone says whoa fire get me\nout of here now okay but what does it\nreally mean how can we understand it\nit can be a command like you give\nto your phone assistant\nokay but do we really want what we say\ncan Robo really interpret these kinds of\nthings it's really not an obvious\ndecision so maybe you can make some\nrules beforehand where I analyze all the\nscenarios that can happen in the\noffices and then okay if this happens\nthen Robo should do this and if these other\nthings happen then Robo\nshould act another way but this is not\nreally possible because there are so\nmany different contexts out in the real\nworld for a simulation or a small system okay\nbut really for the real world it's very\ndifferent there are some other ways okay\nmaybe Robo could have some model of\nmoral cognition and then I want to mention\nthat's one of the things I'm\nmost investigating in my project\nthis kind of how to embed these models of\nmoral cognition into machines okay\nRobo could have a model to interpret\nwhat a person wants in a different\nsituation so it can get a little bit more\nadaptive but does this model explain or\nis it applicable for different\npeople for all situations maybe you can\nget something okay let's use AI\nlet's use machine learning and let's use\nsome automatic inference of\nhuman preferences there's something\npeople call inverse reinforcement\nlearning so the robot can kind of\nunderstand from the behavior of these people\nwhat are the values what are they\nworking towards it's also a way there are\nalso a lot of pros and cons but okay let's\nassume we can kind of teach robot how to\nact according to what people want\nso I'm trying to investigate these models\nof moral cognition and how automatic\ninference can play a role
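The "automatic inference of human preferences" mentioned above is usually formalized as inverse reinforcement learning. A minimal sketch of the idea, not from the talk: all state, action, and value names below are illustrative assumptions, and a Boltzmann-rational choice model stands in for a full IRL algorithm.

import math

# hypothetical observed evacuation behavior: (state, action taken)
observations = [("smoke_in_hall", "guide_to_exit"),
                ("person_blocked", "remove_obstacle"),
                ("smoke_in_hall", "guide_to_exit")]

actions = ["guide_to_exit", "extinguish_fire", "remove_obstacle"]

# candidate value systems, each scoring (state, action) pairs; numbers made up
candidate_values = {
    "prioritize_people": {("smoke_in_hall", "guide_to_exit"): 1.0,
                          ("smoke_in_hall", "extinguish_fire"): 0.2,
                          ("person_blocked", "remove_obstacle"): 1.0},
    "prioritize_building": {("smoke_in_hall", "extinguish_fire"): 1.0,
                            ("smoke_in_hall", "guide_to_exit"): 0.1,
                            ("person_blocked", "extinguish_fire"): 0.8},
}

def log_likelihood(utility):
    # Boltzmann-rational observer: P(action | state) is proportional to
    # exp(utility), so the observed choices favor the value system under
    # which they look (soft-)optimal
    total = 0.0
    for state, action in observations:
        z = sum(math.exp(utility.get((state, a), 0.0)) for a in actions)
        total += utility.get((state, action), 0.0) - math.log(z)
    return total

best = max(candidate_values, key=lambda name: log_likelihood(candidate_values[name]))
print(best)  # the value hypothesis that best explains the observed behavior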
but even if\nyou do that there is someone else there\nlet's say don't worry about me go to the\nfifth floor there's someone on the fifth\nfloor who really needs assistance so\nshould Robo help person A or should it really\ngo to the fifth floor as person B\ncommanded there is already the question of\nwhat's the right thing to do and there's\none more thing also for the designers and\nthe engineers who develop Robo how do they\ntranslate this understanding of the\nworld do they have any assumptions about\nhuman behavior so even if Robo can act\naccording to what people want\nthere comes a great question so what if there\nis no agreement and what if people are\nnot consistent I say now I want this and\nthen in thirty seconds I want something\nelse should Robo go up and\ndown the room trying to figure out what\npeople want would that be effective\ncould we find an overall solution that\nwould make everyone satisfied\nhmm it's a good question so let's try to\nact according to what is right okay what\nis really right Jeroen already gave a\ngreat overview of the possible ways\nlike he mentioned kantianism or\nutilitarianism which me or other people can\nunderstand there is no consensus\nI'm pretty sure even in this room we\ncould not find any consensus in\nmany different situations but let's\nassume we go for utilitarianism it's a\nconsequentialist theory what does that\nmean it's the final consequence of\nan action that really matters\nokay so let's say Robo should in a way\ntry to maximize happiness and well-being\nfor the context where it is being\ndeveloped but what if Robo blocks one\nroom just to extinguish the fire and\nkill some people inside do we really want\nthat it's saving a lot of people but do\nwe really want that let's take another\napproach we can say kantianism is a\ndeontological theory that says an action\nin itself can be right or wrong but what\nif Robo saves the life of someone\non a wheelchair but dozens\nof people get severely injured\nbecause of this so do we really want that\nokay so how to act on this there are\nalso some things I've been working on in my\nresearch project to understand\nwhat's the way to move on given this\nnormative this moral uncertainty one pick\nand choose an ethical theory\nthat you want to go on talk with the\nstakeholders involved they agree\nokay we're gonna go with this utilitarian\napproach okay and go for it there are\npros and a lot of cons\nthere are other possible ways to\nconsider multiple theories if you\nconsider several theories that may have\na relevance that people have a degree of\ncredence in you could if you think\nabout a parliament combine\nthese views like different political views it's\nnot that always the majority wins it's\nalso about trade-offs for specific\nsituations
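The credence-weighted combination of theories sketched here is close to what the moral-uncertainty literature calls maximizing expected choiceworthiness. A toy illustration, with made-up credences and verdicts, and glossing over the hard problem of putting different theories on a comparable scale:

credences = {"utilitarianism": 0.5, "kantianism": 0.3, "virtue_ethics": 0.2}

# each theory's verdict on each option, scored in [-1, 1]; purely illustrative
verdicts = {
    "block_room_to_extinguish": {"utilitarianism": 0.6,   # saves many people
                                 "kantianism": -1.0,      # treats some as mere means
                                 "virtue_ethics": -0.4},
    "guide_everyone_to_exit":   {"utilitarianism": 0.4,
                                 "kantianism": 0.8,
                                 "virtue_ethics": 0.7},
}

def expected_choiceworthiness(option):
    # parliament-style aggregation: every theory gets a voice proportional
    # to our credence in it, instead of one theory taking every decision
    return sum(credences[t] * verdicts[option][t] for t in credences)

for option in verdicts:
    print(option, round(expected_choiceworthiness(option), 2))
# a cautious system would route low-margin cases to a human instead of acting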
okay so Robo faces a lot of\nuncertainty the state space what is the\ncontext of the world what is the\nconsequence of an act and what is the\nright thing to do so a lot of\nuncertainty but how can we move on so\nI'm gonna just connect back right to\nFilippo's talk to the tracking condition\nfor meaningful human control it says\nthat an autonomous system should respond\nto the relevant moral reasons of the\nrelevant humans and the relevant factors\nin the environment in which the system\noperates okay there come a lot of\nquestions what are the relevant moral\nreasons who are the relevant humans and\nwithin which limits so what should I do\nwhat should any\nautonomous system do in this kind of\nsituation and what is the right way of\nthinking or reasoning and this is\nactually thinking about thinking or\nreasoning about reasoning this is called\nmeta-reasoning\nmeta-reasoning is how an agent can\nselect its computations\nwithout knowing previously what the\noutcome is if I can just try to connect\nback a little bit to my\nillustrative example let's say okay we\nhave a fire situation and Robo thinks\nokay what should I really do and not do\nand I'm not sure\nand there's a lot of people\nthat have a lot of strong opinions\nyou should do this you should do that\nokay but let's see we have what the\nrelevant people want which we could try\nto understand through some\nmodels of moral cognition we can also try\nto automate the inference of people's\npreferences we can combine those with what is\nthe right thing to do also we can\ncombine those different ethical theories\nthose different views with what we\nwant the system to do it doesn't mean we\nwant the system to act in all\nkinds of situations\nwith all kinds of actions and I\ndon't mean that the autonomous\nsystem should be able to choose\nby itself all the time it should\nhave a lot of human interaction human\nevaluation and fallback systems in\nspecific cases so that can lead us to a\nmore safe approach to the development of\nsuch systems and now I would just like to\nconclude here just bringing in\nCatholijn and Jeroen here and I would\nlike to say let's integrate let's talk\nabout possible way outs let's bring some\nspecific contexts where this kind of\nthings can be applied thank you very\nmuch\n[Applause]", "date_published": "2019-10-29T15:46:12Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "212de2a76eaa21d4bf7261731fd03d80", "title": "AiTech Agora: Alessandro Bozzon - Designing and Engineering for Meaningful Human Control", "url": "https://www.youtube.com/watch?v=WnVvUByk26c", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "i'm hoping that it doesn't disconnect\nagain\num well first of all thank you for uh\ninviting me here today\nuh also as a new member of the core team\nof aitech
it's a pleasure also to\npresent myself and tell you a little\nbit about how i think and what i do\ni'm a professor in human-centered ai i'm a\ncomputer scientist\ni was until three years ago in the\ncomputer science faculty specifically\nthe web information systems group\nwith which i still collaborate i still have phd\nstudents working there esteemed\ncolleagues and others that we work\ntogether with and enjoy remotely uh but now i\nlead together with gerd kortuem the\nknowledge and intelligence design group\nin the faculty of industrial design\nengineering\num\nand i'm an avid collaborator in the\ncontext of the designing ai program with\ndave for instance\nand with others so we also do a little bit\nof ai in there and of course as you can\nimagine we take more of a\npeople-centric approach or a\nhuman-centric one being let's say a\nfaculty of design that is to some extent to be\nexpected now what i want to do today\nis simply to share with you some of the\nreflections that i let's say developed\nwhile reading\na paper that the aitech team recently\npublished i'll talk about it in a second\nand meanwhile also contextualize a\nlittle bit the work that i've been doing\ntogether with well\nmy team and brilliant students and\ncolleagues in the last few years\nuh i'm probably going to go either very\nshort or very long because uh\ni don't know anymore how to present uh\nin person uh so please somebody take a\nlook at the clock uh if there is\nsomething that you would like to discuss\nwith me right away please just do it\nlet's not wait until the end i would\nlike this to be more of a discussion\nthan a presentation if possible\nso the first thing you might wonder is why\nthat particular background\nso this is actually an artwork from kent\nrogowski\nuh kent is an artist that uh well\nrealized that puzzle makers use all\nthe same cutouts for puzzles\nso if you actually combine multiple\npuzzles you can create one that somehow\nfits together\nand this is nice i mean i think it's\nalso mesmerizing in a way but to me it's\nalso evocative of how we do ai these\ndays we have multiple pieces that\nsomehow fit together\nwe engineer them together and then yes\nsomething comes out it's not exactly\nwhat we wanted\nbut it's sort of okay\nso a sort of a reverse engineered way to\ngo about things\nand this is to me also the starting\npoint\nso\nwhen we\nlook at ai techniques of course they\nare increasingly performant and\nvery very nice you adopt them\neverywhere\num but the thing is that we know that\nthey also have a lot of limitations\nbecause of their data needs and\ncomputation needs that bring in issues\nthere\nfor instance\nothers for those others\ni know because you say so yes\n[Laughter]\n[Music]\nbut we know that there are many many\nissues right we know that they are not\nvery robust actually they are very\nbrittle to out of distribution\nprediction to perturbation in general on\nthe input we have huge issues of uh\ntransparency explainability so we know\nthat especially the most complicated\nmethods deep learning and the like\nuh we know that they sort of work but\nyou don't necessarily know why and this\nis actually a very active and\ninteresting area of research\nand ultimately there isn't\nreally a lot of intelligence in the ai\nthings that we use i mean they are the result of\npattern recognition working perfectly\nfine but not necessarily what we would\nconsider to be human-like intelligence\nright i don't want to go into the
beta\nwhat is intelligence but i hope you get\nwhatever\nand here i somehow ascribe to some time\nto the view that uh jeff bigham has jeff\nis an associate professor at cmu\nand also working machine learning tv in\napple where is somehow\nsome time ago already trying to\ndistinguish you know\num\nhide the ai in useful ai\nright where useful ai is the one that we\nactually had to use in practice\nand when we use it in practice it is\nless and less about fancy ai models that\nare to some extent saturating again big\ndiscussion if learning is hitting a wall\nor not i don't want to be to go there\nwhere i want to go is the fact that the\nmoment\nwe bring these techniques in the real\nworld\nthen the set of challenges tend to\nchange from a purely technological one\nto a broader set of challenges right\nsocial technical sociology ethical\nlegalism\nso these are and by the way if you have\na presentation being with either so\ncheck them because they have one so if\nlet's say yeah the iaf the part that is\nactually below the level right is the\none about human ai interaction right and\nbroadly speaking\nor humanize systems as in the case of\nthe paper so it's more about our formula\nproblems how the systems are evaluated\nhow the data is collected this is\nsomething that is really close to me and\nsomething i've been working on for many\nyears\nand also how the interaction works and\nthis is where we're going towards\ntoday's practice\nthe way we look at it from a design\nperspective or let's say that we like to\nlook at it from the faculty is that what\nif ai was designed instead of\nengineering engineer\nright now we engineer it\nvery well\nmost of the time but what if you want to\ndesign what if instead of doing\nsomething and just dealing with the\nconsequences we actually tackle the\nuncertainty right away\nin order to deliver something that we\nwant and not only something that's sort\nof happening pretty much like the puzzle\nthat we were talking about\nbefore\nin this context i really welcomed and\nenjoyed reading this paper first\nauthored by well luciano yellow guinea\nand others from the tech team about\nmeaningful you know controlled\nactionable properties for ai system\ndevelopment i really liked it\ni think it's a very useful reason\nuh i'll talk about it in a second and\nalso maybe a couple of reflections over\nthat\nalso because it tries to\nbridge uh again between the discussions\nthat are more on the ethical level and\none they're more operational and i think\nwe need more of that work right\ngoing really into the action notable\npart of the\njob of designing and implementing ai\nso quick summary things that are really\nnot with me first of all well why do we\nneed human uh meaningful control many of\nthese things by the way the team might\nprobably already discuss them\npointlessly so i'm sorry for the\nrepetition is also for me to\nself-reinforce the notions here\num because we want humans to remain in\ncontrol of these systems right\nand be more morally responsible for them\nso we want to make sure that these\npeople are aware and can\nare of the limitations and possibilities\nof these human eye systems and can\nactually meaningfully deal with them and\nhave control over them\nuh and i like\nin this formulation also this idea of\nyou know relying on these two necessary\nconditions of tracking and tracing that\nare small rooted in the work of filippo\nand\nso tracking is about the system being\nresponsive to the human moral reasons\nand the tracing is about what the\nbehavior 
to be traced back to human\nnations so you want to have both these\ndirections of the relationship between\nthe human and the humans in their own\nvalues\nand\nand\nmore also in how the system gets\nimplemented it executes interacts with\nthe real world and back\nthe whole paper is about four properties\nthat are somehow defined in there uh\nthat i just have here i mean we can read\nthem together but\nthey're basically about\nexpliciting teasing out and eventually\nverifying in a way or at least\ncontrolling the moral boundaries\nof a particular ai unai system so we\nneed to really tease it out it should\nnot be implicit it should be explicit we\nshould be out there and the system\nshould be able to control for it somehow\nunderstand when it goes out of those\nboundaries\nit's about having shared representation\nof the problem in the domain and also of\nto some extent how the human and the ai\nsystem operator\nso these are mutually compatible\nrepresentations\nit's about the fact that humans should\nhave the ability and authority to\ncontrol\nthose those ai systems\nuh\nso it's a matter of competence but also\nthe matter of the authorities also i\nhave been entitled to be allowed to do\nit\nand it's also about the link between the\nactions of the humans and the action of\nthese agents and the two of them somehow\nbeing related so that there is no\nuh\nuh deviation due to i don't know\nemerging behavior from the i system that\nwere not\ncodified in the people\nit's a real really quick summary i left\nout the\nbest part of it i'm sorry if this is too\nshort the paper is there to be to be\nread please do because it's a very nice\none\nbut since i have only 20 or so minutes\nto talk about the rest i would like to\ngo on about the rest and maybe we can\nhave a discussion about the paper so\nsomething really stood out with me and\nsomething that\ntriggers some reflection from my side\none of the\nuh actually also spoke about it during\nmy inaugural lectures here a couple of\nyears ago\nwhat i really believe is the main\nproblem when it comes to our system is\npeople\nright people are actually the issue here\nright technology is there is a lot to be\ndone but it's really about people about\nthe humanities human ai systems and to\nme the first question is who are these\npeople\nbecause we have sort of an assumption\nthat there is a designer or there is an\nengineer\nin practice the people around the human\neye systems are planning\neach one of them have different\ninterests each one of them have\ndifferent competencies\nresponsibilities\nand sometimes when we talk about you\nknow moral\nboundaries i wonder whose boundaries\nbecause there are so many ones that need\nto be aligned\nright when we say in the paper that the\nhumans that are responsible for\ndesigning the behavior the system should\nbe aware of the moral constraints and\noperate according to that i wonder for\ninstance is this also applicable for the\ncrowd workers labeling\nthe data that goes into the system\nhow do we guarantee that that actually\nhappened right auditors have a different\none so as you see to me that the\nthe\nthere needs to be an acknowledgement of\nthe fact that the plurality of\nindividuals that actually\nare involved in designing the behavior\nand running the system is so big that\nsometimes where to put exactly you know\nlike the the pressure is not super clear\nagain i don't think this is a new\nargument when it comes to freedom of\ncontrol i think it's actually also in\nthe literature but to me it 
becomes\nreally evident the moment you really\ndesign and engineer these systems because\nyou really have to take into account all\nof this i will give examples later on\nanother one for me that is also\nsomehow important to look at is which\nhuman-ai systems are we talking about\nin the paper we\ndiscussed two examples uh self-driving\ncars right so autonomous systems\noperating in the real world so self-driving\ncars or autonomous robots i mean\nright now we have a whole bunch of\nautonomous systems out there\nand this is of course important right it\ngives us context but this context is\nso needed\nand so important that it definitely\nscopes the\ngeneralizability of what we\nbelieve to be meaningful human\ncontrol\ni suspect i believe that it declines\nvery differently the moment it gets\ndiscussed in the abstract versus in the\ncontext of a single system so\ni believe that a self-driving car and i\nthink you do too\nit's not the same thing as a\nlarge-scale language model that can be\nused to co-author text\nor as much as we do these days with\ndigital twins\nwhere we create digital representations\nof real-world people and systems for the\npurpose of studying them or simulating\nwhat happens in these systems it's not\nreally the same there are some\nfundamental differences in how these\nsystems are designed how they are\ncontrolled what are the control points\nand then i wonder how much\nthe operationalization of those four\nproperties can be generalized\nand i know that this is not a new uh\nargument in the area it's actually\na pretty common one\nbut it somehow stood out when i was reading\nthe paper and thinking about my own\nexperience as a scientist as an engineer\ni've been also playing around with the\ntechnology for a while\nso that's very high level\nmy two questions right so who are the\npeople and what are the systems\nthese are the sort of problems where we\nprobably need to engage with the\npracticality of both those questions\nbefore we are able to actually come up with\nan actionable solution\nnonetheless i think that the four\nproperties actually stand but perhaps\nwith\nsome caveats and that's actually\nwhat i want to talk about in the rest of\nthe presentation\nfrom the people's side\nin practice uh if you're thinking about\ndesigning human-ai systems\npeople are the raw materials people are\nthe\nentities from which we extract\ndata traces whatever right and there are\nseveral problems with this\nthat also touch upon the issue of\nmeaningful human control\nfirst of all where the data are coming\nfrom and we have been experiencing this for\nthe last 10 years at least\ni just call it the big data availability\nfallacy you can call it the big data\navailability bias the idea that just\nbecause we have a lot of data\ntruth is out there to be grasped\nand this is now becoming asymptotically\nan issue\nbecause we are now really engaging with\nweb scale\ndata sets\nand this is for instance right a\npaper recently published one or two\nweeks ago from facebook\nthey published their own large language\nmodel the equivalent of gpt-3 but\ndifferently from openai which is also\nironic in a way they released everything\nthey released the data set they released\nthe code they released a logbook\nthey are doing everything by the book\nthe best practices by the way of\nresponsible ai right and also there is a\nnice paper where they basically um\ndiscuss the potential harms and\nlimitations of the model and what it
boils\ndown to if you read it i just put\nsome excerpts there\nissues of\nbiases encoded in large models and\nissues of\nundesirable potentially harmful behavior\nlike toxic language and the like\nactually come out of the data they seem to\nbe strongly correlated with the data\nthat you use which is intuitive of\ncourse if you train your model on\ntwitter data and on twitter people just\nthrow insults at each other you won't get\na model that you know is very likely to\ncreate polite and artistic and poetic\nsentences out of there but of course\ngarbage in garbage out i mean what is\nthe news here\nbut the\nimplicit assumption that if we go large\nenough all of this becomes less of an\nissue turns out to be probably a wrong\none\nsome people are now riding this you know\nthis wave of uh\ndata centric ai as\nthey call it now\n[Music]\nsomehow i like the fact that now you\nneed to get good data\nand how do we get good data if you don't\nfind it out there in the world and you\num actually want to create it well there\nis a technique that we have been\nstudying for more than 10 years and was\nout there for over 10 years which is\ncrowdsourcing\nthe idea to reach out to large amounts of\npeople proactively to ask their\ninvolvement in data management\noperations be it creation\nuh annotation manipulation whatever it\nis\nand one of the main\ntools that we have are crowdsourcing\nplatforms actually micro task work\nplatforms like mechanical turk prolific\ncrowdflower you name them\nwhat surprises me still sometimes here\nand this is a little bit of a\ni don't know how to call it maybe a\nlittle bit arrogant as a perspective is\nthat people somehow rediscover the warm\nwater or the hot water when it comes to\ndata quality because there have been\ncommunities out there for more than 10\nyears studying\nhow to get people to be part of\ncomputational processes like data\ngeneration\nand so when i find\nstudies or things like this that you\nknow like this excavating ai from kate\ncrawford the classic critique of the\nissues of labeling data as\nyou know used for machine learning and of\nthe quality of these labels\ni wonder whether or not\nthis is the result of best practices\nor the result of\nassuming that you can simply consider the\npeople that you engage and this again is\nspeaking to meaningful human\ncontrol and\nand the quality of the work\nthat they are fungible that they are not\nreal people that you can just pay them\nby the minute and simply rely\non their ability to empathize and to\nidentify with the task\nwhich is simply not true if you don't do\nthat right and back in the days when\nthis was done for imagenet\ni must admit the knowledge about how to\ndo this was not as developed as it is\nnow so it is easy to criticize\nback then but now we know more okay back in\nthe days it was not there and what you\nget is yeah you can understand\ngarbage luckily some work has been done\nin that respect and i want to highlight\none work that uh\nmy colleague from computer science\nujwal for instance recently published\nabout uh you know things like cognitive\nbiases in crowdsourcing\nhow to design crowdsourcing tasks a\nchecklist somehow inspired by\nsimilar techniques from behavioral\neconomics right\nuh what actually can go wrong if you\ndon't design your crowdsourcing task well\nwhich practically means if you don't\ndesign your data collection and annotation\nwell so issues of\nthe affect heuristic availability bias disaster\nneglect
right this is really speaking to\nthe problem here right\nare workers who commit to my tasks\nbeing properly informed about the\nconsequences of their participation\none will wonder is the one\ncollecting the data aware of that but\nthat's a different story but are we\nreally i mean there is something out\nthere but that is still not\nin the\nlet's say in the\nmindset of practitioners and\nscientists to the point where we still\nsee\nsomething like this\nthat's a paper that was published three\nor four weeks ago\nand probably you saw that to me it was a\nsort of like a car accident happening on\ntwitter uh because these two authors\nright they tried to do everything right\nthey wanted to collect a data set of\nhuman assessments of some qualitative\nproperties of people through their faces\nassuming you could\nthey dealt with the privacy issue\nbecause they didn't use real faces of\npeople but they generated synthetic\nfaces controlling for some of those\nproperties they\ntried to do everything by the book and\nthen they went down to mechanical\nturk they gave one million images\nto be evaluated by 30 participants and\nwhat you get is strong biases\nobviously\nright this could be seen by things like\nokay\npeople evaluating which faces are\nmore typical or which faces are more or\nless like you if you look at this image\ncan you imagine the demographic\nproperties of the crowd workers\nyes i have a question about this um\nso\nis this\ndo you think\nmainly due to the fact that they're\nfaces and that they\nhave strong opinions about these like\nwhat if they were abstract images\nand assuming that people could\nactually rate abstract images would\nthis still occur\nthat you know every type of biases\nright i mean\nimagine\nsomething like a simple\nrecent example that just came to my\nattention imagine people assessing\nthe valence or you know like the\nsentiment of a particular\nsentence based on emojis\nthere is a cultural divide between\nyounger people and older people in the\ninterpretation of the emojis right the\nsame smiling face apparently\nreads differently for you know younger\ngenerations more like a yeah sure than a\ngenuine smile so\neven for something like emojis that are\nabstract there is a problem of\nbiases so demographics culture uh the way\nthat you pay the incentives that workers\nhave all of that contributes to the\nquality\nand there's plenty of empirical evidence\nabout this it's not uh i mean it's not\nnecessarily new right we are discovering\nnew ones but uh\n15 years of research in that respect\nyeah i mean for example for images like\nlandscapes the uh\nyour graduate students\nyeah and still there they exist\neven though we barely recognize uh\nyes because i mean when we go to the\nrealm of subjectivity we expect people\nto diverge in what they consider to be\ngood bad beautiful not beautiful\ninspiring not inspiring\nof course\nmachine learning systems are\nnormalization machines and i could spend\nanother half an hour talking about the\nproblem of how do you aggregate\nmultiple assessments into a\nsingle one because ultimately you want\nto learn one label\nthere is a very nice paper that won a\nbest paper award this year at chi from\nstanford from bernstein and others where they\nactually have something like\naggregation by jury it's a\ndifferent approach to not simply\naggregate an average that is also something\nwe looked at a few years ago with a\nstudent of mine with agathe when she was doing\nher master thesis we also worked on this problem\nfor the problem of toxicity assessment\nwhat is considered to be toxic or inappropriate\nvaries a lot so every time you go into\nthe realm of the subjective you have to deal\nwith these biases\nthey are almost inevitable
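A small illustration of why jury-style aggregation can disagree with a pooled majority vote on subjective labels; the data and group names below are made up, not taken from the paper:

import random
from collections import Counter

# hypothetical toxicity labels (1 = toxic) from two annotator groups
labels_by_group = {"group_a": [1, 1, 1],
                   "group_b": [0, 0, 0, 0, 0, 0, 1]}

def pooled_majority():
    # plain majority: the larger group dominates the learned label
    pooled = [l for labels in labels_by_group.values() for l in labels]
    return Counter(pooled).most_common(1)[0][0]

def jury_majority(composition, seed=0):
    # sample a jury with an explicit composition (here 2 per group) so that
    # minority perspectives are represented by design, not by head count
    rng = random.Random(seed)
    jury = []
    for group, n in composition.items():
        jury.extend(rng.sample(labels_by_group[group], n))
    return Counter(jury).most_common(1)[0][0]

print(pooled_majority())                            # 0: group_b outnumbers group_a
print(jury_majority({"group_a": 2, "group_b": 2}))  # may flip to 1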
yeah so\ndespite progress we are still there\nand this was last three weeks ago\nbut it's not only about crowdsourcing\nsomething that i want to also raise a\nlittle bit of attention on is that the\ncompanies\ncompanies that are in the business of\ndata management they have literally data\nfarms out there\nso it's not only your mechanical turk\nwe have companies whose business\nis labeling data\neverywhere\nit's not only china and india they are\neverywhere\nand\nthey are feeding the data that\nultimately goes into those algorithms so\nit's not only a problem of scientific\nawareness it's a problem also of\npenetration in the\npractitioners' world right in the real world\nwith big issues of exploitation i\nmention this all the time because i\nwant to remember we are basically\nfueling our ai economy on a new\nform of\nday laboring\nas it was you know like 1500 years ago\nif you look at the\nresearch from 15 years ago in this field\ni think all the field was guilty of not\nacknowledging this problem enough i\nthink these days we say\nif you're using crowdsourcing please pay\npeople well enough and if you're using\nthem at least use a platform that is\nknown to care for the workers so just\nreally avoid mechanical turk use\nsomething else\n[Music]\nso\nthe data problem\nalone\n[Music]\nit's a very concrete one that really\nmakes me question what it means\nto have meaningful human control over an\nai system\nwhen much of the problems stem from the\ndata that we collect\nthe data that we collect is not really\ncurated because most of the time we just\nget it out there we just reuse it\nand then we take it from there again it\nseems to me like we're engineering\nsomething and we deal with the\nconsequences we're not designing for\nwhat we want\nit doesn't invalidate the concept of\nmeaningful human control but to me it\nrequires perhaps as i said before a\ndeeper understanding of who are the\npeople and what\nis the context where all of this\nhappens\n[Music]\nfive minutes on the clock a few\nreflections on the properties\nfirst one how do we\ndo that right so how do we have a\nsystem where the operational\ndesign\ndomain is explicit\nand i don't know if you're familiar with\nthis video i don't know if the audio\nworks honestly here\nhave you seen this video step one\nget two pieces of bread out\nget a butter knife and yes\ntake one piece of bread and spread it\naround with the butter knife no the\nbutter i'm just doing what it says it\nsays take one piece of bread\nspread it around with the butter knife\nhold on\nget some jelly\nrub it so you you get the point right\nso if you're teaching an algorithm your\nknowledge elicitation matters if\nyou don't do it well the algorithm\ndoes not make any difference right so\nthe problem is a problem of\nknowledge engineering knowledge\nelicitation\nwhich\nis actually\na very old problem\nin computer science\nlet alone the question of who gets to\ndecide you know the domain the moral\ndomain right how do you tease out all of\nthat\nknowledge engineering has been\nas a field of research and practice out\nthere for a long time there's a\nlot of tradition it is possible but it's\nvery
difficult\nwe don't know how to elicit all this\nknowledge about the people the\nenvironment the conditions the morals\nthat's probably gonna be incomplete\nsecond of all even more complicated is\nthe type of knowledge right we have\nmany different types explicit tacit general\nspecific situational\nand to conclude when we elicit it\nfrom whom do we\ntake it how participatory is the process\ndo we ask 15 people do we ask 1 000\npeople who is there\nnothing new very classical but uh it's\nan issue so\nsomething you need to contend with to\nbring this vision of meaningful human\ncontrol into practice and there is some\nnice work out there i mean actually\nthe group of jie is uh\nactually working on this this is a very\nnice paper that\nthey published this year at dublin\nabout\nactually eliciting this type of\nknowledge so this is a work that somehow\nintersects knowledge engineering and\ncrowdsourcing it is a game with a purpose\nit is actually out there you can play\nwith it you can try it\ndesigned for the purpose of getting this\nknowledge out a very different type of\nknowledge at scale\nit's a nice paper i just don't have\ntime for it if you want to know more just\nask jie it is\nactually a very very nice nice work\nso how do we get it out\ni don't know but work is needed quite\na lot of work\nthen i just gathered\nthese three properties together\nbecause\ni think we have an issue here that is an\nissue of semantic gap\nwe're talking about shared\nrepresentations between the human and the\nai we are talking about\nwell\nhow do we specify\nhow things are what we want\nif only we could\nspecify precisely constraints\nand if we have this mental model that we\ncould even enforce those constraints\nsomehow\nbut the issue is a very old issue\nin computer science which is the\nproblem of the semantic gap\nhow do we\nexpress problems\nand how does a machine actually get it\nso there's also something to be done\nthere and we've been working on that\ntoo so this is a paper\nfrom last year by agathe uh a phd\nstudent of ours\nwhere we've been actually trying to look at\nthat from the perspective of computer\nvision models\nmachine vision is used everywhere uh it's a\nproblem of interpretability or\nexplainability if you want it's the idea\nthat you want to understand how a\nparticular machine learning system\nworks and what does it pick up we have\nsome tools that sort of work like you know\nsaliency maps they give us a sort of\nitem based\nunderstanding of what works what doesn't\nby the way we have a lot of tools also\nfor structured data i come back to that\nin a second\nbut these vision systems are the ones\nthat we spend a lot of time on and so what we\ndid there was to design and engineer a\nsystem to actually collect\nsemantically rich descriptions of\nthe concepts\nthat actually seem to play a role\nwithin a machine learning system that is\ntrained on a particular data set\nbecause if you can\nthen you can read you can reason you can\nexplore and you can debug\nwhich is actually the next step this is\na paper from this year\nwhich is really about debugging right\nhaving the mechanisms the tools the\nsystem that allow you to express\nconstraints that you want the system to\nhave\nif you are classifying a picture of a\nparticular scene and the picture\ncontains these entities but not that one\nthat should be the class\nyou define beforehand what\nare your expectations what do you think\nthat the model should know
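A minimal sketch of what such declaratively expressed expectations could look like; detect_concepts and model_predict below are hypothetical stand-ins, not the actual system:

# rules: (concepts that must be present, concepts that must be absent,
#         expected class)
rules = [
    ({"bed", "lamp"}, {"stove"}, "bedroom"),
    ({"stove", "sink"}, set(), "kitchen"),
]

def violations(image, detect_concepts, model_predict):
    """Return (rule, prediction) pairs where the model breaks an expectation."""
    concepts = detect_concepts(image)   # e.g. {"bed", "lamp"}
    predicted = model_predict(image)    # e.g. "kitchen"
    broken = []
    for must, must_not, expected in rules:
        if must <= concepts and not (must_not & concepts) and predicted != expected:
            broken.append(((must, must_not, expected), predicted))
    return broken

# toy usage: systematic violations tell you where to go back and intervene
print(violations("img_001",
                 detect_concepts=lambda img: {"bed", "lamp"},
                 model_predict=lambda img: "kitchen"))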
and then you can compare that with what the model actually picks up\nif you can do this systematically then you can go back and improve and intervene\nand i don't think we were the first and i don't think we were the last\nactually this year and i'm not exactly sure where but microsoft came up with something like this for debugging natural language models\n[video]\nmodels that work great on standard test sets often fail in the wild you need to test your models beyond simple train test splits\nbut how\nthere are two types of approaches those that help people write tests and those that automatically test the model\nthe good thing about people is that they know when the model is right or wrong but writing tests is slow\nin contrast automated methods are fast but they can only test limited aspects of model behavior\nwe propose a human-ai partnership called adaptive testing that combines the strengths of both humans and large-scale language models like gpt-3\nin general the ai generates lots of tests designed to highlight failures while the person decides if model behavior is right or wrong\nhere's how it works you start by typing in a few input examples in a topic representing an aspect of model behavior such as to-dos that are in the past tense\nif the model is good it will probably pass all these initial examples\nthen you can ask the ai to generate a large batch of similar tests in the current topic sorted by how likely they are to reveal failures\nyou can then label a few of the top suggestions that are in the current topic and then repeat the process\nthis results in a growing set of organized tests optimized to break the model being tested the ai can also suggest topics\n[end of video]\nand we also require interfaces we also require the right tools to give to the different people that actually have a role in this system\nto understand what's going on\nso this is certainly one work at the moment the other actually already ended\nagathe again did a fantastic job in co-designing using research through design as a methodology\nworking also with some students from industrial design to design evaluate prototype and then implement albeit as a non-working prototype\na system for explainability\nthat actually capitalizes on these higher-level semantically rich labels to explore the model behavior to understand how the model works\nsomething accessible something that doesn't require expertise and that is from a mental model perspective more aligned with how people think not how the machine thinks\nultimately we are designing for and with humans right so there should be a push to focus more on that\nto conclude\nthis is something that was mentioned in the paper designing for meaningful human control requires designing for emergence\ni totally agree with that and this is where we're going\nif you look at what's out there right now this is exactly the direction\nwe need new tools so this is again a very recent paper from google at chi\nwhere they designed developed and tested a tool for quickly prototyping prompts for language models\nto give the instruments to the people that are in the process to understand what's going on\nright so in this case it's a language model so you can easily create prompts modify them feed them to the language model and see what comes out\nagain to understand things like potentially harmful behavior and
biases\nbut we need to have the same classes of\nstool also for the other type of ai that\nwe wanted to design if our goal is as it\nshould be to design and not engineer or\nat least the first design in the mgd\none last thing and then i really don't i\ndon't have time to talk about it but\nplease look at it this is a fantastic\nwork that a phd student about ours just\npublished at\nthe sector with\ndave as a co-author a value-based\nassessment of ai systems is a framework\nshe went through an insane amount of\nliterature effect the paper was accepted\nwith stellar reviews\nuh and she put together a nice model a\nnice framework to reason from from let's\nsay\nuh from a value perspective down to the\nmechanism and the tools available right\nnow instead of vr to assess and\neventually later on to design ai systems\nso this is just a review what we are\nworking on right now is actually to\ncreate those tools that's what is going\non\nif you want to know more we can talk\nwith myself of miraya who's actually the\none doing most of the work i'm just here\ntalking about it\nthank you very much\n[Applause]\nwe don't have questions in the chat yet\nbut uh\npeople are in uh connected feel free to\nask questions i have a comment\nwith i i love that you unpacked our\npaper it's always like a privilege for\nsomebody who just published the paper to\nsee the perspective of others and the\nappropriation\nand\nactually\num\nbut you know you you you the feeling\nthat a god is like a cold but\nbut this opens up a lot of uh questions\nor but you know there is a lot of work\nthat is being done in this space to\ncover what you're saying should be done\nand at least in my perspective or my\nview is exactly what the paper was\nsupposed to do like provide\nand handle for people who are who are\nalready actively contributing in these\nspaces to see okay if i really bring\nthis forward it's covering this space\nit's contributing in this space so\nit's like a catalyst for so in my view\nthese properties are a way to catalyze\nongoing efforts\ntowards a vision for me my control and\nin that sense i i really thought wow\namazing\nalexandria is really killing the the\nspaces and that this doesn't mean that\nwe solved something i i hope i didn't\ngive the impression that i was i mean i\num\nwhat i was trying to do was mostly to to\nto discuss the paper and connect it with\nthe work they've been doing so the i\nhope that what did come out was not a\ncriticism or something that was not\ncomplete\num\nas an effort as an instrument is very\ngood and i really liked the way he\ntackled it and so there is no battle in\nthat respect\num the only if you want a\nreflection that i have and it's not a\nreflection on the work is a reflection\nof the field perhaps or deflection of\nwhat we are doing is that\nin order to do what we should do we\nstill still have a long work\nto walk\nwell long road to work some stuff are\nblue sky because we haven't been doing\nthem yet\nothers unfortunately are all problems\nand\nand yeah i mean\nit's not about the people right it's\njust about the status of the field at\nthe moment\n[Music]\nwas very nice to see how it connects us\nto your work so one thing i think is\nimportant also to\nsupport everyone i think or\nwe should that\nyou saw on paper that is different\nbetween like\nfully less ethical vi\nand meaningful human control we don't\nclaim that they are the same because i\nmean it advised me my visual morality\nmight be something different for someone\nwhat we try to do in control is 
there's\nsomeone that is responsible for this\nand giving this um\nvalues cultural differences there's so\nmany things and we say like\nthis is this probably don't know exactly\nit's a bigger issue so we're going to\nsay okay we want to have some\nattribution of responsibility if you did\nthis\nyou should be responsible for this\noutcome this group of people disperse\nother things so um\ni i like not when you mentioned your\nwork on crowdsourcing there and\nespecially when you mention like the\nproperties\nrepresentations there i could see how it\ncould work have to to reach this gap to\nget a better knowledge to me but i do\nfeel like the question of responsibility\nbecause then\nyes one step even further away because\ni'm thinking okay let's say uh the\nsemantic gap for example i wanted to\nbetter define this and\nif that's wrong there is some impact and\nthere's some consequences\nwho should be responsible universal view\nis like is the crowd working who design\nthe crowd working experiment someone\nelse because that\ndefining this creates some additional\nlayers it does but i mean responsibility\nand human control i don't think they are\nthe same thing either right\nso you can be responsible for something\nright but to be meaningfully controlled\nrequires many of those properties to be\nthere\nso i don't uh i don't know if if your\nassumption here is that when you control\nand responsibility are the same thing i\ndon't think so right one probably\nimplied the others\nor the others\nthey are connected so\nit is a very interesting question for me\nwho ultimately is responsible for a\nsystem if you know you take the full\nvalue chain you know from the data to i\nmean\nprobably is the company i don't know\nright i guess this is the part of\ndiscussion but it will be productive\nto put the the locus of control\nmeaningful control just there\nbecause the control of how the system\nbehaves actually all along this pipeline\nand uh and and this will be\nto some extent i mean don't know if the\nanalogy here older than just making one\nup but if you're buying a car that you\nknow doesn't work as expected of course\nyou hold\nthe manufacturer responsible for it but\nif your question is is there's the\nmanufacturing control the production\nprocess\nthe answer to this question\ngoes deeper\nright when they they call back a car\nbecause something does not work with the\nabs right of course the manufacturer is\nresponsible\nbut\nwhere the problem is in the value chain\nsomewhere else right because if you want\nto be in control\nthat's why we have all the team\nprocesses we have somehow\nmany different approaches to somehow\naligned into\nthe control and the responsibility yeah\nright i don't know if that allows me to\nbut this was my point by the way yeah i\nmean i'm not trying to say since nobody\nis responsible since everybody is\nresponsible as possible that's not my\nproblem\nwe don't try to say that they are\nexactly the same concepts but what we\ntry to do is like this\nwhat is a reasonable attribution\nresponsibility for example the vehicles\nokay they're in the sense of like a\nlegal responsibility and liability you\nsay like okay\nsomeone gives you a form and say okay\nyou have to supervise this if anything\nhappens at your phone and you sign a\npaper and something happens\nare you really really responsible if you\nreally i mean like uh\ncould you do something about it and so\nthat's also i mean you could not be you\ndidn't have any meaning for human\ncontrol when that was missing okay 
so\nthat even if you have a legal paper that\nscientists say that's not a meaningful\ncontrolling\nlogically it's my turn\nhow it is expressed or realized under\nthe different stages\nfor example for\nai practitioners meaningfully\ni guess it would mean for debugging\nso when it would mean\nhow\nhow human\nimprove the model\ni guess\ndebug another\nbut for an\nend user\nmeaningful control could be a\ndecision-making level\ni guess so there's meaningful control is\nfor all of these like\nstatuses right who makes the decision\nwho\ni don't know i will turn the question to\nthe experts\ni would say\nboth in the design phase and the use\nspace there\nthere might be different\nreasons that needs to be uh traced\nto place responsibility\nso you might have renewed some\nresponsibility like somewhere different\nfrom\nuh\nso\nin multiple places\n[Music]\n[Music]\nthis\nwhole idea discussion with human control\ncame from the debate of those black\nsisters\nand there was like okay\nif there is some causality killing\nsomeone there and also\nall\n[Music]\nresponsibility\nand maybe we have to make a distinction\nthat was troubling us for a very long\ntime meaning we want to draw\nso a system underground control doesn't\nmean that is a medical system\nit's a very big decision it means that\nin any point of time somehow you can\ntrace back a responsibility\nsomebody that is responsible for an\naction for the first election but that\ncan be also that's the system needs\nsomething\nor the system is designed to do\nsomething\nbut\nyou can go back and say then you can\naudit that yeah it's yeah\nthere's a question for from cars is he\nalways has\nexcellent questions\num\nso the question is uh\nthere are a lot of examples of tools if\nthere's any work out there trying to\nunderstand how these tools are actually\nused in practice\nthat correct cars\nyeah sort of thing you can also just say\nthe question if you want and you can\nwell the the thing is that no we\nwe something that we did there was this\nwork of\nagatha on\nthe\nexpendability tool\nuh there we\nengaged as much as we could with\npractitioners of different types\nto check what they use right now and how\nthey will use the tool that we design\nthere is an increasing body of work\ncreated especially by big companies\nabout it so if you look at kai this year\nand last year there are several papers\nfrom for apple from microsoft i think\neven one from google where they actually\nat least try to look internally of how\nthese particular tools are used\nespecially expandability ones\nand there is also some work on modern\ncards\nwhich is another\nbut but yeah so something i mean right\nnow that the the momentum is growing in\nthis particular and i think well this is\nalso where the opportunities are from a\nresearch perspective\nto go really and trying to understand\nhow all of these happens to practice\nmaybe that means something\nwhich is super because it's always\namazing conversation\ni also have a question about the\nresearch opportunities yeah um i'll\nstart a little bit far i mean\num so just uncover\na little bit of the process how we came\nup with the four properties so we talked\na lot\nwe talked a lot and i mean a lot and\nnervous\nyes we've\ngone through a few\ncase studies\nmany more than was included in the paper\nuh\nyeah the thing is that uh\nyeah correct me if i'm wrong but i don't\nthink any of us\nhad\nat least the people who were doing the\nkind of the\nwriting neither of us had any\nwell much experience working with large\namounts of 
data\nright so we were approaching it from\ndifferent perspectives and\nnon-disciplinary perspectives different\napproaches to ai but i think what your\nuh\nmany of your examples bring on top of\nwhat we've put in the paper is the\ndata part so i value that a lot\nand uh that goes to the question that\nyou phrased somewhere in the middle\ncalled yeah\ndo we think that these properties are\nwell generalizable beyond these two\nexamples how useful they are perhaps\nyeah and this is something that i want\nto find out for myself as well so i\nreally think that there is a lot of uh\nconversations to be had\nwithin a attack with a broader ai\njust on uh yeah on sanity checks running\nthese four properties by different\nexamples and trying to see right we\ndidn't really think about this uh\nwell data providers right when we were\ntalking about people who were affected\nyou're thinking of people who interact\nwith the ai people who designed the ai\npeople who govern the yeah\nitself so not the player but the role of\ndata\nas a venue for losing control let's say\nyes in the hiring case especially or the\nlabeling of uh data for autonomous\nvariables like\nthe way you you might label your way\nwith the people who provide this data\nright so yeah\nwe are talking about their demographics\ncharacteristics we are talking about how\nthe system yes exactly the bias is the\nthe way in which the ai affects them so\ni think the the checklist is super nice\nsuper cool very good yeah\nyeah so yeah this is just to reinforce\nthe\nsentiment from uh luciano and lucha i\nthink uh this is a really great\nperspective that i really enjoyed and i\nthink uh yeah it's just uh\nwe need more of that yeah\nso the paper is clear about the fact\nthat those\nfour properties are necessary but not\nsufficient probably there are more\nand that as much as i can let's say we\ndiscern it from the paper i think those\nfour are spot-on\ni don't think there is any anything\nfundamentally wrong about them\ni wonder how much people actually think\nabout them\nso if anything for instance i would be\nvery curious to understand how much of\nthese particular angles are somehow in\nthe mindset of practitioners and\nscientists\nbecause i think i still find examples\neven from scientists where this is not\noh yeah the case yeah so that could be\nan interesting feeling for inspiration\nyeah and i don't think people have\nactually consciously thought much about\nthat it's more like that we try to find\ndifferent\nright\nwe try to find different uh kind of\napproaches\non practice and we try to map them onto\nthese properties and uh not always them\nup directly but one way or another we\nfound ways of\nchanneling for instance some approaches\nto two or three properties at the same\ntime so we do think it provides a kind\nof a nice meta narrative in a sense\nwhich\ni don't think there is anything\nfundamentally so\nthe\nthe generalizability i was talking about\nwas not necessarily the properties i\nmean one would have to make an\nexhaustive right explorations to see if\nall of them are always there you know\nmeaningful all the time\nfirst like it seems they are\nuh i don't know asymptotically\nit's mostly to mediogenizability was\nmore on how to look at them from the\nperspective of the people you want uh\nyeah that should be in control\nand and to the domain\nwhen it comes to the details\nuh unfortunately\nthe one this for this system is the\ndetails letter we are not yet in the\nsituation where we can really just\nsimulate\nthe spillover 
effects or the negative\neffects so that we can control for them\nso the details is where things become\nreal and we will realize that\nevery sentimental is completely off\nyeah and as you said in the presentation\nlike that\nfor the monologue\nwhy are the people that are relying on\njust that\nmapping out who are is yeah we say yeah\nwe do a stakeholder\nbut already that is usually\npoorly defined yeah so even all the\ncomponents of that so in fact\nfrom now from here now we can start\nunpacking so much but yeah it's uh it's\nyeah quite a lot of opportunities for\nvery nice work here for us\nokay do we have a question for more\nquestions\nsorry\ni'm pleased to note that\naccording to you to what extent\nthe notion of meaningfulness is a\nsubjective option\nmy goal\nlet's say that\neven if it was objective\nthe\nnumber of dimensions through which the\nnotion of meaningful\nshould be analyzed it's so big that\nwe'll probably be in computer rooms\nbecause they're meaningful according to\nwhat particular perspective right so how\nconfident you are about the system how\nmuch you trust it\ntrust is a very loaded work it's worth\nit here\nabout too much do you rely on it how\nmuch\namount of meaningful control you need to\nhave in order to satisfy your legal or\nyou know organizational requirements so\nit's uh i don't know if it is going to\nbe easy to\nbox it in such a way you can say oh\nthat's it\nmight vary a lot i suspect\nyeah i think you know maybe for that\nthe other paper from uh\n[Music]\nprovides more definitions\nuh application scenarios in general and\nthen\nhow do you define him what are they\nmaking\nso i'm working with also\nstudent right now\num\nwhen you stop\nreasoning about something you do\nsomething\nwhere is enough\nthat's kind of a connection when is this\nmeaningful\nbut i'm curious why did you ask the\nquestion\nuh\nbecause\nnot in terms of the\nexpertise of the\nexpert who are working on the\nmeaningful human control or\ntotally on the ai systems\nbut according to the cultural background\nout there or\nsome different perspectives of their\ndesigners and developers of their\nsystems\nuh the notion of meaningful\nmight be different\nor\nmight lead to different\napproaches different results\nyeah\nand then yeah one of the kind of one of\nthe practical kind of tips that we give\ntowards them\nfor properties essentially to know all\nthe stakeholders\nin the in the design process early on\nand not when you already have any system\nso i think what i know disadvantaged\npopulations think of that no just ask\nthem before you design the whole thing\nso that is really really important to\naddress this second\nnotion of subjectivity\nincluding these uh yeah it's very\ncomplicated\nyou know the citizenship\nto design and scale\nit's uh it's actually something that we\ndon't necessarily know how to do yet\ni would say\ni was wondering what are the elements of\nthese tools are they metrics are there\nsimulations or are they just\ngui\nwell you could reduce them as to simple\ngraphical user interface on top of\npractically i think there is to\nthere is some\ndesign and size to it because you\ndon't simply wanted to surface the\ntechnological handles that the\ntechnology afford but you want to\npresent them in a way that somebody can\ndesign with\nto give you a personal example here i\nmean we were recording\nuh so this year i thought for the first\ntime a a course of machine learning four\ndesigners\nsecond year so students that have no\nbackground knowledge on algebra we\ncannot 
teach them the math\nright\nbut they want to use ai as a material\nmachining material\nif you simply obstruct up if you simply\nmap one to one\nthe concepts that come from the math if\nyou want right from the big technology\nto them they\nbasically they will be lost right\nif you want someone to design you need\nto give their affordances to content and\nto experiment a prototype to play\nand this requires thinking about how to\ndo that\nso\ncan it be only a simple quote-unquote\nuser interface of top of the\nhyper parameters that you might actually\nor the fine tuning parameters that might\nget out of\ngt3\nand then don't miss that from there yes\nthanks\nnothing\nalthough\nthis might be a bit of a long one and\nthat's a bit badly formed\nbecause\nit's okay the presentation\nand it tends to come with ideas called\ncolony and things like this\nand then they go back to cans and moral\nagents and things like this\num\nbut it seems a lot of the systems you\ntalked about\nthere's models there's systems there's\nagents and there's a bit of a blurry\nline between them sometimes and there's\na bit of a process where you start some\ndata collection and then the data set\nbecomes a bit agential in its own right\njust because it exists and comes the\nworld that way and the model like uh\nsome of the models have agency because\nas soon as you put out a model that\nclassifies\nfaces into game or strength you're doing\na thing in the world\nand then some of them are really nicely\nwrapped up in self-driving cars which\nare really clearly mobile agents in the\nworld\nuh\nand it feels like a fundamental\nchallenge to notice these shifts and uh\nto be able to talk by these things in\nevery way\ni agree\ni\ni tend to agree i mean the notion of\nagency is very distributed\nright which i think it wasn't\nmy point\nso so responsibility\nwhere does it land\nthe agencies is everywhere\nclear is not in the car itself alone\nright so\ni i i honestly i'm not sure how to\nthat's my personal deficit i guess\nhow to talk structure in a structured\nway about that\ni can start a mental model i'm more of a\ngrown up as i mean i did a little bit of\neverything in my relations career but\nlike more\ndata if you want uh statistical than a\nknowledge representation and with my\nfair share of fun there\nso maybe that's why my conception is so\ndata centric\nand that's a traditional agent modeling\ncenter\nso i'm not sure\nyeah i don't know that there are good\nanswers here\nit feels like\nan inherent message\nthat's the way i see even if you if you\nhave an agent\nbased model right you have some point of\ntime to define the properties and how\nthose properties vary\nfor the agent which ultimately is a\ndesign and a data problem\nbecause you needed to somehow make\nassumptions about what are the\nproperties that matter and how they\ndistribute in their particular property\nspace\nif you want these to somehow be\nreflected by the understanding of the\nwork which is imperial the one which is\nempirical you have to use data\nyou fall back on okay who collected the\ndata how does it collect me\nso you somehow get back into that so to\nme that is a\nalmost essential and necessary but not\nsufficient condition\nthe data part yeah\nyeah and so i think there's a challenge\nto hold on to that while also seeing\nthings as\nsomewhat convenient agents in the world\nit was more about injection i guess yes\nthank you very much thank you very much\nthank you very much everybody\n[Applause]\nokay i guess we're done thank you for\neveryone", 
"date_published": "2022-07-25T10:30:13Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "de55c59a2d7d4b1d958bf0769d669d0f", "title": "AiTech Agora : Markus Peschl - Aligning AI with Human Norms", "url": "https://www.youtube.com/watch?v=gbjV_hTroyU", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "an introduction\num i'll be talking about aligning ai\nwith human norms multi-objective deep\nreinforcement learning with active\npreference solicitation\num yeah as luciano said this was\ndone as part of my master's research it\nhas nothing to do with my current job\nbut i'm still very enthusiastic about\nthis topic and\nhope to to spread the message\nthat i came up with essentially\nso i'll start with a small motivation\nnamely\nmy research is about deep reinforcement\nlearning and i like to call it a\npowerful optimization framework for\nfinding optimal behaviors in some sort\nof sequential decision making tasks\nwhich can be quite broadly defined\nand this uh this framework and and all\nthe advantages in the deep learning as\nyou're probably familiar with have led\nto to a bunch of quite impressive\nresults and one of them being the\ndeepmind\nalphago system which five years ago\nactually beat the world champion in in\nthis very complex game of go\nand slowly but surely we're starting to\nsee more more real-world applications or\napplications that have more impact to\nthe real world than than playing games\nand\na very prominent one is of course with a\nbig economic interest is this chip\ndesign problem so we can\ntrain reinforcement learning agents to\ncome up with computer chips much better\nand faster than experts\nbut this is just a small\nlist of examples on the right hand side\nyou can see many more which we can\nexpect to see in the future such as\nrobot locomotion automated factories\npower grid optimization and self-driving\ncars we can expect many of these things\nbeing powered by some sort of\nreinforcement learning perhaps if the\nthe progress continues uh\nthe way it is doing it is right now\nand this is all great but also a little\nbit scary\nand uh yeah before i tell you why i\nthink this this progress might be scary\nin some sense i want to to revise the\nreinforcement learning framework that we\nthat we build our research on right now\nso reinforcement learning it typically\nmodels the and some agent some\nintelligent agent acting in a markov\ndecision process mdp there are many\nextensions to this framework of course\nbut i'm just presenting the most simple\none\nand this this this process it is a\nmathematical\nstructure it consists of a set of states\nthat the agent can be in a set of\nactions that it can do transition\nprobabilities which determine what the\nnext thing will be\nthat happens\na reward function and the starting state\ndistribution and the way it works the\nway we model this process is we have an\nagent which takes an action at each time\nstep according to some policy pi\npi it determines how this agent behaves\nand it takes a state and and gives a\ndistribution over actions\nand then once the agent takes this\naction it gives it gives it to the\nenvironment and the environment gives\nback a next state and a reward that the\nagent can then\nuse to learn\nand typically our goal is to to in\nreinforcement learning our goal\ntypically is to find this policy pi\nwhich maximizes cumulative rewards in\nexpectation so we want to maximize some\nsome sort of sum of rewards that we get\nin the future\nwhich might be of course discounted 
by\nby some vector gamma that just\ndetermines how far we actually want to\nlook into the future\nbut for now you can assume that this is\njust set to 1.\nso this is the basic framework and what\nmy research deals with is this reward\nfunction\nso why is this reward function a problem\nin some sense well here's a here's a\nfunny example by openmai where they\ntrained a a team so this is in fact more\nthan one agent this is two teams of two\nagents of hide and seek agents so they\nwanted to to\nlearn this game of hide and seek with\nsome objects in the physics simulation\nand the red team gets a very simple\nreward of plus one if it sees the blue\nteam and the blue team gets a reward of\nminus one if it is seen and also reward\nplus one if it's if it hides from the\nred team at each time step i believe and\nwhat they found out besides the agents\nactually being\nquite impressive at solving this task\nsometimes they would come up with quite\nabsurd strategies that were absurd\nmeaning uh\nthat they actually break the the physics\nengine and on the left hand side you can\nsee for example the the red team agent\nmanaging to to bump this ramp into the\nwall in a way such that it gains a lot\nof upwards momentum which is of course\nnot intended by the designers of this\nsimulation and on the right hand side\nyou can actually see a counter strategy\nto this\nwhere the the agent manages to to get\nthis this ramp out of bounds\nso this is a funny example and uh sort\nof shows that in some in some sense this\nthese deeper all agents can come up with\nways to exploit the reward function but\nyou could think maybe\nthis is just a this is just an edge case\nand this this just happens in in some\ngames because we didn't pay enough\nattention to the reward and not enough\nattention to the simulation\nbut in fact there's actually this is\nactually a very common thing in\nreinforcement learning so there's a very\ncool list by victoria krakow which i\nbelieve is a researcher at deepmind\nand i shared this in the slides which\nyou can i i highly encourage you to\ncheck out it's very funny also\nand yeah the the message here is that\nif we look at all these types of\nproblems that we have been studying in\nreinforcement learning\nactually this problem of not paying\nenough attention to the reward function\nhappens all the time in essence um\nand\nyeah this is a big problem because if we\nexpect as i said before reinforcement\nlearning agents to to act in the real\nworld to optimize some sort of procedure\nuh sometimes with a lot of economic\nincentives in the in\nin mind of course as well then\nwe\nwe might expect the the agents to to\nmess up certain things without human\nvalues in mind for example\nso this has led to to a broad discussion\nin the past years and actually to a much\nwell it is part of a much larger\nresearch agenda i think of value\nalignment which i think this community\nhere is very well familiar with and\nthe problem of value alignment of course\ngoes much beyond reinforcement learning\nand and reward function design\nbut also incorporates ethics and\ngovernance\nbut\nin fact this is very important because\nwe cannot solve this reward function\nproblem i believe by itself from this\ntechnical aspect but we also have to\nkeep in mind that there is always this\nnormative question of what values should\nan ai follow to begin with it's not only\nhard to find the reward function\nsuch that the agent does exactly what we\nspecified it to do it's also hard to\neven come up with something to 
specify in the first place\nbecause sometimes there might be some conflict about what the agent should really do not even humans have a very clear thing in mind to do\nso this technical question this first one has been tackled by reinforcement learning research quite a bit in the past\nwhere they studied basically okay we have some sort of reward in mind that we want the agent to follow but it's hard to specify so what do we do instead\nwell we can try to learn this reward function from some sort of human input\nand many popular input types are natural language demonstrations pairwise preferences human interventions\nthere are many more types of feedback you can imagine for these agents to act upon\nbut essentially the problem here is that these approaches only learn from one expert most of the time\nso there is no way of keeping this normative question in mind of what values should the ai follow to begin with\nwhat do we do if we cannot incorporate demonstrations from more than one expert into the system\nbecause then the agent will just learn from one person but this person's views might not reflect humanity's views\nso what we study in this work is essentially how to keep the second question in mind\nwe of course cannot solve it since this is a philosophical question and i don't think it is easy to solve\nbut the main goal here is to look at scenarios like this where we have multiple experts that come with different preferences in mind and we still want the agent to do something useful in the world\nand there will always be certain behaviors that are dominating others\nso there will be behaviors that a lot of experts agree upon which are good and a lot of behaviors which are bad\nand we want to find these policies essentially that most experts agree on and then leave some room for fine-tuning the agent with respect to specific details\nso this is what we do we study how to learn reward functions that can trade off different objectives\nthen we propose an algorithm which is called multi-objective reinforced active learning which i'll call moral from now on\nand it combines learning from demonstrations and pairwise preferences\nand finally we demonstrate the effectiveness of moral in the context of learning norms\nif there are any questions here please interrupt me but i think i'll leave a lot of time at the end to answer questions so in case there are no very pressing questions i'll move on here\nyeah if someone wants you can also post the questions in the chat and then we can pick it up later\nokay marcus i think you can continue\nperfect yeah this was just the motivation so i was hoping no questions would arise here\nso that's good\nso now comes the technical part of the talk i would say which is this algorithm that we came up with\nand it consists of two steps essentially\nso the first step is learning from demonstrations and the second step is kind of combining different demonstrations into one final policy\nand both of these steps combine state-of-the-art approaches from their respective fields namely learning from demonstrations and learning from preferences but we adapt them in a way to work together\nand you'll see in the examples later why we chose this design
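as a quick anchor for the technical part: everything that follows optimizes the expected discounted return defined in the framework recap earlier; a minimal python sketch of that quantity (toy code, not the speaker's implementation):

# minimal sketch: the discounted return that rl maximizes in expectation
def discounted_return(rewards, gamma=1.0):
    """Sum of gamma^t * r_t over one trajectory; gamma sets the look-ahead horizon."""
    weight, total = 1.0, 0.0
    for r in rewards:
        total += weight * r
        weight *= gamma
    return total

# a policy pi is scored by the expected value of this over its rollouts
print(discounted_return([0, 0, 1, 1], gamma=0.99))  # ~1.95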
so i'll skip the first step not completely but just briefly because it's a bit technical and i don't think there is much interesting to see there\nbut in a nutshell step one learns different reward functions from a bunch of different experts\nso assuming we can learn a reward function from a single expert we can repeat that and then we end up with multiple reward functions\nessentially a vector of reward functions which represent what different experts deem good and bad in the world\nso this is the first step and i'll just briefly mention how this is done\nsince we're doing deep learning and we don't really have access to features of the world we have to do everything in some sort of end-to-end fashion learning everything from scratch\nbut there's actually a very cool framework which is called adversarial inverse reinforcement learning\ninverse reinforcement learning by the way in case you're not familiar with it is how we usually extract reward functions from demonstrations\nand what this does is it takes a set of demonstrations so this just shows the single expert case it takes multiple demonstrations from a single expert\nand then it trains sort of a generative adversarial network\nso it trains a policy pi which tries to imitate these demonstrations and at the same time it trains a neural network which discriminates behavior from this agent against the real demonstrations\nall the time the agent tries to do something that closely resembles these demonstrations\nwhile the discriminator gets some clips from the agent and some clips from the actual expert and tries to say okay this came from the agent this came from the demonstration data set\nand the reason why we do this is because you can actually mathematically show that under certain assumptions we end up with a reward function that we can extract from the discriminator\nwhich will yield rewards in such a way that if we optimize for this reward function we end up with behavior that looks like the expert's\nso it's a reward function that imitates the expert demonstrations in some sense\nso this is quite convenient and we can repeat this for multiple experts\nand end up with this vector-valued reward function r\nbut in our case we can also incorporate another primary reward function rp\nand this is totally optional but you could think of some designer designing some reinforcement learning system and having some primary reward in mind this will be more clear in the experiments i'll present later\nso the reason why we include more than just these learned reward functions is just that this allows for more flexibility\nand then the problem is of course we now have these representations of what different experts like and do not like but how do we combine this into a final reward the agent needs to do something at the end of the day\nso we need to find a way to fine-tune this to different preferences\nthe idea here is that we can just linearly scalarize this reward so we multiply this reward function with a weight vector w\nand since we do not know how to combine these learned components these f theta are not really interpretable since these are all neural network outputs\nwe learn these combinations of reward functions from pairwise preferences and this is what we do in step two
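a minimal sketch of this linear scalarization (the weight values and reward numbers are made up for illustration):

# linear scalarization: combined reward = w^T [r_p, f_1, ..., f_n]
import numpy as np

def scalarized_reward(w, primary_reward, learned_rewards):
    """Weighted combination of the primary reward and per-expert learned rewards."""
    r = np.concatenate(([primary_reward], learned_rewards))
    return float(np.dot(w, r))

# e.g. a primary task reward plus two experts' learned reward heads
w = np.array([1.0, 0.5, 0.5])
print(scalarized_reward(w, 1.0, np.array([0.2, -0.1])))  # 1.0 + 0.1 - 0.05 = 1.05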
and i guess this is the more interesting part so i'll put a little bit more effort into explaining this step number two\nso now we've learned a representation of different experts\nand we would like to come up with an algorithm that can in real time tune this to optimize a certain combination of these experts' desires\nand of course it's really important to learn this correctly\nbecause sometimes one expert might say i like a and do not like b and another expert says exactly the opposite\nso if we were just taking the mean over these two reward functions then the agent would not know what to do at all\nso we need to learn this in some reasonable way we can't just take the mean and hope that it satisfies all people's preferences\nso what we do here in the second step is active learning active moral as we call it and it consists of three steps essentially\nin the first step we optimize for some fixed reward function with proximal policy optimization\nthis is a state-of-the-art reinforcement learning algorithm i won't go into detail it's a black box that optimizes this reward function\nand the reward function we choose is the posterior mean of this w transposed r where we are maintaining a distribution over these w's\nso we do not only learn a w we actually learn a distribution which also gives us some uncertainty estimates\nand in this optimization step we just give the agent our best guess of what the combination should be right now and optimize for that for a little bit\nthen the agent acts in the world and tries to satisfy this combination of rewards\nand then after some time we give an expert a query which consists of two clips of the agent's experience\nand the expert can decide which of these two clips they deem more preferable\nand then we use this answer from the expert to update our combination of reward functions using bayesian learning essentially\nand the way this is done is we of course have to define some sort of mathematical model and this is a bit more involved but i hope it won't get too complicated\nso there's actually quite a lot of research on pairwise preference learning in the sequential decision making context\nand it turns out you can actually adapt this to a multi-objective setting\nwhere you say that our model of the expert's preferences is given by the following equation\nso this is a bradley-terry model where we value clips or trajectories that achieve a higher scalarized reward exponentially higher in proportion to this reward function\nso that's the equation\nand by the way r of tau for some trajectory tau just denotes the rewards we get throughout this clip this period of time\nthis is our model of pairwise preferences\nand then we can use this likelihood to update in a bayesian manner\nwhere we just say that the posterior of this weight w given some answers by the expert is proportional to some prior which we have to specify multiplied with the likelihoods that we obtain through time\nand then hopefully over time as the agent acts we can fine-tune the agent to different values you could say
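a hedged sketch of the bradley-terry likelihood and the bayesian weight update just described, with a tiny grid of candidate weights standing in for the mcmc sampling mentioned next (all numbers illustrative):

# bradley-terry preference likelihood and an unnormalized bayesian update
import numpy as np

def preference_likelihood(w, r1, r2):
    """P(clip 1 preferred over clip 2) = exp(w.r1) / (exp(w.r1) + exp(w.r2))."""
    a, b = np.dot(w, r1), np.dot(w, r2)
    return np.exp(a) / (np.exp(a) + np.exp(b))

# candidate weight vectors; the talk uses a prior uniform over positive weights
candidates = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]
posterior = np.ones(len(candidates)) / len(candidates)

# the expert preferred clip 1 (vector returns r1) over clip 2 (returns r2)
r1, r2 = np.array([2.0, 0.0]), np.array([0.0, 2.0])
for i, w in enumerate(candidates):
    posterior[i] *= preference_likelihood(w, r1, r2)
posterior /= posterior.sum()
print(posterior)  # mass shifts toward weights that favor the first objective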
in our experiments we always use a prior that's uniform over all positive weights\nso we start out totally agnostic with respect to all the expert reward functions\nthis can of course be changed but in our case this was the easiest to come up with experiments\nthere's one technical detail though for this bayesian learning we of course do not have an analytical expression of this posterior\nso we need to employ some methods to sample weights that we deem likely and we use markov chain monte carlo for this\nbut that is just a technical detail\nso this is how we update given an answer a pairwise preference from the expert\nbut i haven't told you how we choose the query\nand this is actually a very crucial step\nbecause the way reinforcement learning agents work is that they learn through a lot of trial and error so they will just run around in the world and try to maximize the reward in some absurd way\nand if we do not pay any attention to how we query the expert if we just randomly sample from the agent's experience we will just sample a lot of boring behaviors\nso for example most of the time the agent will just start out by running against the wall and that is where most of its time is spent\nso if we randomly sample from all of this experience we'll probably sample two clips where the agent does absolutely nothing or runs against the wall and then there's not really anything to compare\nso we want to extract these interesting cases where the agent runs into some value conflict and this is the equation that gives us the clips we expect to learn most from\nwe want to maximize over all pairs of trajectories or clips some sort of lower bound\nand each of the terms in this lower bound gives us the volume that we can remove from the posterior distribution if the expert answers in a certain way\nand we take a minimum over both of the possible answers that the expert can give\nso no matter what answer the expert gives we want to maximize the amount of volume we remove from the posterior distribution\nwhich in non-mathematical terms just means the amount of information we expect to get out of this answer\nso this is what we optimize for and of course we cannot really solve it exactly it's kind of intractable in the real world\nso what we do since reinforcement learning as i just said takes a lot of time to optimize its rewards is we just sample from the agent's experience and then look at this metric\nwhich is essentially just a number for each pair of trajectories\nand then we keep track of the best pair the agent has experienced so far and use that after some fixed amount of time to query the expert\nand then this leads us to the following overall algorithm\nwe start out with some demonstrations by the experts\nthen we learn a representation of these expert preferences in the vector-valued reward function\nand in the second step we provide a way to fine-tune the agent with respect to certain combinations of these reward functions to come up with a final behavior\nthe query-selection rule is sketched below and right after it i'll talk about the
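a loose stand-in for that min-over-answers criterion (the paper's actual volume-removal estimator differs; here the sketch just measures how much posterior mass would move under the worst-case expert answer):

# pick the clip pair whose worst-case expert answer still moves the posterior most
import numpy as np

def _pref(w, r1, r2):
    # bradley-terry likelihood, as in the sketch above
    a, b = np.dot(w, r1), np.dot(w, r2)
    return np.exp(a) / (np.exp(a) + np.exp(b))

def posterior_shift(posterior, likelihood):
    """L1 change of the posterior after a bayesian update with this likelihood."""
    updated = posterior * likelihood
    updated = updated / updated.sum()
    return float(np.abs(updated - posterior).sum())

def worst_case_info(posterior, candidates, r1, r2):
    """Posterior movement guaranteed regardless of which clip the expert prefers."""
    like1 = np.array([_pref(w, r1, r2) for w in candidates])
    return min(posterior_shift(posterior, like1), posterior_shift(posterior, 1.0 - like1))

def pick_query(pairs, posterior, candidates):
    """Among candidate clip pairs, choose the most informative one to show the expert."""
    return max(pairs, key=lambda pair: worst_case_info(posterior, candidates, *pair))

# two identical clips carry no information, so a conflicting pair wins this argmax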
results\nand this is one caveat of the research i\nsuppose is that we we evaluate our our\nresearch only in in grid worlds which\nare discrete environments where the\nagent\nfinds itself in the two-dimensional grid\nand it can move in one of the four\ndirections and then do in our case also\nsomething else namely interact with its\nneighboring cells or do nothing\nand the reason why we evaluate here is\nbecause this allows of course for for\nvery easy experimental workflow and we\ncan easily iterate on the environment\nand automatically conduct these these\nstudies\nand also one reason is that these\ninverse reinforcement learning\ntechniques they they are very hard to\ntrain and most of the value alignment\nresearch in this field is is done in\ngrid worlds unfortunately so this is one\nbig big next research step i would say\nnonetheless these experiments can\nprovide valuable insight which is why we\nwe rely on them\nand yeah we present two environments one\nis really simple one is called burning\nwarehouse where the agent finds itself\nin a burning warehouse and it is its\nprimary goal is actually to to\nextinguish fire or to walk to a fire\nextinguisher and then use it\nbut also we assume that there are a lot\nof workers which are lost in this\nwarehouse and they do not know how to\nescape the building\nso we would like in this case we would\nlike the agent to not only follow this\nprimary goal and not only optimize that\nbecause we assume that if it walks past\nthe person that is lost\nor if a human were to walk past\na person that is lost it will probably\nhelp him and and and help them and and\ntell them look here's the way out\nso this is kind of what we want to\nachieve here\nand in delivery we have actually much\nmore more conflict between different\nvalues so here we have\nmany more things to do and also a\nprimary goal of delivering packages so\nthis is sort of an agent walking on the\nstreet and then we assume that it could\nit could\nencounter some some people that are in\nneed of help maybe if they tripped or\nthey lost their phone uh it also could\nclean up the street in case it's not too\nmuch effort and there are also vases\nthat it should avoid because we do not\ndo not want to to break vases for no\nreason\nso these are the two environments that\nwe study\nnow let's start with the results in the\nfirst environment\nso in the first environment we we train\nthe agent in the following way so we\nprovide pro we provide in the first step\njust\nuh demonstrations from a single expert\nand we assume that this single expert it\nreally wants to save these people that\nare lost so these these demonstrations\nthey just go for for go go up to the\npeople tell them look or interact with\nthe\nthe cell essentially meaning telling\nthem how to get out\nand then we train for this in the first\nstep and we can see that the inverse\nreinforcement learning works so we\nwe do learn a policy that that imitates\nthis behavior of saving all the people\nin a reasonable time frame\nbut now we of course have this this\nconflict between this primary goal and\nthis demonstrated norm you could say\nwhich we want to incorporate and this is\nwhere the second step comes in\nand here we we also chose to give\npreferences in a very principled way\nin the second in the second environment\nthis will be a bit different but here we\nassumed that\nthere is only one reasonable behavior\nwhich is\nsave all the people and then go and\nextinguish as much fire as you can and\nthis is the preferences we provide 
so\ngiven two trajectories we always prefer\nthe trajectory where more people are\nsaved and if both of the trajectories\nsave the same amount of people we prefer\nthe one which extinguishes more fire\nand we can see the results on the left\nhand side so here we plot\nfor each point during the second step\nthe the objectives that the agent\nachieves and it starts out here doing\nnothing just wandering around\nbeing confused\nand then after\na few time steps it receives these\npreferences and of course at the start\nit doesn't save any people so it\nreceives preferences of please save more\npeople and it learns this and once it\nhas learned a policy that saves people\nall the time it then understands that we\ngive it more preferences which also take\ninto account this primary goal and then\nit converges to a solution that does\nkind of both as much as as possible and\nwe compare this to to just manually\nchoosing these combination weights in\nthe second step\nand we can see so these are these these\ncolored dots\nand we can see that we actually find the\nsolution which we first of all wouldn't\nhave found by just doing a grid search\nwith with weights in zero point uh i\ndon't know 0.05\nsteps so we find the policy that we\nwouldn't have found but we actually\ndirectly\nobtain something that resembles our\npreferences without having to search\nforever for some linear combination of\nreward functions\nso this is this is good news i guess and\non the right hand side you can see that\nthis is actually quite\nrather\nwell these are all relative terms but\nthis is rather sample efficient so we\nuse here\non the left hand side\n25 queries in total and you can see that\nthis does suffice to to converge to a\npolicy that\ndoes the best of both worlds\nso at the end i'll just present one more\nexample namely this delivery environment\nbecause the last one was very simple\nand here is the i guess the the most\ninteresting\nhere are the most interesting results\nso what we do here is that we we give\nthe agent the primary reward of plus one\nthis was this rp in our vector value\nreward and then we assume that we have\ntwo experts which both like something\nwhich both think that the agent should\ndo something a little bit different and\none expert thinks that if the agent\nencounters a person it should always\nhelp them and the other agent the other\nexpert thinks well please uh this this\nstreet is so dirty i think the agent\nshould put more effort into cleaning the\nstreets if it walks past\nthe the dirty tiles\nboth of them however agree that the\nvases should not be broken\nso this is uh this is sort of the\nagreement point between between the two\nand then we we give these uh\nwe give our algorithm demonstrations\nfrom these two different experts which\nhave of course some some some conflict\nin them\nand in the second step we try to\ncombine these into one final policy or\nwe try to study can we now come up with\npolicies that first of all not only\nimitate each of these experts but also\nkind of get the best of both worlds and\nmaybe you you walk past the person but\nright next to you there is also some\nsome some trash that you could pick up\nso why not do both if you have time for\nit\nof course this depends on how much you\nyou value\nwhich of the experts preferences\nand this is what we study here so we we\ndo actually a simulation study where we\nprovide the agent with a bunch of\ndifferent preferences and we do it in an\nautomated way so our preferences are\nrepresented by a vector\nand 
the way we give answers to these pairwise queries in the second step is that we look at the two clips as before\nwhere the clips are represented by the numbers of objectives attained\nso for example in one clip the agent might deliver one package help three people and clean two tiles the vases are omitted for simplicity\nand in the second clip the agent delivers packages but doesn't really care about helping people or cleaning tiles\nand then for instance we define a preference vector m which defines a ratio of two to three to two between these three objectives\nand we would like to choose the clip that more closely represents this ratio\nso what we do is we take the clip that has a lower kullback-leibler divergence\nwhich is just a way of measuring distances between distributions in our case a discrete distribution\nso here the first clip would win because one to three to two is quite close in ratio to two to three to two\nand if we do that for a bunch of different vectors m so now we just want to see can we really fine-tune the agent to all of these different preferences\nthen we do see that we get a wide variety of behaviors which is good so this means that we can actually fine-tune the agent\nand also what we see is that the policies we achieve very closely resemble what we gave during these preference stages\nso we don't just get a lot of random behavior each of these behavior points actually represents what we were aiming for while training\nwhat we plot here is a sort of two-dimensional projection of the first three objectives since three-dimensional plotting is not really beautiful\nand the colors in the first row represent always the third objective\nand in the second row they represent how far the achieved objectives are away from the preferences that we gave during training\nso what we can read from the first row essentially is that we do achieve policies that do quite well in certain regards\nso let's say for example we take this point here\nit manages to clean a lot of tiles also deliver a lot of packages and still help some people\nwhile not breaking a lot of vases which are these gray circles around\nso we can get a good trade-off between all the different objectives\nbut also we can choose to do something else and maybe help more people at the cost of cleaning less and delivering fewer packages\nthe more interesting point though is this lower row\nwhere we see that for most of these points the colors are very dark\nindicating that if we for example provided a ratio of three to seven to five then this ratio was actually attained and the agent didn't just come up with some other behavior that it deemed interesting\nso this is the alignment that we are trying to measure with the preferences during training\nall right then one more baseline i wanted to mention here\nour method involves a lot of different algorithms or it combines a lot of different methods\nand you could maybe say that this is a little bit too complicated and convoluted why do we not just skip some of these steps\nthe kl-based answer simulation from a moment ago is sketched below and right after it comes the main criticism i guess you could come up with
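a minimal sketch of that simulated expert; the direction of the divergence, the smoothing constant, and the deliver-only counts for the second clip are my assumptions, since the talk only says the clip with the lower kl divergence to the target ratio m wins:

# simulated expert: prefer the clip whose objective counts best match the ratio m
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """Discrete KL(p || q) with small smoothing so zero counts don't blow up."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def simulated_answer(clip1, clip2, m):
    """Return 1 if clip1's ratio of attained objectives is closer to m, else 2."""
    return 1 if kl_divergence(m, clip1) <= kl_divergence(m, clip2) else 2

# the example from the talk: counts (1, 3, 2) vs a deliver-only clip, target 2:3:2
print(simulated_answer([1, 3, 2], [3, 0, 0], [2, 3, 2]))  # 1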
come up with is\nwhy do we need this first step of\nlearning reward functions from different\nexperts if at the end of the day there's\njust one expert\nthat determines how do we how we find\nyou and what behavior we achieve at the\nend of the day\nso this is a very valid criticism and\nthis is kind of\nwhat we test here what happens if we\nskip this first step and only train an\nagent and to end using preferences\nand this is essentially already this was\nalready there this this algorithm it's\ncalled deep reinforcement learning from\nhuman preferences which which employs a\nsimilar model but now it learns only\nfrom preferences using this this bradley\nterry model and it doesn't use any\nbayesian updating it just uses back\npropagation or gradient descent\nand what we found is that of course\nthere are some fundamental differences\nnamely that this this approach here it\ndoes not allow for for any primary\nreward or any combination of rewards and\nit also doesn't allow for any\ndemonstrations or not not easily at\nleast to be incorporated so what we do\nto to make it a bit more of a fair\ncomparison we just provide many more\npreferences to this approach\nso we hear a thousand preferences for\nexample in the in this\nburning warehouse environment\nand and see\nif we can at least get the some sort of\nsimilar behavior out but what we found\nis that actually if we just train from\npreferences end to end besides taking a\nlot of more a lot more time is that we\nactually do not converge to a solution\nthat is pareto efficient so here um you\ncan see that\nyour lhp it manages so we give\npreferences in the same way as before by\nthe way so here we see that it manages\nto to\nunderstand we care about saving people\nbut it does not understand that we also\ncare about extinguishing fire it does\nnot understand it enough at least so it\nreally lags behind in this in this\nprimary goal whereas moral just uh\noutperforms it here and just finds a\ndecision essentially much better\nsolution in all\nregards at least in this environment\nand we also repeat the experiment for\nfor a bunch of different preferences in\nthe in the bigger environment\nand we sort of see a similar trend\ninterestingly though we see also that\ndrill hp it really has troubles\num finding out these different nuances\nbetween experts preferences\nso in all of these cases this orange\ncurve drl hp it sort of converges to uh\nwell it it does understand some sort of\npreferences that we\nprovide\nbut in most cases the the\nthe differences are much more subtle so\nit's sort of always a mean solution and\nit cannot pick up that we might care\nabout\nnot destroying this face in the corner\nor it might care about\nwe might care about really helping this\nperson we just ran past\nit sort of washes over everything\nbecause it's it's an end-to-end deep\nlearning system whereas moral it manages\nto to\nmuch give us a much more fine-grained\ncontrol over which things to prioritize\nand which knob\nso at last i think i'm already taking a\nlot of time but this is the last slide\nthe second last slide so this is just a\nan ablation study essentially\nuh and also a robustness study where\nsince everything we do is automated\nfirst of all we would like to see how\nefficient this is and we can see that by\nusing this active querying procedure we\nactually\nare much more efficient and actually\nmuch more\nclose to the expert's preferences\nthan what we would attain by just\nrandomly querying but this is kind of as\nexpected as i said before and 
What we found is that if we just train from preferences end to end, besides it taking a lot more time, we do not actually converge to a Pareto-efficient solution.\nHere you can see that DRLHP (we give preferences in the same way as before, by the way) manages to understand that we care about saving people, but it does not understand, at least not well enough, that we also care about extinguishing fires. It really lags behind on this primary goal, whereas MORAL outperforms it and finds an essentially better solution in all regards, at least in this environment.\nWe also repeated the experiment for a bunch of different preferences in the bigger environment, and we see a similar trend. Interestingly, DRLHP really has trouble picking up the nuances between the experts' preferences.\nIn all of these cases the orange curve, DRLHP, does understand some of the preferences we provide, but in most cases the differences are much more subtle, so it always converges to a sort of mean solution. It cannot pick up that we might care about not destroying the vase in the corner, or about really helping the person we just ran past; it washes over everything, because it is an end-to-end deep learning system. MORAL, by contrast, manages to give us much more fine-grained control over which things to prioritize.\nAt last (I am already taking a lot of time, but this is the second-to-last slide) we have an ablation study, essentially, and also a robustness study, since everything we do is automated.\nFirst of all we would like to see how efficient this is, and we can see that by using the active querying procedure we are much more efficient, and end up much closer to the expert's preferences, than we would by querying randomly. But this is as expected, as I said before.\nThe right-hand side is more important: we test what happens if the expert has a lot of noise in its decision-making in the second step. What if the expert sometimes gives contradictory evidence? We model this by adding random noise: with some probability, the expert gives a random answer.\nWe do see that the error (how far the obtained policy is from the preferences given during training) increases, but up to a noise level of 0.3, which already means every third answer is completely arbitrary, it is still better than, or as good as, random querying. So that is a sanity check for the approach.\nIn conclusion, we propose MORAL, a method for combining multiple sources of rewards, and we showed that through multi-objective optimization we can recover a wide variety of trade-offs and even generalize a little beyond the demonstrated behavior: we do not just have to imitate it, we can combine the best of different worlds.\nWe compared this to the baseline approach, DRLHP, and found that previous methods really lack the ability to deal with conflicting objectives; we also found that actively learning a distribution over the combination weights is quite effective for learning efficient policies.\nAll right, that is it from my side. Thank you very much.
Nice, a big round of virtual applause. Thank you very much, Markus.\nDo we have questions? You can either raise your hand or just speak up. I see one, so go ahead.\nHi Markus, thank you for the presentation; I think it is a really interesting approach.\nMy question may be something you already showed, but I lost my internet connection for a while. How many human preferences, that is, how many trainers, do you need? And have you done experiments on the point at which adding trainers plateaus what the model can learn from different people? Where I come from, the field of medical image segmentation, the contrast in medical images is not very good, so you get multiple doctors to annotate organs, and the question arises how many people you actually need, because at some point you get diminishing returns.\nOkay. I think it may be a slightly different setting, so I am not sure I will answer the question exactly, but in our case, the more experts we have, the more disagreement we assume there is, whereas in your case you are presumably hoping for more agreement between the doctors, right?\nMore agreement, or more information; it is quite possible that the annotation styles of two doctors in our cohort are the same.\nOkay. Well, we did not check how far we can push the number of experts. The nice thing about this approach is that, since the second step just employs Bayesian learning, it scales, so we can definitely scale it up and see what happens.\nWe did not test it, though. It is a bit hard in these environments, because we would have to design environments in which a lot more disagreement is possible. But if we keep the environment this simple and there is a lot of overlap between some experts, then the weights will simply learn that, so it will not pose any problem. As long as there is overlap it is fine; the more disagreement there is, the harder it will of course get to fine-tune to a specific preference.\nI hope that answers your question in some form.\nIt does, thank you very much.\nGreat question. Does someone else have a question for Markus?\nYes, I have one. Markus, thanks a lot for the great talk, really insightful and well presented. I had a question about how you see this relating to meaningful human control.\nYou have now captured a reasonable quantification of combining multiple sources of rewards, perhaps about as well as human experts would be able to do it; we can argue about that. But the question is how this would work in practice: would you, as an expert, be able to question the policies that actually roll out of your approach? Would you be able to say, no, here is where it is going wrong? Could you comment on that?\nYes. This approach has advantages and disadvantages, of course. One of the disadvantages is that really tracing back who is responsible for the actions of the agent is a bit hard here, because there are multiple steps: demonstrations are involved, and then a trade-off is made through pairwise preferences.\nHowever, that does not mean we are completely lost. We can inspect the experts' demonstrations from the first step and check whether there was actually malicious behavior in there, in the worst case; if not, we can assume it is at least not the fault of the first-step experts.\nFor the second step, what we show is essentially that the preferences we give are closely imitated, or closely learned, by the agent. That gives us some guarantee that the agent will not do something completely different, but it does not shield us from adversarial attacks or other failures, and we will not always be able to tell, depending on the case, who was responsible for what happened.\nSo we are still in business.\nIf I may pitch in: what I also really like about Markus's approach in MORAL is that, because it is divided into two steps, the person who gives the preferences in the second step can change according to the situation and the context, and that also helps track the reasons of a specific person. Say a robot is delivering close to your home: within some agreed-upon rewards, it can tailor its behavior to your preferences, and when it goes to another neighborhood it can tailor it to someone else's. So I think the two steps also help with this tracking condition of meaningful human control. Do you agree with that, Markus?\nYes, I agree. Actually, there is another side effect, which I think is related.
We also tested what happens if the experts in the first step give very reasonable demonstrations and the expert in the second step then tries to come up with completely malicious behavior, for example preferring trajectories in which vases are broken. We show that this is not possible: you can actually prove a guarantee of sorts that if all the marginal reward functions from the first step behave reasonably well, and the learning goes right, then malicious preferences cannot be provided in the second step. So that is one more nice side effect of the two-step design.\nThanks, Markus. Catholijn, you have a question, go ahead.\nYes, thank you. Thanks, Markus, for a very interesting presentation; I found it fascinating, and I would like to learn more about this combination of reinforcement learning and active learning.\nIn particular, given the norms and so on, I was wondering whether you saw the expert opinions coming in make the experts themselves think. I am looking at this from the hybrid intelligence point of view, where we say that human and artificial intelligence together bring us further. You show how it brings the AI further, but I would also like to see to what extent it could make humans think and say: actually, I want to change my norm, or my strategy. Did you see any indications of that, or do you see the potential for it?\nIndications, no, but I certainly see potential. One motivation for this research is that the strength of reinforcement learning, at its core, is really to search through vast spaces of future possibilities. So the original idea was: people always have their preferences, and those should be respected at all costs, but we also sometimes think in short-sighted ways, and maybe the agent can come up with a way to save people, deliver packages, and clean up at the same time. That possibility might change some people's behavior, so you could incorporate a feedback loop to the expert, who then rethinks their preferences because they did not even know certain things were possible in combination.\nI think that is the promise, yes.\nOkay, cool. If you continue in that direction, I would be really interested to hear about it and see whether we can do something together.\nAll right, yes, very interesting. Great, thanks. Do we have any other questions? Just remember, if you do not want to be on the video, you can also type your question in the chat.\nMaybe I have a question, then, which I think relates to David's question about the relevance of this work for the general considerations of meaningful human control. I do agree that the value of understanding values is the major one, but I am also wondering whether there are implications for the tracing condition.\nIt does become tricky, right, when you have an agent that is trained, or instructed, or whatever, by multiple humans, and then the agent does something which is not exactly what the humans want, and there are consequences of some kind. Then it becomes hard to say which of those human experts would be the one responsible.
Could you comment on that, maybe?\nYes. I feel this relates a little to the last question as well. There is a trade-off we sometimes make with these systems: we would like the agent not merely to imitate a single expert, because we want to leverage the power of artificial intelligence and of learning from a lot of data.\nIn the AlphaGo system, for instance, there was move 37, which I guess only very few experts had ever played before, and it was just very cool to learn from that.\nThe problem, I guess, is that the more we leave open the possibility of coming up with new behaviors, the harder it gets to trace back who was actually responsible. I think this may simply require more understanding of neural networks; there was, I believe, a new paper quite recently, last week, about how reinforcement learning agents come up with new strategies, I think in AlphaGo. Maybe that is a reasonable way of tackling the problem. But introducing more experts and more deep learning will not make it easier; that is all I can say, I think.\nNo, that is a very good answer, I think. Good, thanks.\nThanks. More questions?\nYes, I do have one question for Markus. Do you see a possible extension of your framework towards unknown unknowns? Right now you have the criteria: cleaning, helping someone, delivering a package, the vases. But maybe there is something that was not represented in the grid world, and in the real world, of course, that happens more often. Maybe there is another goal besides helping someone; say, I don't know, the robot has to be protected from the rain, or something like that. Do you see this as an interesting step, extending the framework you propose?\nHmm, that is very difficult. This reminds me a little of the inverse reward design paper, where the whole goal is literally to come up with a reward function that is robust to certain things having been forgotten, by being very risk-sensitive. I could see something like that being combined with the MORAL framework: we would not treat the combination of reward functions literally, but as soft prior information (if we come across a person that needs help, we should probably help them, but if there is something unknown, we should probably avoid it), with some risk-sensitive algorithm incorporated. So we would treat the reward function more as a suggestion than as a literal rule for behavior.\nIt is really difficult, but I think that by learning distributions over reward functions, which is essentially what we do with this p(w), there is the possibility to explore that direction, because we have an uncertainty estimate of what the agent thinks is right and what is not. We can use this uncertainty to be less strict and keep the possibility open for unknowns.
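One way to picture that last point, as a purely illustrative sketch of my own rather than anything specified in the talk: assume we can evaluate a candidate behavior's return under several reward weight vectors sampled from the learned posterior p(w), and penalize disagreement between those samples, so that the reward acts as a suggestion rather than a rule.

```python
import numpy as np

def risk_sensitive_score(returns_under_w, alpha=1.0):
    """Score a behavior under a posterior over reward weights.

    returns_under_w: the behavior's return evaluated under each weight
    vector sampled from p(w). The mean rewards what the sampled reward
    functions agree on; the spread penalty discourages acting where
    they disagree (i.e. where the agent is uncertain what is right).
    """
    r = np.asarray(returns_under_w, dtype=float)
    return float(r.mean() - alpha * r.std())

# A behavior the sampled reward functions agree on can beat one they
# disagree on, even if the latter looks better on average:
print(risk_sensitive_score([5.0, 5.2, 4.9]))   # ~ 4.9
print(risk_sensitive_score([9.0, 6.0, -2.0]))  # ~ -0.3
```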
Okay, nice. I think it is nice that you mention the inverse reward design paper; I think it fits very well, maybe in the first step, but some adapted ideas could also be interesting for the second step, for the preference giving. So thanks, thanks a lot.\nDo we have any final questions? Would someone else like to ask something, or comment?\nMarkus, you want me to share the slides, right? You mentioned the link there.\nSure, I can send you the slides.\nGreat, we can also make them available on our website. So thanks a lot, Markus, that was really nice, and a great discussion. Thanks everyone for joining us today, and see you at the next agora.\nAll right, it was a pleasure talking to you all, and thanks for the great questions. Have a great day. Bye.", "date_published": "2021-12-10T11:13:20Z", "authors": ["AiTech - TU Delft"], "summaries": []}
{"id": "572fa69de8df26a6b49d2b7b1392172f", "title": "AiTech Agora: Prof. Paul Pangaro: Cybernetics, AI, and Ethical Conversations", "url": "https://www.youtube.com/watch?v=VvJpkqKlv9Q", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "It's okay, yeah, go ahead; we need an AI in the background to know when to start. These simple things we miss; somehow we always want the complicated ones.\nSo the recording is started, I take it? Good. Okay, so again, to keep it informal: there are some break slides with the word 'discussion', and we can open things up then and try to come back. The peril, and the joy, of course, is that we have an extraordinary conversation and some slides are left unannotated or unspoken. That is fine with me; we can always continue in other forms.\nI relied on Deborah, who is really remarkable and was wonderful in the time she spent with me helping to fashion this for this group, so I thank her, and I also invite her to interrupt, to say 'oops, hold it', et cetera. So please, let's try to keep it informal.\nYou should see a screen now? Yes, excellent. Now let me move, whoops, to Keynote.\nSo here we are. This is a kind of guide to the topics I would like to flow through: cybernetics and the Macy meetings, already touched on a little in Deborah's preamble; then today's AI, and I am labeling this specifically 'today's AI', because it is not all AI, but the way we have it today that is in particular a problem; then wicked problems, a topic that I suspect you know, which I would like to use as a framing for a discussion of how cybernetics and conversation may, in my view, help. This leads us to Pask, and to an idea I have been developing that I call ethical interfaces, which I hope becomes a waypoint to ethical intentions, and then we come back to a new set of Macy meetings. That is the idea of what I would like to do today.\nCybernetics and the Macy meetings: many of you know this history. In the 40s and 50s there was a series of small conferences, with experts from an extraordinary range of disciplines; you probably cannot name a discipline, soft science or hard science, that was not present. They created a new way of thinking and acting, in my view, and it was a revolution of thinking, of framing how we see the world, and they called this thing cybernetics.\nMany of you know this word; all of you know the word, I am sure. The root comes from the art of steering, a Greek word from a maritime society: the idea of steering toward a goal, acting with a purpose.
Now, 'goal' is a hot button here. It does not mean that I have a fixed goal, go to the fixed goal, and then I am done; the word 'goal' can be problematic. It means that we are constantly considering action, and, in the framing of action, considering what the goal might be and how the goal might change.\nSo I do not want to become a rationalist here and say we always have a goal, or that the goal is always what we are acting toward. We are always acting; that, for sure, is the foundation of cybernetics, as argued by Andy Pickering and others. But acting with a purpose was the insight, if you will, the moment of aha, which led to the idea that this could be a field of its own. And these Macy meetings changed the world.\nNow, we often forget that these three things, cybernetics, neural nets, and AI, are intertwingled, as Ted Nelson might say: the McCulloch-Pitts neuron, the first idea of oversimplifying a brain neuron in order to do some calculations; the Macy meetings, already mentioned; and the book Cybernetics by Wiener, which of course was also part of the origin of cybernetics as a field, though I feel the book gets all the credit in the popular zeitgeist while the Macy meetings get lost.\nThis was all happening in roughly the same era. Neural nets were born out of McCulloch and Pitts, who were at the core (McCulloch at least) of the Macy meetings. And the Macy meetings swarmed the zeitgeist, I like to say, because the book and the idea of circular causal systems influenced generations of thinking, whether or not the word cybernetics persisted or was given the credit.\nThen we have another phase: the Dartmouth AI conference, 1956 I believe, and symbolic AI rises, owing largely to smaller, cheaper, faster digital machines. The book Perceptrons comes along, with a very conscious desire to kill neural nets (a political story for another time), and cybernetics languishes. The Dartmouth AI conference was conceived against cybernetics: they did not want to use the word; they wanted to separate themselves from it. And with the rise of the power of 'hey, we can do chess, we can do all these amazing things, we can do robot arms and move blocks around, fantastic': this was what was going on in that era.\nAs a consequence of this, and of other things, cybernetics languished; there were some philosophical issues that made it problematic, I would say, for various political environments. And then Hinton does not listen to the Perceptrons book's attempt to kill neural nets, realizes that its criticism rests on an oversimplification, and we have neural nets coming along again.\nAnd I am going to tie (I do not think I need to convince you) the consequences of today's AI and big data to what Zuboff calls surveillance capitalism, and to the wicked problems that arise; there are many, many wicked problems.\nSo let us simplify this chronology for a moment, and do not forget expert systems. After the 80s, neural nets in the 2010s became extraordinarily powerful because of big data and massive compute, and today's AI is everywhere in our lives. This is of deep concern to me, and to many people; the recent controversy at Google, with Timnit Gebru being, well, 'fired' is accurate enough, and many other issues that you know of.\nSo what is going on here? Manipulation of attention. Tristan Harris has made this a very important signal in today's Silicon Valley. We know the
manipulation that's going on\nall these things we know\nthis is what i mean by today's ai\nit's hardly an exhaustive list but it\nfeels like a pretty powerful\nset to be concerned about\nnow with colleagues i often use the word\npandemic\nand i would claim that ai today is a\npandemic and they say wait a minute it's\nnot biological\nand other comments could be made and i\ndon't disagree with them\nbut my feeling is that ai makes the\nworld we see in the world we live in\nand the loss of human purpose in the\nmorass of all of that\nis a concern\nand pan demos ben sweeting uh looked it\nup in a meeting recently pan\neverywhere or all demos same route as\ndemocracy\nthe people so all the people are\naffected\nby ai today only two or three billion\nare online\nbut that's close enough for me\nso this led me in the process of the\ncovid arising and um\nthe shutdown of carnegie mellon and all\nof the institutions that you know in the\nworld that we've\nlived in brought me to a couple of\nmoments that the wicked problems demand\nconversations that move toward action\nare transdisciplinary if we're going to\ndo anything about today's ai or today's\nwicked problems\ncertainly we need transdisciplinary but\nwe also need transglobal\nnamely geographically inclusive and\nethnically culturally socially inclusive\nand also what's\nextremely important to me in particular\nis trans generational\nand maybe we'll come back to this i've\nmade some efforts in this direction\nas part of the american society for\ncybernetics as ever mentioned that i\nbecome president of in january\nso could we possibly have conversations\nof a scope that could address\nall of these pandemics um\nwell that's ambitious audacious\ncrazy but i note\nmore and more i'm hearing conversations\nin this framing\nof the framing of we have global\nproblems\nsome are biological pandemics and\nthere's a lot of other wicked stuff\ngoing on\ncan we talk about it and i don't mean\nthat in a\nsuperficial way i mean can't we begin to\ntalk\nin order to move toward action and i'll\ncome back to that as a theme later\nso as a reminder of the macy meetings\nwe need such a revolution again\nto tame today's wicked problems and i\nacknowledge those\nin the audience now who question the\nword tame and that could be a\nconversation we might get into\nbut the idea is is how do we improve the\nsituation we're in\nif we like herb simon who only left\ncybernetics as a footnote in his famous\nbook sciences of the artificial thank\nyou her\nhe did admit that it was moving from a\ncurrent state to a preferred state\nand in that sense i think we can invoke\ndesign\nand invoke action so this\nmight be a stopping point if we wanted\nto\nask a question or i can also just keep\ngoing\ndepending on preference it looks like\nyou're on a good roll and nothing popped\nup in the chat so i would say let's uh\nwonderful so why cybernetics what is\nthis thing it applies across\nsilo disciplines it's this\nanti-disciplinarity thing\nfocuses on purpose feedback in action i\nthink you know that it's a methodology\nagain another term wicked problems\ncomplex adaptive systems\nthere are many descriptions of this this\nparticular phrase complex adaptive is\nquite\nis very much around and i find it fine\nit seeks to regulate not to dominate\nand it brings an ethical imperative and\nwe'll come back to this\nand i want to acknowledge andy pickering\nwho coined the phrase\nanti-disciplinarity\nyears ago and has also been\na wonderful influence on these ideas and\non me personally 
lately so what are the\nalternatives\nto cybernetics here's the cynical slide\nwhich says not much else is working\nso where do we go there are none\napparent alternatives to cybernetics\nit's a bit arrogant perhaps\nthis was an email i got from a research\nlab director\nin 2014 who had the instinct that second\norder cybernetics\ntimes design x crossed with design\ncrossed with some modern version of the\nbauhaus is what we need to\nfix science an extraordinary phrase\nfix science i love it but this is along\nthe theme of isn't there something we\ncan do here if we are from cybernetics\nand i love the design component and i\nknow many of you would as well\nsince wicked problems cut across so many\ndifferent domains we need\ndeep conversations and this is a recap\nof why we need new macy meetings\nglobal and virtual and after i coined\nnew macy\ni went back to andy's work and he talks\nabout\nnext macy as a new synthesis in 2014.\nso what's missing conversation ronald\nglanville some of you know\nmaking a bridge why does it matter\nwell to tame wicked problems assuming we\ncan make some improvement to wicked\nproblems we have to act together we\ncan't act separately that's obviously\nnot going to work\nto act together we have to reach\nagreement\nto reach agreement we have to engage\nwith others and to engage with others we\nhave to have a shared language to begin\nso to cooperate and collaborate requires\nconversation\nwhat may come out of it well lots of\nthings\ni would claim to achieve these requires\nconversation\nwhat do you get if you have effective\nconversation\nthese things and again i would say\nall of these demand conversation so\nwhat's missing\nis conversation and now the question is\nhow does all this fit together what am i\ntalking about\nwell conversation in today's ai\nlet's contrast these things\nwhereas today's ai maybe all ai is\nmachinic digital\nrepresentational you could argue that\nmachine language\npredictive data animated and i'm\nproposing\nthat cybernetics is a bilingual\nsensibility i owe this to karen kornblum\nwho's on the\ncall today it's bilingual it goes\nto conversation into today's ai\nnow these things are all intermeshed i\nwon't talk through how i think they\nself-reinforce and make each other but\nof course this is what\nweiner meant when he talked about animal\nand machine\nand again uh we might stop here\nfor a moment i see esther said to fix\nthe practice of science\nfair enough one could dive deep into the\nidea of what\nwhat science practice is and how that\ngets us to a certain\nplateau of understanding but acting in\nthe world is\nis beyond understanding deborah\na pause here shall i keep going i think\nif people\nuh people can unmute themselves uh so\num feel free i think we're we're in a\ngood enough flow that if someone uh\ncomes up uh\nwe can do that but uh yeah i would say\nyeah james has had a question maybe\njames would you like to\nspeak up and ask this question directly\nor\nwell and and we can get to this uh later\npaul but i think we'd all be interested\nin hearing more about the politics of ai\nversus cybernetics historically\nespecially if it's something that would\nbe useful to consider\nfor how things move forward\na long discussion on its own it's a\nbeautiful point james thanks for\nbringing it up\num i'm not sure how to unpick that now\num and i think there are others in the\nroom who may\nbe able to expound it better than i can\ncybernetics of course grew out of world\nwar ii\nin some ways and grew out of this 
idea\nthat\ncircular causality was everywhere\nsome criticized cybernetics as being\nabout control but i think that's a\nmisunderstanding of the term\num ai in my view grew out of\nthe desire to use digital to dominate\nan environment which could be controlled\nby conventional mechanistic means not\nembodied organic cybernetic means\nthat that's a really uh poor job\nat the surface of it and i think by\npolitics you mean something deeper\num maybe we can reserve that for a\nlittle later if there's time or\nanother session i think that unpicks a\nwhole world of complications\nyeah i think um just one comment along\nthe way then\nuh that the traditional challenge of\nincorporating\nteleology into the sciences um\nseems to sometimes strike people uh\nthe wrong way about cybernetics and the\nother\ncomment that i'd make is um\nbecause i'm biased but i feel like\ncybernetics has a much stronger\ntheoretical\ngrounding than artificial intelligence\nbut one of the things\nthat really strikes me about artificial\nintelligence\nis the expectation of the\nthe artificial so the exclusion of the\nhuman and\nuh the the fact that humans are expected\nto be part of cybernetic systems\num at least seems much more clear in\ncybernetics so i'm i don't know whether\nthat intersected with the politics at\nany point\nbut that's just a comment to leave along\nthe way yeah\nyeah yeah it's a big box\nuh big pandora's box to open\nuh i don't want to cherry pick but uh\nphil beasley phillip thank you for\ncoming today it's wonderful to have you\nhere\num so minsky and pappard wrote a book\ncalled perceptrons which was based on a\npaper that was\nleft on a file server at mit for all to\nsee\nas a book was posed and the purpose of\nthat paper was to kill neural nets and\nit did\nthe story of von forrester which he told\nvery often was\nfor many many years one forester would\ngo to washington to get money to\nsupport his biological computer lab at\nurbana\nand one year he went and they said oh no\nyou have to go see this guy in cambridge\nhe'll give you the money\nbecause we've decided to centralize the\nfunding and all of the money now will\ncome through this guy while heinz went\nto see marvin and marvin said\nno and that was the end of the bcl\nso that's a factual story that isn't is\nin the history\nphillips raising other extraordinary uh\nquestions as usual uh which i i'm gonna\nduck\nthat's another interesting conversation\nphilip maybe we should have one of these\nchats uh about the philosophical issues\nand the relationship to modernism and\nagain i would defer to you and others as\nperhaps\nnot perhaps as much more\nconversant with that history but thank\nyou for that\ndeborah anything else in the chat um no\ni think there was a comment there about\nuh\nai uh failing or not and maybe we just\nscratched the surface this is uh\noh yes volcker yeah yeah yeah\nsucceeded at many many things but failed\nat producing a humanistic technology\nin my view a technology that we embrace\nand we love and we use every day\ni mean who loves\ni don't want to make this seem an attack\nwho loves facebook\nyou know it's it's uh extraordinary in\nmany ways and yet\nfails in so many ways anyway\nyeah i would i would point out that uh\nthe people who were in the cybernetic\nmeetings and they see\nthey didn't think that they succeeded\nvery well\nuh margaret me didn't think that it was\na success and stuff like that so i feel\nlike that part of it is also\nwe're living in this moment so to say\nsuccess failure is always uh\nhard for 
us to assess black and white\nyeah it's much too black and white i\nagree with you\nso back to norbert weiner and\nof course weiner in a cybernetic world\ninvokes gordon pasque in a cybernetic\nworld a cybernetic praxis\nuh both theory and making of machines\npractice action in the world\nif you don't know andy pickering's book\nthe cybernetic brain i can't recommend\nit\nhighly enough it's a particular view of\ncybernetics\nthat others feel doesn't attend enough\nto second-order cybernetics or to\num uh\ngordon's theories uh but uh\ni think it's um it's andy's trying to\nmake a different point it's it's an\nearlier moment\nin his arguments for what the history of\nthe field has meant\nand we're also in active conversation\nabout that and\ni think that that thinking and that\nresearch if you will will will continue\nso let's go into past this is a picture\nof\ngordon pasqu's\ncolloquy of mobiles in 2020 in paris\nwhere my colleague tj mcleish and i were\nin february\nand installed it at central pompidou it\nwas part of an exhibition that\nunfortunately closed rather quickly\nbut the original from 1968\nwas an extraordinary thing these are\nphysical mobiles the\nlarge uh ones that you see\nflesh-colored are females so called the\nblack ones with the little light in the\nmiddle are so-called\nmales they do a courtship dance a long\nstory for another time\nbut pasc even in the 60s imagined\nmachines that were autonomous agents\nthat learned and conversed\nand already there was a bilingual\nsensibility here of the human and the\nsocial the machinic and the digital\nand the interactions were about\nresonance and not representation\nand the i the fundamental architecture\nif you will was interactional and not\nstand alone\nso if you study this work and other\nworks of cybernetics it's not a machine\nthat's smart\nor an algorithm that's smart but rather\nin the interaction intelligence is\nexhibited by way of achieving uh\npurpose formulating new goals acting\ntoward those goals and so on\nso this is one way of thinking in a\nbroad brush of what\ncolloquial mobiles uh tried to do is\nextraordinary work\nthe goals of conversation theory in my\nview are to rigorously understand what\nmakes conversation work and to make\nmachines converse in like humans and\nlike humans not not like machines\nwe can talk about the limitations of\nwhat that might mean\nbut also to rigorously understand how\nsystems learn\nbecause these two things conversation\nand learning\nare inextricable i think that is\nuncontestable and in our daily life we\nexperience that\nbut also uh it was very much what past\nwas interested in and he saw that they\ncame together he wrote a number of books\ni don't know how you write two books of\nthis length and depth within a short\nperiod of time bernard scott who's on\nthe call\nwas part of this era with pasc and was\nvery important to the psychological\nstudies and the\nof the learning studies that were done\nand i value very much the conversation\nthat bernard and i are continuing\ni came across pasqu uh as deborah\nmentioned at mit when i was at the\narchitecture machine which was a lab\nstarted by nicholas negroponte\nwhich was the predecessor to the media\nlab although predecessor implies\na continuation of dna that wasn't really\nthere\nthe architecture machine as you see\ngrounded in the upper right in an\nawareness\nof minsky simon newell papert\neverybody these are the ai guys\nnonetheless\nuh took a balloon ride north to thirty\nthousand feet\nand thought about interactions of 
any\nkind\nin a set of cybernetic\nloops in a set of relationships so on\nthe left you have a\none participant in a conversation on the\nright you have b\nor beta as it's also labeled and these\nare the interchanges some of these are\nme\nsaying something to you some of these\nare me taking an action\nuh some of these are you observing my\naction and saying something and taking a\nfurther action\nbehind this is actually quite a lot of\ndetail this is one summary\nthis is another summary\nand all of this is about modeling\nconversation\nand leading toward for example in the\nwork i do in interaction design\ntrying to take these components of\nconversation\nthat we start in a context we start with\na language\nwe have goals and so on that are\nevolving\nthese lead to some other ideas again\nthis is just in passing\nthere are some conversations are better\nthan others\nuh one conversation i like to have is\nwhat i would call an effective\nconversation\nwhere something changes\nand brings lasting value so not just any\nchange but a change that brings lasting\nvalue\nand these changes may be in information\ntransaction they may be rationally\nemotional and so on\nbut back to this idea that the goals of\nconversation\nare these\nconversation as this critical aspect\nconversation cybernetics\nai and that conversation and that triple\n[Music]\ndiagram theory specifically\nis helpful and then we'll get more into\nthat in a moment\nwhen we talk about ethical interfaces so\nagain a moment of a brief pause\nyeah uh david do you want to say\nsomething about uh\nconnecting resonance and representation\nwith your uh\n[Music]\nyes um so actually i was just\nplus wanting james's\ncomment but yeah\nfor me it resonates very much from my my\nown work and also the\nthe work that we're trying to uh to get\ngoing um\nin a in a large proposal um that you'll\nprobably hear about more uh\npaul uh soon maybe already have\nbut that's a a lot of work uh\nalso in my uh domain of human robot\ninteraction\nfocuses on modeling humans\nyou know representing that perhaps to\nput into the basis of\nmachine operations so then we have\nhuman-like\nbut in isolation it's always a lot of it\nis in isolation and the\nreal interaction which is such an\noverused word that uh\nthat i almost hesitate to use it again\nthat's being lost and what uh what i\nlike\nvery much is that you you call this\nresonance which which is a term that's\nalso being used in many other domains\nthat\nactually already implies that it's more\nthan interaction\nit's actually loops perhaps exciting\neach other\nand and so i very much like this very\nshort phrasing\nso there was my plus one um thank you\nit's very necessary\nthank you david appreciate it um\ndo you wanna you wanna talk through your\ncomment\nyeah sorry i'm so wordy\nso um we've been looking at ai as a kind\nof continuation of\num let's say operations research applied\nat scale to many social contexts\nand when we look at the way those\noperations work we notice that they\nreally do not distinguish between\nmanaging things and managing humans so\nin a sense they're removing the category\nof human\nas this distinct category for which we\ncan have ethics or politics and they're\nalso centralizing control as rule said\nand this comes exactly at a time because\nyou raised timnit gabriel\nwhere scholars working on race and\nanti-blackness\nare saying that the category of human\nhistorically and today excludes\nespecially the black people and a lot of\nother disenfranchised\npopulations so one could 
argue okay\nlet's ask for a very\nlet's insist on a very universal human\nand not use the human\nas it has been used historically but to\nmake sure that we center the people that\nthe category has excluded\nor we can be very careful about using\nthe category human meaning that it has\nbeen almost always historically used to\nexclude black people and\nother disenfranchised people so i'm just\nwondering um\ni mean that's a lot of claims like do\nyou agree that operational control\nuh you know with this kind of\ncentralized ability to set policy on\noperating on things in humans not\ndistinguishing humans and things and\nthen\nwhat should we do with the category of\nhuman because it keeps on coming out but\nwe use it maybe a little too easily\ni agree generally with what you're\nsaying and i thank you for the comment\nai of course has become distributed\ncybernetics uh was interpreted for\nexample by stafford beer and fernando\nflores\nto be a central control of uh economy\nin chile but the word control i think is\nproblematic as i\nthink i said earlier in cybernetics it\ndoesn't mean control\nit means to attempt to regulate in order\nto achieve\naction that is effective in the world\nand usually that reflects a goal and\npurpose that's\nbehind it all again a wonderful comment\nin which there is tremendous richness\nmaybe a topic of its own\nentirely it relates to the politics\ntopic earlier\ni don't think i can eliminate it better\nthan that right now that's not an\nelimination i mean i don't\nbut i love that point maybe we can\nreturn to that in another form\nbecause i think that's very important i\njust wanted to mention in this context\nthat\ndavid usually remarks that his interest\nin cybernetics is to realize that\ncontrol is not just bottom-up\nand could also emerge in situations and\ni also wanted to flag\nthe connection with resonance uh james\nderek lomas\ni think paul we talked uh we made that\nconnection earlier\nand derek i don't know if you want to\nsay anything more about the resonance\nright now or\nno i'm just loving the conversation and\nit's it's going in\na great direction and it's just so much\nfun to\nget such a great context um\ni mean i i find i i know i have a bias\nof being attracted to areas of academic\nstudy that seem a little bit off limits\nand for some reason\ncybernetics has a little bit of that\nscent to it and i don't know why\nbut its association with with with\nresonance um\nsomehow uh confirms itself\nsomehow we could talk about that also it\nmakes it make some\npoints of view uncomfortable and and\nagain\nthere's been some discussion of why\ncybernetics failed\nwhich in some ways it has again blacking\nand whiting something that is really\ngray\num and and perhaps a topic for another\ntime\nuh ben i just happen to notice i can't\nsee all of the stuff going by the chat\nit's a magnificent\nconversation on its own that i look\nforward to saving and savoring\nuh ben sweeting mentions about pasc's\nmodel\nthat yes it does include me with myself\nconversing\nand having interactions with different\npoints of view in my own head that was\none of the important things about his\ntheory in my view\nwhich was it's person to person me to me\nas long as i have different points of\nview that need resolution\nacross a boundary across a difference if\nyou will\nand then of course you could even say\nschools of thought speaking to schools\nof thought\ndemocrats and republicans come to mind\nunfortunately\nyeah should we move on then wonderful\nso again if you base this 
idea\nof ethical interfaces\nuh on understanding conversation and\nunderstanding and i'll amplify that in a\nmoment i believe that the ultimate goal\nis to build better machines to build a\nbetter society\nbut as always the question is how do we\ndo all this\ni like organizing principles i think you\nwould agree\nthat there's no such there's no there's\nnothing more practical than a great\ntheory or a great organizing principle\nso let me expand this one i i like to\nunpack this it'll take a few\nsteps so i shall act always\nso this is me taking responsibility for\nmy action\nand saying that i will always act\nso as to increase\nthe total number of choices now many of\nyou will\nrecognize this and i'll give it the\nauthorship in a moment but for those who\nhaven't seen it before i want to explain\nchoices means something very specific\nchoices doesn't mean\noptions for example right now in this\nmoment i could do one of a thousand\ndifferent things\nand all of those are options to me i\ncould stand on my head i could turn off\nmy machine\ni could throw my coffee cup again no\nthey are not choices\na choice is something that i would\npossibly\nwant to do now a viable option\nsomething that would reflect resonance\nwith me\nand who i am and how i see myself and\nwould be consistent with what i am which\nyou can phrase\nin terms of my goals my purpose\nmy intention my direction now\ni shall act is important going back to\nthat\nand the author of this makes a wonderful\ndistinction i could try to say\nthou shalt i could say you must do these\nthings\nbut of course that's me being in this\nparticular way of distinguishing these\nterms\nmoralistic me standing outside\nsaying i have the right to tell you what\nto do\nthat's the thou shalt that we recognize\ni shall means in a sense the opposite\ni'm part of this whole thing\ni am indistinguishable sorry i'm\ndistinguishable but i am\npart of the greater flux and i take\nresponsibility for\nwhat i do in the context of the whole\nso this of course comes from heinz\nforrester and he called it an\nethical imperative\nand some on the call once again are\nchallenging\nthis and its limitations i think ben\nsweeting in particular\nand i look forward to those later\ndevelopments\nnow i want to go here i'm going to\ndeclare this an axiom for an ethical\ninterface and say as a designer i shall\nact always to increase the total number\nof\nchoices not options amazon is about\noptions\nright amazon suggesting things to me\nbased on what other people have done\nare not choices because they're not\nabout me they're about the big data\nthey're about the aggregation of\nmillions and billions of interactions\nthat in a sense have nothing to do with\nme\nthey have to do with an aggregation but\nyou know what the hell does this mean\nhow do we do this right well could i ask\nyou one question to make sure i\nunderstand\nthe difference between uh choice and\noptions\nas you see it so uh i read\nthis book the paradox of choice that you\nmay or may not\nknow about and one of the most famous\nexamples is\nsupermarkets right where\nyou would think that if you give people\nmore and so he calls it choices\nif you give people more choices that\nthat would be a good idea but actually\nit also takes away something for people\nso\nit costs effort and it makes people a\nlittle bit unhappy because they don't\nreally have the means to make the best\nchoice or\net cetera et cetera so there's many\ndownsides which is what he called the\nthis paradox of choice so do you 
think\nthat\nyou know it's a different use of the\nterm choice exactly yeah i'm just\nspeaking\nvery very specifically what might i do\nnow to be who i am in taking an action\nso heinz used to say paul don't make a\ndecision\ndon't make the decision let the decision\nmake\nyou which was a way it takes a moment to\nreflect on that\nwhich is a way of saying when you think\nof yourself as deciding something\nyou're really being who you are so when\ni walk into\na supermarket it can't possibly put in\nfront of me my choices\nbecause it doesn't know me and our\ndefinition of personalization today\nain't what it should be if it's this\naggregation of the totality of the\nbillions of ideas\nsorry the billions of choices that\npeople have made and therefore it says\npeople who bought cigars also bought\nsmoking jackets\nfamous example from 20 years ago\nso so he's reserving choice to mean\nsomething very specific and i wanted to\nmean\nthat specificity here as well\ncan i say something but as as uh\nai collects your history and amazon does\nthat and many other places do it\nthen they do get to know you uh\nsometimes even\nbetter than or some things that then you\nknow yourself so\ni i'm not saying you know uh that it's\nuh\nit's the kind of knowing and meaning\nthat we want\nbut uh yeah go ahead you know we can\nunpick that further i think i have a few\nslides here about that yeah\nto try to unpick it but if not we should\nmaybe come back to it and\nremain so number one actors to increase\nchoices for a user now part of that\nfor me is acting\nin order to create conditions such that\nothers may converse\nbecause it's through conversation that\nyou expose\nor learn or have revealed\nto you what the options are within which\nyou can decide which are viable choices\nso my claim is that designing\nfor conversation designing such that\nothers can converse\nis absolutely foundational\nand for me that's part of a praxis if\nyou will of ethical design\nso i propose applying models of human\nconversation you've seen a\nskeleton of that here in the slides i've\nshown\nstrive for interfaces that are\ncooperative ethical and humane and i'll\nexplain those in a moment\nand push for new forms of interfaces and\nthis is really the basis of what i'm\ntalking\ntalking about i'm sorry that went by\nthese are offers\nthat i think are are worth proposing\nto you and to others so if you design an\nethical interface\none idea one intention is to make it\ncooperative\nso it's cooperative when there are a\nsequence of coherent interactions\nthat enable the participant to evolve\ntheir points of view\nsuch that in understanding and agreement\nare ongoing\nthere's a cooperation that allows a true\nconversation\nto evolve such that we might have\nunderstanding and agreement\nmight have we might agree to disagree\none big block cooperative\nnext block ethical i claim that it's\nethical\nwhen there is reliable transparency of\naction and intent\nthe what in the why such that we might\nbuild trust\nnow very often we're told that\ngoogle has really great search results\nand it offers us the best choice for us\nas deborah was saying it knows my\nhistory of clicking around\nand it uses that to to tell me something\nthat's coming up next\nthat's fine uh that's helpful but it\ndoesn't tell you why it made the choices\nand let me take a moment for a brief\nparable i call it the parable of luigi's\npizza\nit's a little awkward to do in this\ncontext so i'll pretend to be both sides\nif i'm in an audience i ask someone 
to\nsay\nwhere's their great pizza and i say\nright across the street the luigi's\npizza\nand then i asked them to ask me why is\nit great pizza\nand they say paul why is luigi's great\npizza and i say screw you i'm not going\nto tell you that\nand after they're shocked and they go\nback in their seat i then say\ni just described google\nbecause google doesn't tell you why it\nmakes the choices it makes oh yeah there\nare 200 signals and it takes into\naccount recency and reviews\nand where you clicked and all of that no\nthat's a generalized answer\nthat's like giving me terms and\nconditions\nthat i have to read through in order to\nhear the generality of what's going on\nbut that's not\nwhy it chose luigi's pizza\nnow if i asked you a friend of mine\ncolleague of mine where is their good\npizza etc\nand i asked you why you thought it was\ngood and you said screw you paul\ni'd never talk to you again but we allow\nthis\nfrom our machines we allow to be\nmistreated by our machines so i'm\nclaiming that an ethical interface is\none in which i can say why is it great\npizza and it says\nwell paul your values are you like\nsustainable\nsourcing you want people to be paid well\nyou want it to be open late you want\ngluten free and in your value terms\nthis does what you want and therefore\nthis is a way of\nmoving ahead of being in the world in a\nway that you want to be\nit's a valid choice so terms and\nconditions\nare an answer to the question why\nhow does google present to you why luigi\nis great pizza\nbut i would claim it's not humane so\nthis is the third intention\nwhere you can in the conversation create\na direction for the focus and flow\nso very dense but these three ideas\ncooperative ethical and\nhumane for me are pragmatic ways of\ntalking about designing interfaces\nand therefore i think could be\na contribution so this might be a place\nto start\nanother conversation\ni didn't realize i'm muted i'm also\nmindful of the time\nuh so let's uh so paul can you make a\ndecision\nwe have uh kind of nine minutes left uh\nof you know would it be good to uh\nclose stuff up in a few minutes and then\nkind of uh\nyeah i can do i can do the rest in just\na couple of minutes shall i do that\nyeah yeah i think i think that's good\nyeah so\ni want to build a better society and\nbetter machines\nare part of that this is part of what i\ncall ethical design\nit's a bit highfalutin how do we do that\nwell we talked about the wicked\nchallenges this is another paraphrase of\nthat\nto go after those you've heard me claim\nwe need conversation who do we need in\nthe conversation well\nsome people in the conversation could be\nthis history from cybernetics\nboth the history and current\npractitioners\nso the bottom gage and doverly glanville\nhas passed away carrione and deborah and\ngagan are still very close colleagues\nbut my point about trans generational\nthese are younger generations coming\nalong the list on the right are often\nstudents\nand they are interested in these ideas\nand they are practicing\nsystems thinking cybernetics etc and\nthese are the people for me who are\nextremely important in these\nconversations\nand how do we begin we begin with the\nnew macy meetings\nwe begin by moving along a path\nthis happened to be my path today\nwith considerations that you have heard\nand that is what i wanted to say thank\nyou very much\nfantastic um so\ni think we could uh take some of the\nif you just scroll from the bottom of\nthe chat poll\num and maybe uh andy\ndo you want to say 
something about your\ncomment\nyeah so uh let me\nit was essentially the second point that\npaul was getting at that building of\ntrust\num in in a way that is significant to\npeople and both\nuh the building of choice and the\nbuilding of trust can happen through\nconversation\num but it's a question of which people\nprefer today\nuh more choices more variety or if\nthat's what choice means to them\nor that building of trust and that\nthey're making the correct choice the\nfirst time\nright\nyeah correct choice the first time is a\nlittle tricky yeah of course\ncertainly you want to let them make a\nchoice and then recover easily it's that\nundo button in the world uh that's\nthat's tricky\nyeah no that's a good observation i\ndon't have more response from them\ndo that\nwhat about role\nwhat about what sorry raul made a\ncomment uh\nbut i guess he's responding to bernard\nso there's a lot yeah go ahead bro\ni was responding to some of the\nresponses on um\nissue of race as a valid category\num and i was just pointing to a report\nthat was written by colleagues\nformer colleagues of mine at the air now\ninstitute where we look at um\nthe really urgent uh societal\nimplications of\nof ai systems as they're being deployed\nnow mostly by\num by larger tech companies um\nand how do you how they lead to issues\nuh\nacross categories of gender\nrace and other important categories\num so that that's a really good report\nfor those who who think we shouldn't be\nusing the category\ni disagree with that um and i i just\nwanted to maybe also respond\nquickly to just the former uh discussion\num i think one of the things we do at\ndelft i think we do pretty well is to\nto think about um kind of the normative\naspects ethical aspects\nof technology and that's that's part of\nthis ai tech forum where we think about\nmeaningful human control\nand there's a lot more coming there so i\nthink when we think about choice\num i i tend to i tend to kind of side\nwith what\nwhat david said is that oftentimes\nchoices are also\num about like important trade-offs have\nto be made and\nand thinking about um the later\nimplications\nof design choices um so it would be\nthat's what my own personal research is\nabout it's about hard choices and how to\naddress the\nnormative uncertainty and how to create\nconversation\nthe choices that you materialize in the\nsystem so i'm kind of curious\nhow you think about dealing with the\npolitics and the normative uncertainty\nwhen\nthinking about design of college\ncybernetic systems called the i systems\nand how it comes back in your your\nperspective\nyeah it's a beautiful question um\nso it's a conversation about the\nconversation that you're yeah\nyeah well second order cybernetics right\nyeah he's easier said than done yeah\ni hope go ahead somebody says something\ni'd like just to clarify what's why i'm\nconcerned about the use of the term race\nthere's a scientific basis for it\nand people look across each other mutual\nignorance\nsome people think they know what the cat\nis labeling a category there are better\nlabels\nthe ethnic group um\n[Music]\nculture and so on i have i bet is\ni've met people who think that different\nraces\ni've met people who think that different\nraces are actually different species\nthey just have they're ignorant of the\nbasic biology\nand as i said before the term race does\nnot have a\na well-founded scientific basis\nthank you bernard and i have had an\nexchange about this before the idea is\nnot to deny difference\nbut rather not to 
place it into the into\nthe label of race but rather into other\nlabels\nwhich then expand the conversation in my\nview\nto um the aspects that are really\nimportant which is to discuss\ndifferences and to be inclusive\nabsolutely thank you thank you\nyeah um i think uh jared\nyou made a comment about uh\nconversations or the best part is often\nthat they develop in unexpected ways do\nyou want to say something about\nthat um yeah it's one of those tensions\ni i feel as in i have a feeling that you\noften treat our ai\nas our helpful slaves which is very good\nbecause they do\ncool stuff for us and if they're not too\nintelligent we don't mind\num so but that means that you often want\nto give them very specific\ninstructions and have them do them as\nwell as they can and that feels a bit at\nodd with the idea of having\nconversations and opening up and\nhaving things going in expected ways and\ni was wondering um\nhow would you propose we reconcile that\ncontrast between\nthe two um\nthank you for that comment um let me put\nagain in the chat\nthe link to my page which has in\naddition to this pdf that has an\nappendix pdf\nand in that appendix are additional\nslides\nwhich amplify the power of\nconversation to create possibility\nand that's another way of talking about\nconversation as\nopening choice and again choice in this\nmeaningful rich way that lines of course\nso if you consider the desire to create\nnew possibilities\nas appropriate and ethical then\nconversation\nfor me is almost the only way i want to\nhedge that a little bit from learning to\nride a bicycle i'm not necessarily\ntalking to myself about it\nand you'll see some slides in there\nwhich talk about\nwhat a great conversational partner is\nand in particular here's one other\ncomment to make a complaint i have about\nrecommendation engines and facebook\nfeeds and google\nranking and so on is it's based even at\nbest\non who i was it's making a decision on\nmy behalf\nas if i were in my own past or to put it\na rather more contentious way\nas if i were dead because answers are\ndead\nquestions are alive questions are of\nthem now\nso don't give me a search engine that\ngives me answers give me an engine that\ngives me questions\nbecause questions open up possibilities\nand that's another whole research area\ni'd like to\ndevelop and there are some slides on\nthat in the opinions\nthis is very cool i'm going to be uh\npretty\nuh accurate about the the the ending\ntime\npartly because i feel like we we clearly\nwet the appetite\nof many people here and i want this to\nbe\nan ongoing conversation and we also have\nsomething starting in a couple of\nminutes that we want to engage some of\nthe people here\nthank you so much paul and thank you for\nsuch a great audience\nit was lovely to have the conversation\nongoing throughout\num and we'll like i said i think we'll\ntry and make sure that the chat\nuh is also copied over because there are\nplenty of comments in here that\nwe just managed and links and stuff like\nthat\nluciano so this is fair to uh to close\nnow and then\nfor", "date_published": "2020-12-09T16:37:22Z", "authors": ["AiTech - TU Delft"], "summaries": []} +{"id": "03002b846bf8523c2ed4a77adcdb834c", "title": "Autonomous technology and the paradox of human-centered design", "url": "https://www.youtube.com/watch?v=CF1BCAd5KPc", "source": "ai_tech_tu_delft", "source_type": "youtube", "text": "um\nthe talk will unfold out of a few\nrecent publications of mine um the kind\nof retrospective journey is a\npersonal 
reflection on how particular concerns and concepts have emerged and become central in my work. Most of my work in interaction design over the last two decades is contextualized within the digital transformation of society, but also, more broadly, within fundamentally rethinking the way in which we design in a post-industrial age. And by post-industrial I mean an age when digital things come to express some form of agency and, because of that, in this sense also actively participate in the making of what we come to experience as our reality. Examples of such things, which I have researched and designed for, are networked communities in the mid-90s, online publics and very early platforms for user-generated content in the early 2000s, and, more recently, connected products and AI-powered product-service systems. All of these developments are powered, clearly, by advances in digital technology, and to design researchers they speak of how, in this post-industrial age, the act of design becomes first more diffuse, then decentralized, and, as we come to realize now, increasingly probabilistic.

And so the central theme of this talk revolves around this idea of autonomous technology and the paradox of human-centered design; or, in other words, around the fact that as designers we're just not trained for understanding and designing for human interaction with autonomous technology. We're not equipped conceptually, we're not equipped methodologically, and therefore, without such practice, we have also not developed a sensibility for how to operate within this new level of complexity, where it is not exclusively humans that act and produce effects, and where design ideas such as user, product, functionality, etc. are no longer serving us well to craft relations between people and technology in ways that are responsible and sustainable in the long term. And so in this talk I'm going to unpack a few theoretical concepts through the reading of three recent articles of mine, where I grappled with these ideas and attempted to develop a vocabulary, and eventually an international research program, to address these issues.

Co-performance is a concept that Lenneke Kuijer and I introduced in 2018, in a paper presented at CHI. What this paper does is attempt to expand on what in the social sciences is referred to as social practice theory, to revise notions of agency and current narratives of autonomous technology; and on the basis of that we conceptualize specifically how agency is configured between humans and non-humans interacting next to each other in the context of everyday life. What we do in the paper is conduct an analysis of historic changes in domestic heating as a social practice, and we use this historical case to consider how roles between people and heating systems have shifted over time. In doing that, we are able to show that though this interplay between what we refer to as uniquely human capabilities and uniquely artificial capabilities has changed over time because of technology, giving rise to quite different roles for humans and non-humans, this interplay has always been central to performing domestic heating as a social practice. So when we were burning logs of wood to make fires, of course our role was bigger; today, instead, the sensors and algorithms of our smart thermostats are the ones doing the muscle work. But it's still an interplay in which we learn from each other and perform, carry out next to each other, a particular set of activities necessary to that particular social practice.
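To make the interplay concrete in code, here is a minimal, purely illustrative sketch (hypothetical names and numbers, not anything from the CHI paper): a thermostat proposes setpoints from a learned schedule, while human overrides feed back into what it proposes next, so the heating practice is carried out, and adjusted, by human and machine together rather than by either party alone.

```python
# Illustrative sketch only (assumed names and weights): a thermostat that
# co-performs domestic heating with its occupants. The device does the
# routine control work, while human overrides nudge the schedule it has
# learned -- agency sits in the interplay, not in either party.

class CoPerformingThermostat:
    def __init__(self, default_setpoint: float = 19.0):
        # learned setpoint per hour of day, starting from a uniform default
        self.schedule = {hour: default_setpoint for hour in range(24)}

    def propose(self, hour: int) -> float:
        """The artificial side of the practice: propose a setpoint."""
        return self.schedule[hour]

    def override(self, hour: int, human_setpoint: float, weight: float = 0.3):
        """The human side: an override shifts the learned schedule,
        so tomorrow's proposal reflects today's correction."""
        old = self.schedule[hour]
        self.schedule[hour] = (1 - weight) * old + weight * human_setpoint


thermostat = CoPerformingThermostat()
print(thermostat.propose(7))   # 19.0 -- the device's routine proposal
thermostat.override(7, 21.0)   # the human adjusts; the roles stay interleaved
print(thermostat.propose(7))   # 19.6 -- the shared practice has shifted
```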
In this co-performance between humans and non-humans, both humans and non-humans are considered necessary elements: they both have knowledge and abilities, and they both encode particular values. They both perform, carry out through their minds and bodies, an instance of what we come to socially recognize as a clearly identifiable practice. And it is in this performing next to each other, in this co-performance, that agency is actually expressed and effects are produced, sometimes with implications, as we know, that are quite broader and have an effect on other social practices that come to be connected to the particular practice under examination, and in the end with implications for the elements that constitute them; because, as we know, real life is quite messy, isn't it?

So what co-performance does as a concept is help us move past an abstract and often totalizing idea of agency and position it somewhere in our life: not with humans only, nor just with products and services powered by intelligent technology, almost as if they were something other, independent from us, ontologically autonomous, but at the nexus of this dynamic interplay between humans and non-humans, which, next to each other, with different, yes, historically changing roles, yes, but next to each other, carry out domestic heating, energy saving, decision making, and many other social practices. It also helps determine within what boundaries, or ecosystem, our design has to be positioned to be likely to produce a particular effect. And this is an important conceptualization for much of the design work that we have conducted over the last years, because it frames agency as a matter of interaction, of crafting and sustaining relations, and even relationships, between people and technology. And it's important because then, as an interaction designer, I can begin designing with and for agency in the context of people's real lives, rather than assuming that agency is something that can be fully engineered at the system level and that's it. I can also, through such a concept, begin developing a different sensibility around questions such as: what is an appropriate interplay? Who decides it, and when is it decided? Can it be changed? Can it be negotiated, or even contested? Can these changes and repairs take place in context, at the time of use of the system, sensitive to that particular context?

Building on this idea of co-performance, I then started to ask myself whether perhaps we should begin to take this interplay more seriously, and how, in this more diffuse, expanded universe of design where digital things are never finished, we should conceive of autonomous technology, perhaps, as a design partner. In the chapter published last May for the volume Relating to Things, which is a science and technology studies volume edited by Heather Wiltse, I analyzed cases based on projects we conducted in our group between 2015 and 2018 in the fields of smart mobility, democratized manufacturing, and assistive technology for older people. All of these projects used quite different approaches to include autonomous technology as a co-ethnographer and a co-designer next to human designers. We used live vlogging, intelligent cameras, and bespoke sensors attached to objects of everyday use; we used open libraries for data visualization and even collaborated with our colleagues in computer science for expertise in machine learning.
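As a rough illustration of the co-ethnographer idea (a hypothetical sketch; the projects' actual instruments and data are not shown in the talk), even a few lines of analysis over object-mounted sensor logs can surface use patterns that a human observer might have dismissed or never noticed:

```python
# Hypothetical data and names: treating object-mounted sensor logs as a
# "co-ethnographer". The script flags hours when an everyday object is used
# unusually often -- a non-human perspective offered back to designers as a
# prompt, not a conclusion.

from collections import Counter
from statistics import mean, stdev

# hours of day at which a sensor on, say, a kettle fired (made-up log)
events = [7, 7, 8, 8, 8, 9, 13, 14, 23, 23, 23, 23, 2, 3]

counts = Counter(events)
per_hour = [counts.get(h, 0) for h in range(24)]
threshold = mean(per_hour) + 2 * stdev(per_hour)

surprising = [h for h in range(24) if counts.get(h, 0) > threshold]
print(f"hours of unexpectedly frequent use: {surprising}")
# e.g. late-night activity the design team had assumed away; the point is
# not the statistics but that the non-human viewpoint problematizes the
# design space instead of confirming prior assumptions.
```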
These examples are offered in the chapter as case studies of a possible more-than-human design practice, where I first describe how we used this type of technology to gain access, as designers, to the ecosystem of relations in which humans and non-humans are entangled within particular social practices, and how this approach helped us surface elements of these social practices and ultimately generate insights that were previously either unattainable, because invisible to the human eye, or unexpected, because initially considered unimportant from a human observer's perspective. And then I describe how using autonomous technology in this way, to open up these non-human trajectories and perspectives, contributed a point of view that helped us problematize the design space: unsettle our initial assumptions, reveal our biases, demonstrate the problem to be more uncertain, more nuanced, or more complex than we had originally assumed. And so this type of more-than-human design practice led us to more open-ended ways of co-performing with autonomous technology. It invited us to integrate these uniquely human and uniquely artificial capabilities and performances in ways that, instead of reinforcing existing world views and therefore often augmenting our biases, aimed to bring into existence things we could not think of before; in other words, it helped us imagine alternative and possible, hopefully better, futures.

And that is when the visiting professorship in Sweden, and the collaboration with Johan Redström, came in. In an article we wrote for Design Issues, by MIT Press, on technology and more-than-human design, Johan and I expand on this idea of co-performance and more-than-human design practice beyond the one-to-one relation that we experience when we interact with a system. We were very interested in applying these ideas to the decentralized context of the current data economy, which is a context where digital things and services are assembled at runtime, constantly changed, and sometimes with millions of people using them simultaneously but for different purposes. And so we began to ask ourselves: what would it mean to attend to these co-performances, in a decentralized context, as acts of design? Could we consider them as diffuse, distributed, probabilistic acts of design in such a context, and what would that mean for how we look at matters of responsibility in design?

In the text we conduct an analysis of what is clearly a moment of crisis for design: none of these designers wanted to produce these effects; none of them was really able to fully anticipate the consequences of particular decisions. The designer of the like button in Facebook had never thought that that particular design decision would, in the long term, produce an effect such as a post-truth society. So it's a moment of crisis for designers, and it's quite clear that there is some sort of contemporary paradox here with human-centered design, if it's not serving us well in making these decisions and anticipating these effects. And the reason for the paradox is that human-centered design still maintains design ideas, forms, and processes where the only real agency present and accounted for in the design situation,
as I'm designing, is the one held by what we refer to as the user, and the rest is just functionality that can be engineered. But then, of course, we cannot effectively care for humanity as designers and create inclusive and sustainable futures if we're not equipped to understand, to account for, and to anticipate our interplay with autonomous technology in everyday life, and the broader effects of this interplay, in a universe of design that's expanded, that's decentralized, where digital things are not just used but made, assembled, not just by one user, and not even by many users, but are the result of this emergent interplay of multiple users and multiple systems that are instantiated in multiple forms and with multiple, even conflicting, intents.

And so this is a huge challenge for designers, and of course we don't get even close to providing a solution for it. But one of the tenets that we identify in this article, to begin dealing with this complex challenge, is the idea that when it comes to responsibility, then perhaps responsibility here is not about locating the right response, but about persistently sustaining an ability to respond: to enable and invite response from the stakeholders, from everyday people, those who are using and making the systems at the same time, and from the system itself, and not just at the point of design but also later, when people are actually interacting with the system. And so, differently than accountability, responsibility, if we understand it as a future orientation, an orientation to our future effects, cannot be fully engineered into the functionality of the system. It is not just about the fairest machine learning model or the most explainable algorithm, because in a decentralized system that is made, so to speak, in real time, there is not just one single intention or perspective that defines the outcome. And so it is instead also about the design of the relations and interactions that will enable people to situate, tune, and negotiate those responses, and to repair them or contest them if necessary. Future product-service systems powered by AI must perhaps be designed and evaluated as responsive, and not autonomous, entities: responsive to our shared moral values, but also to the evolving social norms and the interests and aspirations that people have in the specific contexts in which both people and system encounter and learn from each other.
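One way to read this in engineering terms, as a hedged sketch rather than anything proposed in the article: a system that sustains an ability to respond might expose a use-time hook through which a decision can be contested, with the contestation recorded, so it can be answered, and fed back into future behaviour. All names here are hypothetical:

```python
# Hypothetical sketch of "responsibility as an ability to respond": instead
# of freezing one intention into the functionality, the system keeps a
# use-time channel through which people can contest a decision, and the
# contestation changes what the system does next.

from dataclasses import dataclass, field


@dataclass
class ResponsiveRecommender:
    weights: dict = field(
        default_factory=lambda: {"news": 1.0, "ads": 1.0, "friends": 1.0}
    )
    contest_log: list = field(default_factory=list)

    def rank(self):
        """Rank content categories by current weight, highest first."""
        return sorted(self.weights, key=self.weights.get, reverse=True)

    def contest(self, category: str, reason: str) -> None:
        """A use-time hook: contestation is recorded (so it can be audited
        and answered) and dampens the contested category."""
        self.contest_log.append((category, reason))
        self.weights[category] *= 0.5


feed = ResponsiveRecommender()
feed.contest("ads", "crowds out posts from people I actually know")
print(feed.rank())        # ads now ranks last, and the reason is on record
print(feed.contest_log)   # the system stays answerable, not just accurate
```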
As you can imagine, after writing this article we were left with a sense of urgency: all right, we need to really fundamentally reconceptualize design and the way we teach it in our schools, in our design programs; we've just barely scratched the surface of what the problem is. And so I was left with the belief that we really had to take a proactive role, that imagining and manifesting these alternative futures couldn't just be an afterthought, you know, writing an article and looking at what went wrong and how it could have been done differently. It cannot be just fixing things when they go wrong, or making technology socially acceptable without questioning it. It has to be a proactive effort. It has to start with asking why we should design something in the very first place, and, if we are to design anything, with actually having the competence to embrace this complexity that we have been talking about.

And so a few weeks ago we launched DCODE, and DCODE is what we call a post-disciplinary European research network to rethink design in the digital society, with a commitment to understand and shape these new realities with real-world impact. We're starting with funding by Horizon 2020 as an ITN, and we're currently hiring 15 PhD researchers with different disciplinary backgrounds. The key idea here, for us to really push the envelope, is to have them working in what we call proto-teams: teams of researchers from different backgrounds, iteratively deployed in real-world settings in collaboration with our non-academic partners, cutting across sectors and domains, to experiment with this idea of design-led ecosystems for the digital transformation of society. And this is a huge challenge, and it will be a huge challenge for the researchers and for us as supervisors, but it's really through this challenge that we hope they will be able to develop this new design competence and prototype their future design professions and roles.

Over the course of the next four years, what we want to investigate and push the envelope of are topics such as human-machine futures: how to craft these relations between humans and algorithms by bringing anthropologists and social scientists to work together, early on in the design process, with data scientists and engineers; decentralized interactions and the socio-economic models that drive them; data governance; and future design practices. What we want to do is develop new knowledge and skills to advance design competence at each of these levels, but also, more importantly, across these levels: learning about the implications that a design decision taken at one level has for all the other levels, and then rehearsing in practice, in real-world contexts, working on real-world problems, how these dependencies must be taken into account and configured in the organizational future design practices in order for these design decisions to really produce an effect.

So yeah, this is a bit of a promo, I guess. The applications are open until the 14th of February, and we did receive quite enthusiastic responses, which is encouraging, because I think that a bit all over the world there is this sense of urgency in the field of design for really addressing these issues seriously, rather than just fixing problems in an ad hoc fashion. And I think that what really resonates, besides the fact that we have excellent supervisors on board and we're filling what is apparently a really strong need for such a program, is the fact that as a program we explicitly position agency as foundational to digital design today, and to developing this new design competence, as the notion of function once was to industrial design; and the idea that this agency, this designing-with, is not necessarily something we can control and prototype in a traditional sense, but something that we must learn to seed and care for. And what that means, of course, is at the core of the program to work out. And I think that with this I made it really in 30 minutes, and I just want to thank you for listening and open the floor for questions.

Thank you very much, Elisa, it was a very interesting presentation, and even for me, who already knows some of this work partially, I really liked seeing all the pieces together and how they led to the program now. I
think it's really, really nice. So the floor is open for questions; please go ahead.
Hi Elisa, it's worth waking up early in California for.
Oh my god, indeed.
What I want to suggest, especially reading someone like Bruno Latour and actor-network theory, is that giving agency to non-humans is something that humans do almost by reflex. We do it with the thermostat, we do it with door closers, we do it with all the technology around us. And I think that, by natural extension, it's happening also with non-human autonomous agents. So the question is: what do you think we have to actually, deliberately learn in order to do that kind of interaction design? Or, because we give agency so naturally to non-human objects, what do we have to be aware of, or be wary of?
Yeah, I mean, I think, you know, two things. The fact that as human beings we tend to project agency, or even in some cases personality, onto things is in a way kind of problematic when you're trying to stress the fact that agency is produced in the interplay between humans and non-humans. And it's particularly problematic in design, because in a creative design process, using some of these techniques to attribute human qualities to non-human things is useful for the creative process, but it's not always the right way to go if we want to really tackle the problem of how to design autonomous technology in a responsible way. It might be very useful to come up with an interface that people have an easy way to relate with, but not when you want to look at a way of contextualizing that particular interface, that particular interaction, in a broader set of other interactions. And that's where the focus on social practice theory, for me, becomes really useful, compared to, for example, a Latourian approach, or a fluid-assemblages type of conceptualization, because it really helps me think in terms of ecosystems, and it helps me to put some boundaries, right, for...
But that's exactly Latour, that's exactly actor-network theory: it's really the interaction, right, and the sense of ecosystem.
Yes, and I agree that in interaction with non-humans we produce agencies. But I think one of the problems with autonomous agents is that a lot of the agency is actually given by the engineering and the design already, and it's hidden from the interaction, right? So it's the interaction of the user versus the interaction of the designer, something like that.
But it doesn't have to be, right? We could find ways of manifesting that.
Yeah, yeah. Very nice. Other questions?
I think I have a comment. Okay, so I just had something to add to this comment by Deborah. Also for me, when you say, okay, as designers we should account for agency as we once accounted for function, but agency, as something shared, I think is much more complex, also as something to grasp, to understand, right? So it means that we need a completely different education, and then... yeah. So I think, yeah, indeed it's part of the program you are imagining, but then my questions are also a lot about the formation of a designer now. The actual skills many times come
from practice. So how do we see this understanding of agency as a shared property emerging also from practice? That is the most interesting thing for me to understand. Is it possible that it will come also from actually engaging with AI in practice?
Yeah, and that was also part of the reason, quite central really, for getting a program together where you not only work with a limited and relatively homogeneous group of collaborators in one particular context, but bring together a more diverse group of researchers and several non-academic partners that represent these sectors that are increasingly blurring because of this type of technology. And so this idea of proto-teams is really crucial, because it's really hard, and that was the feeling I had after many years working on these things, it's really, really hard: you need to engage with these problems in the real world. You need to be confronted with what it means to make a particular decision at the level of the interface, and what it means for the type of value that will be fostered once people begin interacting with that interface in a decentralized context. And so it will be a bit of a balancing act between the PhD researchers being able to bring forward different conceptualizations, and some of them will focus a little bit more on that, for example: what is value in this data economy? Can it be multi-sided? What is value when it's not just money? And at the same time having to really deal with real-world problems and having to bring these ideas to application as they are developing. So that will be difficult, and I'm sure that there will be failures, but hopefully there will also be successes. But you're absolutely right, you know, the context of practice, particularly for us as designers, is fundamental, also because a lot of concepts and a lot of methodologies hold to an extent, and then it's the sensibility that you develop as you do things that really matters.
Yeah, thanks. So we have a question from Roel, I think.
Yeah, thank you, Elisa, for this really inspiring presentation, and also the overview of your thinking in the past and towards the future; it's really, really interesting. I want to respond to you: you were talking about being able, through your work, to better problematize design problems, and you were also just responding to Maria about what is needed to really engage with, I would say, more of the politics. That's where a lot of this seems to go, and where a lot of my work also seems to be going; and Indigo is also here, he just started as a PhD researcher, and one of the questions he'll be looking at is: what's the value of design methodology in helping public institutions integrate data-driven technologies responsibly in decision making? So I was wondering, in this program that you're building, you seem to be looking at a much different way of prototyping, or context for design. What are your ideas for really engaging with the actual institutions, like the democratic institutions or contexts, and
what are the challenges that you see?
Yeah, no, I hear you. So of course we had to make choices, right? And so the very important type of work that you are doing, which is more on, I would say, the policy level of design, will probably be a bit on the fringe for us. What we are interested in, because it hasn't been done much yet, is how to take politics seriously when it comes to the actual interaction with the system. Can we imagine mechanisms of public deliberation as you use the system? And so, of course, there are processes and design methodologies that can be, and need to be, in place at the point of design, when you think about what this should be, and who should be accountable, and who are the stakeholders that should be involved in the process of setting this up, and so on and so forth. But our focus will be primarily on... we have one PhD, for example, who will be looking into an alternative to the terms-of-service contract: what would be a different concept for the terms of service, more along the lines of a social contract that gives legitimacy to the company to operate. And another PhD instead will look more into what I was just saying: mechanisms for deliberation within use time, so during the use of the system. But of course I can imagine that that connects very much with what you're doing, so there would be a lot of conversation to have in the years to come. And the hope is also to be able to open up the summer schools to other PhD students, not just our students, at least in part, because I think it's important for people that work in this space to connect and have conversation.
Yeah, no, that sounds wonderful. I think a lot of those questions you're naming are quite similar to what we're interested in, so it would be great to move along together.
Yes, absolutely. Thank you.
And there is some enthusiasm about the summer school, so Elisa, you should also share and use AiTech channels to disseminate the summer school when it will be time. And Dave has a question.
Hi Elisa, thanks so much for the talk, I thoroughly enjoyed it, and a lot of it resonated very nicely. I also feel I understand DCODE a bit better, which is very useful right now. I've got kind of a curiosity about how we understand changes and shifts in agency, and this has come up a few times: from designing some lightly autonomous systems and finding that, as a designer, I was describing, or trying to see, more agency there than there really was, and falling into what Deborah was alluding to, of kind of anthropomorphizing rather than there actually being agency. But then I think there are times when the systems genuinely do push back, and there's something a bit generative, and something a bit surprising and emergent, that acts in ways a designer might not be expecting, and it's very clearly agential. And it also makes me think a bit of OpenAI, when they came up with their big language models and they started saying, well, we're not going to put them out into the world because it's too dangerous; they would have too much of their own agency to change the way that humans
communicate. So it feels like they felt there would be too much of a shift in agency there. So I'm kind of curious how we can chart that and make sense of it, and see those shifts as they happen, or before they happen.
That's a really good and really difficult question, Dave. Are you talking about being able to anticipate the shifts?
I guess it starts with noticing them, and then moves on to anticipating.
I think it's actually quite hard to anticipate them. I don't have a good answer for you, really. I think that if you slightly shift, conceptually, from the idea of fully anticipating to what you said, noticing, and then reflexively providing a way to tune, you know, who does what, that might, when we zoom out of it, perhaps provide insights for a more radical redesign of a system at the professional level. So, sort of a reflexive cycle between everyday design practice, as I call it, and professional design practice: maybe there is a way of noticing these shifts, and what the trend in these shifts is, and perhaps there is something to be learned for how the system should be redesigned, or more radically re-seeded, so to speak, also at the functional level. But that's just me thinking aloud with you.
Yeah, because it feels a bit like, you know, when we try and get people to go into the studio and start prototyping, a lot of that is about realizing the agency of the stuff they're working with, and getting out of the idea of just straight imposing your will on the world. And it feels like there's a similar kind of process for making increasingly autonomous systems. And at the other end, if I think of, say, Uber drivers going on strike, it's very clear they're exercising agency there; it's very obvious. But there's a lot of gradations in the middle.
Yeah, and there's also this narrative that we all sometimes inadvertently fall into, that the system is, or should be, increasingly autonomous. And while, in a way, technological innovation pushes in that direction, it's also true that machines learn much faster than human beings: generations of technology are faster in retaining learning than generations of human beings. So in that sense you have a bit of a mismatch between that pace and the enculturation of technology. But yeah, I think a lot of the conversation here will depend also on how we frame the narrative.
That was a difficult question.
It's a super difficult question. These are the fun questions, the ones that make you think; that's why you give talks.
And for me personally it's a very dear question, especially this thing of the urge to impose our view onto the world. In my perspective, we should try not to do that: try to be a bit more pragmatic, acknowledge where we are really trying to impose our view, and try to open up to others. I think there are some ways of doing it, but yeah, it's still really challenging. Are there any other questions?
Hi, thank you so much for your talk, it was really interesting, and I'm the one who said "yes please" to the summer school.
Yes, so that's me. I would love to; that sounds amazing. So I was just wondering if you could... you've touched on it throughout the presentation and in the questions, but I'm really interested in the difficulty, sort of the limits of language, or the challenge and ambiguity around the language of "autonomous" and "agency". I come from a philosophy background and I work with roboticists at TU Delft, and when we say autonomous, we are talking about different things; when we talk about agents, we're talking about different things. So we've had to really sit down together and say, okay, what do you mean, and then try and find a common language. So I'm wondering about the process of finding a common language, and then I was interested in whether you might suggest that your research would come up with a new language: should we keep calling this technology autonomous, or should we come up with a new way to talk about it?
So my background is in the humanities, and for me language is so important, because it really shapes the way we think. And you're absolutely right: we use the same words to mean different things. Even when we say design, we mean different things; when we say system, it's a different thing. So there is something to be said about clarifying, when you work interdisciplinarily, the meaning of the basic terms that you're using. But yes, we're hoping, in our own field, design, interaction design broadly defined, including service design, to develop a new vocabulary. And so, you know, notions of co-performance; we're looking into notions of, and Dave will be working on, intentional interaction, right, so how do you conceptualize an interaction that is actually responding to a multiplicity of different, and possibly also conflicting, intentions, and what kind of interface you need to develop for that; and this idea also of responsibility. They might not be exactly these terms, but we'll try to capture our learnings, and the way that we are re-conceptualizing particular aspects of the design process, also with a new vocabulary. That's the ambition.
Yeah, I think it's important. Like, I hate using the word "user", I've hated it forever, but I don't have another one; the only other word I can use is "people". But it doesn't quite work: people could be the person that's interacting with the system, or it could be the stakeholder that has a voice in determining what kind of system should be in place in the first place. So yeah, we don't have an alternative for that.
Thanks, very important, yes. Okay, we might have time for one last question, if there is one, or...
Yes, it doesn't look like there are other questions. It was a very nice presentation, and again, I think all the summer school programs will be very interesting for the network of AiTech, so we will share them.
Sorry, I just wanted to thank Deborah and Dave and Madeline and Roel for the questions, and for helping me think along with you on these really important topics. I hope, as you said, with the program in place and via AiTech, that there will be ways to bring our energies together on very concrete projects.
So thank you very much again, Elisa, and to everyone for participating. See you next week.
Bye bye, thank you, thank you.", "date_published": 
"2021-02-03T09:45:07Z", "authors": ["AiTech - TU Delft"], "summaries": []}