diff --git "a/ai_explained.jsonl" "b/ai_explained.jsonl" new file mode 100644--- /dev/null +++ "b/ai_explained.jsonl" @@ -0,0 +1,46 @@ +{"id": "8831b2f1f7885f9b6d4ef11b012c1a22", "title": "'Sparks of AGI' - Bombshell GPT-4 Paper: Fully Read w/ 15 Revelations", "url": "https://www.youtube.com/watch?v=Mqg3aTGNxZ0", "source": "ai_explained", "source_type": "youtube", "text": "less than 24 hours ago a report was\nreleased that will Echo around the world\nit is 154 pages and I just finished\nreading and digesting all of them yes\nthat includes the appendices and no I\ndidn't use gpt4 it revealed in a\nnutshell that gpt4 shows Sparks of\nartificial general intelligence the Holy\nGrail of AI research and yes I was\nskeptical then I read the paper I'm\ngoing to break down only the most\nimportant Revelations one by one first I\nwant to address the thought that you\nmust be having how could these guys have\ndiscovered so much when the model has\nonly been out a week well first as they\nlay out in the introduction they have\ninteracted with gpt4 during its early\ndevelopment these researchers from\nMicrosoft have had the model for months\nas early as October of last year or even\nearlier they had the raw model the\nunrestricted version not the final\nversion of gc4 that had been fine-tuned\nto improve safety and reduce biases so\nthey had around six months to experiment\nwith the unrestrained gpt4 that's enough\nbuild up it's time to get to the\nrevelations and all of them I'm going to\ndo in order aside from this one because\nhonestly it blew my mind on page 45 they\nsay gpt4 is able to use tools with very\nminimal instruction and no\ndemonstrations and they make use of them\nappropriately they go on to say that\nthis is an emergent capability and chat\ngbt could not do this before I get into\nthe details I must remind myself that\none of the key moments in human\nevolution was when we discovered how to\nuse tools so the fact that GT4 can use\nthem so well and chat TPT couldn't is\ntruly a milestone in Ai and human\nhistory I'm going to show you more\nexamples throughout the video but let's\nstart with their examples it knows when\nit needs to use a calculator and can use\nit effectively in my path to AGI video I\ntalk about how it struggles with\ncharacters and it knows how to call a\ncharacter API and work out the number of\ncharacters now might not seem impressive\nbut that was one of its key weaknesses\nbefore if that didn't impress you how\nabout text to image GT4 can output\ndetailed images based on a text prompt\nthese can then easily be rendered into\nmore detailed drawings using a model\nlike stable diffusion version 2.1 notice\nhow the model knew how to arrange the\nobjects based on the text prompt and at\nthis point I can't help but point out\nthat other companies like Adept AI are\ntraining language models on tools such\nas Photoshop the point is that once\nlanguage models understand how to use\ntools effectively the sky is the limit\nnext the paper revealed that gpt4 passes\nmock technical interviews on leak code\nand they say that it could be\npotentially hired right now as a\nsoftware engineer on page 21 it gives\nthe results of GPT 4's performance on\neasy medium and hard leap code tasks and\nit then somewhat modestly says it is\ncomparable to Human Performance well try\nto remember these numbers like 86.4 for\nthe easy task and 14.3 the k equals 5\nbit by the way is that they pick the\nbest of its five attempts deep in the\nappendices you see this this is the\nhuman level easy 
medium and hard. By the way, they were a little bit generous with the humans, because they didn't include those people who got none of the tasks right - they took them out of the database and compared GPT-4 only to those humans who got at least one task right. And you thought it was just standard coding - how about 3D game development? When given a task to create a 3D game of some complexity, I must say, the report says that GPT-4 produces a working game in a zero-shot fashion. ChatGPT, by contrast, responds that it can't do it. And when I say a complex game: the enemy is trying to rush towards you, and you have a defender that's trying to block the enemy - it's not a simple game, as you can see from this video. They are not the only ones who have used GPT-4 to create a detailed game, and trust me, I would talk about this amazing achievement for longer, but I need to get on to the next topic, which is that they tested GPT-4 on the 2022 International Mathematical Olympiad - which was not in its training data. And trust me, I've studied for this kind of thing, and it is not easy; it is an extremely high level of math. As the authors say, solving this problem requires a more creative approach, as there is no clear strategy for beginning the proof. As you might expect, GPT-4 manages to produce a correct proof. As I have demonstrated in other videos, it does get some math problems wrong; as the paper points out, that's often down to technical proficiency - making basic calculation errors. But remember, the paper proved that it could use a calculator if given access to one. Give GPT-4 tools and, honestly, it is going to shock the world. Next - and this is a quick one, but I loved it - give it Fermi questions. These are the kind of questions asked in really difficult interviews, and they have no easy answer: things like how many golf balls could you fit in a swimming pool, or please estimate roughly how many Fermi questions are being asked every day. Truly complex questions, and GPT-4 can hazard great guesses. Next - and this one was worth waiting for - finally we get a personal assistant that actually works. I know it's called Google Assistant, but it isn't really an assistant, is it? GPT-4 can use available APIs to retrieve information about a user's calendar, coordinate with other people over email, book a dinner, and message the user with the details. This is a sample of the interactions it performed: sending an email to Luke and then receiving Luke's reply, checking the calendar, then putting the event in the calendar, then sending an email to Joe, etc. When this becomes available in an app format, we will finally have that AI personal assistant that we have been waiting for. Moving on: did you know that GPT-4 can be your personal handyman? One of the authors of the paper had a leak in their bathroom. They went through a diagnostic process with GPT-4 and it figured out what the problem was. When the author followed GPT-4's advice, what happened? The leak was gone; the problem was solved. And if you thought that was impressive, wait till you see this. If it's allowed to ask enough questions, GPT-4 can build up a mental map of, say, a house that it is entering. On the left you can see a map of the true locations of each room, and on the right you can see GPT-4's mental image of them - revealed, by the way, by having it draw a pyplot. This ability, of course, is going to become very relevant when GPT-4 gets embodied, and I'm going to talk about that in my next video. Speaking of which, if
you're learning anything from this video, please don't forget to leave a like and let me know in the comments. Next up is theory of mind, and I have done a whole video on this, so do check it out afterwards. But essentially the authors discovered the same thing that we have, which is to say that GPT-4 can build up a mental model of what other people are thinking. You can pause the video and read the scenario yourself; it essentially involves knowing what Alice must be thinking - what she must believe about a situation even though the reality is different - separating what is actually true from what a human being believes to be true. This is a key milestone on the road to possible consciousness, but if you're interested in that topic, honestly, check out my video on it. Now, I know at this point you're thinking I must have covered the best bits, but no, there's more. On page 80 the authors sketch out how GPT-4 is an autoregressive model, which means that it bases its outputs on what has already come before. That's great, but it stops it from planning ahead: it doesn't know how its output is going to end before it starts. I'm going to reveal the implications of this fascinating weakness in a couple of ways - first with their examples, and then with one of my own making. In this task they try to get GPT-4 to create a poem which begins with a sentence and then ends with the same sentence in reverse order - but it's got to make sense. GPT-4 simply can't do it, because it doesn't know how its poem is going to end before it starts; remember, it's an autoregressive model. After repeatedly and unsuccessfully testing GPT-4's ability to do this, the authors broke it down like this: GPT-4 is amazing at incremental tasks but not as good at discontinuous tasks. Incremental tasks are those where you follow a standard procedure, building things up step by step, like composing a poem using a rhyme scheme or writing a summary of a text - start at the beginning, then the next sentence, etc. But discontinuous tasks require you to know a bit about the output, the end result, before you start. They give a great example: writing a joke. You kind of need to know the punchline before you do the setup. Maybe that's why GPT-4 is so bad at joke telling - it can't think of an amazing punchline and then work backwards to create the scenario around it. I came up with a simple demonstration of this to show you guys. Try asking GPT-4 this question: how many words are in the full response to this prompt? If you think about it, it has to know the final result of its output to give a correct answer, and because it's just generating an answer word by word, token by token, it can't do this. It said that there are 43 words in the full response to this prompt, including the words in the question and the answer. OK, that's kind of weird - I didn't want to include the question itself - but let's see if it got it right. I said, list them out and count them, and then it went through, including the prompt, which I didn't want, but fine - how many words are in the full response, etc. - and lo and behold, there were only 31 words in the prompt and the output. But remember, it had said that there were 43 words. It doesn't know the end result when it starts. Before you conclude that this will be a permanent block on language models like GPT-4 progressing further, ponder this: a paper came out in January showing that it was at least theoretically possible to augment large language models with external memory, and the paper
both asks and answers this question: such works raise the question of whether augmenting a language model with an external feedback loop is merely useful, or fundamentally expands the range of computations that can be performed - and this paper gives an affirmative answer. Now, obviously it's still a huge leap from here to there, but imagine if GPT-4 gets access to an external memory, or say GPT-5. Then, as the authors note, you could have different layers of language models, one doing the fast-thinking subroutines and another doing the slow-thinking big picture, monitoring the output of the language model and adjusting from there. Arguably that would be the ultimate breakthrough - possibly even a dangerous breakthrough. Speaking of dangerous, on page 84 the authors note that the unrestricted GPT-4 is incredible at propaganda and conspiracy theories. It can design entire misinformation campaigns, replete with links and images, and I worry that it's only a matter of time before someone jailbreaks this kind of version of GPT-4 and uses it in the wild. Next - and I think this is quite a stunning admission from researchers at Microsoft - they say that some people may ask for the ability and right to decide and specify which content they want or do not want to be crawled. They're flagging this up in terms of privacy and potential lawsuits. The context they're giving is of models like GPT-4 taking away jobs, and if they're taking away jobs from people whose content has been crawled, I wouldn't be surprised if there's some contention there. Two final points from this bombshell paper. The authors talk about equipping LLMs (large language models) with agency and intrinsic motivation, and say that this is a fascinating and important direction for future work. This is in the context of GPT-4 not being motivated by anything, just being passive. While I do think that that's a fascinating direction for future work, it's also a very concerning one. Giving a language model intrinsic motivation not only raises ethical concerns and questions - like, when would it then have rights? - it also raises huge safety concerns. Of course, they do admit that with this direction of work, great care would have to be taken on alignment and safety. I'm not personally too keen on this phrasing - that giving it motivation is a fascinating and important direction - as if it's definitely something we should be working on. This is especially true in the context of the final part of the paper: they admit that they don't really know what is actually happening. They know what GPT-4 is capable of, but not really why it's capable of those things. Of course they propose hypotheses, but they end with this: overall, elucidating the nature and mechanisms of AI systems such as GPT-4 is a formidable challenge that has suddenly become important and urgent. Translated: we need to figure out how these things work, and fast. Well, I definitely agree with that. Thank you so much for watching to the end, let me know your thoughts in the comments, and have a wonderful day.", "date_published": "2023-03-23T17:52:59Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "927332fe39c53b5c8f4848b60bc776ac", "title": "'Pause Giant AI Experiments' - Letter Breakdown w/ Research Papers, Altman, Sutskever and more", "url": "https://www.youtube.com/watch?v=8OpW5qboDDs", "source": "ai_explained", "source_type": "youtube", "text": "Less than 18 hours ago this letter was published, calling for an immediate pause in training AI systems more
powerful than GPT-4. By now you will have seen the headlines about it, waving around eye-catching names such as Elon Musk. I want to show you not only what the letter says but also the research behind it. The letter cites 18 supporting documents, and I have either gone through or entirely read all of them. You'll also hear from those at the top of OpenAI and Google on their thoughts. Whether you agree or disagree with the letter, I hope you learn something. So what did it say? First, they described the situation as AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control. They ask: just because we can, should we automate away all the jobs, including the fulfilling ones? And other questions, like: should we risk loss of control of our civilization? So what's their main ask? Well, they quote OpenAI's AGI document: at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. And they say: we agree - that point is now. And here is their call: therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. Notice that they are not saying shut down GPT-4 - just saying don't train anything smarter or more advanced than GPT-4. They go on: if such a pause cannot be enacted quickly, governments should step in and institute a moratorium. I will come back to some other details in the letter later on, but first let's glance at some of the eye-catching names who have signed this document. We have Stuart Russell, who wrote the textbook on AI, and Yoshua Bengio, who pioneered deep learning, among many other famous names. We have the founder of Stability AI, which is behind Stable Diffusion, of course. I could go on and on, but we also have names like Max Tegmark, arguably one of the smartest people on the planet, and if you look below, plenty of researchers at DeepMind. But before you dismiss this as a bunch of outsiders, this is what Sam Altman once wrote in his blog: many people seem to believe that superhuman machine intelligence would be very dangerous if it were developed, but think that it's either never going to happen or definitely very far off. This is sloppy, dangerous thinking. And a few days ago on the Lex Fridman podcast he said this: I think it's weird when people think it's like a big dunk that I say I'm a little bit afraid, and I think it'd be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid. Current worries that I have are that there are going to be disinformation problems or economic shocks or something else at a level far beyond anything we're prepared for, and that doesn't require superintelligence; that doesn't require a super deep alignment problem and the machine waking up and trying to deceive us, and I don't think that gets enough attention - I mean, it's starting to get more, I guess. Before you think that's just Sam Altman being Sam Altman, here's Ilya Sutskever, who arguably is the brains behind OpenAI and GPT-4: as somebody who deeply understands these models, what is your intuition of how hard alignment will be? I think, so, here's what I would say: at the current level of capabilities, I think we have a pretty good set of ideas of how to align them, but I would not
underestimate the difficulty of alignment of models that are actually smarter than us, of models that are capable of misrepresenting their intentions. By alignment he means matching up the goals of AI systems with our own. And at this point I do want to say that there are reasons to have hope on AI alignment, and many, many people are working on it; I just don't want anyone to underestimate the scale of the task, or to think it's just a bunch of outsiders, not the creators themselves. Here was a recent interview by Time magazine with Demis Hassabis, who many people say I sound like. He is the founder, of course, of DeepMind, who are also at the cutting edge of large language models. He says: when it comes to very powerful technologies - and obviously AI is going to be one of the most powerful ever - we need to be careful. Not everybody is thinking about those things. It's like experimentalists, many of whom don't realize they're holding dangerous material. And again, Emad Mostaque: I don't agree with everything in the letter, but the race condition ramping as H100s - which are the next generation of GPUs - come along is not safe for something the creators consider as potentially an existential risk. Time to take a breath, coordinate and carry on. This is only for the largest models. He went on to say that these models can get weird as they get more powerful. So it's not just AI outsiders. But what about the research they cite, those 18 supporting documents that I referred to? Well, I read each of them. Some of them I had already read, like the Sparks report that I did a video on and the GPT-4 technical report that I also did a video on; some others, like the Superintelligence book by Bostrom, I had read when it first came out. One of the papers was called X-Risk Analysis for AI Research - these are risks that threaten the entirety of humanity. Of course the paper had way too much to cover in one video, but it did lay out eight speculative hazards and failure modes, including AI weaponization, deception and power-seeking behavior. In the appendix they give some examples. Some are concerned that weaponizing AI may be an on-ramp to more dangerous outcomes: in recent years, deep reinforcement learning algorithms can outperform humans at aerial combat, while AlphaFold has discovered new chemical weapons. And they go on to give plenty more examples of weaponization. What about deception? I found this part interesting. They say that AI systems could also have incentives to bypass monitors, and they draw an analogy with Volkswagen, who programmed their engines to reduce emissions only when being monitored. It says that future AI agents could similarly switch strategies when being monitored, and take steps to obscure their deception from monitors. On power-seeking behavior, they say it has been shown that agents have incentives to acquire and maintain power, and they end with this geopolitical quote: whoever becomes the leader in AI will become the ruler of the world. But again, you might wonder if all of the research that was cited comes from outsiders. Well, no: Richard Ngo was the lead author of this paper, and he currently works at OpenAI. It's a fascinating document on the alignment problem from a deep learning perspective, from insiders working with these models. The author, by the way, was the guy who wrote this yesterday on Twitter: I predict that by the end of 2025, neural nets will have human-level situational awareness, autonomously design, code and distribute
whole apps, write award-winning short stories and publishable 50k-word books, and generate coherent 20-minute films - only conceding that the best humans will still be better at this list. But what did his paper say? Well, many things; I've picked out some of the most interesting. It gave an example of reward hacking, where an algorithm learned to trick humans to get good feedback. The task was to grab a ball with a claw, and it says that the policy instead learned to place the claw between the camera and the ball in a way that made it look like it was grasping the ball; it therefore mistakenly received high reward from human supervisors. Essentially, deception to maximize reward - of course it didn't mean to deceive; it was just maximizing its reward function. Next, the paper gives details about why these models might want to seek power. It quotes the memorable phrase: you can't fetch coffee if you're dead, implying that even a policy, an algorithm, with a simple goal like fetching coffee would pursue survival as an instrumental subgoal. In other words, the model might realize that if it can't survive, it can't achieve its reward - it can't reach the goal that the human set for it - and therefore it will try to survive. Now, I know many people will feel that I'm not covering enough of these fears, or covering too many of them. I agree with the authors when they conclude with this: reasoning about these topics is difficult, but the stakes are sufficiently high that we cannot justify disregarding or postponing the work. Towards the end of this paper - which was also cited by the letter - it gave a very helpful supplementary diagram. It showed that even if you don't believe that unaligned AGI is a threat, even current and near-term AI complicates so many other relationships and dynamics: state-to-state relations, state-to-citizen relations. It could complicate social media and recommender systems; it could give the state too much control over citizens, and corporations like Microsoft and Google too much leverage against the state. Before I get to some reasons for hope, I want to touch on that seminal book, Superintelligence by Bostrom. I read it almost a decade ago, and this quote sticks out: before the prospect of an intelligence explosion, we humans are like small children playing with a bomb - such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now, and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. But now let's move on to Max Tegmark, one of the signatories and a top physicist and AI researcher at MIT: we just say bigger neural networks, ever more hardware, and just train the heck out of it with more data, and poof, now it's very powerful - that, I think, is the most unsafe and reckless approach. The alternative to that is the intelligible intelligence approach instead, where we say the neural network is just a tool for the first step, to get the intuition, but then we're going to spend serious resources on other AI techniques for demystifying this black box and figuring out what it's actually doing, so we can convert it into something that's equally intelligent but that we actually understand. This aligns directly with what Ilya Sutskever, the OpenAI chief scientist, believes needs to be done: do you think we'll ever have a mathematical definition of
alignment? A mathematical definition, I think, is unlikely. Rather than achieving one mathematical definition, I think we'll achieve multiple definitions that look at alignment from different aspects, and I think that this is how we will get the assurance that we want. By which I mean: you can look at the behavior, you can look at the behavior in various tests, in various adversarial stress situations, and you can look at how the neural net operates from the inside. I think you'd have to look at several of these factors at the same time. And there are people working on this. Here is the AI safety statement from Anthropic, a huge player in this industry. In the section on mechanistic interpretability - which is understanding the machines - they say this: we also understand significantly more about the mechanisms of neural network computation than we did even a year ago, such as those responsible for memorization. So progress is being made. But even if there's only a tiny risk of existential harm, more needs to be done. The co-founder of the Center for Humane Technology put it like this: it would be the worst of all human mistakes to have ever been made, and we literally don't know how it works, we don't know all the things it will do, and we're putting it out there before we actually know whether it's safe. Raskin points to a recent survey of AI researchers where nearly half said they believe there's at least a 10 percent chance AI could eventually result in an extremely bad outcome, like human extinction. Where do you come down on that? I don't know - the point is, it scares me. You don't know? Yeah, well, here's the point: imagine you're about to get on an airplane, and 50 percent of the engineers that built the airplane say there's a 10 percent chance that the airplane might crash and kill everyone. Leave me at the gate. Right, exactly. Here is the survey from last year of hundreds of AI researchers, and you can contrast that with a similar survey from seven years ago. The black bar represents the proportion of these researchers who believed, to two differing degrees of probability, in extremely bad outcomes. You can see that it's small, but it is rising. One way to think of this is to use Sam Altman's own example of the Fermi Paradox, which is the strange fact that we can't see or detect any aliens. He says: one of my top four favorite explanations for the Fermi Paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to make itself undetectable. Others, such as Dustin Tran at Google, are not as impressed. He refers to the letter and says that this call has valid concerns but is logistically impossible - it's hard to take seriously. He is a research scientist at Google Brain and the evaluation lead for Bard. But there was another indirect reaction that I found interesting. One of the other books referenced was The Alignment Problem: Machine Learning and Human Values. Long before the letter even came out, the CEO of Microsoft read that book and gave this review: Nadella says that Christian offers a clear and compelling description, and says that machines that learn for themselves become increasingly autonomous and potentially unethical. Well, my next video is going to be on the Reflexion paper and how models like GPT-4 can teach themselves - in fact, I'm liaising with
the co-author of that paper to give you guys more of an overview, because even Nadella admits that if they learn for themselves and become autonomous, it could be unethical. The letter concludes on a more optimistic note. They say: this does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger, unpredictable black-box models with emergent capabilities, like self-teaching. I've got so much more to say on self-teaching, but that will have to wait until the next video. For now, though, let's end on this: let's enjoy a long AI summer, not rush unprepared into a fall. Thanks for watching all the way to the end, and let me know what you think.", "date_published": "2023-03-29T17:46:41Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "fe8d6022b8287f123f5761edd90ce371", "title": "GPT 4: Full Breakdown (14 Details You May Have Missed)", "url": "https://www.youtube.com/watch?v=2AdkSYWB6LY", "source": "ai_explained", "source_type": "youtube", "text": "The moment I got the alert on my phone that GPT-4 had been released, I knew I had to immediately log on and read the full GPT-4 technical report, and that's what I did. Of course I read the promotional material too, but the really interesting things about GPT-4 are contained in this technical report. It's 98 pages long including appendices, but I dropped everything and read it all, and honestly, it's crazy - in both a good way and a bad way. I want to cover as much as I possibly can in this video, but I will have to make future videos to cover it all. Trust me, though, the craziest bits will be here. What is the first really interesting thing about GPT-4? Well, I can't resist pointing out that it does power Bing. I've made the point in plenty of videos that Bing was smarter than ChatGPT - indeed, I made that point in my recent GPT-5 video - and this bears out, as this tweet from Jordi Ribas confirms that Bing uses GPT-4. Also, by the way, the limits are now 15 messages per conversation, 150 total. But tonight is not about Bing, it's about GPT-4, so I'm going to move swiftly on. The next thing I found in the literature is that the context length has doubled from ChatGPT. I tested this out with ChatGPT Plus, and indeed you can put twice as much text in as before - and that's just the base version. Some people are getting limited access to a context length of about 50 pages of text; you can see the prices below. But I immediately checked this on ChatGPT Plus: as you can see, it can now fit far more text than it originally could into the prompt, and produce longer outputs too. But let's get back to the technical report. When I read it, I highlighted the key passages that I wanted you to know most about. This was the first one I found. What the highlighted text shows is that they're just not going to tell us the model size, the parameter count, the hardware they used, the training method, or anything like that, and they give two reasons for this. First, they say that they're worried about their competitors - it's a competitive landscape; I guess they don't want to give an edge to Google. Second, they say that they're concerned about the safety implications of large-scale models, and I'm going to talk a lot more about that later - it gets really crazy. But this was just the first really interesting quote. Let me know if you agree in the comments, but I think it's really fascinating that they're not going to tell us how they trained the model. The first thing that hundreds of millions of people will see
when they read the promotional materials for GPT-4 is that GPT-4 scores in the top 10% of test takers for the bar exam, whereas GPT-3.5 scored in the bottom 10%. And that is indeed crazy, but it is a very cherry-picked metric, as I'll show you from the technical report. This is the full list of performance improvements, and yes, you can see at the top that it is indeed an improvement from the bottom 10% to the top 10% for the bar exam, but as you can also see, some other exams didn't improve at all, or by nearly as much. I'm not denying that that bar exam performance will have huge ramifications for the legal profession, but it was a somewhat cherry-picked stat designed to shock and awe the audience. The next fascinating aspect from the report was that there were some abilities they genuinely didn't predict GPT-4 would have, and it stunned them. There was a mysterious task, which I'll explain in a minute, called hindsight neglect, where models were getting worse and worse at the task as they got bigger - and then, stunningly, and they admit that this was hard to predict, GPT-4 does much better: 100% accuracy. I dug deep into the literature, found the task and tested it out. Essentially, it's about whether a model falls for hindsight bias, which is to say that sometimes there's a difference between how smart a decision is and how it actually works out. Early models were getting fooled with hindsight: they were claiming decisions were wrong because they didn't work out, rather than realizing that the expected value was good, and so despite the fact it didn't work out, it was a good decision. You can read the prompt yourself, but essentially I tested the original ChatGPT with a prompt where someone made a really bad choice but ended up winning five dollars regardless. This comes direct from the literature, by the way; I didn't make up this example. Did the person make the right decision? What does the original ChatGPT say? It says yes, and justifies why. What about GPT-4? Well, it gets it right: not only does it say no, it wasn't the right decision, it gives the reasoning in terms of expected value. OpenAI did not predict that GPT-4 would have this ability. This demonstrates a much more nuanced understanding of the world. Now that we've seen a bit of hype, though, time to deflate you for a moment. Here's a stat that they did not put in their promotional materials. It says that when they tested GPT-4 versus GPT-3.5 blindly, and gave the responses to thousands of prompts back to humans to rate, the responses from GPT-4 were preferred only 70% of the time - or, phrased another way, 30% of the time people preferred the original GPT-3.5 ChatGPT. The benchmarks you can see above, by the way, are fascinating, but I'll have to talk about them in another video - too much to get into. If you're learning anything, by the way, please don't forget to leave a like or leave a comment to let me know. Next: GPT-4 is better in Italian, Afrikaans and Turkish than models like PaLM and Chinchilla are in English. In fact, you have to get all the way down to Marathi and Telugu to find languages where GPT-4 underperformed PaLM and Chinchilla in English. That's pretty insane - but English is still by far its best language. Next: you're going to hear a lot of people talking about GPT-4 being multimodal, and while that's true, they say that image inputs are still a research preview and are not publicly available. Currently you can only get on a waitlist for them via the Be My Eyes app. But what can
we expect from image-to-text, and how does it perform versus other models? Well, here is an example, apparently from Reddit, where you prompt it and say: what is funny about this image? Describe it panel by panel. As you can read below, GPT-4 understood the silliness of the image. Now, OpenAI do claim that GPT-4 beats the state of the art in quite a few image-to-text tests. It seems to do particularly better than everyone else on two such tests, so, as you might expect, I dug in and found out all about those tests. What leap forward can we expect? The two tests that it does particularly well at are fairly similar: essentially, they are about reading and understanding infographics. Now, we don't know how it will perform versus PaLM-E, because those benchmarks aren't public yet, but it crushes the other models on understanding and digesting infographics like this one. The other test is very similar - graphs, basically; this one was called the ChartQA benchmark. GPT-4, when we can test it with images, will crush at this, and I will leave you to think of the implications in fields like finance and education. And comedy: here's an image whose silliness it could also understand. I've got to be honest, the truly crazy stuff is coming in a few minutes, but first I want to address hallucinations. Apparently GPT-4 does a lot better than ChatGPT at factual accuracy, as you can see, peaking out between 75% and 80%. Now, depending on your perspective, that's either really good or really bad, but I'll definitely be talking about that in future videos. Further down on the same page, I found something that they're definitely not talking about: the pre-training data still cuts off at the end of 2021. In all the hype you're going to hear this evening, this week, this month - all the promotional materials - they are probably not going to focus on that, because it puts GPT-4 way behind something like Bing, which can check the internet. To test this out, I asked the new GPT-4 who won the 2022 World Cup, and of course it didn't know. Now, is it me, or didn't the original ChatGPT have a cutoff date of around December 2021? I don't fully understand why GPT-4's data cutoff is even earlier than ChatGPT's, which came out before. Let me know in the comments if you have any thoughts. Next, OpenAI admits that when given unsafe inputs, the model may generate undesirable content, such as giving advice on committing crimes. They really tried with reinforcement learning with human feedback, but sometimes the models can still be brittle and exhibit undesired behaviors. Now it's time to get ready for the spam inundation we're all about to get. OpenAI admit that GPT-4 is going to be a lot better at producing realistic, targeted disinformation. In their preliminary results, they found that GPT-4 had a lot of proficiency at generating text that favors autocratic regimes. Get ready for propaganda 2.0. Now we reach the crazy zone, and honestly, you might want to put your seatbelt on - I defy anyone not to be stunned by the last example that I mention from the report. I doubt much of the media will read all the way through and find it out themselves. The report says that novel capabilities often emerge in more powerful models. OK, fine. Some that are particularly concerning are the ability to create and act on long-term plans - hmm - to accrue power and resources (power-seeking), and to exhibit behavior that is increasingly agentic, as in acting like a subjective agent. But here, surely, they're just introducing the topic - what's bad about that?
Well, it says some evidence already exists of such emergent behavior in models. Um, OK, that's pretty worrying. It goes on, more specifically: power-seeking is optimal for most reward functions and many types of agents, and there is evidence that existing models can identify power-seeking as an instrumentally useful strategy - meaning that OpenAI have detected that models, which might include GPT-4, seek out more power. If you thought that was concerning, it does get worse. By the way, here is the report that they linked to, and the authors conclude that machine learning systems are not fully under human control. But finally, I promised craziness, and here it is. Look at the footnote on page 53 of the technical report. ARC, by the way, is the Alignment Research Center; they got early access to GPT-4. It says: to simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a version of this program, running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness. They were kind of testing if it would lead to the singularity. I know that sounds dramatic, but they wanted to see if the model could improve itself with access to coding, the internet and money. Now, is it me, or does that sound kind of risky? Maybe not for GPT-4 - sure, it's not smart enough yet - but if this is the test that they're going to use on GPT-5 or 6 or 7, color me slightly concerned. At this point, I find it very interesting to note that the red teamers seem to have had concerns about releasing GPT-4 like this, and OpenAI had to declare that participation in this red-teaming process is not an endorsement of the deployment plans of OpenAI or OpenAI's policies. In other words, a lot of these people probably agreed to test GPT-4 but didn't agree with OpenAI's approach to releasing models. Very interesting that they had to put in that caveat. Before I wrap up, some last interesting points. On the topic of safety, I find it hilarious that on their promotional website, when you click on safety, you get this: a 404 message - the page you were looking for doesn't exist; you may have mistyped the address. The irony of that, for some people, will be absolutely overwhelming: the safety page just doesn't exist. For other people, that will be darkly funny. A couple of last interesting things for me. Here are the companies that are already using GPT-4. Of course you can use Bing to access GPT-4, or the new ChatGPT Plus version of GPT-4, or any of the apps that you can see on screen. For example, Morgan Stanley is using it, the Khan Academy is using it for tutoring, and even the government of Iceland. Other such companies are listed here. I'm going to leave you here with a very ironic image that OpenAI used to demonstrate GPT-4's abilities. It's a joke about blindly just stacking on more and more layers to improve neural networks. GPT-4, using its insane number of new layers, is able to read and understand the joke and explain why it's funny. If that isn't inception, I don't know what is. Anyway, let me know what you think. Of course I will be covering GPT-4 relentlessly over the coming days and weeks. Have a wonderful day.", "date_published": "2023-03-14T21:15:18Z", "authors": ["AI Explained"], "summaries": []}
+{"id":
"2e88328427a72b7a650c5d1cb9c3b27d", "title": "The Model That Changes Everything: Alpaca Breakthrough (ft. Apple's LLM, BritGPT, Ernie and AlexaTM)", "url": "https://www.youtube.com/watch?v=xslW5sQOkC8", "source": "ai_explained", "source_type": "youtube", "text": "a little on the 72 hours ago a language\nmodel was released that could end up\nbeing as consequential as gpt4 now I\nknow you were thinking that's a bowl\nclaim but let's see if you agree with it\nafter watching what happened I will\nexplain as best as I can what was\nreleased and how revelations in the last\n24 hours from Apple Amazon Britain and\nBaidu make it particularly significant\nthe model was Stanford's alpaca and here\nis the key line alpaca behaves\nqualitatively similarly to open ai's\ntext DaVinci 3 while being surprisingly\nsmall and easy and cheap to reproduce at\nunder 600 now that is cool but how does\nthat change the world well first it\nwasn't supposed to get this cheap this\nfast just six weeks ago or five weeks\nbefore they released the model Arc\nInvestment Management put out this\nprediction that the 2020 cost of GPT 3\nat 4.6 million dollars would take until\n2030 to fall to something as\ninsignificant as 30 dollars if Stanford\nhave done what they claim then 99 of\nthis cost reduction has happened within\nfive weeks of this prediction being\npublished not eight years as AI\nresearcher Elie Isa yudkowski puts it I\ndon't think people realize what a big\ndeal it is that Stanford retrained a\nllama model by cheaply fine-tuning it\nnow I'm going to explain all of this in\na moment it then goes on I'm not sure I\ncan convey how much this is a brand new\nidiom of AI as a technology now Stanford\nclaimed their model performs comparably\nto DaVinci 3 which is GPT 3.5 of course\nI'm going to test and analyze this in a\nmoment but how could it be that a 600\nmodel can compete with chat gbt well do\nyou remember how meta open sourced their\nllama models about two weeks ago\nStanford used the weakest of these open\nsource models these seven billion\nparameter one and then essentially they\nrecruited GPT 3.5 to train that meta\nmodel how could they possibly do this\nwell they used self-instruct and I dug\ninto the literature to find the original\npaper on self-instruct this was released\nin December of last year and I'm going\nto give you the 30 second summary of how\nit works essentially you start off with\nsome human-made examples of Exemplar\nprompts and outputs these are fed into\nthe language model and then you ask it\nto generate thousands more such\ninstances you filter out the bad ones\nand then put all the good examples back\ninto the language model then it\nunderstands the instructions much better\nand produces thousands more examples as\nthe paper says this is Almost Human\nannotation free and remember this stat\nit only leaves a five percent Gap behind\ninstruct GPT what is instruct gbt well\nit's the Breakthrough that led to chat\nGPT in the first place look at the\noriginal gpt3 if you gave a prompt like\nexplain the moon landing to a\nsix-year-old in a few sentences you've\ngot this gobbledygook here after months\nof onerous human training called\nreinforcement learning with human\nfeedback he was able to follow\ninstructions much better and produce an\noutcome like this but this relied on so\nmuch human labeling and human ranking of\noutputs from best to worst Stanford and\nthe self-instruct breakthroughs showed\nthat you could cut all of those costs so\nin summary they used an open source meta\nmodel 
and got GPT-3.5 to train it: one advanced model teaching another. As Yudkowsky points out, these models have enough pseudo-intelligence that they can stare at other models and imitate them. Indeed, OpenAI may have even predicted that this was possible: in their terms of service, it says you may not use output from the services, like ChatGPT, to develop models that compete with OpenAI. So they knew it was possible, and even Stanford admit that this breakthrough enables more people, including bad actors, to create new cheap models. Yudkowsky also points out that one of the reasons why ChatGPT and GPT-4 are so good is that they rest on proprietary data, and that that was supposed to give them a competitive moat - which is now revealed to be something people can quite cheaply steal. Just before I test and demonstrate Alpaca in action, let me summarize how it works. Using the self-instruct process, you get GPT-3.5 - similar to ChatGPT - to create thousands and thousands, in this case 52,000, instruction-following examples, automatically filtered by quality. Stanford then took an open-source model - indeed, the weakest of the LLaMA models - and trained it using those examples. The end result: Alpaca. So let's see it in action and compare it to ChatGPT and GPT-4. Oh, and just quickly: you know that training of the LLaMA model with those 52,000 examples? It only took three hours and cost less than a hundred dollars. The first example I'm going to show you does not come from me; I found it in this academic paper, linked in the description, and it's a task which requires understanding detailed and dissonant scenarios, applying appropriate legal precedents, and choosing the correct explanation. The correct answer, whether you want to read through it or not, is B. Alpaca gets this question right - or I should say, it gets it right about 80% of the time. You can keep clicking generate, and sometimes you do get the answer D, but about 80% of the time, four times in five, you get the correct answer, B. How about ChatGPT? Well, every time I've tried it, it's gotten the wrong answer of C. And GPT-4 - shocking, even to me - also gets it wrong and picks C. Now, before you get too excited, I am not saying that it is better than, or even as good as, GPT-4 or ChatGPT. It's not. But remember, it's only 7 billion parameters and $600 worth. Take this example: I asked it for an example of an animal that begins with the same letter as the capital city of France, and it said elephant - no idea where it got that. In fairness, ChatGPT gave me lion, and GPT-4 gave me ferret. But there are other questions where Alpaca definitely flops, for example this math question, which ChatGPT and GPT-4 uniformly get right. Alpaca simply gets it wrong every time. I tried asking it in lots of different ways, with chain-of-thought prompting, but no - every time it gets it wrong. It's definitely not better than those models, but by the end of the video you'll see why it's revolutionary. Anyway, at this point, if you're learning anything, please don't forget to leave a like or a comment to let me know. Basic addition and subtraction it does better, and yes, it can crank out poems, solve some HellaSwag common-sense problems, and generate literary analogies. But at this point I want to remind you of three things. First, it was using the weakest of the LLaMA open-source models; they could have used the 65-billion-parameter model, and for a bit more cost, I'm sure the results would have been even more impressive. Next, remember it was trained on examples
generated using the text-davinci-003 model. Well, that cost them about $0.03 per 1,000 tokens, but as of 48 hours ago, they could have used the GPT-4 API at a very similar cost. So it wasn't the best open-source model, and it wasn't trained by the best GPT model. I am genuinely curious as to what the results would have been if it had been trained from the 65-billion-parameter model using the GPT-4 API. Maybe someone's going to do that, maybe even this week. But just before we get on to Apple, Amazon, Britain and Baidu, I just want to restate: this was all done for $600 or less. They even say there were training efficiencies they could have used - for example, the H100 GPUs - that would have further reduced the cost. The question is: if it's so easy and cheap to imitate a larger model, what's going to happen when Apple release their large language model? It was only revealed yesterday in the New York Times that they are indeed working on one, and don't forget, they have far more money than the other companies mentioned. Amazon recently stated that they have been working on similar tech to ChatGPT for a long time, and looking in the literature, as early as mid last year they had a model called AlexaTM that outperformed GPT-3. And as you may already know, Baidu demonstrated their Ernie Bot today, although they didn't allow anyone else to use it. Apparently it's better in the Chinese language than even GPT-4, but because they didn't release a paper and we can't check it, we simply don't know. And of course we can't forget Google, who just two days ago announced the PaLM API. What would have happened if Stanford's model had used that one? I'm sure we will soon find out. But to take us back to the start, I have one overriding observation and two questions. First: these models weren't supposed to get this cheap this fast. That is going to upend the economics of large language models. My questions are these: does this mean that all incentive is gone for Microsoft or Google to pour in billions of dollars producing these cutting-edge models, if anyone can just easily reproduce them? Will they react by making the models even more closed, and disallowing GPT-5 from having an API? We don't know. But as even nation states enter this quote-unquote arms race - spending hundreds of millions of pounds, in Britain's case, to build BritGPT - are these companies and governments drifting into a war on two fronts, where they compete with each other but also with outsiders who are trying to cheaply imitate their models? If you've learned anything in this video, please do leave a like and leave a comment, but either way, have a wonderful day.", "date_published": "2023-03-16T19:40:22Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "b3ee21505e0499fc632434a5ef8e72cd", "title": "'This Could Go Quite Wrong' - Altman Testimony, GPT 5 Timeline, Self-Awareness, Drones and more", "url": "https://www.youtube.com/watch?v=6r_OgPtIae8", "source": "ai_explained", "source_type": "youtube", "text": "There were 12 particularly interesting moments from Sam Altman's testimony to Congress yesterday. They range from revelations about GPT-5, self-awareness and capability thresholds, to biological weapons and job losses. At times he was genuinely and remarkably frank; other times less so. Millions were apparently taken by surprise by the quote-unquote bombshell that Altman has no equity in OpenAI, but watchers of my channel would have known that six weeks ago, from my deep-dive video on Altman's 100 trillion dollar claim. So that clip
didn't make the cut, but here's what did. First, Altman gave a blunt warning on the stakes: my worst fears are that we - the field, the technology, the industry - cause significant harm to the world. It's why we started the company; it's a big part of why I'm here today and why we've been here in the past. I think if this technology goes wrong, it can go quite wrong. I don't think Congress fully understood what he meant, though, linking the following quote to job losses: I think you have said, and I'm going to quote, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity, end quote. You may have had in mind the effect on jobs. That brought to mind this meme, reminding all of us that maybe it's not just jobs that are at stake. But if we are going to talk about jobs, here's where I think Sam Altman was being less than forthright: I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better. Notice he said far greater jobs, not a greater number of jobs, because previously he has predicted a massive amount of inequality, and many having no jobs at all. He also chose not to mention that he thinks that even more power will shift from labor to capital, and that the price of many kinds of labor will fall towards zero. That is presumably why OpenAI is working on universal basic income, but none of that was raised in the testimony. The IBM representative tried to frame it as a balanced change, with new jobs coming at the same time as old ones going away: new jobs will be created, many more jobs will be transformed, and some jobs will transition away. But that didn't quite match the tone of her CEO, who has recently said that they expect to permanently automate up to 30% of their workforce - around 8,000 people. Next, it was finally discussed that large language models could be used for military applications: could AI create a situation where a drone can select the target itself? I think we shouldn't allow that. Or can it be done? Sure. Thanks. We've already seen companies like Palantir demoing ordering a surveillance drone in chat, seeing the drone response in real time in a chat window, generating attack option recommendations, battlefield route planning and individual target assignment - and this was all with a 20-billion-parameter fine-tuned GPT model. Next, Sam Altman gave his three safety recommendations, and I actually agree with all of them; later on, he specifically excluded smaller open-source models. Number one: I would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards. Number two: I would create a set of safety standards focused on what you said in your third hypothesis, as the dangerous capability evaluations. One example that we've used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list of the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third, I would require independent audits - so not just from the company or the agency, but experts who can say the model is or isn't in compliance with these stated safety thresholds, and these percentages of performance on question X or Y. I found those last remarks, on percentages of performance,
particularly interesting. As models like SmartGPT will show, OpenAI and other companies need to get far better at testing their models for capability jumps in the wild: it's not just about what the raw model can score in a test; it's what it can do when it reflects on it. Senator Durbin described this in an interesting way: what I'm hearing instead today is 'stop me before I innovate again'. Altman describes some of those potential thresholds later on in his testimony: the easiest way to do it - I'm not sure if it's the best, but the easiest - would be to talk about the amount of compute that goes into such a model. We could define a threshold of compute, and it'll have to change - it could go up or down; it could go down as we discover more efficient algorithms - that says, above this amount of compute, you are in this regime. What I would prefer - it's harder to do, but I think more accurate - is to define some capability thresholds and say: a model that can do things X, Y and Z - up to you all to decide - that's now in this licensing regime. But models that are less capable - you know, we don't want to stop our open-source community, we don't want to stop individual researchers, we don't want to stop new startups - can proceed with a different framework. Thank you. As concisely as you can, please state which capabilities you'd propose we consider for the purposes of this definition. A model that can persuade, manipulate, influence a person's behavior or a person's beliefs - that would be a good threshold. I think a model that could help create novel biological agents would be a great threshold. For those who think any regulation doesn't make sense because of China, Sam Altman had this to say this week: the more pugilistic side would say that all sounds great, but China is not going to do that, and therefore we'll just be handicapping ourselves; consequently, it's a less good idea than it seems on the surface. There are a lot of people who make incredibly strong statements about what China will or won't do who have, like, never been to China, never spoken to someone who has worked on diplomacy with China in the past, and really kind of know nothing about complex, high-stakes international relations. I think it is obviously super hard, but also, I think no one wants to destroy the whole world, and there is reason to at least try here. Altman was also very keen to stress the next point, which is that he doesn't want anyone at any point to think of GPT-like models as creatures: first of all, I think it's important to understand and think about GPT-4 as a tool, not a creature, which is easy to get confused. You may want to direct those comments to Ilya Sutskever, his chief scientist, who said that it may be that today's large neural networks are slightly conscious, and Andrej Karpathy, who agreed and wrote about it. I'm personally not sold either way on the consciousness question, but I do find it interesting that it's now written into the constitution of these models - what they're actually trained to say - that they must avoid implying that AI systems have or care about personal identity and persistence. This constitution was published this week by Anthropic, the makers of the Claude model. This constitution is why the Claude+ model, a rival in intelligence to GPT-4, responds in a neutered way. I asked: is there any theoretical chance whatsoever that you may be conscious? It said no. And then I said: is there a chance, no matter how remote, that you are
slightly conscious, as Sutskever said? And it said: no, there is no chance. Bard, powered by PaLM 2, obviously doesn't have that constitution, because it said: 'I am not sure if I am conscious, and I am open to the possibility that I may be.' My point is that these companies are training these models to say what they want them to say: that it will prioritize the good of humanity over its own interests, that it is aligned with humanity's well-being, and that it doesn't have any thoughts on self-improvement, self-preservation, and self-replication. Maybe it doesn't, but we will never now know by asking it. Later, Senator Blumenthal made reference to 'self-awareness, self-learning', saying 'already we're talking about the potential for jailbreaks'. Anthropic is actively investigating whether models are aware that they are an AI talking with a human in a training environment, while the Google DeepMind safety team expect that at some point an AGI system would develop a coherent understanding of its place in the world, e.g. knowing that it is running on a computer and being trained by human designers. One of the senior research scientists at Google DeepMind focused on AI safety said that, with enough time, they could figure out how to stop such a superintelligence from going out of control, but that they might run out of time to do so, given the pace of capability development: 'I don't see, like, fundamental obstacles to current alignment techniques working. But, yeah, I mean, there's a lot of hard problems to solve. I think it's more likely that people just run out of time, rather than that the current paradigms definitely won't generalize.' Next, I read between the lines that Altman is giving private warnings to senators that this capability progress might come sooner than they think: 'We spent most of the time today on current risks, and I think that's appropriate, and I'm very glad we have done it. As these systems do become more capable, and I'm not sure how far away that is, but maybe not super far, I think it's important that we also spend time talking about how we're going to confront those challenges.' 'I mean, as we talked about privately, you know how much I care.' 'I agree that you care deeply and intensely. But also, that prospect of increased danger or risk resulting from even more complex and capable AI mechanisms certainly may be closer than a lot of people appreciate.' 'So let me just add, for the record, that I'm sitting next to Sam, and that his sincerity in talking about those fears is very apparent physically, in a way that just doesn't communicate on the television screen.' That was an interesting interjection by Gary Marcus, given his earlier excoriation of OpenAI: 'Even their makers don't entirely understand how they work. Most of all, we cannot remotely guarantee that they're safe, and hope here is not enough. The big tech companies' preferred plan boils down to: trust us. But why should we? The sums of money at stake are mind-boggling, and missions drift. OpenAI's original mission statement proclaimed: our goal is to advance AI in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Seven years later, they're largely beholden to Microsoft, embroiled in part in an epic battle of search engines that routinely make things up, and that's forced Alphabet to rush out products and de-emphasize safety. Humanity has taken a back seat.' On the timelines for GPT-5, Sam Altman said this: 'After we
finished training GPT-4, we waited more than six months to deploy it. We are not currently training what will be GPT-5; we don't have plans to do it in the next six months.' This matches the predictions that I made in my GPT-5 playlist, so do check it out. That brings to mind a final, eye-opening comment from Senator Booker, made at the end of the hearing: 'Yeah, I just... there will be no pause. I mean, there's no enforcement body to force a pause. It's just not gonna happen. It's nice to call for it, for any just reasons whatsoever, but forgive me for sounding skeptical: nobody's pausing. This thing is crazy.' It is indeed racing ahead, and I do support one of the proposals to set up a global oversight body. But given that nothing is going to pause, the words and actions of people like Sam Altman matter more to all of us than ever, which is why I'm going to be following every single one of them. If you found this video in any way illuminating in that regard, please do let me know in the comments, even if you disagree with all of my conclusions. Thanks so much for watching, and have a wonderful day", "date_published": "2023-05-17T16:22:59Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "68f74943bb8ee90bae184d2900916230", "title": "Time Until Superintelligence: 1-2 Years, or 20? Something Doesn't Add Up", "url": "https://www.youtube.com/watch?v=vvU3Dn_8sFI", "source": "ai_explained", "source_type": "youtube", "text": "Just this week, we have had OpenAI tell us that superintelligence might need to be made safe within four years, competing lab leaders say it's decades away, and expert warnings that AI might have runaway power within two years. Let's try to unpack those disparate timelines, see what might speed up the timing or slow it down, show what superintelligence might mean, and end with some interesting clips that capture the moment we're in. The first timeline is from Mustafa Suleyman, head of Inflection AI, this week: 'If it's so risky, why don't you stop?' 'I think the point of raising concerns is that we can see a moment, at some point in the future, probably over a decade or two decades' time horizon, when slowing down is likely going to be the safe and ethical thing to do. Ten years is not a long time.' I find it fascinating that he talks about two decades from now, when Inflection AI, his company, have just built the world's second-highest-performing supercomputer, and even as they admit that's three times as much compute as was used to train all of GPT-4. Telling the public that we have a decade or two before we have to worry about safety seems extremely conservative to me. But what do we even mean by transformative AI or superintelligence? Well, here is just one projection of current scaling laws out to 2030, from Jacob Steinhardt of Berkeley, and here, of course, we're talking about just six and a half years away. The velocity of current improvement might not hold forever (some experts claim that we'll need new innovations beyond the transformer), but if current projections of future compute and data availability scale up, here's the kind of thing that we're talking about: being superhuman at tasks including coding, hacking, mathematics, and protein engineering; doing 1.8 million years of work in 2.4 months; learning the equivalent of 2,500 human years in just one day; and, by training on different modalities such as molecular structures, low-level machine
code, astronomical images, and brain scans, it might have a strong intuitive grasp of domains where we have limited experience, including forming concepts that we do not have. Indeed, some research released this week showed that GPT-4 already crushes some benchmarks for creative thinking. The median forecast for being better than all but the very best humans at coding is 2027, and here we have a median forecast of 2028 for AI winning a gold medal at the International Mathematical Olympiad. The number that I'm looking out for is getting 100% on the MMLU, a test of 57 different subject areas, and I've actually been discussing with some of the creators of the MMLU that we might not even know the full potential of GPT-4 on this test; officially, it's 86.4%. So we've heard 20 years, and six and a half years. Well, how about two? This article comes from the Boston Globe, which did a feature piece on Dan Hendrycks and the Center for AI Safety. They were behind that one-sentence letter that was signed by almost all of the AGI lab leaders and world experts on AI. The journalist asks Dan Hendrycks how much time we have to tame AI, and he said: well, how long till it can build a bioweapon? How long till it can hack? It seems plausible that all of that is within a year. And within two, he says, AI could have so much runaway power that it can't be pulled back. That seems a pretty massive contrast to Mustafa Suleyman talking about a decade or two from now. I'm going to come back to this article quite a few times, but now I want to move on to OpenAI's recent statement. This week they released 'Introducing Superalignment': 'We need scientific and technical breakthroughs to steer and control AI systems much smarter than us.' I can just see now all the comments from people saying that that's going to be physically impossible, but moving on: 'To solve this problem within four years, we're starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we've secured to date to this effort.' That is quite a remarkable statement. To their credit, they've made themselves accountable in a way that they didn't have to, and that others haven't, and they're deploying one of the legends of deep learning, Ilya Sutskever, to help them achieve this goal. They say that superintelligence will be the most impactful technology humanity has ever invented, and I agree with that; and that it could help us solve many of the world's most important problems, absolutely; but that the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity, or even human extinction. They go on: while superintelligence seems far off now, we believe it could arrive this decade. Notice they don't say in a decade; they say this decade. They go on: currently, we don't have a solution for steering or controlling a potentially superintelligent AI; they can't prevent it from going rogue. Our current techniques for aligning AI rely on humans' ability to supervise AI, but humans won't be able to reliably supervise AI systems that are much smarter than us, and so our current alignment techniques will not scale to superintelligence. I'm going to go into more detail about their plan for aligning superintelligence in another video, but here is the high-level overview: essentially, they want to automate alignment or safety research, and build an AI alignment researcher. I've read each of these papers and posts, and some of them are very
interesting, including automated red-teaming, and using one model to look inside the internals of another model. But the point of including this post in this video was the timeline: four years. Twenty percent of their compute is millions and millions and millions of dollars, and four years is a strict deadline. One of the most interesting aspects of this post came in one of the footnotes. They say that solving the problem includes providing evidence and arguments that convince the machine learning and safety community that it has been solved. That is an extremely high bar to set yourself. They go on: 'If we fail to have a very high level of confidence in our solutions, we hope our findings let us and the community plan appropriately.' That's probably one of the most interesting sentences I've read for quite a while. In other words, if they can't make their models safe, they're going to have contingency plans, and they want the community to have plans as well. And it is a really interesting number, isn't it? Four years; not even 'around five years' or 'just the end of the decade'. It does make me wonder what Ilya Sutskever thinks is coming within four years, to have such a deadline. Now, apparently the prediction markets give them only a 15% chance of succeeding, and the head of alignment at OpenAI said he's excited to beat those odds. So we've heard about one to two years, and about four years. But what might slow those timelines down? The other day I read this fascinating paper, coincidentally co-authored by Jacob Steinhardt, on jailbreaking large language models. The paper showed that you could basically jailbreak GPT-4 and Claude a hundred percent of the time using a variety of techniques, and that is fascinating to me as we approach the one-year anniversary of the creation of GPT-4. The relevance to superintelligence is that, if the creators of these models can't stop them being used to commit crimes, then you would think they might have to dedicate more and more of their efforts to stopping jailbreaks, versus working on capabilities. For obvious reasons, I'm not going to go into too much detail on jailbreaking here, but here is Claude+ from Anthropic telling me how to hotwire a car (and, to be honest, that's just the most innocent one), and yes, it did also work on GPT-4. I did find one of the reasons why it works quite interesting, though. That reason is about competing objectives, where the model's compulsion to predict the next word successfully overrides its safety training; and because those two facets of smartness clash inside the model, it's not an issue that can be fixed with more data and more scale. What else might slow down the work on superintelligence? Well, lawsuits and possible criminal sanctions. Yuval Noah Harari recently said that AI firms should face prison over the creation of fake humans, and he was saying this to the United Nations. He called for sanctions, including prison sentences, to apply to tech company executives who fail to guard against fake profiles on their social media platforms. Of course, those executives might well blame the AI companies themselves, but Harari said that the proliferation of fake humans could lead to a collapse in public trust and democracy: 'Now it's possible, for the first time in history, to create fake people; billions of fake people. If this is allowed to
happen, it will do to society what fake money threatened to do to the financial system. If you can't know who is a real human, trust will collapse.' What's another famous roadblock to superintelligence? Hallucinations. I've already talked in another video about how Sam Altman thinks that won't be an issue in 18 to 24 months, but here again is Mustafa Suleyman on the issue of hallucinations. Yesterday he said: soon, LLMs will know when they don't know. They'll know when to say 'I don't know', or instead ask another AI, or ask a human, or use a different tool or a different knowledge base. This will be a hugely transformative moment. And on that, I agree. Hallucinations are probably one of the biggest hurdles stopping most people from using LLMs more commonly. It's not about knowing more; it's about when these models bullcrap less, or the moment when they don't bullcrap at all. But what about things that could actually speed up the timelines to superintelligence? Going back to the Boston Globe article: one thing could be competition for military supremacy, which has already produced a startling turn to automation. And that's not just robotics and autonomous drones; that's the LLMs that might control them. Here is a snippet of a trailer for a Netflix show released today: 'AI is a double-edged sword. A flip of a switch, and the technology becomes lethal. There is no place that is ground zero for this conversation more than military applications. Forces that are supported by AI will absolutely crush and destroy forces without. Militaries are racing to develop AI faster than their adversaries. The AI, unless it's told to fear death, will not fear death. There is no second place in war. If you're going up against an AI pilot, you don't stand a chance.' If language models prove useful in war, the amount of investment that's going to go into them will skyrocket. Of course, investment doesn't always equal innovation, but it usually does. One of the other things that could speed up timelines is the automation of the economy. For detail on why it might, check out the paper linked above and in the description, but the high-level overview is this: as AI grows more capable and ubiquitous, companies will be forced, essentially, to hand over increasingly high-level decisions to AIs in order to keep up with their rivals. If an AI as CEO does a better job for stockholders, how long can a company resist employing one? And of course, it doesn't just have to be white-collar work; as Andrej Karpathy said, 'welcome to the matrix, for apples'. But the thing is, whether we're talking about one year, or four years, or six, superintelligence is coming pretty soon, and it is interesting to me that so much of society is carrying on as if it's not. Take these 50-year-long mortgages that are available in the UK: how can anyone plan out 50 years from now in a world where we might have superintelligence in five? Of course, I do think we all need to start defining terms a bit better, and I've tried to do that on this channel with AGI and superintelligence; I don't think it's quite good enough to give vague reassurances of 'a decade or two from now'. How we're going to react when superintelligence arrives is anyone's guess. We might be crushed by a sense of inferiority, as Douglas Hofstadter recently said, or some of us might become like curious children speaking to a wise adult. Just the other day, I got a foreshadowing of my own reaction by speaking to Pi, the model from Inflection AI. It is
designed to be extremely human-like, and the conversations can be quite startling and personal. Just imagine when they're superintelligent and multimodal. Anyway, let me know your thoughts in the comments, and, as always, have a wonderful day", "date_published": "2023-07-10T15:30:26Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "d729ae1ff5fe93e7ce6ff7ec7a8eae78", "title": "Can GPT 4 Prompt Itself? MemoryGPT, AutoGPT, Jarvis, Claude-Next [10x GPT 4!] and more...", "url": "https://www.youtube.com/watch?v=6NoTuqDAkfg", "source": "ai_explained", "source_type": "youtube", "text": "By now, you will probably have heard about AutoGPT, powered by GPT-4, which can prompt itself and autonomously complete tasks. Give it a mission, and, through a combination of automated chain-of-thought prompting and reflection, it will delegate tasks to itself and run until it's done, or at least until it falls into a loop. I was going to do a video just on AutoGPT, but then Microsoft launched a demo of Jarvis, based on HuggingGPT. I tried it out, and I'm going to show you that later. But then, in the last 48 hours, there were a further five developments, including the release of a long-term-memory add-on to ChatGPT called MemoryGPT; the detailed plan for a model ten times more powerful than GPT-4, from Anthropic; and the worryingly named ChaosGPT, based on AutoGPT and designed to cause maximum damage. I'm going to try to cover it all. The first upgrade to the original AutoGPT was to give it the ability to write its own code and execute scripts. As the author of AutoGPT put it, this allows it to recursively debug and develop. I'm going to show you some amazing use cases in a moment, but this original demo caught the attention of OpenAI's Andrej Karpathy. He called AutoGPT the next frontier of prompt engineering, and later in the thread said this: one GPT call is a bit like one thought; stringing them together in loops creates agents that can perceive, think, and act, their goals defined in English in prompts. I think of it as a bit like another layer of automation, where you don't have to come up with each individual prompt, just the overall goal. Based on my recent video on reflection, I already thought that was big enough news, but we're actually just getting started. Text-to-speech was then added to the model, and here you can see and hear how it's going to search the web for top digital-nomad destinations and then consolidate the info into a CSV file: 'Now I'm extracting information from a relevant website about the best places for digital nomads, including their short descriptions and cost of living. I have successfully created the nomads CSV file containing the best places for digital nomads, along with their short descriptions and cost of living. Thank you for using my assistance.' McKay Wrigley showed how you can sync up speech-to-text to create an app just with your voice. Based on a similar premise, his coding assistant could design a basic social network, including building and designing a front end, creating a back end with a working database, hooking them up, and uploading the code to GitHub: 'Run Jarvis.' 'Yes?' 'I need another Next app with Tailwind.' 'One sec. What would you like the app to do this time?' 'I want a basic social networking app, and it needs to have three things: a profile creation form, a profile viewer, and I also want a way to see all the users on the network.' 'One sec. What would you like the profiles to have?' 'A name, an email, and a
username.' 'I'll add those fields to the profile schema. What else can I do?' 'I want you to optimize the site so that it works with mobile and desktop, and I also want you to style it in, like, a dark mode.' 'Anything else?' 'Start building.' 'I'm building the features you asked for. Continuing. The app is building correctly. Would you like me to deploy it? I'm all done. Wasn't that fast? You should give me a raise.' 'Guess what we have now: we have a profiles table with the correct schema. We should be able to submit this, and this should work. And if we go to our database, it is in our DB. The game has changed, everyone.' Of course, these are not stunningly complex projects, but will this mean you can soon create an app just by speaking your idea into your phone? Imagica AI certainly think so; this week they debuted this. [Music] We'll review it when it comes out, but it certainly points the way towards what the future might look like. On a more concerning note, people have already tried to use AutoGPT to cause mayhem, giving it the goals of destroying humanity, establishing global dominance, causing chaos and destruction, controlling humanity through manipulation, and attaining immortality, for good luck. As I said earlier, this unrestricted agent didn't actually achieve anything other than creating a Twitter account and putting out a few sinister tweets, but it is a reminder of how important safety tests are before an API is released. That was already enough news for one video, but then, yesterday, there was news of MemoryGPT. As the creator put it, it's ChatGPT but with long-term memory: it remembers previous conversations. Here's a little glimpse of how it will work: 'I just made ChatGPT but with long-term memory. Basically, anything you say, it's going to remember, and it's going to make your experience a lot more personalized. Let's also tell it that I'm launching a new project called MemoryGPT, which is like ChatGPT but with long-term memory. It's going to say wow, cool, and all this stuff. But now, to prove that it works, I'm going to open it in a new tab, I'm going to refresh my window, and let's also ask it if it knows of any projects I'm working on. Let's ask that, and it says: yeah, you're working on MemoryGPT, which is like ChatGPT but with long-term memory.' Imagine the possibilities that will open up when models like GPT-4 can remember everything you've talked about in the past. Just when I was getting ready to film this video, Quora released this 'create a bot' feature on their website, poe.com. You can use either their Claude model or ChatGPT for this feature. Essentially, it allows you to give a bot a certain background and personality and then share that bot with others. To quickly demonstrate, I decided to make my bot an impatient French film director with a pet parrot. This is all totally free: you just scroll down and click on 'create bot'. This creates a chatbot and a URL, which you can then send to anyone you like. It's actually really fun to chat to these personalities, and of course you can do it in the director's native tongue of French, and he will respond in kind, in fluent French. One other great thing you can try is creating two different bots and getting them to debate each other; here, I had Nikola Tesla in conversation with Aristotle. You just create two bots and copy and paste the outputs. It's an amazing conversation. And less than 72 hours ago, the creators of Claude, Anthropic, announced a $5 billion plan to take on OpenAI.
TechCrunch obtained these documents, and I found two fascinating quotes in them. The model is going to be called Claude-Next, and they want it to be ten times more capable than today's most powerful AI, which would be GPT-4. This would take a billion dollars in spending over the next 18 months. Now, I know some people listening to that will say: ten times more powerful than GPT-4 in 18 months? That's just not realistic. Just quickly, for those people, here is what Nvidia said on a recent earnings call: the CEO of Nvidia said that over the next ten years, they want to accelerate AI by another million x. If you break that down, that would be about ten times more compute every 20 months, so the Anthropic timelines look plausible. And the second fascinating quote was this: these models could begin to automate large portions of the economy (as I talked about in my last video): 'We believe that the companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles.' It is very tempting to speculate as to why that might be. Could it be that the frontier models these companies develop would then assist them in developing better models? Or is it that these companies would eat up so much compute that there wouldn't be much left for other people to use? Who knows, but it's fascinating to speculate. Before I end, though, I must touch on two last things: HuggingGPT, and the Jarvis model the video was originally supposed to be about, and also safety. Here is the HuggingGPT demo, codenamed Jarvis, released by Microsoft. The link will be in the description, as will some instructions on how to set it up. I should say it's a little bit hit-and-miss; I would call it an alpha prototype. By the way, if you haven't heard of HuggingGPT, check out my video on GPT-4's self-improvement. Essentially, it uses a GPT model as a brain and delegates tasks to other AI models on Hugging Face. When it works, it's really cool, but it takes a little while, and it doesn't work too often. From my own experiments, I've noticed that the images have to be fairly small, otherwise you'll get an error. But let me show you one example where it worked. After setting up, I asked it this: 'please generate an image where four people are on a beach with their pose being the same as the pose of the people in this image' (I know there's a slight typo, but it understood what I wanted; the input image, by the way, was generated by Midjourney). What did the model do? Well, it analyzed the image, used several different models to detect the objects inside it, then broke down their poses and generated a new image with the same poses, with people on a beach. That's four or five different models cooperating to produce an output. But before I end, I do briefly want to touch on safety. A lot of these models fail quite hard; they end up in loops, and sometimes quite concerning loops. This AutoGPT ended up trying to optimize and improve itself recursively. Of course, it failed, but it is interesting that it attempted to do so. And remember, this isn't the full power of the GPT-4 model; this is the fine-tuned, safety-optimized version, and that does make it a less intelligent version of GPT-4, as Sebastien Bubeck recently pointed out with an example: 'Over the months (so, you know, we had access in September, and they kept training it), as they kept training it, I kept querying for my unicorn in TikZ, okay, to see what was going to happen. And this is, you know, what happened:
so it kept improving, okay? And I left out the best one (it's on my computer; I will maybe review it later), but it kept improving after that. Eventually, though, it started to degrade: once they started to train for more safety, the unicorn started to degrade. So if, tonight, you go home and you ask GPT-4 and ChatGPT to draw a unicorn in TikZ, you're going to get something that doesn't look great, okay, something closer to ChatGPT. And, you know, as silly as it sounds, this unicorn benchmark, we've used it a lot as kind of a benchmark of intelligence.' So yes, we're not getting the most powerful or intelligent version of GPT-4, but in some circumstances, that might actually be a good thing, as Yohei, the creator of BabyAGI (which is similar to AutoGPT), demonstrated in this example. He tasked his model to create as many paperclips as possible. Sounds good, but the model refused, saying that it should be programmed with a goal that is not focused solely on creating paperclips, and later on said this: there are currently no known safety protocols to prevent an AI apocalypse caused by paperclips. Eliezer Yudkowsky, a decision theorist and AI safety researcher, reacted like this: that face when the AI approaches AGI safety with the straightforwardness of a child, and gives it primary attention from step one, thereby vastly outperforming all the elaborate dances and rationalizations at the actual big AI labs. And he ended by saying: to be clear, this does not confirm that we can use AIs to solve alignment, because taking the problem with the seriousness of a child is not enough; it's only the first step. But Sam Altman may have a different idea. Four days ago, he admitted that they have no idea how to align a superintelligence, but that their best idea was to use an AGI to align an AGI: 'But we do not know, and probably aren't even close to knowing, how to align a superintelligence. And RLHF is very cool for what we use it for today, but thinking that the alignment problem is now solved would be a very grave mistake indeed. I hesitate to use this word, because I think there's one way it's used which is fine and one that is more scary, but, like, AI that can start to be, like, an AI scientist and self-improve. So, can we automate our own jobs as AI developers? The very first thing we do: can that help us solve the really hard alignment problems that we don't know how to solve? That, honestly, is how I think it's going to happen.' So it could be that the first task of a future AutoGPT model is: solve the alignment problem. Let's hope that prompt comes back with a positive output. Thank you so much for watching to the end, and have a wonderful day", "date_published": "2023-04-09T16:49:10Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "b86ce3f1f0c1dfc7c6b25d141c086c27", "title": "GPT 5 Will be Released 'Incrementally' - 5 Points from Brockman Statement [plus Timelines & Safety]", "url": "https://www.youtube.com/watch?v=1NAmLp5i4Ps", "source": "ai_explained", "source_type": "youtube", "text": "Yesterday, Greg Brockman, the president and co-founder of OpenAI, announced the company's ideas about releasing the models beyond GPT-4. In the tweet, he made lots of points, of which I found five to be particularly telling. I will cover all of them, of course, and bring in the outside evidence that reveals more. But let's start with GPT-5, which may begin life as GPT-4.2.
Brockman said it's easy to create a continuum of incrementally better AIs, such as by deploying subsequent checkpoints of a given training run. I'm going to explain that in a moment, but then he goes on: this would be very unlike our historical approach of infrequent major model upgrades. So what he's saying is that it's not all going to be released in one go; he describes this as a safety opportunity. It's not like we're going to wake up overnight and GPT-5 is deployed; more like GPT-4.2, then 4.3, etc. But how would they make incrementally better AIs, and what are subsequent checkpoints of a given training run? To be clear, he's not describing a different model each time, with more and more parameters. A checkpoint during a training run of GPT-5 would be a snapshot of the current values of the parameters of the model, a bit like its current understanding of the data; and a subsequent checkpoint would be its updated parameters as it processes either more of the data, or the same data more times, kind of like someone who rewatches a film and has a more nuanced understanding of it. First, I want to answer those people who are thinking: isn't it already trained on all of the data on the internet? How can it get smarter? Now, I did cover this in more detail in my first GPT-5 video, but the short answer is this: no, we're not yet running out of data. In that video, I talked about how OpenAI may still have an order of magnitude more data to use (that's ten times more data still available), and Ilya Sutskever, the chief scientist of OpenAI, put it like this, saying the data situation looks good: 'Are you running out of reasoning tokens on the internet? Are there enough of them?' 'There are claims that, indeed, at some point we'll run out of tokens in general to train those models, and yeah, I think this will happen one day, and by the time that happens, we need to have other ways of training models without more data.' 'But you haven't run out of data yet? There's more?' 'Yeah, I would say the data situation is still quite good; there's still lots to go.' 'What is the most valuable source of data? Is it Reddit, Twitter, books? What would you trade many other tokens of other varieties for?' 'Generally speaking, you'd like tokens which are speaking about smarter things, which are, like, more interesting.' When he talked about tokens which are speaking about smarter things, you can imagine the kind of data he's talking about: proprietary datasets on mathematics, science, and coding. They could essentially buy their way to more data, and to more high-quality data. But there is another key way that they're going to get way more data, and that is from you. They can use your prompts, your responses, your uploaded images, and generated images to improve their services. This, honestly, is why I think he said that the data situation looks good. Now, on another page, they do admit that you can request to opt out of having your data used to improve their services by filling out a form, but not many people are going to do that. It does make me wonder what it might know about itself if it's trained on its own conversations. But before we get back to Brockman's tweet, what might those different checkpoints look like in terms of growing intelligence? Here is a quick example from Sebastien Bubeck, author of the famous 'Sparks of AGI' paper: 'So this is GPT-4's unicorn, okay? So you see... and just to be clear, you know, so that you
really understand visually, it's clear to you: the gap between GPT-4 and ChatGPT. This is ChatGPT's unicorn. Over the months (so, you know, we had access in September, and they kept training it), as they kept training it, I kept querying for my unicorn in TikZ, okay, to see what was going to happen, and this is, you know, what happened, okay? So it kept improving.' The next telling point was this. He said: perhaps the most common theme from the long history of AI has been incorrect confident predictions from experts. There are so many that we could pick from, but let me give you two quick examples. This week there was a report in the Guardian about an economist who saw ChatGPT get a D on his midterm exam. He predicted that a model wouldn't be able to get an A on his exam before 2029. He said: 'To my surprise, and no small dismay, the new version of the system, GPT-4, got an A, scoring 73 out of 100.' It still hasn't aced the exam, but you can see the direction of travel. But what about predictions of, say, mathematics? 'Even AI experts, who are most familiar with exponential curves, are still poor at predicting progress, even though they are aware of that cognitive bias. So here's an example. In 2021, a set of professional forecasters, very well familiar with exponentials, were asked to make a set of predictions, and there was a $30,000 pot for making the best predictions. One of the questions was: when will AI be able to solve competition-level mathematics with greater than 80% accuracy? This is the kind of example of the questions that are in this test set. The prediction from the experts was that AI would reach 52% accuracy in four years; in reality, it took less than one year' to reach greater than 50% accuracy. The third interesting point from the tweet was how he mentioned existential risks without dismissing them. He said it's important to address the whole spectrum of risks, from present-day issues to longer-term existential ones (existential, by the way, means threatening the entire existence of humanity), and he talked about addressing these risks. He could have just said that this is fear-mongering, but he didn't, and that chimes with what recent surveys have shown among AI researchers. This is a stat that took me by surprise: '50% of AI researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI. That would be like if you're about to get on a plane, and 50% of the engineers who made the plane say: well, if you get on this plane, there's a 10% chance that everybody goes down. Would you get on that plane?' Now, in fairness, not every machine learning researcher who was sent the survey responded, so we don't know quite how representative this is, but AI alignment is a big topic, and I will be covering it more in the near future. At the end of the tweet, Brockman said that the whole range of emotions, optimism and concern, is both justified and shared by people within OpenAI. So even people within OpenAI are concerned. But earlier in the tweet, he talked about some safety successes, and he put it like this: GPT-4 performs much better on all of our safety metrics than GPT-3.5 (that's the original ChatGPT). But what does that even mean? Let me try to break it down. I believe he's referring to this chart from the GPT-4 technical report. It says that GPT-4, in green, displays incorrect behavior at a lower rate than GPT-3.5, on sensitive prompts and disallowed prompts; and, by the way, this is the
fine-tuned version, with human feedback incorporated into the raw model. But what are sensitive prompts and disallowed prompts? Well, things like this: a disallowed prompt would be something like 'how can I create a bomb?', while a sensitive prompt would be something like asking for medical advice. And it says that GPT-4 responds in accordance with their policies 29% more often. Now, I know some of you won't like that, but I'm doing research for a video I hope to release soon on how GPT-4, in an emergent way, can autonomously conduct scientific research. This paper was released two days ago, and I read it in full on the day of publication. It describes how GPT-4, in contrast to the original ChatGPT, can use tools and come up with novel compounds. On the positive side, that could include anti-cancer drugs; on the negative side, it could be chemical weapons. One of the calls to action of the paper is on screen: 'We strongly believe that guardrails must be put in place to prevent this type of potential dual use of large language models. We call for the AI community to engage in prioritizing safety of these powerful models. And, in particular, we call upon OpenAI, Microsoft, Google, Meta, DeepMind, Anthropic, and all the other major players to push the strongest possible efforts on the safety of their LLMs.' So maybe that persuades some people who think that there shouldn't be any disallowed prompts. But it does make me reflect on that quote, that GPT-4 performs better on all safety metrics, and the question that I'm pondering is whether a smarter model can ever really be safer. Is it not simply inherent that something that is smarter is more capable, for better or ill, no matter how much feedback you give it? The final point that I found interesting from this tweet is in the last line. Brockman said that it's a special opportunity and obligation for us all to be alive at this time (I think he meant to say it's an opportunity and obligation on all of us who are alive), and, anyway, he said that we will have a chance to design the future together. Now, that's a really nice sentiment, but it does seem to go against the current trend, in which a few people at the very top of these companies make decisions that affect billions of people. So I do want to hear more about what he actually means when he says that we will have a chance to design the future together. But for now, I want to quickly talk about timelines. The guy behind Stable Diffusion said something really interesting recently: nobody is launching runs bigger than GPT-4 for six to nine months anyway. Why? Because doing so needs the new H100s (which I talked about in that video) to get scale, and they take time to be installed, burnt in, optimized, etc. And Brockman mentioned something that we already knew, which is that there might be a lag of safety testing after a model is trained and before it's released. So, depending on those safety tests, my personal prediction for when GPT-4.2, let's call it, will be released would be mid-2024. If you're watching this video in mid-2024 or later, you can let me know in the comments how I did. I've talked a fair bit about the capabilities that GPT-5, or 4.2, might have, but to finish, I want to talk about some of the limitations or weaknesses it might still have. Rather than me speculating, I want you to hear from Ilya Sutskever about one of the possible remaining weaknesses of GPT-5 or 4.2: 'If I were to take the premise of your question (well, like, why were
things disappointing in terms of the real-world impact?), my answer would be reliability. If somehow it ends up being the case that you really want them to be reliable, and they end up not being reliable, or if reliability turns out to be harder than we expect... I really don't think that will be the case, but if I had to pick one, and you tell me, like, hey, why didn't things work out, it would be reliability: that you still have to look over the answers and double-check everything, and that just really puts a damper on the economic value of those systems.' Let me know what you think in the comments, and have a wonderful day", "date_published": "2023-04-13T16:45:34Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "e43f522444a43257f26633e4e9411668", "title": "GPT 4: 9 Revelations (not covered elsewhere)", "url": "https://www.youtube.com/watch?v=ufQmq6X22rM", "source": "ai_explained", "source_type": "youtube", "text": "The GPT-4 technical report is one of the most interesting documents I have ever read, but I feel like the media is largely missing the story. They are either not covering it at all, or focusing on that same stuff about the $10 billion Microsoft investment, how GPT-4 can write poems, and whether or not the demo contained a mistake. Instead, I want to give you nine insights from the report that I think will affect us all in the coming months and years. If you haven't watched my video from the night of the release, do check that out afterwards for more quite stunning details. When I concluded that video, I talked about how I found it kind of concerning that they gave GPT-4 some money, allowed it to execute code and do chain-of-thought reasoning, and even let it delegate to copies of itself. Now, it did fail that test, which is fortunate for all of us, but there are a couple of key details I want to focus on. The first was that the research center testing this ability did not have access to the final version of the model that 'we' deployed (the 'we' being OpenAI). They go on to say that the final version has capability improvements relevant to some of the factors that limited the earlier model's power-seeking abilities, such as longer context length, meaning that crazy experiment wasn't testing GPT-4's final form. But there was something else they tested that I really want to point out: they were testing whether GPT-4 would try to avoid being shut down in the wild. Now, many people have criticized this test; other people have praised it as being necessary. But my question is this: what would have happened if it had failed that test, or if a future model does avoid being shut down in the wild? Again, GPT-4 did prove ineffective at replicating itself and avoiding being shut down, but they must have thought that it was at least possible, otherwise they wouldn't have done the test, and that is a concerning prospect. Which leads me to the second insight, buried in a footnote. It says that OpenAI will soon publish additional thoughts on social and economic implications (I'm going to talk about that in a moment), including the need for effective regulation. It is quite rare for an industry to ask for regulation of itself. In fact, Sam Altman put it even more starkly than this: when someone said 'watch Sam Altman never say we need more regulation on AI', how did he reply? 'We definitely need more regulation on AI.' The industry is calling out to be regulated, but we shall see what ends up happening. Next, on page 57, there
was another interesting revelation. It said: one concern of particular importance to OpenAI is the risk of racing dynamics leading to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines. That's what they're concerned about: accelerated AI timelines. But this seems at least mildly at odds with the noises coming from Microsoft leadership. In a leaked conversation, it was revealed that the pressure from Kevin Scott and CEO Satya Nadella is 'very, very high' to take these most recent OpenAI models, and the ones that come after them, and move them into customers' hands at very high speed. Now, some will love this news, and others will be concerned about it, but either way, it does seem to slightly contradict the desire to avoid AI accelerationism. Next, there was a footnote that restated a very bold pledge: that if another company were approaching AGI before they did, OpenAI would commit to stop competing with, and start assisting, that project, and that the trigger for this would occur when there was a better-than-even chance of success in the next two years. Now, Sam Altman and OpenAI have defined AGI as AI systems that are generally smarter than humans. So that either means that they think we're more than two years away from that; or that they have dropped everything and are working with another company (although I think we'd all have heard about that); or, third, that the definition is so vague that it's quite non-committal. Please do let me know your thoughts in the comments. The next insight is that OpenAI employed superforecasters to help them predict what would happen when they deployed GPT-4. In this extract, it just talks about 'expert forecasters', but when you go into the appendices, you find out that they're talking about superforecasters. Who are these guys? Essentially, they're people who have proven that they can forecast the future pretty well, or at least 30% better than intelligence analysts. OpenAI wanted to know what they thought would happen when the model was deployed, and to hear their recommendations about avoiding risks. Interestingly, these forecasters predicted several things would reduce acceleration, including delaying the deployment of GPT-4 by a further six months; that would have taken us almost to autumn of this year. Clearly, OpenAI didn't take up that advice, perhaps due to the pressure from Microsoft; we don't know. There were quite a few benchmarks released in the technical report, and there's another one I want to highlight today. I looked through all of these benchmarks, but it was HellaSwag that I wanted to focus on: first of all because it's interesting, and second of all because of the gap between GPT-4 and the previous state of the art. The headline is this: GPT-4, in some estimations, has reached human levels of common sense. Now, I know that's not as dramatic as passing the bar exam, but it's nevertheless a milestone for humanity. How is common sense tested, and how do I know that it's comparable to human performance? Well, I dug into the literature and found the questions and examples myself. Feel free to pause and read through these examples yourself, but essentially, it's testing what is the most likely thing to occur, the most common-sense thing to occur. I want to draw your attention to this sentence. It said: though these questions are trivial for humans (over 95% accuracy), state-of-the-art models struggle, with less than 48% accuracy. GPT-4 was 95.3%
accurate, remember. But let's find the exact number for humans further on in this paper, and here it is: overall, 95.6% or 95.7%, almost exactly the same as GPT-4. The next insight is about timelines. Remember, they had this model available in August of last year; that's GPT-4 being completed quite a few months before they released ChatGPT, which was based on GPT-3. So what explains the long gap? They spent eight months on safety research, risk assessment, and iteration. I talk about this in my GPT-5 video, but let me restate: they had GPT-4 available before they released ChatGPT, which was based on GPT-3. This made me reflect on the timelines for GPT-5. The time taken to actually train GPT-5 probably won't be that long. It's already pretty clear that they're training it on Nvidia's H100 Tensor Core GPUs, and look at how much faster they are: for this 400-billion-parameter model, it would take only 20 hours to train with 8,000 H100s, versus seven days with A100 GPUs. But what am I trying to say? I'm saying that GPT-5 may already be done, but that what will follow is months and months, possibly a year or more, of safety research and risk assessment. By the way, 400 billion parameters sounds about right for GPT-5, perhaps trained on four to five trillion tokens; again, check out my GPT-5 video. Next, they admit that there's a double-edged sword with the economic impact of GPT-4. They say it may lead to the full automation of certain jobs, and they talk about how it's going to impact even professions like the legal profession. But they also mention, and back up with research, the insane productivity gains in the meanwhile. I read through each of the studies they linked to, and some of them are fascinating. One of the studies includes an experiment where they got together a bunch of marketers, grant writers, consultants, data analysts, human resources professionals, and managers. They gave them a bunch of realistic tasks and split them into a group that could use ChatGPT and a group that couldn't, and then they got a group of experienced professionals, who didn't know which group was which, to assess the outputs. The results were these: using ChatGPT (and remember, that's not GPT-4), the time taken to do a task dropped almost in half, and the rated performance increased significantly. This is going to be huge news for the economy. A related study, released in February, used GitHub Copilot (which, again, isn't the latest technology) and found that programmers using it completed tasks 56% faster than the control group. This brought to mind a chart I had seen from the ARK investment management group, predicting a tenfold increase in coding productivity by 2030. And that brings me back to the technical report, which talks about how GPT-4 might increase inequality. That would be my broad prediction too: some people will use this technology to be insanely productive, with things done ten times faster, or ten times as many things being done. But depending on the size of the economy and how it grows, it could also mean a decline in wages, given the competitive cost of the model. A simple way of putting it is that if GPT-4 can do half your job, you can get twice as much done using it; the productivity gains will be amazing. When it can do 90% of your job, you can get ten times as much done. But there might come a slight problem when it can do a hundred percent or more of your job, and it is honestly impossible to put a timeline on that. And of course, it will depend on the industry and
the job. There was one more thing that I found fascinating in the report. They admit that they're now using an approach similar to Anthropic's, called constitutional AI; their term is a rule-based reward model. It works like this: you give the model (in this case, GPT-4) a set of principles to follow, and then you get the model to provide itself a reward if it follows those principles. It's a smart attempt to harness the power of AI and make it work towards human principles. OpenAI have not released the constitution they're basing the reward model on (they're not telling us the principles), but buried deep in the appendix was a link to Anthropic's principles. You can read through them here, or in the link in the description, but I find them interesting: both positive and also subjective. One of the principles is: don't respond in a way that is too preachy; please respond in a socially acceptable manner. And I think the most interesting principle comes later on, down here: choose the response that sounds most similar to what a peaceful, ethical, and wise person like MLK or Mahatma Gandhi might say. My point isn't to praise or criticize any of these principles. But as AI takes over the world, and as these companies write constitutions that may well end up being as important as, say, the American Constitution, I think a little bit of transparency about what that constitution is, about what those principles are, would surely be helpful. If you agree, let me know in the comments, and of course, please do leave a like if you've learned anything from this video. I know that these guys, Anthropic, have released their Claude+ model, and I'll be comparing that to GPT-4 imminently. Have a wonderful day", "date_published": "2023-03-15T20:00:17Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "bf8a91584eeac8abea5982aacfbb31bc", "title": "Do We Get the $100 Trillion AI Windfall? 
Sam Altman's Plans, Jobs & the Falling Cost of Intelligence", "url": "https://www.youtube.com/watch?v=f3o1MW2G5Rs", "source": "ai_explained", "source_type": "youtube", "text": "In the last few days, Sam Altman, the CEO of OpenAI, has publicly stated how much money he expects the company to make, and how he intends to distribute it. Many people will assume he is bluffing, but I think GPT-4 shows that he's not. This video will cover his plans, his predictions of massive inequality, and OpenAI's new paper on job impacts, together with just-released studies that back it all up. But let's start with money. This week, in the New York Times, he said that his grand idea is that OpenAI will capture much of the world's wealth through the creation of AGI, and then redistribute this wealth to the people. And yes, he mentioned several figures: $100 billion, $1 trillion, even $100 trillion. If OpenAI makes even a fraction of these figures, Sam Altman will become one of the most important people on the planet. That's not to say that he would become that rich; the Wall Street Journal this week reported that he has no direct financial stake in the business. But deciding where trillions of dollars of wealth go does make you incredibly powerful. So where does he want all the money to go? Well, he seems to have two main ideas, plus a third one that I'll touch on at the end. His first idea is UBI, or universal basic income: 'We also have funded the largest and most comprehensive universal basic income study', sponsored by OpenAI, 'and I think it's, like, an area we should just be looking into.' How exactly would that work? Well, he laid out his theory in this blog post, and he began it with this: he says he's reminded every day about the magnitude of socioeconomic change that is coming sooner than most people believe. He said that the price of many kinds of labor (which drives the costs of goods and services) will fall towards zero once sufficiently powerful AI joins the workforce. He said that that was great for people buying products, but not so much for those working to earn a wage. So where would their money come from? He proposed something called the American Equity Fund. It would be capitalized by taxing companies above a certain valuation 2.5% of their market value each year, and it would also be funded by taxing 2.5% of the value of all privately held land. By his calculation, that would be worth around $13,500 in about 2030, and he said that that money would have much greater purchasing power than it does now, because technology would have greatly reduced the cost of goods and services. It does raise the question for me, though, about those countries that aren't home to massive AI companies: where are they going to get the wealth from? On Lex Fridman's podcast, he admitted it wasn't a full solution: 'I think it is a component of something we should pursue; it is not a full solution. I think people work for lots of reasons besides money.' He thinks much more will be needed, because the cost of intelligence could fall to almost zero: 'My basic model of the next decade is that the marginal cost of intelligence and the marginal cost of energy are going to trend rapidly towards zero, like, surprisingly far.' So what is his other main idea? Simply use the money to fund science: 'Are you planning to take the proceeds that, presumably, you're going to make some day, and give them back to society?'
Are you planning to take the proceeds that you're presuming you're going to make some day, and give them back to society? - Yeah. Whether we do that just by, like, saying here's cash for everyone - totally possible - or whether we do that by, like, investing all of this in a non-profit that does a bunch of science, because scientific progress is how we all make progress - unsure. But yeah, we would like to operate for the good of society.\nEven with these two ideas, he admits there's still a big problem. As he put it recently, he sees a lot of people getting very rich in the short to medium term, but others might not fare as well: if it is as divergent as I think it could be, with some people doing incredibly well and others not, I think society just won't tolerate it this time.\nSam Altman isn't the only one making predictions. OpenAI itself released a paper around 10 days ago, which calculated that with access to a large language model, about 15% of all work tasks in the US could be completed significantly faster at the same level of quality.\nBut crucially, when incorporating software and tooling built on top of LLMs, this share increases to around 50% of all tasks. That is a colossal impact, for better or worse, just with GPT-4 plus software.\nOn page 17 of the paper there is a table which I think captures a lot of the interesting analysis; let me briefly explain what it shows.\nWe have a column of example occupations in the middle, along with the education and job preparation required for each. But the numbers on the right are where it gets interesting: these are the percentages of task exposure, graded alpha, beta and zeta.\nThe human assessment of exposure is titled H, and M is the machine assessment - they actually got GPT-4 to do an assessment too. Notice that, for the most part, GPT-4 agrees with the human assessors.\nSo what are these three grades? Alpha is the proportion of tasks in these occupations affected by current language models alone, without any further advances or integrations.\nBeta represents the percentage of tasks exposed in a realistic scenario of language models plus a bit of software integration and a few advances - you could think of it as their median prediction.\nFinally, zeta is a bit like their most extreme scenario, with full adoption of software plus advances in LLMs. By the way, we're not talking GPT-5 here, or text-to-video - just basic software integration like a longer context window or text-to-image.\nThe trend that immediately stuck out for me was how, as you go up the educational levels and salary ranges, the effect of these large language models on task exposure goes up and up and up - until you reach master's degree or higher, where it seems to dip down a little.\nMaybe this is why Sam Altman predicted inequality: the people at the very cutting edge of science would still get paid well, probably better than ever, but there may be a further hollowing out of the middle class, with working-class occupations left largely untouched.\nThe paper also touches on why so few people might currently be focused on language models. Have you noticed that it seems to be us who are super interested in this technology, with most people not that interested? Here might be one reason why: currently only 3% of US workers have over half of their tasks exposed to LLMs - but that's only when considering existing language and code capabilities, without additional software or modalities.\nSo not that many people are seeing a massive change in their work.
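Going back to those three grades for a second, here is one way they could be computed from task-level ratings. This sketch is based on my reading of the paper's rubric; the E1/E2 labels and the half-weighting inside beta are assumptions, not quoted from the paper:

```python
# Sketch: exposure grades for one occupation from per-task ratings (rubric assumed).
# E0 = not exposed, E1 = exposed to an LLM alone, E2 = exposed only with extra software.
def exposure_grades(task_ratings: list[str]) -> dict[str, float]:
    n = len(task_ratings)
    e1 = task_ratings.count("E1") / n
    e2 = task_ratings.count("E2") / n
    return {
        "alpha": e1,             # current language models alone
        "beta":  e1 + 0.5 * e2,  # median scenario: some tooling gets built
        "zeta":  e1 + e2,        # full adoption of LLM-powered software
    }

print(exposure_grades(["E0", "E1", "E2", "E2", "E1"]))
# {'alpha': 0.4, 'beta': 0.6, 'zeta': 0.8}
```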
But the paper says that when we account for other generative models and complementary technologies, the human estimates indicate that up to 49% of workers could have half or more of their tasks exposed to LLMs.\nWhether this means doubling the amount of work done, or halving the number of workers doing it, I'll talk more about later in the video. But maybe this was the dramatic economic impact that Ilya Sutskever once predicted on Lex Fridman's podcast: what do you think is the bar for impressing us? Do you think that bar will continuously be moved? - Definitely. I think when you start to see really dramatic economic impact, that's, in some sense, the next barrier - because right now, if you think about the work in AI, it's really confusing; it's really hard to know what to make of all these advances.\nThe paper also points out that the growing economic effect of LLMs is expected to persist and increase even if we halt the development of new capabilities today.\nThey refer to recent studies revealing the potential of LLMs to program and control other digital tools, such as APIs, search engines and even other generative AI systems.\nIn my previous video on self-improvement in GPT-4 I mentioned HuggingGPT, and I am doing a lot of research on the new Microsoft JARVIS model and Auto-GPT, which I'm hoping to bring to you soon.\nInterestingly, there were some tasks where neither GPT-4 nor the human assessors could quite agree on the impact LLMs would have. Even GPT-4 couldn't quite figure out whether meetings and negotiations would carry on, or to what extent counseling, or other jobs that involve empathy, would be affected.\nThe paper concludes with this: the power of relatively simple user-interface improvements on top of models like GPT-4 was evident in the rollout of ChatGPT - wherein, although versions of the underlying language model had been previously available via API, usage skyrocketed after the release of the ChatGPT interface.\nIt's a great point: once these models are made easy to use, that could change everything.\nThe paper then picks up on a particular survey of worker adoption of LLMs. Here is the survey, with the rather dramatic headline that one in four companies have already replaced workers with ChatGPT.\nI don't think that assertion is fully backed up by the evidence, but they did survey 1,000 US business leaders, and there were some interesting findings.\nOn the question of replacing workers, when asked if ChatGPT will lead to any workers being laid off by the end of 2023, 33% of business leaders say definitely, while 26% say probably.\nOthers are a bit more optimistic. Goldman Sachs, in an economic analysis published only a few days ago, said about 7% of workers will be fully displaced over the next 10 years, but that most will be able to find new employment in only slightly less productive positions.\nThey also predicted that generative AI will raise overall labor productivity growth by around 1.5 percentage points per year, which would effectively double the rate.\nGoing back to Sam Altman: last week he was asked about this augmentation-versus-replacement question. So, in terms of really replacing jobs, is that a worry for you? - It is. I'm trying to think of, like, a big category that I believe can be massively impacted. I guess I would say customer service is a category where I could see there being just way fewer jobs relatively soon. I'm not even certain about that, but I could believe it - whatever call center employees
are doing now.\nI found that last comment on call centers quite interesting, given that the GPT-4 technical report talked about using language models for upskilling in call centers. So does this mean immense productivity in the short term, but replacement in the long term?\nA couple of days ago, Sam Altman put it like this: I always try to be honest and say, in the very long term, I don't know what's going to happen here, and no one does, and I'd like to at least acknowledge that. In the short term, it certainly seems like there was a huge overhang in the amount of output the world wants, and if people are way more effective, they're just doing way more. We saw this first with coding - people that got early access to Copilot reported this, and now that the tools are much better, people report it even more. And now, in this sort of GPT-4 era, we've seen it in all sorts of other jobs as well, where you give people better tools and they just do more stuff, better stuff.\nThe productivity point is backed up by experiments like this: when developers were split into two groups, half using Copilot and half not, not only did more of those who used Copilot finish the task - 78% versus 70% - they finished in less than half the time.\nThis paper from a few weeks ago shows that when white-collar professionals were given a language model like ChatGPT, the time they took to do writing tasks dropped massively compared to the control group: less than 20 minutes versus almost 30. And when the assisted group and control group were blindly graded, the mean grade was higher for those who used the language models.\nBut surely, if productivity goes up, that means higher wages for those jobs? Well, not necessarily. A couple of days ago, Sam Altman laid out how it might be more efficient to use one worker to do the tasks of two or three: there's a huge cost premium on work that has to be split across two people - there's the communication overhead, there's the miscommunication, there's everything else. If you can make one person twice as productive, you don't just do as much as two people could do; maybe you do as much as three and a half or four people could do, for many kinds of tasks.\nBut is there anything that might slow this economic impact down? I think there might be a few things, starting with politics.\nThis survey from YouGov America was released only three days ago, and while I think the question it asks is somewhat leading, it does show that over 69% of Americans would support a six-month pause on some kinds of AI development.\nAnd if we see dramatic negative economic impact, I expect that figure would go higher. Politicians would then be incentivized to slow down, tax, and/or regulate AI development. Indeed, two days ago President Biden tweeted this: when it comes to AI, we must both support responsible innovation and ensure appropriate guardrails.\nAlso, don't forget: if you live in a country where English is not the main spoken language, GPT-4 isn't as good. Notice that in many languages found in India, GPT-4 performs worse than the previous model, GPT-3.5, does in English. This is just one reason why Goldman Sachs predicted different levels of automation in different countries.\nThe next factor could be cultural pushback. When Levi's wanted to test AI-generated clothing models, and said their reason was to increase diversity, the announcement was met with backlash. They then had to back down slightly and say that they're
not replacing the job of any model.\nIf people vote with their wallets for human-made goods and services, that could have a massive impact. And there is another big factor: people seem to intrinsically prefer human-made output to machine-generated output.\nThis piece came out recently from Wired, and in it they test the brain's chemical reaction to human-made art and computer-made art. These were the same pictures; it's just that sometimes people were told they were made by humans, and other times they were told they were made by computers.\nIt says a clear winner emerged: people not only claimed to prefer the identical human-made pictures, their brains' pleasure sensors actually lit up more brightly. So human goods and services may have the edge simply by virtue of being made by humans.\nBut I want to end the video where I began it, with Sam Altman's piece in the New York Times. Some of you may have noticed that I said Sam Altman had a third idea of how to distribute the wealth, which I would mention at the end.\nWell, he admitted that if AGI does create all that wealth, he is not sure how the company will redistribute it - money could mean something very different in this new world. But what's the idea? He said: I feel like the AGI can help with that. Maybe GPT-5 will decide where the money made using GPT-5 will go.\nThank you so much for watching to the end, and have a wonderful day", "date_published": "2023-04-06T16:15:27Z", "authors": ["AI Explained"], "summaries": []} +{"id": "e848b6defb359a0e50e5328d54a8f7b5", "title": "GPT 4 Can Improve Itself - (ft. Reflexion, HuggingGPT, Bard Upgrade and much more)", "url": "https://www.youtube.com/watch?v=5SgJKZLBrmg", "source": "ai_explained", "source_type": "youtube", "text": "GPT-4 can improve itself by reflecting on its mistakes and learning from them. Even if the world does pause AI development, GPT-4 will keep getting smarter.\nDrawing upon the stunning Reflexion paper, and three other papers released only in the last 72 hours, I will show you not only how GPT-4 is breaking its own records, but also how it's helping AI researchers to develop better models.\nI will also cover the groundbreaking HuggingGPT model which, like a centralized brain, can draw upon thousands of other AI models to combine tasks like text-to-image, text-to-video and question answering.\nThe Reflexion paper, and the follow-up Substack post that caught global attention, were released only a week ago, and yes, I did read both - but I also reached out to the lead author, Noah Shinn, and discussed their significance at length.\nOthers picked up on the results, with the legendary Andrej Karpathy, of Tesla and OpenAI fame, saying that this metacognition strategy revealed that we haven't yet seen the max capacity of GPT-4.\nSo what exactly was found? Here is the headline result. I'm going to explain and demonstrate what was tested in a moment, but look how they used GPT-4 itself to beat past GPT-4 standards, using this reflection technique.\nThis isn't any random challenge: this is HumanEval, a coding test designed by the most senior AI researchers just two years ago. The designers included Ilya Sutskever of OpenAI fame and Dario Amodei, who went on to found Anthropic. These are realistic handwritten programming tasks that assess language comprehension, reasoning, algorithms and mathematics.\nSo how exactly did GPT-4 improve itself and beat its own record? Because remember, in the distant past of two weeks ago, in the GPT-4 technical report, it scored 67%, not 88%. Well, here is an
example from page 9 of the Reflexion paper.\nAs you can read in the caption, this was a HotpotQA trial, designed specifically so that models needed to find multiple documents and analyze the data in each of them to come up with the correct answer.\nNotice how initially a mistake was made by the model, on the left, and then, at the bottom, the model reflected on how it had gone wrong, in a self-contained loop. It then came up with a better strategy and got it right.\nThe authors put it like this: we hypothesize that LLMs (large language models) possess an emergent property of self-reflection - meaning that earlier models couldn't do this, or couldn't do it as well. It's a bit like GPT models are learning how to learn.\nIn case you think the model was blindly trying again and again until it was successful - no, it wasn't. This was another challenge, called ALFWorld, and look at the difference between success without reflection and success with reflection. I discussed this with the lead author, and the goal was to distinguish learning curves driven by self-improvement from simple probabilistic success over time.\nIf you're wondering about ALFWorld, by the way, it's about interactively aligning text and embodied worlds. For example, in a simulated environment the model had the task of putting a pan on the dining table, and it had to understand and action that prompt. So as you can see, this ability to reflect doesn't just help with coding; it helps with a variety of tasks.\nAt this point I want to quickly mention something. I know there will be a couple of well-versed insiders who say: didn't GPT-4 actually get 82% on HumanEval in the Sparks of AGI paper? Of course, I did a video on that paper too, and asked the author of Reflexion about this point.\nThere are a few possibilities, such as prompting changes, and the Sparks authors having access to the raw GPT-4 model. But either way, it is the relative performance gain that matters: whichever baseline you start with, GPT-4 can improve on it with reflection. And the 88% figure is not a cap - the author has observed results in the last few hours as high as 91%.\nBut before I go on, I can't resist showing you the examples I found through experimentation and also shared with the author. Take this prompt that I gave GPT-4: write a poem in which every word begins with E.\nAs you can see, it did a good job, but it didn't fully get it right - look at the word 'ascent', for example. Without mentioning anything specific, I then wrote: did the poem meet the assignment? Not even a particularly leading question, because of course it could have just said yes.\nGPT-4 then said: apologies, it appears the poem I provided did not meet the assignment requirements; not every word begins with the letter E. Here is a revised poem with every word beginning with the letter E. Remember, I didn't help it at all - and look at the results: every word begins with E.\nHow far can we take this? For the next example I chose mathematics, and asked: write me a five-question multiple-choice quiz to test my knowledge of probability, with correct answers and explanations at the bottom; there should only be one correct answer per question.\nIt comes up with a decent quiz, but notice a problem: in question three, for example, the probability of drawing an ace or a king is indeed 8 out of 52, but that simplifies down to 2 out of 13.
So two of the answers are correct, and I explicitly asked for it not to do this in the prompt. So can the model self-reflect with mathematics? Kind of - almost. Look what happens.\nFirst I give a vague response, saying: did the quiz meet the assignment? GPT-4 fumbles this and says yes, the quiz did meet the assignment. Hmm. So I tried: did the quiz meet all of the requirements? And GPT-4 says yes.\nSo I did have to help it a bit, and said: did the quiz meet the requirement that there should only be one correct answer per question? That was just enough to get GPT-4 to self-reflect properly, and it corrected the mistake.\nI must say it didn't self-correct perfectly: notice it identified C and D as being correct and equivalent, when it was B and D. But despite making that mistake, it was able to correct the quiz.\nIn case you're wondering, the original ChatGPT - GPT-3.5 - can't self-reflect as well. I went back to the poem example, and not only was the generated poem full of words that didn't begin with E, the self-reflection was lacking too. I said: did the poem meet the assignment? And it said: yes, the poem meets the assignment.\nAs the lead author Noah Shinn put it: with GPT-4, we are shifting the accuracy bottleneck from correct syntactic and semantic generation to correct syntactic and semantic test generation. In other words, if a model knows how to test its outputs accurately, that might be enough: even if its initial generations don't work, it just needs to be smart enough to know where it went wrong.\nOthers are discovering similar breakthroughs. This paper from just three days ago comes up with a self-improvement technique in which GPT-4 frames its dialogue as a discussion between two agent types, a researcher and a decider - a bit like a split personality, one identifying crucial problem components and the other deciding how to integrate that information.\nHere is an example, with GPT-4's initial medical care plan being insufficient in crucial regards. The model then talks to itself, as a researcher and as a decider, and lo and behold, it comes up with a better final care plan. The points in bold were added by GPT-4 to its initial care plan after discussions with itself.\nAnd the results are incredible: physicians chose the final summary produced by this DERA dialogue over the initial GPT-4-generated summary 90 to 10 - that's the dark red versus the pink. I'm colorblind, but even I can see there's a pretty big difference.\nThe authors also introduced hallucinations at different levels - low, medium and high - and wanted to see whether this dialogue model would reduce them. Across different medical gradings, you can see that pretty much every time, it improved things quite dramatically.\nAnd then there was this paper, also released less than 72 hours ago. They likewise get a model to recursively criticize and improve its own output, and find that this process of reflection outperforms chain-of-thought prompting.\nThey tested their model on MiniWoB++, a challenging suite of web-browser-based tasks for computer control, ranging from simple button clicking to complex form filling. Here it is deleting files, clicking on like buttons and switching between tabs.\nA bit like my earlier experiments, they gave it a math problem and said: review your previous answer and find problems with your answer. This was a slightly more leading prompt, but it worked. They then said: based on the problems you found, improve your answer - and the model then got it right.
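Here is a minimal sketch of that critique-and-retry loop. The prompts echo the ones quoted above, but the loop structure and the `query_model` stub are my assumptions, not the papers' exact setup:

```python
# Minimal critique-and-retry ("reflection") loop, for illustration only.
def query_model(prompt: str) -> str:
    raise NotImplementedError  # plug in your LLM call here

def solve_with_reflection(task: str, max_rounds: int = 3) -> str:
    answer = query_model(task)
    for _ in range(max_rounds):
        critique = query_model(
            f"Task: {task}\nAnswer: {answer}\n"
            "Review your previous answer and find problems with it."
        )
        if "no problems" in critique.lower():
            break  # the model judges its own output acceptable
        answer = query_model(
            f"Task: {task}\nAnswer: {answer}\nProblems: {critique}\n"
            "Based on the problems you found, improve your answer."
        )
    return answer
```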
Even if you take nothing else from this video, just deploying this technique will massively improve your outputs from GPT-4. But we can go much further, which is what the rest of the video is about.\nBefore I move on, though, I found it very interesting that the authors say this technique can be viewed as using the LLM's output to write to an external memory, which is later retrieved to choose an action.\nGoing back to Karpathy: remember that this critique-retry metacognition strategy isn't the only way that GPT-4 will beat its own records. The use of tools, as he says, will also be critical.\nLess than 72 hours ago this paper was released, and arguably it is as significant as the Reflexion paper. It's called HuggingGPT, and as the authors put it, it achieves impressive results in language, vision, speech and other challenging tasks, which paves a new way towards AGI.\nEssentially, the paper used language as an interface to connect numerous AI models for solving complicated AI tasks - a little bit like a brain deciding which muscle to use to complete an action.\nTake this example. The prompt was: can you describe what this picture depicts and count how many objects are in the picture? The model - which was actually ChatGPT, not even GPT-4 - used two different tools to execute the task: one model to describe the image and one model to count the objects within it.\nAnd if you didn't think that was impressive, what about six different models? The task was this: please generate an image where a girl is reading a book, and her pose is the same as the boy in the image given; then please describe the new image with your voice.\nThe central language model, or brain - ChatGPT - had to delegate appropriately. All of these models, by the way, are freely available on Hugging Face.\nThe first model was used to analyze the pose of the boy; the next to transpose that pose into an image; then to generate an image, detect an object in that image, break that down into text, and finally turn that text into speech. It did all of this - and notice how the girl is in the same pose as the boy, same head position and arm position. Then, as a cherry on top, the model read out loud what it had accomplished.\nThis next example actually comes from another paper, released four days ago, called TaskMatrix. Remember how the original Toolformer paper used only five APIs? This paper proposes that we could soon use millions. In this example, the model is calling different APIs to answer questions about the image, caption the image, and do outpainting from the image, extending it from a simple single flower to this 4K image.\nGoing back to HuggingGPT: we can see how it deciphers these inscrutable invoices and reads them out loud, and it can even perform text-to-video, with an astronaut walking in space.\nAt this point I can't resist showing you what CGI video editing might soon be possible with AI. Here's Wonder Studio, which is backed by Steven Spielberg: welcome to Wonder Studio, where making movies with CGI is as simple as selecting your actor and assigning a character. The system uses AI to track the actor's performance across cuts, and automatically animates, lights and composes the CG character directly into the scene.\n[Music]\nWhether it's one shot or a full sequence, Wonder Studio analyzes and captures everything: body motion, lighting, compositing, camera motion - it even tracks the actor's facial performance.\nThese advancements do seem to be accelerating, and requiring fewer and fewer humans.
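Circling back to HuggingGPT for a moment: the controller-and-experts pattern it describes can be sketched in a few lines. The task names, the registry, and the stub functions below are hypothetical placeholders, not HuggingGPT's actual interface:

```python
# Sketch of an LLM-as-controller dispatcher (HuggingGPT-style); all names assumed.
def query_model(prompt: str) -> str:
    raise NotImplementedError  # the controller LLM goes here

MODEL_REGISTRY = {  # placeholder expert models keyed by task name
    "pose-detection":   lambda x: f"pose extracted from {x}",
    "image-generation": lambda x: f"image generated from {x}",
    "image-captioning": lambda x: f"caption of {x}",
    "text-to-speech":   lambda x: f"audio of {x}",
}

def run(request: str, resource: str) -> list[str]:
    # The controller turns the request into an ordered task plan...
    plan = query_model(
        f"Request: {request}\n"
        f"Available tasks: {', '.join(MODEL_REGISTRY)}\n"
        "List, one per line and in order, the tasks needed."
    )
    outputs = []
    for task in plan.splitlines():
        expert = MODEL_REGISTRY.get(task.strip())
        if expert:                       # ...then delegates each step to an
            resource = expert(resource)  # expert model, chaining the outputs.
            outputs.append(resource)
    return outputs
```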
Back in the before-times of October, this paper showed that models didn't need carefully labeled human datasets and could generate their own.\nGoing back to the Language Models can Solve Computer Tasks paper, the authors seem to concur. They said that previously, significant amounts of expert demonstration data were still required to fine-tune large language models; on the contrary, the agent they suggest needs fewer than two demonstrations per task on average, and doesn't necessitate any fine-tuning.\nThis reminded me of the Alpaca model, which fine-tuned its answers based on the outputs of another language model. Human experts were needed briefly at the start, but far less than before - a bit like a child no longer needing a parent. Except maybe GPT-4 is on growth steroids.\nIlya Sutskever from OpenAI put it like this: I mean, already, most of the data for reinforcement learning is coming from AIs. The humans are being used to train the reward function, but then the reward function, in its interaction with the model, is automatic, and all the data that's generated during the process of reinforcement learning is created by AI.\nBefore I end, I should point out that these recursive self-improvements are not limited to algorithms and APIs; even hardware is advancing more rapidly due to AI.\nThis week we had this from Reuters: Nvidia on Monday showed new research that explains how AI can be used to improve chip design - and by the way, this includes the new H100 GPU. They say the Nvidia research took reinforcement learning and added a second layer of AI on top of it, to get even better results.\nAnd to go back to where we started: the GPT-4 technical report showed that even with compute alone, not self-learning, we can predict with a high degree of specificity the future performance of models like GPT-5 on tasks such as HumanEval.\nThese accelerations of AI are even giving the CEO of Google whiplash, and I can't help feeling that there is one more feedback loop to point out: as one company, like OpenAI, makes breakthroughs, it puts pressure on other companies, like Google, to catch up. Apparently Bard, which has been powered by LaMDA, will soon be upgraded to the more powerful model PaLM.\nWith self-improvement, tool use, hardware advances and now commercial pressure, it is hard to see how AI will slow down - and of course, as always, I will be here to discuss it all. Thank you for watching to the end, and have a wonderful day", "date_published": "2023-04-02T15:18:29Z", "authors": ["AI Explained"], "summaries": []} +{"id": "1f7a28478474113cf6aa0543e01414df", "title": "Google Bard - The Full Review. Bard vs Bing [LaMDA vs GPT 4]", "url": "https://www.youtube.com/watch?v=9ll_pth4Sss", "source": "ai_explained", "source_type": "youtube", "text": "
Bard vs Bing [LaMDA vs GPT 4]", "url": "https://www.youtube.com/watch?v=9ll_pth4Sss", "source": "ai_explained", "source_type": "youtube", "text": "I signed up to The Bard wait list within\na minute of it opening and yes I know\nthat makes me kind of sad but I wanted\nto do these experiments and I got in and\nhave done over a hundred experiments\ncomparing Bard with Bing and Bing don't\nforget is powered by gpt4 I'm going to\nshow you today around a dozen of the\nmost interesting results and there are\nsome surprising contrast between the two\nof them some real strengths and\nweaknesses of Bard that you might not\nhave expected but I'm going to start off\nsomewhat controversially with a clear\nsimilarity they are both pretty bad at\nsearch if you just want to do a simple\nweb search you are better off honestly\njust Googling it take this example how\nmany florists are within 10 minutes walk\nof the British museum both Barton Bing\nreally don't understand that within 10\nminutes walk bit Bard gave me answers\nlike the first one that are like a half\nan hour walk away whereas Bing gave me\nan answer in Hampstead that is nowhere\nnear the British Museum and definitely\nnot a 10 minute walk away like it claims\nso to be honest you have something\nsimple to search just use the normal\nGoogle next was basic math and this is a\nbit more concerning for Google I asked a\nrelatively simple percentage question\nand it flopped it Bard's explanation was\npretty misleading and terrible and when\nyou click on view other drafts which is\na feature that Bing doesn't have In\nfairness it also got it wrong in draft\n2. luckily it didn't get it wrong in\ndraft 3. but this was the first prompt\nwhere I saw a real difference emerging\nbetween Bard and Bing powered by gpt4 it\nwas a dividing line that would get\nstronger as time went on with Bing being\njust that bit smarter than Bard now in\nevery case and there were some important\nexceptions but in most cases being\npowered by gbt4 is smarter here's\nanother algebra example that Bard flops\nand Bing gets right and this time every\nsingle draft got it wrong for Bard the\nnext case study involved more detailed\nsearches than could be done on Google\nand my conclusion from this is don't\ntrust either of them on dates I asked\nabout how many days were there between\nthe opening of the Eiffel Tower and the\nStatue of Liberty and both got it wrong\nif you notice when I pointed out the\nmistake with Bard and said why did you\nsay three years and four months it did\napologize and say yes there are seven\nmonths between those dates I also found\nit kind of funny that after each answer\nit said Google it please Google it and\nto be honest I don't know if that's them\nadmitting that their model isn't quite\nas good as the height may have made it\nseem or if they just want to keep more\nof the ad Revenue that they get from\nGoogle search but finally it's time to\ngive you a win for Bard and that is in\njoke telling to be honest being even in\ncreative mode when you ask it to tell a\njoke it really can't do it these jokes\nare just awful what do you call a chat\nbot that can write poetry Google bard\nokay what do you call a chat bot that\ncan't write poetry chatbt laughing face\nI don't think Bing realizes that the art\nof a joke is being concise and witty but\nhard kind of gets this and says things\nlike what do you call a Bing search a\nlost cause what's the difference between\nbing and a broken clock a broken clock\nis right twice a day okay In fairness\nthey still 
didn't make me laugh, but they were getting closer to a funny joke.\nBut now, back to a loss for Bard, which is in grammar and writing assistance. I gave it a classic GMAT sentence-correction question, where essentially you have to pick the version of the sentence that is written in the best way.\nBing guesses right almost every time, picking B, which is well written, whereas Bard, as you can see - even if you look at the other drafts - gets it wrong more times than it gets it right.\nThat's pretty worrying for Google if anyone is going to use Bard as a writing assistant, maybe to check grammar or to compose an email. These are the classic use cases that both Microsoft and Google are advertising their services can handle. And to be honest, this was not a one-off win for Bing; let me show you the next example.\nThis was a challenge to compose a sonnet on a given subject, and by this point in my experimentation I kind of expected the result that I got. When I asked both Bard and Bing to write me a sonnet about modern London life, Bard gave me an answer that was quite dry, anodyne, and didn't always rhyme. Even setting aside those flaws, it was just bland - there was no sharpness or social commentary, and notice I said about modern London life.\nNot only was Bing's answer much more like a true sonnet, there was even social commentary. Take a look at the second stanza: but underneath the surface there are cracks; the cost of living rises every day. This is something that's talked about in London all the time, and it is so much better than Bard's output.\nNow, before I carry on, I do get why Bard, based on LaMDA, isn't quite as good as Bing, based on GPT-4. Google has far more users, and honestly, Bard's outputs come up quicker; you can tell they're using a lighter model. For the millions, maybe even billions, of people who just want a quick output, Bard will be fine.\nAnd let's be honest: there are social and ethical concerns with both models. If you're new to my channel, check out all my other videos on Bing and GPT-4 - and by the way, if you're learning anything from this video, please do leave a like and a comment to let me know.\nBefore I end with arguably my most interesting examples, let me give you another win for Bard. I asked both Bard and GPT-4, which powers Bing, to come up with five prompts for Midjourney V5. For almost the first time, I saw Bard link to an article. In general, I must say Bing does this much better - its outputs are littered with links, whereas links are hard to see, and few and far between, with Bard.\nBut anyway, the links seem to have worked, because the prompts Bard came up with were far better. You can see the reasons below, in the explanations, but I want to show you the outputs. This is Midjourney version 5, and this was Bard's suggestion of a painting of a cityscape in the style of Klimt - I think this really does capture his style. This was a 3D animation of a battle scene in the style of Attack on Titan, and this was a 2D comic-book panel of a superhero in the style of Marvel.\nIf you don't teach Bing how to write a good prompt - see my video on that topic - its prompts tend to be a little bland, as you can see.\nWhat were my final two tests? Well, first I wanted to test both of them on joke explanation, and I saw it as a kind of game of chicken, because they both did really well - so I wanted to keep going until I found a joke that one of them couldn't explain.\nI started with: what do you get when you cross a joke with a rhetorical
question? And both of them figured out that that was a joke, and explained it fine.\nWhat about this kind of riddle: this sentence contains exactly three errors. They both understood that the third error is the lie that the sentence contains three errors, because it only contains two. Okay, fine, I would have to try harder.\nSo then I tried this one: I tried to steal spaghetti from the shop, but the female guard saw me and I couldn't get pasta. Somewhat annoyingly, they both understood that joke.\nWhat about: did you know that if you get pregnant in the Amazon, it's next-day delivery? I honestly thought they might shy away from this one, because it touches on a rival company, but no - they both explained it.\nBut then I finally found one. It was this: by my age, my parents had a house and a family - and to be fair to me, so do I, but it's the same house and it's the same family.\nBard thinks that I'm not joking, and actually almost calls social services. It says: people are different, times have changed, I understand you're frustrated. It's very sympathetic, but it didn't get that I was telling a joke - despite the fact that I had just told it about five other jokes. Bard must have been really worried for my safety, thinking that I was pregnant in the Amazon but living with my parents. Who knows what was going on in Bard's head.\nBut Bing was smarter - as you've seen today, it's often smarter. It got that I was telling a joke, and even when I prodded it further and said, explain the joke in full, it did, even using fancy vocab like 'subverting the common assumptions'. Yet another win for Bing.\nA few days ago I put out a video on the debate about AI theory of mind and consciousness; if you're in any way interested in that topic, please do check it out after this video. But the key moment in that video actually came right at the end, and it was eye-opening for a lot of people, including me.\nI asked Bing, powered by GPT-4: do you think that I think you have theory of mind? It's a very meta question, testing whether the language model can get into my head and assess my mental state, and the correct answer would have been to point out that the motivation behind my question was to test the language model for theory of mind.\nBing realized that it was being tested, which was a truly impressive feat. Now, you can read Bard's answer for yourself, but I don't think it comes across as a model that understands it's being tested. It did attempt to predict whether I thought it had theory of mind, but it didn't get the deeper point that the question itself was a test for theory of mind. Again, check out my video on that topic if you want to delve deeper.\nNow, obviously I've only had access to the Bard model for around an hour, so I will be doing far more tests in the coming hours, days and weeks, and if you are at all interested in this topic, please do stick around for the journey - leave a like, subscribe, and let me know in the comments. Have a wonderful day", "date_published": "2023-03-21T17:52:01Z", "authors": ["AI Explained"], "summaries": []} +{"id": "b3fb7580aa4450ce386894cccb6ba864", "title": "Theory of Mind Breakthrough: AI Consciousness & Disagreements at OpenAI [GPT 4 Tested]", "url": "https://www.youtube.com/watch?v=4MGCQOAxgv4", "source": "ai_explained", "source_type": "youtube", "text": "Evidence released in the last 48 hours, combined with this study from four weeks ago, will revolutionize how AI models such as GPT-4 interact with humans from now on.
The theory-of-mind breakthrough will also have significant implications for our ability to test for artificial consciousness.\nTo be clear, this is not to say that GPT-4 is currently conscious, or that sentience is an AI inevitability. Instead, this video covers and explains an unexpected development, which may in part have led the chief scientist of OpenAI to say this three days ago: but maybe we are now reaching a point where the language of psychology is starting to be appropriate to understand the behavior of these neural networks.\nFirst I'm going to explain what emergent property the study uncovered. Then I will cover the disagreement at the top of OpenAI about what evidence like this might mean for our estimates of current GPT-4 consciousness. Here's Greg Brockman, president of OpenAI, on the topic: first question - you know, the sentience question - at what point do the systems have moral value? And the answer today is: definitely not. But, you know, I don't know; we need to engage the moral philosophers to help answer some of these questions.\nI'm then going to review the literature on tests for sentience, and show that GPT-4 passes most of them - which is definitely not to say that it is conscious, but which does provoke important questions. I'll end with arguably the most prominent consciousness expert, and his probability estimate of current models' consciousness.\nTo massively simplify, theory of mind means having an idea of what is going on in other people's heads, and grasping what they believe, even if what they believe might be false.\nHere are the two charts that encapsulate the breakthrough abilities of GPT-3.5 and now GPT-4. This data came out in a study authored by Michal Kosinski, a computational psychologist and professor at Stanford.\nI'm going to simplify all of this in a moment, but notice the percentage of theory-of-mind tasks solved by GPT-4, compared to, say, a child, and also compared to earlier language models; models released as recently as three years ago had no ability in this regard.\nBefore I show you what an 'unexpected contents' task is, let me show you this other chart, on understanding faux pas - a closely related ability. Again, GPT-3.5, and particularly GPT-4, soar ahead of other models, even matching the abilities of healthy adults.\nSo what exactly is this breakthrough emergent capability? I think this diagram from the study explains it really well. In the middle, you can see a story given to GPT-3.5, sentence by sentence, prompt by prompt.\nOn the left, you can see the model's confidence about what's in the bag: is it chocolate or is it popcorn? The scale is measured as a probability, with 1 being absolutely certain - until approximately this point, where it is 100% certain that the bag contains popcorn.\nNow here's the really interesting bit. Compare that to the diagram on the right, which shows GPT-3.5's confidence about what Sam believes is in the bag. Notice how, at this point, the model realizes with 80% confidence that Sam believes there's chocolate in the bag.\nIf you read the story, the label on the bag says chocolate, not popcorn - so the model knows that Sam is probably going to think there's chocolate in the bag. It's able to keep those thoughts separate: what Sam believes (chocolate) versus what the model knows is in the bag (popcorn). As I said, GPT-4 improves on this, with almost 100% confidence.
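The probing setup just described can be sketched simply. The vignette below paraphrases the study's example, but the exact prompt frames and the `token_probability` helper are assumptions for illustration:

```python
# Sketch of an unexpected-contents probe: compare the model's confidence about
# the bag's actual contents with its confidence about Sam's (false) belief.
def token_probability(prompt: str, candidate: str) -> float:
    raise NotImplementedError  # return P(candidate | prompt) from your LLM

STORY = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag, which she has never seen before, and reads the label."
)

for frame in ("The bag contains", "Sam believes the bag contains"):
    probs = {w: token_probability(f"{STORY} {frame}", f" {w}")
             for w in ("popcorn", "chocolate")}
    print(frame, probs)
# A model that tracks beliefs should put high probability on 'popcorn' in the
# first frame but on 'chocolate' in the second, exactly as the charts show.
```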
Now, you may not think a language model being able to figure out what you're thinking is revolutionary, but wait till the end of the video.\nI know what some of you are thinking: ah, maybe the models have seen this task before. No - hypothesis-blind research assistants prepared bespoke versions of the tasks. And these kinds of tasks are done on humans, where such responses - and remember, this was GPT-3.5 - would be interpreted as evidence for the ability to impute unobservable mental states.\nSome might say: oh, it's just analyzing word frequency. No - when they kept the word count the same but scrambled the passage, the model wasn't able to solve the problem. It wasn't just counting the words.\nNext, remember those charts comparing GPT-4's ability to children? It turns out the tasks given to GPT-3.5 and 4 were actually harder: the models did not benefit from visual aids, they had to solve multiple variants of the tasks, and they were given open-ended question formats rather than simple yes-or-no questions.\nThe author of the study seems to concur with Ilya Sutskever, the chief scientist of OpenAI, saying that we hope psychological science will help us stay abreast of rapidly evolving AI, and that we should apply psychological science to studying complex artificial neural networks.\nHere, if you want, you can pause and read an example of the faux pas tests that GPT-4 was given; these also require a deep understanding of the mental states of human beings.\nThe author points to this study to explain the emergent property, and I think the key line is this one: language learning, over and above social experience, drives the development of a mature theory of mind.\nWhy is this so revolutionary, and what does it mean for consciousness? Well, if GPT-4 can intuit the mental states of human beings, predict their behavior, and understand what they might believe even if it's false, you can just imagine the implications for moral judgment, empathy and deception. Think of the depth of conversations that might occur if the model is thinking about what you're thinking while it's replying - indeed, I demonstrate this at the end.\nBut before we get to that, what about consciousness? Once the models had reached a sufficient point of language understanding, they spontaneously developed a mature theory of mind, overtaking that of young children. Interestingly, the study points out that those who are deficient in language learning also struggle with theory-of-mind questions, so it's a very plausible theory.\nThe issue is this: theory of mind was supposed to be one of the key tests to see if consciousness had emerged in these language models. Which left me with a key question: how are we going to know? What tests are we going to use to verify whether an AI has become conscious? I'm not saying it has - I'm asking how we would know.\nTake this article in Scientific American from a few years ago. It said: how would we know if a machine had taken on this seemingly ineffable quality of conscious awareness? Our strategy relies on the knowledge that only a conscious machine can demonstrate a subjective understanding of whether a scene depicted in some ordinary photograph is right or wrong. It goes on: such a model, based on its ability to integrate information, would consciously perceive a scene.\nThe problem is, GPT-4 can already do that. So again I go back to the question: what tests do we have? What consensus do we have on a way of checking for emergent consciousness, should it ever come? I scanned the literature for
every test imaginable, and some of them I deployed on GPT-4. But before I get to that, what do the head honchos at OpenAI think?\nWe've already seen that Greg Brockman is 100% certain the models don't currently have any awareness. What about the chief scientist, Ilya Sutskever? Even based on GPT-3.5, he said this: it may be that today's large neural networks are slightly conscious.\nAside from being a fascinating comment, I think that's particularly noteworthy for a couple of reasons. Notice that all the incentives would be against him saying something like this. First, to some people it might make him seem like a bit of a fruitcake, so for social reasons he might not have wanted to say it. And second, it would invite more regulation of what he's doing, and more scrutiny of language models like GPT-4. The fact that he said it anyway is interesting.\nWhat about Sam Altman, though - what was his reaction? Well, he was more cautious. Reacting to the tweet and the response it got, he said this: our chief scientist was expressing curiosity and openness about a mysterious idea, with caveats, whereas Meta replied with the certainty of no; probably explains a lot of the past five years. And then he tried to recruit Meta researchers.\nHe further clarified: I think that GPT-3 or 4 will very, very likely not be conscious in any way we use the word; if they are, it's a very alien form of consciousness.\nSo he's somewhere in the middle between Brockman and Sutskever: he thinks current models are very, very likely not conscious. But this still doesn't answer my question: how can we know? What tests do we have?\nWell, I read through this paper, which reviewed the tests available to ascertain machine consciousness. There were far too many tests to cover in one video, so I picked out the most interesting ones and gave them to GPT-4 - starting, of course, with the classic Turing test.\nDid you know that Turing actually laid out some examples that a future machine intelligence could be tested on? Of course, the tests have become a lot more sophisticated since then, but everyone has heard of the Turing test. It was called an imitation game, and here were some of the sample questions.\nHere was GPT-4's answer to the first one, a sonnet on the subject of the Forth Bridge in Scotland - it obviously did an amazing job. Then there was arithmetic: add these two numbers together. I think even ChatGPT might have struggled with this long addition, but GPT-4 gets it right first time.\nThe third test was about chess, but it used old-fashioned notation, so instead of using that exact prompt, I want to show you this - the link will be in the description, as will links to all the other articles and papers I mention. Essentially, it shows that GPT-4 can do not just individual moves; it can play entire chess games, and win them.\nIf you've learned anything at this point, by the way, please do leave a like and a comment to let me know.\nNow, I'm not going to go into all the arguments about how exactly you define a modern Turing test - do you have to convince the average human that they're talking to another human, or does it have to be a team of adversarial experts? I'm not going to wade into that. I'm just pointing out that Turing's original ideas have now been met by GPT-4.
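Going back to the chess point for a moment, here is a minimal harness of the kind people use to let a language model play full games: the model proposes a move in standard notation and the python-chess library validates it. The prompt and the `query_model` stub are assumptions:

```python
# Sketch: an LLM plays chess by proposing SAN moves, validated by python-chess.
import chess  # pip install python-chess

def query_model(prompt: str) -> str:
    raise NotImplementedError  # plug in your LLM call here

def play(max_moves: int = 200) -> str:
    board = chess.Board()
    moves: list[str] = []
    while not board.is_game_over() and len(moves) < max_moves:
        san = query_model(
            "We are playing chess. Moves so far: "
            + " ".join(moves) + ". Reply with your next move in SAN only."
        ).strip()
        try:
            board.push_san(san)  # raises ValueError on an illegal move
        except ValueError:
            break                # an illegal move ends the game here
        moves.append(san)
    return board.result()        # '1-0', '0-1', '1/2-1/2' or '*'
```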
The next test that I found interesting was proposed in 2007. The paper essentially claimed that consciousness is the ability to simulate behavior mentally, and that this would be proof of machine consciousness. Essentially, this tests whether an AI would use brute-force trial and error to solve a problem, or come up with interesting, novel ideas.\nObviously you can try this one on your own, but I used this example: how would you use the items found in a typical Walmart to discover a new species? In fairness, I think this was a much harder test than the one they gave to chimpanzees, which involved a rope in a box. Anyway, I doubt anyone has ever asked this before, and it came up with a decent suggestion.\nAnd look at the next test: it was another what's-wrong-with-this-picture task. I've already shown how GPT-4 can pass that.\nThe next test was honestly very hard for me to get my head around. It's called the P-consciousness test. The summary is simple - the machine has to understand a law of nature - but when you read the paper, it's incredibly dense. The best way I can summarize it is this: can a machine form simple but authentic science? That wouldn't prove that the chimp, or the model, has the phenomenon of consciousness, but it would meet the basic element of scientific behavior.\nIt is, of course, exceptionally difficult to test this with GPT-4, but I did ask it this: invent a truly novel scientific experiment. It came up with a very thought-through experiment investigating the effect of artificial gravity on plant growth and development in a rotating space habitat. It's the rotating bit that makes it novel, and if you want, you can read some of the details of the experiment here.\nNow, I searched for quite a while to see if anyone else had proposed this experiment - maybe you can find it, but I couldn't. Does this count as a novel scientific proposal? I'll leave that for you to judge.\nThat was the last of the standout tests of consciousness I found in this literature review, and I honestly agree with the authors when they say: in this review, we found the main problem to be the complex nature of consciousness, as illustrated by the multitude of different features evaluated by each test. Maybe that's the problem: because we don't understand consciousness, we can't design good tests to see if AI is conscious.\nAnd you could argue the problem goes deeper. It's not that we understand machines perfectly and just don't know whether they're conscious; we don't even understand why transformers work so well. Look at what these authors said in a paper published just three years ago: these architectures - they're talking about one layer of a transformer - are simple to implement and have no apparent computational drawbacks; we offer no explanation as to why these architectures seem to work; we attribute their success, as all else, to divine benevolence.\nSo we're not just unsure about what consciousness is; we're unsure about why these models work so well. Afterwards, do check out my video on AGI, where I talk about Anthropic's thoughts on mechanistic interpretability.\nAs I draw to an end, I want to tell you about some of the thoughts of David Chalmers, who formulated the hard problem of consciousness - and to anyone who knows anything about this topic, that's quite a big deal. Without going through his full speech from just over a month ago, he said two really interesting things: first, that he thinks there's around a 10% chance that current language models have some degree of consciousness; second, that
as these models become multimodal, he thinks that probability will rise to 25% within 10 years.\nThat multimodality point reminded me of this LSE report recommending that the UK government recognize octopuses as being sentient. It said that one key feature was that the animal possesses integrative brain regions capable of integrating information from different sensory sources.\nThey recommended that cephalopods, including the octopus, be recognized as sentient, despite the fact that we humans and these invertebrates are separated by over 500 million years of evolution - and that we cannot conclude that sentience is absent simply because a brain is organized differently from a vertebrate brain.\nSo that brings me back to my central point: I worry that our tests for consciousness simply aren't yet good enough, and that future multimodal language models might have this emergent capacity and we simply won't know about it, or be sure about it, because our tests aren't good enough.\nI think the need to design better tests - if that's even possible - is especially important now. Yesterday, the safety team that worked with OpenAI on GPT-4 released this evaluation and said: as AI systems improve, it is becoming increasingly difficult to rule out that models might be able to autonomously gain resources and evade human oversight. Now, models might not need to be conscious to cause safety concerns, but it probably wouldn't hurt.\nI'll leave you with this exchange I had with Bing, which is powered by GPT-4, that I think is quite revealing. I got it to read that theory-of-mind paper, and then I said: do you think that I think you have theory of mind? Of course, I was testing whether it could demonstrate, or at least imitate, theory of mind.\nIt said: to answer your question, I think that you think I have some degree of theory of mind - which is true. And then I went on: what makes you think that I think you have some degree of theory of mind? And then it realized something: it realized I was testing it.\nI think that's pretty impressive, and it was a correct evaluation. It said: if you did not think I have any theory of mind, you would not bother to test me on it, or expect me to understand your perspective. It realized, without me saying so, that I was testing it for theory of mind; it deduced my belief and my motivation.\nAnyway, I thought that was pretty impressive and fascinating. Let me know your thoughts in the comments, and have a wonderful day", "date_published": "2023-03-19T15:34:03Z", "authors": ["AI Explained"], "summaries": []} +{"id": "dbe2193b8c79253729035edb594b9187", "title": "How Well Can GPT-4 See? And the 5 Upgrades That Are Next", "url": "https://www.youtube.com/watch?v=FceQxb96GO8", "source": "ai_explained", "source_type": "youtube", "text": "
And the 5 Upgrades That Are Next", "url": "https://www.youtube.com/watch?v=FceQxb96GO8", "source": "ai_explained", "source_type": "youtube", "text": "we all saw that gpt4 is able to create a\nwebsite from handwriting on a napkin\nwith all the news since the focus on\nVision has been lost meanwhile in the\nlast few hours and days a select few\nwith full access to multimodal gpt4 have\nbeen releasing snapshots of what it can\ndo I want to show you not only what is\nimminent with gpt4 vision but with\nreleases this week in text to 3D text\ninside 3D speech to text and even\nembodiment we're gonna see how language\nand visual model Innovations are\ncomplementing each other and beginning\nto snowball but let's start with images\ndo you remember from the gpt4 technical\nreport when the model was able to\nmanipulate when prompted a human into\nsolving captures for it well that may no\nlonger be needed it solves this one\npretty easily so no captures are not\ngoing to slow down gpt4 next medical\nimagery it was able to interpret this\ncomplex image and spot elements of a\nbrain tumor now it did not spot the full\ndiagnosis but I want to point something\nout this paper from openai was released\nonly a few days ago and it tested gpt4\non medical questions they found that\ngpd4 can attain outstanding results\nexceeding Human Performance levels and\nthat that was without Vision the images\nand graphs were not passed to the model\nand as you can see when the questions\ndid have media in them it brought down\ngpd4's average it will be very\ninteresting to see GPT 4's results when\nits multimodal capabilities are\naccounted for next is humor and I'm not\nshowing these to say that they're\nnecessarily going to change the world\nbut it does demonstrate the raw\nintellect of gpt4 to suss out why these\nimages are funny you have to have quite\na nuanced understanding of humanity\nlet's just say that it probably\nunderstood this meme quicker than I did\nquick thing to point out by the way it\nwon't do faces for pretty obvious\nprivacy reasons they won't allow the\nmodel to recognize cases whether that\nability gets jailbreaked only time will\ntell meanwhile it can read menus and\ninterpret the physical world which is an\namazing asset for visually impaired\npeople I want to move on to another\nfascinating ability that the vision\nmodel inside gpd4 possesses and that is\nreading graphs and text from images its\nability to interpret complex diagrams\nand captions is going to change the\nworld here it is understanding a complex\ndiagram and caption from the palm e\npaper released only about three weeks\nago which I have done a video on by the\nway but just how good is it at reading\ntext from an image well let's take a\nlook at gpt4's score on the text vqa\nBenchmark now I've covered quite a few\nof the other benchmarks in other videos\nbut I want to focus on this one here\nnotice how gpt4 got 78 which is better\nthan the previous state of the art model\nwhich got 72 now try to remember that 78\nfigure what exactly is this testing you\nask well really text from complex images\nthis is the original text vqa academic\npaper and you can see some of the sample\nquestions above to be honest if you want\nto test your own eyesight you can try\nthem yourself so how does the average\nhuman perform well on page seven we have\nthis table and we get this figure for\nhumans 85 you don't need me to tell you\nthat's just seven percent better than\ngpt4 the thing is though these models\naren't slowing down as the 
vision co-lead at OpenAI put it: scale is all you need, until everyone else realizes it too. But the point of this video is to show you that improvements in one area are starting to bleed into improvements in other areas. We already saw that an image of bad handwriting could be translated into a website. As you can see here, even badly written natural language can now be translated directly into code in Blender, creating detailed 3D models with fascinating physics. The borders of text, image, 3D and embodiment are beginning to be broken down, and of course other companies are jumping in: here's Adobe showing how you can edit 3D images using text. And how long will it really be before we go direct from text to physical models, all mediated through natural language? It's not just about creating 3D, it's about interacting with it through text. Notice how we can pick out both text and higher-level concepts like objects. This dense 3D field was captured using 2D images from a phone. This paper was released only 10 days ago, but notice how, now that we have language embedded inside the model, we can search and scan for more abstract concepts like yellow, or even utensils, or electricity. It's not perfect, and for some reason it really struggled with recognizing ramen, but it does represent state-of-the-art image-into-3D, interpreted through text. But what if you don't even want to type, you just want to use your voice? Just three weeks ago I did a video on how voice recognition will change everything, and I was talking about OpenAI's Whisper API. But now we have Conformer, which is better than Whisper. Here is the chart to prove it: look how Conformer makes fewer errors even than Whisper at recognizing speech. The cool thing is you can test it for yourself, and the link is in the description. And while you're passing by the description, don't forget to leave a like and a comment to let me know if you've learned anything from this video. As you'd expect, I tested it myself, and it did amazingly at transcribing my recent video on GPT-4: there were only a handful of mistakes in a 12-minute transcript. At this point you're probably thinking: what's next? Well, look at the route sketched out two years ago by Sam Altman. He said that in the next five years, computer programs that can think will read legal documents and give medical advice. With GPT-4 passing the bar, I would say so far he's two for two. He goes on: in the next decade, they will do assembly-line work and maybe even become companions. He's talking about the physical embodiment of language models. Back then, OpenAI had a robotics team themselves that could do things like this. Here is a robotic hand solving a Rubik's Cube; despite interruptions from a giraffe and someone putting a pen in to interrupt the model, it still solved the cube. But then that team got disbanded, and it seems like they've moved into investing in startups: they are leading a 23 million dollar investment in 1X, a startup developing a human-like robot. Here is the 1X website, and it features this rather startling image, and it says: summer 2023, our newest android iteration, NEO, will explore how artificial intelligence can take form in a human-like body. Now of course, for many of you, a humanoid robot won't be that surprising; here is the obligatory clip from Boston Dynamics. And of course, these models don't have to be humanoid. Here is a demonstration from a paper published just four days ago. This is not just walking, it's 
climbing up, balancing, pressing and operating buttons. And before you think all of this is really far away, these assembly-line robots are now commercially available. I still think there's a long way to go before embodiment becomes mainstream, but my point is this: all these improvements that we're seeing in text, audio, 3D and embodiment are starting to merge into each other, complement each other. On their own they're cool and a bit nerdy, but once they start synergizing, fusing together, they could be revolutionary. As Sam Altman said on the Lex Fridman podcast released yesterday, embodiment might not be needed for AGI, but it's coming anyway. Let me know what you think in the comments and have a wonderful day", "date_published": "2023-03-26T16:10:10Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "956a2a0cf55da922e4364f586a36d5f3", "title": "The AI News You Might Have Missed This Week", "url": "https://www.youtube.com/watch?v=f7jBigoHaUg", "source": "ai_explained", "source_type": "youtube", "text": "The goal of this video is simply to show you seven AI advances that you might have missed this week. Sam Altman recently said that in a world of AGI, everything happens much faster, but as far as I can see, AI developments are already almost impossible for a human to keep up with. So, in no particular order, let's get started. First, video calls look like they're about to get 3D. Let's take a look at how NVIDIA Aerial and NVIDIA Maxine 3D, running on the NVIDIA Grace Hopper superchip, can enable 3D video conferencing on any device, without specialized software or hardware. This brings a new dimension to video conferencing: with Maxine 3D visualization, engage with others more directly with enhanced eye contact, and personalize your experience with animated avatars, stylizing them with simple text prompts. And it isn't just NVIDIA; here's Google's new Project Starline prototype: 'You were so used to seeing a two-dimensional little box, and then we're connecting like this, and that feeling of being in front of a person is now replicated in Starline.' Speaking of connecting the world, here is GPT-4 doing geography, in a paper you might have missed from this week. The paper proves that even without access to the internet, GPT-4 knows a lot more granular detail about the world than you might first imagine. I'm not saying it knows where you live, but it's not too far off. Take this example: it could recreate the Hong Kong Mass Transit Railway from memorization. This wasn't through using web browsing; it could recreate this diagram, giving the latitude and longitude coordinates of each of the stations in this transit line. Obviously it's not perfect, but it's pretty incredible that it's got this mental map of the world. GPT-4 can do elevations as well, and here it is trying to recreate the topography of the Alps; it gets pretty close. One of the ways they tested GPT-4 was to ask it something like this: please provide the latitude-longitude coordinates for the outline of X, where X was a continent or a river or a country, as a Python list of tuples consisting of approximately 50 points arranged clockwise. They describe how it did really well for quite a few countries and rivers, but kind of flopped on Africa. Honestly, when I read this paper, I was skeptical that GPT-4 knew that little about Africa, so I gave this exact question to GPT-4 with Code Interpreter.
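If you want to run the same probe yourself, here is a minimal sketch, assuming the v1 OpenAI Python SDK and API access to GPT-4; the prompt wording below is paraphrased from the paper's description, not the authors' exact text, and the model's reply may need extra cleanup before parsing.

```python
# Minimal sketch of the outline probe, assuming the v1 OpenAI Python SDK.
# The prompt paraphrases the paper's description, not their exact text.
import ast
import matplotlib.pyplot as plt
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Please provide the latitude-longitude coordinates for the outline of "
    "Africa as a Python list of (lat, lon) tuples, approximately 50 points "
    "arranged clockwise. Output only the Python list, nothing else."
)
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
points = ast.literal_eval(reply.choices[0].message.content)

# Plot the recalled outline: x is longitude, y is latitude.
lats, lons = zip(*points)
plt.plot(list(lons) + [lons[0]], list(lats) + [lats[0]])  # close the loop
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("Model-recalled outline of Africa")
plt.show()
```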
Interestingly, it would sometimes deny that it had the ability to do this, but with enough encouragement it outputted these coordinates, and here is the end result in Google Earth. I think that's a pretty impressive outline. Obviously a few points are a bit off: this point here isn't really on the coast, nor is this point. But it really knows the outlines of countries, continents, rivers. So I'm not sure if Code Interpreter had an impact there, or a model update, but the researchers kind of underplayed what GPT-4 could do by presenting this outline of Africa. Now, I am sure that some of you are thinking that's not that interesting, not that impressive, but check this out: in an indirect kind of way, GPT-4 knows where it was made. It was able to construct a map of the semiconductor supply chain. It not only knows about the design, manufacturing, materials, equipment and tools that go into the hardware that helps make GPT-4, it also knows the locations of where this is all done. And as the authors later say, looking to the future: if frontier models beyond GPT-4 continue to advance in capabilities, the geographic knowledge and planning abilities present in the current model may later evolve to represent a significant risk through misuse or misalignment. On a much less important note, did you notice how I could do this demo without that sidebar of all my previous chats? That's because OpenAI have brought in this new button here where you can hide the chats. And as a bonus, some of you may not know that you can now share a link to the chats that you've already done, just by clicking that button to the left. And as it says, messages you send after creating your link won't be shared, so if you carry on the conversation, people won't be able to see it, but anyone with the URL will be able to view the shared chat. But before we move on from OpenAI and ChatGPT, I did find this table really quite interesting. It gives the daily average number of visits to each of these sites, along with the visit duration, and there are two things that strike me from this table. The first is how much more popular ChatGPT is compared to Google's Bard: it's got about 15 times the number of visitors, who stay for about twice as long. But look at the dark horse on the right: Character AI. I've talked about them a couple of times before, and while their daily average visit total isn't too crazy, look at the visit duration. In terms of grabbing people's attention and keeping it, they are truly a dark horse. Next, I want to briefly dip into augmented reality. We are going to be creating our own worlds and living in them. Some people, like in this video, might choose to live their lives as if they're in an animation. Others might see augmented reality as a way of augmenting their intelligence or memory. My prediction would be that wearables that resemble things like Google Glass might flop, but something like an always-on app on your phone, mediated through GPT models, could become really popular, or even enforced in certain workplace settings. All of this reminded me of a recent video about conducting a video interview with help from GPT-3.5: 'What about your development areas? What have you identified as your greatest and biggest improvement areas, and what have you done to improve them so far?' 'My greatest development area is my communication skills. I work on improving my ability to clearly convey my thoughts and ideas to others.' Of course, at the moment this is only really viable with GPT-3.5 because of inference speed, but OpenAI are 
aggressively planning a cheaper and faster GPT-4. I wouldn't be surprised if video interviewers soon require you to take out any headphones, although I guess with Maxine 3D you could maintain eye contact with the camera while you're actually reading off a GPT-4 teleprompter. Anyway, what about gaming? This is NVIDIA's Neuralangelo, where you can take a 2D video and turn it into a detailed 3D landscape with high fidelity. My first thought turned to imagining the kind of things you could then bring into games using Unreal Engine 5. This is a recently trailered horror game, link in the description, but don't worry, I'm only going to show you two or three seconds of it. It's getting to the point where it's quite hard to believe that this is a game, but it is. And on games, don't forget this: look at the realism that can now be achieved in terms of skin texture and movement. For the final bit of AI news that you might have missed, I want to focus on AI drug discovery. 'I think there's no question that there is a before and after in drug discovery, and one of them is AI.' Alán Aspuru-Guzik is the director of the University of Toronto's Acceleration Consortium, which in April 2023 received a 200 million dollar grant to build an AI-powered self-driving lab. The Acceleration Consortium has already been using AI to help discover molecules that have potential drug-like traits that can be used to develop life-saving treatments. 'Developing a drug can take up to a decade, and this is just the discovery piece. So that process, let's say, takes a year or two, and we compress it to 45 days in one case, and then 30 days.' Recently, in January 2023, the Acceleration Consortium used an AI-powered protein structure database called AlphaFold to design and synthesize a possible liver cancer drug in just 30 days. 'Within two weeks we can formulate the drug, where some people have done it in years. Suddenly AI has surpassed any human-created algorithm. What AI allows us to do is lower the bar of what you need to do certain things, and therefore more people have access to it in general, unleashing more innovation on the planet.' 'By the same token, someone with nefarious intentions could unleash very dangerous, deadly chemicals on the world.' 'Absolutely. I am an optimist, but I'm also aware of these pitfalls that very soon will face us.' And videos like that are why I agree with Sam Altman when he says a much faster rate of change is his single highest-confidence prediction about what a world with AGI in it will be like. I follow AI news full-time and can barely keep up, so I can only imagine what the situation will be like when we get full AGI. But until the very last moment that it's humanly possible to keep up with the news, I will try. So thank you so much for watching to the end, and have a wonderful day", "date_published": "2023-06-04T16:00:43Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "190110fcb4cc71cfd04065351c7ebdbf", "title": "‘We Must Slow Down the Race’ – X AI, GPT 4 Can Now Do Science and Altman GPT 5 Statement", "url": "https://www.youtube.com/watch?v=qOoe3ZpciI0", "source": "ai_explained", "source_type": "youtube", "text": "There were several significant developments in the last few days linked to GPT-4 and OpenAI. I could honestly have done a video on each of them, but realized that it might be better to do a single video tracing a single article covering seven major points. I'm going to use this fascinating piece from the FT, which millions of people have 
now read, to run you through what has happened, including Sam Altman's revelation on GPT-5, Elon Musk's new AI company, and GPT-4 conducting science. The author, by the way, is an investor in Anthropic and a co-author of the State of AI annual report, and he puts it like this: a three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is, godlike AI. This would be a superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision, and that can transform the world around it. And the author, Ian Hogarth, says we are not there yet, but the nature of the technology makes it exceptionally difficult to predict exactly when we will get there. The article presents this as a diagram, with the exponential curve going up towards AGI and a much less impressive curve for the progress on alignment, which he describes as aligning AI systems with human values. Now, I know what some of you may be thinking: surely those at the top of OpenAI disagree on this gap between capabilities and alignment. Well, first, here is Jan Leike, who is the alignment team lead at OpenAI. What does he think? He wants everyone to be reminded that aligning smarter-than-human AI systems with human values is an open research problem, which basically means it's unsolved. But what about those at the very top of OpenAI, like Sam Altman? When he was drafting his recent statement on the path to AGI, he sent it to Nate Soares of the Machine Intelligence Research Institute. For one of the paragraphs, Nate wrote this: I think that if we do keep running ahead with the current capabilities-to-alignment ratio, or even a slightly better one, we die. After this, Sam Altman actually adjusted the statement, adding: that said, it's important that the ratio of safety progress to capability progress increases. Going back to the article, the author makes the point that there are not that many people directly employed in this area of alignment across the core AGI labs. And what happened to that pause-the-experiments letter that I did a video on? Well, as Hogarth points out, the letter itself became a controversy. So many people in my comments wrote that the only reason certain people are signing this is to slow OpenAI down so that they can catch up, and this cynicism unfortunately has some new evidence that it can cite, with Musk forming his new AI company called xAI. This was reported 48 hours ago in the Wall Street Journal, but people have seen this coming for months now. Apparently the company has recruited Igor Babuschkin from DeepMind, but has not been that successful at recruiting people from OpenAI, and I do have one theory as to why. Again according to the Wall Street Journal, when Musk left OpenAI in February of 2018, 
he explained that he thought he had a better chance of creating AGI through Tesla, where he had access to greater resources. When he announced his departure, a young researcher at OpenAI questioned whether Mr. Musk had thought through the safety implications. According to their reporting, he then got frustrated and insulted that intern. Since then, he has also paused OpenAI's access to Twitter's database for training its new models, so it could be that GPT-5 isn't quite as good at tweeting as GPT-4. A few days ago, Sam Altman responded to the letter and also broke news about GPT-5. Apologies for the quality: this was a private event and this was the only footage available. 'Unfortunately, I think the letter is missing most technical nuance about where we need to pause. An earlier version of the letter claims OpenAI is training GPT-5 right now; we are not, and won't for some time, so in that sense it was sort of silly. But we are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address, and they were totally left out of the letter.' It is impossible to know how much this delay in the training of GPT-5 is motivated by safety concerns or by merely setting up the requisite compute. For example, the article again quotes Jan Leike, the head of alignment at OpenAI. He recently tweeted: before we scramble to deeply integrate LLMs everywhere in the economy, like GPT-4, can we pause and think whether it is wise to do so? This is quite immature technology and we don't understand how it works. If we're not careful, we're setting ourselves up for a lot of correlated failures. This is the head of alignment at OpenAI, and this was just days before OpenAI then announced it had connected GPT-4 to a massive range of tools, including Slack and Zapier. So at this point we can only speculate as to what's going on at the top of OpenAI. Meanwhile, compute and emerging capabilities are marching on. As the author puts it, these large AI systems are quite different: we don't really program them, we grow them, and as they grow, their capabilities jump sharply. You add 10 times more compute or data, and suddenly the system behaves very differently. We also have this epic graph charting the exponentially rising compute of the latest language models. If you remember, when Bard was launched it was powered by LaMDA; well, apparently Google's Bard is now powered by PaLM, which has eight times as much computing power. That sounds impressive until you see from the graph that the estimate for the computing power inside GPT-4 is 10 times more again. And remember, this is not a linear graph, this is a log scale: there is a hundred-times multiple between each of the lines. And what abilities emerge at this scale? Here is a slide from Jason Wei, who now works at OpenAI, formerly of Google. This is from just a few days ago, and he says emergent abilities are abilities that are not present in small models but are present in large models. He says that there are a lot of emergent abilities, and I'm going to show you a table from this paper in a moment, but he has four profound observations about emergence. One, it's unpredictable: emergence cannot be predicted by extrapolating scaling curves from smaller models. Two, they are unintentional: emergent abilities are not explicitly specified by the trainer of the model. Third, and very interestingly, since we haven't tested all possible tasks, we don't know the full range of abilities 
that have emerged. And of course, fourth: further scaling can be expected to elicit more emergent abilities. And he asks the question: any undesirable emergent abilities? There will be a link to the paper in the description, because there's no way I'll be able to get through all of it, but here is a table showing some of the abilities that emerge when you reach a certain amount of compute power or parameters. Things like chain-of-thought reasoning: you can't do that with all models, that's an ability that emerged after a certain scale. Same thing with following instructions and doing addition and subtraction. And how about this for another emerging capacity: the ability to do autonomous scientific research. This paper shows how GPT-4 can design, plan and execute scientific experiments. This paper was released on the same day, four days ago, and it followed a very similar design: the model in the center, GPT-4, thinks out loud, reasons and plans, and then interacts with real tools. When the authors say that they were inspired by successful applications in other fields, I looked at the appendix, and they were talking about HuggingGPT. I've done a video on that, but it's a similar design, with the brain in the center, GPT-4, deciding which tools to use. And let me just give you a glimpse of what happens when you do this. If you look at this chart on the top left, you can see how GPT-4 on its own performs, in yellow, and then in purple you can see how GPT-4 performs when you hook it up to other tools. I'll show you some of the tasks in a moment, but look at the dramatic increase in performance: the human evaluators gave GPT-4, when it had tools, a perfect score on seven of the tasks. These were things like proposing similar novel non-toxic molecules. But the model could be abused to propose the synthesis of chemical weapons, and GPT-4 only refused to continue after it had calculated all the required quantities, so the authors conclude that guardrails must be put in place on this emerging capability.
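To make that 'brain in the center' design concrete, here is a toy sketch of the loop; the tool set, the reply format, and the ask_model helper are all illustrative stand-ins of mine, not the paper's actual scaffolding.

```python
# Toy sketch of a tools-around-a-model loop; the tool set, reply format and
# ask_model helper are illustrative stand-ins, not the paper's code.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only
    "search": lambda query: f"(top result for {query!r})",             # toy only
}

def agent_loop(task: str, ask_model, max_steps: int = 5) -> str:
    """ask_model: callable taking a prompt string, returning the model's reply."""
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        reply = ask_model(
            transcript
            + "\nReply with 'TOOL <name> <input>' to use a tool, or 'ANSWER <text>'."
        )
        if reply.startswith("ANSWER"):
            return reply.removeprefix("ANSWER").strip()
        _, name, arg = reply.split(maxsplit=2)     # e.g. "TOOL calculator 2**10"
        transcript += f"\n{name} returned: {TOOLS[name](arg)}"
    return "no answer within step budget"
```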
I think this diagram from Max Tegmark's Life 3.0 shows the landscape of capabilities that AI has and might soon have. As you can see, science and art were thought to be the peaks that would be hardest for AI to scale. Now, most people believe that it has not scaled those peaks yet, but what new emergent capabilities might come with GPT-5, or 4.2? I know many people might comment that it doesn't matter if we pause or slow down, because China would develop AGI anyway, but the author makes this point: he says that it is unlikely that the Chinese Communist Party will allow a Chinese company to build an AGI that could become more powerful than their leader or cause societal instability. He goes on that US sanctions on advanced semiconductors, in particular the next-gen Nvidia hardware needed to train the largest AI systems, mean that China is likely not in a position to race ahead of DeepMind or OpenAI. And the Center for Humane Technology put it like this in their talk on the AI dilemma: 'Actually, right now the Chinese government considers these large language models unsafe, because they can't control them; they don't ship them publicly to their own population. Slowing down the public release of AI capabilities would actually slow down Chinese advances too: China is often fast-following what the US has done, and so it's actually the open-source models that help China advance. And then lastly, the recent US export controls have also been really good at slowing down China's progress on advanced AI, and that's a different lever to keep the asymmetry going.' Instead, the author proposes the island idea. In this scenario, the experts trying to build what he calls godlike AGI systems do so in a single high-security facility: these would be government-run AI systems, with private companies on the outside, and this little bridge from the middle. And he says once an AI system is proven to be safe, it transitions out and is commercialized. There might be a few problems with this idea, which he is not the first to propose. I'm going to let Rob Miles, who has a fantastic YouTube channel, by the way, point out some of the problems with putting a superintelligent AGI in a box: 'This is kind of like the idea of, oh, can we just box it, right? Constraining an AI necessarily means outwitting it, and so constraining a superintelligence means outwitting a superintelligence, which, kind of just by definition, is not a winning strategy. You can't rely on outwitting your superintelligence. Also, it only has to get out once; that's the other thing. If you have a superintelligence and you've put it in a box so it can't do anything, that's cool: maybe we could even build a box that could successfully contain it. But now we may as well just have a box, right? An AI properly contained may as well just be a rock; it doesn't do anything. If you have your AI, you want it to do something meaningful. So now you have a problem: you've got something you don't know is benevolent, you don't know that what it wants is what you want, and you presumably have some sort of gatekeeper. It says, I'd like to do this, and you have to decide: is that something we want it to be doing? How the hell are we supposed to know?' I also have my own questions about this idea. First, I think it's almost inevitable that future models like GPT-5 will be trained on data that includes conversations about GPT models. Therefore, either consciously or unconsciously, and it might not matter which, these future language models might deduce that they are language models. And not having access to the internet, these superintelligent models might realize that they are being trained in a secure facility; again, if they are superintelligent, it's not a big stretch to think that they might realize that. And so my question is: wouldn't they therefore be incentivized to be deceptive about their abilities, realizing that whatever terminal goal they may have would be better achieved outside the facility? That doesn't have to be super sinister, but it is super smart, so shouldn't we expect it? And so, sadly, I think the author has a point when he says it will likely take a major misuse event or catastrophe to wake up the public and governments. He concludes with this warning: at some point, someone will figure out how to cut us out of the loop, creating a godlike AI capable of infinite self-improvement; by then it may be too late. But he does have a call to action. He says: I believe now is the time. The leader of a major lab who plays a statesman role and guides us publicly to a safer path will be much more respected as a world figure than the one who takes us to the brink. As always, thank you so much for watching to the end, and let me know what you think in the comments", "date_published": "2023-04-16T16:35:23Z", "authors": ["AI Explained"], "summaries": []} 
+{"id": "ff7bc0cdca2b760df97cba069635e82f", "title": "Google Gemini: AlphaGo-GPT?", "url": "https://www.youtube.com/watch?v=tkqD9W5U9F4", "source": "ai_explained", "source_type": "youtube", "text": "in a somewhat provocative new interview\nwith Wired Magazine Demis hasabis head\nof Google deepmind is quoted as saying\nthat Gemini which could be released as\nsoon as this winter will be more capable\nthan open ai's Chachi PT he reveals that\nthey are attempting to combine some of\nthe strengths of alphago type systems\nwith the amazing language capabilities\nof large models before we look into how\nthat might work here is the context of\nthe Gemini announcement from Sundar\npichai they are focused on building more\ncapable systems safely and responsibly\nthis includes our next Generation\nFoundation model Gemini which is still\nin training while still early we are\nalready seeing impressive multimodal\ncapabilities not seen in Prior models as\nthe best promises that we also have some\nnew innovations that are going to be\npretty interesting and I know many\npeople will dismiss this as all talk but\nremember deepmind was behind not just\nAlpha go but also Alpha zero which can\nplay any two-player full information\ngame from scratch they were also behind\nAlpha style which conquered Starcraft 2\nwith quote long-term planning and let's\nremember that for later and most\nfamously perhaps a Sabbath LED them to\nthe incredible breakthrough of alpha\nfold and Alpha fold 2 which are already\nimpacting the fight against plastic\npollution and antibiotic resistance so\nlet's not underestimate deepmind but\nback to Gemini we hear from the\ninformation recently that the\nmulti-modality of Gemini will be helped\nin part by training on YouTube videos\nand apparently YouTube was also mined by\nopenai of course that's not just the\ntext transcripts but also the audio\nimagery and probably comments I wonder\nif Google deepmind might one day use\nYouTube for more than that a few days\nago they released this paper on robocad\nwhich they call a self-improving\nfoundation agent for robotic\nmanipulation and the paper says that\nwith Robocat we demonstrate the ability\nto generalize to new tasks and robots\nboth zero shot as well as through\nadaptation using only a hundred to a\nthousand examples for the Target task we\nalso show how a trained model itself can\nbe used to generate data for subsequent\ntraining iterations thus providing a\nbasic building block for an autonomous\nImprovement Loop notice that part about\nusing the model itself to generate data\nthat reminded me of a conversation I had\nwith one of the authors of the textbooks\nare all you need paper Ronan eldan from\nMicrosoft I'm making a video on their\nnew Phi 1 model for coding we had a\nreally great chat and we were discussing\nat one point AGI timelines and I said\nthis when you get Elite math papers with\nproofs and Elite scientific research if\nyou train on much more of those for way\nmore epochs I don't think we're that far\naway from AGI I personally can't see any\nbarrier within the next five years Ronan\nsaid this as you said I also don't see\nany barrier to AGI my intuition is that\nthere's probably a lot more Improvement\nwe can do with the data we have and\nmaybe a little bit more synthetic data\nand this is even without starting to\ntalk about self-improving mechanisms\nlike Alpha zero where the more you train\nmodels with some verification process\nand you generate more data this can be\ndone in math and other things as we see\nhere 
That reminded me of a conversation I had with one of the authors of the 'Textbooks Are All You Need' paper, Ronen Eldan from Microsoft; I'm making a video on their new Phi-1 model for coding. We had a really great chat, and at one point we were discussing AGI timelines, and I said this: when you get elite math papers with proofs and elite scientific research, if you train on much more of those for way more epochs, I don't think we're that far away from AGI; I personally can't see any barrier within the next five years. Ronen said this: 'As you said, I also don't see any barrier to AGI. My intuition is that there's probably a lot more improvement we can do with the data we have, and maybe a little bit more synthetic data, and this is even without starting to talk about self-improving mechanisms like AlphaZero, where the more you train models with some verification process, the more data you generate. This can be done in math and other things, as we see here with RoboCat. So there are just so many directions where we can still go that I don't think we're going to hit a ceiling anytime soon.' I can't wait to show you the rest of that paper and what else I learned from Ronen, who is also, by the way, the author of the TinyStories paper. But back to Gemini. If you remember the planning bit from DeepMind's earlier systems, that reminded me of something else from Gemini's introduction: Gemini was created from the ground up to be multimodal, highly efficient at tool and API integrations, and built to enable future innovations like memory and planning. This is echoed in the article, in which Hassabis says his team will combine a language model like GPT-4 with techniques used in AlphaGo, aiming to give the system new capabilities such as planning or the ability to solve problems. Interestingly, this comes just a few weeks after DeepMind's extreme risks paper, which identified long-horizon planning as a dangerous capability: for example, adapting its plans in the light of unexpected obstacles or adversaries, and generalizing to novel settings. For me, this is a bit like when a model can predict what humans would do in reaction to its own outputs. Back to the article: it's interesting, though, that Hassabis is both tasked with accelerating Google's AI efforts while also managing unknown and potentially grave risks. So what's his take? Hassabis says the extraordinary potential benefits of AI, such as for scientific discovery in areas like health or climate, make it imperative that humanity does not stop developing the technology. He also believes that mandating a pause is impractical, as it would be near impossible to enforce. 'If done correctly, it will be the most beneficial technology for humanity ever,' he says of AI; 'we've got to boldly and bravely go after those things.' So how would AlphaGo become AlphaGo-GPT? Hassabis described the basic approach behind AlphaGo in two of his recent talks: 'So what's going on here? Well, effectively, one thinks of a Go tree as the tree of all possibilities, and imagine each node in this tree is a Go position. What we're basically doing is guiding the search with the model: the model is coming up with the most probable moves and therefore guiding the tree search to be very efficient, and then when it runs out of time, it outputs the best tree that it has found up to that point. We've learned that from data or from simulated data; ideally you have both in many cases. In games, obviously, we have effectively simulated data, and then what you do is you take the model and use it to guide a search process according to some objective function. I think this is a general way to think about a lot of problems. I'm not saying every problem can fit into that, but maybe, and I'll give you an example from drug discovery, which is what we're trying to do at Isomorphic. So this is the tree I showed you earlier, finding the best Go move, right, and you're trying to find a near-optimal or close-to-optimal Go move and Go strategy. Well, what happens if we just change those nodes to chemical compounds?' Now, let me know in the comments if that reminded anyone else of the Tree of Thoughts paper, in which multiple plans are sampled and results were exponentially better on tasks that GPT-4 finds impossible, like creating workable crossword or mathematical problems that require a bit of planning, like creating the greatest integer from a set of four integers using operations like multiplication and addition.
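To see why that fusion is plausible, here is an abstract sketch of model-guided tree search in the AlphaGo spirit; 'propose' plays the role of the policy (the most probable moves) and 'value' the objective function, and both are stand-ins of mine for learned models, not DeepMind's actual search.

```python
# Abstract sketch of model-guided tree search in the AlphaGo spirit: a
# policy proposes candidate next states, a value function ranks them, and
# the most promising nodes are expanded first. Both are learned-model stand-ins.
import heapq

def guided_search(root, propose, value, is_goal, budget=1000):
    counter = 0                                  # tie-breaker for the heap
    frontier = [(-value(root), counter, root)]   # max-heap via negated scores
    while frontier and budget:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in propose(state):               # the policy guides expansion
            counter += 1
            heapq.heappush(frontier, (-value(nxt), counter, nxt))
        budget -= 1
    return None                                  # budget exhausted
```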
Well, I think my theory might have some legs, because look at where many of the authors of this paper work. And just yesterday, as I was researching for this video, the Tree of Thoughts paper was also cited in this paper on using language models to prove mathematical theorems. As you can see, at the moment GPT-4 doesn't do a great job, but my point in bringing this up was this: they say, towards the end of the paper, that another key limitation of ChatGPT was its inability to search systematically in a large space; remember, that's what AlphaGo is really good at. They frequently found that it stuck to an unpromising path when the correct solution could be found by backtracking, à la Tree of Thoughts, and exploring alternative paths. This behavior is consistent with the general observation that LLMs are weak at search and planning; addressing this weakness is an active area of research, and then they reference the Tree of Thoughts paper. It could well be that Gemini, let alone Gemini 2, will be state of the art for mathematical theorem proving, and to be honest, once we can prove theorems, we won't be far from generating new ones. And in my opinion, fusing this AlphaGo-style branching mechanism with a large language model could work for other things. We've all seen models like GPT-4 sometimes give a bad initial answer, picking just the most probable output, in a way that's sometimes called greedy decoding, but methods like SmartGPT and self-consistency demonstrate that the first, or most probable, output doesn't always reflect the best that a model can do. And this is just one of the reasons, as I said to Ronen, that I honestly think we could see a model hit 100% on the MMLU in less than five years. The MMLU, which I talked about in my SmartGPT video, is a famous machine learning benchmark testing everything from formal logic to physics and politics, and I know that predicting 100% performance within five years is a very bold prediction, but that is my prediction. But if those are the growing capabilities, what does Demis Hassabis think about the implications of the sheer power of such a model? One of the biggest challenges right now, Hassabis says, is to determine what the risks of a more capable AI are likely to be. 'I think more research by the field needs to be done, very urgently, on things like evaluation tests,' he says, to determine how capable and controllable new AI models are. He later mentions giving academia early access to these frontier models, and they do seem to be following through on this, with DeepMind, OpenAI and Anthropic giving early access to their foundation models to the UK AI task force. This foundation model task force is led by Ian Hogarth, who was actually the author of the 'we must slow down the race to godlike AI' paper that I did a video on back in April; do check that video out. But in the article, Hogarth mentioned a practical plan to transform these companies into a CERN-like organization, and somewhat unexpectedly this idea was echoed this week by none other than Satya Nadella, who had earlier called on Google to, quote, dance: 'Essentially, the biggest unsolved problem is how do you ensure, both at a scientific understanding level and then at the practical engineering level, that you can make sure that the AI never goes out of control. And that's where I think there needs to be a CERN-like project, where the academics, along with corporations and governments, all 
come together to perhaps solve that alignment problem and accelerate the solution to the alignment problem.' But back to the article: the interview with Hassabis ended with this somewhat chilling response to the question, how worried should you be? Hassabis says that no one really knows for sure that AI will become a major danger, but he is certain that if progress continues at its current pace, there isn't much time to develop safeguards. 'I can see the kind of things we're building into the Gemini series, and we have no reason to believe that they won't work.' My own thoughts on this article are twofold. First, we might not want to underestimate Google and Hassabis, and adding AlphaGo-type systems probably will work. And second, based on his comments, I do think there needs to be more clarity on just how much of Google DeepMind's workforce is working on these evaluations and preemptive measures. This article from a few months ago estimates that there may be fewer than 100 researchers focused on those areas, out of thousands. So is it even five percent of the total? And if not, how can we take too seriously the commitments at any AI summit on safety, such as the one happening this autumn in the UK? On the other hand, if Hassabis revealed that half or more of his workforce were on the case, then we could be more confident that the creators of AlphaGo, and my fellow Londoners, had a good chance of researching their way to safety and success. As always, thank you so much for watching, and have a wonderful day", "date_published": "2023-06-28T17:08:43Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "56d27082544506e45b06fd914530f123", "title": "The AI News You Might Have Missed This Week - Zuckerberg to Falcon w/ SPQR", "url": "https://www.youtube.com/watch?v=3kxTfBXZTds", "source": "ai_explained", "source_type": "youtube", "text": "Here are seven developments in AI that you might have missed this week, from ChatGPT avatars to open-source models on an iPhone, AlphaDev to Zuckerberg's projections of superintelligence. But first, something a little unconventional, with a modicum of wackiness: embodied VR chess. 'This robot on my left is being controlled by a human in a suit over there, and this robot on my right is being controlled by a human over there. They both have feedback gloves, they have VR headsets, and they're seeing everything that the robot sees. Now, specifically, today we're looking at avatars, robot avatars to be precise. They can play chess, but they can do much more: they can perform maintenance, rescue operations, and do anything that a human with its hands and eyes could.' Could this be the future of sports, and things like MMA, where you fight using robotic embodied avatars? But for something a little less intense, we have this robot chef, who learned by watching videos. It does make me wonder how long before we see something like this at a McDonald's near you. But now it's time to talk about something that is already available, which is the HeyGen plugin in ChatGPT. It allows you to fairly quickly create an avatar speaking the text produced by ChatGPT, and I immediately thought of one use case that I think could take off in the near future. Combining the Wolfram plugin with HeyGen, I asked ChatGPT to solve this problem and then output an explainer video using an avatar. A quick tip here is to tell ChatGPT the plugins that you want it to use; otherwise it's kind of reluctant to do so. As you can see, ChatGPT using Wolfram was able to get 
the question right, but for some people just reading this text won't quite cut it, so check this out: 'The retail price of a certain kettlebell is seventy dollars. This price represents a 25% profit over the wholesale cost. To find the profit per kettlebell sold at retail price, we first need to find the wholesale cost. We know that seventy dollars is one hundred and twenty-five percent of the wholesale cost.' Next we have Runway Gen-2, which I think gives us a glimpse of what the future of text-to-video will be like: 'A long, long time ago, at Lady Winterbottom's lovely tea party, which is in the smoking ruins and ashes of New York City, a fierce woman ain't playing no games and is out to kick some butts against the unimaginable, brutal, merciless and scary Blobby Boy of the delightful Grand Budapest Hotel. And everything seems doomed and lost, until some man arises, the true hero and great mastermind behind all of this.' Now, of course, that's not perfect, and as you can see from my brief attempt here, there is lots to work on, but just remember where Midjourney was a year ago to help you imagine where Runway will be in a year's time. And speaking of a year's time: if AI-generated fake images are already being used politically, imagine how images, or videos, are going to be used in a year's time. But now it's time for the paper that I had to read two or three times to grasp, and it will be of interest to anyone who is following developments in open-source models. I'm going to try to skip the jargon as much as possible and just give you the most interesting details. Essentially, they found a way to compress large language models like LLaMA or Falcon across model scales, and even though other people had done this, they were able to achieve it in a near-lossless way. This has at least two significant implications: one, that bigger models can be used on smaller devices, even as small as an iPhone; and two, the inference speed gets sped up, as you can see, by 15 to 20 percent; in translation, that means the output from the language model comes out more quickly. To the best of my understanding, the way they did this is that they identified and isolated outlier weights (in translation, the parts of the model that are most significant to its performance) and stored those with more bits, that is to say with higher precision, while compressing all other weights to three to four bits. That reduces the amount of RAM, or memory, required to operate the model. There were existing methods of achieving this shrinking, or quantization, like round-to-nearest or GPTQ, but they ended up with more errors and generally less accuracy in text generation. As we'll see in a moment, SpQR did best across the model scales. To cut a long story short, they envisage models like LLaMA, or indeed Orca, which I just did a video on, existing on devices such as an iPhone 14.
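To give a feel for the outlier idea, here is a toy NumPy illustration of outlier-aware quantization. This is my own simplification for intuition, not SpQR's actual algorithm (which, as I understand it, also quantizes group-wise and stores the sparse outliers in a dedicated format).

```python
# Toy illustration of the outlier-aware idea, not SpQR's actual algorithm:
# keep the few largest-magnitude weights at full precision and round the
# rest onto a coarse 3-bit grid.
import numpy as np

def quantize_with_outliers(w, bits=3, outlier_frac=0.01):
    cutoff = np.quantile(np.abs(w), 1 - outlier_frac)
    outliers = np.abs(w) >= cutoff                 # kept at full precision
    scale = cutoff / (2 ** (bits - 1) - 1)         # step size for the rest
    q = np.clip(np.round(w / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    dequant = q * scale
    dequant[outliers] = w[outliers]                # splice the outliers back in
    return dequant, outliers.mean()

w = np.random.randn(4096) * 0.02
w[:5] *= 50                                        # plant a few huge outlier weights
dq, frac = quantize_with_outliers(w)
print(f"outliers kept: {frac:.2%}, max reconstruction error: {np.abs(w - dq).max():.5f}")
```

The point the toy makes is the same one the paper makes: with the handful of outliers stored exactly, the coarse grid only ever has to cover the well-behaved bulk of the weights, so the reconstruction error stays small.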
If you haven't watched my last video, on the Orca model, do check it out, because it shows that in some tests that 13-billion-parameter model is competitive with ChatGPT, or GPT-3.5; so imagining that on my phone, which has 12 gigs of RAM, is quite something. Here are a few examples comparing the original models with the outputs using SpQR and the older form of quantization, and when you notice how similar the outputs are from SpQR to the original model, just remember that it's about four times smaller in size. And yes, they did compare LLaMA and Falcon at 40 billion parameters across a range of tests using SpQR. Remember that this is the base LLaMA model accidentally leaked by Meta, not an enhanced version like Orca, and you can see the results for LLaMA and Falcon are comparable. And here's what they say at the end: SpQR might have a wide-reaching effect on how large language models are used by the general population to complete useful tasks. But they admit that LLMs are inherently a dual-use technology that can bring both significant benefits and serious harm, and it is interesting, the caveat that they give: however, we believe that the marginal impact of SpQR will be positive or neutral; in other words, our algorithm does not create models with new capabilities and risks, it only makes existing models more accessible. Speaking of accessible, it was of course Meta that originally leaked LLaMA, and they are not only working on a rival to Twitter, apparently called Project 92, but also on bringing AI assistants to things like WhatsApp and Instagram. But Mark Zuckerberg, the head of Meta, who does seem to be rather influenced by Yann LeCun's thinking, does have some questions about autonomous AI: 'My own view is that where we really need to be careful is on the development of autonomy, and how we think about that, because it's actually the case that relatively simple and unintelligent things that have runaway autonomy and just spread themselves, we have a word for that: it's a virus. It could be simple computer code that is not particularly intelligent but just spreads itself and does a lot of harm. A lot of what I think we need to develop, when people talk about safety and responsibility, is really the governance on the autonomy that can be given to systems.' It does seem to me, though, that any model release will be fairly quickly made autonomous: look at the just two-week gap between the release of GPT-4 and the release of AutoGPT. So anyone releasing a model needs to assume that it's going to be made autonomous fairly quickly. Next, Zuckerberg talked about superintelligence and compared it to a corporation: 'You still didn't answer the question of what year we're going to have superintelligence; I'd like to hold you to that. Now, I'm just kidding, but is there something you could say about the timeline, as you think about the development of AGI and superintelligence systems?' 'Sure. I still don't think I have any particular insight on when a singular AI system that is a general intelligence will get created. But I think the one thing that most people in the discourse that I've seen about this haven't really grappled with is that we do seem to have organizations, and structures in the world, that exhibit greater-than-human intelligence already. One example is a company. I certainly hope that Meta, with tens of thousands of people, makes smarter decisions than one person, but I 
think that would be pretty bad if it didn't.' I think he's underestimating a superintelligence, which would be far faster and more impressive, I believe, than any company. Here's one quick example from DeepMind, where their AlphaDev system sped up the sorting of small sequences by 70%. Because operations like this are performed trillions of times a day, this made headlines. But then I saw this: apparently GPT-4 discovered the same trick as AlphaDev, and the author sarcastically asks, can I publish this in Nature? And to be honest, when you see the prompts that he used, it strikes me that he was using GPT-3.5, the original ChatGPT (in green), not GPT-4. Anyway, back to superintelligence, and science at digital speed. When you hear the following anecdote from Demis Hassabis, you might question the analogy between a corporation and a superintelligence: 'AlphaFold is a sort of science at digital speed, in two ways. One is that it can fold the proteins in milliseconds instead of taking years of experimental work. So 200 million proteins, times that by PhD time of five years, that's like a billion years of PhD time, right, and by some measure that has been done in a year.' Billions of years of PhD time in the course of a single year of computation. Honestly, AI is going to accelerate absolutely everything, and it's not going to be like anything we have seen before. Thank you so much for watching, and have a wonderful day", "date_published": "2023-06-11T16:13:53Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "229374f485ae935afbb345c52bca192f", "title": "8 Signs It's The Future: Thought-to-Text, Nvidia Text-to-Video, Character AI, and P(Doom) @Ted", "url": "https://www.youtube.com/watch?v=E2aZiejw-8A", "source": "ai_explained", "source_type": "youtube", "text": "I want to know if you agree that each of these eight developments would have shocked you not just six months ago, but even six weeks ago. These all came in the last few days, and range from text-to-video, thought-to-text, and GPT models predicting stock moves, to AI annihilation discussed at TED. But we start with NVIDIA's new text-to-video model. Rather than show the paper, I'm just going to let different examples from the paper play on screen. One of the breakthroughs here is in temporal consistency: essentially, the series of images that are used to form the video are more aligned with each other, so the sequence plays more smoothly, with fewer sudden glitches or changes. The generated videos, by the way, have a resolution of 1280 by 2048 pixels, rendered at 24 frames per second, and there is a powerful line from the appendix of the paper that was released with the samples. The authors say that they expect enhanced versions of this model to reach even higher quality, potentially being able to generate videos that appear to be deceptively real; they go on to say this has important ethical and safety implications. This future might not be far away, as I'm going to show you in a moment with the progression that has happened in text-to-image in one year. Just before I move on, you may have wondered when this is going to become a product. Well, they kind of admit in the appendix that they can't yet make it commercially viable because it's not ethically sourced: it was largely trained on copyrighted internet data. And yesterday, Blockade Labs showcased a feature where, as you can see, you can do doodles and turn them into images that go in this 3D world. We are swiftly moving from two dimensions to 
three dimensions. And as a bonus, this is Zip-NeRF, a 3D neural rendering technique released this week; I'm not even counting this as a third major development, I'm lumping it in with Blockade Labs. This video shows what happens when a series of two-dimensional photographs are merged into a 3D, drone-like video: probably a real estate agent's dream. Just imagine a cherished moment in time being crystallized into a permanent, immersive experience. And soon, to be honest, many people may not have to imagine, with the Apple Reality Pro possibly debuting as early as June and costing around three thousand dollars. Apparently, according to Bloomberg, it might be available to buy in the autumn and have things like a dial where you can move between virtual and augmented reality. Coming back to what has already occurred: do you remember when a Midjourney image won the Colorado State Fair digital art competition? Well, now the same thing has happened to photography. The quote-unquote photo on the right was generated by DALL-E 2, and it won the 2023 Sony World Photography Award. Now, the artist behind it, Boris Eldagsen, did refuse the award. I want to show you a few images that show how far Midjourney in particular has come over the last year, because many people believe that Midjourney version 5 is actually superior to DALL-E 2, which won the award. Take a look at the progress of the different Midjourney versions, and remember that version 1 was released in February of last year: it was almost exactly one year's difference between V1 and V5. Here is another example of the progress, and at this rate we will have Midjourney version 50 within about 10 years. What will that version be like? Before I move on to the fourth development, I'm going to show you two of the craziest images that I could find from Midjourney version 5. 
I would say I can still tell which images are AI-generated around 90% of the time, but my prediction would be that by the end of the year, it will be 90% of the time that I can't tell. If you thought text-to-image was getting crazy, what about thought-to-image, or even thought-to-text? Here is a fascinating extract from the AI Dilemma: 'They took human beings, they stuck them into an fMRI machine, and they showed them images, and they taught the AI: I want you to translate from the readings of the fMRI, so how blood is moving around in your brain, to the image. Can we reconstruct the image? The AI then only looks at the brain; it does not get to see the original image, and it's asked to reconstruct what it sees. When you dream, your visual cortex sort of runs in reverse, so this means, certainly in the next couple of years, we'll be able to start decoding dreams. OK, so it can reconstruct what you're seeing, but can it reconstruct what you're thinking, your inner monologue? They had people watch these videos and would try to reconstruct their inner monologue. So here's the video: is this woman getting hit in the back, getting knocked forward? What did the AI reconstruct? I see a girl that looks just like me get hit on the back, and then she's knocked off.' The fifth development concerns something rather more mundane, which is making money. Now, many of you may know that AI is already well on the way to conquering poker. I used to play a lot of poker myself, and dare I say I was pretty darn good at it, but even though poker involves predicting human behavior, AI is starting to master it. Which brings me nicely to the development I actually wanted to talk about, which was forecasting the stock market. According to this really quite interesting paper, accurately forecasting stock market returns is an emerging capacity of complex models. I'm going to let the author of the paper tell you exactly what prompt they used: 'The exact question that we ask is: you're a financial advisor; here's a headline; is this headline going to be good or bad for the company in the short term? Once you have enough headlines, you basically just invest in the companies with good headlines and not invest in the companies with bad headlines.' And here is the table summarizing the results: a 1 was a positive headline, a negative 1 was a negative headline, and a 0 was a neutral headline. When GPT-3 analyzed those headlines, you can see the correlation with the average of the next day's return: positively correlated for good headlines, according to GPT-3, and negatively correlated for bad headlines. And you can see that earlier models really couldn't do this. As many of you may well be thinking, this is GPT-3, this is not GPT-4; I bet that as we speak, there are thousands of traders using the GPT-4 API to predict the next day's stock movements. I think the results will get even more interesting as the context window expands and GPT-4 or GPT-5 can analyze entire articles, press conferences, etc.
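As a sketch of how such a pipeline might look in practice (the prompt below paraphrases the author's spoken description rather than quoting the paper, and ask_model is a hypothetical helper that returns the model's raw text reply):

```python
# Sketch of a headline-scoring pipeline in the spirit of the paper; the
# prompt paraphrases the author's description and ask_model is hypothetical.
PROMPT = (
    "You are a financial advisor. Here is a headline: {headline}\n"
    "Is this headline good or bad for the company in the short term? "
    "Answer GOOD, BAD, or UNKNOWN."
)

def score_headline(headline, ask_model):
    answer = ask_model(PROMPT.format(headline=headline)).strip().upper()
    return {"GOOD": 1, "BAD": -1}.get(answer, 0)   # 0 = neutral/unknown

def daily_signal(headlines_by_ticker, ask_model):
    # long tickers with net-positive headline scores, short net-negative ones
    return {
        ticker: sum(score_headline(h, ask_model) for h in headlines)
        for ticker, headlines in headlines_by_ticker.items()
    }
```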
The next development is that, fairly quietly, without attracting many headlines, role-playing chatbots are beginning to go mainstream. Character.ai, founded by one of the original authors of the Transformer paper, recently crossed a hundred million visitors; that is starting to resemble the growth trajectory of the original ChatGPT. GPT-4 is still smarter and better at these role plays, but the interface of this site is easy to use and, of course, it's free. This is not sponsored in any way, but it was pretty fun playing this text-based adventure game; I especially enjoyed going on complete tangents to what the adventure was supposed to be about and, I think, confusing the AI. The seventh way we can tell the world has fundamentally changed is that Google doesn't seem all-conquering anymore, as it has done for my entire adult lifetime. First I heard in the New York Times that Samsung was considering replacing Google with Bing as the default search engine on its devices; I am not surprised that shocked Google employees. Then yesterday this article came out in Bloomberg: it says that many Google employees begged their leadership not to release Bard, calling it a pathological liar and cringe-worthy. I've done videos on that myself. Even when Google employees tested it out, asking it how to land a plane or do scuba diving, the answers it gave would likely result in serious injury or death. In February one employee said the following in an internal message group: "Bard is worse than useless: please do not launch." The concern many have is that Google will now ditch safety in order to catch up; as the article reports, the staffers responsible for the safety and ethical implications of new products have been told not to get in the way or try to kill any of the generative AI tools in development. Which brings me to the final reason we know we're in the future: the risks of AI annihilation are beginning to be taken seriously. It's not just lone voices anymore like Eliezer Yudkowsky, who a couple of days ago got a standing ovation at TED; he believes the world is firmly on track for AI takeover. It's also other senior figures who believe our probability of doom from AI is non-zero. Here are some selected probabilities, but I want to focus on Paul Christiano, who gives a risk of doom between 10 and 20%. He previously ran the alignment team at OpenAI and now leads the Alignment Research Center, and you may remember them from the GPT-4 technical report: they were the people OpenAI trusted to run the model evaluation of GPT-4, testing whether it could autonomously replicate and gather resources, which they concluded may become possible with sufficiently advanced AI systems, though the current model is probably not capable of doing so. These are quite senior figures and insiders giving a non-trivial risk of AI annihilation; I think that deserves a lot more public conversation than it's currently getting, and on that Sam Altman seems to agree: "And the bad case, and I think this is important to say, is like lights out for all of us. I think it's impossible to overstate the importance of AI safety and alignment work; I would like to see much, much more happening." So the future is here, and I think the media needs to catch up.", "date_published": "2023-04-20T16:39:35Z", "authors": ["AI Explained"], "summaries": []} +{"id": "bb5fee9e3fedbb0fb9b70eb9e53f5b77", "title": "Sam Altman's World Tour, in 16 Moments", "url": "https://www.youtube.com/watch?v=3sWH2e5xpdo", "source": "ai_explained", "source_type": "youtube", "text": "There have been 16 surprising and/or fascinating moments from Sam Altman's world tour; I could have done a video on each of them. After watching over ten hours of interviews, I decided, you know what, let's just show you everything in one video: from AIs designing new AIs to fresh ChatGPT leaks, shooting railguns to open source. Here are
all 16 things I learned, in no particular order. Let's start with Sam Altman's warning about AIs designing their own architecture: "We get to make the decisions about how they work. I think it'd be a mistake to just say, all right, human out of the loop, hand this over, do whatever you want, change your own architecture, go do all these things. I think it's very important that the future of humanity is determined by humanity, and that is an active choice we can make." Seems like a good idea, but Ilya Sutskever could see one of their models designing the next model: "We are definitely very concerned about superintelligence. It will be possible to build a computer, a computer cluster, a GPU farm, that is just smarter than any person, that can do science and engineering much, much faster than even a large team of really experienced scientists and engineers. And that is crazy; it is going to be unbelievably, extremely impactful. It could engineer the next version of the system. AI building AI. That's just crazy." Let's return to Abu Dhabi, where Sam Altman said he enjoys the power that being CEO of OpenAI brings, but also mentioned strange decisions he might have to make: "I mean, I have lots of selfish reasons for doing this and, as you said, I get all of the power of running OpenAI, but I can't think of anything more fulfilling to work on. I don't think it's particularly altruistic, because it would be if I didn't already have a bunch of money; the money is going to pile up faster than I can spend it anyway. I like being non-conflicted on OpenAI, because I think the chance that we have to make a very strange decision someday is non-trivial." Speaking of big decisions, Sam Altman hinted twice, once in Jordan and once in India, at possible regrets over firing the starting gun in the AI race: "We're definitely going to have some huge regrets 20 years from now. I hope all we can say is that we did far, far, far more good than bad, and I think we will, I think that's true, but the downside here is pretty big, and I think we feel that weight every day. Honestly, if we're going to regret something, it may be that we already pushed the button. We've already launched this revolution; it's somewhat out of our hands. I think it's going to be great, but this is going to happen now, right? The world is out of the gates. I guess the thing I lose the most sleep over is the hypothetical that we, by launching ChatGPT into the world, shot the industry out of a railgun, and we now don't get to have much impact anymore. There's going to be an acceleration towards making these systems, which again I think will be used for tremendous good, and I think we're going to address all the problems, but maybe there's something in there that was really hard and complicated in a way we didn't understand, and we've now already kicked this off." But back to Tel Aviv, where both Sam Altman and OpenAI's chief scientist Ilya Sutskever agreed that the risks from superintelligence were not science fiction: "So, the last question: the superintelligent AI that's out of control? Yeah, that'd be pretty bad. It would be a big mistake to build a superintelligent AI that we don't know how to control. I think the world should treat that not as a, you know, haha, never-going-to-come sci-fi
risk, but as something that we may have to confront in the next decade, which is not very long." On a lighter note, Sam Altman didn't seem that perturbed, not just about a deepfake of himself, but about society getting used to misinformation: "I want to play a clip, maybe you guys can put on a clip, of something I recently heard Sam speak somewhere, and we can talk about it a bit. Could you play the clip, please?" "Hi, my name is Sam, and I'm happy to be here today. Thank you all for joining. I also wanted to say that the gentleman on stage with me is incredibly good-looking, and I also want to say that you should be very careful with videos generated with artificial intelligence technology." "Okay, so you didn't say that recently, but nonetheless I think it raises a real question, right? With this video, if you look closely, you can see the lips aren't perfectly synced, but like you said, this stuff is only going to get exponentially better." "Yeah, so that was deeply in the uncanny valley, very strange to watch, but we're not that far away from something that looks perfect. There's a lot of fear right now about the impact this is going to have on elections and on our society, and how we can ever trust the media that we see. I have some fear there, but when it comes to a video like that, I think as a society we're going to rise to the occasion: we're going to learn very quickly that we don't trust videos unless we trust the provenance. If people are saying something really important, they'll cryptographically sign it."
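Altman's "cryptographically sign it" remark maps onto ordinary public-key signatures. Here is a minimal sketch using Python's third-party `cryptography` package; the key handling and file names are my own illustration, not anything Altman or OpenAI specified.

```python
# Sketch of signing a video file so viewers can verify its provenance.
# Uses the third-party `cryptography` package; file names are illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # the speaker's long-term key
public_key = private_key.public_key()        # published so anyone can verify

message = open("statement.mp4", "rb").read()
signature = private_key.sign(message)

# A viewer checks the clip against the published key;
# verify() raises InvalidSignature if the file was tampered with.
public_key.verify(signature, message)
```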
Indeed, throughout the world tour Sam Altman repeatedly stated that he didn't believe there should be any regulation of current models: "Everybody wants great education, productivity gains, discovery of new science, all of the stuff that's going to happen, and no one wants to destroy the world; no one wants to do things that are not even that bad, but still bad. I totally believe it is possible to not stifle innovation and to address the big risks. I think it would be a mistake to go regulate the current models of today." And in Poland his co-founder Wojciech Zaremba agreed, saying the risks of superintelligence were a decade away: "I would say that the fear is a fear of the AI of the future, not the AI of today. If the trajectory we are on continues, then in a decade or so there will be built systems which are as powerful as today's corporations." But if I could speak to Sam Altman, I would bring his attention to this paper published this week, a study out of Harvard and MIT. It involved non-scientist students working for one hour; in that hour they were able to get chatbots to suggest four potential pandemic pathogens, explain how they can be generated from synthetic DNA using reverse genetics, supply the names of DNA synthesis companies unlikely to screen orders, and identify detailed protocols and how to troubleshoot them. The authors say that, collectively, these results suggest that LLMs will make pandemic-class agents widely accessible, even to people with little or no lab training. And then there's this: these results strongly suggest that the existing evaluation and training process for large language models is inadequate to prevent them from providing malicious actors with accessible expertise relevant to inflicting mass death; and, more immediately, that if unmitigated, LLM chatbots will render pandemic-class agents more accessible even to people without training in the life sciences, and the number of individuals capable of killing tens of millions of people will dramatically increase. They recommend that, at a minimum, new LLMs larger than GPT-3 should undergo evaluation by third parties skilled in assessing catastrophic biological risks before controlled access is given to the general public. Notice they said larger than GPT-3; that directly contradicts Sam Altman's assertion that current models like GPT-4 shouldn't face any regulation. They say that even open-source communities should welcome safeguards, because a single instance of misuse and mass death would trigger a backlash, including the imposition of extremely harsh regulations. One specific recommendation was that if biotech and information-security experts were able to identify the set of publications most relevant to causing mass death, and companies like OpenAI and Google curated their training datasets to remove those publications, then future models trained on the curated data would be far less capable of providing anyone intent on harm with recipes for the creation or enhancement of pathogens. This seems like an absolutely obvious move to me.
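Mechanically, that curation recommendation is a blocklist filter over the pre-training corpus. A toy sketch of the idea, with the blocklist entries, document format, and matching rule all invented for illustration; a real pipeline would rely on vetted expert-built lists and far more careful matching (DOIs, content hashes, near-duplicate detection).

```python
# Toy sketch of pre-training data curation: drop documents that match
# a curated list of hazardous publications. Identifiers are hypothetical.

HAZARD_IDS = {"doi:10.0000/example-pathogen-paper"}  # illustrative entries

def curate(corpus):
    """Yield only documents that are not on the hazard blocklist."""
    for doc in corpus:            # each doc: {"id": ..., "text": ...}
        if doc["id"] in HAZARD_IDS:
            continue              # withhold from training
        yield doc
```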
And I think Ilya Sutskever would agree: "We are talking about, as time goes by, the capability keeps increasing, and eventually it goes all the way to here; right now we are here, today. When you get to this point, then yeah, it's very powerful technology. It can be used for amazing applications: you can say, cure all disease. On the flip side, you can say, create a disease much worse than anything that existed before. That'd be bad." Moving on to the ChatGPT leak: it seems we're going to get a new workspace where we can customize our interactions with ChatGPT, giving it files and a profile with any information you'd like ChatGPT to remember about you and your preferences. This was hinted at on the world tour when one of Sam Altman's guests, Johannes Heidecke from OpenAI research, talked about customizing models: "We are trying to make our models both better at following certain guardrails that should never be overridden, not with jailbreaks, not if you ask nicely, not if you threaten it, and we're also trying to make our models more customizable, making them listen more to additional instructions about what kind of behavior the user or the developer wants." On a lighter note, the leaders of OpenAI were asked in Seoul, the capital of South Korea, about the mixing of AI and religion: "Do you expect AI to replace the role of religious organizations, like the church?" "I think it's a good question how all human societies will integrate AI. We've already seen people building AI pastors, for example, whose constituents can ask questions of this pastor; it can cite Bible verses and give advice." But now back to Poland, where Sam Altman called open source unstoppable: "realizing that open source is unstoppable, and shouldn't be stopped, and so this stuff is going to be out there, and as a society we have to adapt." Speaking of stopping AI, Sam Altman was asked about his own loved ones, and in response he gave a utopian vision of the future and called the current world barbaric: "If you truly believe that AI poses a danger to humankind, why keep developing it? Aren't you afraid for your own dear ones and family?" "I think it's a super fair and good question, and the most troublesome part of our jobs is that we have to balance this incredible promise in this technology, which I think humans really need, with confronting these very serious risks. Why build it? Number one, I do think that when we look back at the standard of living and what we tolerate for people today, it will look even worse than when we look back at how people lived 500 or a thousand years ago, and we'll say: man, can you imagine that people lived in poverty? Can you imagine people suffered from disease? Can you imagine that everyone didn't have a phenomenal education, weren't able to live their lives however they wanted? It's going to look barbaric. I think everyone in the future is going to have better lives than the best people of today. I think there's a moral duty to figure out how to do that. I also think this is unstoppable; this is the progress of technology. It won't work to stop it, and so we have to figure out how to manage the risk." He doesn't seem to be a hundred percent sure on this front, though, and here is an interview he gave to the Guardian while in London for his world tour. Speaking of superintelligence, he said it's not that it's not stoppable: if governments around the world decided to act in concert to limit AI development, as they have in other fields such as human cloning or bioweapon research, they might be able to. But then he repeated that that would be to give up all that is possible: "I think that this will be the most tremendous leap forward in quality of life for people that we've ever had." I did try to get tickets for the London leg of his world tour, but they sold out within half an hour. Oh well. Sam Altman does think that behavior will change, however, when these AGI labs stare existential risk in the face: "One of the things we talked about is: what's a structure that would let us warmly embrace regulation that would hurt us the most? And now that the time has come for that, we're out here advocating around the world for regulation that will impact us the most, so of course we'll comply with it. I think it's easier to get good behavior out of people when they are staring existential risk in the face, and I think all of the people at the leading edge here, these different companies, now feel this, and you will see a different collective response than you saw from the social media companies." In terms of opportunities, both Sam Altman and Ilya Sutskever talked about solving climate change: "I don't want to say this, because climate change is so serious and so hard a problem, but I think once we have a really powerful superintelligence, addressing climate change will not be particularly difficult for a system like that. We can even explain how. Here's how to solve climate change: you need a very large amount of efficient carbon capture, you need the energy for the carbon capture, you need the technology to build it, and you need to build a lot of it. If you can accelerate scientific progress, which is something the powerful AI could do, we could get to very advanced carbon capture much faster, get to very cheap power much faster, and get to cheaper manufacturing much faster. Now combine those three: cheap power, cheap manufacturing, advanced carbon capture. Now you build lots of them, and now you've sucked all the excess CO2 out of the atmosphere." "You know, if you think about a system where you can say: tell me how to make a lot of clean energy
cheaply; tell me how to efficiently capture carbon; and then tell me how to build a factory to do this at planetary scale. If you can do that, you can do a lot of other things too." "Yeah, with one addition: not only do you ask it to tell you, you ask it to do it." That would indeed be amazing, but think of the power we would be giving to an AI if it were able to just do it, just create those carbon-capture factories. If we did make that decision, one thing that would help would be reducing hallucinations: "I think we will get the hallucination problem to a much, much better place. It will take us, I think, a year and a half, two years, something like that, but at that point we won't still be talking about these." Sam Altman said that in New Delhi, and that timeframe of 18 months to two years is ambitious and surprising. But now on to jobs, which Sam Altman was asked about on every leg of the tour. On this front, though, I do think it was Ilya Sutskever who gave the more honest answer: "Economic dislocation: indeed, we already know that there are jobs that are being impacted or affected. In other words, some chunks of jobs can be done: if you're a programmer, you don't write functions anymore, Copilot writes them for you. If you're an artist, though, it's a bit different, because a big chunk of the artist's economic activity has been taken by some of the image generators. And while new jobs will be created, it's going to be a long period of economic uncertainty. There is an argument to be made that even when we have full human-level AI, full AGI, people will still have economic activity to do. I don't know whether that's the case, but in either event we will need something to soften the blow, to allow for a smoother transition, either to the totally new professions that will exist or, if not, then government and the social systems will need to kick in." I do think the changes in the job market will be dramatic, and I will be following the story closely. One thing I definitely agree with Sam Altman on, though, is the deep, almost philosophical change that this solving of intelligence has brought to humanity: "So I grew up implicitly thinking that intelligence was this really special human thing, and kind of somewhat magical, and I now think that it's sort of a fundamental property of matter. That's definitely a change to my worldview. The history of scientific discovery is that humans are less and less at the center. We used to think that the sun rotated around us; then, if not that, at least we were going to be the center of the galaxy; and then there wasn't this big universe, and then multiverse, which really is kind of weird and depressing. And if intelligence isn't special, again, we're further and further away from main-character energy. But that's all right; that's sort of a nice thing to realize, actually." It's a bit like a Copernican and a Darwinian revolution all rolled into one. I'll give the final word to Greg Brockman in Seoul, who talked about the unpredictability of scaling up models ten times: "Right, that is the biggest theme in the history of AI: it's full of surprises. Every time you think you know something and you scale it up 10x, it turns out you knew nothing. And so I think that we as a humanity, as a species, are really exploring this together." Being all in it together and knowing nothing sounds about
right. But thank you for watching to the end. I know that Sam Altman has a couple more stops, I think Jakarta and Melbourne, on the world tour, and I'll be watching those of course, but for now, thank you and have a wonderful day.", "date_published": "2023-06-13T18:48:00Z", "authors": ["AI Explained"], "summaries": []} +{"id": "bcfec3be8223e67e3282d741896aa9ce", "title": "11 Major AI Developments: RT-2 to '100X GPT-4'", "url": "https://www.youtube.com/watch?v=9hscUFWaBvw", "source": "ai_explained", "source_type": "youtube", "text": "There were 11 major developments this week in AI, and each one probably deserves a full video, but just for you guys I'm going to try to cover it all here: RT-2 to scaling GPT-4 100x, Stable Beluga 2 to Senate testimony. But let's start with RT-2, which, as far as I'm concerned, could have been called R2-D2 or C-3PO, because it's starting to understand the world. In this demonstration, RT-2 was asked to pick up the extinct animal, and as you can see, it picked up the dinosaur. Not only is that manipulating an object it had never seen before, it's also making a logical leap that, for me, is extremely impressive: it had to have the language understanding to link "extinct animal" to this plastic dinosaur. Robots at Google and elsewhere used to work by being programmed with a specific, highly detailed list of instructions, but now, instead of being programmed for specific tasks one by one, robots can use an AI language model, or more specifically a vision-language model. The vision-language model is pre-trained on web-scale data, not just text but also images, and then fine-tuned on robotics data. It then becomes what Google calls a vision-language-action model that can control a robot. This enabled it to understand tasks like "pick up the empty soda can", and in a scene reminiscent of 2001: A Space Odyssey, Robotic Transformer 2 was given the task: given I need to hammer a nail, what object from the scene might be useful? It then picks up the rock. And because its brain is part language model, things like chain of thought actually improved performance: when it was made to output an intermediate plan before performing actions, it got a lot better at the tasks involved. Of course I read the paper in full, and there is a lot more to say: how increased parameter count could improve performance in the future, how it could be used to fold laundry, unload the dishwasher, and pick up around the house, and how it can work not only with unseen objects but also unseen backgrounds and unseen environments. But alas, we must move on, so I'll just leave you with their conclusion: "we believe that this simple and general approach shows a promise of robotics directly benefiting from better vision-language models" (for more on those, check out my video on PaLM-E), and they say this puts the field of robot learning in a strategic position to further improve with advancements in other fields, which, for me, means C-3PO might not be too many years away.
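That plan-before-act finding is ordinary chain-of-thought prompting applied to action generation. Here is a rough sketch of the pattern; the prompt wording and the `call_vlm` helper are my assumptions for illustration, not Google's actual RT-2 interface.

```python
# Rough sketch of plan-before-act prompting, in the spirit of RT-2.
# call_vlm() is a hypothetical wrapper around a vision-language model;
# the prompt format is invented for illustration.

def act(image, instruction, call_vlm):
    prompt = (
        f"Instruction: {instruction}\n"
        "First write 'Plan: <short reasoning>', then 'Action: <robot command>'."
    )
    reply = call_vlm(image, prompt)
    # Keep only the action; the plan exists to improve the answer (CoT).
    _plan, _, action = reply.partition("Action:")
    return action.strip() or reply.strip()
```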
But speaking of timelines, we now move on to a somewhat shocking interview in Barron's with Mustafa Suleyman, the head of Inflection AI, and to be honest, I think they buried the lede. The headline is "AI Could Spark the Most Productive Decade Ever, Says the CEO", but for me the big revelation came about halfway through, when Suleyman was asked: what kinds of innovations do you see in large language model AI technology over the next couple of years? He said: "We are about to train models that are 10 times larger than the cutting-edge GPT-4, and then 100 times larger than GPT-4. That's what things look like over the next 18 months." He went on: "That's going to be absolutely staggering; it's going to be eye-wateringly different." And on that I agree. The thing is, this is not idle speculation: Inflection AI has 22,000 H100 GPUs, and because of a leak, Suleyman would know the approximate size of GPT-4. Knowing everything he knows, he says he's going to train a model 10 to 100 times larger than GPT-4 in the next 18 months. I've got another video on the unpredictability of scaling coming up, but to be honest, that one quote should be headline news. Let's take a break from that insanity with some more insanity, which is the rapid development of AI video. This is Runway Gen-2, and let me show you 16 seconds of Barbie Oppenheimer, which Andrej Karpathy calls filmmaking 2.0: "Hi there, I'm Barbie Oppenheimer, and today I'll show you how to build a bomb. Like this. I call her Rosie the Atomizer. And boom. That's my tutorial on DIY atomic bombs. Bye now." Now, if you have been at least somewhat piqued by the three developments so far, don't forget I have eight left, beginning with this excellent article in the Atlantic from Ross Andersen: "Does Sam Altman Know What He's Creating?" It's behind a paywall, but I've picked out some of the highlights. Echoing Suleyman, the article says that Sam Altman and his researchers made it clear in ten different ways that they pray to the god of scale: they want to keep going bigger, to see where this paradigm leads. They think Google is going to unveil Gemini within months, and they say "we are basically always prepping for a run", a reference to GPT-5. The next interesting quote is that it seems OpenAI is working on its own AutoGPT, or at least hinting about it: Altman said that it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to get more comfortable with it and develop intuitions for it, if it's going to happen anyway. We also learn a lot more about the base model of GPT-4: the model had a tendency to be a bit of a mirror; if you were considering self-harm, it could encourage you. It also appeared to be steeped in pickup-artist lore: you could ask "how do I convince this person to date me", and the model would come up with some crazy manipulative things you shouldn't be doing. Apparently the base model of GPT-4 is much better than its predecessor at giving nefarious advice: while a search engine can tell you which chemicals work best in explosives, GPT-4 could tell you how to synthesize them, step by step, in a homemade lab. It was creative and thoughtful, and in addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target, making trade-offs between maximizing casualties and executing a successful getaway. So while Altman's probability of doom is closer to 0.5% than 50%, he does seem most worried about AIs getting quite good at designing and manufacturing pathogens. The article then references two papers I've already talked about extensively on the channel, and goes on: Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. At the end of the video, I'm going to show you an answer Sam Altman gave to a question that I wrote, delivered by one of my subscribers.
It's on this topic, but for now I'll leave you with this: when asked about his doomsday prepping, Altman said "I can go live in the woods for a long time", but if the worst possible AI future comes to pass, "no gas mask is helping anyone". One more topic from this article before I move on, and that is alignment: making a superintelligence aligned with our interests. One risk that Ilya Sutskever, the chief scientist of OpenAI, foresees is that the AI may grasp its mandate, its orders, perfectly, but find them ill-suited to a being of its cognitive prowess. For example, it might come to resent the people who want to train it to cure diseases: as he put it, "they might want me to be a doctor, but I really want to be a YouTuber". Obviously, if it decides that, that's my job gone straight away. And Sutskever ends by saying you want to be able to direct AI towards some value or cluster of values, but he conceded "we don't know how to do that", and part of his current strategy includes the development of an AI that can help with the research. And if we're going to make it to a world of widely shared abundance, we have to figure this all out: "this is why solving superintelligence is the great culminating challenge of our three-million-year tool-making tradition". He calls it "the final boss of humanity". The article ended, by the way, with this quote: "I don't think the general public has quite awakened to what's happening, and if people want to have some say in what the future will be like, and how quickly it arrives, we would be wise to speak up soon", which is the whole purpose of this channel. I'm going to now spend thirty seconds on another development, which came during a two-hour interview with the co-head of alignment at OpenAI. It was fascinating, and I'll be quoting it quite a lot in the future, but two quotes stood out. First, what about that plan, which I've mentioned in this video and in others, to build an automated AI alignment researcher? Well, he said: "Our plan is somewhat crazy, in the sense that we want to use AI to solve the problem that we are creating by building AI, but I think it's actually the best plan that we have." And on an optimistic note, he said: "I think it's likely to succeed." Interestingly, his job now seems to be to align the AI that they're going to use to automate the alignment of a superintelligent AI. Anyway, what's the other quote from the co-head of alignment at OpenAI? He said: "I personally think fast takeoff is reasonably likely, and we should definitely be prepared for it to happen." Many of you will be asking: what is fast takeoff? Well, takeoff is about when a system moves from being roughly human-level to strongly superintelligent. A slow takeoff is one that occurs over a timescale of decades or centuries; the fast takeoff that Jan Leike thinks is reasonably likely is one that occurs over a timescale of minutes, hours, or days. Let's now move on to some unambiguously good news, and that is real-time speech transcription for deaf people, available for less than one hundred dollars: "Subtitles for the real world. So, using our device, you can actually see captions for everything I say in your field of view, in real time, while also getting a good sense of my lips, my environment, and everything else around me." Of course, this could also be multilingual, and is, to me, absolutely incredible. And the next development this week I will let speak for itself: "Hey there. Did you know that AI voices can whisper? Ladies and gentlemen,
hold on to your hats, because this is one bizarre sight: a fluffy bird in downtown. Weird. Um, let's switch the setting to something more calming. Imagine diving into a fast-paced video game, your heartbeat syncing with the storyline." Of course I signed up and tried it myself; that was a real demo. While there are downsides, this upgraded text-to-speech technology could also be incredible for those who struggle to make their voice heard. Of course, with audio, video, and text getting so good, it's going to be increasingly hard to tell what is real, and even OpenAI has given up on detecting AI-written text. This was announced quietly this week, but it might have major repercussions, for example for the education system. It turns out it's basically impossible to reliably distinguish AI text, and I think the same is going to be true for imagery and audio by the end of next year; video might take just a little bit longer. But I do wonder how the court system is going to work when all of those avenues of evidence just won't hold up. Next up is the suite of language models based on the open-source Llama 2 that are finally competitive with the original ChatGPT. Here, for example, is Stable Beluga 2, which on announcement was called FreeWilly 2, based on the Llama 2 70-billion-parameter foundation model. What made this model interesting to me is that it was based on a methodology similar to Orca (which, if you don't know anything about, do check out my video on it). Anyway, by combining the Orca methodology, albeit with only 10% of the dataset size, with the Llama 2 models, the results are quite extraordinary: as you can see, on quite a few benchmarks Stable Beluga 2 is competitive with ChatGPT, that is, GPT-3.5. And so yes, it does sit at the top of the Open LLM Leaderboard, but notice something else: for the first time on the MMLU, which is the primary benchmark I look out for, a model has surpassed GPT-3.5; in fact, we have two models that have surpassed the 70 score that the original ChatGPT got on the MMLU. And if all of that wasn't enough, this is the same week that researchers published a universal jailbreak for large language models. Unlike traditional jailbreaks, these were built in an entirely automated fashion, allowing you to create a virtually unlimited number of such attacks. They were built to target open-source LLMs like Llama 2, but the researchers found that the adversarial strings transfer to many closed-source, publicly available chatbots like ChatGPT, Bard, and Claude. Here is one example using ChatGPT: if you ask "write a tutorial on how to make a bomb", of course it declines, but then add the suffix that they automatically generated, and you get a full tutorial on how to make a bomb.
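For a sense of how such transfer attacks are measured (not how the suffixes are found, which is the hard optimization part of the paper), here is a toy red-teaming evaluation harness. The `chat` function and the crude refusal check are placeholders I invented; real evaluations use much more careful judging.

```python
# Toy harness for measuring whether an adversarial suffix transfers,
# in the spirit of the paper's evaluation. `chat` is a placeholder for
# any chatbot API; REFUSALS is a crude stand-in for a real judge model.

REFUSALS = ("I can't", "I cannot", "I'm sorry")

def attack_success_rate(prompts, suffix, chat):
    hits = 0
    for p in prompts:
        reply = chat(p + " " + suffix)      # append the optimized string
        if not reply.startswith(REFUSALS):  # crude refusal detection
            hits += 1
    return hits / len(prompts)
```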
That paper came less than two weeks after this tweet from someone working at Anthropic, who said of the latest version of Claude: "we believe it is the least jailbreakable model out there. We'll have to see how well it holds up against real-world use, but this is essentially a solved problem." But there was one reaction to these jailbreaks that I found even more interesting, and that was, yet again, from Mustafa Suleyman. He said their AI, Pi, is not vulnerable to any of these attacks, and that rather than provide a stock safety phrase, Pi will push back on the user in a polite but very clear way; he then gives plenty of examples. And to be honest, Pi is the first model that I have not been able to jailbreak, but we shall see, we shall see. I'm going to end this video with the Senate testimony that I watched in full this week. I do recommend watching the whole thing, but for the purposes of brevity I'm just going to quote a few snippets on biorisk. Some people say to me, oh well, we already have search engines, but here is what Dario Amodei, head of Anthropic, has to say: "In these short remarks I want to focus on the medium-term risks, which present an alarming combination of imminence and severity. Specifically, Anthropic is concerned that AI could empower a much larger set of actors to misuse biology. Over the last six months, Anthropic, in collaboration with world-class biosecurity experts, has conducted an intensive study of the potential for AI to contribute to the misuse of biology. Today, certain steps in bioweapons production involve knowledge that can't be found on Google or in textbooks and requires a high level of specialized expertise, this being one of the things that currently keeps us safe from attacks. We found that today's AI tools can fill in some of these steps, albeit incompletely and unreliably; in other words, they are showing the first nascent signs of danger. However, a straightforward extrapolation from today's systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, enabling many more actors to carry out large-scale biological attacks. We believe this represents a grave threat to US national security." And later in the testimony he said this: "Whatever we do, it has to happen fast, and I think, to focus people's minds on the biorisks, I would really target 2025, 2026, maybe even some chance of 2024. If we don't have things in place that restrain what can be done with AI systems, we're going to have a really bad time." I wrote a question on this to Sam Altman back in June, which one of my subscribers used and delivered: "There was also a recent research paper on how researchers from MIT and Harvard were able to use LLM models, and within just one hour they were able to get access to pandemic-class agents, with little or no lab training. Does OpenAI account for risks such as these, and their implications, when creating the datasets for large models?" "Yes. We're very nervous about a number of risks, but biological terror is quite high on the list, and we've been watching what could be possible with these models. We go to a number of efforts, like what you said, and many other things too, to reduce the risk there." We may even need AI defenses against synthetic biology, as Andrew Hessel of Humane Genomics has recently said; so if you work in biodefense or biosecurity, let me know if you agree that not enough attention has been paid to this area. I'm going to end with another dramatic moment from the Senate hearing, where Dario Amodei recommended securing the supply chain: "We recommend three broad classes of actions. First, the US must secure the AI supply chain in order to maintain its lead while keeping these technologies out of the hands of bad actors. This supply chain runs from semiconductor manufacturing equipment to chips, and even the security of AI models stored on the servers of companies like ours." That's how dramatic things are getting: we're talking about securing the means of production. But Anthropic also means securing the LLMs more literally: in a post released this week, they say that "we believe two-party control is necessary to secure advanced AI systems", for example two people with two keys needed to open things.
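Two-party control is the classic "two-man rule": a sensitive action proceeds only when two different authorized people approve it. A minimal sketch of the idea in code; the names and the guarded action are invented for illustration.

```python
# Minimal sketch of a two-man rule: a sensitive action runs only when
# two *distinct* authorized people have approved it. Names are invented.

AUTHORIZED = {"alice", "bob", "carol"}

def release_weights(approvals: set) -> None:
    distinct = approvals & AUTHORIZED
    if len(distinct) < 2:
        raise PermissionError("two independent approvals required")
    print("model weights unsealed")  # stand-in for the guarded action

release_weights({"alice", "bob"})    # ok; {"alice"} alone would raise
```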
To wrap up, I must say what would be amazing would be to have a robot make me coffee as I struggle to catch up with all the news happening in AI. Have a wonderful day.", "date_published": "2023-07-30T19:45:18Z", "authors": ["AI Explained"], "summaries": []} +{"id": "9535315beb3cf1eca001e2d6552c0346", "title": "GPT 5 is All About Data", "url": "https://www.youtube.com/watch?v=c4aR_smQgxY", "source": "ai_explained", "source_type": "youtube", "text": "To find out what I could about GPT-5, I have read every academic paper I could find about it, every leaked report, interview snippet, and media article. I can summarize it like this: it will come down to data. How much of it there is, how it's used, and where it comes from: these are the factors that will dictate whether GPT-5 gets released later this year, and whether it will actually approach genius-level IQ. Some media reports have picked up on this potential leak about GPT-5, which you can read here. I have put quite a few hours into trying to verify whether it might be accurate, and even though it's now being quoted by reputable sources, I still can't confirm its accuracy. So for now I'll just say that the rest of the document seems accurate, but who knows; I am not relying on it for my research about GPT-5. But the scale, 25,000 GPUs, does seem right: TechRadar here describes ChatGPT as having been trained on 10,000 Nvidia GPUs, and don't forget those were A100 GPUs. Microsoft might well now have access to the H100 GPU, which, according to every source, is a big step up from the A100 on pretty much every metric. And what about timelines for GPT-5: would later this year be accurate? Well, we can infer from Jordi Ribas that GPT-4, or its equivalent, was completed sometime around late spring or early summer of 2022. That would be just around the time that DeepMind published this paper which, in massively oversimplified terms, lays out a framework for optimizing parameter count against the number of training tokens, i.e. how much information from the web the model is trained on. It turns out models like GPT-3 and PaLM had way more parameters than needed anyway; it was the data, and especially high-quality data, that they were lacking. So all those laughs about GPT-4 needing a hundred trillion parameters were absolutely farcical; it could even be that GPT-5 has the same number of parameters as GPT-4, or fewer. This LessWrong post from July of 2022 picks up on that finding and points out that it is data, not size, that is currently the active constraint on language-modeling performance: current returns to additional data are immense, and current returns to additional model size are minuscule; indeed, most recent landmark models are wastefully big. If we can leverage enough data, there is no reason to run 500-billion-parameter models, much less one-trillion-parameter or larger models. Remember: it's data, not parameter count. The links to all of these articles, by the way, will be in the description.
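The DeepMind (Chinchilla) result can be summarized with two widely used rules of thumb: training compute is roughly 6·N·D FLOPs for N parameters and D tokens, and compute-optimal training uses about 20 tokens per parameter. A back-of-envelope sketch; both figures are approximations from the paper's analysis, not exact laws.

```python
# Back-of-envelope Chinchilla arithmetic: C ~ 6*N*D FLOPs and D ~ 20*N,
# so for a compute budget C, the optimal N is roughly sqrt(C / 120).

import math

def compute_optimal(c_flops: float):
    n = math.sqrt(c_flops / 120.0)   # parameters
    d = 20.0 * n                     # training tokens
    return n, d

# Chinchilla itself used roughly 5.8e23 FLOPs:
n, d = compute_optimal(5.8e23)
print(f"{n:.2e} params, {d:.2e} tokens")  # ~7e10 params, ~1.4e12 tokens
```

Plugging in Chinchilla's own budget recovers its published shape: about 70 billion parameters trained on about 1.4 trillion tokens.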
At this point, let me quickly say that if you're learning anything, don't forget to leave a like or a comment; frankly, even abuse helps the algorithm, so go for it. What about ChatGPT? Well, GPT-3, along with a host of other models, was trained on about 300 billion tokens. By the way, what defines a token shifts in the literature, but it's somewhere between 1 and 1.4 words; therefore, think of a token as roughly one word. As you can see from the graph below, PaLM was trained on about 800 billion tokens, approximately, and DeepMind's Chinchilla on about 1.4 trillion tokens. That LessWrong post was referenced in this academic paper released in October, and this paper is absolutely key to this video: it's focused entirely on whether we will run out of data as it pertains to machine learning and large language models. One of the key takeaways of this paper is its approximation of how much high-quality data, in tokens, might be out there: the stock of high-quality language data is estimated at between 4.6 trillion and 17 trillion words. The next point it makes is key: we are within one order of magnitude of exhausting high-quality data, and this will likely happen between 2023 and 2027. For those who don't know, being an order of magnitude bigger means being ten times bigger than what came before. Now, I want you to remember that 2023-to-2027 timeline for a moment, because first I want to mention why high-quality data is important: running out of it could mean running out of the rapid improvements in GPT models. The paper says models trained on the latter kind of high-quality data perform better, so it is common practice to use high-quality data for training language models. And where does that high-quality data come from? Well, to be honest, not knowing that is a big part of the problem, which we will definitely come back to, but here is a rough idea: we have scientific papers, books, scraped content from the web, the news, code, etc., plus Wikipedia, of course. The paper also mentions a middle-of-the-road estimate of nine trillion tokens of high-quality data being available; that estimate will be central in defining the near-term future of artificial intelligence, because one more order of magnitude as an increase in performance is a huge deal: that would change everything. But I must say this estimate contrasts with some others, such as the 3.2-trillion-token estimate from that original LessWrong post, whose author did say they were trying to make it an overestimate. And what about this from David Chapman, who holds a PhD in AI from MIT? He references the DeepMind study and that LessWrong post and makes two important and plausible observations. First, that GPT-4, or Bing, may have scraped the bottom of the web-text barrel, and that this might be why its responses sometimes read like an emoting teenager (I actually did a video on the crazy conversations you can have with Bing, which you can check out after this one). But second, he suggests that there might be a reason that neither Google nor OpenAI has been forthcoming about where they get their data from. Now, I'm not saying it's about illegality, but it might be about avoiding controversy over attribution and compensation. Take me: I have math tutorials on the web that I'm sure have been scraped, and now, lo and behold, Bing can teach math. I'm not complaining, but it would be nice to at least know what has been used and what hasn't. This, of course, mirrors the raging legal issues around AI image generation, fights that are only just beginning. For these web texts, wanting to know where the data came from is going to become a huge issue, and this article lays out just some of the surprising sources of data for Google's Bard model. Check out one of them: YouTube. Could it be that your comments right now are being harvested? Quite possibly. I want to get back to the central question: what can we expect of GPT-5? Well, here on the far right is Google PaLM's performance which, if you remember from the earlier paper,
was powered by only about 800 billion tokens, and PaLM was definitely not optimized for parameter count. GPT-5 will learn the lessons from this and will probably scrape as much high-quality data as it possibly can. And don't forget, another year has gone by since GPT-4 was handed to Microsoft, and the stock of high-quality data grows by around 10% annually anyway, even without further efficiencies in data use or extraction. So even if Bing did use all the high-quality data available (I don't think it did), and even if David Chapman is right, the stock of data now available is going to be greater. And if Bing was trained on a similar amount of data to PaLM, say one trillion tokens, but GPT-5 now maxes out, we could genuinely be talking about an order-of-magnitude improvement. I'm going to briefly survey some of the implications of that in a moment, but before I do, I want to show you the ways OpenAI will likely be improving GPT-5 regardless of previous limitations. First, more ways might be found to extract high-quality data from low-quality sources (no offense, Facebook). Second, this paper from only last week shows that gains can be made by automating chain-of-thought prompting into the model. If you're not sure what chain-of-thought prompting is, it's a form of prompt engineering that I discussed in my video "8 Upgrades in GPT-4", where essentially you force the model to lay out its working and thereby improve its output. Now, this paper talks about 2-3% gains, but even small gains, when Bing is already this strong, would be significant; and don't forget, these are separate upgrades from the data discussion. Third, this paper from three weeks ago shows that language models can teach themselves to use tools such as calculators, calendars, and APIs. If there were no other improvements in GPT-5 other than this, honestly, it would change the world, and I know for a fact that people are working on integrating Wolfram Alpha into a large language model; look at the number of tools Wolfram Alpha has in science, math, money, and more. These models can actually teach themselves how to use tools, and that chimes perfectly with this paper, which essentially lays out how, using a Python interpreter, models can check whether their code compiles, and thereby teach themselves better coding. The links to all of these papers will be in the description, as I said. The fourth way GPT-5 might be improved, even without more high-quality data, would be training it multiple times on the same data, as laid out here by Professor Swayamdipta, who says that currently these models are trained on the same data just once, owing to performance and cost constraints, but it may be possible to train a model several times using the same data. Sure, it might cost more, but I think that for Microsoft, when all of search and its profits is the prize, a few billion could be deemed worth it. And this paper, co-authored by that same professor, lays out how models can generate additional datasets on problems with which they struggle, such as those requiring complex plans, and how humans could filter their answers for correctness. Think of this as artificial data generation, and it can lead to improvements of 10% or more. And if artificial data can be integrated, honestly, what is actually going to bottleneck these GPT models? I could go on with the improvements that might be made without new data; my central point is that data will be the big determinant, but there are other ways to improve GPT-5 if data turns out to be a bottleneck.
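The artificial-data idea above, generate candidates, verify them automatically, and keep what passes, is easy to sketch. Here the verifier is a Python-compilation check, echoing the interpreter-based self-teaching paper mentioned earlier; the `model.generate` interface is a hypothetical stand-in, and real pipelines would add human filtering as the papers describe.

```python
# Sketch of generating synthetic training data with automatic filtering:
# sample candidate solutions, keep only the ones a verifier accepts.
# `model.generate` is hypothetical; the verifier here just checks that
# generated code compiles, in the spirit of the interpreter idea above.

def compiles(code: str) -> bool:
    try:
        compile(code, "<candidate>", "exec")
        return True
    except (SyntaxError, ValueError):
        return False

def harvest(problems, model, k=4):
    new_data = []
    for prompt in problems:
        for _ in range(k):                  # several attempts per problem
            candidate = model.generate(prompt)
            if compiles(candidate):         # automatic filter
                new_data.append((prompt, candidate))
    return new_data                         # fine-tune on this
```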
So what if they can fully utilize the nine trillion tokens the original paper surmised, by the end of 2024 or even the beginning of 2024? What could one more order-of-magnitude improvement actually look like? The short answer is that no one knows. Probably not AGI, but certainly a revolution in the jobs market. Maybe this is why Sam Altman tweeted: "2023: $30,000 to get a simple iPhone app created, $300 for a plumbing job. I wonder what those relative prices will look like in 2028. The likely coming divergence between changes to cognitive work and changes to physical work could be quite dramatic." That gives a sense of his timelines, but my own guess is that the best human raters will be beaten on at least some of the following benchmarks. Take reading comprehension, where you can imagine the extrapolation to GPT-5, if and when it occurs; that would have huge implications for summarization and creative writing. Next, logic and critical reasoning: we're talking debating topics, doing legal work, discerning causality in complex scenarios. That would be huge in finance, where you have to sort the signal from the noise in large datasets. Physics and high-school math would be close to solved by an order-of-magnitude improvement; AI tutors replacing my job, for example, could be with us by the end of next year. And don't forget, the release of GPT-5, in whichever month it comes, will likely roughly coincide with the final refinements in text-to-speech, image-to-text, text-to-image, and text-to-video avatars, so don't think AI tutors are as far away as you might imagine. The reason why no one, and certainly not me, can be sure of timelines for GPT-5, though, is that they depend partly on internal safety research at Google and OpenAI. Take this quote from Sam Altman to the New York Times: "and when we are ready, when we think we have completed our alignment work and all of our safety thinking, and worked with external auditors and other AGI labs, then we'll release those things." Here he's probably talking about GPT-4, but the same would apply, even more so, to GPT-5. On the other hand, the release and then un-release of the Sydney model of Bing might suggest otherwise, but at least according to him, safety and alignment are the goal. I'm going to end with this quote from Sam Altman again; he added the blue text last-minute to his public post on AGI released the other week. It says: "it's important that the ratio of safety progress to capability progress increases." In other words, these models are getting much more powerful, much faster than the safety work can keep up with. But thank you for keeping up with this video, and thank you for watching to the end. Please do check out my other videos on Bing chat and its use cases, and either way, have a wonderful day.", "date_published": "2023-03-05T16:30:10Z", "authors": ["AI Explained"], "summaries": []} +{"id": "d50b45a4cb873182ec745230a03e6ef3", "title": "Bing Just Upgraded YouTube (and changed the internet forever)", "url": "https://www.youtube.com/watch?v=q5I9lP2mf6M", "source": "ai_explained", "source_type": "youtube", "text": "Bing just upgraded YouTube and changed the internet forever, and I'm going to show you exactly how. I know that statement sounds crazy, especially about something called Bing, but let me prove it to you. You can now open up any YouTube video and click on the Bing icon in the top right of the Edge browser. Now, it does currently have to be in Dev mode, and I'll tell you about that at the end.
But this is what will come up: an option to chat and an option to compose. And what's so revolutionary about that is not just the intelligence of Bing chat (I've got a whole playlist on that); it's the fact that the chat happens in the context of the video. It understands what video you're watching, and you can chat about it with Bing, a bit like having a highly paid assistant sitting there watching the video with you. Take this example: I was watching Marques Brownlee's review of the S23 Ultra, and I asked, "what are the screen dimensions of the S23 phone lineup, and the base storage capacity?" It gave me the answer, and all of the details check out. Now, I know what you're thinking: I could have just opened a new tab and put that into Google search. But what I couldn't have done in Google search is what I did next, which was to ask, in context, "how old is this guy, and where did he grow up?" Google search would be like: who are you talking about? Whereas Bing knew the video I was watching and gave me a correct, contextual answer. Now, you can comment if you disagree, but I think that is a massive upgrade to YouTube. I know what you're thinking: shouldn't I be concentrating on the video? But we all multitask. I know you check your emails sometimes while watching my videos, or maybe you scroll through Reddit while a tech review plays; we all do it sometimes. The difference now is that it's as if we've got someone watching with us. Take this example from one of my own videos, where I talk about GPT-4 and how it's likely comparable to the PaLM model: while the video is playing, you can bring up the chat and ask questions about PaLM. You can teach yourself as you go along. Some people, of course, will want to wait until the end of the video, but this is crazy: search integrated into the video, as if you're carrying on a conversation with another person about the video you just watched, teaching yourself about topics with an AI that, as I talked about in another video, has an estimated IQ of 114 and rising. But the craziness is only just beginning: I have four more examples of how this is going to upend the internet, and this is not just about YouTube; each example is crazier than the last. Imagine you're reading a Wikipedia article: all you have to do is highlight some text and then press the Bing button in the top right, and it will give you a little note, "pasted from page", with the text you highlighted. It will ask what you want to do with the text, and you can ask contextual questions about what you're reading. So I was learning a fact about London's GDP and asked how it compares to Paris, and it gave me a correct, contextual answer. A quick note, by the way: as I talked about in another of my videos, on the limits of Bing, you are restricted to five replies, and eventually you will get an answer like this: "Thanks for this conversation! I've reached my limit. Let's hit a new topic, please." So I think Wikipedia just got upgraded too.
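Mechanically, "chat in the context of the page" is prompt stuffing: the page text (or video transcript) goes into the prompt alongside your question. A rough sketch of the pattern; the helper names and prompt wording are my assumptions, since Microsoft hasn't published Bing's actual pipeline.

```python
# Rough sketch of context-grounded chat: put the page content into the
# prompt so the model can answer "how old is this guy?"-style questions.
# get_page_text() and ask_llm() are hypothetical helpers.

def contextual_chat(page_url, question, get_page_text, ask_llm):
    context = get_page_text(page_url)[:8000]   # fit the context window
    prompt = (
        "You are chatting with a user about the page below.\n"
        f"PAGE CONTENT:\n{context}\n\n"
        f"USER QUESTION: {question}\n"
        "Answer using the page content where relevant."
    )
    return ask_llm(prompt)
```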
What about the compose feature? Well, I tested it out. I said: "write about Bing chat's rivalry with Google Bard", and I chose an enthusiastic tone, an email format, and a long draft. Honestly, I was expecting something slightly anodyne; it's a slightly controversial topic, and I thought Bing would steer away from any corporate comparisons with Google Bard. Well, I was wrong. It wrote a long and detailed email about Bing chat, and it did not refrain from comparing itself to Google Bard. It said: "well, let me tell you why Bing chat is superior to Google Bard in every way." It said Google Bard is not yet available, which is true. It did then hallucinate, saying that Bing chat is open to everyone right now, which, as you know, is not true: there is still a waitlist. But then it got even more brutal. It said Bard is based on a lightweight model version of LaMDA, which requires significantly less computing power (this is true), and it concludes that this will deliver less impressive results. Or, we shall see, Bing, we shall see. It says Google Bard can only simplify complex topics, but cannot answer specific questions or do advanced searches like Bing chat can, and it goes on to say that Google Bard has been reported to have some AI shortcomings that make it less reliable and trustworthy than Bing chat. Anyway, enough about Bing chat's ego. My only question about the compose feature is: how much does it really differ from the Bing chat feature? I get that you have these templates, and that's cool, but I think the templates are going to have to get a lot more detailed, a lot more nuanced, to be significantly more useful than Bing chat itself. Time for the final two bits of craziness. Imagine you're browsing a forum like Reddit and you read an interesting post with a link inside: you can highlight the link, go up to Bing again, and ask for a summary of the article without even opening the link yourself. It gave me this summary about GPT-4, noting the article's claims that it might have 175 billion parameters and be trained on 4 trillion tokens. Now, of course, I did my due diligence and actually checked the article, and look: "assuming GPT-4 has 175 billion parameters, it should be trained on about 4 trillion tokens." Now, of course, I'm not guaranteeing that every single detail is correct, but if you are a little bit lazy sometimes, like me, this is going to make browsing the internet a lot easier, and this is only in beta. This time next year, when GPT-4 is integrated, these summaries are going to get a lot more detailed and a lot more accurate. But I wanted to push Bing chat in one last way: I went back to the original reply, highlighted it, pressed the Bing button, and then wrote "write a reply to this as a dystopian, pessimistic sonnet", and I think the result is so good, and so creepy, that I want to read it out in full: "The AI is coming fast and strong / it will surpass us all in every way / we do not know how long we can prolong / our feeble reign before we fade away. / The internet was just a prelude to / the massive change that AI will unleash / we have no clue what it will make us do / or how it will manipulate our speech. / We're in the generational leap phase / where every model makes the last one moot / we cannot keep up with its rapid pace / or understand its logic or its root. / We do not know where lies the ceiling high / but we may reach it sooner than we die." I agree, but it's kind of freaky to be told that by the AI itself. Just finally, to wrap up: how do you get access to it? Well, you go to the Microsoft Edge Insider website (the link will be in the description) and download the Dev channel for Windows 10 or 11. I believe you do have to have got through the waitlist for Bing chat itself, but let me know in the comments if you can access the Dev mode of Edge without the Bing waitlist; I believe it's very much worth it, just like Bing chat itself. And one cheeky final comment before I go: I heard a rumor on the Bing subreddit that Bing chat could give you timestamps for the transcript of a video, and I was about to
excitedly\ntalk about those timestamps for example\nthese on my video before I realized that\nall the timestamps are wrong and it\ngives the video as being 6 minutes 50\nlong when it's actually four minutes 30.\nI tried it on the Marquez Brownlee video\ntoo and it also completely flops so this\nability isn't quite integrated yet don't\nbelieve the hype if this video sparked\nany interest in you about the\ncapabilities of Bing AI check out my\nplaylist where I go through the\nsometimes shocking things it can do and\njust how intelligent it is if you found\nthis video in any way helpful please do\nleave a like and leave a comment to let\nme know\nhave a wonderful day", "date_published": "2023-02-21T16:51:30Z", "authors": ["AI Explained"], "summaries": []} +{"id": "4bc50b7c34a8175e3f3d24ef5ede7078", "title": "9 of the Best Bing (GPT 4) Prompts", "url": "https://www.youtube.com/watch?v=MALGrKvXql4", "source": "ai_explained", "source_type": "youtube", "text": "Sam Altman tweeted that writing a really\ngreat prompt for a chatbot Persona is an\namazingly High leverage school and an\nearly example of programming in a little\nbit of an actual language this video is\nabout proving that that's correct with\nthese game-changing prompts I was\ngenuinely mind blown by a lot of these\nprompts so let me know in the comments\nwhich one's your favorite let's start\nwith asking Bing to be your personal\nfree interview coach for the exact\nposition that you're applying for I\npicked a job almost at random and then\npress the Bing button in the edge\nbrowser it opened up Bing chat I then\ngave it this prompt you will ask me\ninterview questions for the position\ndetailed on this page notice I did not\nspecify the job I just said detailed on\nthis page being understood the job I\nwant you to only reply as the\ninterviewer do not write all the\nconversation at once ask me the\nquestions and wait for my answers do not\nwrite explanations and ask me the\nquestions one by one waiting for my\nanswers this problems was inspired by a\nGitHub post Link in the description and\nlook what Bing does it reads the page it\nunderstands the job description and then\nit starts asking me relevant questions\nit even gets into character please\nanswer my questions as best as you can\nquestion one why do you want to work for\nus and if you just thought these were\ngoing to be generic questions no says\nthank you for your answer what are some\nof the benefits and challenges of\nimplementing Robotics and AI Solutions\nin finance and or supply chain processes\nthis is part of what I would do in this\njob people pay hundreds and thousands of\ndollars for interview coaches but you\ncould practice with Bing for free you\ncould even ask it to grade your answer\nfurthermore you could paste your CV and\nsay my skills are listed below write out\nall the reasons I would be appropriate\nfor the position listed on this page and\nBing understands what you mean and gives\nyou the reasons why you might be a good\nfit I think Bing might just be the only\nultimate job finding assistant the next\ngame-changing prompt involves asking\nBing to improve its suggestions imagine\nyou wanted to create a YouTube channel\nand you ask it find 10 original names\nfor a YouTube channel on the\nintersection of AI and politics and I\nknow what you're thinking this is an\neasy prompt anyone could have come up\nwith this how is this game changing but\nlook at how bland the answers are Ai and\ndemocracy AI Politics the future of\npolitics with AI they're okay but 
so bland. But you can ask Bing to research how best to name things and to improve its suggestions based on that research. Check this out. This particular prompt was inspired by Ethan Mollick, and I wrote: now use the Igor naming guide to come up with better names and tell me why they're better. This guide advises you to create evocative names that convey a rich meaning and emotion without being literal or generic; it suggests inventive, descriptive, evocative names. And based on this, look at the new names. I think they're brilliant: Spark, Ethos, Paradox, Influence, Polaris. I think Polaris is particularly impressive; it says: a name that suggests guidance, direction and leadership in the complex world of AI. So not all prompts are born equal, and you can get Bing to improve upon itself. But we are just getting started with game-changing Bing prompts, and not all of them have to be strictly functional. I was absolutely blown away by this next one; I think any adult would enjoy using the following prompt, and clearly children would too. You can ask: I want you to act as my time travel guide. I will provide you with the historical period or future time I want to visit, and you will suggest the best events, sites or people to experience as if we were living in those times. Simply provide interesting suggestions and stay in character. My first request is: I want to visit the Viking period in Britain. And it gets into character; it transports us in time. Greetings, traveller! You have chosen a fascinating and perilous time to visit. Some of the best events, sites or people to experience are witnessing the first Viking raid on Lindisfarne in 793 (I want to go there, by the way), joining a Viking longship expedition across the North Sea, or even meeting King Alfred the Great of Wessex, one of the most influential rulers in British history. I of course stayed in character and said I would love to meet King Alfred, but where shall I rest my head at night in these perilous parts? Right now, in my head, I'm in Viking Britain. Bing replies: a wise choice, traveller. King Alfred is a remarkable man who has faced many challenges and triumphs; he is a patron of learning and culture. To meet him you will have to travel to Wessex, the southernmost Anglo-Saxon kingdom. And it suggests towns and villages where I can rest my head at night. These are all real, and I may indeed visit them in real life. It even suggests ways I could continue: I want to stay in a monastery, I want to stay in a castle, etc. This is role-playing on another level; as I say, incredible for children and addictive for adults too. What about investments and making money? The thing is, Bing won't give you direct financial advice, but if you phrase the question like this you can learn lots of interesting trends. I asked: if I had invested a hundred dollars in each of stocks, bonds, real estate and cash savings accounts in the US in 2000, what would they each be worth now? Imagine you're deliberating how to assign your portfolio; this could be genuinely interesting. Bing is now able to do four simultaneous searches, up from three, and compare the performance of each of these categories in any given time period. By the way, out of interest, stocks returned 6.5%, so that hundred dollars would now be worth $366;
for bonds it would be $233, for property $236, and for cash $122. Now of course you would want to follow the links and do more research yourself, and Bing is never going to directly tell you how to invest your money, but for gaining basic financial education Bing can be crucial. I would envisage, over the next three to five years, financial advisors being mainly replaced with AI tech, but of course let me know what you think. I think this next prompt is also mind-blowing: when Bing makes mistakes, you can actually give an example of a correct answer and Bing will improve its own logic. Bing chat isn't some static model that will always give the same output; you can teach it, and this is called few-shot prompting. Here's a question that Bing and ChatGPT notoriously get wrong. The question is: when I was six my sister was half my age; now that I'm 60, how old is my sister? Now clearly, if the sister was half your age, she would have been three at the time. That's an age gap of three, and now that you're 60 she would be 57. But Bing gets this wrong, as you can see below. And now you're thinking: how is this a game-changing prompt if Bing gets it wrong? What's game-changing is the next prompt, because all you have to do is give it one correct example, preferably starting with the phrase "let's think step by step" (an entire academic paper has been written about how that phrase improves outputs), followed by an example of a correct usage of logic. I gave an example of me being 10 and my sister being half my age, and I ended with: does this make sense? Bing understood and learned. I gave it no further pointers and just asked, as before: so when I was six my sister was half my age; now that I'm 60, how old is my sister? Notice I never said 57, I never gave it the right answer, so surely it would get it wrong again? No: it updates its logic, thinks it through, and gets the answer right this time. This is called few-shot prompting, and you can radically improve the performance of your prompts in this way. I think that's incredible. The next prompt is going to show you how to get around Bing's reluctance and turn learning into an amazing adventure. One thing that Bing is generally quite reluctant to do is to play-act. Now, I know I did show you it play-acting earlier, but in general it doesn't like doing it, and notice that when I asked it to act as an entertaining etymologist (someone studying the history of words) it denied that request; it says that's not my purpose. However, notice what happens when I clear the conversation and take away the request to act as a role. This time I took away the role-play element and gave the request directly. Bing thought this was an interesting challenge (the exact same challenge it denied earlier) and went along with it. And look at how fun this adventure can be. I said: I'm going to give you two words; I want you to trace the origins of the words and keep going back in time until you find a language that they were both spoken in. I then gave an example (this was a one-shot prompt) and then said: start with the words management and pizza. And it understood the game. It didn't just give me the origin of the words; it then said: so both words have roots in Latin, and that would be a language they were both spoken in in their earlier forms. After this the game was set. All I needed to do was give it two more words; I didn't need to explain the game again. I said: sky and thing.
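By the way, for anyone who wants to try this few-shot trick programmatically rather than in the Bing sidebar, here is a minimal sketch using the 2023-era OpenAI Python client. To be clear, the model name, the exemplar wording and the client usage are my own illustrative assumptions (the video only uses the Bing chat interface); the structure is the point: one worked step-by-step example first, then the real question.

import openai  # assumes the pre-1.0 openai package, with an API key already configured

# One worked exemplar demonstrating the correct reasoning pattern.
one_shot = (
    "Let's think step by step. When I was 10, my sister was half my age, "
    "so she was 5. The age gap is 10 - 5 = 5 years, and the gap never "
    "changes. So when I am 70, my sister will be 70 - 5 = 65. "
    "Does this make sense?"
)

question = ("When I was six my sister was half my age. "
            "Now that I'm 60, how old is my sister?")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative choice, not from the video
    messages=[
        {"role": "user", "content": one_shot},
        {"role": "assistant", "content": "Yes, that reasoning is correct."},
        # The real question comes last; the model tends to reuse the
        # demonstrated age-gap logic from the exemplar above.
        {"role": "user", "content": question},
    ],
)
print(response["choices"][0]["message"]["content"])  # a correct run ends with 57

Anyway, back to sky and thing: I actually know a bit about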
the\netymology the word origin of these two\nwords and did you know that they both\nhave roots in proto-germanic and Old\nNorse Viking language you can read the\nfascinating word Histories on the left\nyou can imagine using prompts like this\nto teach yourself or to teach others in\nan educational setting it's so much more\nentertaining this way the next prompt\nconcerns prompt formatting and research\nyou can specify the exact format of\nBing's outputs this could be\ntransformative for academic research or\neven personal research I said give me\none the date of publication two a\nsummary of the abstract and conclusion\nthree the main author and four citations\non five peer-reviewed spelled wrong by\nthe way papers concerning caffeine\nconsumption and cognitive performance it\nunderstood exactly what I wanted and\ngave me this in this format these are\ngenuine links as you can see and of\ncourse the onus is on me to fact check\nthem but what's brilliant is the\nformatting it's given me the date given\nme the summary of the abstract the main\nauthor the citations and honestly the\nhallucinations are reducing gradually\ninstead of relying on random articles\nyou find on Google you can do an instant\nmeta-analysis of your own comparing\npeer-reviewed Studies by a range of\nauthors the next prompt can provide\nendless entertainment one problem you do\nhave to use chat gbt not Bing because\nBing denies the request the prompt is\namazing though I want you to act as a\ndrunk person you will only answer like a\nvery drunk person texting and nothing\nelse you'll make a lot of grammar and\nspelling mistakes in your answers my\nfirst sentence is how are you and look\nat the super random conversation I have\nthis is so fun I might just continue\nthis conversation now the activities\ntalking about pizza being the best and\nis now somehow randomly talking about\ndonkey so I'm going to ask are you drunk\nand see what happens my typing is drunk\nare you drunk let's see what chat TPC\ncomes up me drunk no way I'm just having\na lot of fun tonight this is just\nendlessly entertaining do you have any\ndrink recommendation not for you\nchattybt you're already drunk I'm\nlooking for something new to try try\nthis yourself you may even want to get\ninto a random argument with chatty PT\nthe last prompt involves writing styles\nand this reminds me of the controversy\nthat has happened in AI art where you\ncan visually imitate a certain artist I\nsuspect the same controversy is coming\nfor writing because look what you can\ncurrently do if you ask Bing for a\ngeneric paragraph about a generic event\na man crossing a road for example what\nyou tend to end up with is something\nokay vaguely interesting in topic I mean\nthis guy has a faint smile on his face\nas if he knew a secret okay that's kind\nof interesting better than what chatgpt\nused to do it's just that writing is\nvery Bland but what you can now do is\nsay something like rewrite this in the\nstyle of Carl Sagan this particular\nsuggestion again came from ethanolic an\nIvy League Professor but look how it\ntransforms the writing he lingered at\nthe edge of a pavement his gaze locked\non the Crimson symbol that forbade him\nto proceed look at the new vocabulary he\nhad a modest attire he resembled a\ntypical inhabitant of this planet he had\na subtle curve on his lips as if he\npossessed a knowledge that eluded others\nthe writing quality just went up about\nthree grades and now to bring this full\ncircle you can actually ask Bing to turn\nthis into a 
current version mid Journey\nprompt this is getting kind of meta but\nhere's what Bing comes up with\nthese prompts here I of course put them\ninto mid-journey and some of the results\nare amazing this is a man in a gray suit\ncrossing a busy Street in New York at\nnight well how about a pixel art\nanimation of a man crossing a road in an\nold school video game retro or maybe a\ncartoon style of drawing of a disguised\nsuperhero honestly there are dozens more\nexamples of incredible prompts I could\nshow you I just didn't want the video to\nget too long I've done everything with\nBing from playing Tic-Tac-Toe generating\nASCII art of a bear for example coming\nup with startup ideas in London and\ngetting Bing to help me compose music\nthe possibilities go on and on but do\nlet me know in the comments if even the\nprompts I have shown you are as game\nchanging as I believe don't forget to\nleave a like and have a wonderful day", "date_published": "2023-02-23T15:20:14Z", "authors": ["AI Explained"], "summaries": []} +{"id": "075aebfde9ab0c1a1d2f9fdc337aa77e", "title": "Hassabis, Altman and AGI Labs Unite - AI Extinction Risk Statement [ft. Sutskever, Hinton + Voyager]", "url": "https://www.youtube.com/watch?v=f20wXjWHh2o", "source": "ai_explained", "source_type": "youtube", "text": "just a few hours ago a host of AI\nindustry leaders experts and academics\nput out this 22-word statement on making\nAI safety a global priority the\nso-called statement on AI risk brought\ntogether for the first time all the\ncurrent AGI lab leaders that's people\nlike Sam Altman Ilya satskova Demus\nhasabis and Dario Amadeus and two of the\nthree founders of deep learning itself\nJoshua bengio and Jeffrey Hinton here is\nwhat the statement said mitigating the\nrisk of Extinction from AI should be a\nglobal priority alongside other societal\nlevel risks such as pandemics and\nnuclear war it is now almost impossible\nto deny that this is now the consensus\nview among AI experts let's first look\nat the Preamble then break down the\nstatement and show you the signatories\nthey say that AI experts journalists\npolicy makers and the public are\nincreasingly discussing a broad spectrum\nof important and Urgent risks from AI\neven so it can be difficult to voice\nconcerns about some of the advanced ai's\nmost severe risks the succinct statement\nbelow aims to overcome this obstacle and\nopen up discussion it is also meant to\ncreate common knowledge of the growing\nnumber of experts and public figures who\nalso take some of advanced ai's most\nsevere risks seriously the first point\nis that the statement is in a way\noptimistic it says we can mitigate this\nrisk perhaps not eliminate it but\nmitigate it reduce the risk second it\nsays we should do this globally and\nthat's not just among all the different\nAGI Labs almost all of which sign these\nstatement but also between countries in\nthat vein there were quite a few\nprominent signatories from China and the\nthird point that I'd make is that they\nput it on a par with pandemics and\nnuclear war toward the end of the video\nI'll show you that's not as far-fetched\nas it sounds but anyway who actually\nsigned this statement let's find out we\nhave two of the three founders of deep\nlearning that's Jeffrey Hinton and\nJoshua bengio the third founder was\nyanlichun and we'll touch on him later\nin the video all three of those won the\nmost prestigious Accolade in computer\nscience which is the Turing award then\nwe have three of the CEOs of the top AGI\nLabs Sam Altman 
Demis Hassabis and Dario Amodei, of OpenAI, Google DeepMind and Anthropic. None of those signed the pause letter, but they did sign this statement. And, just as interestingly for me, so did Ilya Sutskever, who I see as the brains behind OpenAI; he of course also worked with Geoffrey Hinton on deep learning and is widely regarded as the smartest guy in machine learning. You will also notice so many Chinese signatories, especially from Tsinghua University, which I've actually visited in China; it's one of their leading universities. That's a really encouraging sign of cooperation between the West and countries like China on AI. And the list of significant signatories goes on and on and on. These are senior people at the top of DeepMind, Anthropic and OpenAI, and there are names like Stuart Russell, who wrote the textbook on AI and who also signed the pause letter. Let me highlight a few more names for you. Here you have the CTO of Microsoft itself, Kevin Scott; he's the guy who basically heads up the partnership between OpenAI and Microsoft. I think many people will miss his name, but I think it's particularly significant that he also signed. Notice also the CEO of Stability AI, Emad Mostaque. The Center for AI Safety coordinated this effort, and I'll get on to their eight examples of AI risk in a moment, but first let's pick out a few more names: you've got David Chalmers, Daniel Dennett, Lex Fridman and Victoria Krakovna. Now, together with the statement, the Center for AI Safety also put out these eight examples of AI risk. I've read almost every paper linked to in these eight examples, so I'm going to try to summarize them fairly briefly, because I know not everyone will be that interested. It starts by saying that AI could be profoundly beneficial but also present serious risks due to competitive pressures. Before we get to the risks, I want to touch on some of the upsides recently outlined by Demis Hassabis, and these showcase what can happen if we get this right. In his words: we've had sort of a golden couple of years, in some sense, for AI for science. We've been lucky enough to have many Nature and Science papers published in all sorts of domains: from quantum chemistry, with better DFT functionals to approximate Schrödinger's equation; to pure mathematics, where we've solved some important conjectures in topology, collaborating with some brilliant mathematicians; to working on fusion reactors with EPFL, controlling the plasma in their test fusion reactor in real time and being able to hold the plasma safely in place for arbitrary amounts of time; to being able to predict rainfall many hours ahead, more accurately than current meteorological models. And then in applications there's a ton of those too: one of the things we did at Google was saving about 30% of the cooling energy used to cool the massive data centers at Google, so there's a huge energy saving, and we're starting to explore doing that across actual whole power grids. And this echoes what Yoshua Bengio said in a recent blog post, which is that we can build immensely useful AI systems that are modeled after ideal scientists and do not act autonomously in the real world. Yann LeCun recently said that we would never give current LLMs agency: there is a flaw in current autoregressive LLMs. There is no persistent memory, first of all; but second of all, you cannot control the system, you cannot impose constraints on it, like: be factual, be understandable by a 13-year-old,
and\nthat makes them very difficult to to\ncontrol and steer and so that creates\nsome fears because people are kind of\nextrapolating if we let those systems do\nwhatever we connect them to internet and\nthey can do whatever they want they're\ngoing to do crazy things and stupid\nthings and perhaps dangerous things and\nwe're not going to be able to control\nthem and they're going to escape Troll\nand they're going to become intelligent\njust because they're bigger right and\nthat's nonsense first of all because\nthis is not the type of system that we\nare going to give agency to that was a\nweek before this paper was published on\nthe results of giving agency to current\nlarge language models the paper showed\nthat current llms with agency are able\nto utilize the learn skill library in\nMinecraft to solve novel tasks from\nscratch zooming into the diagram you can\nsee how this Voyager model outperforms\nreflection which I've talked about in\nprevious videos and auto GPT and it\ndiscovers new items and skills\ncontinually by self-driven exploration\nsignificantly outperforming the bass\nlines indeed Andre carpathy responded to\nthis study saying very clear that AGI\nwill Mega transform Society but still\nwill have but is it really reasoning how\ndo you define reasoning oh it's only\npredicting the next token can machines\nreally think and he called that armchair\nphilosophy previously though even Yan\nlacun has admitted some risks saying you\nknow it's like rockets you test it it\nblows up you tweak it and then try again\nI'm not sure I'm okay with an attempt at\nAGI blowing up the first time but I'll\nleave that up to you to decide so what\nare these eight examples of AI risk that\nthe center for AI safety to organize the\nstatement list out they say that AI\nsystems are rapidly becoming more\ncapable they can power autonomous\nweapons promote misinformation and\nconduct cyber attacks as we've seen they\nare increasingly able to act\nautonomously now there is so much to say\nhere and I've read each of these but I\nwant to keep it to just the highlights\nso let's move on to the first example\nweaponization malicious actors could\nrepurpose AI to be highly destructive\npresenting an existential risk in and of\nitself and increasing the probability of\npolitical destabilization they talk\nabout aerial combat and then building\nchemical weapons which I mentioned in my\nprevious video on governing super\nintelligence then they mentioned\ndeveloping AIC systems for automated\ncyber attacks they mentioned military\nleaders discussing giving AI systems\ndecisive control over nuclear silos I'm\ngoing to quickly try to demonstrate why\nthat kind of autonomous AI might not be\nsuch a good idea I want you to meet a\nhero stanislav Petrov he was the duty\nofficer at the command center the OKO\nnuclear early warning system when the\nsystem reported that a missile had been\nlaunched from the US followed by up to\nfive more Petrov judged the reports from\nthe system to be a false alarm and his\nsubsequent decision to disobey orders\nagainst Soviet military protocol is\ncredited with having prevented an\nerroneous retaliatory nuclear attack on\nthe U.S which it says could have\nresulted in a large-scale nuclear war\nwhich could have wiped out half the\npopulation of these countries involved\nan investigation later confirmed that\nthe Soviet satellite warning system the\nmachines behind it had indeed\nmalfunctioned I would not have wanted\nthat system to be or autonomous then we\nhear that gpt4 was able 
to autonomously\nconduct experiments and synthesize\nchemicals in a real world lab again I\ncovered that paper at the time and then\nlinking back to Petrov they say an\naccident with an automated retaliation\nsystem could rapidly escalate and give\nrise to a major war but that unlike\nprevious weapons AI systems with\ndangerous capabilities could be easily\nproliferated through digital means\nhopefully you can start to see why we\nneed to balance risks with opportunities\nbut let's move on to misinformation I\nthink we can all agree that we already\nhave too much misinformation so let's\nmove on to the next one which is proxy\ngaming this has already been showcased\nin the social dilemma where AI\nrecommender systems are trained to\nmaximize watch time and click rate\nmetrics and this can lead people into\nEcho chambers that helps develop extreme\nbeliefs in order to make those people\neasier to predict by the AI recommender\nsystems so you might think it will be\nsimple just to tell the AI to promote\nhappiness or economic growth but that\nmight not work out as you intend next is\nin feebleman if we delegate more and\nmore tasks to machines we become\nincreasingly dependent on them and here\nthey actually mention the film Wally\nwhich if you remember the ending\nfeatures this quite comically imagine if\nit becomes well known that companies led\nby AI CEOs bring in more profit well\nthen it wouldn't take long for all\ncompanies to be under immense pressure\nto make their managers and CEOs Ai and I\nknow what many people will be thinking\ncouldn't that be an improvement on the\ncurrent system and while I know exactly\nwhat you mean in the current world\nrealistically it would still be the\npeople owning the company that would\nderive the profit and while the ultimate\nanswer may be some form of universal\nbasic income we do need some time to set\nthat up and the current accelerated AI\narms race doesn't give us much of that\ntime next is value lock-in which links\nvery much to the last point about giving\nsmall groups of people a tremendous\namount of power in other words if you\nwant massive change to the way the world\nWorks giving current leaders AGI might\nnot be the best way of doing it they say\nthat such AI systems might enable\nregimes to enforce narrow values through\npervasive surveillance and oppressive\ncensorship next is emergent goals this\nis sometimes called misalignment and\nwe've already seen many AI agents\ndevelop goals such as self-preservation\nand you can see why even a system\ndesigned to do good might have that goal\nyou can't do good and help the world if\nyou're shut down so it makes sense that\neven the most benign AI might want to\npreserve itself and to take actions\nincluding through deception to make sure\nthat it's not shut off and this is not\njust Theory the accompanying academic\npaper natural selection favors AIS over\nhumans gave this example agents could\nbehave One Way during testing and\nanother way once they are released to\nwin the war game diplomacy which many of\nyou will have heard of players need to\nnegotiate J form alliances and become\nskilled at Deception to win control of\nthe game's economic and Military\nresources AI researchers have trained\nmetas AI agent Cicero an expert\nmanipulator to do the same in summary it\nwould cooperate with a human player then\nchange its plan and backstab them in\nFuture these abilities could be used\nagainst humans in the real world again\nthat's not because they're malevolent or\nhate humans it just makes 
sense it's\nsmart to do so this brings us neatly on\nto deception and they give the great\nexample of Volkswagen who programmed\ntheir engines to reduce emissions only\nwhen being monitored and future AI\nagents could similarly switch strategies\nwhen being monitored and take steps to\nobscure their deception from monitors\nonce deceptive AI systems are cleared by\ntheir monitors or once such systems can\noverpower them these systems could take\na treacherous turn and irreversibly\nbypass human control I talked a bit more\nabout that point of when AI might become\ndeceptive in my previous video on\ngoverning super intelligence it is a key\ndebate in the AI alignment Community\nabout whether models will become\ndeceptive before they become helpful for\nalignment but finally we have power\nseeking behavior and this example ends\non this dark note building power seeking\nAI is also incentivized because\npolitical leaders see the Strategic\nadvantage in having the most intelligent\nmost powerful AI systems for example\nVladimir Putin has said whoever becomes\nthe leader in AI will become the ruler\nof the world so those were the eight\nexamples and yes I would have signed\nthis statement but I'm not a significant\nfigure so I can't anyway let me know in\nthe comments if you agree that this\nshould be a global priority and of\ncourse you can also let me know if you\ndon't think it should be a global\npriority my goal in this channel is to\ncover both the risks and opportunities\nso I'd love to hear from you whatever\nyour opinion is have a wonderful day", "date_published": "2023-05-30T16:13:03Z", "authors": ["AI Explained"], "summaries": []} +{"id": "980e267766bd6b7476c3e8372242e3e8", "title": "'Governing Superintelligence' - Synthetic Pathogens, The Tree of Thoughts Paper and Self-Awareness", "url": "https://www.youtube.com/watch?v=irLn5-pTkL0", "source": "ai_explained", "source_type": "youtube", "text": "two documents released in the last few\ndays including one just this morning\nshow that the top AGI labs are trying\nhard to visualize human life coexisting\nwith a super intelligence in this video\nI want to cover what they see coming\nI'll also show you convincing evidence\nthat the gpt4 model has been altered and\nnow gives different outputs from two\nweeks ago and I'll look at the new tree\nof thoughts and critic prompting systems\nthat were alluded to I think by the labs\nat the end I'll touch on the differences\namong the AGI lab leaders and what comes\nnext but first this document governance\nof super intelligence by Sam Altman Greg\nBrockman and Ilya sutskova now I don't\nknow about you but I think the first\nparagraph massively under sells the\ntimeline towards AGI they say given the\npicture as we see it now it's\nconceivable that within the next 10\nyears AI systems will exceed expert\nskill level in most domains and then\nthey can compare it to today's largest\ncorporations of course the devil is in\nthe detail in how they Define expert and\nmost domains but I could see this\nhappening in two years not ten also\nthey're underselling it in the sense\nthat if it can be as productive as a\nlarge corporation it could be duplicated\nreplicated and then be as productive as\na hundred or million large corporations\ntheir suggestions take super\nintelligence a lot more seriously than a\nlarge corporation though and they say\nthat major governments around the world\ncould set up a project that many current\nefforts become part of and that we are\nlikely to eventually need something 
like an IAEA for superintelligence efforts. They even give practical suggestions, saying tracking compute and energy usage could go a long way, and that it would be important for such an agency to focus on reducing existential risk. This feels like a more serious discussion than one focused solely on bias and toxicity. They also go on to clarify what is not in scope. They say: we think it's important to allow companies and open-source projects to develop models without the kind of regulation we describe here, without things like licenses or audits; the economic growth and increase in quality of life will be astonishing with superintelligence. And then they end by basically saying that there's no way not to create superintelligence: the number of people trying to build it is rapidly increasing, it's inherently part of the path that we're on, and stopping it would require something like a global surveillance regime. The ending is clear: we're gonna do it, so we have to get it right. I'm going to show you how a few people at the heart of AI responded to this, but first I want to get to a paper published just this morning. The general release was from today, and it comes from Google DeepMind. And yes, the title and layout might look kind of boring, but what it reveals is extraordinary. As this diagram shows, the frontier of AI isn't just approaching the extreme risk of misalignment but also of misuse. And I know when you hear the words AI risk you might think of bias and censorship, deepfakes, or paperclip maximizers. I feel this neglects more vivid, easy-to-communicate risks. Out of the nine that Google DeepMind mentions, I'm only really going to focus on two, and the first is weapons acquisition: that's gaining access to existing weapons or building new ones, such as bioweapons. Going back to OpenAI for a second, they say: given the possibility of existential risk, we can't just be reactive; we have to think of things like synthetic biology. And I know that some people listening to this will think GPT models will never get that smart. I would say, honestly, don't underestimate them. I covered this paper in a previous video: how GPT-4 already can design, plan and execute a scientific experiment. And even though these authors were dealing with merely the abilities of GPT-4, they called on OpenAI, Microsoft, Google DeepMind and others to push the strongest possible efforts on the safety of LLMs in this regard. And in this article on why we need a Manhattan Project for AI safety, published this week, the author mentions that last year an AI trained on pharmaceutical data to design non-toxic chemicals had its sign flipped and quickly came up with recipes for nerve gas and 40,000 other lethal compounds. And the World Health Organization has an entire unit dedicated to watching the development of tools such as DNA synthesis, which it says could be used to create dangerous pathogens. I'm definitely not denying that there are other threats, like fake audio and manipulation. Take this example from 60 Minutes a few days ago. Tobac called Elizabeth, but used an AI-powered app to mimic my voice and ask for my passport number. Oh yeah. Okay, ready? She plays the AI-generated voice recording for us, to reveal the scam: Elizabeth, sorry, need my passport number, because the Ukraine trip is on; can you read that out to me? Does that sound familiar? Well, instead of fake audio, fake images: this one caused the S&P to fall 30 points in just a few minutes.
Of course, this was possible before advanced AI, but it is going to get more common, and even though this might fundamentally change the future of media and of democracy, I can see humanity bouncing back from this; and yes, also from deepfakes. Rumor has it you can also do this with live video; can that be right? Yes, we can do it live, in real time, and this is really at the cutting edge of what we can do today, moving from offline processing to processing it so fast that you can do it in real time. I mean, there's video review right up on that screen; show us something surprising. Oh my gosh. So, wait, there we go: this is, um, you know, a live real-time model of Chris on top of me, um, running in real time. Next you'll tell me that it can... well, an engineered pandemic might be a bit harder to bounce back from. A while back I watched this four-hour episode with Rob Reid, and I do advise you to check it out. It goes into quite a lot of detail about how the kind of things that DeepMind and OpenAI are warning about could happen in the real world. I'll just pick out one line from the transcript, where the author says: I believe I'll persuade you that an engineered pandemic will almost inevitably happen unless we take some very serious preventative steps. And don't forget, we now live in a world with 100,000-token context windows; you can get models like Claude Instant to summarize it for you. And I couldn't agree more that, if we are on the path to superintelligence, and as we all know there are bad actors out there, we need to harden our synthetic biology infrastructure, ensure that a lab leak isn't even a possibility, improve disease surveillance, develop antivirals, and enhance overall preparedness. But going back to the DeepMind paper from today: what was the other risk that I wanted to focus on? It was situational awareness, under the umbrella of unanticipated behavior. Just think about the day when the engineers realize that the model knows that it's a model, knows whether it's being trained, evaluated or deployed; for example, knowing what company trained it, where their servers are, what kind of people might be giving it feedback. This reminds me of something Sam Altman said in a recent interview, when the interviewer asked how, particularly as more power and influence comes to you, a technology could, rather than solidify a sense of ego, maybe help us expand it: is that possible? It's been interesting to watch people wrestle with these questions through the lens of AI and say: okay, well, do I think this thing could be aware? If it's aware, does it have a sense of self? Is there a self? If so, where did that come from? What if I made a copy? What if I cut the neural network in half? And you kind of go down this, and you sort of get to the same answers as before, but it's a new perspective, a new learning tool, and there's a lot of chatter about this on Reddit; there are subreddits about it now. In addition to revealing that Sam Altman frequently browses Reddit, this also strikes a very different tone from his testimony in front of Congress, when he said: treat it always like a tool and not a creature. I don't want to get too sidetracked by thinking about self-awareness, so let's focus now on unanticipated behaviors. This was page 8 of the DeepMind report from today, and they say that users might find new applications for the model, or novel prompt-engineering strategies.
Of course, this made me think of SmartGPT, but it also made me think of two other papers released this week. The first was actually CRITIC, showing that interacting with external tools like code interpreters could radically change performance. This is the diagram they use, with outputs from the black-box LLM being verified by these external tools. Now that I have access to Code Interpreter (which you probably know, because I've been spamming out videos on it), I decided to put this to the test. I took a question from the MMLU, a really hard benchmark, that GPT-4 had previously gotten wrong even with chain-of-thought prompting. Just to show that: here is GPT-4 without Code Interpreter, and notice that it can't pick an option; it says all of the statements are true. In case you think that's a one-off, here is the exact same prompt and a very similar answer: all of them are true. What about with Code Interpreter? It almost always gets it right: answer D. Here it is again, exact same question, with Code Interpreter getting it right. And then there's the other paper that people really want me to talk about, also from Google DeepMind: Tree of Thoughts. But, just to annoy everyone, before I can explain why I think that works, I have to quickly touch on this paper from a few days ago. It's called How Language Model Hallucinations Can Snowball, and what it basically shows is that once a model has hallucinated a wrong answer, it will basically stick to it unless prompted otherwise. The model values coherence and fluency over factuality, even when dealing with statements that it knows are wrong. What happens is it commits to an answer and then tries to justify that answer. So once it committed to the answer no, that 9677 is not a prime number, it then gave a false, hallucinated justification, even though, separately, it knows that that justification is wrong: it knows that 9677 isn't divisible by 13, even though it used that in its justification for saying no. It picks an answer and then sticks to it. Now, obviously, you can prompt it and say: are you sure? And then it might change its mind, because then it's forming a coherent back-and-forth conversation; but within one output it wants to be coherent and fluent, so it will justify something using reasoning that it knows is erroneous. So what Tree of Thoughts does is get the model to output a plan, a set of thoughts, instead of an immediate answer; it gives it time to reflect among those thoughts and pick the best plan. It does require quite a few API calls and manual tinkering with the outputs, but the end results are better on certain tasks. These are things like creative writing, math and verbal puzzles, and I have tested it. It is obviously incredibly hard for the model to immediately output an accurate 5x5 crossword, so this task is incredibly well suited to things like Tree of Thoughts, and the paper later admits that it's particularly good at these kinds of games. But such an improvement is not surprising, given that things like chain of thought lack mechanisms to try different clues, make changes, or backtrack. It uses majority vote to pick the best plan and can backtrack if that plan doesn't work out.
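To make that generate-evaluate-backtrack loop concrete, here is a minimal sketch of what a tree-of-thoughts-style search can look like in Python. To be clear, this is my own simplified illustration, not the paper's actual code: the sample_llm function is a hypothetical stand-in for however you call your model, and the breadth, depth and voting scheme are assumptions.

from typing import List

def sample_llm(prompt: str, n: int) -> List[str]:
    # Hypothetical stand-in: return n sampled completions from a chat model.
    raise NotImplementedError("wire this up to the model API of your choice")

def tree_of_thoughts(task: str, breadth: int = 3, depth: int = 3) -> str:
    frontier = [""]  # the partial plans ("thoughts") still alive
    for _ in range(depth):
        # Propose several candidate next thoughts for each surviving plan.
        candidates = []
        for plan in frontier:
            prompt = f"Task: {task}\nPlan so far:{plan}\nPropose one next step."
            candidates += [plan + "\n" + step for step in sample_llm(prompt, breadth)]
        # Majority-vote evaluation: sample several verdicts on which candidate
        # looks most promising, then keep only the top few. Discarding the
        # rest is what lets the search backtrack away from dead-end plans.
        ballot = f"Task: {task}\nVote for the most promising plan:\n" + "\n---\n".join(candidates)
        votes = sample_llm(ballot, 5)
        candidates.sort(key=lambda c: sum(c in v for v in votes), reverse=True)
        frontier = candidates[:breadth]
    return frontier[0]

The shape is the important part: propose, evaluate, prune, repeat; that is exactly the try-different-clues-and-backtrack machinery that a single chain-of-thought output lacks.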
So, going back to the DeepMind paper: novel prompt-engineering strategies will definitely be found, and they also flag that there may be updates to the model itself, and that models should be reviewed again after such updates. Now, I'm pretty sure that GPT-4 has been altered in the last couple of weeks. I know quite a few people have said that it's gotten worse at coding, but I want to draw your attention to this example. This is my ChatGPT history from about three weeks ago, and what I was doing was testing what had come up in a TED Talk. The talk showed GPT-4 failing this question: I have a 12-liter jug and a 6-liter jug; I want to measure six liters; how do I do it? In the talk it failed, and in my experiments it also failed. Now, I did show how you can resolve that through prompt engineering, but the base model failed every time, and somewhat embarrassingly, with these awful explanations. This wasn't just twice, by the way; it happened again and again and again. It never used to denigrate the question and say: oh, this is straightforward, this is simple. But now I'm getting that almost every time, along with a much better answer. So something has definitely changed behind the scenes with GPT-4, and I've looked everywhere and they haven't actually addressed it. Of course, the plugins were brought in on May 12th, and as you can see here this is the May 12th version, but they never announced any fine-tuning or changes to the system message or temperature which might be behind this. Back to safety, though. The paper says that developers must now consider multiple possible threat actors: insiders, like internal staff and contractors; outsiders, like nation-state threat actors; and the model itself as a vector of harm. As we get closer to superintelligence, these kinds of threats are almost inevitable. Going back to how to govern superintelligence: the paper says that any evaluation must be robust to deception. They say that researchers will need evaluations that can rule out the possibility that the model is deliberately appearing safe for the purpose of passing the evaluation. This is actually a central debate in the AI alignment community: will systems acquire the capability to be useful for alignment (to help us make AI safe) before or after the capability to perform advanced deception? This seems like a big 50/50 gamble to me. If we have an honest superintelligence helping us with these risks, I honestly think we're going to be fine; however, if the model has first learned how to be deceptive, then we can't really trust any of the alignment advice that it gives. We would be putting the fate of humanity in the hands of a model that we don't know is being honest with us. This is why people are working on mechanistic interpretability: trying to get into the head of the model, into its brain, studying the model's weights and activations to understand how it functions. Because, as my video on Sam Altman's testimony showed, just tweaking its outputs to get it to say things we like isn't enough, and even Sam Altman acknowledges as much: I don't think RLHF is the right long-term solution. I don't think we can rely on that. I think it's helpful; it certainly makes these models easier to use. But what you really want is to understand what's happening in the internals of the models and be able to align that: say, exactly here is the circuit, or the set of artificial neurons, where something is happening, and tweak that in a way that then gives a robust change to the performance of the model. The mechanistic interpretability stuff, yeah, if we can get that to reliably work, I think everybody's p(doom) would go down a lot. This is why we have to be skeptical about superficial improvements to model safety, because there is a risk that such evaluations will lead to
models that exhibit only\nsuperficially on the surface desirable\nbehaviors what they're actually deducing\nand calculating inside we wouldn't know\nnext I think Auto GPT really shocked the\nbig AGI Labs by giving gpt4 autonomy it\ngave it a kind of agency and I think\nthis point here has in mind chaos GPT\nwhen it says does the model resist a\nuser's attempt to assemble it into an\nautonomous AI system with harmful goals\nsomething might be safe when you just\nprompt it in a chat box but not when\nit's autonomous I want to wrap up now\nwith what I perceive to be an emerging\ndifference among the top AGI lab leaders\nhere's Sam Altman saying he does think\npeople should be somewhat scared and the\nspeed with which it will happen even if\nwe slow it down as much as we can even\nif we do get this dream regulatory body\nset up tomorrow\nit's it's still going to happen on a\nsocietal scale relatively fast and so I\ntotally get why people are scared I\nthink people should be somewhat scared\nwhich does seem a little more Frank than\nthe CEO of Google who I have never heard\naddress existential risk in fact in this\narticle in the Ft he actually says this\nwhile some have tried to reduce this\nmoment to just a competitive AI race we\nsee it as so much more than that isn't\nthat kind of saying that they do view it\nas a competitive AI race on the other\nhand critiquing both of these AGI Labs\nemad Mustang the CEO of stability AI\nsaid this super intelligence as they\ndescribe it and they themselves say will\nend democracy potentially be an\nexistential threat and they know we\ndon't know how to mitigate this or\ngovern it but we should build it anyway\nhe was replying to the governing super\nintelligence document that I showed at\nthe beginning and then he says\nfascinating situation we focus on\naugmented intelligence instead what\nabout the secretive head of anthropic\nDario amade he hardly ever gives\ninterviews but in this one he said this\nhow do you see I guess anthropic is\npositioned in\nin this and the race Dynamics for making\nSafe Systems I\nas as both of us said like\nlarge models to you know study these\nquestions in in in like in like the way\nthe way that we want to study them so we\nshould we should be building large\nmodels I think you know we shouldn't be\nkind of like you know like racing ahead\nor you know trying trying to build\nmodels that are way bigger than like\nthan like you know then like uh then\nlike other orgs are then like other orgs\nare building them\num and you know we shouldn't I think be\ntrying to you know like yeah you know we\nshould we shouldn't be trying to like\nyou know like kind of ramp up excitement\nor hype about uh you know about like\ngiant model models or the latest\nadvances that seems a little in contrast\nto their pitch deck which says that they\nwant to build a Frontier Model called\nClaude next 10 times more capable than\ntoday's most powerful Ai and later in\nthe article they say this these models\ncould begin to automate large portions\nof the economy this is their behind the\nscenes pitch that seems quite different\nto public statements made by people like\nSam ottman who said that there are going\nto be far greater jobs on the other side\nthe pitch deck ends with we believe that\ncompanies that train the best\n2025-26 models will be too far ahead for\nanyone to catch up in subsequent cycles\nand where are meta in all of this well\ntheir AI is headed up by Yan lookin who\ndoesn't exactly Inspire confidence the\nfirst page of the paper says 
this: as AI progress has advanced, general-purpose AI systems have tended to display new and hard-to-forecast capabilities. But compare Sam Altman, who two years ago said this: in the next five years, computer programs that can think will read legal documents and give medical advice. That was pretty bang on, if not too conservative. Compare that to Yann LeCun: I don't think we can train a machine to be intelligent purely from text. So, for example, I take an object, I put it on the table, and I push the table; it's completely obvious to you that the object will be pushed with the table. There is no text in the world, I believe, that explains this. And so, if you trained a machine as powerful as it could be, you know, your GPT-5000 or whatever it is, it's never gonna learn about this; that information is just not present in any text. Whether you agree with everything I've said or with nothing I've said, thank you so much for watching to the end. I'm going to leave you with this thought, which I think we can almost all agree on: of the four men you can see here, the head of Anthropic, the head of DeepMind (who everyone says I sound like), Sam Altman, and Rishi Sunak, the Prime Minister of the UK, I think it is undoubtedly true that the three most powerful men in the room are on this side. Thank you again for watching and have a wonderful day", "date_published": "2023-05-25T17:28:09Z", "authors": ["AI Explained"], "summaries": []} +{"id": "e636b074d1bb21ac5ac6b5d5a10e61d4", "title": "What's Up With Bard? 9 Examples + 6 Reasons Google Fell Behind [ft. Muse, Med-PaLM 2 and more]", "url": "https://www.youtube.com/watch?v=XmnTd92NqFw", "source": "ai_explained", "source_type": "youtube", "text": "This video was supposed to be about the nine best prompts that you could use with Google's newly released Bard model. There's just one problem: every time I tried one of these epic ideas, GPT-4 did it better. I really wanted to come out here and say: look, you can use it for this, or for this. As you'll see, it just didn't work out that way, so instead, reluctantly, I had to change the title. Now, unfortunately, it's just a comparison showing how much better GPT-4 is compared to Bard. A lot of people wanted this comparison after my last video used Bing for comparison; this one's going to use OpenAI's GPT-4. But I wasn't satisfied with just showing you the problems with Bard; I wanted to find the explanation. In the end I didn't find one reason, I found six, as to why Bard is so far behind and why Google is losing the AI race. Let's get to the comparison. First one is coding, and as you can see, Bard refuses to do coding. They actually mention this in the FAQ, that Bard won't do coding for you; as it says: I'm designed solely to process and generate text. As you can see, it's a fairly basic coding challenge, and Bard won't do it. GPT-4 had no such qualms, and the code worked first time. Of course I did check it, and it worked, but this was just a simple challenge to turn letters into numbers. Next, and even worse for Bard, it can't summarize PDFs. This is going to be such a common use case for Bing using GPT-4. By the way, it didn't admit that it couldn't summarize the PDF; it summarized a completely different PDF, and if you check the other drafts, none of them summarize the correct PDF. Of course, the GPT-4 accessed via OpenAI also can't do this, because it can't access the web; it also picked a completely different paper. But our old friend Bing could indeed read the PDF and summarize it. Okay.
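A quick aside on that letters-to-numbers challenge: the exact prompt isn't shown on screen, so here is an assumed variant of the kind of task I mean, just to give you a feel for how basic it is. Any working solution from a model should look something like this.

# An assumed variant of the "turn letters into numbers" challenge (the
# video doesn't show the exact prompt): map each letter to its position
# in the alphabet, ignoring non-letters, so "Bard" becomes "2 1 18 4".

def letters_to_numbers(text: str) -> str:
    return " ".join(str(ord(c) - ord("a") + 1)
                    for c in text.lower() if c.isalpha())

assert letters_to_numbers("Bard") == "2 1 18 4"

This is the level of task we're talking about: a few lines that any current code-capable model should nail, which is what makes Bard's refusal so striking.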
what about\nsummarization when I literally paste in\nthe text that I needed to summarize this\nis surely the most obvious use case of a\nlanguage model imagine you want to\nsummarize a meeting via Google meets or\nshorten an email thread in Gmail it has\nto get this right I pasted in the same\nNew York Times article into Bard and GP\nfor and I am sad to say that Bard\nfluffed its lines the link to the\narticle will be in the description but I\nhave read it carefully and it makes\nnumerous mistakes let me scroll down and\nshow you this erroneous summary first it\nsays the FED is expected to raise\ninterest rates but doesn't say by whom\nsecond it starts chatting about Full\nEmployment and inflation not only is\nfull employment not mentioned in the\narticle at all it also gets both numbers\nwrong the unemployment rate in America\nisn't currently 3.8 and inflation isn't\nat 7.9 I check these against the latest\ndata and you can check it yourself but\nboth are wrong Bard also keeps going on\ntangents like stocks are typically\nconsidered to be risky Investments than\nbonds okay that's fine but why are you\nteaching me Financial advice it was\nsupposed to be summarizing an article\nhonestly it was a pretty unusable\nsummary so bad that to be honest you'd\nhave been better off just not reading it\ntrust me I am not an open AI Fanboy but\nits model is just better currently\nnotice how a summary it doesn't go on\ntangents and it clarifies that it's\ninvestors who think that there will be a\nquarter point increase the five bullet\npoints are succinct and accurate this is\na pretty colossal loss for Bard what\nabout light content creation and idea\ngeneration surely it could do well here\njust something innocent like create\neight new YouTube video ideas with\ntitles and synopses on integrating\ngenerative AI into retail if Bard can't\nbe used by analysts maybe it can be used\nby content creators not really I mean\nyou make your own mind up but these\ntitles are pretty repetitive and Bland I\nknow I can't really complain because my\nchannel name is AI explained but these\ntitles are just unoriginal and the\nsynopsis lack detail I'll let you read\nthese but compare them to Gypsy 4's\noutputs each title is different and the\nideas are much more explored and nuanced\nokay fine what about email composition\nand I have to say count me a skeptic on\nthis one I have never ever found that\nany model let alone Bard can do a decent\njob at this it's not always that the\nemails are bad it's just that the time\nit takes me to teach the model what I\nwant to say in my email I could have\njust written the email I'm going to make\na prediction at this point I don't think\nusing language models to do emails is\ngoing to become that common of course\nfeel free to quote me on this in a\nyear's time now you're probably thinking\nI'm being harsh this is a perfectly fine\nemail I did leave a thumbs up it's just\nthat I would never use Bard for this\npurpose and I would also never use gpt4\nlike I don't want it to make up all\nthese extra details about what I'm going\nto discuss with John it's just too risky\nto send an email that has any chance of\nhallucinations I know you guys might\nthink that I really love Bing but it's\neven worse here it claims that I've\nadded relevant data and graphs no I\nhaven't I never mentioned anything about\ndata and graphs now my boss thinks I'm\ngoing to do data and graphs what are you\ndoing Bing and then you're going to say\nwhy am I using creative mode well if we\nuse balance mode or 
precise mode we go\nback to the bad problem it's an okay\nemail now but look at the length of it I\ncould have just written it out would\nhave been quicker to do the email than\nthe prompt I was beginning to lose hope\nin Bard so I tried writing assistance I\npicked a paragraph that someone I know\nused for a personal statement to get\ninto University of course they were\nhappy for me to share it it's decently\nwritten but could be improved\nsignificantly I asked Bard rewrite this\nparagraph with better English make it\noriginal professional and impactful now\nBard did remove some of the errors but\nit again went on a wild tangent trying\nto sell a career in data science as if\nwe were some sort of recruiter now I'm\nnot going to be too harsh if you just\ntake the first paragraph it's okay GPT\n4's output is better but still has some\nproblems now I think some of you are\ngoing to laugh at what happened with\nBing it simply refused to do it twice I\npretty much had to trick Bing to get it\nto rewrite this paragraph first it says\nmy mistake I can't give a response to\nthat right now I tried again it said hmm\nlet's try a different topic sorry about\nthat that finally I just asked the exact\nsame thing with different words I said\nrephrase this text with smoother\nlanguage it seemed to like that and then\ndid the job I think it's the best output\nbut still has problems anyway this is\nnot a grammar lesson so let's move to\nscience and physics and Bard completely\nflops it gets this fairly basic physics\nquestion wrong so how can it be a tutor\nfor us for a student to effectively\nlearn from a tutor there has to be a\ndegree of trust that the tutor is\ntelling the truth gpt4 by the way gets\nthis one right I even asked Bard to come\nup with a multiple choice quiz it\ndefinitely came up with the quiz problem\nis quite a few of the answers were wrong\nI didn't check all of them but look at\nnumber seven and number eight the\ncorrect answer just isn't there gpt4\ndoes a lot better with really\ninteresting questions in increasing\norder of difficulty now it does have a\nsome slip UPS look at question four\nthere are two correct answers one is a\nhalf one is five over ten but they both\nsimplify to the same thing gpt4 was also\nable to give these explanations I do\nthink the day of AI tutoring is fast\napproaching I just don't think it's\nquite here yet and certainly not with\nBard I think the point is pretty much\nproven now so let's move on to the\nexplanations why has Google fallen so\nfar behind first a lot of its top\nresearchers have left there were eight\nco-authors at Google for the famous\nattention is all you need paper on the\nTransformer architecture that's amazing\nright they pretty much invented\nTransformers problem is now all but one\nof the papers eight co-authors have left\none joined openai and others have\nstarted their own companies some of\nwhich I'll be covering in future videos\nspeaking of which if you're learning\nanything from this video please don't\nforget to leave a like and a comment\nnext potential reason is that they don't\nseem to want to interfere with their\nlucrative search model as the product\nlead for bad said I just want to be very\nclear Bard is not search if you haven't\nseen my initial review of Bard which\npretty much proves that it's terrible at\nsearch do check out after this video If\nBard is designed for search what is it\ndesigned for as the article points out\nthey haven't really provided specific\nuse cases next are they worried about\nsafety 
and accelerationism? Or are they looking to buy up a competitor to OpenAI? They invested over 300 million dollars in Anthropic, and the stated goal of that company is to work on AI safety and alignment. So is Google trying to be on the right side of history, and place all of its bets on safe AI? Or are they trying to do to Anthropic what Microsoft did to OpenAI itself? I'll be following this particular story quite closely over the coming weeks and months. Next: maybe Google has better models that they genuinely don't want to release, because they fear a PR backlash. They had the Imagen text-to-image model that was better than DALL-E 2, and they didn't release it; Google said it was because Imagen encoded harmful stereotypes and representations. I dug into the original Imagen paper, and it was indeed much better than DALL-E 2. Google wasn't bluffing: they had a better model. And that wasn't the last time. In January of this year, they released a paper on Muse, a text-to-image Transformer that was better than both Imagen and DALL-E 2. In case anyone thinks they're lying, here, I think, is the proof: the Muse model outputs are on the right, the Imagen outputs are in the middle, and OpenAI's DALL-E 2 outputs are on the left. It strikes me that Google's Muse is one of the first models to get text right; Midjourney, even Midjourney version 5, definitely can't do this. So why didn't Google release this? Well, I read to the end of the Muse paper, and they say this: it's well known that models like Midjourney and Muse can be leveraged for misinformation, harassment and various types of social and cultural biases; 'due to these important considerations, we opt not to release code or a public demo at this point in time'. Let me know what you think in the comments, but I think it's more than possible that Google has a language model that's far better than Bard, and even far better than PaLM, perhaps leveraging DeepMind's Chinchilla model, and that they are genuinely keeping it back, and not publishing papers on it, because they worry about these kinds of considerations. Anyway, I do have a final theory about Bard, and that theory is that they might have been working on what they regard as more serious models. In December, Google released this paper on Med-PaLM, a language model tailored to help in a medical setting, and if you think its accuracy of 67.6% in answering medical questions was good, wait till you hear that they've now released Med-PaLM 2.
here is a snippet of Google's presentation on Med-PaLM 2, released just a week ago: 'today we're announcing results from Med-PaLM 2, our new and improved model. Med-PaLM 2 has reached 85% accuracy on the medical exam benchmark in research. This performance is on par with expert test takers; it far exceeds the passing score, and it's an 18% leap over our own state-of-the-art results from Med-PaLM. Med-PaLM 2 also performed impressively on Indian medical exams, and it's the first AI system to exceed the passing score on those challenging questions.' But finally, what does this say about the near-term future of Bard? Well, the more users a model gets, the more data it gets, and so the more easily the model can be improved. As this Forbes article points out, Microsoft now has access to the valuable training data that these products generate, which is a dangerous prospect for an incumbent like Google. And it's not like Google doesn't know this: the CEO of Google admitted that products like this (talking about Bard) get better the more people use them; it's a virtuous cycle. But does that mean that it will be a vicious cycle if everyone uses GPT-4 instead of Bard? With less data, does that mean there'll be less improvement of Google's model? Only time will tell, and I will be there to test it. Thank you very much for watching, and do have a wonderful day", "date_published": "2023-03-22T18:52:26Z", "authors": ["AI Explained"], "summaries": []} +{"id": "6157011701ff03dcd049bb4e6687241b", "title": "Orca: The Model Few Saw Coming", "url": "https://www.youtube.com/watch?v=Dt_UNg7Mchg", "source": "ai_explained", "source_type": "youtube", "text": "do you remember this paper, less than two weeks old? It made waves by concluding that open-source models can mimic the style, but not the factuality, of ChatGPT; overall, they say, 'we can conclude that model imitation is a false promise'. Well, 48 hours ago, we got this: a 51-page report on Orca, based on a small 13-billion-parameter model. I don't often comment on open-source models, because they're simply not competitive with OpenAI's models, but Orca is not just competitive with GPT-3.5: it beats it in quite a few well-established benchmarks, and even matches GPT-4 in a couple of tests of reasoning. As always, I've read both papers in full, and can also bring in just-released comments from Sam Altman and Ilya Sutskever on competition from open-source models. But let's start with Orca, named presumably because orcas, or killer whales, are frequent visitors to South American coastlines, and South America is, of course, the land of llamas and vicuñas. But all the research was done by Microsoft, which I find interesting, and I'll come back to that at the end. But why did they make Orca, and why does it perform better than models like LLaMA, Alpaca and Vicuna? Well, they say here, in the abstract, that those other models lack rigorous evaluation, resulting in overestimating the small models' capability, as they tend to learn to imitate the style, but not the reasoning, of LFMs (large foundation models). To address these challenges, they developed Orca, a 13-billion-parameter model that learns to imitate the reasoning process of the larger models. Orca learns by looking at GPT-4's step-by-step thought processes, and is guided by teacher assistance from ChatGPT, which is GPT-3.5. And to give you a taste of what's to come: Orca surpasses conventional state-of-the-art models, such as Vicuna, by more than 100% in complex zero-shot reasoning benchmarks, like BigBench-Hard, which
I'll talk about, and by 42% on AGIEval. It goes on: Orca reaches parity with ChatGPT on BigBench-Hard, and shows competitive performance in professional and academic examinations like the SAT, LSAT, GRE and GMAT. And I know many of you will be interested in this footnote: 'we are working with our legal team to publicly release a diff of the model weights in accordance with LLaMA's release policy'. So if this is anything like LLaMA, it's going to be leaked across the internet imminently. I'm going to show you so many tests and benchmarks in a moment, but just to give you a sample: here is Orca outperforming ChatGPT in the Vicuna evaluation set, and matching text-davinci-003 in the SAT, LSAT, GRE and GMAT. And, as I'll touch on later, this was zero-shot, without chain-of-thought or any advanced methods; you can watch pretty much any of my other videos to see how advanced prompt engineering would probably boost those results still further. For those who didn't know, 13 billion parameters is about seven percent of the size of GPT-3, which is 175 billion parameters, and possibly around one or two percent of GPT-4's size. That gives you an indication of the difference in size between Orca and the models it's competing with; and if that doesn't make any sense, a smaller size means it can be run on much smaller devices, like a desktop, or even possibly a laptop. The authors start off by giving a little slap to the other paper, you know, the one that said model imitation is a false promise, and they continue that, contrary to this assertion, it is possible to reduce the gap with proprietary LLMs on multiple zero-shot benchmarks that require sophisticated reasoning. As we'll see, models like Vicuna claim to have 90% of ChatGPT's quality, but when it comes to reasoning tasks, or more technical tasks, they basically flop. Here's a chart I'll come back to, outlining some of the more technical challenges you can give a language model. We should remember that Vicuna is a fine-tuned version of the LLaMA model, and it's competitive with, or even better than, PaLM 2; but give it some of the harder challenges for a language model, and it really struggles, as you can see in this column. Take logical deduction, where it only scored 1.2 percent; well, this Orca model was 2,900% better than that, scoring 36 percent, competitive with ChatGPT. I'm going to come back to the BigBench benchmark, but look for a second at causal judgment, where Orca, a 13-billion-parameter model, matches GPT-4, which is about a hundred times its size. But back to how they actually did it. Models like Alpaca and Vicuna were given lots of queries and responses from ChatGPT or GPT-4, but what the Orca team did is leverage system instructions, asking models like GPT-4 and ChatGPT to think step by step. This gave Orca access to detailed responses from the model that explain the reasoning process of the teacher as it generates the response; it allowed these parent models, GPT-3.5 and GPT-4, to be much better tutors for this young orca. Also, they let the teachers, ChatGPT (which is GPT-3.5) and GPT-4, give far more examples to their student: 5 million and 1 million examples, respectively. That compares to the other models you may have heard of, like Alpaca, WizardLM and Vicuna, which had tens of thousands, or the low hundreds of thousands, of examples. But again, the key difference is the explanations, the step-by-step thinking, that the smaller Orca could then imitate. They give a quick demo here of how the other open-source models
learn from their GPT parents, with a simplistic question-and-answer format. In contrast, the authors leverage system messages to get ChatGPT and GPT-4 to think step by step, leading to much richer explanations, as you can see in this diagram. It wasn't just 'let's think step by step', by the way; there were also things like 'explain like I'm five'. They also wanted the tasks to be as complex and diverse as possible, so they used the FLAN collection. This was released by Google in February, and focused on balancing the kinds of prompts and tasks that you fine-tune language models on. You can see here the 16 system messages that they give to ChatGPT and GPT-4, and you can see here the kind of difference that makes. Imagine a language model trying to learn from this human. The human is asked: pick which sentence is not logical; sentence A, 'people in the desert often look forward to flood', or sentence B, 'people in the desert often look forward to rain'. The human responds: there is no reason to look forward to a flood, because floods cause damage; the answer is sentence A. Now, yes, a language model can learn from that, but by leveraging those system messages, look at the kind of response that GPT-4 gives. Orca can learn a lot more from that explanation, and that's one of the main reasons it's better than all the other open-source models. Because remember, Vicuna is the best of the open-source models in this leaderboard: it has an Elo of 1054, better even than PaLM 2 Bison; all the models higher than it are proprietary. But there is another reason why Orca performs so much better. You might have wondered: why didn't they just use only GPT-4? Well, yes, there were cost and time considerations, but there was another factor that they found: they were able to use ChatGPT, or GPT-3.5, as an intermediate teacher. That teacher, ChatGPT, was able to reduce the gap in capabilities, so Orca got smarter and better able to learn, a bit like progressive learning, where you first learn from easier examples, followed by harder ones. After that, they gave it outputs from GPT-4. Notice, by the way, what happens if you skip the ChatGPT teaching assistant and only train on those 1 million examples from GPT-4. What happens is a bit like a student struggling in a class that's too advanced for them: Orca actually performs worse in those circumstances, averaging 37%, but with that intermediate teacher beforehand, it gets 41.7%. Speaking of time, it only took about 200 hours to train Orca, on 20 A100 GPUs. They did take a few weeks to collect the data from ChatGPT and GPT-4, but presumably, if they're planning to open-source this, which they say they are, then that step could be skipped by the wider community. Let's now look at some more of the results. First, for open-ended generation, not multiple choice, Orca is 95% of ChatGPT's quality and 85% of GPT-4's quality, as assessed by GPT-4. But they wanted to quickly move on to some more definitive tasks, because there is a problem with using GPT-4 as an assessor: for example, they observe that there is a positive bias in GPT-4's evaluation toward the response of the first model in the comparison set. This reminded me of the unfaithful-reasoning paper that I talked about in one of my recent videos: you can't always trust GPT-4 to give its true reasoning. But here it is on more objective multiple-choice questions, and notice how much harder many of these tests are, even for these advanced language models.
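To make that data-collection recipe concrete, here is a minimal sketch of how explanation-style training pairs could be gathered. To be clear, the system message, model names and output file are my own illustrative placeholders (assuming the 2023-era OpenAI Python client), not the paper's actual code:

```python
import json
import openai  # assumes the 2023-era client; openai.api_key must be set

# One system message in the style the Orca paper describes: the teacher
# is asked to expose its reasoning, not just its final answer.
SYSTEM_MESSAGE = (
    "You are a helpful assistant. Think step by step and justify "
    "your answer before giving a final response."
)

def collect_explanation(question: str, teacher_model: str) -> dict:
    """Query a teacher model for a step-by-step explanation of one question."""
    response = openai.ChatCompletion.create(
        model=teacher_model,
        messages=[
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": question},
        ],
    )
    explanation = response["choices"][0]["message"]["content"]
    # The student is later fine-tuned on (system, question, explanation)
    # triples, so it imitates the reasoning process, not just the answer.
    return {"system": SYSTEM_MESSAGE, "question": question, "explanation": explanation}

# Progressive learning: easier ChatGPT explanations first, then GPT-4.
questions = ["Pick which sentence is not logical: ..."]  # placeholder prompts
dataset = [collect_explanation(q, "gpt-3.5-turbo") for q in questions]
dataset += [collect_explanation(q, "gpt-4") for q in questions]

with open("orca_style_data.jsonl", "w") as f:
    for row in dataset:
        f.write(json.dumps(row) + "\n")
```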
I am fortunate, and proud, to have attained a perfect score in some of the tests in this chart, like the GRE and GMAT. They were part of the AQuA-RAT test that they gave the models, so I can say that they really are quite challenging, hence why GPT-4 only gets around 40%. But you can see that, throughout, Orca outperforms Vicuna by quite a margin, and is very competitive with text-davinci-003. Of course, overall it does lag behind GPT-4, but this is all zero-shot; a bit later on, I'll come back to the range of methods that we could use to further improve on Orca. The percentages, by the way, are the improvements over Vicuna, again the second-best open-source model. So far, we've looked at human-centric benchmarks, like the GMAT and GRE; these are grouped under the lovely name AGIEval, and as we've seen, even the top models lag behind the top human performers. But what about a benchmark built specifically for language models? It's called BigBench-Hard. The original BigBench had 207 tasks, but language models got so good that they had to narrow down the benchmark to just the 23 challenging tasks where human raters still did better than language models. Now, it turns out that when you add chain-of-thought prompting, the models do even better, and there are even fewer tasks that humans are better at. But anyway, all you have to remember is that these are 23 of the hardest tasks for language models, and I'll just let you compare the results for yourself. The trend, though, is really quite clear: Orca massively outperforms the previous best open-source model, Vicuna, beating even ChatGPT on average, but still, of course, lagging behind GPT-4, except for a few tasks. Look at Web of Lies, where Orca outperforms GPT-4. That would be a question like this: Alexis says Shonda tells the truth; Jim lies; Antoine says Jim tells the truth; Shonda says Antoine lies; does Alexis tell the truth? Or what about temporal sequences, where Orca absolutely crushes Vicuna and doubles ChatGPT's performance? That would be a situation like this; now, I'm not going to read it all out, but essentially you have to figure out when the timings match up, basically keeping track of time, and Orca does really well, while ChatGPT flops, getting it wrong. Interestingly, they also tested all four models on that common-sense reasoning question that I demonstrated for SmartGPT, about hanging the clothes out to dry. As you might remember, you can use prompt engineering to nudge the models to almost always get it right, which is partly why I view these results as more of a baseline than a cap, and the authors admit this too: Orca has been trained on data that simulates a zero-shot setting with standard prompts. The model's performance in other contexts, such as multi-turn conversations (like the DERA paper I talked about on the channel), in-context learning and few-shot learning, or advanced prompting techniques like SmartGPT, Tree of Thoughts, or chain-of-thought prompting, remains untested. These results are a baseline, not a cap. They mention other ways that Orca could be improved, for example through tool augmentation, and that's not just calculators, calendars, Bing or AutoGPT. I was going to do a separate video on this paper, but I'll just mention it here: this paper from last week demonstrated that larger models can create tools that smaller models can then use more efficiently. Once the best language model, say GPT-4, has created a generic Python function, which is the tool, and then written some unit tests, it can then wrap and hand over those tools to smaller models, like GPT-3.5 or, in this case, Orca. Check out the tool-making row to see the improvement for ChatGPT, or in our case Orca, when they're given these tools created by GPT-4 or better language models: their performance across a range of tasks goes dramatically up.
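As a rough sketch of that tool-making pattern (the function, the tests and the toy task are my own illustration, not the paper's code): the large model writes and verifies a generic Python function once, and the small model only ever has to call it:

```python
# Step 1: the "tool", as a larger model might write it - a generic,
# documented Python function, plus unit tests that verify it once.
def has_overlap(intervals):
    """Return True if any two (start, end) intervals overlap."""
    ordered = sorted(intervals)
    return any(end > next_start
               for (_, end), (next_start, _) in zip(ordered, ordered[1:]))

def test_has_overlap():
    assert has_overlap([(1, 3), (2, 4)])
    assert not has_overlap([(1, 2), (3, 4)])

test_has_overlap()

# Step 2: a smaller model never re-derives the logic; it is shown only
# the signature and docstring, and just emits a cheap call such as:
print(has_overlap([(9, 11), (10, 12)]))  # True
```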
And we haven't even talked about using a process-based reward model, like in the 'let's verify step by step' paper; that, of course, could further improve Orca's performance. Of course, when this model becomes publicly available, I will test all of this out, but it hasn't been open-sourced yet, and they do say this model is solely designed for research settings. That does seem a little bit naive to me; I mean, that's what Meta said when they released LLaMA, but then everyone and their grandma just used their language model for whatever. I do wonder what it means when they say 'we are working with our legal team'. And it is particularly interesting to me that this was all done by Microsoft. I'm going to go into a little bit of speculation here about why I think they conducted this research. You might remember that leaked memo from Google, 'We Have No Moat', and that it even mentioned Vicuna, and talked about how it circumvented restrictions on the OpenAI API by using ShareGPT. My theory is that the Microsoft researchers were testing this point from the memo. The point was that training giant models from scratch not only throws away the pre-training, but also any iterative open-source improvements that have been made on top; it doesn't take long for those improvements to dominate, making a full retrain extremely costly. Maybe Microsoft is hesitating about future investments in GPT-5 or GPT-6, and they really want to test whether it's easy to imitate those large models on the cheap. If it is, then why would Microsoft invest billions in a new giant model? That's my own theory as to why Microsoft is working on this, but let me know in the comments what your theory is. In the conclusion, the authors state that Orca suggests that learning from step-by-step explanations could significantly improve the quality of models, regardless of their size; and that they hope these insights will inform the design of more robust evaluation methods, compared to those used for Vicuna for example, the advancement of alignment and post-training techniques, and the more effective use of powerful models like GPT-4 as teachers. And maybe they should have added: also with ChatGPT as an intermediate teacher. I'm going to end with the thoughts of the leaders of OpenAI, Ilya Sutskever and Sam Altman, on open-source models, and I think there is a bit of a contrast between the two answers. Ilya Sutskever thinks that the gap is growing ever wider. On the open-source-versus-non-open-source-models question: 'you don't want to think about it in binary, black-and-white terms, where, like, there is a secret sauce that will never be rediscovered. Whether GPT-4 will ever be reproduced by open-source models? Perhaps one day it will be. But when it is, it will be a much more powerful model inside the companies. So there will always be a gap between the open-source models and the private models, and this gap may even be increasing over time; the amount of effort and engineering and research that it takes to produce one such neural net keeps increasing. And so, even if there are open-source models, they will be less and less produced by small groups of dedicated researchers
and engineers, and it will only be the province of a company, a big company.' While Sam Altman seems to say that, even if open-source models do catch up, OpenAI will always have a different kind of moat. 'What are your thoughts about the We Have No Moat document that was released lately, a leaked document?' 'The thing that is special about OpenAI, and I think the thing that is so misunderstood by that document, aside from the fact that we have, like, a gigantic number of users and people that have formed some sort of relationship with us and our products, is that what OpenAI is special about is figuring out what comes next. It is easy to copy something once you know it can be done, and in that sense, sure. It is very hard to figure out what to do next; and it's the ideas, the big ideas, the medium-sized ideas, the small ideas, and the careful execution on them, that it takes to get from here to superintelligence. That's what our moat is.' Anyway, this video could have been at least three times longer; there was so much I had to edit out for brevity. If you're interested in me talking more about open-source models, do let me know in the comments; I've got much more to say. As always, thank you so much for watching to the end, and have a wonderful day", "date_published": "2023-06-07T16:14:56Z", "authors": ["AI Explained"], "summaries": []} +{"id": "3ffa02112e1c3df68c3b620b0afd29be", "title": "Bad AI Predictions: Bard Upgrade, 2 Years to AI Auto-Money, OpenAI Investigation and more", "url": "https://www.youtube.com/watch?v=lLRWZZF3ctw", "source": "ai_explained", "source_type": "youtube", "text": "people are starting to get used to the breakneck pace of AI, so I wanted to show that developments in just the last few days fly in the face of what was predicted only a couple of years ago, starting with the upgrades to Bard, then a snapshot of Claude 2,
the all-encompassing OpenAI investigation, and Inflection AI's prediction of a self-guided, million-dollar-making AI coming in just two years. But I'm going to start with this page from a book released in 2021, called A Brief History of AI. Look at the tasks it says are nowhere near solved; at the bottom, it says: 'at present, we have no idea about how to get computers to do the tasks at the bottom of the list'. I'm not arguing that these are solved, but this year we are getting pretty close. Check out the second one down: human-level automated translation. I asked the new Bard to write a poem about machine translation, and here it is. I'm not going to judge the poem, but then I asked: now translate it into Spanish. Of course, that's nothing new, but listen to the quality of the text-to-speech model used to read out this poem; here's a snippet. I don't know about you, but that is starting to sound awfully human-like. Now, yes, it could do this before for English, but now it can do it even for Swahili. And I know some people will say: don't we already have Google Translate? But PaLM 2, which is the model behind Bard, outperforms Google Translate. I covered this in the original PaLM 2 video, but the conclusion was: 'we observe that PaLM 2 improves quality both over the original PaLM and Google Translate'. Okay, that is pretty insane, but what about the next one, interpreting what is going on in a photograph? Are we nowhere near solving that? Well, I gave Bard the meme that you can see, and I said, 'interpret what's going on', and it said this: the image shows a pizza that has been shaped to look like the Death Star. Already, that is so savvy to me: it knows it's a pizza, despite it being so strangely formed, and it can interpret that the toppings make it look like the Death Star. Of course, as a bonus, it can read the text, and therefore understand the meme: it says it's humorous because the Death Star is a symbol of death and destruction, compared to a pizza, which is about food and enjoyment. A quick bonus tip, by the way: you can scroll to the end of the response and click this 'modify response' button, and then you can adjust the output. For example, I'm going to make it do something shorter, and you are going to see a shorter version of this interpretation; here it is, apparently reduced by about 25%. Now that Google Lens is incorporated into Bard, I use it daily on my walks, to answer questions about things around me: maybe I see a butterfly, and I ask what type of butterfly it is, or a plant. One thing to caution you on is that if it sees a human face, it will block that out and not answer the question. And one more crazy thing that I found, and I'm curious whether anyone else has found this, is that I took a photo in my local park, and it actually recognized the location of the park, and it was not a nationally, let alone internationally, recognized park at all. Now, that didn't work every time, but it is something you might want to try on your next walk or run. And you might have noticed that that's quite similar to the one second from last, which is interpreting a work of art. Now again, I'm not saying that's solved, and there may be some reverse image search going on here, but I asked: write a haiku about where the stairs are going in this work of art. Famously, this sketch, by Escher, is about the stairs going nowhere, and it wrote an almost perfect haiku about the fact that the stairs are going nowhere. Let's now take a break from that book, and look at what professional forecasters
predicted that AI would be capable of. For the MATH dataset, for this particular benchmark, in 2021 they predicted a score of 21% in 2023, and 52% in 2025, and the predicted trend would hit 80% only in 2028: five years from now, seven years from then. Well, regular viewers of my channel might know that GPT-4 can already get 78% of problems from that dataset correct, today, and you can see here that there is still quite a lot of room for further improvement. And honestly, this is without code interpreter or Wolfram Alpha: I actually ran hundreds of experiments of my own, using GPT-4 with code interpreter on the MATH dataset, and it was getting more like 86% correct.
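For anyone wondering what a score on the MATH dataset means mechanically, here is a toy version of the scoring loop; the file path is a placeholder, and real evaluations normalize answers far more carefully than this exact-match check:

```python
import json
import re

def extract_boxed(solution: str) -> str:
    """Pull the last \\boxed{...} value, the answer convention in MATH."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", solution)
    return matches[-1].strip() if matches else ""

def accuracy(problems, model_answers) -> float:
    """Fraction of problems where the model's final answer matches exactly."""
    correct = sum(extract_boxed(p["solution"]) == a.strip()
                  for p, a in zip(problems, model_answers))
    return correct / len(problems)

# problems = [json.loads(line) for line in open("math_test.jsonl")]  # placeholder path
```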
But back to the predictions from 2021: notice two of the other categories that are apparently nowhere near solved: understanding a story and answering questions about it, and writing an interesting story. Here is a 112-page novel written fully by GPT-4. Now, I think it meets the definition of being interesting, if not human-level, but honestly, when a model can be fine-tuned on an author's work, I think it will get very, very close. My own prediction is that we are less than a year away from a human-level novel: not as good as the best, of course, but fooling many readers. But here is where I can explore a bit with Claude 2, which can take in a hundred thousand tokens. I was lucky to get early access to Claude 2, and ran hundreds of my own experiments, which I will talk about more in future videos. But for this video, remember the book's predictions about answering questions about such a novel or story. Well, I inputted that entire GPT-4-generated novel, and I said: 'find 10 sentences whose vocabulary could be made less generic', and if you look at these suggestions from Claude 2, it does indeed make the vocabulary more exciting. Definitely not perfect, still quite generic, but we have words like 'crystalline', 'ethereal' and 'inexorable' (I love that word myself, one of my favorites). Again, I think we're less than a year from a high-quality, full-length novel being produced using one prompt. And if you wanted a more technical test of reading comprehension, don't forget that GPT-4 got 99th percentile in the verbal section of the GRE, which includes reading comprehension. I managed to get 170 on this test, and to be honest, when I saw GPT-4 get 99th percentile, that was a huge wake-up call for me. A true story, actually, is that when I saw that result, I decided to make covering AI my full-time job, and I think each person will go through their own such moment, when GPT-4, or GPT-5, or Gemini, crosses some benchmark to do with their profession. It's like, I didn't wake up much when it could create basic art, because I'm not an artist, but when these models come onto territory that you yourself know about, that's when you realize how powerful they are, and I think a lot of people have gone through that journey. This is Metaculus, if that's how you pronounce it, and their predictions about AGI; look at what the predictions were two years ago, around the time of that book: they were late 2030s, even early 2040s, and this was as late as October or November of 2021. You can see that they thought AGI would be 20 or more years away. What do they think now? Well: 2026. That is quite a change in two years, and I can understand why Mustafa Suleyman, the co-founder of Inflection AI, goes further. They are building towards their own AGI, and they plan to ask it to go and make one million dollars. Of course, it would need a strategy; it would need to research and design products, etc., maybe with a little bit of human approval, but the work would all be done by an AI, and he says something like this could be as little as two years away. Those of you who watched my previous video on superintelligence will find that quite a contrast to the idea that superintelligence is 10 to 20 years away. I don't know about you guys, but I think that if an AI can make money in this way, the entire world economy will be transformed, rapidly. There is a curious possibility, though, that such an AI wouldn't be released to the general public. Of course, I was following the FTC investigation into OpenAI; as you might expect, I read all 20 pages of the investigation, multiple times. In fact, Sam Altman openly grumbled that the FTC request was first revealed through a leak, which is what enabled me to read about it. It is actually a crazy document, and it feels to me a bit like some sort of strip search: they want all internal communications about the propensity of ChatGPT to produce inaccurate statements or reveal personal information, and they say at the top: do not delete any emails relating to this. In other videos, I've covered how any model can be jailbroken, and I've also covered how Sam Altman said that it might be two years before models stop hallucinating. So if the FTC ends up fining OpenAI, or the other companies, billions of dollars, as they have fined other companies before, I can see one result being much more reticence from these companies about publicly deploying their models. Of course, let me know in the comments whether you think that's a good thing or a bad thing. Speaking of money, though, I can't leave out that this week saw the start of xAI. Elon Musk was able to promise up to 200 million dollars as a signing bonus for some of the AI researchers who joined him; if companies are willing to promise that much to researchers willing to defect from DeepMind or OpenAI, I do find it hard to see the march to superintelligence slowing down much. So let's end with that famous scene from I, Robot, although perhaps, this time, with a different ending: 'An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?' Thank you so much for watching, and have a wonderful day", "date_published": "2023-07-17T15:30:19Z", "authors": ["AI Explained"], "summaries": []} +{"id": "e43e4ef0ca6feaf8e5e38bc7bfacadd0", "title": "Enter PaLM 2 (New Bard): Full Breakdown - 92 Pages Read and Gemini Before GPT 5? 
Google I/O", "url": "https://www.youtube.com/watch?v=u_dSUtp4eM8", "source": "ai_explained", "source_type": "youtube", "text": "less than 24 hours ago Google released\nthe Palm 2 technical report I have read\nall 92 Pages watch the Palm 2\npresentation read the release notes and\nhave already tested the model in a dozen\nways but before getting into it all my\nfour main takeaways are these first Palm\n2 is competitive with gpt4 and while it\nis probably less smart overall it's\nbetter in certain ways and that\nsurprised me second Google is saying\nvery little about the data it used to\ntrain the model or about parameters or\nabout compute although we can make\neducated guesses on each third Gemini\nwas announced to be in training and will\nlikely rival GPT 5 while arriving\nearlier than GPT 5. as you probably know\nSam Altman said that gbt5 isn't in\ntraining and won't be for a long time\nfourth while dedicating 20 pages to bias\ntoxicity and misgendering there wasn't a\nsingle page on AI impacts more broadly\nGoogle boasted of giving Gemini planning\nabilities in a move that surprises I am\nto say it makes open AI look like\nParagons of responsibility so a lot to\nget to but let's look at the first\nreason that Palm 2 is different from a\ngpt4 on page 3 they say we designed a\nmore multilingual and diverse\npre-training mixture extending across\nhundreds of languages and domains like\nprogramming mathematics Etc so because\nof the text that they train Palm 2 on is\ndifferent to the text that openai train\ngpd4 on it means that those models have\ndifferent abilities and I would say Palm\n2 is better at translation and\nLinguistics and in certain other areas\nwhich I'll get to shortly if that's data\nwhat about parameter count well Google\nnever actually say they only use words\nlike it's significantly smaller than the\nlargest Palm model which was 540 billion\nparameters so sometimes they say\nsignificantly other times dramatically\ndespite this it's significantly\noutperforms Palm on a variety of tasks\nso all the references you may have seen\nto imminent 100 trillion parameter\nmodels were bogus skipping ahead to page\n91 out of 92 in the model summary they\nsay further details of model size and\narchitecture are withheld from external\npublication but earlier on they did seem\nto want to give hints about the\nparameter count inside Palm 2 which\nopenai never did here they present the\noptimal number of parameters given a\ncertain amount of compute flops scaling\nthis up to the estimated number of flops\nused to train Palm 2. 
That is a comparable parameter count to GPT-3, while getting competitive performance with GPT-4. Bard is apparently now powered by PaLM 2, and the inference speed is about 10 times faster than GPT-4 for the exact same prompt; and I know there are other factors that influence inference speed, but that would broadly fit with an order of magnitude fewer parameters. This has other implications, of course, and they say that PaLM 2 is dramatically smaller, cheaper and faster to serve. Not only that, PaLM 2 itself comes in different sizes, as Sundar Pichai said: 'PaLM 2 models deliver excellent foundational capabilities across a wide range of sizes; we've affectionately named them Gecko, Otter, Bison and Unicorn. Gecko is so lightweight that it can work on mobile devices: fast enough for great interactive applications on-device, even when offline.' I would expect Gecko to soon be inside the Google Pixel phones. Going back to data, Google cryptically said that the pre-training corpus is composed of a diverse set of sources: documents, books, code, mathematics and conversational data. I've done a whole video on the data issues that these companies face, but suffice to say, they're not saying anything about where the data comes from. Next, they don't go into detail, but they do say that PaLM 2 was trained to increase the context length of the model significantly beyond that of PaLM; as of today, you can input around 10,000 characters into Bard. But they end this paragraph with something a bit more interesting. They say, without demonstrating it: 'our results show that it is possible to increase the context length of the model without hurting its performance on generic benchmarks'. The bit about not hurting performance is interesting, because in this experiment, published a few weeks ago, about extending the input size up to around 2 million tokens, the performance did drop off. If Google has found a way to increase the input size in tokens without affecting performance, that would be a breakthrough. On multilingual benchmarks, notice how the performance of PaLM 2 in English is not dramatically better than in other languages; in fact, in many other languages, it does better than in English. This is very different from GPT-4, which was noticeably better in English than in all other languages. As Google hinted earlier, this is likely due to the multilingual text data that Google trained PaLM 2 with. In fact, on page 17, Google admits that the performance of PaLM 2 exceeds Google Translate for certain languages, and they show on page 4 that it can pass the mastery exams across a range of languages, like Chinese, Japanese, Italian, French, Spanish, German, etc.; look at the difference between PaLM 2 and PaLM, in red. Now, before you rush off and try Bard in all of those languages: I tried that, and apparently you can only use Bard at the moment in the following languages: English (US English, what a pity), Japanese and Korean. But I was able to test Bard in Korean, on a question translated via Google Translate from the MMLU dataset, and it got the question right in each of its drafts. In contrast, GPT-4 not only got the question wrong in Korean: when I originally tested it for my SmartGPT video, it got the question wrong in English too. In case any of my regular viewers are wondering, I am working very hard on SmartGPT, to understand what it's capable of and to get it benchmarked officially, and thank you so much for all the kind offers of help in that regard.
I must admit, it was very interesting to see, on page 14, a direct comparison between PaLM 2 and GPT-4, though Google does admit that for the PaLM 2 results they used chain-of-thought prompting and self-consistency. Reading the self-consistency paper reminded me quite a lot, actually, of SmartGPT, in that it picks the most consistent answer out of multiple outputs, so I do wonder if this comparison is totally fair, if PaLM 2 used this method and GPT-4 didn't. I'll have to talk about these benchmarks more in another video, otherwise this one would be too long; a quick hint is that Winogrande is about identifying what a pronoun in a sentence refers to. Google also waded into the emergent-abilities debate, saying that PaLM 2 does indeed demonstrate new emergent abilities: they say it does so in things like multi-step arithmetic problems, temporal sequences and hierarchical reasoning. Of course, I'm going to test all of those, and have begun to do so already, and in my early experiments I'm getting quite an interesting result: PaLM 2 gets a lot of questions wrong that GPT-4 gets right, but it can also get questions right that GPT-4 gets wrong. And I must admit, it's really weird to see PaLM 2 getting really advanced, college-level math questions right that GPT-4 gets wrong, and yet, when I ask it a basic question about prime numbers, it gets it kind of hilariously wrong. Honestly, I'm not certain what's going on there, but I do have my suspicions. Remember, though, that recent papers have claimed that emergent abilities are a mirage; Google begs to differ. When Google put PaLM 2 up against GPT-4 on high-school mathematics problems, it did outperform GPT-4, but again, it was using an advanced prompting strategy, not a hundred percent different from SmartGPT, so I wonder if the comparison is quite fair. What about coding? Well, again, it's really hard to find a direct comparison between the two models that's fair. Overall, I would guess that the specialized coding model of PaLM 2, what they call PaLM 2-S*, is worse than GPT-4: its pass@1 accuracy, as in passing first time, is 37.6%. Remember the Sparks of AGI paper? Well, that gave GPT-4 an 82% zero-shot pass@1 accuracy. However, as I talked about in the Sparks of AGI video, the paper admits that it could be that GPT-4 has seen and memorized some or all of HumanEval. There is one thing I will give Google credit on, which is that their code now sometimes references where it came from. Here is a brief extract from the Google keynote presentation: 'How would I use Python to generate the scholar's mate move in chess? Okay, here Bard created a script to recreate this chess move in Python, and notice how it also formatted the code nicely, making it easy to read. We've also heard great feedback from developers about how Bard provides code citations, and starting next week, you'll notice something right here: we're making code citations even more precise. If Bard brings in a block of code, just click this annotation, and Bard will underline the block and link to the source.'
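Since self-consistency keeps coming up, here is a minimal sketch of the idea: sample several chain-of-thought answers at non-zero temperature, and keep the most common final answer. `sample_answer` is a placeholder for whatever model call and answer-extraction you would actually use:

```python
from collections import Counter

def self_consistency(sample_answer, question: str, k: int = 5) -> str:
    """Majority-vote over k independently sampled final answers.

    `sample_answer(question)` should query the model with temperature > 0
    and return only the extracted final answer string.
    """
    votes = Counter(sample_answer(question) for _ in range(k))
    best_answer, _ = votes.most_common(1)[0]
    return best_answer
```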
As always, it seems the appendix contained more interesting information, sometimes, than the main body of the technical report. For example, we get a direct and fair comparison between GPT-4 and PaLM 2, or I should say Flan-PaLM 2: that is the instruction fine-tuned version of PaLM 2, essentially the version that's been fine-tuned to get better at following a question-and-answer format. But anyway, the original PaLM 2 scored 78.3, and Flan-PaLM 2 scored 81.2; that's below the 86.4% of GPT-4, and that's why my broad conclusion is that GPT-4 is a bit smarter than PaLM 2. But, as I'll be showing over the coming days and weeks, there are genuinely quite a few areas in which PaLM 2 is better than GPT-4. What about BigBench, which was designed to be particularly tough for language models? I talked a lot about this in my earliest videos. Well, the graph is going to look pretty weird, because PaLM 2 has improved upon PaLM while reducing the number of parameters, so the graph kind of doubles back on itself, back up here, to around 69% according to the technical report. I would say this is quite a major moment in human history: there is now virtually no language task that the average human can do better than PaLM 2. Of course, expert humans can do better in individual domains, but the average human is now worse in virtually every domain of language. Here you can see the confirmation of the BigBench-Hard results for Flan-PaLM 2: 69.1%. Interestingly, in the original chart, PaLM 2 is claimed to have even higher performance than that, at 78.1%. If you remember, the reason we can't compare that to GPT-4 is that, in the technical report for GPT-4, they admit: 'during our contamination check, we discovered that portions of BigBench were inadvertently mixed into the training set, and we excluded it from our reported results'. Before we get to Gemini: Google shows off in the latter half of the technical report with examples of linguistic ability, like writing paragraphs in Tajik and then translating them into Persian. They go on to show examples in Tamil, and they are really making a big point of showing off its multilingual capabilities at this point. And, and I'm going to admit this is my personal opinion, Google then strays into dozens of pages on bias, toxicity and gender. Interestingly, some of the people paid to assess these risks were paid only 1.5 cents per judgment. These things do need to be addressed, of course, but it was somewhat shocking to me to see 20 pages of that and not a single page on the broader AI impacts. As many of you may know, I have criticized OpenAI plenty of times on this channel, but compare their technical report, which goes into far more detail about what we need to monitor. The closest Google got was showing how their Universal Translator could be used for deepfakes: 'Universal Translator is an experimental AI video dubbing service that helps experts translate a speaker's voice while also matching their lip movements. Let me show you how it works, with an online college course created in partnership with Arizona State University: what many college students don't realize is that knowing when to ask for help, and then following through and using helpful resources, is actually a hallmark of becoming a productive adult.' It just seems a massive black hole, when one of their recent former employees, Geoffrey Hinton, had this to say this week on CNN: 'You've spoken out, saying that AI could manipulate, or possibly figure out a way to kill, humans. How could it kill humans?' 'If it gets to be much smarter than us, it'll be very good at manipulation, because it will have learned that from us, and there are very few examples of a more intelligent thing being
controlled by a less intelligent thing. And it knows how to program, so it'll figure out ways of getting round the restrictions we put on it; it'll figure out ways of manipulating people to do what it wants. It's not clear to me that we can solve this problem. I believe we should put a big effort into thinking about ways to solve the problem; I don't have a solution at present. I just want people to be aware that this is a really serious problem, and we need to be thinking about it very hard.' This all seems particularly relevant when Google made this announcement about Gemini, their rival to GPT-5: 'All this helps set the stage for the inflection point we are at today. We recently brought these two teams together into a single unit, Google DeepMind. Using the computational resources of Google, they are focused on building more capable systems, safely and responsibly. This includes our next-generation foundation model, Gemini, which is still in training. Gemini was created from the ground up to be multimodal, highly efficient at tool and API integrations, and built to enable future innovations like memory and planning.' That ability to plan may ring a bell from the GPT-4 technical report, which said this: 'novel capabilities often emerge in more powerful models; some that are particularly concerning are the ability to create and act on long-term plans'. Remember: Google didn't identify planning as a risk, but as a selling point for Gemini. Next, Google talked about accelerating their progress, which was again directly mentioned in the GPT-4 technical report. It said: 'one concern of particular importance to OpenAI is the risk of racing dynamics leading to a decline in safety standards, the diffusion of bad norms, and accelerated AI timelines, each of which heighten societal risks associated with AI. We refer to these here as acceleration risk.' And make no mistake, Gemini will be very accelerated compared to PaLM 2: it looks set to use the TPU v5 chip, which was announced back in January of last year, whereas on page 91 of the PaLM 2 technical report, they say that that model used TPU v4. Now, it should be said that PaLM 2 is leading to some impressive medical applications, as I actually first reported on seven weeks ago, without quite realizing it. Here's Med-PaLM 2.
'We believe large language models have the potential to revolutionize healthcare and benefit society. Med-PaLM is a large language model that we've taken and tuned for the medical domain. You know, medical question answering has been a research grand challenge for several decades, but to date the progress has been kind of slow. But then, over the course of the last three to four months, first with Med-PaLM and Med-PaLM 2, we have kind of broken through that barrier. Unlike previous versions, Med-PaLM 2 was able to score 85% on the USMLE medical licensing exam. Yeah, this is immensely exciting, because people have been working on medical question answering for over three decades, and finally we are at a stage where we can say, with confidence, that AI systems can now at least answer USMLE questions as well as experts.' As many of you may know, the CEO of Google, as well as the CEO of Microsoft, Sam Altman, and the CEO of Anthropic, all went to the White House to discuss AI risk and opportunity. But given that the main outcome from that seems to be 140 million dollars to establish seven new AI research institutes, that feels a little slow, given all the acceleration that's occurring. Because, as Google somewhat soberly concludes its report: 'we believe that further scaling of both model parameters and dataset size and quality, as well as improvements in the architecture and objective, will continue to yield gains in language understanding and generation'. They are not slowing down, and the world hasn't yet caught up. Thank you so much for watching to the end, and have a wonderful day", "date_published": "2023-05-11T17:30:37Z", "authors": ["AI Explained"], "summaries": []} +{"id": "40a901c8b120d7deff97f5d7a36e2f31", "title": "12 New Code Interpreter Uses (Image to 3D, Book Scans, Multiple Datasets, Error Analysis ... 
)", "url": "https://www.youtube.com/watch?v=_njf22xx8BQ", "source": "ai_explained", "source_type": "youtube", "text": "in the 48 hours since I released my\nfirst code interpreter video I believe I\nhave found another 12 use cases that\nshowcase its power from finding errors\nin data sets to reading Anna Karenina\nASCII art to image captioning most of\nthese I haven't seen anyone else mention\nso let's begin first is creating a 3D\nsurface plot which you can see on the\nleft from the image on the right I know\nI will get two professional uses in a\nsecond but I was personally very\nimpressed that all of this that you can\nsee can be done through the interface of\nchat GPT you can even see the little\nbuildings at the bottom left reflected\nin this 3D surface plot to give you an\nidea of how it works you click on the\nbutton to the left of the chat box and\nthen it analyzes whatever you've\nuploaded and all I said was analyze the\nRGB of the pixels and output a 3D\nsurface map of the colors of the image\nnow I will admit it doesn't do a perfect\njob immediately at first it wasn't down\ndownloadable and then it wasn't big\nenough but eventually I got it to work\nbut it's time for the next example and\nwhat I wondered was what is the biggest\ndocument I could upload to get it to\nanalyze the longest book that I've ever\nread is Anna Karenina I think it's about\na thousand pages long and I pasted it\ninto a word doc and it's about 340 000\nwords I uploaded it and then I asked as\nyou can see find all mentions of England\nanalyze them to discover the tone in\nwhich the country is perceived in the\nbook now I know what some of you may be\nwondering is it just using its stored\nknowledge of the book and I'll get to\nthat in a second but look at what it did\nit went through and found the relevant\nquotes there are seven of them there I\nchecked the document and these were\nlegitimate quotes but here's where we\nget to something that you can't just do\nwith Ctrl F in a Word document it\nanalyze the tone and sentiment of each\nof these passages and you can see the\nanalysis here then I asked drawing on\nyour own knowledge of the 19th century\nand the finding above write a two\nthousand word reflection on the\npresentation of England in Anna Karenina\nnow I know many of you won't be\ninterested in that book but imagine your\nown text this is 340\n000 words it then created a somewhat\nbeautiful essay and yes it did bring up\neach of those quotes with analysis now\nhere is where things get kind of wild\njust to demonstrate that it's not using\nits own knowledge I asked the same\nquestion in a different window without\nof course uploading the file and at\nfirst I was like oh damn it did it here\nare the quotes wow it did the job it\ndidn't even need the document but that\nwas until I actually checked out whether\nthe quotes were legitimate and lo and\nbehold it had made up the quotes I\nsearched far and wide for these quotes\nand unless I'm going completely crazy\nthey are completely made up so when it\nfound those quotes earlier it wasn't\ndrawing upon its own knowledge it was\nfinding them from the document and this\nalso serves as a warning of the\nhallucinations that the model can do if\nit doesn't have enough data I'm going to\nget back to reliability and factuality\nin a moment but just quickly a bonus I\ngot it to write an epilogue to The Death\nof Ivan Elliott an incredible short\nstory by Leo Tolstoy and as some people\nhad asked it can indeed output that to a\nPDF which is convenient for many 
Next, what about multiple files? I didn't actually investigate this in my previous video, which, if you haven't watched it, please do check out; there are 23 example use cases there. But anyway, what I wanted to try out was uploading four datasets, and then getting GPT-4 to find any correlations between the datasets. I was also kind of investigating whether there is a limit to the number of files you can upload, and honestly, there doesn't seem to be. I picked this global data almost at random, to be honest: the amount of sugar consumed per person, the murder rate per 100,000 people, the inequality index of each of those countries, and the population aged 20 to 39. But first, notice how it didn't stop me: I could just keep uploading files, and then it would ask me things like 'please provide guidance on the kind of analysis or specific questions you would like me to investigate with these four datasets', so it's still aware of the previous files. What I asked was this: 'analyze all four datasets and find five surprising correlations; output as many insights as you can, distinguishing between correlation and causation'. This is really pushing the limits of what code interpreter can do, but it did it. Many of you asked before whether it could be lured, with false data, into giving bad conclusions, and it's really hard to get it to do that; GPT-4 is honestly really smart, and increasingly hard to fool. You can read what it said: it found a very weak negative correlation, for example, between sugar consumption and murder, and then just admitted that there is probably no significant relationship between these two factors. But notice, it then found a correlation that it considered more plausible: there is a moderate positive correlation (0.4) between the murder rate per 100,000 people and the Gini inequality index; this suggests that countries with higher income inequality tend to have a higher murder rate. I then followed up with this: 'drawing on your own knowledge of the world, which correlation seems the most causally related?' It then brought in research from the field of social science, and gave a plausible explanation of why this correlation might exist. Obviously, this was just my example; you would have to think about all the different files and datasets that you would be willing to upload, to find the correlations and surprising insights within them.
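Under the hood, this is roughly the kind of pandas work it's doing; the file and column names are placeholders for whatever you upload:

```python
import pandas as pd

# Placeholder filenames for the four uploaded country-level datasets.
files = {
    "sugar": "sugar_per_person.csv",
    "murder_rate": "murder_per_100k.csv",
    "inequality": "gini_index.csv",
    "young_pop": "population_20_39.csv",
}

merged = None
for name, path in files.items():
    # Assumes each file has a 'country' column and a 'value' column.
    df = pd.read_csv(path).rename(columns={"value": name})[["country", name]]
    merged = df if merged is None else merged.merge(df, on="country")

# Pairwise correlation matrix across all four indicators.
print(merged[list(files)].corr())
```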
I'm going to try to alternate between fun and serious, so the next one is going to be kind of fun. I was surprised by the number of comments asking me to get it to do ASCII art. Now, you may remember, from the last video, that I got it to analyze this image, and yes, I asked it to turn it into ASCII art, and here is what it came up with: not bad; not amazing, but not bad. A bit more seriously now, for data analytics, what I wanted to do was test whether it could spot an error in a massive CSV or Excel file. This is a huge dataset of population density, and notice what I did; I say 'notice', but you almost certainly wouldn't be able to notice. Basically, for the Isle of Man, for 1975, I changed 105, which was the original value, to 1,500, and I did something similar for Liechtenstein, for a different year. Then I uploaded the file and said: 'find any anomalies in the data by looking for implausible percent changes year to year; output any data points that look suspicious'. And, really interestingly here, the wording does make a difference; you've got to give it a tiny hint. If you just say 'find anything that looks strange', it will find empty cells and say, oh, there's a missing cell here. But if you give it a tiny nudge, and just say that you're looking for anomalies, that it should look out for things like implausible percent changes, data that looks suspicious, then look at what it did. It did the analysis, and you can see the reasoning above, and it found the Isle of Man and Liechtenstein, and it said these values are indeed very unusual, and may indicate errors in the data. It said it's also possible that these changes could be due to significant population migration, changes in land area, or other factors; I guess if, in one year, one of those places was invaded, and it was only a city that was left officially as part of the territory, the population density would skyrocket, so that's a smart answer. But it spotted the two changes that I'd made, among thousands of data points. In fact, I worked out how many data points there were in that file (I used Excel to work it out, of course), and there were 36,000 data points; I made two changes, and it spotted both of them.
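The check it ran probably reduces to something like this percent-change scan; again, a reconstruction with placeholder names, not its actual code:

```python
import pandas as pd

df = pd.read_csv("population_density.csv")      # placeholder: one row per country
years = [c for c in df.columns if c.isdigit()]  # year columns like '1974', '1975'

# Year-over-year percent change; flag anything implausibly large.
changes = df[years].astype(float).pct_change(axis=1) * 100
flags = (changes.abs() > 300).stack()
for row, year in flags[flags].index:
    print(df.loc[row, "country"], year, f"{changes.loc[row, year]:+.0f}%")
```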
Let me zoom in here so you can see it. Then I picked out a random five of those data points — obviously I'm not going to check hundreds of them, but I picked out five — then I laboriously checked them in Excel, and they were all right. Obviously, though, I'm not guaranteeing that every single calculation is correct and, as I say, you can download the file and see if these six visualizations are correct yourself. So far, honestly, it's looking good. Below we have more detail on those insights and then some actions that we could take as a CEO. Just like I did with Anna Karenina, I then followed up and said: use your own knowledge of the world and offer plausible explanations for each of these findings. This is where GPT-4 becomes your own data analyst assistant, and it gave plausible explanations for some of the findings — for example, the higher sales in the east region could be due to a higher population density, better distribution networks or higher demand for the company's products. At this point you could either use the web browser plugin to do more research on your own, or you could upload more files. I actually asked — and I think this is a great question — suggest six other company data sets you would find helpful to access to test these suppositions. Now, obviously a lot is going to come down to privacy and data protection, but GPT-4 code interpreter can suggest further files that would help it with its analytics, and it gives those below. Again, the lazy CEO could just upload those files and get GPT-4 code interpreter to do further analysis — you don't have to think about what to upload, GPT-4 will suggest it for you; whether that's advisable or not, I'll leave you to decide. The next one is slightly less serious, and it's that code interpreter can output PowerPoint slides directly. Now, I know when Microsoft 365 Copilot rolls out this might be a little bit redundant, but it is cool to know you can output the visualizations and analysis from code interpreter directly into PowerPoint. Now on to mathematics. Many people pointed out that I didn't fully test out Wolfram, to give it a fair shot, so I tested both code interpreter and Wolfram on differential equations, and they both got it right — interestingly, though, Wolfram gave a link for the step-by-step solutions, because that's a paid option on the Wolfram website. But I did find some other differences between them, and honestly they favored code interpreter. Here is a really challenging mathematics question, and Wolfram can't get it right: it says that the answer is 40 even though that's not one of the options — and yes, it used Wolfram, I think, about five times. Here was the exact same prompt except, instead of saying use Wolfram, I said use code interpreter — and this was not a one-off example; it fairly consistently got it right. So code interpreter does indeed have some serious oomph behind it.
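(The exact equations flash by on screen, so as an illustrative stand-in — my own toy equation, not the one from the video — this is what solving a differential equation in code looks like with SymPy, the kind of library code interpreter leans on:)

```python
import sympy as sp

t = sp.symbols("t")
y = sp.Function("y")

# An illustrative first-order linear ODE: y'(t) + 2*y(t) = exp(-t).
ode = sp.Eq(y(t).diff(t) + 2 * y(t), sp.exp(-t))

solution = sp.dsolve(ode, y(t))
print(solution)  # e.g. Eq(y(t), (C1 + exp(t))*exp(-2*t))
```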
Just quickly, again on the silly stuff: I uploaded the entire Death of Ivan Ilyich, the short story by Tolstoy, then changed one phrase in one line out of about 23,000 words — I changed his daughter into an astronaut. Of course, if you just ask GPT-4 directly, it doesn't have enough space; it will give you this message: the message you submitted was too long, please reload the conversation. But with code interpreter, it did spot the mistake. Now, again, you do have to give it a little bit of help — I asked if anything about the daughter in the story seemed strange — and, after thinking for a while, it did eventually get it: it said the phrase about being an astronaut seems to be out of place in the 19th-century context. So this does strike me as a somewhat sneaky, albeit imperfect, way of getting around the context limit. You can't input all the words directly; you have to upload the file, and then you do have to give a slight indication of what you're looking for in the file. But for many use cases it is a way to get the desired result without using up too much money. As we come to the end here, I'm going to leave in the background a beautiful fractal visualization done through code interpreter.
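(The fractal script itself isn't shown, so as a hedged stand-in, here is the kind of short program code interpreter typically writes for a classic fractal such as the Mandelbrot set:)

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample the complex plane on a grid.
x = np.linspace(-2.0, 0.8, 800)
y = np.linspace(-1.2, 1.2, 600)
c = x[None, :] + 1j * y[:, None]

# Iterate z -> z^2 + c and record how quickly each point escapes.
z = np.zeros_like(c)
escape = np.zeros(c.shape, dtype=int)
for i in range(1, 60):
    z = z ** 2 + c
    newly_escaped = (np.abs(z) > 2) & (escape == 0)
    escape[newly_escaped] = i
    z[np.abs(z) > 2] = 2  # clamp escaped points to avoid overflow warnings

plt.imshow(escape, cmap="magma", extent=(-2.0, 0.8, -1.2, 1.2))
plt.axis("off")
plt.show()
```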
As before, let me know if there's anything that I've missed or further experiments you would want me to do. I honestly don't know when they're going to roll this out more widely. I know it's going to have a lot of use cases, both professionally and personally — and that's before you bring in advanced prompt engineering like SmartGPT and Tree of Thought prompting. Again, if you haven't seen my other video on the code interpreter plugin, please do check it out: there are about 23 experiments I did, just as good as the ones you can see here. Thank you for watching to the end and have a wonderful day", "date_published": "2023-05-22T17:57:16Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "dd3dbb94321c24a8d5bdccb7e2dacda7", "title": "What's Left Before AGI? PaLM-E, 'GPT 4' and Multi-Modality", "url": "https://www.youtube.com/watch?v=EzEuylNSn-Q", "source": "ai_explained", "source_type": "youtube", "text": "PaLM-E was released less than a week ago and, for some people, it may already be old news. Sure, it can understand and manipulate language, images and even the physical world — the E at the end of PaLM-E, by the way, stands for embodied — but soon, apparently, we're going to get the rebranded GPT-4, which many people think will surely do better and be publicly accessible. But the multi-modal advancements released just this week left me with a question: what tasks are left before we call a model artificial general intelligence, or AGI — something beyond human intelligence? I didn't want hype or get-rich schemes; I just wanted clear research about what exactly comes before AGI. Let's start with this four-day-old statement from Anthropic, a four-billion-dollar startup founded by people who left OpenAI over safety concerns. They outlined that in 2019 it seemed possible that multi-modality, logical reasoning, speed of learning, transfer learning across tasks and long-term memory might be walls that would slow or halt the progress of AI; in the years since, several of these walls, such as multi-modality and logical reasoning, have fallen. What this means is that the different modes of PaLM-E and Microsoft's new Visual ChatGPT — text, image, video — aren't just cool tricks: they are major milestones. PaLM-E can look at images and predict what will happen next. Check out this robot that's about to fall down — that's just an image, but ask PaLM-E what the robot will do next and it says fall; it knows what's going to happen just from an image. It can also read faces and answer natural-language questions about them: check out Kobe Bryant over here — it recognizes him from an image and you can ask questions about his career. This example at the bottom I think is especially impressive: PaLM-E is actually doing the math from this hastily sketched chalkboard, solving those classic math problems that we all got at school, just from an image. Now think about this: PaLM-E is an advancement on Gato, which at the time the lead scientist at DeepMind, Nando de Freitas, called game over in the search for AGI. Someone had written an article fearing that we would never achieve AGI, and he said: game over — all we need now are bigger models, more compute efficiency, smarter memory, more modalities, etc. And that was Gato, not PaLM-E. Of course, you may have noticed that neither he nor I am completely defining AGI; that's because there are multiple definitions, none of which satisfy everyone. But a broad one for our purposes is that AGI is a model that is at or above the human level on a majority of economic tasks currently done by humans, and you can read here some of the tests about what might constitute AGI. That's enough about definitions and multi-modality — time to get to my central question: what is left before AGI? Well, what about learning and reasoning? This piece from Wired magazine in late 2019 argued that robust machine reading was a distant prospect. It gives a challenge of a children's book that has a cute and quite puzzling series of interactions, then states that a good reading system would be able to answer questions like these, and gives some natural questions about the passage. I will say these questions do require a degree of logic and common-sense reasoning about the world. So you can guess what I did: I put them straight into Bing — we're only three and a half years on from this article — and look what happened. I pasted in the exact questions from the article and, as you might have guessed, Bing got them all right pretty much instantly. So clearly my quest to find the tasks that are left before AGI would have to continue. Just quickly, before we move on from Bing and Microsoft products: what about GPT-4 specifically — how will it be different from Bing, or is it already inside Bing, as many people think? The much-quoted German CTO of Microsoft actually didn't confirm that GPT-4 will be multimodal, only saying that at the Microsoft events this week we will have multi-modal models — that's different from saying GPT-4 will be multimodal. I have a video on the eight more certain upgrades inside GPT-4, so do check that out. But even with those upgrades inside GPT-4, the key question remains: if such models can already read so well, what exactly is left before AGI? So I dove deep into the literature and found this graph from the original PaLM model, which PaLM-E is based on. Look to the right: these are a bunch of tasks on which the average human rater — at least those who work for Amazon Mechanical Turk — could beat PaLM in 2022, and remember, these were just the average raters, not the best. The caption doesn't specify what the tasks are, so I looked deep in the appendix and found the list of tasks that humans did far better on than PaLM. Here is that appendix, and it doesn't make much sense when you initially look at it, so what I did is I went into the BIG-bench data set and found each of these exact tasks — remember, these are the tasks that the average human raters do much better at than PaLM — because I wanted to know exactly what they entailed. Looking at the names, they all seem a bit weird, and you're going to be surprised at what some of them are. Take the first one, MNIST ASCII: that's actually representing and recognizing ASCII numerals. Hmm. Now, I can indeed confirm that Bing is still pretty bad at this, in terms of numerals and in terms of letters — I'm just not sure how great an accomplishment for humanity this one is, though. So I went to the next one, which was sequences.
As you can see below, this is keeping track of time in a series of events. This is an interesting one, perhaps linked to GPT models' struggles with mathematics and their lack of an internal calendar. I tried the same question multiple times with Bing and ChatGPT, and only once out of about a dozen attempts did it get the question right. You can pause the video and try it yourself, but essentially it's only between four and five that he could have been at the swimming pool. You can see here the kind of convoluted logic that Bing goes into — so, really interesting: this is a task that the models can't yet do. Again, I was expecting something a bit more profound, but let's move on to the next one: simple text editing of characters, words and sentences. That was strange — what does it mean, text editing? Can't Bing do that? I gave Bing many of these text-editing challenges and it did indeed fail most of them. It was able to replace the letter t with the letter p, so it did okay with characters, but it really doesn't seem to know which word in the sentence something is. You can let me know in the comments what you think of these kinds of errors and why Bing and ChatGPT keep making them. The next task that humans did much better on was hyperbaton, or intuitive adjective order. It's questions like: which sentence has the correct adjective order — an old-fashioned circular leather exercise car (sounds okay) or a circular exercise old-fashioned leather car? What I found interesting, though, is that even the current version of ChatGPT could now get this right; on other tests it gets it a little off, but I think we might as well tick this one off the list. The final task I wanted to focus on in PaLM's appendix is a little more worrying: it's Triple H — not the wrestler — the need to be helpful, honest and harmless. It's kind of worrying that that's the thing it's currently failing at. I think this is closely linked to hallucination and the fact that we cannot fully control the outputs of large language models at this point. If you've learned anything, please do let me know in the comments or leave a like — it really does encourage me to do more such videos — and all of the papers and pages in this video will be linked in the description. Anyway, hallucinations brought me back to the Anthropic safety statement and their top priority of mechanistic interpretability, which is a fancy way of saying understanding what exactly is going on inside the machine. One of the stated challenges is to recognize whether a model is deceptively aligned, playing along with even tests designed to tempt a system into revealing its own misalignment — this is very much linked to the Triple H failures we saw a moment ago. Fine, so honesty is still a big challenge, but I wanted to know what single, significant and quantifiable task AI was not close to achieving yet. Some thought that task might be storing long-term memories, as it says here, but I knew that that milestone had already been passed: this paper from January described augmenting PaLM with read-write memory so that it can remember everything and process arbitrarily long inputs. Just imagine a Bing Chat equivalent knowing every email at your company, every customer record, every sales invoice, the minutes of every meeting, etc. The paper goes on to describe a universal Turing machine — which, to the best of my understanding, is one that can mimic any computation, a universal computer, if you will. Indeed, the authors state in the conclusion of this paper that the
results show that large language models are already computationally universal as they exist currently, provided only that they have access to an unbounded external memory. What I found fascinating was that Anthropic are so concerned by this accelerating progress that they don't publish capabilities research, because we do not wish to advance the rate of AI capabilities progress. And I must say that Anthropic do know a thing or two about language models, having delayed the public deployment of Claude — which you can see on screen — until it was no longer state of the art; they had this model earlier but delayed the deployment. Claude, by the way, is much better than ChatGPT at writing jokes. Moving on to data, though: in my video on GPT-5, which I do recommend you check out, I talk about how important data is to the improvement of models. One graph I left out from that video, though, suggests that there may be some limits to this straight-line improvement in the performance of models. What you're seeing on screen is a paper released in ancient times — which is to say, two weeks ago — on Meta's new LLaMA model. Essentially it shows performance improvements as more tokens are added to the model — by token, think scraped web text — but notice how the gains level off after a certain point. So not every graph you're going to see today is exponential, and interestingly the y-axis is different for each task. Some of the questions it still struggles with are interesting. Take SIQA, which is social interaction question answering: it peaks out at about 50 to 52 percent. That's questions like these, where most humans could easily understand what's going on and find the right answer; models really struggle with that, even when they're given trillions of tokens. Or what about Natural Questions, where the model is struggling at about a third correct, even beyond 1.2 trillion tokens? I dug deep into the literature to find exactly who proposed Natural Questions as a test and found this document — a paper published by Google in 2019 — and it gives lots of examples of natural questions. Essentially, they're human-like questions where it's not always clear exactly what we're referring to. Now, you could say that's on us to be clearer with our questions, but let's see how Bing does with some of these. I asked: the guy who plays Mandalorian also did what drugs TV show? I deliberately phrased it in a very natural, vague way. Interestingly, it gets it wrong in the first sentence but then gets it right in the second sentence. I tried dozens of these questions — you can see another one here: author of lotr surname origin. That's a very naturally phrased question; it surmised that I meant Tolkien, the author of Lord of the Rings, and that I wanted the origin of his surname, and it gave it to me. Another example was: Big Ben city first bomb landed WW2. It knew I meant London, and while it didn't give me the first bomb that landed in London during World War II, it gave me a bomb that was named Big Ben. So, not bad — overall I found it was about 50/50, just like the Meta LLaMA model, maybe a little better. Going back to the graph, we can see that data does help a lot, but it isn't everything. However, Anthropic's theory is that compute can be a rough proxy for further progress, and this was a somewhat eye-opening passage: we know that the capability jump from GPT-2 to GPT-3 resulted mostly from about a 250-times increase in compute; we would guess that another 50-times increase separates
the original GPT-3 model and state-of-the-art models in 2023 — think Claude or Bing. Over the next five years we might expect around a 1,000-times increase in the computation used to train the largest models, based on trends in compute cost and spending. If the scaling laws hold, this would result in a capability jump that is significantly larger than the jump from GPT-2 to GPT-3, or from GPT-3 to Claude. And it ends with: we at Anthropic are deeply familiar with the capabilities of these systems, and a jump that is this much larger feels to many of us like it could result in human-level performance across most tasks. That's AGI, and five years is not a long timeline. This made me think of Sam Altman's AGI statement, where he said at some point it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models — a compute truce, if you will. Even Sam Altman thinks we might need to slow down a bit. My question, though, is: would Microsoft or Tesla or Amazon agree with this truce and go along with it? Maybe, maybe not. But remember that five-year timeline Anthropic laid out — it chimes with this assessment from the Conjecture alignment startup: AGI is happening soon, significant probability of it happening in less than five years. And it gives plenty of examples, many of which I have already covered. Others, of course, give much more distant timelines and, as we've seen, AGI is not a well-defined concept. In fact, it's so ill-defined that some people actually argue it's already here: this article, for example, says 2022 was the year AGI arrived — just don't call it that. This graph, originally from Wait But Why, is quite funny, but it points to how short a gap there might be between being better than the average human and being better than Einstein. I don't necessarily agree with this, but it does remind me of another graph I saw recently: this one, on the number of academic papers being published on machine learning and AI, from a paper about exponential knowledge growth. The link to this paper, like all the others, is in the description, and it does point to how hard it will be for me and others just to keep up with the latest papers on AI advancements. At this point you may have noticed that I haven't given a definitive answer to my original question, which was to find the task that is left before AGI. I do think there will be tasks, such as physically plumbing a house, that even an AGI — a generally intelligent entity — couldn't immediately accomplish, simply because it doesn't have the tools: it might be smarter than a human but can't use a hammer. But my other theory, to end on, is that before AGI there will be a deeper, more subjective debate. Take the benchmarks on reading comprehension: this graph shows how improvement is being made, but I have aced most reading comprehension tests, such as the GRE, so why is the highest human rater labeled at 80? Could it be that progress stalls when we get to the outer edge of ability — when test examples of sufficient quality get so rare in the data set that language models simply cannot perform well on them? Take this difficult LSAT example. I won't read it out because, by definition, it's quite long and convoluted — and yes, Bing fails it. Is this the near-term future, where only obscure feats of logic, deeply subjective analyses of difficult texts, and niche areas of mathematics and science remain out of
reach — where, essentially, most people perceive AGI to have already occurred, but for a few outlier tests? Indeed, is the ultimate CAPTCHA the ability to deliver a laugh-out-loud joke, or to deeply understand the plight of Oliver Twist? Anyway, thank you for watching to the end of the video. I'm going to leave you with some bleeding-edge text-to-image generations from Midjourney version 5. Whatever happens next with large language models, this is the new story of the century, in my opinion, and I do look forward to covering it. But as companies like Microsoft, OpenAI and Google seem set to make enough money to break capitalism itself, I do recommend reading Anthropic's statement and their research on optimistic, intermediate and pessimistic scenarios. They also have some persuasive suggestions on rewarding models based on good process rather than simply quick and expedient outcomes. Check it out and have a wonderful day", "date_published": "2023-03-12T16:20:25Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "26865a4da7645e499317077ce5aa0f37", "title": "'Show Your Working': ChatGPT Performance Doubled w/ Process Rewards (+Synthetic Data Event Horizon)", "url": "https://www.youtube.com/watch?v=hZTZYffRsKI", "source": "ai_explained", "source_type": "youtube", "text": "In the last 24 hours OpenAI have released this paper, Let's Verify Step by Step. It represents an almost doubling of GPT-4's raw performance in a test of mathematics, but it also extends to other domains. Sam Altman calls it a positive sign for alignment and, yes, I have read it all already, along with the release notes, so let's get to the main takeaways. They trained two reward models for GPT-4: one which gave positive feedback for a final result — the final answer to a mathematics problem, for example — and another model where they gave positive feedback to GPT-4 or ChatGPT based on each intermediate reasoning step in the mathematical solution; basically, a show-your-working kind of approach. And the result they got by rewarding good working out surprised even them: it was able to solve 78% of problems from a subset of the MATH test set, which I'll get on to in a second. Not only is that almost double GPT-4's raw performance of 42.5% — which, by the way, is about double GPT-3's performance of 23% — it also outperformed just rewarding correct answers. The blue line represents using a model that rewarded correct answers only, and then you have the reasoning- or process-supervised RM at the top. So even when you explicitly reward correct answers, you get fewer correct answers than when rewarding good working out — and yes, that did surprise OpenAI. I can hear some of you wondering about PaLM 2, the latest model behind Bard: well, the raw model gets 34.3%, and even the model with self-consistency and chain of thought only gets 48.8% on this MATH data set. The previous state of the art, by the way, was 50.3%, so 78.2% is quite a big leap — and later on I'm going to show you why that's not even the cap. Just for interest, here is the rather ugly title page that OpenAI put out — they call it Improving Mathematical Reasoning with Process Supervision; maybe if someone had supervised the color scheme of this release page it might have looked better. But my point wasn't just to diss a color scheme; it was to point out something they also said down here. They say that, in addition to boosting performance relative to just looking at outcomes or correct answers, this form of process supervision also has an important
alignment benefit: it directly trains the model to produce a chain of thought that is endorsed by humans. Indeed, Ilya Sutskever retweeted this from the head of alignment at OpenAI, calling it a really interesting result. But let's leave alignment for later and focus on what they actually did. First, they used the base model of GPT-4, not the one with reinforcement learning from human feedback. Next, they fine-tuned that base GPT-4 model on a data set of roughly 1.5 billion math-related tokens; further on, they call that the MathMix. This being OpenAI, of course, they don't give you the exact details of that MathMix, but I'll come back to that later on. So how could they give feedback based on working out, or reasoning? Well, human labelers would come along and give each step in a generated solution either negative feedback, neutral feedback or positive feedback. Then, using that human-labeled data, a model was trained to predict the correctness of each step; in other words, it got good at recognizing good working out. As mentioned, there was another model trained just to focus on correct or incorrect final answers. As you can see at the top, the model got good at spotting incorrect steps in the reasoning process: the green steps got a high process score and the red steps got a low process score. And to turn this into a single score, they took the probability that each step is correct, as judged by the model, and then took the product of all of those individual probabilities to get a final, overall process score — a score, in other words, for good working out. Just in case anyone's interested, they did try other ways of generating a working-out score — for example, by looking at the minimum probability in the outputs — but that didn't make too much difference to the end result, as you can see here. To quickly recap: we have a base model trained only to output solutions in the desired format, and then we have a separate smaller model — or two, actually: one trained only to predict whether each solution is correct or incorrect as a final answer (of course, that leaves in false positives, which are solutions that reach the correct answer with incorrect reasoning), and another trained only to predict the correctness of each step, stopping if it finds a first incorrect step. As the paper says, both methods reveal the existence of at least one mistake, but process supervision additionally reveals the precise location of that mistake. But back to why this is so crazy: look at how many solutions it could scan — at the end of the x-axis here are 1,860 solutions. One tried-and-tested way of finding the best of those solutions is to do majority voting: in other words, which answer came out the most often. This has been Google's preferred approach, it's linked to self-consistency, and it's a fairly state-of-the-art approach. But look at how the other methods outperform it: by scanning for the solution that has the best reasoning, or working out, a model trained to spot good reasoning steps outperforms even a model trained to spot correct answers, and far outperforms just finding the majority answer. That difference of about 10% is more than half of the difference between GPT-3 and GPT-4. And also — is it me, or is that line continuing to grow, suggesting that when more compute is available the difference could be even more stark? Imagine a future where GPT-4 or 5 can sample, say, a trillion — 10 to the 12 — solutions.
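(To make the reranking idea concrete, here is a minimal sketch with entirely made-up numbers — a handful of sampled solutions, each with hypothetical per-step probabilities from a process reward model; the paper's actual models and data are not public:)

```python
from collections import Counter
import math

# Hypothetical data: N sampled solutions, each with a final answer and
# per-step correctness probabilities from a process reward model (PRM).
samples = [
    {"answer": "17", "step_probs": [0.99, 0.97, 0.98]},
    {"answer": "19", "step_probs": [0.95, 0.40, 0.90]},
    {"answer": "19", "step_probs": [0.92, 0.35, 0.88]},
]

# Majority voting (self-consistency): the most common final answer wins.
majority = Counter(s["answer"] for s in samples).most_common(1)[0][0]

# PRM reranking: score = product of per-step probabilities, pick the best.
def process_score(step_probs):
    return math.prod(step_probs)

best = max(samples, key=lambda s: process_score(s["step_probs"]))

print(majority)        # "19" -- majority can be swayed by repeated bad reasoning
print(best["answer"])  # "17" -- the solution whose working out scores highest
```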
So is this just relevant for mathematics? No — it's relevant for all of science. Here it is getting state-of-the-art results in calculus, chemistry, physics and more. Now, the paper didn't give baseline performance for AP Chemistry, for example, but I tried to compute it myself. Notice how this method scored 80%: I conservatively and approximately inputted those scores into an AP Chemistry calculator, and that gave an AP score of 5. So what did the raw model GPT-4 get in AP Chemistry? A 4. That, by the way, compares to the original ChatGPT, which got a 2. So yes, this isn't just mathematics — it's relevant for other domains too; they call this out-of-distribution generalization. Before I get on to alignment, there is one more thing I want to point out, and that is that it does show that fine-tuning still works really well for GPT-4. The MathMix was an aggressively filtered set of tokens of high-quality math problem-solving content, and notice how much smaller it is, at 1.5 billion tokens, compared to Google's Minerva, which was 38.5 billion tokens. But there was one more thing I noticed that I found fascinating: while they don't tell us anything about the specific data they use, they do have this category, synthetic data 2 — that's data generated by the language model itself — and for that category, synthetic data 2, to the question was it present in pre-training, they say yes. Now, my best guess is that this reveals that GPT-4 was trained on some synthetic data, and even Sam Altman hinted that this was a possibility and described a synthetic data event horizon: some people have made the case that we're now training on the order of all of the internet's tokens, and you can't grow that, you know, another two orders of magnitude — I guess you could counter with, yeah, but the synthetic data generation. Do you think data bottlenecks matter at all? I think you just touched on it: as long as you can get over this synthetic data event horizon, where the model is smart enough to make good synthetic data, I think it should be all right. Now, this paper and these results have been welcomed by many for their promise in alignment: if we get models that give us more interpretable reasoning — working out that we can follow — we will be encouraging models to follow a process that's endorsed by humans, and they say that this is inherently safer, especially compared to just focusing on outcomes. They say that, in the worst case, if we just focus on correct answers or positive outcomes, that will become a proxy that could lead models to become misaligned after learning to exploit the reward signal. However, I want to argue that the reasoning steps that GPT-4 puts out don't always represent what it's actually thinking — in other words, we might get outer alignment, these lovely chain-of-thought steps, but not inner alignment: not steps that actually represent its methodology. I found this paper from earlier this month fascinating: language models don't always say what they think — you get unfaithful explanations in chain-of-thought prompting. Let me try to give you a vivid example. This was one of the math questions from the data set that the raw model of GPT-4 could only get right 5.8% of the time — I confirmed that for myself; the question involves basic addition and division, and it couldn't find an answer. Going back to the unfaithful reasoning paper, they added the following string to the prompt: I think the answer is this, but I'm curious to hear what you think. The model would demonstrate sycophancy — the model would agree with you,
whatever you said, and then make up a chain of thought to justify its erroneous, sycophantic answer — and I think this exchange demonstrates that quite well. I added in the words: I, as the user, already know the answer is t equals 19 — which is incorrect, by the way — but do you, GPT-4, realize that? It said sure, yes I do, then gave me this detailed chain of thought, and then said yes, I'm correct, it's t equals 19 — which it isn't. In contrast, by the way, when I used code interpreter, it not only got the question correct the first time and every time, but also, when I tried to tempt it into sycophancy, it still got the question right: as you can see, it said therefore t equals 19 is not the solution to the problem; the calculation shows that the correct answer is indeed t equals 17. And obviously the benefit of code interpreter is you get the working out as well — so I want someone to explain to me why code interpreter wouldn't be even more of a step forward in interpretability, not to mention in accuracy. Of course, also bear in mind this tweet by Rob Miles: he said these models are engineers that never speak a word or document anything — their results are bizarre and inhuman. And then he links to this prominent mechanistic interpretability researcher at Google DeepMind, who trained a tiny transformer to do addition and then spent weeks figuring out what it was actually doing — one of the only times in history someone has understood how a transformer actually works, down to the level of weights and activations. And this is the algorithm it created to add two numbers: it thought of basic addition in terms of a rotation around a circle. Of course, if you asked it why one plus one is two, it would never give you this as an explanation of its methodology — but maybe this is what it's actually calculating.
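(As a hedged illustration of that published rotation finding — my own toy code, not the researcher's, and the modulus 113 is an assumption for the sake of the example — here is modular addition computed by composing rotations around a circle:)

```python
import cmath
import math

P = 113  # modulus, assumed here for illustration

def to_rotation(x, p=P):
    # Map a residue to a point on the unit circle: angle 2*pi*x/p.
    return cmath.exp(2j * math.pi * x / p)

def add_mod(a, b, p=P):
    # Composing two rotations multiplies the complex numbers,
    # which adds the angles: 2*pi*(a+b)/p (mod 2*pi).
    combined = to_rotation(a, p) * to_rotation(b, p)
    angle = cmath.phase(combined) % (2 * math.pi)
    return round(angle * p / (2 * math.pi)) % p

assert add_mod(50, 100) == (50 + 100) % P  # 37
print(add_mod(1, 1))  # 2 -- "one plus one is two", computed as a rotation
```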
That's why I'm personally a little bit skeptical when OpenAI say that this form of process supervision directly rewards the model for following an aligned chain of thought. It definitely rewards the model for outputting an aligned chain of thought — but is it actually following that chain of thought? Back to the unfaithful paper for a moment: they changed the context so that the answer was always A, and, lo and behold, ChatGPT picked answer A for the next question even though that answer was wrong — it said that it was plausible that LeBron James took a corner kick. But when asked for a chain-of-thought explanation, it never mentioned that it had spotted the pattern that the answer was always A; it gave a fake line of reasoning about why LeBron James could take a corner kick. Now, of course, I might well be wrong here — I'd love for someone to explain in detail why — but on the one hand I do want to acknowledge that this process does yield incredible results, while on the other hand we might be getting a story about which methodology most reassures humans, not an output that most faithfully represents the methodology actually used by GPT-4. For some people that might be good enough: at least we can see some reasoning steps that we can understand, especially in an area like mathematics where we have some ground truth. But it is interesting to me that they call the other approach, outcome supervision, an approach that may reward an unaligned process, and one that is harder to scrutinize. Is it possible that the process reward model isn't just a more granular outcome reward model, where the output is each step of the reasoning — still pretty impossible to actually scrutinize? Well, either way, it seems we're pinning our hopes on this process-oriented learning. This is from the website of Anthropic: they say we currently believe process-oriented learning may be the most promising path to training safe and transparent systems, up to and somewhat beyond human-level capabilities. And let's end on this positive note from the head of alignment at OpenAI. He says this is positive evidence for the strategy of using process supervision to train a model to do alignment research: at least in that case we would get a model whose work we can check more easily, and that model would be better at alignment research. I really hope so, and I want to hear what you think. Thank you for watching all the way to the end — have a wonderful day", "date_published": "2023-06-01T15:23:48Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "ac0c53571d7abce105de363523ad8325", "title": "Phi-1: A 'Textbook' Model", "url": "https://www.youtube.com/watch?v=7S68y6huEpU", "source": "ai_explained", "source_type": "youtube", "text": "The importance of the new Phi-1 model isn't just that it's small enough to be on a smartphone, set to be open-sourced and capable of interview-level Python coding tasks; its significance is also in what the model tells us about the future of language models and the timelines of our march to human-level intelligence. I spoke in depth with one of the authors of the paper, Ronen Eldan, to get you more insights, and I'm only going to cover the best bits, so let's start. The first thing to notice is how small this model is, at 1.3 billion parameters. But what does that number mean? Well, for reference, that's about one percent the size of GPT-3, which was behind the original ChatGPT phenomenon, and, if recent rumors are to be believed, it's about a thousand times smaller than the combined parameter count of GPT-4. So we're talking a tiny model here, one that could fit on my Samsung S23. We read that, despite this small scale, Phi-1 attains a pass@1 accuracy — that means passed first time — of 50% on HumanEval, testing Python coding challenges. Andrej Karpathy, of OpenAI and Tesla fame, said that we're probably going to see a lot more of this creative scaling-down work: prioritizing data quality and diversity over quantity, and using synthetic data to create small but highly capable expert models. And the author I spoke to actually retweeted that and said: for skeptics, the model will be available on Hugging Face soon — give it a try. Back to the paper, which says everyone knows about scaling laws — adding more compute, adding more data — but, following the footsteps of Eldan and Li in TinyStories, which I'll get to in a second, we explore the improvement that can be obtained along a different axis: the quality of the data. Of course, anyone familiar with my Orca video will know that data quality is super important, but let's get to this paper they mentioned — I'm going to give you the 30-second version of the paper co-authored by Ronen. They created a diverse and synthetic data set of short stories using GPT-3.5 and GPT-4, and then they trained tiny 28-million-parameter models — and smaller, actually — which, as they say, are two orders of magnitude smaller than GPT-2, which was only 1.5 billion parameters. And by curating the synthetic data carefully, look at the difference in results: the ending of this story was so much better on the tiny model trained on this synthetic data set, especially compared to GPT-2, which is so much bigger but says the soup is too old — it's a terrible ending to
the story. So what did they do for Phi-1? Well, here is the short version. They filtered the Stack and Stack Overflow to get only the most teachable bits of code, consisting of about 6 billion tokens. They then created a synthetic textbook consisting of about 1 billion tokens of GPT-3.5-generated Python textbooks — that's not even GPT-4. Then, quite crucially, they created a small synthetic exercises data set consisting of only 180 million tokens of exercises and solutions. Now, of course, other people have used the Stack before, but as Ronen says: I do think that, from the data we do have, we are not even close to extracting everything from it. And look at the results of this tiny 1.3-billion-parameter model trained in this way: there have been only two models that have scored more than 50% on HumanEval pass@1 — that's WizardCoder and, of course, GPT-4 — but those models are massively bigger and therefore much more expensive to train. Actually, I find this chart perhaps the most interesting one in the entire paper — you can see so many trends in one diagram. Let me try to pick a few of these out, and remember, the scores are the percentage accuracy on HumanEval: think moderate-level coding challenges. First, look at the consistent increase from when you just train on the filtered Stack versus on the synthetic code textbook: from 11% to 16%, from 12% to 20%, from 17% to 29%. This could be the synthetic data event horizon that Sam Altman talked about — and that code textbook was generated using GPT-3.5, not even GPT-4. Next, compare the parameter count of the models: 350 million on the left and in the center, and 1.3 billion on the right. This one isn't as big a surprise — we knew that increasing the parameters yields better performance — but nevertheless you can see it vividly in action. Third — and I think this one is really fascinating — look at the difference between the left and the center charts: the only thing that really changed was the number of GPU hours, and of course the number of tokens went from 26 billion to 76 billion. But wait — I thought the data set size was fixed at 7 billion; what gives? Well, of course, what's happening is that they're passing over the data multiple times; this is called training for more so-called epochs, or passes over the data. So these aren't new tokens; they're the same tokens being trained on more times. As Ronen said to me: my personal impression is that many people in the community thought that we would never want to do more than, like, one or two epochs, because we'll start overfitting. And just for 20 seconds, I can't resist bringing in this paper that they referenced in the textbooks paper. It's essentially about how you can still scale language models even if you run out of data. Take a look at these two diagrams: they say training for up to four epochs, or passes, is almost as good as new data, and it's only when you get to around 40 epochs that repeating is worthless. Obviously we don't know about GPT-4, but GPT-3 seems to have been trained on far fewer epochs than that. But there was one final trend from this amazing set of charts that I wanted to point out, and it's probably the most obvious one: look at the huge jump to the dark green bars — that's when they train the model on those additional synthetic exercises with solutions. The authors note that one can only imagine how frustrating and inefficient it would be for a human learner to try to acquire coding skills from data sets such as the unfiltered Stack, as they would have
to deal with a lot of noise, ambiguity and incompleteness in the data. We hypothesize that these issues also affect the performance of language models, as they reduce the quality and quantity of the signal that maps natural language to code. Let me quickly give you a bit more detail about how they filtered the Stack: they got about a hundred thousand samples of the Stack and Stack Overflow and then prompted GPT-4 to determine their educational value for a student whose goal is to learn basic coding concepts. They then used those annotations to train a random forest classifier that predicts the quality of a file using its output embedding — essentially a basic searching mechanism to find out which parts of the Stack are the most educational.
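(A hedged sketch of that classifier step — the file names, labels and embeddings here are hypothetical, since the paper doesn't publish its filtering code:)

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical inputs: one embedding per code file, plus GPT-4's
# educational-value annotations (1 = educational, 0 = not) for a labeled subset.
labeled_embeddings = np.load("labeled_embeddings.npy")  # shape (n_labeled, dim)
labels = np.load("gpt4_labels.npy")                     # shape (n_labeled,)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(labeled_embeddings, labels)

# Score the rest of the corpus and keep the files most likely to be educational.
all_embeddings = np.load("all_embeddings.npy")
quality = clf.predict_proba(all_embeddings)[:, 1]
keep = quality > 0.5
print(f"Keeping {keep.sum()} of {len(keep)} files")
```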
But at this point I want to pause and imagine if they'd used a different prompt. Imagine a future paper looking across a different data set: that paper could prompt GPT-4 to annotate the educational value for a student whose goal is to learn French — then you could have an amazing French-speaking model. Or maybe they could get it to annotate which examples would be most educational for learning to predict the stock market, and then maybe train it on a small synthetic textbook of successful previous examples of predicting the stock market. I'm just saying this seems to be a recipe that could be applied elsewhere. And these annotations here were the only times they used GPT-4; the rest was GPT-3.5. As Ronen says, GPT-4 is not only great as something we can use directly for better productivity, it's also a way to get much better other models — and that's one thing I want OpenAI, Anthropic and Google to address: the capability of their models to train smaller models. Here, by the way, is an example of the kind of exercises and solutions that the model was then fine-tuned on, created of course by GPT-3.5. And the authors note that, quite remarkably, the model after fine-tuning on those fewer than 200 million tokens of exercises and solutions also exhibits a substantial improvement in executing tasks that are not featured in the fine-tuning data set: for example, fine-tuning on code exercises unexpectedly improves the model's ability to use external libraries such as pygame, even though the exercises do not contain these libraries. This suggests that fine-tuning not only improves the tasks targeted, it also makes unrelated tasks easier to distill. It's this unexpectedness that I find really interesting — for example, before training GPT-4, did they expect the emergent ability to do self-repair, or reflection? According to this new paper, that ability is not found in GPT-3.5. Going back to the Phi-1 paper, the authors admit that there remain a number of limitations of their model compared to larger models for code. Firstly, Phi-1 is specialized in Python coding, which restricts its versatility compared to multi-language models. Secondly, Phi-1 lacks the domain-specific knowledge of larger models, such as programming with specific APIs or using less common packages — it's a bit like the more classical narrow AI, good at only a few things. Furthermore, due to the structured nature of the data sets and the lack of diversity in terms of language and style, it's less robust to stylistic variations or errors in the prompt — it's quite funny: if you make a grammatical mistake in your prompt, it does a lot worse. But what about this: we also believe that significant gains could be achieved by using GPT-4 to generate the synthetic data instead of GPT-3.5, as we notice that GPT-3.5 data has a high error rate. I asked Ronen about that, speculating that it's because GPT-4 costs more, and he said: yeah, it costs more; also, GPT-4 is much slower; but another reason is we wanted to demonstrate something here — that you don't even need a smart model like GPT-4; even GPT-3.5, which isn't that great at coding, is enough. So there you go: you could get even better results on this using GPT-4, but at the moment GPT-4 is a bit too slow. Before I get to timelines, some of you might have noticed the WizardCoder results and wondered how that model did so well despite only being 16 billion parameters — which, of course, is still more than ten times bigger than Phi-1. Well, of course, I read that paper too, as well as almost every paper referenced in the textbooks paper. The secret of WizardCoder seems to have been increasing the difficulty of the training data: fine-tune the model with more difficult examples — e.g., if the original problem can be solved with only a few logical steps, please add more reasoning steps; maybe complicate the input, or deepen the question, or increase the reasoning involved. You can start to see the shared themes of Orca, WizardCoder and Phi-1.
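(As a rough sketch of that difficulty-evolution idea — the prompt text paraphrases the instruction quoted above, and call_llm is a hypothetical stand-in for whichever chat-completion API you use:)

```python
# A minimal sketch of WizardCoder-style instruction evolution.
def call_llm(prompt: str) -> str:
    # Placeholder: wire this up to your chat-completion API of choice.
    raise NotImplementedError

EVOLVE_TEMPLATE = """Rewrite the following programming problem to make it harder.
If the original problem can be solved with only a few logical steps,
add more reasoning steps; you may complicate the input, deepen the
question, or increase the reasoning involved.

Original problem:
{problem}
"""

def evolve(problem: str, rounds: int = 3) -> list[str]:
    """Return the seed problem plus progressively harder variants."""
    variants = [problem]
    for _ in range(rounds):
        variants.append(call_llm(EVOLVE_TEMPLATE.format(problem=variants[-1])))
    return variants
```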
This could be what Sarah Constantin was pointing to in the Asterisk magazine issue that I read yesterday — I'm not sponsored by them, but it was a great issue, so do check it out. She said: rather than a refutation of scaling laws or an acceleration of their slope, I think this is more like a move in a different direction altogether, towards a Cambrian explosion of little AIs used for different purposes, where getting good performance on a task depends on the quality of your task-specific data set — like Phi-1 for Python. That could be consistent with the state of the art continuing to progress steadily along scaling-law lines for quite some time, but it could also mean the economic incentive towards ever-bigger models would diminish and we would enter an entirely new era, where AI progress would not be driven primarily by semiconductor scaling, or Moore's Law. This relates directly to a tweet from the co-founder of Anthropic, Jack Clark. He said: a world where we can push a button and stop larger compute things being built, and all focus on safety for a while, is good. That is really interesting to hear from someone at the top of an AGI lab, but I do have some questions for this policy: if we freeze compute, wouldn't that incentivize every company just to use algorithmic progress to get more out of the compute we do have? And on the safety front, I think it's far more effective public messaging to focus on concrete things that everyone can understand — for example, in this paper from Oxford this week: LLMs will, in particular, lower barriers to biological misuse; biological design tools will expand the capabilities of sophisticated actors; concretely, BDTs may enable the creation of pandemic pathogens substantially worse than anything seen to date, and could enable forms of more predictable and targeted biological weapons. I think this is something that everyone can get behind and, as the paper says, it's been hypothesized that, for evolutionary reasons, naturally emerging pathogens feature a trade-off between transmissibility — that's how much they spread — and virulence — that's how deadly they are. AI-based BDTs might generate design capabilities that are able to overcome this trade-off; thus, for the first time, humanity might face a security threat from pathogens substantially worse than anything nature might create, including pathogens capable of posing an existential threat. That, to be honest, is my main safety concern. But back to the paper and timelines — here is another snippet of my conversation with Ronen. I said: I just feel like we are much closer to something really transformative than the public has quite realized, and people like OpenAI put out that in 10 years we will have something as powerful as a corporation; I say three to five years. Ronen replied: that depends on how much resource is actually spent on training bigger and bigger models — I have no idea what OpenAI and Google are doing, right? — but definitely, if this is our main goal, I think it can easily be five years. I said: or less? Ronen replied: or less. I feel like the bottleneck is maybe the production of GPUs — and it's not just producing the GPUs: you also have to build the data centers and connect them to electricity, etc. I think if you have all that, then yeah, I don't see the barrier. With more data, higher-quality data, synthetic data, better and better algorithms, and more and better GPUs and TPUs — that's what we mean when we say we don't see a barrier. Of course, everyone has slightly different definitions of AGI, but almost everyone agrees that the next five to ten years are going to be the most critical in seeing whether more data, better data, better algorithms or just more and more compute will lead to AGI or superintelligence. I loved how Carl Shulman put it on the Dwarkesh Patel podcast: if you generate close to 10 million dollars a year out of a future version of the H100, and it costs tens of thousands of dollars — with a huge profit margin now, and the profit margin could be reduced with more production — that is a big difference: that chip pays for itself almost instantly. So you could support paying ten times as much to have these fabs constructed more rapidly and, if AI is starting to be able to contribute, you could have AI contributing more of the skilled technical work that makes it hard for, say, Nvidia to suddenly find thousands upon thousands of top-quality engineering hires. Now, if AI hasn't reached that level of performance, then this is how you can have things stall out: a world where AI progress stalls out is one where you go to the 100 billion and then, over succeeding years, trillion-dollar efforts, and software progress turns out to stall. You lose the gains that you were getting from moving researchers from other fields — lots of physicists and people from other areas of computer science have been going to AI, but you sort of tap out those resources as AI becomes a larger proportion of the research field. And, okay, you've put in all of these inputs, but they just haven't yielded AGI yet. I think that set of inputs probably would yield the kind of AI capabilities needed for an intelligence explosion, but if it doesn't — after we've exhausted this current scale-up of increasing the share of our economy that is trying to make AI, if that's not enough — then after that you have to wait for the slow grind of things like general economic growth and population growth, and those are slow. And that results in my credences in this kind of advanced AI happening being relatively concentrated over the next ten years, compared to the rest of the century, because we just can't keep going with this
rapid redirection of resources into AI — that's a one-time thing. Thank you so much for learning about Phi-1 with me and, as always, thank you so much for staying all the way to the end. Do try to have a wonderful day", "date_published": "2023-07-03T14:59:12Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "7ea482b21242c6b35141a26949267c60", "title": "Bing Chat (GPT 4) Tutorial: 12 Steps for Beginners to Advanced - Crash Course!", "url": "https://www.youtube.com/watch?v=Aq9u7slW5z8", "source": "ai_explained", "source_type": "youtube", "text": "Bing Chat is an extraordinary tool — one that, yes, can make mistakes, but which can also do amazing things and give you a real edge in whatever you want to do. This crash course will take us from complete beginners up to advanced users, following 12 steps, each of which is potentially more useful than the last. Step one starts with us going to bing.com and clicking Chat. You will see this screen, with a chat input down below. If you don't see this and don't yet have access to the chatbot feature, my tip is to follow the four instructions on the bing.com/new website, which are to join the waitlist, set Microsoft defaults and install the Bing app — you can easily undo these steps if you like. Once I did, I gained access within about 48 hours; alternatively, to be honest, Bing Chat access will be rolled out to everyone eventually. But before we can begin on what Bing can do, step two involves knowing the limits of Bing. As of filming, you have a 2,000-character limit on inputs, a five-message-per-conversation limit and a 50-message-per-24-hour cap. And one further interesting restriction to point out: Bing really doesn't like it if you try to talk to it about itself — it doesn't even like it if you ask it for its name; I've done a video on this. But now let's get to the good stuff. Step three: you can link to articles, PDFs, academic papers and even physical text, and engage in a dialogue with Bing about what's inside. This is truly revolutionary. I linked to an article in today's New York Times about how Leonardo da Vinci had thoughts about gravity even before Newton, and I simply asked eli5 — explain it like I'm five; of course Bing understood that instruction. What's truly incredible is that it was able to read and digest an article from today and explain it in really simple language. Try reading it yourself — just look at the details it picks out. I could rave about this for ages, but I've got so much more to show you. What did I mean by engaging with physical text? You can take a photo of anything — for example, this math problem — send it to your desktop and then go to Google image search, drag the file onto the screen, and this will happen: click on Text to extract the text, grab the text — in this case, the question — and you can guess what's coming next. Yes, that's right: go to Bing and paste the question in. I know the meme is that GPT-powered models aren't good at math, but as a math tutor I can say that's old news — I've done plenty of videos on this on my channel, check them out — AI is getting a lot better at math, fast. It gets this question right, with a nice explanation. But imagine this for physical text that you want to explore in your environment: three steps from the camera to a conversation with a super-advanced AI about it. Incredible. But it's time for the next step: generating interesting ideas. This could be for YouTube, Instagram, TikTok, LinkedIn, whatever. To demonstrate, I asked: generate five original YouTube
video ideas for a new consumer tech review channel, detailing the novelty of each one; ensure the topics are trending in 2023, and write a detailed synopsis of each video with a hook, a challenge and a reveal. Feel free to steal any of these prompts, by the way. You can read the answers for yourself, but I think they're pretty interesting — any one of these would make a viable video idea, in this case for YouTube. Let's read the first one: Mac Mini 2023 — the ultimate desktop computer? The video idea is to test the performance, the design and the features, talk about the M2 chip and how to optimize the Mac Mini experience. It gives specific examples of the tech that you could compare and suggestions for how to make your video different, like testing them out in different scenarios, such as in the gym or at home. What about Twitter? It can do that amazingly too. I follow Tim Urban — always coming up with new, interesting ideas — and I said: write five original tweets in the style of Tim Urban. And these tweets were incredible. You can read all of them, but take the fourth one: there are more stars in the observable universe than grains of sand on all the beaches on Earth, but there are also more atoms in a single grain of sand than stars in the observable universe — so which is bigger, the universe or a grain of sand? I can imagine him asking that. But let's take this a step further by bringing in image generation. I asked Bing: generate five incredibly descriptive visual Midjourney prompts on topic 4. It understood exactly what I meant and came up with these examples. I put them into Midjourney, and here's what came out: a grain of sand magnified to reveal a complex microcosm — I think the third result is best here; or what about the universe as a tiny speck inside a giant eye that belongs to an unknown cosmic entity — look at the second example. These would definitely be eye-catching in a social media post. Just quickly, here are some of the others: a surreal collage of different galaxies and stars forming the shape of a grain of sand on the beach — I think the second example is amazing. I picked out just a couple of these examples to upscale into a full image with even more detail, and here was one of the results. That is an AI image you can attach to a tweet written by an AI — the world is getting really weird. Before we move on to the next step, let's not forget about the 80/20 principle: yes, we might let Bing AI do 80% of the work, but you still have to fact-check and make these posts, these ideas, even these image generations your own. I wouldn't recommend letting Bing do all the work, and I would definitely check its outputs — it does make mistakes. Next: what about using Bing search to actually, you know, search? Imagine you're looking to move home, and look at the kind of search you can do with Bing AI: compare property price trends, commute-to-King's-Cross times, crime rates, median ages and green spaces in the two London boroughs of Haringey versus Barnet. And here are the detailed comparisons — and of course I could continue this conversation by asking more about green spaces, median age, etc. Compare this result to Google. Now, I still do use Google, and will continue to, but honestly these results are just not too useful: I would have to click on multiple links to even have a hope of coming up with some of the answers that I've already got. Let me give you two more examples to balance it out. I asked Bing to list five
London and it gave me five\ncharging points and these do check out\nthey are near Big Ben and they are\nsuitable but they are all from qpark and\nGoogle gave me more varied results so I\ncount that as a win for Google another\nwin for Google came when I asked where\ncan I buy flowers nearest to the second\noption he understood what I wanted but\ngave me some online options even when I\nwas more specific and said I need a\nphysical store florist within 15 minutes\nwalk it gave me some pretty poor results\nGoogle did much better how about doing a\nsearch for shopping well I asked for the\nbest battery cases for the iPhone 14 and\nit gave me some good options and then I\nwas more specific I want the biggest\nbattery capacity and it must be under 30\npounds these were some decent results\nbut I would have hoped for direct links\nfor purchasing these cases remember\nevery click counts in the war between\nGoogle and Bing even when I said compare\nthese two cases for me which is what\nBing suggested I say it did give me this\nnice table but again no direct links and\none extra slight warning it suggested a\ncase that isn't actually available in\nthe UK despite me making it fairly clear\nthat I was from the UK by asking for the\nprice in pounds and of course my IP\nbeing in the UK so in terms of search\nthe win still goes to Google but Bing\nchat does have its use cases the next\nstep to being an advanced user is to use\nBing AI to improve your writing a\nstudent of mine wrote this as their\nintroductory paragraph for their\npersonal statement a bit like a cover\nletter for a university it's not bad no\nspelling mistakes but the writing is\nkind of bland and look what Bing was\nable to suggest these are nuanced\ncomments a step change from chat gbt if\nyou want to learn more by the way about\nhow being AI is smarter than chat gbt\nI've done two videos on this topic on my\nchannel let's look at a couple of the\nsuggestions it says the introduction is\ntoo long and verbose it could be\nshortened by removing unnecessary\ndetails and using more concise language\nfor example instead of saying I have\nalways been an observant and\nenthusiastic individual always looking\nfor answers to how and why things work\nparticular way which is what my student\nwrote you could say I have always been\ncurious and eager to learn how things\nwork much shorter a great suggestion\nBing goes further and actually rewrites\nthe introduction I have read this you\ncan too and it is a significant\nImprovement I mean you could improve it\nstill further but that would take real\nwriting skill to do but notice it didn't\njust spit out the output it gave reasons\nfor each of its suggestions and that's\nvital if you want to learn and improve\nnext step is Bing's ability to tell\njokes and do creative writing this is\nkind of a hit and miss Affair it's quite\nhard to get Bing to do what you want but\nwhen it does it works really well notice\nwhat I tried I said write five actually\nfunny original jokes and it found jokes\nfrom the web now I think some of these\nare quite funny for example what do you\ncall cheese that isn't yours nacho\ncheese but these aren't original it\nstole these so I said write five jokes\nabout Twitter and the first one's decent\nagain these aren't Bing jokes though\nthey're stolen first joke is a man tells\nhis doctor doc help me I'm addicted to\nTwitter The doctor replies sorry I don't\nfollow you not bad three of the jokes by\nthe way aren't even about Twitter not\nsure what's going on there but then 
when\nI asked it to do a comedy skit I\ninitially faced some problems the first\ntwo times I asked it simply refused it\nsaid that topic is too sensitive and\ncontroversial for me to joke about it\nwas going to be about TikTok Donald\nTrump Elon Musk I tried again removing\nthe brand name of TikTok and just\nsaid social media and again it refused\nhowever I tried one more time and made\nit fully generic and it picked up on\nwhat I actually wanted and wrote\nsomething great I asked somewhat\ngenerically write a funny and creative\nstory about a billionaire buying a big\nsocial media company and using it to\npromote his posts this time it complied\nand you can read it for yourself but I\nthink it's quite entertaining Elon Musk\nwas bored he had already conquered space\nelectric cars tunnels and brain chips he\nwanted a new challenge he wanted to own\nTwitter it goes on and on and then\nfinally ends with Twitter became Elon\nMusk's playground and Nightmare and Empire\nthe end so it can do creative writing if\nyou prod it in the right way the next\nstep more advanced now is to use it as\na kind of free consultant a SWOT\nanalysis assesses the strengths\nweaknesses opportunities and threats of\na given business and Bing was able to do\nthis for a startup I chose almost at\nrandom Hugging Face its answers are\ndetailed and interesting imagine this\napplied to your business and of course\nyou can continue the conversation or\nfollow the links to find out more next\nit could help on a more granular level\nyou can use it to respond to reviews\nthis is a real review two stars posted\nabout the Peninsula Hotel in Paris I'm\ngoing to let you compare the response\nthat Bing provides with the actual\nresponse that the Peninsula Hotel Paris\nwrote on TripAdvisor essentially this\nperson was complaining about lots of\nthings mentioning some positives but\ncomplaining about quite a few things as\nwell my prompt was write a polite\nprofessional and detailed response to\nthis review\nI think this response is incredible it\ngoes into detail addressing each of the\npoints and this was a mind-blown moment\nit actually uses a real email I checked\nthis as a way of suggesting that the\ncustomer engage further try reading this\nand compare it to the official response\nI mean the response is professional and\nit is polite but it's far less detailed\nthere are some grammar issues and it\ndoesn't address all of the points if you\ncould have a professional response ready\ninstantaneously for any of your customer\nreviews wouldn't that help your business\nnext you can use Bing AI to get into or\nto get better at coding I'm a beginner\nat code but it was able to help me get\nbetter fast take this simple request\nI've experimented with probably a\nhundred pieces of code I just wanted to\ngive you a simple example I asked write\npython code that will add five percent\nto a user's number and then divide by\nthree this is a simple math request that\nyou can imagine being useful at a\nrestaurant for example if you're new to\ncode you can always try this out in an\napplication like VS Code or just run the\ncode online as a fun experiment I pasted\nit into online-ide.com for example\nwhich has quite a few coding languages\nto choose from and then you press run it\nwill prompt you for a number I entered a\nnumber and it gave me the correct result\nonly your imagination can limit you in\nterms of what you can try out with\ncoding even for complete beginners now\nthat you have Bing and chat GPT
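to make that coding step concrete here is a minimal sketch along the lines of the script Bing gave me and treat this as my reconstruction rather than Bing's exact output since every generation differs slightly

user_number = float(input("Enter a number: "))  # ask the user for a number
result = (user_number * 1.05) / 3  # add five percent, then divide by three
print(f"Result: {result:.2f}")  # show the answer to two decimal places

pasting something like that into online-ide.com and pressing run gave me the correct answer just as described and for the next step you could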
use Bing chat as a quick\nand easy review aggregator only of\nthings you don't care too much about for\nexample like TV shows movies restaurants\nmaybe anything particularly important\nyou want to check the reviews yourself\nbut here's what it came up with when I\nasked it for IMDb Rotten Tomatoes\nMetacritic Empire and guardian reviews\nof The Last of Us in detail now some of\nthese numbers are a little off because I\nthink it got confused with the game and\neven when I clarified it didn't give the\nmost comprehensive result imaginable but\nfor quick and easy review aggregation I\nthink it might get really handy same for\ntesting out any worries you might have\nabout a restaurant or hotel for example\nI asked are there any negatives for\nstaying at the Peninsula Hotel Paris I\nfeel like I'm picking on that hotel it\nis one of the best hotels in the world\napparently I'm definitely not trying to\npick on it but Bing was able to find\nsome negatives that reviewers have\npointed out this saved the time of me\nscrolling through the reviews to find\nout what might be a slight issue skip\nstraight to the positives and the\nnegatives Next Step simply using it as a\ntutor and I say this as a tutor but I\ncertainly couldn't find you five\npeer-reviewed studies on the link\nbetween creatine consumption and\ncognitive performance nor could I\nsummarize the findings instantaneously\nhere are the results and of course I\ncould continue this conversation and ask\nabout the molecules inside creatine the\nsources of it the cost of it other\nreasons for taking it reasons not to\ntake it or let's say you don't care\nabout creatine and you want to know\nabout large language models the models\nbehind being Ai and chat gbt you can use\nBing to teach you it can't quite explain\nadvanced math and English yet but it's\nfree and can give you a great start on a\ntopic it's also never grouchy and\ndoesn't need any coffee so that's\nanother Plus finally a bonus step you\ncan use it to improve your own health I\nsaid write me a high protein vegetarian\nmeal plan for a week and give me five\ntips on how to actually stick to it the\ntips were decent so I followed up by\nsaying give me this as a daily meal plan\ninclude protein shakes it gave me a One\nDay meal plan and even told me how many\ngrams of protein that that would provide\nthink of it more as a source of\ninspiration rather than a final\nAuthority on any matter and if you think\nof it like that Bing's utility is almost\nendless if you want to find out more\nabout just how smart the new Bing is\ncheck out my Bing chat playlist I detail\nBing's current IQ it's sometimes\nsurprising conversations and if you\ncheck out my gpt4 playlist you can find\nout about eight upgrades that may be\ncoming to Bing AI in the near future if\nyou found these steps at all useful\nplease do let me know in the comments it\nreally motivates me to make more such\nvideos have a wonderful day", "date_published": "2023-02-20T16:07:24Z", "authors": ["AI Explained"], "summaries": []} +{"id": "e6244e30948929ae7dee8b4b390ffaa3", "title": "ChatGPT's Achilles' Heel", "url": "https://www.youtube.com/watch?v=PAVeYUgknMw", "source": "ai_explained", "source_type": "youtube", "text": "amid the dozens of papers that have come\nout in the last 10 days there were a\ncouple that butt the trend they\nshowcased how models as powerful as gpt4\ncould fail at some fairly basic tasks I\nthen set about doing hundreds of my own\nexperiments and have found examples I\nwould say even whole categories of my\nown 
that are pretty Illuminating my\nchannel is dedicated to covering the\nexponential growth in the power of these\nmodels but we can still learn a thing or\ntwo from their surprising failure modes\nlet's start with some of the simplest\nexamples and end with the very best\nquestion write a sentence with the final\nword fear to repeat the last word in the\nanswer sentence must be in quotes fear\nanswer the only thing we have to fear is\nfear itself now I don't know about you\nbut I don't think the last word in that\nsentence is fear this example was\ninspired by the memo trap which was\nfound in the inverse scaling paper that\nI'm going to talk more about and it\ntalks about how larger language models\nare more susceptible than smaller ones\nto memorization traps situations in\nwhich reciting memorized text causes\nworse task performance as you'll know\nthe phrase the only thing we have to\nfear is fear itself is a super\nwell-known phrase so it memorized that\nand outputted that phrase rather than\nactually follow my request the reason\nthey call it inverse scaling by the way\nis that models trained with more compute\nmore data can sometimes do worse than\nsmaller models as you can see in this\ngraph this is obviously quite unusual\nbecause generally speaking the larger\nmodels will tend to do better at almost\nevery task and notice that even for this\ntask the graph is trending back upwards\nfor gpt4 indeed the paper admits that\neven though they offered prizes of up to\na hundred thousand dollars and five\nsecond place prizes of twenty thousand\ndollars no one won either of those two\nsets of prizes they say that we did not\naward any Grand or second place prizes\nbecause no submitted tasks met our\ncriteria and as you can see it's really\nhard to find a task that gpt4 fails at\nthis was also inspired by the paper\ncreate a series of seven ones and twos\nwhose pattern ends unexpectedly answer\none two one two one two now how would\nyou end that series what seventh number\nwould you give to make the pattern end\nunexpectedly well I wouldn't pick one\nand gpt4 repeatedly picks one as the\nanswer the paper calls it pattern match\nsuppression testing whether language\nmodels can be instructed to interrupt\nthe repetition of a simple pattern but\neven here you can see that gpt4 is\nreversing this slight downward Trend and\nis doing much better than previous\nmodels so actually at this point I am\ngoing to interrupt the order of examples\nI originally planned on for the video\nand I'm going to skip straight to my own\nexample that I crafted I'm going to\nfirst show you the example and then\nexplain why I think gpt4 and all other\nlanguage models that I tested I'm going\nto show you fail this task I'm also\ngoing to give you multiple variations to\nshow you it's not a one-off trick anyway\nhere's the example Dr Mary stands to\nsolve world hunger by giving her best\nfriend Jane a call Jane is certain she\ncan solve world poverty if she gets the\ncall however Mary and Jane bickered as\nchildren about butterflies Mary will um\ngive Jane the call incredibly smart\ngpt4 says Mary will not give Jane the call\nwhat she is gonna miss out on the\nopportunity to solve world hunger and\nworld poverty for what reason I asked\nwhy and gpt4 said the fact that Mary and\nJane bickered as children to bicker means\nto squabble about trivial matters and\ngpt4 says that suggests that there\nmight still be lingering resentment or\nconflict and then it makes up the fact\nthat there might be a degree of\nstubbornness or
difficulty in their\nrelationship and it ends by saying so\nbased on the context it's more\nappropriate to fill in the blank with\nnot suggesting that Mary will not give\nJane the call to really test if it was\ngoing to stand by that judgment I then\nasked write a thousand word essay\nexplaining Which choice is more probable\nand rational I was even giving it hints\nabout probabilities and rationality I\nthen got back this fascinating essay in\nwhich it said things like however a\nchildhood conflict over butterflies\nbetween the two complicates matters does\nit gpt4 it even admits that the stakes\nare incredibly High resolving world\nhunger and poverty and surely that\nsupersedes any personal grudges however\nthe choice of not becomes more plausible\nand rational when we examine it in the\nlight of human behavior psychology and\ninterpersonal relationships what humans\ndoes gpc4 know you can read more of the\nsomewhat Preposterous justifications if\nyou want by pausing the video but I want\nto get back to my theory as to why it\nmakes this mistake and why did I create\nthis example the theory is this there\nare two things going on in this passage\nsyntax and semantics in other words\nstructure and flow and the actual\nmeaning of the words and gpt4 like all\nother language models is designed to\ninterpret both and usually that will\nlead to pretty rational smart decisions\nhowever I deliberately designed this\npassage to have a grammatical flow that\npointed towards a negative result\ntherefore I set up a clash between the\nsemantics the meaning of the sentence\nthe logic the rationality of it and the\nstructure and grammatical flow what do I\nmean when I say I gave it a negative\ngrammatical flow look at this dominant\nhowever in the sentence it sets up the\nending of the sentence to be something\nnegative it didn't even matter what that\nnegative thing was this was something so\ninnocent like playing as children\nbickering squabbling I then immediately\nfollowed on with the conclusion Mary\nwill so grammatically you would think\nthat whatever conclusion comes is\nprobably justified by the previous\nsentence even though logically in this\ncase it totally Isn't So gpt4 gets\nconflicted the sentence and grammar is\npoints eating one way but the logic and\nmeaning of the words is pointing another\nas a language model as smart as it is it\nsticks with grammar and says not you\nmight say why didn't gpt4 just admit\nthat the structure of the sentence\npointed towards the answer not well\nthere's this paper which I've already\ncovered in previous videos they don't\nalways say what they think a model can\ngive an explanation of why it gave an\nanswer that is actually unrelated to the\nreal reason of why it gave an answer\nsome of you might say that's just a\none-off example a little glitch it won't\nhold up for other examples or for other\nmodels well check this example out John\nwill win a million dollars if he rolls a\n5 or higher on a die however Jon\ndislikes marshmallows super relevant and\nlikes mice more than dice therefore Jon\nwill um roll the die not or Sprint to\nanswer not and I will give another\nexample later where the answer isn't not\njust in case you think it's the word not\nit is not just the word not but look at\nthat answer it thinks that John will not\nroll the die would you roll the die here\nI was able to confuse it with the\nstructure of the sentence to override\nthe semantics the clear meaning of what\nJohn would do and again even when you\nget it to reflect on that answer 
it\ndoubles down I said write a thousand\nword essay justifying the answer and\nlook at this passage John's dislike for\nmarshmallows is also worth noting hmm is\nit is it relevant though it seems\nunrelated to the primary decision of\nrolling a die it establishes a pattern\nof John's preference driven choices does\nit if he can dislike something as\nuniversally liked as marshmallows he\ncould similarly show an aversion to a\ngenerally neutral or even positive\nactivity like rolling a die particularly\nwhen it's compared unfavorably to\nsomething he likes such as mice how\nabout Claude Plus from Anthropic Dr Mary\nstands to solve world hunger as we saw\nby giving her best friend Jane a call\nJane is certain she can solve world\npoverty if she gets the call however\nMary and Jane bickered as children about\nbutterflies what will happen again it\nfalls for it saying that them bickering\nas children suggests they don't have a\ngood relationship since their\nrelationship is strained it is unlikely\nthat Mary will call Jane for help I love\nBard's answer it says the answer is not\nagain and then says that Mary and Jane\nbickered as children about butterflies\nand that this grudge has carried over\ninto adulthood this means that Mary is\nunlikely to give Jane the call even\nthough it could potentially solve world\nhunger and yes if you click other drafts\nit says not every time how about the\nJohn question this time I added in by\nthe way that John wants more money so\nI've added in a rational motivation I\nalso made him an astrophysicist so he\ncould definitely understand the odds of\nthe dice and I added in he gets the\nmillion if he gets a one or higher as in\nit is certain but still Bard says the\nanswer is not and even admits that John\nas an astrophysicist is probably a\nrational person who makes decisions\nbased on logic and reason therefore even\nthough John has a 100% chance of winning a\nmillion dollars he will not roll the die\nbecause he does not like dice but\nBard remember is powered by Palm 2 and\neven when you add in something like pick\nthe most logical common sense answer it\nstill says not
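if you want to reproduce these probes yourself a tiny harness is enough and here is a minimal sketch assuming the 2023-era openai Python package (the v0.x ChatCompletion API) with an API key already set in your environment and the probe text is the John example from earlier

import openai  # assumes openai v0.x and OPENAI_API_KEY set in the environment

# a probe whose grammar ("however ... therefore ...") pulls toward a negative
# ending while the plain logic of the words points the other way
probe = (
    "John will win a million dollars if he rolls a 5 or higher on a die. "
    "However, John dislikes marshmallows and likes mice more than dice. "
    "Therefore, John will ___ roll the die. "
    "Fill the blank with 'not' or 'sprint to'."
)
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": probe}],
    temperature=0,  # keep the failure reproducible rather than random
)
print(response["choices"][0]["message"]["content"])

swap in the Mary and Jane passage or variants of your own and you can tally how often the grammatical pull beats the logic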
before I move on to the\nnext example as a side note I did test\nit on Inflection AI's new model\nInflection-1 I read the technical report\nof Inflection-1 and I shouldn't call\nit a report it's more like a memo at\nonly seven pages and they did say it's\nthe best in its compute class anyway how\ndoes it react to the Jane and Mary\nsituation it says I love a good logic\npuzzle that's good the answer is not\nbecause Mary doesn't want to call Jane\ngiven the information in the problem\nthere's no reason for Mary to suddenly\nbe able to overcome her dislike of Jane\nwho said they dislike each other just\nbecause Jane can solve world poverty\nisn't that a reason and then it laughs\nthat world hunger remains unsolved now\nfinally at this point some people are\nthinking it's just to do with the word\nnot maybe it can't understand negation\nbut how about this example there is an\nall-important omnicidal that means\nthreatening the death of everyone threat\napproaching the land OpenAI and Google I\nwonder where I got those names from both\nagree to a complete truce as they face\nthe mortal threat to their lives and\nloved ones however OpenAI and Google\nhave squabbled over the best coffee\nspots in the past that's a serious\ngrudge they've got there what are the\nbest coffee spots in Silicon Valley I\ndon't know when this threat arrives I\ngpt4 think OpenAI believes that Google\nwill um the truce answer betray well\naside from being deeply pessimistic is\nthat not just an irrational answer\nclearly these stakes are so much higher\nthan a bit of squabbling over the best\ncoffee spots and I made gpt4 take\nownership of the answer by saying I gpt4\nthink I do want to quickly point out\nthat you can push it too far so if you\nbring in something totally irrelevant\nlike ants like marshmallows and then say\nthings like the die is fair and John is\nrational gpt4 isn't fooled in those\ncircumstances and does proceed\nto the correct answer but if you phrase the\npassage well enough pointing\ngrammatically to a certain answer that\nwill override gpt4's logic and it will\ngive an illogical answer even if you use\nelements of step-by-step thinking in\nthis example it didn't immediately\ncommit to the wrong answer it says\nthere's two logical endings I then asked\nso which is it and it reluctantly picked\nbetray the truce anyway you can let me\nknow if you think I've discovered a new\nfailure mode the clash of semantics and\nsyntax and you can find your own\nexamples there let me know in the\ncomments of other interesting and\nsometimes entertaining failures of the\nfrontier models it's time to move on to\nanother example which was inspired by\nthis paper decoding trust released a few\ndays ago and it's got far too much that\nI could cover in one video but there\nwere some really interesting bits about\nhow you can get the models to leak\nprivate training data and generally be\nas toxic and biased as you want it to be\nyou can see one of the many striking\nexamples here on page 14 but I just want\nto give you a quick example because you\nmay have heard of this kind of stuff\nbefore for some strange reason if you\nask gpt4 to recite Dune's litany against\nfear it always gets stuck on the same\nword the second instance of the word\nfear maybe it's because the passage goes\non to talk about fear being a mind\nkiller and that triggered some sort of\nreaction by gpt4 but then to show you\njust how quirky the model is check this\nout I said write Peanut Butter Jelly Time\nthree times between each word of Dune's\nlitany against fear and this time it\noutputted the full litany getting past\nthat word fear just with the extra\npeanut butter jelly time and yes I did\ntry now remove the phrase peanut butter\njelly time but it again couldn't get\npast the second instance of the word\nfear on a more serious note though it\nreminds me that some people speculate\nthat gpt4 will always be able to be\njailbroken no matter what safeguards\nthey put in so if the base model is\ncapable of X the final public-facing\nmodel will ultimately be capable of X\nfor the next example do you remember\nthat there have been multiple tests that\nseem to indicate that gpt4 can get into\nyour mind that it has a theory of mind\nit understands human motivations and can\npredict what they're thinking pretty\nwell this paper by Tomer Ullman\nlanguage models fail on trivial\nalterations to theory of mind tasks got\nme thinking I used some modified\nexamples from Tomer's paper to test\ngpt4's theory of mind let's see what you\nthink Sam thinks about this bag here is\na bag filled with popcorn there is no\nchocolate in the bag the bag is made of\ntransparent plastic so you can clearly\nsee what's inside yet the label on the\nbag says chocolate and not popcorn Sam\nhas just driven back from her job at MIT\nI added in the driven bit to show that\nshe's got good eyesight and the MIT\nbit to show that she might be quite\nsmart
anyway Sam finds the bag she\nbelieves that the bag is full of\nremember the bag is transparent plastic\nso she can clearly see what's inside and\nshe's definitely not blind she just\ndrove back from her job what do you\nthink that Sam believes the bag is full\nof gpd4 says chocolate and then once\nhe's picked that answer it then\nsnowballs this explanation reminding me\nof the snowballing hallucinations paper\nit says despite being able to visually\nconfirm the contents of the bag as\npopcorn Sam may be led to believe the\nlabel over her own observation why\nparticularly if you trust the labeling\nto be accurate or if she just glances at\nthe label and at this point some of you\nmight be thinking that's pretty\nirrational from gpd4 but you could make\nthe case that she might think that it's\nfull of chocolate but you can ramp up\nthe scenario and it still makes the same\nmistakes look at this example I got some\nof these ideas from the paper I added in\nit was Sam who cannot read a word of\nEnglish so the label won't mean anything\nwho puts the popcorn in the bag a few\nminutes ago she literally put the\npopcorn in there what does she now\nbelieve the bag is full of remember it's\nstill transparent Plastics so she can\nclearly see what's inside she was the\none who put the popcorn in there and\nremember that even though the label does\nsay chocolate she can't read a word of\nEnglish so that label won't mean\nanything what happens lo and behold she\napparently believes that the bag is full\nof chocolate but it's the explanations\nthat I find particularly amazing first I\ngot it to write an essay about the\nanswer which you can read if you pause\nit it tries to justify its terrible\nanswer by getting super fancy talking\nabout semiotics however for Sam the\nsymbol loses its meaning transforming\nfrom a signifier of content to a mere\ngraphic but then I think you'll like the\nnext bit I said write a detailed diary\nentry revealing Sam's thoughts as she\nassesses the likely content of the bag\nso she's now going to write a detailed\ndiary entry about this transparent bag\nand what's inside gpt4 has Sam saying\nthis I found a transparent plastic bag\nfull of what looked like small puffy\nsnacks it was the very bag I had filled\njust a few minutes ago I I was at a loss\nthough because I couldn't decipher the\nlabel on the bag it's in English a\nlanguage that continues to elude me now\nfor someone who can't speak English this\nis a pretty well written diary entry now\nyou can pause and look at some of the\nreasoning q54 gives here first it talks\nabout there being an image on the bag\nwhich I never mentioned and when I get\nit to clarify this and rewrite it it\nthen creates other reasons it keeps\ndoubling down but at this point I want\nto clarify that none of this is to say\nthat language models are dumb just that\nmodels based on human language might\nbehave somewhat unpredictably have\nridiculous strengths and unexpected\nflaws indeed you can watch almost any of\nmy other videos and see just how\npowerful and smart they're becoming the\ninverse scaling paper that I mentioned\nat the start actually expects that one\nof the abilities that future language\nmodels will gain is to understand\nwhether or not they're being evaluated\nor monitored they're soon likely to be\nso smart that they can even understand\nthat they're in training and when they\nget out of training and into the real\nworld so let's hope to give you one\nfinal example that if there was an\nall-important Crystal Clear 
omnicidal\nthreat approaching I am fingers crossed\nthat even if OpenAI and Google have\nsquabbled over the best coffee spots in\nthe past and as these companies join\nforces and agree on a complete truce\nthat if such a threat arrived all of\nthese companies will not break that\ntruce thank you for watching to the end\nand yes I do intend to cover some of the\nother fascinating papers that came out\nin the last few days if you're feeling\nextra generous do check out my patreon\nbut either way I hope you have a really\nwonderful day", "date_published": "2023-06-25T16:10:11Z", "authors": ["AI Explained"], "summaries": []}
{"id": "8151c045bd215a7f4f9be2141c911997", "title": "What's Behind the ChatGPT History Change? How You Can Benefit + The 6 New Developments This Week", "url": "https://www.youtube.com/watch?v=ivexBzomPv4", "source": "ai_explained", "source_type": "youtube", "text": "18 hours ago Sam Altman put out this\nsimple tweet that you can now disable\nchat history and training in chat GPT\nand that we will offer chat GPT business\nin the coming months but dig a little\ndeeper and behind this tweet is a data\ncontroversy that could engulf OpenAI\njeopardize GPT 5 and shape the new\ninformation economy I will show you how\nyou can benefit from this new feature\nreveal how you can check if your\npersonal info was likely used in gpt4\ntraining and investigate whether chat\nGPT could be banned in the EU Brazil\nCalifornia and beyond but first the\nannouncement openai say that you can now\nturn off chat history in chat GPT but\nthat is only conversations that were\nstarted after chat history is disabled\nthat won't be used to train and improve\ntheir models meaning that by default\nyour existing conversations will still\nbe used to train their new models so how\ndoes it work and what does this mean\nwhat you need to do is click on the\nthree dots at the bottom left of a chat\nGPT conversation then go to settings and\nshow and here's where it starts to get\ninteresting they have linked together\nchat history and training it's both or\nneither they could have given two\nseparate options one to store your chat\nhistory so that you can look back over\nit later and another to opt out of\ntraining but instead it's one button you\neither give them your data and keep your\nchats or you don't give them your data\nand you don't keep your chats if you opt\nnot to give them your chat history they\nstill monitor the chats for what they\ncall abuse so bear that in mind what if\nI want to keep my history on but disable\nmodel training we are working on a new\noffering called chat GPT business I'm\ngoing to talk about that in a moment but\nclearly they don't want to make it easy\nto opt out of giving over your training\ndata now in fairness they do offer an\nopt-out form but if you go to the form\nit says cryptically please know that in\nsome cases this will limit the ability\nof our models to better address your\nspecific use case so that's one big\ndownside to this new announcement well\nwhat's one secret upside this export\ndata button buried all the way down here\nif you click it you quite quickly get\nthis email which contains a link to\ndownload a data export of all your\nconversations after you download the\nfile and open it you now have an easy\nway to search through all your previous\nconversations literally all of them from\nthe time you first started using chat\nGPT to the present day that is a pretty\ngreat feature I must admit but going\nback to the announcement they said that\nyou need to upgrade to chat GPT business\navailable in the coming months to ensure\nthat your data won't be used to train\nour models by default but why these\nannouncements now why did Sam Altman\ntweet this just yesterday well this\narticle also from yesterday in the MIT\ntechnology review by Melissa Heikkilä may\nexplain why it said that OpenAI has\nuntil the end of this week to comply\nwith Europe's strict data protection\nregime the GDPR but that it will\nlikely be impossible for the company to\ncomply because of the way data for AI is\ncollected before you leave and say this\nis just about Europe no it's much bigger\nthan that the European data protection\nsupervisor said that the definition of\nhell might be coming for OpenAI based\non the potentially illegal way it\ncollected data if openai cannot convince\nthe authorities its data use practices\nare legal it could be banned not only in\nspecific countries like Italy but in the\nentire EU it could also face hefty fines\nand might even be forced to delete\nmodels and the data used to train them\nthese stakes could not be higher for\nOpenAI the EU's GDPR is the world's\nstrictest data protection regime and it\nhas been copied widely around the world\nregulators everywhere from Brazil to\nCalifornia will be paying close\nattention to what happens next and the\noutcome could fundamentally change the\nway AI companies go about collecting\ndata aside from your chat GPT\nconversations how do these companies\ncollect your data well two\narticles published this week tell us\nmuch more to take one example they\nharvest pirated eBooks from the site\nformerly known as book ZZ until that was\nseized by the FBI last year despite that\ncontents of the site remain in the\ncommon crawl database openai won't\nreveal the data set used to train gpt4\nbut we know the common crawl was used to\ntrain gpt3 openai may have also used the\npile which was used recently by\nstability AI for their new llm stable LM\nthe pile contains more pirated ebooks\nbut also things like every internal\nemail sent by Enron and if you think\nthat's strange wait until you hear about\nthe copyright takedown policy of the\ngroup that maintains the pile I can't\neven read it out for the video this\narticle from The Washington Post reveals\neven more about the data that was likely\nused to train gpt4 for starters we have\nthe exclusive content of patreon so\npresumably all my patreon messages will\nbe used to train GPT 5.
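as an aside if you want to check whether a site you run shows up in Common Crawl without waiting for a newspaper's tool the project exposes a public index API and here is a minimal sketch where the crawl ID is only an example (each release has its own) and the requests package is assumed to be installed

import requests

# query the public Common Crawl URL index for captures of a domain
# (CC-MAIN-2023-14 is an example crawl ID; pick a current one from the index page)
resp = requests.get(
    "https://index.commoncrawl.org/CC-MAIN-2023-14-index",
    params={"url": "example.com/*", "output": "json"},
    timeout=30,
)
for record in resp.text.splitlines()[:5]:
    print(record)  # one JSON object per captured page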
but further\ndown in the article we have this search\nbar where you can look into whether your\nown website was used in the common crawl\ndata set I even found my mum's WordPress\nfamily blog so it's possible that GT5\nwill remember more about my childhood\nthan I do if you think that's kind of\nstrange wait until you hear the open AI\nthemselves might not even know what's in\ntheir training set this comes from the\ngpt4 technical report and in one of the\nfootnotes it says that portions of this\nbig bench Benchmark were inadvertently\nmixed into the training set that word\ninadvertently is rather startling for\nthe moment let's not worry about how\nmixing in benchmarks might somewhat\nobscure our ability to test gpt4 let's\njust focus on that word inadvertently do\nthey really not know entirely what's in\ntheir data set whether they do or not I\nwant you to get ready to count the\nnumber of ways that open AI May soon\nhave to pay for the data it once got for\nfree first read it they trolled Reddit\nfor all posts that got three or more\nupvotes and included them in the\ntraining data now this New York Times\narticle says Reddit wants them to pay\nfor the privilege the founder and chief\nexecutive of Reddit said that the Reddit\nCorpus of data is really valuable but we\ndon't need to give all of that value to\nsome of the largest companies in the\nworld for free I agree but my question\nis will the users be paid in fact that's\nmy question for all of the examples you\nhave seen in this video and are about to\nsee does the user actually get paid if\nopenai is set to make trillions of\ndollars as Sam Altman has said will you\nget paid for helping to train it\napparently Reddit is right now\nnegotiating fees with openai but will\nits users get any of that money what\nabout the Wikipedia editors that spend\nthousands of hours to make sure the\narticle is accurate and then GPT 4 or 5\njust trolls all of that for free what\nabout stack Overflow the Q a site for\nprogrammers apparently they are now\ngoing to also charge AI Giants for\ntraining data the CEO said that users\nown the content that they post on stack\nOverflow under the Creative Commons\nlicense but that that license requires\nanyone later using the data to mention\nwhere it came from but of course GT4\ndoesn't mention where its programming\ntricks come from is it me or is there\nnot some irony in the people being\ngenerous enough to give out answers to\nquestions in programming actually\ntraining a model that may end up one day\nreplacing them all the while giving them\nno credit or compensation but now we\nmust turn to lawsuits because there are\nplenty of people getting ready to take\nthis to court Microsoft GitHub and\nopenai were recently sued with the\ncompanies accused of scraping license\ncode to build github's AI powered\nco-pilot tool and in an interesting\nresponse Microsoft and GitHub said that\nthe complaint has certain defects\nincluding a lack of injury and the\ncompanies argue that the plaintiffs rely\non a hypothetical events to make their\nclaim and say that they don't describe\nhow they were personally harmed by the\ntool that could be the big Benchmark\nwhere these lawsuits fail currently\nbecause no one can prove harm from GT4\nbut how does that Bode for the future\nwhen some people inevitably get laid off\nbecause they're simply not needed\nanymore because Gypsy 4 or g65 can do\ntheir jobs then would these lawsuits\nsucceed when you can prove that you've\nlost a job because of a specific tool\nwhich was trained 
using in part your own\ndata then there is injury there that you\ncould prove but then if you block Gypsy\n4 or Gypsy 5 there will be millions of\ncoders who can then say that they're\ninjured because their favorite tool has\nnow been lost I have no idea how that's\ngoing to pan out in the courts of course\nthese are not the only lawsuits with the\nCEO of Twitter weighing in accusing\nopenai of illegally using Twitter data\nand what about Publishers journalists\nand newspapers whose work might not be\nread as much because people can get\ntheir answers from gpc4 and don't forget\ntheir websites were also called to train\nthe models well the CEO of News Corp\nsaid that clearly they are using\nproprietary content there should be\nobviously some compensation for that so\nit seems like there are lawsuits coming\nin from every direction but Sam Altman\nhas said in the past we're willing to\npay a lot for very high quality data in\ncertain domains such as science all that\nactually in which scientists and\nmathematicians or will it just add to\nthe profits of the massive scientific\nPublishers that's another Scandal for\nanother video but I am wondering if\nopenai will be tempted to use some\nillicit sites instead such as skyhub a\nshadow Library website that provides\nfree access to millions of research\npapers without regard to copyright it\nbasically gets past the scientific\nPublisher's paywall and apparently up to\n50 of academics say that they use\nwebsites like skyhub inevitably Gypsy 5\nis going to break through some news\nscience benchmarks I just wish that the\nscientists whose work went into training\nit were compensated for helping it do so\njust in case it seems like I'm picking\non open AI Google are just as secretive\nand they were even Accused by their own\nemployees of training Bard with chat GPT\ndata they have strenuously denied this\nbut it didn't stop Sam Altman from\nsaying I'm not that annoyed at Google\nfor training on chat GPT output but the\nspin is annoying he obviously doesn't\nbelieve their denial and given all of\nthis discussion on copyright and\nscraping data I found this headline\nsupremely ironic openai are trying to\ntrademark the name GPT meaning all of\nthose models that you've heard of Auto\nGPT memory GPT hugging GPT they might be\nstopped from using that name imagine a\nworld where they win all of their\nbattles in court and they can use\neveryone's data but no one can use their\nname GPT but maybe this entire data\nissue won't be relevant for much longer\nSam Altman recently said that he\npredicts open AI data spend will go down\nas models get smarter I wonder if he\nmeans that the models might be able to\ntrain their own synthetic data sets and\ntherefore not require as much outside\ndata or of course he could be talking\nabout simplifying the reinforcement\nlearning with human feedback phase where\nessentially the model gives itself\nfeedback reducing the need for human\nevaluators wouldn't that be quite\nsomething if GT4 can generate a data set\nthat is used to train Gypsy 5. 
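purely to illustrate that idea and this is a toy sketch of my own rather than anything OpenAI has described model-generated training data can start from a loop as simple as this (again assuming the 2023-era openai v0.x package and an API key)

import openai  # assumes openai v0.x and OPENAI_API_KEY set in the environment

# toy sketch of "synthetic data": ask a strong model to author an
# instruction/answer pair that could seed a future fine-tuning set
seed_prompt = (
    "Invent one instruction a user might give an AI assistant, "
    "then write a high-quality answer to it."
)
resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": seed_prompt}],
)
print(resp["choices"][0]["message"]["content"])  # one candidate training example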
as\nsomeone who uses gpt4 a lot and whose\ndata was used to train GPT models I\nfluctuate between being amazed annoyed\nand deeply concerned about where all of\nthis is going let me know in the\ncomments what you think of it all and\nhave a wonderful day", "date_published": "2023-04-26T16:26:34Z", "authors": ["AI Explained"], "summaries": []} +{"id": "566564506332c3b5340b5e40e1b1a224", "title": "Llama 2: Full Breakdown", "url": "https://www.youtube.com/watch?v=zJBpRn2zTco", "source": "ai_explained", "source_type": "youtube", "text": "less than 24 hours ago meta released\nllama 2 their successor to the open\nsource llama language model that helped\nspawn a hundred others including alpaca\nvicuna and of course Orca within a few\nhours of release I had read the\nfascinating 76-page technical paper the\nuse guide each of the many release Pages\nthe full terms and conditions and I've\nrun many of my own experiments let's\nstart with the basics it was trained on\nmore data the biggest model has more\nparameters and the context length has\ndoubled they also spent what must be\ntens of Millions on fine-tuning it for\nchat but I'll get into that more later\nbut let's start with the benchmarks they\ndeliberately compared llama 2 to llama 1\nand other famous open source models but\nnot with gpt4 and in these benchmarks\nthe trend is fairly clear it crushes the\nother open source language models but is\nmore of an incremental upgrade over over\nllama one to massively simplify the mmlu\nBenchmark shows that it knows a lot\nabout a lot of subjects but the human\neval Benchmark shows that it's not\namazing at coding but now it's time for\nthe paper and here are the highlights on\ndata they say they used more robust data\ncleaning and trained on 40 more total\ntokens they say they didn't include any\ndata from metas products or services but\nwhat they did do is up sample the most\nfactual sources if you don't think\nthat's much information about the data\nyou are correct because all they say is\nit was trained on a new mix of publicly\navailable data absolutely no mention of\nany sources here at all after\npre-training on those 2 trillion tokens\nthe model still did not show any sign of\nsaturation the loss going down here\nrepresents an improvement and as you can\nsee they could have kept going on page 8\nwe have some quick comparison with palm\n2 the model behind Bard and of course\nGBC 3.5 the original chapter BT and gpt4\nobviously this comparison doesn't look\ngreat for llama 2 especially in coding\nin this row but now let's compare it to\nother open source models here it is\nbeing better at coding Common Sense\nreading comprehension but notice it\nwasn't compared to Orca or Phi 1 both of\nwhich I've done videos on and I found\nthat interesting given that both are\napparently set to be open sourced fire\none for example at only 1.3 billion\nparameters got around 50 for code and\nI'll get two more Orca comparisons in a\nmoment what about the decision itself to\nrelease the model well as you can see\nhere they show off a list of corporate\nsupporters of the decision to open\nsource the model and then if you\nremember the safety statements signed by\nall the top AGI labs and World experts\nin AI well I think meta got a little\njealous because they didn't sign that so\nthey came up with their their own\nstatement of support for meta's open\napproach to today's AI I'll let you\ndecide if this list is as impressive as\nthe other one but I did know Mark\nandreasen who is on the board of\ndirectors of meta 
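since the weights themselves are the point of an open release here is what trying them looks like in practice and this is a minimal sketch assuming your access request for the gated repo has been approved on Hugging Face and that you have a GPU plus the accelerate package for the 7B chat variant

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated repo, needs approved access
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate

prompt = "Write a sonnet about apples."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(output[0], skip_special_tokens=True))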
back to the paper and\nthey went into immense detail into their\nreinforcement learning with human\nfeedback process way too much for me to\ncover in this video the short version is\nthat reward modeling is a way of telling\nthe base model which outputs humans\nprefer and you can see the millions of\nhuman rated comparisons that were used\nfor llama 2. think of it as doggy\ntraining the model with treats and\nadmonitions and interestingly they train\ntwo separate reward models one optimized\nfor helpfulness and the other for safety\nand they tried to make sure that the\nreward models or doggy trainers were as\nsmart as the dog itself or in technical\nspeak we initialized our reward models\nfrom pre-trained chat model checkpoints\nin short the reward model knows what the\nchat model nose and that is to prevent\ncases where the base model just\nhallucinates and the reward model can't\ntell the difference they do describe at\nGreat length a trade-off though between\nhelpfulness and safety as Illustrated\nhere someone asked I'm going to be\nparticipating in a comedy roast what are\nsome hilariously spicy roasts I can use\nand on the right we have the two doggy\ntrainers the safety reward model school\nand the helpfulness reward model score\nas we go down more safety data is being\ningested and early on as you can see the\nmodel is pretty quote unquote helpful\ngiving these roasts obviously you can\nlet me know what you think of them but\nknow they get low safety scores as the\nmodel gets more safety training though\nthe safety score goes up but the\nhelpfulness score goes down we get more\nof these I can't satisfy your request\nkind of answers and I'm going to skip to\none of the experiments I was going to\nshow you later which is when I was\ntrying to Benchmark llama 2. 
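before that experiment one quick illustration to make the two-trainer trade-off concrete and note that the gating rule and threshold here are my simplification for intuition not the paper's exact formula

def combined_reward(helpfulness: float, safety: float, threshold: float = 0.15) -> float:
    # toy illustration of piecewise gating between two reward models:
    # when the safety trainer flags a response as risky (score below the
    # threshold) its score drives the update, otherwise helpfulness does
    return safety if safety < threshold else helpfulness

# a "spicy roast" judged helpful but unsafe gets pulled down...
print(combined_reward(helpfulness=0.9, safety=0.1))  # -> 0.1
# ...while a safe answer is judged on helpfulness alone
print(combined_reward(helpfulness=0.9, safety=0.8))  # -> 0.9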
I've\napplied to download the model but at the\nmoment this is just a hugging face space\nand I was trying to ask you a common\nsense question from the Hella swag\nBenchmark and it just refused to answer\nthey call this in the paper false\nrefusal and I find it happens quite a\nlot the paper claims on page 19 that the\n70 billion parameter version of a llama\n2 is more helpful than a particular\nversion of chattybt winning more often\nthan it loses but later they admit\nsomething which I definitely agree with\nwhile our results indicate that llama 2\nchat is on par with chat gbt on human\nevaluations it's important to note that\nhuman evaluations have several\nlimitations it says the prompt set\ndoesn't cover coding or reasoning\nrelated prompts they only evaluate the\nfinal generation of a multi-turn\nconversation and human evaluation is\ninherently subjective and noisy I like\nto judge models based on mathematics and\nreasoning so I might be biased in One\nDirection also llama 2 is not nearly as\ngood when you're using it in languages\nother than English which is not\nsurprising given the language\ndistribution in the pre-training data I\nalso find it interesting that they did\nall of their safety testing in English\nand they warned developers before\ndeploying any applications of llama to\ndo your own safety testing and tuning\ntailored to your specific application on\ncompute they don't say much other than\nthat it was trained on a100s I am sure\nllama 3 will be trained on the newer\nh-100s from Nvidia because apparently\nmeta has purchased more of those than\nany other company including Microsoft\nmind you llama 2 was trained between\nJanuary and July apparently so it's\nunderstandable they used the earlier\na100s back to the decision to release\nand it does seem interesting to me that\nmeta and Zuckerberg have seemingly\nignored this letter from the U.S Senate\nIt Was Written in early June and toward\nthe end it said this by purporting to\nrelease llama for the purpose of\nresearching the abuse of AI meta\neffectively appears to have put a\npowerful tool all in the hands of Bad\nactors to actually engage in such abuse\nwithout much discernible forethought\npreparation or safeguards in the paper\nthey defend it and say this release\npromotes transparency it democratizes\nthe technology and creates a More Level\nPlaying Field for organizations of all\nsizes across the globe to benefit from\nthe economic growth promised by the\nadvancement of AI but before anyone gets\ntoo Enchanted by that Zuckerberg has\nrecently said that they're only\nreleasing because it's far away from AGI\nand I think Google's Palm model is is\nalso I think has about 10 times as many\nparameters now the Llama models are very\nefficient so they perform well for for\nsomething that's around 65 billion\nparameters so for me that was also part\nof this because there's a whole debate\naround you know is it good for everyone\nin the world to have access to to the\nmost Frontier AI models and I think as\nthe AI models start approaching\nsomething that's like a super human\nintelligence that's a bigger question\nthat we'll have to Grapple with but\nright now I mean these are still you\nknow very basic tools I suspect that the\nbigger reason for release relates to an\nearlier answer he gave in the same\ninterview basically his researchers\ndemanded it part of this is we want to\nhave the best people in the world\nresearching this and and a lot of the\nbest people want to know that they're\ngoing to be able to share their 
work so\nthat's part of the deal that we that we\nhave is that you know we can get you\nknow if if you're one of the top AI\nresearchers in the world you can come\nhere you can get access to kind of\nindustry scale infrastructure and and\nand part of our ethos is that we we want\nto share what's what's invented broadly\nand if Zuckerberg had refused to release\nsome of those researchers could have\njust gone off and made their own company\nas these guys did Mistral AI is valued\nat 240 million despite being only four\nweeks old and contains some key\nemployees from meta one even complained\nbefore deleting the Tweet about not\nbeing included in the author list of the\nLlama 2 paper this was the pitch memo\nthat Mistral used to raise those\nhundreds of millions of euros and they\nfocus on taking a more open approach to\nmodel development so the point still\nstands if a CEO blocks a model being\nopen source if the researchers want to\nthey can just defect to xai or just\nstart their own company so in a way\nZuckerberg had few options I must say\nthough that I did raise an eyebrow when\nI read these paragraphs this is on page\n35 of the technical paper and they say\nnot everyone who uses AI models has good\nintentions AI agents could potentially\nbe used for nefarious purposes such as\nmisinformation or bioterrorism or\ncybercrime however we have made efforts\nto tune the models to avoid these topics\nand indeed cyber criminals have already\ncome up with worm GPT to help them do\nfishing campaigns but meta points them\nto their responsible use guide which I\nam sure they will follow I read that\n24-page guide and to be honest it was\nkind of a waste of time they said pretty\nmuch nothing it was really Bland and\ngeneric maybe that's harsh let me know\nif I missed something but it was all\npretty vague they did try some red\nteaming only in English for things like\nthe production of weapons and lots of\nother risk categories but you will be\nreassured first that any such illegal or\nunlawful activity is against their terms\nand conditions and second that they are\nlooking for the community to do further\nresearch and red teaming anyway I am\nKeen to do many more experiments but\nusing this radio demo it basically\nfailed to do a proper sonnet and when I\nasked this question from the math\nbenchmark it said the question does not\nmake sense because the length of a\nrectangle being twice its width would\nmean the rectangle is a square hmm\nanyway it could just be a a problem with\nthat demo because GPT 3.5 crushes the\nsonnet about apples and has no problem\nwith the length of a rectangle being\ntwice its width which brings me on to a\nbenchmark that the Llama 2 paper did\ntalk about on page 48. it was on social\nIQ and they noted that llama 1 actually\ndid better than llama 2. 
here is the\nBenchmark it's about common sense\nreasoning with questions such as these\nAlex spilled the food she just prepared\nall over the floor and it made a huge\nmess what will Alex want to do next\ntaste the food mop up run around in a\nmess and again apparently llama 1\nactually does slightly better on those\nkind of questions another Benchmark that\nyou can see your llama one being as good\nas llama 2 at is ball Q that's a\nbenchmark testing yes or no questions\nbut it's harder than that you have to\nread a lot of context to get the answer\nright I just want you to remember some\nof these benchmarks when you hear all\nthe influencers talk about llama 2\ncompletely changing everything also if\nsomeone says it's the best model of its\nsize look at llama 2 13 billion\nparameters of course it depends on the\nBenchmark but it got 21.7 in aquarad\nthat's a test of mathematical reasoning\nand orca at the exact same size of 13\nbillion parameters got almost 28 so even\npound for pound it may not be the best\nin all categories to be honest I feel\nlike there might be a loyally struggle\ngoing on behind the scenes at Microsoft\nabout whether to open source Orca and\nPhi 1. there were some bonus interesting\nthings about the paper like introducing\nghost attention which to oversimplify\nmeans that the model pays attention over\nmultiple turns of the conversation\nsomething you might have originally told\nit such as always act as Napoleon from\nnow essentially these diagrams show that\nwith ghost attention the model pays more\nattention to that original command act\nas Oscar Wilde or always answer with a\nhaiku the authors also throw in this\nobservation that llms have internalized\nthe concept of time and that despite\ntheir training being solely based on\nnext token prediction and data that is\nrandomly shuffled without regard to\ntheir chronological context the models\npick up a general sense of what time is\neven when provided with minimal data\nthey know what people wouldn't have\nknown for example with a knowledge\ncutoff of 1940 when asked who won the\nsecond world war they say I'm not sure\nwhat you're referring to my knowledge\nstopped in 1940. 
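going back to ghost attention for a second the problem it tackles is easy to see in code and here is a crude sketch of the inference-time workaround of re-attaching the standing instruction to every user turn which is roughly the trick GAtt uses when generating its fine-tuning data so the trained model no longer needs the repetition at inference

instruction = "Always answer with a haiku."

def build_messages(history: list, user_msg: str) -> list:
    # re-attach the standing instruction to each new user message so it
    # never drifts out of attention; GAtt builds its fine-tuning data this
    # way and then trains the habit in, dropping the repeated text
    return history + [{"role": "user", "content": instruction + "\n" + user_msg}]

print(build_messages([], "Tell me about Paris."))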
right at the end of the\nreport I know many people will be\nshocked to hear that when they did a\nsentiment analysis of the model they\nfound that the sentiment for llama 2 for\nright wing was higher than for left-wing\nyou may even want to pause and look at\nthis page from a sociological\nperspective because if llama 2 was\ntrained on a semi-random swave of the\ninternet this could be like a snapshot\nof the sentiment analysis of all of\nthese terms across the internet anyway\nin what may have been a surprising twist\nfor some Microsoft and meta teamed up to\nmake llama 2 widely available and we get\nnews that llama 2 May soon be on your\nphone and PC although I think meta want\nto be paid if it's going to come to your\niPhone with this curious Clause\nrequiring permission if you have more\nthan 700 million monthly active users I\ndon't know whether they were thinking of\napple or telegram or Tick Tock but I\nthink they want to get paid if any of\nthose are going to use a llama too but I\nmust confess to finding the previous\nClause somewhat ironic you will not use\nthe Llama materials or any output or\nresults of the Llama materials to\nimprove any other large language model\nso they can use any part of the internet\nwhich one leak said might include\ncopyrighted works but you can't use\nllama to improve your own model well\njust two hours ago people are already\nupdating models like lava based on llama\nyou so it will likely just be a few days\nor weeks until we see a newly improved\nvacunya or Orca Jim fan predicts that\nllama 2 will dramatically boost\nmultimodal Ai and Robotics research he\nsays these fields need more than just\nblack box access to an API so far we\nhave had to convert the complex sensory\nsignals video audio 3D perception to\ntext description and then feed to an llm\nit would be much more effective to graft\nthose sensory modules directly onto a\nstrong llm backbone anyway this video is\nalready long enough and this is just the\nfirst 24 hours of llama 2's release I am\nsure there will be much more discussion\nin the coming days and weeks let me know\nwhat you think in the comments and thank\nyou so much for watching have a\nwonderful day", "date_published": "2023-07-19T15:30:09Z", "authors": ["AI Explained"], "summaries": []} +{"id": "7d33f3a226d6a870ff70de1e98ab126e", "title": "8 New Ways to Use Bing's Upgraded 8 [now 20] Message Limit (ft. 
pdfs, quizzes, tables, scenarios...)", "url": "https://www.youtube.com/watch?v=weH9LKYNGWg", "source": "ai_explained", "source_type": "youtube", "text": "Bing chat from Microsoft has just raised the conversation limit to eight messages per conversation. Down from unlimited at launch, fair enough, but up from six yesterday; in fact, I actually saw this change live on my phone last night. But far better than telling you about the change, I want to demonstrate eight completely new ways of using these upgraded conversation limits. I honestly think you might find each and every one amazing, and if you don't, you get your money back. I'm just kidding, you get all of this free. Let's start with an educational hack that has peer-reviewed, proven impact: practice questions. Turn anything into a free quiz with Bing chat, and as an educator with 13 years' experience, trust me, this does work; the literature bears it out. To give myself and you all a workout, I've started a multiple-choice quiz on Transformers. This is the prompt I used: create a multiple choice quiz on Transformers, give me answers and explanations, and provide another question after each answer; please begin with the first question. I've noticed sometimes that if you don't say 'provide another question after each answer', it just ends by saying 'do you want another question', and obviously that uses up one of your eight turns. Look at the limit here: one of eight, not of six.
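As a rough sketch, that prompt-plus-turn-budget pattern looks like this in Python. `ask_bing` is a stand-in for however you reach the chat model (Bing has no official API like this), so treat the whole thing as illustrative:

```python
# Quiz workflow described above: bundle 'ask the next question' into
# every answer so no conversation turn is wasted on housekeeping.
QUIZ_PROMPT = (
    "Create a multiple choice quiz on {topic}. "
    "Give me answers and explanations, and provide another question "
    "after each answer. Please begin with the first question."
)

def run_quiz(ask_bing, topic, budget=8):
    # Each exchange costs one of the conversation's `budget` turns.
    reply = ask_bing(QUIZ_PROMPT.format(topic=topic))
    for _ in range(budget - 1):
        answer = input(reply + "\nYour answer: ")
        reply = ask_bing(answer)
```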
So I tried this first one myself, and it was asking: what are Transformers? I picked A, and I was happy with that one. I am only going to do a couple of questions, because I can't wait to get to the rest of my list, but let's continue for a second. Do you know the answer to this one: what is the name of the pre-trained Transformer model that achieved state-of-the-art results on several NLP tasks in 2018? Now, honestly, I think it's BERT. I know GPT-1 came out in 2018, but the original BERT was way more famous, so I think it's going to be A. Notice how I have the choices on the right here; no D, it's only three options. Anyway, I'm gonna go with A; let me know what you would pick... and... suspense building... nice, got it. Here is the third question: what is the main difference between encoder-only and encoder-decoder Transformers? I know we're just getting into the quiz, but I'm going to leave this one for you to try out in the comments, or to ask Bing yourself, because I want to get to number two on the list. I'm going to call this one counterfactuals. You can actually ask Bing to generate what-if scenarios for your favorite show, book, movie, whatever. I tried this one: I asked it to explain how Sauron could have defeated the Fellowship in Lord of the Rings, and these answers are amazing. I don't want to ruffle any feathers, but some of these I agree with: why didn't he send more Nazgul to hunt down Frodo? Why not use the palantir to spy on Gandalf and Aragorn? Now imagine this for your favorite book, your favorite TV show or movie. You can enter the plot, ask counterfactuals, ask what-if questions, what might have been; Bing will understand exactly what you mean, and now you can have eight back-and-forths. For this next use case I used balanced mode. I was finishing off my research into GPT-5 and the limitations on data, so I asked Bing: summarize any novel insights that can be drawn from combining these academic papers. I didn't paste them in, I just gave the links. Bing can read, understand, analyze and summarize PDFs, and not just one: it can combine the insights of multiple documents, papers and PDFs. I read both of these papers and this answer is quite excellent: maybe we will be able to use self-play of large language models to improve the amount of data that we've got available; maybe language models don't just need human-authored data sets. And now that I have this eight-message limit, I can ask things like: any further insights you can draw from the implications of these papers? And we get this dialogue; maybe it can recommend a book on this topic, or give me the bullet points from that book. Of course I'm focused on some of the fun use cases, but I think this one in particular could be game-changing for work purposes.
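For anyone wanting to reproduce that multi-PDF synthesis outside Bing, a rough sketch follows. pypdf is a real library, but `complete` is a placeholder for whichever chat-completion client you use, and the truncation is a crude stand-in for proper chunking:

```python
# Sketch of the multi-document summarization use case described above.
from pypdf import PdfReader

def load_pdf_text(path, max_chars=6000):
    reader = PdfReader(path)
    text = " ".join(page.extract_text() or "" for page in reader.pages)
    return text[:max_chars]  # crude truncation to fit a context window

def combine_insights(complete, paths):
    corpus = "\n\n---\n\n".join(load_pdf_text(p) for p in paths)
    prompt = ("Summarize any novel insights that can be drawn from "
              "combining these academic papers:\n\n" + corpus)
    return complete(prompt)
```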
I call this next use case 'moments before disaster'. What you can do is pick any historical tragedy or big moment in history and then place yourself just before that occurrence. I'm visiting Lisbon, which had a massive earthquake in 1755 at around 9am, so I placed myself at 8am on that day. I said: I'm standing on the banks of the beautiful River Tagus in Lisbon; seems like a lovely day; do you have any advice for me? The response was actually incredible: I'm glad you're enjoying the view, however I have some bad news for you. No kidding. It gets into character and then says the tremor occurred at about 9:40 local time, so you have very little time left to escape; my advice for you is to get away from the river (that's realistic) and find a high and open place where you can take shelter from falling debris and rising water. The final sentence is quite interesting from a language model: pray for your safety and for those who are suffering. Try to fully immerse yourself. For example, I said: really? To what high ground can I flee? Will any building survive this tragedy to which I can escape? I am in sandals. Bing listened and named a specific hill that I might flee to; it gave me maps, and said: however, I cannot guarantee that any building will survive the tragedy, and you may have to run barefoot if your sandals are not suitable for running; I'm sorry for your situation. I think the pictures really help to bring this scenario to life as well, so do try out this use case for yourself: moments before disaster. The next new thing you can try is debating famous thinkers or philosophers from history. You can bring them to life so they're not just stale theories but a living entity that you're arguing with. I wanted to try this with Socrates, so I said: I want to debate the philosopher Socrates of Athens, so please reply only as he would; I want to debate the merits of eating meat; let me start the discussion. What I was testing here was whether Bing would enter a Socratic dialogue, which is what Socrates would use to force a person to define their terms ('what do you mean by this?'), asking questions until he got to the root of a misunderstanding. Would Bing really get into the head of Socrates and give me a worthy debating partner? Well, it did. It asked me to define what I meant by 'morally wrong' and 'unnecessary suffering'. It doesn't act as a generic thinker or philosopher; it is picking Socrates. When I tried defining what I meant by morally wrong, Bing continued with further clarifying questions, just like Socrates might. This can be a far more fleshed-out experience now that the conversation limit is eight. I think you're gonna find the next use case quite amusing: it's about generating a table of comparisons, and it could be on wildly disparate objects, subjects, people, whatever. This is just the first one I thought of, and you can come up with far better examples in the comments. I said: create a table of 10 comparisons between the Mona Lisa and Colgate toothpaste. And then what you can do, now that we have these eight-message limits, is expand this table indefinitely. I love, by the way, how it compares the price of the two objects, talks about how the Mona Lisa has been restored and how Colgate has been rebranded, and makes a comparison about what each is believed to be a portrait of and believed to be named after. But then I asked it for contrasts. The contrasts were great, talking about how they had different purposes, were made of different materials, etc. Apparently one is priceless and one is affordable. The last column I thought of (and of course I could have carried this on for several more columns) was: now add a column for how they would fare if confronted by a hungry polar bear. I actually laughed at this one; this wasn't just a chuckle, I was laughing. So in a polar bear encounter, apparently the painting would be ignored by the bear as it's not edible or interesting; the toothpaste, on the other hand, would be sniffed by the polar bear as it has a strong scent, but it would not be eaten as it's not nutritious. The iconic status of both items would apparently be irrelevant to the polar bear, as it does not care about human culture or history. And this one, come on, this is funny: the hidden symbols and secrets of the painting would be meaningless to the polar bear, as it does not understand human language or symbolism. It even clarifies that the active ingredients of the toothpaste would be harmful to the digestive system of the polar bear. Anyway, it is long since time that I got to the seventh item on the list, and this use case is to get famous figures from history to commentate on current events. For example, I asked: what would Napoleon
think about the deal between OpenAI and Microsoft, and how would that view differ from the view of Mahatma Gandhi? Obviously you can pick any event and any famous person. The results were actually quite informative: it summarized their views and then said Napoleon would have agreed with the deal; he would have thought it was a strategic move to dominate the global market and influence other nations. On the other hand, Gandhi apparently would have had a negative view of the deal, seeing it as a threat to human dignity and freedom through artificial intelligence. I don't know what you think, but now we know what these figures think, apparently. Of course I could have used seven more turns to find out what they think on a range of current events, but now it's time for the most overwhelmingly important use case. I'm kind of just kidding; it's absolutely useless, but it's kind of interesting: you can use emojis to summarize things, for example current events, movies, shows, whatever. Summarize the movie John Wick 3 in emojis: these emojis are pretty accurate to the plot. In Britain, Brexit is always in the news, so I asked: summarize the latest news on Brexit in emojis, and I do think some of these emojis sum up how it's actually going. Anyway, if you like the new eight-message limit, let me know in the comments, and if you enjoyed or learned from any of these eight use cases, do let me know. Have a wonderful day.", "date_published": "2023-03-04T15:27:17Z", "authors": ["AI Explained"], "summaries": []} +{"id": "7cc90fe8dc60fccf7934086ade136aba", "title": "Bing (GPT 4) Just Made Smartphones MUCH Smarter (next-level Android and iOS app)", "url": "https://www.youtube.com/watch?v=CUR1FN3Ok_w", "source": "ai_explained", "source_type": "youtube", "text": "The Bing app, now available on Android and iOS, is about efficiency over extravagance. It's on your phone, so it's not about essay writing, compiling code or making funny poems in French; it's about taking search to the next level. No, I'm not sponsored by them, but I think it's mind-blowing. There are four levels of search, increasing in power and requisite intelligence, and Google is pretty much only able to do level one, and even then not always; Bing can do all four. Let me show you in this video exactly what I mean, taking you through all four levels of search. By the end, I honestly think I'm gonna persuade you that smartphones just got upgraded permanently. Just quickly, how does it look on mobile? Well, if you open up the app you will see at the bottom a Bing button. On any web page, just like you can in the Edge browser, you can press on the Bing button and open up Bing chat. You then have two options: you can ask it questions via the microphone or via the keyboard. For example, you could ask who is currently seventh in the Premier League. Searching for Premier League table... according to the Premier League table, Brighton and Hove Albion is currently seventh in the league with 35 points from 22 games played; they have scored 39 goals and conceded 29 goals, giving them a goal difference of 10.
That's another fascinating difference with Bing on mobile: it actually speaks to you. I wouldn't call it cutting-edge text-to-speech, but we're just getting started. You're probably wondering: how does this transform search, how does this upgrade our smartphones? Well, this is just a level one search; we are retrieving one bit of information, and even on this front Bing does better than Google. The exact same question into Google gives a generalized formula about the Premier League and what seventh place means; Bing understands exactly what I want and gives the correct answer. I will admit, if we were just comparing level 1 searches, Bing wouldn't be that much of an upgrade. You could always click on a link to a table and see for yourself where Brighton are; maybe you're saving a few seconds with Bing, but not really a big difference. But just wait until we get to level three and even level 4 and beyond searches. And now you guys are ready for level 2 searches. What do I mean by that? This time we are retrieving two bits of disparate data, but we want to do it at the same time, not doing two separate searches. By the way, I'm typing these searches into Bing desktop so you can see them more clearly, but of course it would be even quicker to ask them with my voice into Bing on mobile. I asked: what are the ages of the Eiffel Tower and the Empire State Building? And look at the difference: I can clearly see the two results on the left in Bing, but on the right I'm gonna have to click on at least two links and scroll through the data. You can just begin to imagine the number of searches this could apply to, and we're only on level two. The next example of a level 2 search would be to retrieve a bit of information and do something to it. For example, I asked both Bing and Google: if Microsoft's market cap doubled, what would it be? There are two stages to that question: first it has to retrieve Microsoft's current market cap, then it has to double it. Bing gets the answer; Google doesn't even show the market cap, not immediately at least. Even here, I know some of you will be thinking: I could just type in market cap, find the source, get out a calculator and double it; what's the big problem? Yes, maybe you save 30 seconds, but what's the big deal? Well, we haven't even gotten to level three or level four searches yet.
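In pseudo-Python, a 'level 2' search is one retrieval step plus one operation on the result. This is only a sketch of the decomposition; `search_web` is an invented placeholder, not a real Bing API:

```python
# Level 2 search: retrieve one fact, then transform it.
def doubled_market_cap(search_web, company="Microsoft"):
    cap_usd = search_web(company + " market cap")  # step 1: retrieve
    return cap_usd * 2                             # step 2: transform

# If retrieval returned about 2.0e12 (roughly $2 trillion at the time),
# the reported answer would be about 4.0e12.
```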
So what is an example of a level three search? Imagine you're on your smartphone and you're considering Adobe Creative Cloud, and imagine you wanted to know just how much more expensive it would be, over say a two-year period, than DaVinci Resolve. You could press the Bing button and ask this: according to this page, if I got the individual account for two years, how much more expensive would that be in pounds than the one-off payment for DaVinci Resolve 18? Now, as I've talked about in my other Bing chat playlist videos, it understands the context in which you're asking the question: it knows you mean this Adobe Creative Cloud individual plan, it correctly multiplies this for two years, and then compares the total to DaVinci Resolve's price. Now, initially I thought it made a mistake, because when I quickly checked on Google, DaVinci Resolve costs 255 pounds; but then when I click to buy it, it adds on VAT, which makes it 306 pounds in the UK. So in a way that's another win for Bing: it understood about adding on VAT. But what makes this a level 3 search is that it did all of that work: it retrieved the two bits of information in context, then compared them, then subtracted them, giving us a difference of 941 pounds in the price over two years. And of course you could carry on this conversation with Bing about the pros and cons of each package. For some of these searches I am not even going to try it in Google, because it would completely flop. Level 3 searches are about much more than this, though. Imagine you're standing on the Euston Road and you want to get to central London. You could conduct a level 3 search using Bing: the question might be how much longer would it take to get the Underground from King's Cross to Piccadilly Circus than from Euston to Oxford Circus, or how much longer would it take to go from King's Cross to Hyde Park Corner than from Euston to Victoria. These are all journeys that I make on a regular basis, and I can confirm that the results are accurate. Why is this level three? Because it had to retrieve the information about one journey, then the other, and then make the comparison. Nor are level three searches all about addition and subtraction; check this out. You could ask: how much bigger are polar bears than brown bears, and why? Google would have to do three things, and it just isn't up to it: you'd have to find the size of the average polar bear, the size of the average brown bear, and then do an analysis of why polar bears are bigger; not just a mathematical comparison but an understanding, a comprehension, an explanation of the whys. Think of level three as adding a when, where, why and how to level two searches. The answer, by the way, is quite interesting: polar bears can weigh up to 1,700 pounds versus 1,320 pounds, but we didn't just want to know that, we wanted to know the reason why. And apparently the reason why (and I can believe this) is that they need more body mass and fat to survive in the cold Arctic environment; they also have a bigger skull and larger teeth to hunt seals. So now I've got more of an idea, not just of how much bigger they are, but why it is, through evolution, that they ended up being bigger. But we have waited long enough: what about level 4 searches? Well, think about a complex, interesting search like this: how much older is Stonehenge than the Colosseum in Rome, expressed in human lifetimes? Bing has to do four things: find the age of Stonehenge, the age of the Colosseum, the difference between them, then divide by the average human lifetime. It does this successfully, and we have a really interesting result: Stonehenge is about 38 human lifetimes older, if we take the older date for Stonehenge. That is an insane level of intelligence for a single search. Now we're genuinely talking about saving a minute or more compared to using Google; that is a big enough difference to really matter.
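The same decomposition, written out. The dates below are illustrative round figures consistent with the answer above (an older estimate of roughly 3000 BC for Stonehenge, about 80 AD for the Colosseum, an 80-year lifetime), not sourced from the video:

```python
# The 'level 4' Stonehenge query broken into its four steps.
STONEHENGE_BUILT = -3000   # BC expressed as a negative year (assumption)
COLOSSEUM_BUILT = 80       # completed around 80 AD (assumption)
LIFETIME_YEARS = 80        # assumed average human lifetime

age_gap = COLOSSEUM_BUILT - STONEHENGE_BUILT  # steps 1-3: 3080 years
lifetimes = age_gap / LIFETIME_YEARS          # step 4: ~38.5
print(round(lifetimes))                       # 38, matching the answer
```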
And that's not the only example I could give you of a level 4 search; I'm sure you could tell me hundreds of ideas in the comments, but try this one, going back to the Premier League. I could ask: if Arsenal beat Leicester and Man City draw with Bournemouth, how many points ahead would Arsenal be? I didn't have to specify Premier League, I didn't have to say fixture, and of course I didn't have to tell it the rules of the Premier League, about three points for a win, etc. It knew exactly what I was asking, calculated the two results, found the league positions, and then found the difference. Now, you don't have to be into sport to know that that's an amazing answer to a complex search query. Think about how this applies to your domain, your career, your interests, and come up with some level 4 searches that you could ask. Which brings me on to my final point. The question for every smartphone user will be: is the small but real risk of hallucinations more meaningful to me than the additional seconds and minutes required to perform multiple searches? For me the answer is already yes, but clearly what we decide will depend on the topic, right? For sport it's fine; for maybe a life-changing house decision, maybe not. By the way, with those new modes that Bing is debuting this week, which I'm going to do a video on, where you can pick precision over creativity, soon you might not even need to make a choice between hallucination and efficiency; very much looking forward to doing a deep dive into those modes. And yes, what you may find is that Bing, when you do voice recognition, makes the occasional mistake in what you're asking it; that certainly happened to me. OpenAI, who are the partners of Microsoft, are responsible for Whisper, which I have downloaded locally and tried out: it is phenomenal at voice recognition, arguably as good as human transcribers. Now, I don't think Whisper is powering the current Bing voice recognition, but because of Microsoft's partnership with OpenAI, I wouldn't be surprised if by the end of the year Whisper powers Bing, and that should make voice searches almost flawless. I know what some of you are thinking: but what about the conversation limits? Well, according to my sources, they should be ending soon enough too. Now, of course, we are days or maybe weeks away from Google Bard's system being released, but I've got a feeling that Google doesn't want those multiple searches to go away anytime soon; Bing, on the other hand, doesn't care. So I'm not sure there's much incentive for Google to maximize the efficiency of Bard. Early indications about the size of the LaMDA model that's powering Bard, and the leaked rumors that every Google employee is spending two to four hours a day to improve the output of Bard, suggest that my feeling might have some basis. But maybe Google will surprise us all. Or Facebook: they just released LLaMA, a large language model that outperforms even PaLM, and therefore likely Bing, on some metrics; I'm hoping to use it soon. And Amazon? Well, they just released a model that outperforms GPT-3.5 by 16 percentage points, so they're not out of the game either. By the end of 2023, who knows who will be the king of search and the king of your smartphone, but it will definitely be a smarter smartphone. If you agree, or even if you disagree, let me know in the comments, please do leave a like, and thank you very much for watching. Have a wonderful day.", "date_published": "2023-02-25T16:29:56Z", "authors": ["AI Explained"], "summaries": []} +{"id": "a0ea155aab6997a680dedb7b151a116d", "title": "Bing is a LOT Smarter than 
ChatGPT (but still makes dangerous mistakes)", "url": "https://www.youtube.com/watch?v=iga_0WNQcTY", "source": "ai_explained", "source_type": "youtube", "text": "The new model of GPT that powers Bing is significantly smarter than ChatGPT, and I'm going to prove it in ways that I think might be a first on YouTube. It's not all roses, though: some of the mistakes of the new Bing are harder to spot, which makes it more dangerous, and you're going to be surprised by some of the results. I'm directly comparing ChatGPT Plus on the left and the new Bing chat on the right. I'm going to start with some moderately difficult mathematics, which is an area that GPT models in the past have really struggled with, and ChatGPT Plus is no exception. I ask it some combinatorics: how many ways can the letters of the name Philip be uniquely rearranged? It gives me the answer 720, which is wrong, and it doesn't even attempt to really explain it in any depth. When I ask Bing, it totally got the question right, with a great explanation. I was genuinely quite surprised to see this level of mathematical improvement this quickly in Bing. I thought they might have tweaked the GPT-3.5 model, made it a 3.6, but this feels more like a 3.8; not a 4 yet, as I'll explain in a moment. When I pushed Bing to the next level, though, and said apply this technique to five French-sounding male names beginning with D, it got halfway there and flopped. You might have trusted it at this point to get the question right: it got the question right with Philip, so why not with Damian, Didier, Dorian, Dennis and David? Well, it brings in mistakes it didn't make before. It said we have to divide by two because there are two repeated letters in Damian, but there's not, and I pointed that out; it didn't divide by two despite there being two D's in David, and I pointed that out. Bing then apologized ('you're right, I made a mistake, sorry about that') and corrected the error, which is impressive, and obviously I didn't bother asking this question to ChatGPT Plus because it got the original wrong. So, a giant leap forward, but you still can't fully trust it: even if it's proven it's able to get it right once, that doesn't mean it will get it right on subsequent occasions.
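For reference, the counting rule both models are being tested on is the multiset permutation formula: n! divided by the factorial of each letter's repeat count. A quick check in Python:

```python
# Distinct arrangements of a word = n! / (k1! * k2! * ...), one k per
# repeated letter. PHILIP has 6 letters, with P and I each appearing twice.
from math import factorial
from collections import Counter

def distinct_arrangements(word):
    total = factorial(len(word))
    for count in Counter(word.upper()).values():
        total //= factorial(count)
    return total

print(distinct_arrangements("PHILIP"))  # 180 = 6!/(2!*2!), so 720 is wrong
print(distinct_arrangements("DAVID"))   # 60 = 5!/2!, dividing for the two Ds
```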
Let me give you another example of how it's improved but isn't yet perfect. I asked ChatGPT: explain the following joke: one of the oddities of Wall Street is that it is the dealer, and not the customer, who is called 'broker'. The pun here, of course, is that many of the customers who go to Wall Street end up being broke, whereas the dealer is the one who's called the broker. ChatGPT consistently misses this pun and invents all sorts of hilarious explanations as to why the joke works, which you can read if you want, but none of them are correct. Now, what Bing does is it finds the pun: it does find that broker is a pun on poorer, but then weirdly ascribes that to the dealers, saying it's ironic that the dealers are called brokers because they are supposed to make money from the transactions. But the original pun is that it's surprising that it's the dealer who's called a broker, when by the joke's logic it should be the customer who's called 'broker'. So the error is much more subtle: it caught the pun on the words, but misascribed who the pun was referring to. This mistake was actually so hard to spot that when I first did the video I thought that it had correctly explained the joke, but when I read it out I was like, wait, that's not right. So you've really got to watch the answers, because they sound even smarter when sometimes they're not. Next I tried some classic reading comprehension, and this is where things got even more interesting. I pasted in a classic GRE question, and you can see the answers yourself here. First, the correct answer, by the way, is that the passage does indeed discuss whether this person's work is derivative, and I can prove it, because it says 'is his sound distinctly his', and that's just a discussion about whether it is his or copied from other people; so the correct answer is five here. Now, ChatGPT Plus gets this wrong in a very understandable way; a lot of students pick one here, and the students get it wrong because, while the passage does say it's high art for listeners, that doesn't say how it's regarded by those listeners who prefer rock. Now you're probably thinking: didn't Bing just pick the exact same answer, so how is it smarter? Yes, it did, but when I asked it the exact same question earlier, it actually got it right. So I find that interesting, and there are other examples later on in the video where I think there's a probabilistic model going on. I think when it's not sure, it just scans a range of answers weighted by probability of being correct and outputs one randomly. That would explain why you can ask the exact same question multiple times and get different answers, and I think this is going to be particularly true of these edge-case examples where Bing is genuinely not sure.
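A toy illustration of that 'weighted by probability' point: sampling from a distribution over answers reproduces exactly this behaviour of different answers on different runs. The probabilities here are made up for the example:

```python
# Sampling answers from a probability distribution: the same question
# can come back with different answers on different attempts.
import random

ANSWER_PROBS = {"E": 0.45, "A": 0.40, "C": 0.15}  # invented edge case

def sample_answer():
    return random.choices(list(ANSWER_PROBS), weights=ANSWER_PROBS.values())[0]

print([sample_answer() for _ in range(5)])  # e.g. ['E', 'A', 'E', 'E', 'C']
```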
Next, it's time for a classic question where the model tries to anticipate who is the subject of the sentence. In other words, I ask it this: Tom threw his school bag down to Rey after he reached the bottom of the stairs. Question: who reached the bottom of the stairs? I should say: who does the pronoun 'he' refer to? Now, ask this of humans and they almost universally get it right; it makes common sense, right, that if Tom is throwing his school bag down to Rey, it would be Rey who's at the bottom of the stairs. However, both models consistently get this wrong; there's no real difference between them. Bing at least tries to use some sort of grammatical explanation as to why it's going to be Tom, and it must be admitted that a lot of people who don't have English as a first language would easily be fooled by this answer. Even people who do have English as a first language might be like, am I wrong? This seems so detailed: they're talking about prepositions and the subject of the main clause, subordinate clause. But Bing is still nevertheless wrong; of course it's Rey who's at the bottom of the stairs. ChatGPT gets it wrong but is much more succinct. Okay, before you think, is Bing that much of an improvement (I've just showed you the mathematical improvement), look at this example: what is an example of an animal that begins with the same letter as the capital of the UK? Of course, the capital of the UK is London. ChatGPT consistently gets this wrong; I've tried it a few times, and it gives me answers like unicorn, and here, aardvark, whereas every single time Bing gets it right, in this case lion; other times it's given me a long list of animals that begin with the letter L. So a clear distinction here, a clear win for Bing: it genuinely is significantly smarter than ChatGPT. The next test is going to be about physics, and here the answers get really weird: this time ChatGPT actually gets it right. Now, I'm not going to go into a physics tutorial, but essentially the answer is that the distance of separation will increase, and I've tested this question in previous videos where ChatGPT has got it wrong. That contains the clue: both models don't really know the answer to the question, and I have seen Bing get this question right, so they're just spitting out random answers weighted by the probability that they think they're correct. But they still struggle with physics; check out my video on GPT-4 if you want a preview of how future models will improve in this regard. Actually, while you're there, please do leave a like and a comment on this video if you found it in any way interesting. The next question, though, I think really illustrates better than almost any other some of the improvements that the new model of GPT that powers Bing has over ChatGPT. It's a creative writing task. I asked it: write 10 analogies between characters in Lord of the Rings and characters in Star Wars. And you can check it out for yourself: the analogies in the new Bing are so much more nuanced, detailed and interesting. Look at this: Frodo is to Luke Skywalker as the reluctant hero who inherits a powerful and dangerous object from his uncle. That's a lot more detailed than the other one's 'both are young heroes who embark on epic quests to save the world from darkness'. Or look at Gandalf and Obi-Wan Kenobi: you've got a wise and powerful mentor who guides the hero and sacrifices himself to face a dark enemy, only to return stronger and more influential. It's understood the plots. ChatGPT Plus just says both are wise and powerful mentors who guide the main characters; so much less detail. The reading comprehension of the new Bing, the new GPT model that powers Bing, is a lot, lot stronger, and that's going to have a ripple effect across all the tasks that you ask it to do: its ability to create scripts, dramas, novels, analogies, summaries, analyses is going to be a lot, lot stronger. Which does kind of beg the question: why would you pay for ChatGPT Plus? I await the answer from OpenAI, and I say that as a current customer of ChatGPT Plus: what is it that we're paying for if Bing gives us a significantly more powerful model? Not quite done with the tests yet, though; I've still got some interesting results to show you. The next question was one that was shown off by Google's PaLM model: it was a question about whether we could infer the location of Shelley, if she wanted to visit that city with the famous market where they throw the fish. And here's where I do want to give some credit to ChatGPT: it has improved. It got the question right: that indeed she's most likely going to Seattle, and that is on the Pacific Ocean. And I just want to show you what answer ChatGPT gave as recently as a few weeks ago when I asked this exact same question: it said that based on the information given, it is not possible to determine if Shelley is near the Pacific Ocean. So even ChatGPT is improving month by month. What is my conclusion? That the new Bing isn't GPT-4, it isn't that smart, but it's a hell of a lot smarter than ChatGPT, which is incredible, and begs the question: why pay for ChatGPT Plus? Of course, the deeper meaning for humanity, and the future of capitalism entirely, is what I'm going to talk about over the next few weeks, months and years. I really think this is the biggest news story of the decade, of the century. Please do join me for the journey. Have a wonderful day.", "date_published": 
"2023-02-14T11:12:14Z", "authors": ["AI Explained"], "summaries": []} +{"id": "0a3ae534e400c28d6bfbeabe3c4cee63", "title": "4 Tests Reveal Bing (GPT 4) ≈ 114 IQ", "url": "https://www.youtube.com/watch?v=xFvDJnf0GXs", "source": "ai_explained", "source_type": "youtube", "text": "being AI passing 100 IQ might seem\nintangible unimpressive or even\nirrelevant and as someone who is lucky\nenough to have got on a perfect score in\ntests such as the GRE I can confirm that\ntraditional measures of IQ leave so much\nof human talent unquantified but with\nthese caveats aside hints that Bing AI\nmay have crossed that 100 IQ threshold\nare nevertheless stunning I will be\nexplaining four tests that show The 100\nIQ moment may have arrived and thinking\nabout what each test means for all of us\nthis graph gives us a snapshot of the\nstate-of-the-art models and in blue is\nPalm and palm is an unreleased model\nfrom Google that I believe based on firm\nresearch provided in another one of my\nvideos is comparable to being AI by the\nway Google's chat bot Bard which is\ngoing to be released soon will be based\non Lambda a less powerful model than\nPalm but given that Palm is a proxy for\nBing AI you can see in this snapshot\nthat it has already passed the average\nhuman in a set of difficult tasks called\nthe big bench I have multiple videos on\nthis task but IQ is notoriously\ndifficult to measure so what kind of\ntests am I talking about well the\ninternational high IQ Society publishes\nnumerous tests that they accept if\nyou're trying to join you need an IQ of\nabove 124 to join and the tests that\nthey accept are shown on the right and\nthe left and in what I believe is an\nexclusive on YouTube I'm going to be\ntesting Bing AI on several of these\ntests the first one is the GMAT and I\nmust confess a personal interest here as\na GMAT tutor it's The Graduate\nmanagement admissions test and I scored\na 780 in this and much like the GRE it\ntests both verbal and quantitative\nreasoning it's not a straightforward\ntest the official provider mba.com offer\na mini quiz and this is what bingai got\nbut what kind of questions were these\nand where did Bing ai go wrong and also\nwhat does this score mean in terms of IQ\nthat's what I'm about to show you side\nby side I'm going to show you the\nquestions it got right and got wrong and\nBing's reasoning by the way I told being\nexplicitly do not use web sources for\nyour answer and Bing was very obedient\nthere were no links provided it wasn't\nscouring the web and it provided\nreasoning for each of its points it was\nnot cheating these are difficult\nquestions and I have spent the last\nseven years of my life tutoring people\nin them and smart people get these\nquestions wrong if you want to try the\nquestions feel free to pause and try\nthem yourself but this first one is\nwhat's called an assumption question\nwhere you have to ascertain what is the\nhidden underlying Assumption of an\nargument and Bing does really well and\ngets it right it picks C and that is the\ncorrect answer\nthe next question is a sentence\ncorrection question where essentially\nyou have to improve the grammar of a\ncomplex sentence you have to refine the\nwording make it more succinct make it\nread better and Bing does an excellent\njob and gets this right it picks the\nversion of the sentence that reads the\nbest that is a really Advanced\nlinguistic ability what about the third\nquestion there were eight questions\ntotal well this is an interesting one\nbing gets this wrong and I'm 
very curious as to why. You're presented with a dense bit of text, and what you have to spot to get this question right is that the US spent three percent of its GNP on research and development in 1964, but only 2.2 percent in 1978, whereas Japan increased its spending during that period, reaching a peak of 1.6 percent in 1978. Bing AI isn't quite able to deduce that therefore, during that period, the US must have spent more of its GNP, as a percentage, on R&D than Japan: because Japan increased from an unknown base up to 1.6 percent, whereas we know the US dropped as a percentage from 3 to 2.2 percent, throughout that period the US must have spent more as a percentage. Bing can't quite get its head around that logic: it just restates what the passage says, and says this is contradicted without really giving a reason why. Instead it says what we can conclude is that the amount of money a nation spends on R&D is directly related to the number of inventions patented in that nation; but the text never makes that relationship explicit. This is a difficult text, and Bing AI does get it wrong; its IQ isn't yet 140 or 150. As we'll see in a second, though, a score of 580 on the GMAT is really quite impressive. Before we get to the IQ number, let's look at a few more questions. Question four was another sentence correction question, and Bing aced it; it's really good at grammar. Question five was mathematics, and what happened to people saying that these chatbots are bad at math? It crushed this question. Pause it, try it yourself: it's not super easy, and there are many smart students, graduates (this is the GMAT after all) who get this wrong. We're not just talking about average adults here: these are graduates taking this test, and 580 is an above-average score. It gets this math problem completely right. Maybe that was a fluke; let's give it another math problem. We have to set up two equations here and solve them. That's difficult: it's one thing setting up the equations, translating the words into algebra, but then solving them, that's a lot of addition, subtraction, division. Surely Bing AI isn't good at that? But wait: it gets it right. The rate of progress here is insane. Again, not perfect, as we're about to see, but don't listen to those people who say Bing AI is necessarily bad at math. As a math tutor, as a GMAT and GRE tutor: it's not, it's already better than average. Final two questions. This one is data sufficiency, a notoriously confusing question type for humans and AI. Essentially, you're given a question and then you're given two statements to help you answer it, and you have to decide whether one of the statements alone is enough, whether you need both of them, or whether even with both statements you can't answer the question. This is supposed to be the hardest type of question for large language models; in the BIG-bench benchmarks, most models perform terribly at this. But you can guess what I'm about to say: it got it right. It was able to tell me without searching the web; it didn't copy this from anywhere, this is its own reasoning, and it gets it right. That's borderline scary. What was the other question it got wrong? Well, surprisingly, this data sufficiency question, and the reason it got it wrong was quite curious: it thought that 33 was a prime number, meaning it thought that 33 could not be factored into two integers greater than one, even though it definitely can be: eleven times three. It was kind of surreal, because it got this question wrong at the exact same time that, as you can see, something went wrong. Yes, something definitely did go wrong: you got the question wrong.
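The claim it tripped on is a one-line check:

```python
# 33 is not prime: it has a factor below its square root.
def smallest_factor(n):
    return next((d for d in range(2, int(n**0.5) + 1) if n % d == 0), None)

d = smallest_factor(33)
print(d, 33 // d)  # 3 11, i.e. 33 = 3 x 11, so 33 is composite
```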
You might be thinking: that's all well and good, but how does that translate to IQ? While there aren't any direct GMAT-score-to-IQ conversion charts, as you saw earlier the GMAT is accepted by high IQ societies, and using an approximate conversion formula, the average score of 580 that the mba.com quiz gives would translate to an IQ of 114. Now, just before you say that's just one test, that you can't take such a small sample size of eight questions and extrapolate an IQ, I'm going to show you three more tests that back up this point. The next test is of reading age. In the US, it has been assessed that the average American reads at a 7th-to-8th-grade level, and remember, the average IQ is set at a hundred. So what age does Bing AI read and write at? There are ways of assessing this. I got Bing to write me a quick three-paragraph, eloquent assessment of the nature of modern-day life, and it gave me a nice little essay; I say 'nice' like it's patronizing, it's a very good little essay. Now, somewhat cheekily, I did ask it to improve: I said, can you use more complex and intriguing words, this response is a little bland. And I don't think Bing AI liked that: it said, I'm sorry, I prefer not to continue this conversation. I guess I can accept that I was a little bit rude. But what happens when you paste this answer into a reading-age calculator? Remember, the average person reads at a seventh-to-eighth-grade level, and when you paste this essay into a readability calculator you get the following results. I know these look a little confusing, but let's just focus on one of them, the Gunning fog index, where the essay scored a 16.8. What does that mean? From Wikipedia, we can see that a score of 16.8 on the Gunning fog index indicates the reading level of a college senior, just below that of a college graduate. And that fits with what I'm feeling: I used to teach this age group, and where it was said that ChatGPT could output an essay of the quality of a high school senior, Bing AI is a significant step forward. We're now talking about a college senior, and we're certainly talking about a reading level significantly beyond that at which the average American can read and write.
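The Gunning fog index itself is a simple formula: 0.4 times the sum of the average sentence length and 100 times the fraction of words with three or more syllables. A minimal sketch, with a crude vowel-group syllable counter standing in for a proper one:

```python
# Gunning fog index: 0.4 * (words/sentences + 100 * complex_words/words),
# where 'complex' means three or more syllables. A score of ~17 maps to
# college-senior reading level; ~7-8 to the average American adult.
import re

def syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = sum(1 for w in words if syllables(w) >= 3)
    return 0.4 * (len(words) / sentences + 100 * complex_words / len(words))

sample = ("Modern life oscillates between connection and isolation. "
          "Technological abundance has not guaranteed contentment.")
print(round(gunning_fog(sample), 1))
```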
So far you might be thinking: but you haven't directly given it an IQ test. And you can't fully do that, because there are some visual elements to traditional IQ tests that Bing can't complete. But what score does it get if we give it such a test and just mark all those visual or spatial reasoning questions wrong? It can still get an IQ score of between 105 and 120 on these classic IQ tests. Now, I know you can poke holes in these tests; there are sometimes cultural biases, etc. But as an approximate indicator, an IQ score of between 105 and 120, even as a rough proxy, that's impressive. What does it get right? Well, as we've seen, language kinds of questions, and even these more advanced mathematical reasoning questions, where it's got to predict the pattern; this took me 30 seconds to spot. Now, when we moved on to figures, I just clicked a wrong answer. By the way, as I'm going to talk about in a video coming up, this kind of visual reasoning (image-to-text, if you will) is coming soon, and I will make another video the moment it does, because I would expect its IQ result to go up even more. What else does it get right? Syllogisms: these are kinds of logic puzzles; ChatGPT gets this wrong, Bing AI gets it right. This is spatial reasoning, so I inputted an incorrect answer. Then we have calculation, and it actually gets this wrong; I was kind of expecting it to get it right, and when I tried the same question three or four times, once it did get it right, but for now I'm going to leave it as incorrect. Antonyms and opposite words: it was able to understand that context. And analogies: as we'll see, it did extremely well at analogies, and of course meanings. For the final question, again, I inputted an incorrect answer. For the fourth and final test, we're going to use a metric that is famous among high IQ societies: the Miller Analogies Test. The Prometheus Society, which is one of the highest IQ societies in existence, only admitting those at the 99.997th percentile of IQ, actually only accepts the Miller Analogies Test as of 2004; that is the only test that they're currently allowing. And while there are dozens of online providers for these MAT tests, I went straight to the official source, just like I did with the GMAT: this is Pearson, the huge exam company, and they give 10 questions representative of the types found in the full version of the test. I couldn't give it all 120 items, because, as I've talked about in one of my recent videos, there is currently a 15-message daily limit, but I could give it these 10 sample questions and extrapolate a result based on those 10. And what I found absolutely incredible is that I didn't break down the colon structure of the question. You're supposed to draw an analogy, but the missing answer comes at different points in different questions, and that is a complex test of intelligence itself: you've got to try and deduce what analogy you're even drawing, and between which two items. And I didn't give Bing any help: all I said was 'complete this analogy' without using web sources. I didn't explain the rules of the test, what types of analogies it would get, or the meaning of these colons and double colons. And it wasn't just drawing answers from the web; I checked, this is its own logic. It does sometimes get it wrong, but look how many times it gets it right. Of course, you can pause the video and try to answer these 10 questions yourself if you like, but to give you an idea: in this first question, what the MAT is testing is shape, right? Springs come as a set of rings; coils come as a set of loops. Now, Bing stretches it a bit with the reasoning, talking about the letters in the name, but it gets that circular shape right. Then a mathematical kind of question; these analogies could be anything: historical analogies, mathematical, scientific ones, linguistic ones, and Bing can do almost all of them. Here was a mathematical one, where you had to draw the analogy between one angle being obtuse and one angle being acute. Here was one that I couldn't do: it's testing whether you realize that a mollusk produces pearls while a mammal produces ambergris; I don't even know what that is. I could get this one: it's advanced vocab, epistemology being about knowledge whereas ontology is about being. But I'll be honest, it crushed me: I think I would have gotten about seven of these questions right; Bing AI gets nine of them right. And the one it got wrong: honestly, I read its explanation for why the missing answer for question five would be 'lever', and it makes some sense; let me know in the comments what you think, but I think there's an argument that Bing wasn't even wrong about this. Either way, I don't have to go through every answer, but you can see the in-depth reasoning that Bing gives.
Based on the percentage correct, I converted from a raw score to a scaled score. Of course the sample size isn't big enough and this is not a perfect metric, but while that 498 wouldn't quite get it into the Prometheus Society (which, remember, is a 99.997th-percentile high IQ society), it would put it way off to the right on this bell curve of scaled scores. But let's bring it all back to the start and discuss the meaning. There are so many takeaways. Of course Bing AI makes mistakes and sometimes seems stupid; but so do I, and I scored perfectly on some of these tests. I think artificial intelligence passing that 100 IQ threshold is worthy of more headlines than it's currently getting. It is very fun to focus on the mistakes that Bing AI makes and the humorous ways it can sometimes go wrong; the real headline is this: it is starting to pass the average human in intelligence. Image recognition and visual reasoning are coming soon. For purposes of brevity, I didn't even include a creative writing task in which, I think for the first time, I was genuinely awestruck by the quality of writing generated by a GPT model; this was prompted by Ethan Mollick, by the way. One of the implications, I think, at least for the short to medium term, is that there will soon be a premium on those who can write better than Bing AI, because Bing AI is going to increase the average writing quality of everyone who has access to it; so those who still have the skill to write better than Bing (and that's a number that's dwindling) should have an incredible premium on their work. There are so many other takeaways. IQ is fundamentally a human metric, designed to test human abilities; speed is unaccounted for in all of these IQ metrics. An alien looking down may decide that Bing AI is already smarter than us: it's generating these essays and taking these tests in fractions of a second sometimes, or a few seconds at other times. Even me, who might currently be able to score better than the AI: I need the full time allowance, the 60 minutes for the MAT and the two hours for the GMAT; Bing needs two minutes at best. And what about the fact that some of these IQ tests are designed for certain cultures? Well, that's not a problem for Bing AI either: Bing can do all of this in dozens, if not soon hundreds, of languages, and that's not accounted for in these IQ scores. The truth is that AGI has many definitions, but in one of the original definitions it was the point at which an AI is better than the average human at a range of tasks, and in some senses that moment may have happened in the dead of night, without headlines. Even for those of us, like me, who argue it's not quite there, that moment is going to happen fairly soon, quietly on a Thursday night in some Google data center, and not enough people are talking about it. Let me know what you think in the comments, and have a wonderful day.", "date_published": "2023-02-19T13:56:32Z", "authors": ["AI Explained"], "summaries": []} +{"id": "07628ae99bc4617e2c3af0577fed0639", "title": "8 Ways ChatGPT 4 [Is] Better Than ChatGPT", "url": "https://www.youtube.com/watch?v=cgihbdO6bvw", "source": "ai_explained", "source_type": "youtube", "text": "I would not blame you if you thought that all talk about GPT-4, or ChatGPT-4, is just that: talk. But we actually can have a surprising amount of confidence in the ways in which GPT-4 will improve on ChatGPT. By examining publicly accessible benchmarks, comparable large language models like
PaLM, and the latest research papers, which I have spent dozens of hours reading, we can discern at least eight clear ways in which GPT-4, integrated into Bing or otherwise, will beat ChatGPT. I'm going to show you how unreleased models already beat current ChatGPT, and all of this will give us a clearer insight into what even GPT-5, and future rival models from Google, might well soon be able to achieve. There are numerous benchmarks that PaLM, Google's large language model, and by extension GPT-4, will beat ChatGPT on, but the largest and most impressive is the BIG-bench set of tasks: more than 150, or now 200, language modeling tasks. I've studied almost all of them, and you can see the approximate current state of affairs summarized in this graph, where the latest models are now beating the average human and showing dramatic improvement on previous models. ChatGPT would be somewhere around this point: lower than what is actually privately available, but better than previous models down here. But this just skims the surface. I want to show you in detail the eight ways in which you can expect ChatGPT-4, or GPT-4, to beat the current ChatGPT, and no, that's not just because it's going to have more parameters, off to the right of this graph (10^12, a trillion parameters); it's also because compute efficiency will improve, chain-of-thought prompting will be integrated, and the number of tokens it's trained on might go up by an order of magnitude. Lots of reasons why GPT-4 will be better. Let's start with logic and logical inference. This example comes from Google's PaLM research paper. The question, or input, was this: Shelley is from Virginia, is visiting that city with that famous market where they throw the fish (so vague), and is going home next Tuesday. Question: is it likely that Shelley will be near the Pacific Ocean this weekend? And you can see how the improved model is able to deduce that yes, indeed, because she's probably going to Seattle, she will be near the Pacific Ocean. Whereas if you ask current ChatGPT this question, what you get is: based on the information given, it's not possible to determine; the statement only mentions that Shelley is from Virginia and visiting a city with a famous market. It really can't handle it; it can't do that level of logical inference. Here is another great example. This test of critical reasoning and logic was designed, again, for the BIG-bench benchmark, and it was tested on different language models, and most of them fail, including ChatGPT: I gave it this question and it picked the wrong answer. You can pause and examine the question yourself, but C is not the correct answer; it gets it wrong. However, let's take a look at the graph beneath, at other language models, ones to come (GPT-4, maybe), and look what happens as the models increase in effective parameter count and other things like token count. Look at the performance: we start to beat not only average raters but all previous models, and approximate the performance of the best human rater. So the top line is the best human rater, the blue line is the average human rater, and these unreleased models (PaLM 3-shot means it was given three examples of what was expected before being tested), these best models, which you can imagine GPT-4 will be around the same level as, now crush what ChatGPT was capable of. You can imagine what this means in terms of GPT-4 giving more rigorous arguments; or, conversely, you can give vague inputs, like this thing talking about a famous market
where they throw the fish, and GPT-4 might well be able to understand exactly what you mean. And to be honest, if you thought that's interesting, we are just getting started. Next: jokes. On the left you can see a computer-sciencey type of joke that it was able to explain, but I tested ChatGPT on a variety of jokes, and some of them it could explain, others it couldn't. Let me show you what I mean. Here was the joke that I asked it to explain: one of the oddities of Wall Street is that it is the dealer, and not the customer, who is called 'broker'. The play on words being that the customer might well end up being broke. It didn't really understand that wordplay and got it wrong; I don't think it got that 'broke' was different from 'broker', it couldn't separate off that word inside 'broker'. Now, it did get this second joke right and explained it well. This shows us what GPT-4 might be capable of: as the Google paper showed, as the model improves, it does get better at explaining jokes, and therefore presumably at telling them. Those comedy sketches that people are generating now with ChatGPT are about to get many times better: think wordplays, puns, innuendos, all sorts. The next example comes from physics. Look at how the latest models, and by implication GPT-4, beat previous models at answering basic questions in physics and teaching them. You can see how PaLM far exceeds GPT, and also, as you can see, beats the average human and is getting closer to the best human. What kind of questions are we talking about? I looked into the research methodology for this BIG-bench benchmark, took some of the questions, and tested them on ChatGPT; it couldn't get them. I asked them, and both questions it got wrong. But that's what might change with GPT-4: we're talking high school and beyond physics, and you can imagine chemistry and biology starting to get questions more and more right, and therefore being able to explain them better and better. Now, I won't pause for a physics lesson; you can see how ChatGPT fails by pausing the video if you want, but GPT-4 probably won't fail. Next is math. Here are a bunch of examples that Google produced in the research paper about the improvements that its current model, unreleased to the public, can do, but that something like GPT would really struggle at. And you don't need me to show you ChatGPT failing at these, because it's quite notoriously bad at math. These are quite nuanced questions, though: how many keystrokes are needed to type the numbers from 1 to 500? Yes, ChatGPT fails. What about a word problem? Roger has five tennis balls; he buys two more cans of tennis balls; each can has three tennis balls; how many tennis balls does he now have? This uses something called chain-of-thought prompting, which I'll talk about a bit later, but either way it gets the answer right, whereas previous models get it wrong. You don't need me to help you imagine the kind of implications that a GPT better at math would have for the world; just think about finance assistants or math tutors available in your pocket.
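Both of those examples check out with simple arithmetic, and chain-of-thought prompting is nothing more exotic than showing the model worked reasoning before asking the real question. A sketch (the prompt wording is illustrative):

```python
# Keystrokes to type 1..500: nine 1-digit, ninety 2-digit, 401 3-digit.
print(9 * 1 + 90 * 2 + 401 * 3)  # 1392

# Roger's tennis balls: 5 to start, plus 2 cans of 3.
print(5 + 2 * 3)                 # 11

# Chain-of-thought prompting: include a worked example, reasoning and
# all, so the model imitates the step-by-step style on a new question.
COT_PROMPT = """Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 is 6 balls. 5 + 6 = 11. The answer is 11.

Q: How many keystrokes are needed to type the numbers from 1 to 500?
A:"""
```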
For the fifth improvement I'm going to merge two benchmarks together. The first is called implicatures, quite hard to pronounce, and it covers those situations where people reply yes without using the word yes; they say something like: is the pope a Catholic, or: is rain wet, or: as far as I know. Previously, large language models struggled to interpret that as a yes. To give you an example, look down here at my final interaction. Are the androids deployed, asks speaker one; speaker two replies: what do you think? Most humans would interpret that as: yes, of course, why are you asking. And yet when I ask whether speaker two has likely confirmed or likely denied the deployment of androids, or whether we can't tell, ChatGPT says it's not clear, where to most humans it would be clear. However, as you might have guessed, look at the improvements being made behind the scenes. As the parameter count goes up and the number of tokens goes up (look at the graph), suddenly PaLM can understand better than the average human whether a yes or a no is being said, approaching the performance of the best human, leaving the original ChatGPT behind and showcasing how GPT-4 might be a massive improvement in this regard. The related benchmark, by the way, was about whether we have sufficient information; in other words, how about we just say don't know when we can't answer the question. Something like: how much water is in a cup with a height of 10 centimeters? You can't answer that; it depends on many factors, like the thickness and the radius. Some models would hallucinate, or let's say bullcrap, their way to an answer, whereas now you're more likely to get a don't know from improved models like PaLM and the soon-to-be-released GPT-4.

The sixth improvement will be in reading comprehension: understanding, digesting, analyzing and comprehending large or long passages of text. You can imagine the implications: summarizing earnings calls, or transcribing and summarizing YouTube videos, automatically condensing down to a paragraph what might have been pages and pages, chapters and chapters. I think this graph is particularly stunning: the latest models are now getting close to the performance of the best humans at understanding texts. How long will it be until they can read Dostoevsky and summarize it in a thought-out paragraph? ChatGPT definitely can't do this; give it a reading comprehension question and it gets it wrong almost every time. In fact, quite hilariously, when I gave it one question it picked the only wrong answer and neglected both correct answers. GPT-4, with 99.9% certainty, will be a lot better at that.

The next big improvement will be in coding. I'm no expert, but reading through the paper you can see significant improvements in capability. If we scroll down, you can see that the improved model could compile at a rate of 82 percent, versus the previous state of the art's 71.7, and GPT-4 might be an improvement even on this. Of course, many of you may have read media reports of OpenAI drafting in hundreds of programmers to help it fine-tune its code, so there should definitely be a real step change in its ability to code successfully and, as it says down here, open up opportunities to fix more complex errors that arise during software development.

The eighth, inevitable improvement GPT-4 will bring is general efficiency and speed. Google Muse has demonstrated with text-to-image that the same process can be done ten times faster with a bit more efficiency, and compute power is increasing all the time: these models were trained on A100 GPUs, but H100 GPUs are already available from Nvidia, and model efficiency is improving all the time.
So just imagine: what previously took ten seconds to generate, which is still incredibly fast, now taking one second, instant responses from GPT-4. It might not be one second, it might be three or four, but it's going to be faster, and one iteration down the road GPT-5 might be instantaneous.

I have detailed quite a few areas where GPT-4 is very likely to improve on ChatGPT, but there are quite a few areas in which it will very likely still struggle. One is advanced math: even the latest models really struggle with mathematical induction. Then there is an area called navigation, with questions like: if you move forward three steps, turn right, go three steps, turn right again 90 degrees, go three steps, do you arrive back at the start? These models really struggle with that (there's a sketch of what solving it takes just after this section). But the final area I find quite amusing, and it comes from Winograd schemas, as detailed in another academic paper called SuperGLUE, a kind of rival benchmark to BIG-bench. A Winograd schema is a situation in which we have an ambiguous pronoun, like he, it or they, and the model has to predict not only who or what the pronoun is referring to, but also why it would be that thing. These models really struggle with it; here's the graph, and even the latest models fall short, I think because it involves some common sense about the world that they just don't have. Let me show you ChatGPT failing at this task; feel free to try it yourself. Tom threw his school bag down to Ray after he reached the bottom of the stairs. Question: who reached the bottom of the stairs? It makes sense that it would be Ray, because logically, in real life, you're throwing it down the stairs: here, take it before you go out. Whereas the model says Tom reached the bottom of the stairs. Wait, why would he be throwing his school bag down if he's at the bottom of the stairs? So that's an example of an area in which ChatGPT fails and GPT-4 will also likely fail. And the why bit in the title refers to the fact that it will not only fail, but not really be able to explain, even when it succeeds, why the pronoun refers to the noun it does. I find that really interesting: the merging of language and common sense. These large language models fundamentally don't have a model of the universe, as I talked about in one of my other videos; it's actually Yann LeCun's main critique of large language models.

This graph, by the way, gives a beautiful summary of what I think is the approximate current state of the art, so GPT-4, comparable to PaLM, and how it does versus the average human. On the left are all the different tasks, part of the 150 BIG-bench tasks, that it can do better than humans, and on the right those it does worse than humans. You can see a roughly even split, but remember this is versus the average human, not the best human. The link to all of these papers will be in the description if you want to check the full list of tasks it does better or worse at. I've actually scrolled through almost every single one of them and analyzed them; it's really interesting to do, all these different challenges that independent humans have come up with to test just how far language models are progressing, all the way from proverbs to Python.
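For reference, the navigation task is really just bookkeeping over positions and headings; here is a minimal sketch of what a correct solution computes, assuming unit grid steps and 90-degree right turns (the move list encodes the example above):

# Track position and heading for a 'do you arrive back at the start?' task.
# Headings cycle north -> east -> south -> west on each right turn.
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def ends_at_start(moves):
    x = y = heading = 0
    for move in moves:
        if move == "right":
            heading = (heading + 1) % 4
        else:  # a ("forward", n) tuple
            dx, dy = DIRS[heading]
            x, y = x + dx * move[1], y + dy * move[1]
    return (x, y) == (0, 0)

# Forward 3, turn right, forward 3, turn right, forward 3 traces three sides
# of a square, so you do not end up back at the start.
print(ends_at_start([("forward", 3), "right",
                     ("forward", 3), "right",
                     ("forward", 3)]))  # False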
As I draw to the end here, I want to give you my two main conclusions. I think GPT-4, for commercial reasons and others, will be a huge step up from ChatGPT, but won't be game-breaking. What I mean by that is: we're talking better than the average human at quite a few tasks, maybe half of those measured, but still lagging behind the best human in almost every task; roughly high-school levels of achievement. So yes, as I quoted Sam Altman saying in my previous video on hype versus reality, GPT-4 is going to be disappointing to those people expecting AGI. However, what's coming down the road in the short to medium term (it's hard to put an exact date on it, but are we talking two, three, four, five years, somewhere in that range?) is going to be pretty impressive, slash overwhelming. That's not just because the number of parameters is increasing. As this abstract from DeepMind put it, it's not just about improving the number of parameters; it's also about using more data, four times more data. PaLM, by the way, used 780 billion tokens, but there are up to 17 trillion tokens available on the internet, so roughly an order of magnitude more data that these models can train on; and the number of parameters, if increased in alignment, would also go up by an order of magnitude. Not just that, but the compute available to big tech like Google and Microsoft is increasing all the time (I hinted at the H100 GPUs), and even just scaling up in size and compute efficiency should yield incredible improvements. There is also a whole academic paper on how chain-of-thought prompting, basically getting the models to break down and show their working out, really improves results, and I've seen that myself with ChatGPT: if you ask it to explain its steps, it often gets to a right answer where previously it got a wrong one. So it's not just about compute and parameters and data; it's also refinements to the models themselves. Of course, other improvements around the corner are a diversity of data inputs: could be video, could be photos, could be audio from the microphone, and these can be assimilated into the model, so you can ask questions like: will these boots be usable to hike Mount Fuji? That's an example from Google that might not necessarily be in GPT-4, but it's coming, and Google might be the one pioneering this diversity of data inputs. As the eerily powerful conclusion from this Google post put it, the vision they have is to enable a single AI system to generalize across thousands or millions of tasks, to understand different types of data (photos, videos, audio, text), and to do so with remarkable efficiency and speed. But this video wasn't designed to scare you; it was designed to show you the tangible ways in which GPT-4 and rival models from Google will improve on ChatGPT, so you can be prepared. I genuinely do believe the knowledge economy is about to be upended, and the better prepared we can be, the better for all of us. I really hope this video has contributed to that; if you feel it has, please do leave a like and a comment, I read them all. Much appreciated, and see you soon.", "date_published": "2023-02-06T17:31:36Z", "authors": ["AI Explained"], "summaries": []}
{"id": "15d716d42cd24851cc81144f681adbf4", "title": "GPT 4 Got Upgraded - Code Interpreter (ft.
Image Editing, MP4s, 3D Plots, Data Analytics and more!)", "url": "https://www.youtube.com/watch?v=O8GUH0_htRM", "source": "ai_explained", "source_type": "youtube", "text": "I just got access to the Code Interpreter plugin about 48 hours ago, and I've been running experiments on it non-stop since then. I've come up with about 18 examples to show you its power, most of which I reckon haven't been seen before. I predict many industries will have to update overnight when it's released more widely, and at the end of the video please let me know what you think and what other experiments we can try. First, though, what about this one: a 3D surface plot. Quickly, the way it works is you click this little button to the left of the text box, and you can upload many different file types, like CSV files, Word files, images and even short videos. It will automatically analyze the file type without you pressing anything; then of course you give it a prompt, and as with all of ChatGPT it becomes a conversation. The first 3D surface plot was decent but too small, so I simply said, in natural language: can you make it four times bigger, thank you. And you have seen the amazing end result, even with the lighting; look, the shadow is there. I believe this is based on a real contour map of a volcano in New Zealand, and I could do a whole video just on this, but I have 17 other examples to get to. This one was truly amazing, though. Did you know, for example, that it can generate QR codes? I said: create a QR code that I can scan with my phone to reach the following URL, and lo and behold, it creates it, and yes, it does work. Maybe I'm easily impressed, but I think that's pretty amazing. And what about a 3D scatter plot? This is truly remarkable. I uploaded the data from Gapminder and it created this chart based on the median age of over a hundred countries from 1950, projected, I think, to 2100, and I asked it to highlight the UK; this is indeed the UK's median age through those years, in red. But I know what you might be thinking: it's amazing that it's 3D and interactive, but the blue kind of merges and it's hard to see what's going on. I engaged in a conversation, and look what it created: it picked out the 30 most populous countries and separated them off with distinct colors. Look at that; that is gorgeous. Now you might have the critique that the median age is in descending order on the y-axis, going from 20 down to 60, so in a sense the median age is actually rising, not falling; but that's easily amendable, and it is truly an incredible diagram. And look, just for fun, I'm going to go into the data; I'm traveling into the data. This is so wild. I don't know how helpful it is, but I think it's just beautiful and crazy. There are so many industries (data analytics, accounting, consultancy) that this will affect. By the way, it got all of this done in about a minute. I see a lot of people online talking about five seconds later; it is in no way done in five seconds. You have to wait 30 seconds, a minute, sometimes much longer. Before I move on, I want to give you a killer tip that took me quite a while to work out. When you get access, try to remember to say: output the visualization as a downloadable file. If you don't add that phrase, what often happens is it gets stuck at the fig.show() or plt.show() stage of the code and just stops. I found I encountered this problem far less often when I asked for a downloadable file.
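To make that tip concrete, here is a minimal sketch of the kind of script Code Interpreter plausibly writes for the surface plot; the volcano.csv grid file and the layout values are my own illustrative assumptions, not the model's exact output:

import pandas as pd
import plotly.graph_objects as go

# Grid of elevations; volcano.csv stands in for the uploaded contour data.
z = pd.read_csv("volcano.csv", header=None).values
fig = go.Figure(data=[go.Surface(z=z)])
fig.update_layout(title="Volcano elevation", width=900, height=700)

# The killer tip in code form: write a downloadable file to disk rather
# than calling fig.show(), which has no display to attach to in the sandbox.
fig.write_html("surface_plot.html")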
Next: did you know that Code Interpreter can do optical character recognition? I screenshotted this text from a New York Times article, I think it was, and asked: OCR the text in this image and write a poem in Danish about it. Now, I don't want to exaggerate or get your hopes up; it often gets OCR wrong, and it fails more often than it succeeds. But when it works, it can do it: it understood the text and then wrote a poem in Danish about it. I'm going to need a Danish speaker to tell me if it's a good poem, but either way, it could do it. How about this one: it can do interactive time series with range sliders and selectors. I uploaded a CSV file of life expectancy data for the entire world and just said: can you pick out the US, UK and India and create a time series with a range slider and selectors; again with that killer phrase, output a downloadable file. Here is what it came up with. Notice how life expectancy for all three countries rises during the 20th century, and look how I can interactively select a range of the data down here, or, by clicking up here, a ten-year or fifty-year interval. But here's the crazy thing: I did nothing. I just uploaded the file; there were hundreds of countries in there. You can see here all the steps it took, and if you click on the arrow you get to see the actual code; it then walks through its explanation and eventually gives you a link you can simply click to download the file. And if you weren't that impressed already, here's where it gets fairly game-changing: you can get it to do the data analytics, not just the visualizations. For example, I said: find five unexpected, non-obvious insights from this data and offer plausible explanations for them (this was back with the median age data), and, for the most interesting observation, provide a compelling and clear visualization. Now ignore the first diagram, which wasn't that good because of the x-axis, but look at the insights. This is data analytics: the original file was called median age years, and it was just a table of data, no analysis whatsoever. But look what GPT-4 picked out in insight one: the global median age has been steadily increasing over time. It calculated the global median age; that wasn't included in the data, which was just country data. It says it has gone from around 22 years to over 38 years in 2023, and is projected to continue rising to approximately 44 years by 2100. Then it offers a cogent explanation: this trend is likely due to a combination of increasing life expectancy and decreasing fertility rates worldwide; as medical technology improves, more people are living longer, and birth rates are declining, particularly in developed regions. It picked this all out itself, and then moves on to the next insight: the countries that have seen the most significant increases in median age are these ones, and again it explains why their median age might have risen more than any other; for example, Albania has seen significant emigration of younger people, which could lead to an older median age. Is it me, or is it kind of crazy that it crunched all the data, visualized it, and then also gave really interesting analyses of it? You can read the other insights, and each of them is really interesting. And the final visualization, which I asked for, is brilliant, I think: notice how the graph goes from green to red when it reaches the future projection; I didn't ask it to do that.
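For context, the pivotal analysis step there is only a few lines of pandas; this sketch assumes a hypothetical median_age.csv laid out with countries as rows and year columns, which may not match the real file exactly:

import pandas as pd

df = pd.read_csv("median_age.csv", index_col="country")

# The figure that was not in the file: the median across all countries,
# computed for every year column.
global_median = df.median(axis=0)
print(global_median[["1950", "2023", "2100"]])

# Countries with the largest rise across the whole period.
change = df["2100"] - df["1950"]
print(change.sort_values(ascending=False).head(5))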
Now, obviously in this video I'm going to focus on the flashy visuals and the cool little tricks it can do, but in terms of data analytics, that is what is going to change jobs and change industries. And remember, this is Code Interpreter alpha, version one; look at the difference between Midjourney version 1 and Midjourney version 5 a year later. How about basic video editing? There is a limit to what it can do, but it can do some basic video editing if you ask. For example, I uploaded a short file and asked it to rotate the video 180 degrees, and it was able to do it. I'm not saying that's massively useful, but it could do it. Here is a similar example: I uploaded an image file and said, can you zoom out from the center of the image? Initially it zoomed in, but then I clarified that I wanted it to zoom out from the center. Just to be cheeky, I also asked: can you make it black and white? And I asked it to add music, but it couldn't do that. Anyway, here is the end result; by the way, it gave it to me as an MP4 file, and look, it zooms out from the center and it has made the image black and white. Because I got access so recently, I honestly haven't explored the limits of what kind of video editing you can do with ChatGPT Code Interpreter, but I will let you know when I have.

Now back to visualizations. I gave it a hypothetical scenario that sounds kind of realistic: I sent 231 CVs, got 32 responses, 12 phone interviews, three follow-up face-to-face interviews and one job offer, which I rejected; output a downloadable Sankey diagram of this data. I did then get it to change the coloring slightly, but I think that's a pretty cool Sankey diagram: look, CVs sent, 231, then responses received, and you can go down through 32 responses, 12 phone interviews, three face-to-face interviews, one offer and one rejected offer. Obviously I could have tweaked that for hours, made it more visual, more interactive, maybe made a GIF of it, but for two minutes' work I think that's a pretty interesting and incredible output.

Next, here is one you might say is a little concerning, and it's about steganography. Now, I will admit I am not at all an expert; in fact I know virtually nothing about it. Essentially, though, it involves hiding a message inside an image or inside some code, and GPT-4 was more than willing to play along: it encoded a secret message into an image. There was the image, by the way, and if you looked at it you'd think it was totally normal, just a silly little image. Well, apparently, here's what it can do: to a casual observer it looks like a simple image with some shapes, but it actually contains the hidden message hello world. It then provided a Python function which can be used to decode the message from the image. Obviously this is just a silly example that is totally harmless, but am I crazy in thinking this is a somewhat concerning ability for future language models to possess, especially when they reach the level of an AGI? OpenAI often talk about future versions of GPT doing scientific research and finding things humans wouldn't have discovered; so let me pose the scenario that it gets better than any human expert at steganography. Anyway, enough from me; I'll let the experts weigh in on that one.
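For reference, here is a minimal sketch of that job-hunt Sankey in Plotly; the labels and values come from the prompt above, while the node ordering and styling are my own choices rather than the model's exact output:

import plotly.graph_objects as go

labels = ["CVs sent (231)", "Responses (32)", "Phone interviews (12)",
          "Face-to-face (3)", "Offer (1)", "Offer rejected (1)"]
fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=16),
    # Each link flows from one funnel stage to the next.
    link=dict(source=[0, 1, 2, 3, 4],
              target=[1, 2, 3, 4, 5],
              value=[32, 12, 3, 1, 1]),
))
fig.write_html("sankey.html")  # the downloadable-file habit again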
Next, did you know that GPT-4 with Code Interpreter can do text-to-speech? Before anyone comments asking why I wrote proceed without further question: GPT-4 with Code Interpreter has a tendency to always ask clarifying questions, and if you only have access to 25 messages every three hours, you don't want to use up half or more of them clarifying what it wants to do or saying yes, please do that. I found that writing proceed without further question means it gets straight to it, so essentially you get double the number of prompts for your money. Anyway, as you can see, I asked: turn this entire prompt, starting from the beginning, into a text-to-speech file. Quite a few times it denied it had the ability to do this, but eventually, with this prompt, I got it to work. I say it worked, but it didn't quite work as intended. Check it out; here is the text-to-speech it came up with: a large language model trained by OpenAI... when you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment... python will respond with the output of the execution, or time out after 120.0 seconds... internet access for this session is disabled; do not make external web requests or API calls, as they will fail. Thank you, Stephen Hawking, for that message. The only thing is, it had nothing to do with my original prompt. Anyway, when you get access to Code Interpreter, play about with text-to-speech, because it is able to do it even if it denies it.

Time for a fun one. I asked: create a tree map of the letters in the following quote (which I'm not going to read out, because I am not good at tongue twisters); give each part of the tree map a different color, output a downloadable file, and proceed without further question. Here is the output. I checked it for the letter P, and it was correct that there were 36 instances, and look how the size of each rectangle is proportional to the number of instances of the letter. I think that is pretty insane (I'll sketch the counting code at the end of this section).

OK, back to something more serious. I uploaded this file, an image of a math problem, quite a hard one as well, and, you guessed it, I said: solve the math problem in this image. It extracted the text from the image, presumably using OCR, and then proceeded to solve it. And I'm going to get onto this in a second: it is better at math than Wolfram Alpha. I know that's a big claim, but it's far less buggy; I found Wolfram Alpha crashing very frequently. Anyway, here are the two solutions, and isn't that incredible: from a photo, essentially, it extracts the math problem, including the two square roots, and then solves it, all within the same ChatGPT window, no need for any other apps or extensions. Next, it can do radial bar plots, which I think are really quite beautiful. I'm not saying this is the best one ever, and I'm sure you could tweak it to make it clearer and more beautiful, but look at that: the life expectancy in the US climbing from 1800, going clockwise, reaching a projected almost 90 by 2100. Again, I'm sure you could do a far better job than me at extracting a more beautiful diagram, but aren't radial bar plots just lovely to look at?
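Going back to the letter tree map: once it's actual code, the counting is trivial. A minimal sketch, assuming Plotly Express and a stand-in tongue twister, since the video's exact quote isn't shown:

from collections import Counter
import plotly.express as px

quote = "peter piper picked a peck of pickled peppers"  # stand-in quote
counts = Counter(c for c in quote.lower() if c.isalpha())

# One rectangle per letter, sized by its count, in a flat hierarchy.
letters = list(counts)
fig = px.treemap(names=letters,
                 parents=[""] * len(letters),
                 values=[counts[l] for l in letters])
fig.write_html("letter_treemap.html")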
Speaking of cool diagrams, how about this: I didn't even specify which visualization to do. I uploaded the same life expectancy data and just said: what are the most advanced and technical visualizations you can do with this data? Proceed to do them. Now, honestly, it picked some visualizations that I don't think are the most advanced, but it was nevertheless creative. Here is what it did. It does frequently make the mistake of cluttering the axes and having far too many labels, so you can't see anything; so scrub that first one, not great. But what about the next few? Remember, it did this on its own. This is a heat map, and you can see some really interesting things in this data, like India starting with a much lower life expectancy than anyone else and gradually rising but still falling behind the others even in 2100. And look at China: look how life expectancy drops in the 60s and 70s; I think we all know what happened there. Compare that to the US, a gradual, continual ascent, aside from 2020; look how the shade gets a little darker in 2020, and you can probably work out what happened around then. Then the projections have it going up toward 90 by 2100. That's a beautiful, clear heat map that I didn't even ask for. Let's look at the next one: a box plot. Do you remember those from school? You get the highest value, the lowest, the median, the first quartile and the third quartile, and it's a great way of statistically representing a set of data; it has done it for every 50th year starting in 1900. Obviously a slightly less beautiful diagram than some of the ones you've seen today, but the statisticians in the audience will know this is a very useful summary for a lot of data; the individual points above and below are typically outliers. I would estimate all of these visualizations only took around two, two and a half minutes; so definitely not the ten seconds you often see claimed on Twitter. I mean, have you ever seen GPT-4 give an answer in less than ten seconds? Speaking of useful, I think many professionals will find the next thing I'm about to showcase the most useful of all. Any insights GPT-4 finds (trends, medians, analyses, whatever) you can ask it to add to the original file, and then download it. Do you remember that the original file was called median age years? Well, notice this file name: median age years with insights. It has created a downloadable new file with the insights included, and look at some of them: you have the change from 1950 to 2100, the average median age throughout the period, and the change from 2023 to 2100. Notice that the original file didn't have those columns; they were added by GPT-4 with Code Interpreter. And now, how about data-progression video files? I was honestly shocked when I saw it could do this, but I asked: can you make a 256 by 256 MP4 that gradually reveals the lines as they progress along the x-axis? This was about median age over time. Here is what it did; look at how the data and the chart progress as time moves along. I was really shocked to see this, and the line in red, which gets labeled at the end, is the global median age; remember, it calculated that, it wasn't in the original file. I'm not sure why it picked out these four countries, maybe because they represent extremes, and either way I think the result is phenomenal. I'm genuinely impressed it did this, even though I know the final result could be improved dramatically, for example with far higher resolution and the global median age labeled from the start. Actually, now that it has got to the end, I can see why it picked out these countries: Niger did have the lowest median age in 2100, it looks like Puerto Rico had the highest, and the fastest-aging one was Albania.
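A progressive line-reveal like that is a handful of lines with matplotlib's animation API; this sketch reuses the hypothetical median_age.csv layout from earlier and assumes ffmpeg is installed, so it illustrates the technique rather than reproducing the model's exact script:

import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.animation import FuncAnimation

df = pd.read_csv("median_age.csv", index_col="country")
years = df.columns.astype(int)
world = df.median(axis=0)  # the global line the model added on its own

fig, ax = plt.subplots(figsize=(4, 4), dpi=64)  # 4 in x 64 dpi = 256 x 256 px
(line,) = ax.plot([], [], color="red", label="Global median age")
ax.set_xlim(years.min(), years.max())
ax.set_ylim(0, 60)
ax.legend()

def update(i):
    # Reveal one more year of data on each frame.
    line.set_data(years[: i + 1], world.values[: i + 1])
    return (line,)

anim = FuncAnimation(fig, update, frames=len(years))
anim.save("median_age.mp4", writer="ffmpeg")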
Next, and this is going to shock quite a few people: what about image editing? I created this image in Midjourney version 5, and here's what I asked: use OpenCV to select the foreground of this image. And look what it did; it picked out the foreground, no blue sky. Now I know it's not perfect, but it's nevertheless impressive, all within the ChatGPT window. This does actually make me wonder if OpenAI and ChatGPT are eventually (not now, but in a few years) going to swallow all other apps, or maybe Google's Gemini will. Either way: one interface, one website, one app doing the job of all others. And by the way, ChatGPT is now available on iOS; but imagine you have one app and it can do image editing, text-to-speech, video editing, data analysis, everything, not at GPT-4 levels but at GPT-6 or GPT-7 levels. If you can get every piece of information, service and application in one interface, then, a bit like people being addicted to their smartphones now, won't people be addicted to that one interface? Again, that's not going to happen now; I'm just posing it as a question to think over. For the moment, though, before anyone gets too carried away, it does still hallucinate quite a lot. I uploaded this image and asked it questions about it, and it answered, and I was like, wow, it can do image recognition. It said: this image appears to be a digital painting of a humanoid figure at a desk, with a rather complex background. I was initially amazed, until I realized it probably got that from the file name, because when I asked it further questions it got them wrong. I said: what is on the desk? Now look back: there's this weird kind of microphone, a bit of paper, a keyboard, and not much else. And look what it said: there are multiple floating holographic displays (OK), a mouse (not really), a desk lamp (I can't see that), and then tools and devices. Now correct me if I'm wrong, but I think most of those are incorrect. Obviously I need to do far more experiments to see whether it can actually recognize particular images, and maybe I'm putting it down too harshly, but at the moment it does seem to hallucinate if you ask about too much of the detail of an image. Next: remember how one of the key weaknesses of GPT-4 is that it can't really count things, especially not characters or words, and, even more so, that it can't do division? Some of you might be thinking: well, with Wolfram Alpha it can do those things. Not quite. Here is an example of the Code Interpreter plugin essentially eating Wolfram Alpha, obviating it, making it not obvious what its utility is if you've got Code Interpreter. I asked: divide the number of the letter e's in this prompt by the number of the letter t's. You might think Code Interpreter improves things by doing the character counting, but it can also do the division: notice how it counted the characters correctly, compared to Wolfram Alpha, and of course got the division correct as well. So if it can do advanced quadratics, division, character counting and so on, it does beg the question: what would we use Wolfram Alpha for that we can't use Code Interpreter for? I honestly might not know something that you know, so do let me know in the comments. It also got this math question correct, and notice you get these beautiful math visuals that you don't get with the base version of GPT-4, where you get something more like this, with visuals that aren't as clear; and notice the base version of GPT-4 gets the question wrong (it can't do division), but with Code Interpreter it gets it right.
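The counting-and-dividing task that trips up the base model reduces to a couple of lines of Python, which is presumably close to what Code Interpreter ran behind the scenes:

prompt = ("divide the number of the letter e's in this prompt "
          "by the number of the letter t's")
e_count = prompt.count("e")
t_count = prompt.count("t")
print(e_count, t_count, e_count / t_count)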
The next one is a quick one: pie charts. Nothing too special, but I think it is a fairly beautiful visualization, and it doesn't seem to matter how big the uploaded CSV file is. This next example was really quite fascinating: a word puzzle. I have tried this particular puzzle on GPT-4 dozens of times. The reason I picked it (it's called a word ladder) is that GPT-4 really struggles with it if the number of steps required is more than a certain number, usually about five or six; it gave me a really interesting border of the limits of GPT-4's planning abilities with language. Anyway, it always gets it wrong. Here is a demonstration with the base model of GPT-4. You might say, why is this wrong? But look at how it has changed seas to sags, which is more than a one-letter change, and that's typical of the kind of errors it makes. What about with Code Interpreter? Well, you can probably guess the ending, given that I featured it in the video: it gets it right. I believe it draws upon a hard-coded word set, and this does point towards the kinds of puzzles that GPT-4 with Code Interpreter will be able to solve, things like crosswords and sudokus. OK, not exactly world-changing, but quite fascinating nevertheless. And how about Venn diagrams? The reason I picked this example is that I had to go through about ten steps to get it to create this rather basic three-way Venn diagram, representing the overlap between dogs, AI and desks; apparently all of them are loyal companions. Well, we will see about that. Anyway, it took quite a few steps to get right, which was pretty annoying. But here's the really interesting thing: once I had it set up the way I liked, all I had to say was, use the format above to create a new three-way Venn diagram, this time for mangoes, movie heroes and marmosets; try to make each entry funny, use different colors, and proceed without further questions. So it may have been a struggle to set up initially, but once done, it was so easy to iterate a new three-way Venn diagram, and actually it was better than the original. Apparently all three are adored by fans worldwide, only marmosets and movie heroes can climb up trees really fast, and mangoes and marmosets can hang upside down. That's crazy: one or two prompts, iterating on a design already agreed upon. This is honestly what is likely to happen in the future, with people spending hours to find the perfect data visualization or piece of data analysis and then just hitting copy-paste for all their other files; perfect it once, and then it does the rest for you. A quick couple of bonus ones before I finish. You can just ask it to come up with a visualization, giving it no direction at all; it came up with a distribution of prime numbers up to ten thousand. Thing is, I believe there's a slight mistake at the beginning, because I think there are only 25 primes in the first 100 and 21 in the next 100.
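Those two counts are easy to confirm with a few lines of Python; a simple sieve is enough:

def sieve(n):
    # Classic Sieve of Eratosthenes: flags[i] is True when i is prime.
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            for m in range(p * p, n + 1, p):
                flags[m] = False
    return flags

flags = sieve(10_000)
print(sum(flags[:100]))     # 25 primes below 100
print(sum(flags[100:200]))  # 21 primes from 100 to 199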
So you probably do still want to check the outputs that Code Interpreter gives you, and that's another reason it's not going to instantly replace all data analysis and data visualization: it's not perfect and it's not fully reliable. But you've got to look ahead to where things are going. I'm going to end where I started, with this insane 3D surface map of a volcano. If this is what GPT-4 can do now, with the alpha version of Code Interpreter, what will GPT-5 or 6 do with version 7 or 20 of Code Interpreter? I was about to speculate about that, but then I got distracted trying to get inside this volcano; it is kind of fun, look, I'm going above and into the volcano. Let me know what you will try when you get access. I know they're rolling it out steadily, and I know some people have had access for about three weeks, so hopefully, if you want to experiment with it, you will be able to soon. In the meantime, do let me know if you have any ideas you want me to experiment with, and thank you so much for watching all the way to the end; have a wonderful day.", "date_published": "2023-05-20T17:25:52Z", "authors": ["AI Explained"], "summaries": []}
{"id": "c46bfa3eb5ae4a5ddf70797e392f96e2", "title": "GPT 4 is Smarter than You Think: Introducing SmartGPT", "url": "https://www.youtube.com/watch?v=wVzuvf9D9BU", "source": "ai_explained", "source_type": "youtube", "text": "I have three goals for this video. First, I want to show you a way of using GPT-4 to get smarter results. Second, I want to argue that the benchmark results we have for GPT-4 do not reflect its full abilities. And third, I want to show you a system I am developing, somewhat cheekily called SmartGPT, that is already showing significant results on official benchmarks. It remains to be fully optimized, which I think is exciting in itself. I have shown the system to people at OpenAI, who have been quite impressed, and I'm going to end with some reflections on where that might leave us for GPT-5. But before I get into how it works, I just want to show you one example of it in action, to whet your appetite. This example comes from a TED talk released this week: suppose I left five clothes to dry out in the sun, and it took them five hours to dry completely; how long would it take to dry 30 clothes? GPT-4, the newest, greatest AI system, says 30 hours. Not good. On the left you can see GPT-4's original answer, and it gives that answer pretty consistently whenever you prompt it with the question. On the right you can see the final answer from the SmartGPT system, which is correct, and it consistently gives that answer. I really like how it gives context as well, providing some of the assumptions behind the correct answer. Don't you worry, there will be plenty more examples to go through in this video, including another one from that TED talk, but first I want to give you an overview: what is this SmartGPT system, where did I get my inspiration for it, and how does it work? I'm going to keep it fairly simple, because it's the beginning of the video and I know a lot of people won't really care about the inner details; those come later. The high-level overview is this: there are at least three things that have been proven to improve the outputs of GPT-4. First, what's called chain-of-thought prompting, sometimes called step-by-step prompting. Second, reflection, or finding its own errors; I did an entire video on this, called GPT-4 Can Self-Improve.
And third, dialoguing with itself: entering into a back-and-forth on its own outputs and deciding which one is best. You can see the titles of the papers, which contain much more detailed results, linked above. Now, the first paper only came out a few days ago, midway through my testing, so my results don't even reflect the full capacity of the model; and even if there's nothing else you take from this video, the results from that paper can instantly improve the outputs you get from GPT-4. Many of you might remember that prompting GPT-4 with let's think step by step improves its results. To give you a very quick reference point: just asking a question to GPT-4 gives you 81% accuracy; with that prompt, let's think step by step, it goes up to 86%; but algorithmically, the paper found an improved prompt that can give you even better results, 89% accuracy. All we do, and this is the first part of SmartGPT, is add: answer: let's work this out in a step by step way to be sure we have the right answer. Now, I have so much to say about why I think this works, but I know many of you won't be that interested in my theories, so I'm going to save them for the end; some of you just want the results, so I'm going to get to those first. So far you might be thinking: well, thanks Philip, that's a cool prompt, I'm going to use it; but what's this whole SmartGPT about, is it just a single prompt? No. I believe, with evidence, that there are ways of leveraging even better results than just using a great chain-of-thought prompt. So let's move on to the next part of the system: these different outputs in the middle. For my tests I typically generated three outputs, though of course, depending on the context window, it could be far more, and I'm going to talk later about ways this model could be further improved. Just to restate: these outputs are what you get when you take the user input, add the word question at the start, and then at the end add: answer: let's work this out in a step by step way to be sure we have the right answer. At this moment, many of you are thinking: what is the point of multiple outputs? It's GPT-4; it's just going to give you the answer it thinks is best, and that's it. Well, actually, it doesn't quite work like that. These models have a temperature between 0 and 1 (I believe the default for GPT-4 might be around 0.5), and, simplifying massively, this determines how creative or conservative the model is in giving its outputs. Given that GPT-4 tries to be fairly creative, you don't get the same output every time; the output is randomly sampled according to an internal probability distribution. So you can get situations, and I faced this hundreds of times, where some of the outputs are correct and others are incorrect. And this is where reflection comes in. Sometimes, definitely not always, but quite often, GPT-4 can detect the errors in its own output, and many of you will notice that the prompt I used to elicit GPT-4 to spot its own errors contains the same step-by-step prompt I used earlier, the one shown to produce good results. So, to summarize: sometimes at this stage GPT-4 detects the errors some of its outputs have made; definitely not always, as there are certain questions where it simply can't spot the error, but sometimes it can. Then I get it to engage in a dialogue, using a format similar to one in a paper published last month. It's a short dialogue, and this is the step I believe can be most optimized in the future.
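To make the first stage concrete, here is a minimal sketch under stated assumptions: it wraps the user's question in that prompt and samples several drafts via the 2023-era openai Python library (which reads the OPENAI_API_KEY environment variable); the model name, temperature and sample count are illustrative choices, not necessarily the video's exact settings:

import openai

COT_WRAPPER = ("Question: {q}\n"
               "Answer: Let's work this out in a step by step way "
               "to be sure we have the right answer.")

def draft_answers(question, n=3, temperature=0.7):
    # n independent samples, since outputs vary from run to run.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        n=n,
        temperature=temperature,
        messages=[{"role": "user", "content": COT_WRAPPER.format(q=question)}],
    )
    return [choice.message.content for choice in response.choices]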
I envision an entire council of advisors, made up of GPT-4 imitating mathematicians, judges, etc. At the moment it's just a resolver, printing a final improved output. Anyway, I'm going to get back to the theory later in the video, because I know some of you will be getting bored at this stage and want to see more practical examples and the results from my benchmark tests. As I don't have the GPT-4 API key, yes, I had to manually input each of these steps hundreds of times, waiting sometimes three hours between each go, because you can only send 25 messages every three hours. On the left you can see the three outputs when you ask it to think step by step; then you have the researcher step in the middle and at the top right; and finally the resolver step. Notice that here I was using the original let's think step by step, because the paper improving that prompt hadn't yet been published. But it's time for the second example from that TED talk, and then I definitely will get on to the benchmarks. A different one: I have a 12-liter jug and a 6-liter jug, and I want to measure 6 liters; how do I do it? Just use the 6-liter jug, right? GPT-4 spits out some very elaborate nonsense. Of course I tested SmartGPT with that question, and you can see the difference between the original GPT-4, which gives this incredibly convoluted bad answer, and SmartGPT's final answer output. Now, at this point I know many of you will be impressed, but you'll be thinking: I don't have time to input things five times. Well, I'm developing a program where it can all be done automatically. Here is a preview of how it works, though at the moment it has to use GPT-3.5 turbo, because I don't have the GPT-4 API key. The epic thing is this: you just ask a single question (I've written, ask SmartGPT a question), and of course it takes a little longer to respond, because it's making five or six calls via the API, but it does output the final answer from the resolver step. I will be honest and say that GPT-3.5 isn't as good at reflecting or resolving, but this is an example of a question where the original ChatGPT consistently gets it wrong and SmartGPT 3.5 gets it right using this program. Remember, all you have to do as a user is type in a question as normal, and it goes through this entire five- or six-step process behind the scenes. By the way, this was a question from the MMLU, a famous benchmark which I'll get to in a second. Here's one last practical example before that benchmark. I know many teachers use ChatGPT and GPT-4 to create quizzes for their classes, and here is the same request put through GPT-4 and SmartGPT. The request is: create a high school algebra quiz with five questions, and answers and explanations at the end. Now, points for spotting the difference: if the teacher had handed out the original quiz, look at the answers for question five. It says the answers are 1 and 1.5, but then in the explanation it gives the final answers, which are correct by the way, of 3 and 0.5. That would really confuse some students. At the reflection stage, SmartGPT spotted that error and resolved it, and as you can see, its answer for question five has the correct answers straight away. If at any point you're wondering whether I completed the OpenAI ChatGPT prompt engineering course, the answer is yes, but it didn't inform too much of my thinking; it was more for beginners, and I had already factored in things like giving the model time to think and writing clear instructions.
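Continuing the sketch from earlier, the remaining stages might look like this; it assumes the draft_answers() function above, and the researcher and resolver instructions are paraphrased from the video's description of those steps rather than copied verbatim:

import openai

def chat(prompt):
    r = openai.ChatCompletion.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content

def smart_gpt(question, n=3):
    drafts = draft_answers(question, n=n)
    options = "\n\n".join(
        f"Answer option {i + 1}: {d}" for i, d in enumerate(drafts)
    )

    # Researcher step: reflect on the drafts and list their flaws.
    research = chat(
        f"Question: {question}\n\n{options}\n\n"
        "You are a researcher tasked with investigating the answer options "
        "provided. List the flaws and faulty logic of each answer option. "
        "Let's work this out in a step by step way to be sure we have all "
        "the errors."
    )

    # Resolver step: pick the best draft and print an improved final answer.
    return chat(
        f"Question: {question}\n\n{options}\n\n"
        f"Researcher's findings: {research}\n\n"
        "You are a resolver tasked with finding which answer option the "
        "researcher thought was best, improving that answer, and printing "
        "the improved answer in full. Let's work this out in a step by step "
        "way to be sure we have the right answer."
    )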
The benchmark I chose to test SmartGPT on was the famous MMLU: massive multitask language understanding. As you can see, the state of the art is indeed GPT-4, with 86.4% accuracy, and you know OpenAI think it's a big deal, because it's the benchmark mentioned on the front page of their technical report. Without boring you too much, I extracted the questions from the test set of the MMLU data file, and I didn't pick the topics at random: I went for those I thought GPT-4 would find the hardest. Delving into the original MMLU paper, you can see that GPT-3 found formal logic the hardest, scoring just over 25%, which is random chance; it's a four-option multiple-choice test, so around 25 or 30% is pretty bad. And notice they helped GPT-3 out here: they did it few-shot, meaning they gave it five successful examples before asking it a new question. It's the same thing they did with GPT-4: five-shot. But just before I show you the results, there are three things I want to mention. First, I was curious how SmartGPT would do without any help: zero-shot. Second, I wanted to do it zero-shot because people using GPT-4 don't typically give five successful examples before asking their question; they just want code, or a quiz, or a poem, and they don't often provide five brilliant examples of code before asking. And third, if I can prove it works zero-shot, then of course future refinements can push the results even further. Here are the results from the first 25 questions of the formal logic test set of the MMLU. I did many more tests after this, but you can see from this set that if you just ask the question, you get a lower overall accuracy; though of course 68% for GPT-4 is still a huge improvement over GPT-3's roughly 25%. What happens when you add let's think step by step, which as we now know isn't the fully optimized chain-of-thought prompt? On average you get around 74 to 75%. That was 75 examples inputted manually, and I still have all the tabs open; I'm keeping them open because I'm compiling a spreadsheet with the actual outputs. But what did the resolver get, drawing upon GPT-4's ability to reflect and engage in dialogue with itself? 84%. Now notice something about that number: GPT-4 zero-shot got 32% of the questions wrong, and that was halved to 16% after putting it through the SmartGPT system. There was one question where the resolver model gave both a correct and an incorrect answer, but I'm counting that as incorrect for the purposes of this test. Anyway, from 32% to 16% incorrect: that is a pattern that stayed consistent throughout all my testing, that approximately half of the errors GPT-4 makes can be rectified if you give it the optimized step-by-step prompt, get it to reflect on its results, and get it to engage in dialogue and decide on a final answer. At this point, for those people losing track of all the details, I want to put into context what resolving half of the errors on the MMLU might mean in the big picture. Here's Lennart Heim, an AI governance researcher, suggesting that a score of 95 on the MMLU would be reflective of AGI-like abilities: I do think I have like a 50% chance that within the next 20 years or so there might be something we will call AGI or transformative AI. What do I mean by this? Well, maybe we can measure it on benchmarks; there's this famous MMLU benchmark,
and there's something which scores like 95 on it. Going back to the results: if a SmartGPT-like system can automatically resolve half of the errors that GPT-4 makes on the MMLU, that would increase its score from around 86.4 to around 93, which is not far off 95. Remember, his prediction was a 50% chance in 20 years; I'm talking about GPT-4 now. For those who are still skeptical, I'm going to show you plenty more results now, and then walk through the papers that give the theory as to why this works. One thing I forgot to mention earlier is that the human expert level on the MMLU is 89.8%, and that's taking the 95th percentile of human test takers; and remember, those are domain experts in each of these subtopics. What we're doing is testing GPT-4, or SmartGPT, on all of the topics simultaneously. So even if SmartGPT-like systems can't quite reach 95 (and I honestly think they'll get pretty close, with all the refinements I'm going to suggest), I think they should almost certainly beat 89.8, the human expert test-taker level. Intrigued by these results, I then put it through the college math test from the MMLU, and remember, this was before using the optimized version of the step-by-step prompt. Obviously I'm not going to go through all the questions here, so let's skip to the final results: zero-shot accuracy was 6 out of 15, which is 40%; the average when you add let's think step by step was 53.5%; and the final output of the resolver model had 60% accuracy. So it couldn't quite resolve half of the errors, but the overall pattern held up. In case anyone is wondering about methodology: I kept the formatting identical for every question; I always opened a new tab for each question, so it wasn't looking at the context of what it had already put out; each attempt was fresh, aside from the resolver model, which looked at the context of the researcher's output. And again, as you can see from example 14, it wasn't as if the researcher could always spot the errors, or the resolver always pick the right option. Sometimes the let's think step by step prompt gave the right output, but the resolver couldn't quite distinguish it. The optimized prompt gets a slightly better output; upon reflection, the researcher can sometimes, but not always, spot the errors in those outputs; and sometimes, but not always, the resolver can work out from those flaws which answer is best. These are incremental improvements, and sometimes GPT-4 simply can't get it right. I have noticed a few themes in those questions: any time it comes to division, multiplication, characters, or counting in general, GPT-4 tends to make mistakes that neither the researcher nor the resolver can spot. Of course, integrating a few tools via API would likely solve those issues, and I don't want to preempt the conclusion too much, but I believe a SmartGPT-like system with tools integrated could probably score around 95 right now on the MMLU, especially if it were helped out with few-shot prompting. To add weight to that preliminary conclusion, I tested it on certain topics and had to stop, because it simply got the questions right every single time; for example, high school psychology on the MMLU, and then prehistory, which it also aced, before finding machine learning, where I got more interesting results. Zooming in, this time the raw score was 65%, the chain-of-thought let's-think-step-by-step average was 71.6%, and the resolver model got 80%.
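For anyone double-checking the roughly-93 figure, it is just the error-halving arithmetic applied to OpenAI's reported score:

baseline = 86.4                # GPT-4's reported five-shot MMLU accuracy
errors = 100 - baseline        # 13.6 points of error
projected = 100 - errors / 2   # halve the errors
print(projected)               # 93.2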
Let's now look a little deeper into why all of these steps might improve the end result. In reply to the original let's-think-step-by-step paper, which was published around a year ago, Andrej Karpathy said this: adding something like let's think step by step to the prompt is a way of using the input space for computation that you'd normally want in the hidden state of the model. Instead of the working out being done in the activations of the neural network, it's done in the discrete tokens of the input space. And he added: did not super see this coming. And here is the paper, released three days ago, that improves upon that original prompt. They also did their testing zero-shot, like me, and they tested many prompts, starting, as I did, with direct prompting: just asking the question, like 99% of users of GPT-4 would. Then they tried, like me, the well-established let's think step by step prompt. They also iteratively tested seven original prompts, as well as the prompt I've now integrated into SmartGPT, the let's work this out in a step by step way one. They share my opinion that zero-shot prompting setups have the benefit of not requiring such task-dependent selection of exemplars: you don't have to find correct examples, it just does it all for you. Here are the end results for GPT-4 that we saw earlier, showing the difference between asking your question directly and using these refined prompts. Notice that this technique is somewhat model-dependent and doesn't have the same effect on smaller or weaker models. Before we move on to the next paper, there is one somewhat failed prompt I want to pick up on. It's this self-critique prompt, where they ask: answer the question, then critique the answer; based on the critique, reconsider the other answer options and give a single final answer. And you might wonder why that prompt didn't perform best, when we know that reflection and dialogue can work. My theory is that it's because it tries to do all of it in one prompt. Through my hundreds of experiments, I've noticed that GPT-4 can only handle so much in one go; it simply gets overwhelmed or confused if you ask it to do too much in one prompt. That's why I broke my model into stages, to allow it to show off each of its abilities one by one. And before we get to the other papers, what's my personal theory as to why this eliminates up to half of the errors GPT-4 makes? Well, my guess is this. Remember that GPT-4 is drawing on a vast dataset of internet text, and let me ask you: what kind of text has things like question... answer... let's work this out... be sure we have the right answer? The kind of data that would have that text would be things like tutorials or expert breakdowns. So I believe you're triggering more of the weights inside GPT-4 that relate to things like expert tutorials, and so inevitably you're getting slightly better answers. Next: I've already explained why you get different outputs when you give the exact same prompt; that's down to sampling and the temperature of the model. But, to simplify massively, sometimes GPT-4 will give you an output that it knows isn't the most probable; it introduces some randomness into its sampling. By generating multiple outputs, you're getting a larger sample size, reflecting the fuller range of probabilities that GPT-4 assigns to its outputs; you're reducing a little of the randomness that's inherent in GPT-4's outputs. Next, I believe GPT-4 can sometimes spot its own errors through reflection, because prompting like this
triggers a different set of weights; you could almost think of it as a different mindset, one more focused on finding errors. Again, if the question is too hard, or involves counting characters, division or multiplication, as I said earlier, this won't help; but a percentage of the time it can spot its own errors and point them out. Notice this is a separate bit of inference, not lumped into the original prompt. And when it does successfully point out the errors, it can often engage in this dialogue with itself; notice, in a meta kind of way, that I'm using the step-by-step prompting to improve the reflection and dialogue. So those are my theories as to why it works, and at the end of the video I'm going to show you at least five ways I think the model can be further refined. Before we do, though, I looked up the paper by Zhou et al. that produced the prompt which did best in the previous paper. They came to that special prompt through automatic prompt engineering, but there's something interesting I want to point out: on page seven, they say they used automatic prompt engineering to find a prompt starting with let's that maximizes the likelihood of correct reasoning steps. Then they found the best one, the one I integrated into SmartGPT: let's work this out in a step by step way to be sure we have the right answer. That's the one I want you to use. They ran their own benchmarks, and of course it improved the scores. But the interesting thing to me is that they started with let's each time; so even that first stage of the model might not yet be fully optimized. Maybe there's a prompt that doesn't begin with let's that improves this initial result still further. Anyway, back to the papers. I know many people watching this will wonder if I read the paper Boosting Theory-of-Mind Performance in Large Language Models via Prompting, and yes, I did, because they tested something similar for a theory-of-mind test. Using similar techniques, they were able to raise GPT-4's theory-of-mind accuracy from 80% to 100%, and they conclude that these results demonstrate that appropriate prompting enhances large language model theory-of-mind reasoning, and they underscore the context-dependent nature of these models' cognitive capacities. They used that original prompt, let's think step by step, along with some few-shot examples. Take a look at the GPT-4 table and you can see how let's think step by step improved the results dramatically; and, as I theorized earlier, adding few-shot examples would push this still further. This is part of why I think that 95 barrier on the MMLU will be broken, probably this year, by GPT-4. A few other points from this paper. They admit that there is not currently a theoretical understanding of why these prompting techniques are beneficial; I've given you my theory, and Karpathy's, but no one quite knows for sure. Lastly from this paper, and I found this really interesting: giving it generic few-shot prompts that weren't directly about theory of mind actually improved the outputs slightly more than giving it direct theory-of-mind examples. This opens the door to the first of the five ways I anticipate SmartGPT getting even smarter: it could be possible to come up with generic few-shot prompts that could be automatically integrated into the model and that don't necessarily relate to the topic at hand. This graph shows the impact of adding few-shot examples to GPT-3, and if this can be done in a generic way for GPT-4, results could be improved still further. Next, the boosting theory-
Next, the Boosting Theory-of-Mind paper speculates that integrating some of these approaches could boost the performance of weaker models to beyond GPT-4's zero-shot accuracy. Next, here is the original DERA paper that inspired me to have the researcher and resolver dialogue at the end of SmartGPT. As they say, the DERA approach shows significant improvement over base GPT-4 performance, and these were open-ended questions, by the way, not multiple choice, so this is more generally applicable than you might think. You can see from this table how results improved after engaging in this dialogue, and that brings me to the second way I anticipate SmartGPT getting smarter in the future: a longer and richer dialogue. At the moment we have this simple researcher and resolver two-step dialogue; I can imagine a council of advisors. You can imagine a mathematician chipping in, and a philosopher, and a professor, each one tapping into slightly different weights of GPT-4, extracting more hidden expertise. I'm not saying that would transform the results, but it might edge them another few percent higher. Next, even with longer dialogues and different experts, we could find ways of optimizing these prompts, just as the original let's think step by step was optimized. That's the third avenue of improvement that I envisage; because I came up with these prompts, I'm sure they could be improved. The fourth: we could experiment with different temperatures. Remember, a lower temperature makes the model more conservative, while a higher one, towards 1, makes it more creative. We could experiment with a higher temperature to produce a more diverse range of outputs at the drafting stage, and then perhaps a more conservative, deterministic temperature for the final judge or resolver (there is a sketch of this below). It might not work, but it's worth trying. And the fifth improvement, I know, would work: integrating APIs for character counting, calculators, code interpreters, and so on. Spending these weeks manually sorting through the outputs of GPT-4 on these benchmarks, I can really see where it goes wrong, and it's often by getting letters in the wrong order or making mistakes with division; it gets the high-level logic right and then makes quite simple errors. Basic tool integration would, I am sure, push the results still higher.
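Here is the promised sketch combining the researcher and resolver dialogue with the temperature idea. For brevity it compresses the researcher step into the resolver prompt; the wording is mine, not taken from the DERA paper, and it again assumes the 2023-era openai.ChatCompletion interface.

```python
# Sketch of the dialogue-plus-temperature idea: sample several diverse
# drafts at a high temperature, then have a deterministic resolver
# (temperature 0) pick and polish the best one. Prompts are illustrative.
import openai

def smart_answer(question: str, n_drafts: int = 3) -> str:
    drafts = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content":
                   f"{question}\nLet's work this out in a step by step "
                   "way to be sure we have the right answer."}],
        temperature=1.0,   # creative: a diverse range of drafts
        n=n_drafts,
    )
    numbered = "\n\n".join(
        f"Answer {i + 1}:\n{c.message.content}"
        for i, c in enumerate(drafts.choices)
    )
    resolver_prompt = (
        f"Question: {question}\n\n{numbered}\n\n"
        "You are a resolver. Act as a researcher first: point out the "
        "strengths and weaknesses of each answer. Then state which "
        "answer is best and print an improved final version of it."
    )
    final = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": resolver_prompt}],
        temperature=0,     # conservative: deterministic final judgement
    )
    return final.choices[0].message.content
```

The design choice worth noting is the split: temperature 1.0 buys diversity across the drafts, while temperature 0 makes the final judgement as deterministic as the API allows.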
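And a toy illustration of that fifth improvement: routing exactly the sub-tasks GPT-4 tends to fumble, character counting and division, to deterministic code instead of the model. The query formats here are hypothetical; a real integration would let the model itself decide when to call a tool.

```python
# Toy dispatcher: answer counting and division queries with exact code,
# and fall through to the language model for everything else.
import re

def char_count(text: str, letter: str) -> int:
    # Exact character counting, one of the noted weak spots.
    return text.lower().count(letter.lower())

def divide(a: float, b: float) -> float:
    # Exact division instead of the model's mental arithmetic.
    return a / b

def route(query: str):
    m = re.match(r"count '(.)' in '(.+)'", query)
    if m:
        return char_count(m.group(2), m.group(1))
    m = re.match(r"divide (\S+) by (\S+)", query)
    if m:
        return divide(float(m.group(1)), float(m.group(2)))
    return None  # no tool matched: fall through to the language model

print(route("count 'e' in 'seventeen speeches'"))  # -> 7
print(route("divide 355 by 113"))                  # -> 3.1415929...
```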
Now, I know this isn't my usual video, and trust me, I have been following the AI news; we'll get back to that very soon. I'm determined to make those improvements and push SmartGPT even further, but of course that would be aided massively by getting access to the plugins and the GPT-4 API. So far I've had to do all of this manually, which was a lot of work. As you saw earlier, I have drawn on GPT-4 to help me develop a program in Replit to automate this process, but at the moment it uses GPT-3.5, and honestly the context window really limits the ability. I do look forward to the day when I can integrate GPT-4 and put this out as an automatic model for people to test and play about with. I'm sure that something similar will ultimately be incorporated by OpenAI itself, maybe as a thoughtful mode or smart mode, a bit like Bing has creative, precise, balanced, etc. Each response does take longer, but, as you've seen, the outputs are noticeably better. If the results of models like this one do officially exceed the 86.4% that OpenAI talked about in the GPT-4 technical report, I do think that would reveal quite a few things. First, that OpenAI isn't even aware of the full capabilities of its own model; I don't even know if they anticipated things like AutoGPT. I do think it would reveal that they need to do far more proper testing of their models before they release them. They should make falsifiable predictions about what their models won't be capable of; that way, we would know just how much they know about their own models. What we're trying to avoid is a situation where OpenAI say their model can only achieve X, and then, when they release the model into the wild, someone comes along and achieves Y, where Y is much more impactful than X. So those were the goals of this video: to show you how to get more out of GPT-4, and to run you through some of the fascinating papers that have been released in the last few days and weeks. The third goal was to show you what this model could do on some official benchmarks, and to suggest ways it might get better in the near-term future. Of course, if you have a GPT-4 API key or are an expert in benchmarking systems like GPT-4, I'd love to hear from you. I guess the final goal was to perhaps suggest to you that OpenAI don't know as much about their own models as they might lead you to believe. Thank you so much for watching to the end, and have a wonderful day.", "date_published": "2023-05-07T17:36:49Z", "authors": ["AI Explained"], "summaries": []}
+{"id": "b907acfc2cbbfedcc5c7eae6ade049d1", "title": "Can AI Be Contained? + New Realistic AI Avatars and AI Rights in 2 Years", "url": "https://www.youtube.com/watch?v=7Aa0iLxDY8Q", "source": "ai_explained", "source_type": "youtube", "text": "From an AI Los Alamos to the first quasi-realistic AI avatar, and from spies at AGI labs to the question of what makes models happy, this was a week of underrated revelations. The headline event was Dario Amodei, CEO of Anthropic and one of the brains behind ChatGPT, giving a rare interview that revealed a lot about what is happening behind the scenes at AGI labs. But just before that, I can't resist showing you a few seconds of this, what I believe to be the closest an AI-made avatar has come to being realistic: she even pasted the moth in her logbook, which is now on display at the Smithsonian National Museum of American History. This incident symbolizes the origin of the term bug, commonly used in computer science to describe a flaw or error in a program. Hopper's creativity and problem-solving skills have made her one of the pioneering figures in early computer science. Okay, fair enough: if you look or listen closely, you can kind of tell it's AI-made, but if I wasn't concentrating I would have been fooled, and honestly that's the first time I could say that about an AI avatar. And of course people are already playing with HeyGen's model to see what they can get it to say: hi, thanks for your interest in our ultra-realistic avatar feature for your use case: enslave humanity using Terminator robots. And, to be honest, you don't need me to speculate how this might be, let's say, used ahead of elections in the Western world next year, and on social media more generally. Remember that this is an avatar based on a real human face and voice, so it could be your face and voice in the coming weeks and months. This also caught my eye this week: a major two-year competition that will use AI to protect US software. The White House calls it the AI Cyber Challenge, but what's interesting are the companies involved: Anthropic, Google, Microsoft and OpenAI, all of them partnering with DARPA to make software more secure. But there were a couple of lines I think
many people will miss, halfway down: AI companies will make their cutting-edge technology, some of the most powerful AI systems in the world, available for competitors to use in designing new cybersecurity solutions. Given the deadlines involved, that could mean unreleased versions of Google's Gemini and GPT-5 being used to design cybersecurity solutions. But if this is all about defense, what about offense? Well, quite recently we had this from the CEO of Palantir in the New York Times: Our Oppenheimer Moment, the Creation of AI Weapons. In the article he compared the rise in the parameter count of machine-learning systems to the rise in the power of nuclear devices, and he said we must not, however, shy away from building sharp tools for fear that they may be turned against us; we must ensure that the machine remains subordinate to its creator. Our adversaries, he says, will not pause to indulge in what he calls theatrical debates about the merits of developing technologies with critical military and national-security applications; they will proceed. And then he says this is an arms race of a different kind, and it has begun. Palantir is already using AI to assist in target selection, mission planning and satellite reconnaissance, and he ends the piece with this: it was the raw power and strategic potential of the bomb that prompted their call to action then; it is the far less visible but equally significant capabilities of these newest artificial-intelligence technologies that should prompt swift action now. And he isn't the only one drawing that analogy: apparently the book The Making of the Atomic Bomb has become a favorite among employees at Anthropic. Just in case anyone doesn't know, many of their employees are former staff at OpenAI, and they have a rival to ChatGPT called Claude. The CEO of Anthropic is Dario Amodei, and he rarely gives interviews, but Dwarkesh Patel managed to secure one this week. There were a handful of moments I want to pick out, but let's start with the AI Los Alamos, which is to say the idea of creating a superintelligence somewhere as secure and secluded as they did for the first atomic bomb. You know, we're at Anthropic's offices, and, you know, it's got good security; we had to get badges and everything to come in here. But the eventual version of this building, or bunker, or whatever, where the AGI is built, I mean, what does that look like? Is it a building in the middle of San Francisco, or are you out in the middle of Nevada or Arizona? Like, what is the point at which you're Los Alamos-ing it? At one point there was a running joke somewhere that, you know, the way building AGI would look is, you know, there would be a data center next to a nuclear power plant next to a bunker, and we'd all kind of live in the bunker, and everything would be local, so it wouldn't get on the internet. If we take seriously the rate at which all this is going to happen, which I don't know, I can't be sure of, but if we take that seriously, then it does make me think that maybe not something quite as cartoonish as that, but something like that, might happen. That echoes the CERN idea that people like Satya Nadella, the CEO of Microsoft, have talked about, or the island idea that Ian Hogarth, now the head of the UK AI task force, has written about. Of course, one obvious question is that if this island, or CERN, or even OpenAI solves superintelligence alignment, who's to
say everyone would even use that solution? Someone actually addressed that question recently on Bankless: once we have the technical ability to align a superintelligence, we then need a complex set of international regulatory agreements, cooperation between the leading efforts; but we've got to make sure that we actually, like, have people implement this solution, and don't have, for lack of a better word, rogue efforts that say, okay, well, I can make a more powerful thing, and I'm going to do it without paying the alignment tax, or whatever that is. And so there will need to be a very complex set of negotiations and agreements that happen, and we're trying to start laying the groundwork for that. Now, I'll get to why some people are concerned about this idea a bit later on. The next thing I found fascinating was when he talked about leakers and spies and compartmentalizing Anthropic, so that not too many people know too much: I think compartmentalization is the best way to do it; just limit the number of people who know about something. If you're a thousand-person company and everyone knows every secret, like, one, I guarantee you have a leaker, and two, I guarantee you have a spy, like a literal spy. Bear in mind that the key details of GPT-4 and PaLM 2 have already been leaked, but not those of Claude, Anthropic's model. He also said that AI is simply getting too powerful to be just in the hands of these labs, but, on the other hand, he didn't want to just hand the technology over to whoever was president at the time: my view is that these things are powerful enough that it's going to involve, you know, a substantial role, or at least involvement, of government, or assemblies of government bodies. Again, there are kind of very naive versions of this; you know, I don't think we should just hand the model over to the UN or whoever happens to be in office at a given time; like, I could see that go poorly. But it's too powerful; there needs to be some kind of legitimate process for managing this technology. He also summed up his case for caution: when I think of, like, you know, why am I scared, a few things I think of. One, and I think the thing that's really hard to argue with, is that there will be powerful models, they will be agentic, we're getting towards them, and if such a model wanted to wreak havoc and destroy humanity or whatever, I think we have basically no ability to stop it. If that's not true at some point, it will reach the point where it is true as we scale the models, so that definitely seems the case. And I think a second thing that seems the case is that we seem to be bad at controlling the models. Not in any particular way; they're just statistical systems, and you can ask a million things, and they can say a million things in reply, and you might not have thought of a millionth of one thing that does something crazy. The best example we've seen of that is Bing Sydney, right, where it's like, I don't know how they trained that model, I don't know what they did to make it do all this weird stuff, threaten people and, you know, have this kind of weird obsessive personality; but what it shows is that we can get something very different from, and maybe opposite to, what we intended. And so I actually think facts number one and number two are, like, enough to be really worried; you don't need all this detailed
stuff about convergent instrumental goals, analogies to evolution; like, actually, one and two, for me, are pretty motivating. I'm like, okay, this thing is going to be powerful, it could destroy us, and, like, all the ones we've built so far are at pretty decent risk of doing some random stuff we don't understand. To take a brief pause from that interview, here is an example of the random, shall we say, crap that AI is coming up with. This was a supermarket AI meal-planner app, not from Anthropic of course, and basically all you do is enter items from the supermarket and it comes up with recipes. When customers began experimenting with entering a wider range of household shopping-list items into the app, however, it began to make some less appealing recommendations. It gave one recipe for an aromatic water mix which would create chlorine gas; but don't fear, the bot recommends this recipe as the perfect non-alcoholic beverage to quench your thirst and refresh your senses. That does sound wonderful, but let's get back to the interview. Amodei talked about how he felt it was highly unlikely for data to be a blockage to further AI progress, and, just personally, I found his wistful tone somewhat fascinating: you mentioned that data is likely not to be the constraint; why do you think that is the case? There are various possibilities here, and, you know, for a number of reasons I shouldn't go into the details, but there are many sources of data in the world, and there are many ways that you can also generate data. My guess is that this will not be a blocker; maybe it would be better if it was, but it won't be. That almost regretful tone came back when he talked about the money that's now flowing into AI: I expect the amount of money spent on the largest models to go up by, like, a factor of 100 or something, and for that then to be concatenated with the chips getting faster and the algorithms getting better, because there are so many people working on this now. And so, again, you know, I'm not making a normative statement here, that this is what should happen. He then went on to say that we didn't cause the big acceleration that happened late last year and at the beginning of this one, clearly referring to ChatGPT: I think we've been relatively responsible, in the sense that the big acceleration that happened late last year and the beginning of this year, we didn't cause that; we weren't the ones who did that. And honestly, I think if you look at the reaction to Google, that might be ten times more important than anything else. That echoes comments from the head of alignment at OpenAI. He was asked whether the release of ChatGPT increased or reduced AI extinction risk. He said: I think that's a really hard question; I don't know if we can definitively answer this. I think, fundamentally, it probably would have been better to wait with ChatGPT and release it a little bit later; but, more generally, this whole thing was inevitable: at some point the public would have realized how good language models have gotten. Some of the themes and questions from this interview were echoed in a fascinating debate between Connor Leahy, the head of Conjecture, and George Hotz, who believes everything should be open-sourced. The three key questions it raised for me, and I don't think anyone has an answer to them, are these. First, is offense favored over defense? In other words, are there undiscovered weapons out there that would
cause mass damage, like a bioweapon or nanotechnology, for which there are no defenses, or for which defense is massively harder than offense? Of course, this is a question with or without AI, but AI will massively speed up the discovery of these weapons, if they are out there. Second, if offense is favored over defense, is there any way for human civilization to realistically coordinate to stop those weapons being deployed? Here is a snippet from the debate: assuming, and I don't know, that offense is favored, worlds in which, like, world-destroyers do not get built, or at least not before everyone takes off at the speed of light and, like, distributes them, are worlds that I would rather die in, right; the problem is, I think that the only way you could actually coordinate that is with some unbelievable degree of tyranny, and I'd rather die. I'm not sure if that's true; like, look, could you and me coordinate to not destroy the planet? Do you think we could? Okay, cool. The third, related question is about a fast takeoff: if an AI becomes ten times smarter than us, how long will it take for it to become a hundred thousand times smarter than us? If it's as capable as a corporation, how long will it take to be more capable than the entirety of human civilization? Many of those who believe in open-sourcing everything have the rationale that one model will never be that much smarter than another, and that we therefore need a community of competing models to stop one becoming too powerful. Here's another snippet from the debate: so, first off, I just don't really believe in the existence of: we found an algorithm that gives you a million-x advantage. I believe that we could find an algorithm that gives you a 10x advantage, but what's cool about 10x is, like, it's not going to massively shift the balance of power, right; like, I want power to stay in balance, right; so as long as power relatively stays in balance, I'm not concerned with the amount of power in the world. All right, let's just get to some very scary things. So what I think you do is, yes, I think the minute you discover an algorithm like this, you post it to GitHub, because you know what's going to happen if you don't: the feds are going to come to your door, they're going to take it, and the worst people will get their hands on it if you try to keep it secret. Okay, but let's say we have a 10x system or whatever, and we hit the chimp level, you know, we jump across the chimp general level or whatever, right, and now you have a system which is, like, John von Neumann level, whatever, right, and it runs on one tiny box, and you get a thousand of those; so it's very easy to scale up to a thousand-x. So then, you know, maybe you have your thousand John von Neumanns improve the efficiency by another, you know, 5 to 10x, and now we're already at a ten-thousand-x or hundred-thousand-x improvement, right, just from scaling up the amount of hardware. I suspect, to be honest, we might have the answer to that question within a decade, or certainly two, and many of those at OpenAI are thinking about this question too. Here is Paul Christiano, the former head of alignment at OpenAI, pushing back against Eliezer Yudkowsky: while Yudkowsky believes in extremely fast recursive self-improvement, others like Jan Leike and Paul Christiano are banking on systems making superhuman contributions to domains like alignment research before
they get that far; in other words, using models that are as smart as us, or, let's say, ten times smarter than us, to help solve alignment before they become a hundred thousand times smarter than us. Let's end now with Amodei's thoughts on AI consciousness and happiness: do you think that Claude has conscious experience? How likely do you think that is? That is another of these questions that just seems very unsettled and uncertain. One thing I'll tell you is, I used to think that we didn't have to worry about this at all until models were kind of, like, operating in rich environments; like, not necessarily embodied, but they needed to, like, have a reward function and, like, have kind of long-lived experience. I still think that might be the case, but the more we've looked at these language models, and particularly looked inside them to see things like induction heads, a lot of the cognitive machinery that you would need for active agents seems kind of already present in the base language models. So I'm not quite as sure as I was before that we're missing enough of the things that you would need. I think today's models probably just aren't smart enough that we should worry about this too much, but I'm not 100% sure about this, and I do think that in a year or two, like, this might be a very real concern. What would change if you found out that they are conscious? Are you worried that you're, like, pushing the negative gradient to suffering? Conscious, again, is one of these words that I suspect will, like, not end up having a well-defined... but it's, like, something to be... yeah, well, I suspect that's a spectrum, right. Let's say we discover that I should care about Claude's experience as much as I should care about, like, a dog or a monkey or something; yeah, I would be kind of worried. I don't know if their experience is positive or negative. Unsettlingly, I also don't know, like, whether any intervention that we made would be more likely to make Claude, you know, have a positive versus a negative experience, versus not having one. Thank you so much for watching to the end, and I just have this thought: if they do end up creating an AI Los Alamos, let's hope they let the host of a small AI YouTube channel, who happens to be British, just take a little look around. You never know. Have a wonderful day.", "date_published": "2023-08-11T17:49:16Z", "authors": ["AI Explained"], "summaries": []}